
Journals

  1. Hofstede, A.H.M. ter and Weide, Th.P. van der, Formalisation of techniques: chopping down the methodology jungle. Information and Software Technology, Nr: 1, Vol: 34, Pages: 57-65, January, 1992

    In this article we discuss the formalisation of techniques in the context of Information System Development methodologies. When such methodologies are developed, the primary goal is applicability. Once a methodology has proven itself in practice, it is applied in more sophisticated situations, pushing it to its limits. In those cases, informal definitions are known to be inappropriate. We discuss some typical problems and then describe a procedure for proper formalisation. The Predicator Model is presented as an extended example. Finally, we describe some experiences with this approach to formalisation.


  2. Weide, Th.P. van der and Hofstede, A.H.M. ter and Bommel, P. van, Uniquest: Determining the Semantics of Complex Uniqueness Constraints. The Computer Journal, Nr: 2, Vol: 35, Pages: 148-156, April, 1992

    In this article the Uniquest Algorithm (the "quest for uniqueness"), defined in the Predicator Model, is discussed in depth. The Predicator Model is a general platform for object-role models. The Uniquest Algorithm is a constructive formal definition of the semantics of uniqueness constraints. As such, it facilitates the implementation in so-called CASE-tools.

    The Uniquest Algorithm provides a systematic approach for the interpretation of complex uniqueness constraints. This interpretation process is easily traced using an additional formalism, called the Object Relation Network (ORN). The ORN is a directed graph with labelled edges, representing an object-role information structure. Intermediate results that are outside the scope of the information structure at hand are elegantly represented as an ORN.

    A number of theoretical and practical examples demonstrate the power of the Uniquest Algorithm. In these examples we encounter complex uniqueness constraints that are easily missed. The Uniquest Algorithm provides a handle for recognising them.

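    A minimal sketch, not taken from the paper, of how an ORN, viewed simply as a directed graph with labelled edges, could be represented in Python; the node and role names below are illustrative assumptions:

        from collections import defaultdict

        class ORN:
            """Object Relation Network: a directed graph with labelled edges."""
            def __init__(self):
                self.edges = defaultdict(list)   # source -> [(label, target), ...]

            def add_edge(self, source, label, target):
                self.edges[source].append((label, target))

            def successors(self, source):
                return list(self.edges[source])

        # Illustrative fragment: an Employment fact type with two roles.
        orn = ORN()
        orn.add_edge("Employment", "employee", "Person")
        orn.add_edge("Employment", "employer", "Company")
        print(orn.successors("Employment"))
        # -> [('employee', 'Person'), ('employer', 'Company')]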

  3. Bommel, P. van and Weide, Th.P. van der, Reducing the search space for conceptual schema transformation. Data & Knowledge Engineering, Vol: 8, Pages: 269-292, 1992

    In this paper we focus on the transformation of a conceptual schema into an internal schema. For a given conceptual schema, quite a number of internal schemata can be derived. This number can be reduced by imposing restrictions on internal schemata. We present a transformation algorithm that can generate internal schemata of several types (including the relational model and the NF2 model). Guidance parameters are used to impose further restrictions. We harmonise the different types of schemata by extending the conceptual language, such that both the conceptual and the internal models can be represented within the same language.

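    A rough sketch of the idea of pruning the space of internal schemata with guidance parameters; the candidate generator and the parameter below are simplified assumptions, not the paper's transformation algorithm:

        from itertools import combinations

        def candidate_groupings(object_types):
            """Stand-in for the space of internal schemata: all groupings
            of object types into a single internal relation."""
            for size in range(1, len(object_types) + 1):
                yield from combinations(object_types, size)

        def reduce_search_space(object_types, guidance):
            """Keep only the candidates accepted by every guidance parameter."""
            return [c for c in candidate_groupings(object_types)
                    if all(accept(c) for accept in guidance)]

        # Example guidance parameter: bound the width of generated relations.
        max_width_two = lambda c: len(c) <= 2
        print(reduce_search_space(["Person", "Company", "Salary"], [max_width_two]))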

  4. Bruza, P.D. and Weide, Th.P. van der, Stratified Hypermedia Structures for Information Disclosure. The Computer Journal, Nr: 3, Vol: 35, Pages: 208-220, 1992

    In this paper we generalize the two-level approach to hypertext (hypermedia) systems into stratified hypermedia structures. First we describe the overall architecture of such systems, including the fundamentals of the user interface. Thereafter, its various components are discussed. Special emphasis is placed on how the underlying information model is layered. Two layers are featured: the hyperbase and the hyperindex. A characterization calculus is presented for the characterization of structured elements. This calculus forms the basis of a logic-based approach in connection with the associated information processor (Disclosure Machine). The logic-based approach is considered the most general approach to the retrieval process. In addition, this calculus is useful for quality assurance in hypermedia applications. Attention is also paid to spatial coherence for relevance judgements.

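    A toy sketch of the two-layer organisation, with a hyperindex of characterizations over a hyperbase of documents; the class, method and document names are illustrative assumptions rather than the paper's formalism:

        class StratifiedHypermedia:
            def __init__(self):
                self.hyperbase = {}    # document id -> content
                self.hyperindex = {}   # characterization -> set of document ids

            def add_document(self, doc_id, content, characterizations):
                self.hyperbase[doc_id] = content
                for c in characterizations:
                    self.hyperindex.setdefault(c, set()).add(doc_id)

            def beam_down(self, characterization):
                """Descend from the hyperindex layer to the matching documents."""
                return {d: self.hyperbase[d]
                        for d in self.hyperindex.get(characterization, set())}

        h = StratifiedHypermedia()
        h.add_document("d1", "Uniqueness constraints in object-role models",
                       {"constraints", "object-role modelling"})
        h.add_document("d2", "Hypertext-based retrieval", {"retrieval"})
        print(h.beam_down("retrieval"))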

  5. Hofstede, A.H.M. ter and Proper, H.A. and Weide, Th.P. van der, PSM: Datamodelleren in het Kwadraat. DB/Magazine, Nr: 4, Vol: 3, Pages: 37-41, June, 1992, In Dutch

    The current generation of modelling techniques falls short when it comes to modelling complex application domains. Examples of such application domains are hypermedia applications, CAD/CAM systems, and meta-modelling. There is a growing need for modelling techniques that support the modelling of such domains.

    In this article we present PSM, an exponent of a new generation of data modelling techniques, which is suited to modelling complex data structures.


Conferences

  1. Bommel, P. van and Weide, Th.P. van der, Towards Database Optimization by Evolution. Proceedings of the International Conference on Information Systems and Management of Data (CISMOD 92), Edited by: A.K. Majumdar, and N. Prakash. Pages: 273-287, July, 1992

    In this paper we focus on the optimization of database schema transformation by evolutionary (or genetic) search. A framework for transforming conceptual data schemata into efficient internal schemata is presented. We consider this problem from the viewpoint of searching through the space of all correct, but possibly incomplete, internal representations of the conceptual schema at hand. A search strategy based on evolutionary operators is established, and the relevant operators are introduced.

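    A generic sketch of an evolutionary loop of this kind, with bit strings standing in for encoded internal schemata; the fitness function and operators are placeholders, not the operators introduced in the paper:

        import random

        def evolve(population, fitness, mutate, crossover,
                   generations=50, keep=10):
            """Select the fittest candidates, then recombine and mutate them."""
            for _ in range(generations):
                population.sort(key=fitness, reverse=True)
                parents = population[:keep]
                children = [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(len(population) - keep)]
                population = parents + children
            return max(population, key=fitness)

        # Toy usage: 8-bit strings as stand-ins for encoded internal schemata.
        random_schema = lambda: [random.randint(0, 1) for _ in range(8)]
        best = evolve([random_schema() for _ in range(30)],
                      fitness=sum,
                      mutate=lambda s: [b ^ (random.random() < 0.1) for b in s],
                      crossover=lambda a, b: a[:4] + b[4:])
        print(best)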

  2. Hofstede, A.H.M. ter and Proper, H.A. and Weide, Th.P. van der, Data Modelling in Complex Application Domains. Proceedings of the Fourth International Conference CAiSE'92 on Advanced Information Systems Engineering, Manchester, United Kingdom, EU, Edited by: P. Loucopoulos. Lecture Notes in Computer Science, Vol: 593, Pages: 364-377, May, Springer, 1992, ISBN 3540554815

    In many non-trivial application domains, object types with a complex structure occur. Data modelling techniques which only allow flat structures are not suitable for representing such complex object types. In this paper a general data modelling technique, the Predicator Set Model, is introduced, which is capable of representing complex structures in a natural way.

    The expressiveness of the Predicator Set Model is illustrated by means of a number of examples. In those examples, the Predicator Set Model's expressiveness is related to the expressiveness of more traditional modelling techniques. Furthermore, some notational conventions are defined, which enable a more compact representation of complex structures.

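    A small sketch of what a complex (nested) object type might look like, using set and sequence constructors over atomic types; the constructor names and the example are illustrative assumptions, not the Predicator Set Model's definitions:

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Atomic:
            name: str                 # a flat label type, e.g. "Keyword"

        @dataclass(frozen=True)
        class SetOf:
            element: object           # a set over some underlying type

        @dataclass(frozen=True)
        class SequenceOf:
            element: object           # an ordered sequence over some type

        # A book as a sequence of chapters, each a sequence of sections,
        # each characterized by a set of keywords.
        Book = SequenceOf(SequenceOf(SetOf(Atomic("Keyword"))))
        print(Book)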

Reports

  1. Hofstede, A.H.M. ter and Proper, H.A. and Weide, Th.P. van der, A Note on Schema Equivalence. Technical report: CSI-R9230, Department of Information Systems, University of Nijmegen, Nijmegen, The Netherlands, EU, 1992

    In this paper we introduce some terminology for comparing the expressiveness of conceptual data modelling techniques, such as ER, NIAM, and PM, that are finitely bounded by their underlying domains. Next we consider schema equivalence and discuss the effects of the sizes of the underlying domains. This leads to the introduction of the concept of finite equivalence. We give some examples of finite equivalence and inequivalence in the context of PM.

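    A toy illustration of the flavour of finite equivalence: two schema fragments are compared by counting their valid populations over a small finite domain. The fragments, constraints and counting check below are simplified assumptions, not the report's definitions:

        from itertools import product

        def populations(arity, domain, constraint):
            """All relations of the given arity over the domain that satisfy
            the fragment's constraint (a predicate over a set of tuples)."""
            tuples = list(product(domain, repeat=arity))
            result = []
            for bits in product([False, True], repeat=len(tuples)):
                rel = {t for t, chosen in zip(tuples, bits) if chosen}
                if constraint(rel):
                    result.append(rel)
            return result

        # Fragment A: binary fact type whose first role is unique.
        unique_first = lambda rel: len({a for a, _ in rel}) == len(rel)
        # Fragment B: binary fact type whose second role is unique.
        unique_second = lambda rel: len({b for _, b in rel}) == len(rel)

        domain = {1, 2}
        # Equal counts over this finite domain are evidence (not a proof) that
        # the two fragments admit equally many populations here.
        print(len(populations(2, domain, unique_first)),
              len(populations(2, domain, unique_second)))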




For more information, please contact me.

