
Journals

  1. Hofstede, A.H.M. ter and Weide, Th.P. van der, Deriving Identity from Extensionality. International Journal of Software Engineering and Knowledge Engineering, Nr: 2, Vol: 8, Pages: 189-221, June, 1997

    In recent years, a number of proposals have been made to extend conventional conceptual data modeling techniques with concepts for modeling complex object structures. Among the most prominent proposed concepts is the concept of collection type. A collection type is an object type whose instances are sets of instances of another object type. A drawback of the introduction of such a new concept is that the formal definition of the technique involved becomes considerably more complex. This is a result of the fact that collection types are populatable types, and such types tend to complicate updates. In this paper it is shown how a new kind of constraint, the extensional uniqueness constraint, allows for an alternative treatment of collection types that avoids update problems. The formal definition of this constraint type is presented, other advantages of its introduction are discussed, and its consequences for, among others, identification schemes are elaborated.
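
    The notion of deriving identity from extensionality can be illustrated with a small sketch (not taken from the paper): two collection instances count as the same object exactly when they have the same members, which is how Python's frozenset behaves.

```python
# Illustrative sketch: identity determined purely by extension.
# Two collection instances with identical members are one object.

committee_a = frozenset({"alice", "bob"})
committee_b = frozenset({"bob", "alice"})

# Extensional identity: equal membership implies equal identity.
assert committee_a == committee_b
assert hash(committee_a) == hash(committee_b)

# A set of collections therefore collapses extensionally
# indistinguishable instances into a single one.
collections = {committee_a, committee_b}
assert len(collections) == 1
```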

    [ cite ]

  2. Hofstede, A.H.M. ter and Lippe, E. and Weide, Th.P. van der, Applications of a Categorical Framework for Conceptual Data Modeling. Acta Informatica, Nr: 12, Vol: 34, Pages: 927-963, December, 1997

    For successful information systems development, conceptual data modeling is essential. Nowadays a plethora of techniques for conceptual data modeling exist. Many of these techniques lack a formal foundation and a lot of theory, e.g. concerning updates or schema transformations, is highly data model specific. As such there is a need for a unifying formal framework providing a sufficiently high level of abstraction. In this paper the use of category theory for this purpose is addressed. Well-known conceptual data modeling concepts, such as relationship types, generalization, specialization, and collection types are discussed from a categorical point of view. An important advantage of this framework is its configurable semantics. Features such as null values, uncertainty, and temporal behavior can be added by selecting appropriate instance categories. The addition of these features usually requires a complete redesign of the formalization in traditional set-based approaches to semantics. Applications of the framework in the context of schema transformations and improved automated modeling support are discussed.

    [ cite ]

  3. Hofstede, A.H.M. ter and Proper, H.A. and Weide, Th.P. van der, Exploiting Fact Verbalisation in Conceptual Information Modelling. Information Systems, Nr: 6/7, Vol: 22, Pages: 349-385, September, 1997

    An increasing number of approaches to conceptual information modelling use verbalisation techniques as an aid to derive a model for a given universe of discourse (the problem domain). The underlying assumption is that by elaborate verbalisation of samples of facts, taken from the universe of discourse, one can elicit a complete overview of the relevant concepts and their inter-relationships. These verbalisations also provide a means to validate the resulting model in terms of expressions familiar to users. This approach can be found in modern ER variations, Object-Role Modelling variations, as well as different Object-Oriented Modelling techniques.

    After the modelling process has ended, the fact verbalisations are hardly put to any further use. As we believe this to be unfortunate, this article is concerned with the exploitation of fact verbalisations after finishing the actual information system. The verbalisations are exploited in four directions. We consider their use for a conceptual query language, the verbalisation of instances, the description of the contents of a database, and the verbalisation of queries in a computer-supported query environment. To put everything in perspective, we also provide an example session with an envisioned tool for end-user query formulation that exploits the verbalisations.

    [ see here ] [ cite ]

Conferences

  1. Wondergem, B.C.M. and Bommel, P. van and Huibers, T.W.C. and Weide, Th.P. van der, Towards an Agent-Based Retrieval Engine (Profile-Information Filtering Project). Proceedings of the 19th BCS-IRSG Colloquium on IR Research, Edited by: J. Furner, and D.J. Harper. Pages: 126-144, April, 1997

    This article describes and analyses the retrieval component of the Profile Information Filtering Project of the University of Nijmegen. The overall structure of this project, serving as the context for the retrieval component, is stated. This component is called the Retrieval Engine and will be implemented as an intelligent retrieval agent, using sophisticated techniques from artificial intelligence. A synthesis between information retrieval and information filtering has to be found, coping with challenging problems stemming from the combination of the difficulties of both fields. The Retrieval Engine should be capable of giving an explanation of why a document was found relevant to the information need of the user. The techniques used will rely on sophisticated natural language processing. The techniques to establish relevance degrees for documents will consist of two parts: a symbolic and a numeric one. This allows for a mechanism that is both explainable and exact. Interesting approaches for obtaining this are stated.

    [ see here ] [ cite ]

  2. Arampatzis, A.T. and Weide, Th.P. van der and Bommel, P. van and Koster, C.H.A., Linguistic Variation in Information Retrieval and Filtering. Informatiewetenschap 1997, Edited by: P.M.E. de Bra. Pages: 7-10, 1997

    In this paper, a natural language approach to Information Retrieval (IR) and Information Filtering (IF) is described. Rather than keywords, noun-phrases are used for both document description and as query language, resulting in a marked improvement of retrieval precision. Recall then can be enhanced by applying normalization to the noun-phrases and some other constructions. This new approach is incorporated in the Information Filtering Project Profile. The overall structure of the Profile project is described, focusing especially on the Parsing Engine involved in the natural language processing. Effectiveness and efficiency issues are elaborated concerning the Parsing Engine. The major contributions of this research include properties of grammars and parsers specialized in IR/IF (properties such as coverage, robustness, efficiency, ambiguity), normalization of noun-phrases, and similarity measures of noun-phrases.
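
    As a loose illustration of the noun-phrase approach (the function names and the normalization steps below are simplifications of my own, not the Parsing Engine's actual processing), a phrase can be reduced to its content words and two phrases compared with a set-overlap measure:

```python
# Hypothetical sketch of noun-phrase normalization and similarity.
# Real systems would use parsing and lemmatization; this only
# lowercases and drops a few function words.

def normalize(phrase: str) -> frozenset:
    """Reduce a noun phrase to a set of lowercase content words."""
    stopwords = {"the", "a", "an", "of", "for", "in"}
    return frozenset(
        w.lower() for w in phrase.split() if w.lower() not in stopwords
    )

def similarity(p: str, q: str) -> float:
    """Jaccard overlap between normalized noun phrases."""
    a, b = normalize(p), normalize(q)
    return len(a & b) / len(a | b) if a | b else 0.0

# Normalization makes word-order and function-word variants match:
print(similarity("retrieval of information", "Information Retrieval"))  # 1.0
```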

    [ cite ]

  3. Bommel, P. van and Weide, Th.P. van der, Educational Flow in Computing Science Courses. 3rd International Conference on Applied Informatics (ICAI 97), 1997

    In this paper we describe the organization of a Student Research Lab (SRL) and Student Teaching Lab (STL) in the context of a computing science curriculum. The SRL and STL are inspired by the following problems found in many academic computing science curricula today: (1) the preparation for working as an IT professional is not given sufficient attention, and (2) coherence within and between educational components is too weak. Our solution to these problems consists of the SRL and STL, where the flow of educational results is operationalized and formalized.

    [ cite ]

  4. Wondergem, B.C.M. and Bommel, P. van and Huibers, T.W.C. and Weide, Th.P. van der, An Electronic Commerce Paradigm for Information Discovery. Proceedings of the Conferentie Informatiewetenschap (CIW`1997): Let your Browser do the Walking, Edited by: P.M.E. de Bra. Pages: 56-60, November, 1997

    This article investigates the connection between Electronic Commerce (EC) and Information Discovery (ID). ID is the synthesis of distributed Information Retrieval and Information Filtering, filled in with intelligent agents and information brokers. Currently, no link exists between EC and ID. We argue that this link consists of a cost model for ID. We therefore propose several (types of) cost models, which enable application of EC to the whole of ID. This is illustrated with examples.

    [ cite ]

Reports

  1. Bleeker, P.A.I. and Bruza, P.D. and Weide, Th.P. van der, A User-centred View on Hypermedia Design. Technical report: CSI-R9707, Computing Science Institute, University of Nijmegen, 1997

    Ever-increasing quantities of information, together with new developments in storage and retrieval methods, are confronting today's users with a huge information supply that they can barely oversee. Hypermedia information retrieval systems try to assist users in finding their way through this supply, but in reality this is where many systems fall short. The reason is that most of them do not really communicate with users or find out what they really want. Instead, a bottom-up approach that reasons mainly from an information-oriented viewpoint has been a major design focus. We argue that the design of hypermedia systems should be based on an integration of both a top-down (user-oriented) and a bottom-up (information-oriented) approach, to develop hypermedia systems that know and understand their users. In this article, we present initial results of a new user-oriented approach.

    [ cite ]

  2. Arampatzis, A.T. and Weide, Th.P. van der and Bommel, P. van and Koster, C.H.A., Syntactical Analysis for Text Filtering. Technical report: CSI-R9721, November, Computing Science Institute, University of Nijmegen, Nijmegen, The Netherlands, 1997

    [ cite ]

  3. Bommel, P. van and Weide, Th.P. van der, SRL Handboek. Technical report: CSI-N9702, January, Radboud University Nijmegen, 1997

    [ cite ]

  4. Bommel, P. van and Weide, Th.P. van der, Conceptual Graphs as a Basis for Verification, Matching, and Similarity in the Context of Information System Development. Technical report: CSI-N9701, January, Radboud University Nijmegen, 1997

    The KISS-method uses graphical structures to represent the models which are constructed during analysis and design. Such graphical structures are referred to as model graphs. Special patterns are used to describe consistency properties of model graphs, and can serve as a basis for transformations between different types of model graphs. In this paper we show how model graphs are related to conceptual graphs, which have been widely accepted to represent knowledge structures. This relation provides the opportunity to benefit from the vast amount of results and algorithms which have been developed for conceptual graphs. As an example, we discuss similarity between models. Conceptual graphs can also serve as a uniform basis for all models of the KISS-method. This is not further elaborated in this paper. Conceptual graphs have a clear relation to first order predicate calculus. This makes it possible to reason about KISS-models in a formal mathematical style, while maintaining a direct link to the original KISS-models. In this paper the concept of patterns and their matching is further elaborated. An inverse style of matching can be seen as an introduction of regular graph patterns. Finally we discuss graph transformation rules.

    [ cite ]

  5. Wondergem, B.C.M. and Bommel, P. van and Huibers, T.W.C. and Weide, Th.P. van der, How is this document's relevancy derived?. Technical report: CSI-R9710, June, Radboud University Nijmegen, 1997

    In Information Retrieval, user preferences and domain knowledge play an important role. This article shows how to incorporate domain knowledge in a logical framework and provides a mechanism to exploit user preferences to personalize domain knowledge, based on the inferences made in the matching functions. The matching functions are essentially symbolic logical inferences. The logic used in this article is that of Preferential Models, which are augmented with domain knowledge by providing an enriched aboutness relation. However, the techniques described in this article are applicable to other logics as well. A way to personalize the domain knowledge is given, which also gives the user insight into the workings of the matching functions. In addition, sound inference rules, which are tailor-made for the domain knowledge, are provided.

    [ cite ]

  6. Wondergem, B.C.M. and Bommel, P. van and Huibers, T.W.C. and Weide, Th.P. van der, Opportunities for Electronic Commerce in Agent-Based Information Discovery. Technical report: CSI-R9722, December, Radboud University Nijmegen, 1997

    This article investigates the connection between Electronic Commerce (EC) and Information Discovery (ID). ID is the synthesis of distributed Information Retrieval and Information Filtering, filled in with intelligent agents and information brokers. Currently, no link exists between EC and ID. We argue that this link consists of a cost model for ID. We therefore propose several (types of) cost models, which enable application of EC to the whole of ID. This is illustrated with examples.

    [ cite ]

  7. Jones, P.A. and Bommel, P. van and Koster, C.H.A. and Weide, Th.P. van der, Stratified Recursive Backup for Best First Search. Technical report: CSI-R9720, November, Information Systems Group, Computing Science Institute, University of Nijmegen, The Netherlands, EU, 1997

    In this paper a new abstract machine model, the Stratified Recursive Backup machine model, is described. This machine model can be used to implement best-first search algorithms efficiently. Two applications of best-first search, a text layout system and a natural language parser, are analyzed to provide an in-depth understanding of the Stratified Recursive Backup machine model.
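
    For readers unfamiliar with the underlying algorithm, a generic best-first search can be sketched as follows (this is an illustration of best-first search in general, not of the Stratified Recursive Backup machine model itself; the toy problem is my own):

```python
# Generic best-first search: always expand the state with the
# best (lowest) heuristic score, tracked in a priority queue.
import heapq

def best_first_search(start, goal, neighbours, score):
    """Return the goal state if reachable, else None."""
    frontier = [(score(start), start)]
    seen = {start}
    while frontier:
        _, state = heapq.heappop(frontier)
        if state == goal:
            return state
        for nxt in neighbours(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (score(nxt), nxt))
    return None

# Toy usage: step through the integers toward a goal value,
# scoring each state by its distance to the goal.
goal = 7
result = best_first_search(
    0, goal,
    neighbours=lambda n: [n + 1, n + 2],
    score=lambda n: abs(goal - n),
)
assert result == 7
```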

    [ cite ]




For more information, please contact me.


© WeCo Productions 2005 - 2024