- Extensions of the Turing machine or lambda calculus paradigm with "interaction", "advice" or "oracles". There is a lot of work on this by, for example, van Leeuwen and Wiedermann, Wegner, Goldin, Smolka and many others. Many questions on the universality of these models and their relative strength remain open.
- Operational Semantics and/or Axiomatic Semantics for programming languages, as you've studied in "Semantiek en Correctheid" or "Semantiek en Logica".
- Abstracting from (2), studying Dynamic Logic or Kleene Algebra.
- Grammars and Automata for XML standards. What you've seen in "Talen en Automaten" for string languages can be extended to tree languages and hedge languages.
- Denotational semantics, for example models for type theories or models for coinductive data types (like streams).
- Computation with and reasoning over infinite objects (like streams).

- The curricula themselves: compare the official curricula of 1978, 1988, 1998 and 2008, possibly at various Dutch universities, but also compare the actual content of the courses on the basis of lecture notes, exams, etc.
- The educational goals of the curricula (and the courses) as they have been described.
- (Possibly) Interviews with teachers/students from those periods.

- What was expected from a first-year student.
- What a student was expected to know at the end of the studies.

- We consider streams over *{0, S, c}*, where *c* denotes a *computation step* (a tau step in process algebra terminology). The number *n* is then represented by *c...c S c...c S ... S c...c 0 ...*, that is, *n* occurrences of *S* followed by a *0*, separated by *c* steps and followed by anything.
- A function takes streams and produces a stream: given a (code for a) partial recursive function *f* of arity *m*, we define its interpretation as a function (in Coq) taking *m* streams and producing a stream.
- All this should be defined in Coq.
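As a rough illustration of this stream encoding (a sketch only: the project itself asks for coinductive streams in Coq, and all names below are illustrative assumptions), Python generators can stand in for infinite streams:

```python
from itertools import islice

def encode(n, pad=2):
    """Yield the stream encoding of the natural number n over {0, S, c}:
    n occurrences of 'S', each preceded by some 'c' (computation) steps,
    then a '0', followed by arbitrary further symbols (here: 'c' forever)."""
    for _ in range(n):
        yield from ['c'] * pad   # computation steps may precede each S
        yield 'S'
    yield from ['c'] * pad
    yield '0'
    while True:                  # "followed by anything": we emit c's
        yield 'c'

def decode(stream):
    """Count the S's before the first 0, ignoring c steps.  Diverges if
    the stream never produces a 0, matching an undefined value of a
    partial recursive function."""
    count = 0
    for sym in stream:
        if sym == 'S':
            count += 1
        elif sym == '0':
            return count
```

For example, `decode(encode(3))` gives back `3`, and `list(islice(encode(1, pad=1), 4))` shows the prefix `['c', 'S', 'c', '0']`. The point of the divergence remark is that partiality of *f* is modelled by a stream of *c*'s that never reaches *0*.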

1. When it terminates, you want to be certain that all prisoners have been in the room.
2. It should have a low expected running time.

This is a nice example of a problem where you want to write an algorithm and prove its correctness (1), then extract the algorithm and execute it to estimate (2), and compare the solutions to the puzzle.
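To estimate (2) by execution, one can simulate a candidate strategy. The sketch below assumes the puzzle is the classic prisoners-and-light-switch problem and uses the well-known single-counter protocol (the proposal above does not fix a protocol, so this is only one candidate to compare against):

```python
import random

def simulate(n=100, seed=None):
    """Simulate the single-counter protocol: prisoner 0 is the counter and
    switches the light off, counting; every other prisoner switches it on
    exactly once.  Returns the number of days until the counter announces.
    On announcement all prisoners have provably been in the room: the
    counter has seen n-1 on->off events, each set by a distinct prisoner."""
    rng = random.Random(seed)
    light = False
    count = 0                     # "on" signals observed by the counter
    has_switched = [False] * n
    days = 0
    while count < n - 1:
        days += 1
        p = rng.randrange(n)      # a uniformly random prisoner enters
        if p == 0:                # the counter
            if light:
                light = False
                count += 1
        elif not has_switched[p] and not light:
            light = True
            has_switched[p] = True
    return days
```

Averaging `simulate(100)` over many seeds gives an empirical estimate of the expected running time, which can then be compared with other strategies; the Coq side of the project would prove property (1) for the extracted algorithm.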

Coq has a library of real arithmetic and calculus with which we can model the environment. We can also model the controller and prove that if it obeys certain behavioral rules, it ensures a safety requirement. So the goal is to give a formal correctness proof of a controller.

The goal of this project is to turn the decision procedure into a full-blown tactic inside Coq. To do this, one has to be able to recognize and parse primitive recursive predicates, generate the associated characteristic functions and compute them.
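The "generate and compute the characteristic function" step can be illustrated outside Coq. The following is a minimal plain-Python sketch under assumed conventions (term representation and names are illustrative, not part of the project's Coq development): primitive recursive function terms are tuples, and a predicate is decided by evaluating its characteristic function to 0 or 1.

```python
# Term forms (an illustrative encoding, not Coq's):
#   ('zero',)            -- constant 0
#   ('succ',)            -- successor
#   ('proj', i)          -- i-th projection
#   ('comp', f, [g...])  -- composition f(g1(args), ..., gk(args))
#   ('rec', base, step)  -- primitive recursion on the first argument

def ev(term, args):
    """Evaluate a primitive recursive term on a list of naturals."""
    tag = term[0]
    if tag == 'zero':
        return 0
    if tag == 'succ':
        return args[0] + 1
    if tag == 'proj':
        return args[term[1]]
    if tag == 'comp':
        _, f, gs = term
        return ev(f, [ev(g, args) for g in gs])
    if tag == 'rec':
        _, base, step = term
        n, rest = args[0], list(args[1:])
        acc = ev(base, rest)
        for i in range(n):        # step receives (i, acc, rest...)
            acc = ev(step, [i, acc] + rest)
        return acc
    raise ValueError(f'unknown term: {tag}')

# Characteristic function of the predicate "n is even":
one = ('comp', ('succ',), [('zero',)])          # the constant 1
not01 = ('rec', one, ('comp', ('zero',), []))   # 1 - x on {0, 1}
even = ('rec', one, ('comp', not01, [('proj', 1)]))
```

For instance, `ev(even, [4])` yields `1` and `ev(even, [5])` yields `0`. A full tactic would instead recognize such predicates by reflection inside Coq and decide them by computing the characteristic function there.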

What to formalize exactly is open for discussion.

We want to extend tmegg with better rendering facilities and to test this basic approach to interactive mathematical documents on some larger proof developments.

There are various ways in which this interface can be enriched, e.g. by including other proof assistants like PVS.

In the project we will study and further develop Wiki technology and semantic web technology and techniques for consistency management, all in the context of proof assistant repositories of formalized mathematics. The project thus brings together the open nature of Wiki authoring with expertise in Proof Assistants and Semantic Web technologies to build a new Wiki for mathematics, supporting content creation, search and retrieval.

From the perspective of the ordinary user of mathematics, MathWiki will be important because it will provide high-level mathematical content on the web in a much more coherent and precise way than is available at present. From the proof assistant user perspective, MathWiki will be important because it will provide an advanced environment for the collaborative authoring of verified mathematics, mediated simply by a web interface.

A lot of work has been done on the extension of formulas-as-types to classical logic, giving rise to new presentations of the logical systems, new typed lambda calculi, new reduction rules and new reduction strategies. In all of these areas (and in their combinations) many open questions remain.

Various questions remain, such as whether this is a practical system for doing actual proof checking, or whether we can refine the system to allow only well-typed terms in the conversion relation.

Also, a proof of ⊥ (falsum) can be given in various ways, usually by interpreting some form of the Burali-Forti paradox, which says that we cannot well-order the collection of all well-orderings. But can we also interpret the much simpler Russell paradox? (Stating that the collection of all sets that are not members of themselves is not a set.)
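For reference, the Russell paradox in symbols (the standard set-theoretic formulation, which the project would have to mimic inside the type system):

```latex
R \;=\; \{\, x \mid x \notin x \,\}
\qquad\Longrightarrow\qquad
R \in R \;\Longleftrightarrow\; R \notin R
```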