TFP 2005
Sixth Symposium on Trends in Functional Programming

Friday September 23 - Saturday September 24, 2005, Tallinn, Estonia

Programme with Minutes of Questions

The minutes have been made by the programme chair Marko van Eekelen. Several improvements have been made to an earlier (unpublished) version of these minutes thanks to the feedback of the participants of TFP05.

Friday September 23

08:45 – 09:00 Welcome, Opening: Program Chair Marko van Eekelen

 

09:00-10.40 morning session 1 (4 * 25 minutes) Chair: Graham Hutton

Reasoning and Certification

 

Donald Sannella, Martin Hofmann, David Aspinall, Stephen Gilmore, Ian Stark, Lennart Beringer, Hans-Wolfgang Loidl, Kenneth MacKenzie, Alberto Momigliano, Olha Shkaravska, Mobile Resource Guarantees, Evaluation Paper

 

Did you consider different resources?

We want to consider time consumption. So far, we have considered memory consumption only, since modeling a heap is easier than modeling time.

An inference system is available for that. We have a proof of concept.

Would it be possible to deal with communication resources?

No, but it may be possible to model that in the future. Our next project will deal with hardware details.

Would you be able to model garbage collection, destructive access and freeing memory?

The inference system and the logic will maintain a free-list for that purpose, but only with an explicit free statement in the code.

 

Malcolm Dowse and Andrew Butterfield, A Toolkit For Structuring I/O

 

Could you use this to prove properties of a real Haskell API?

Yes, in principle that would be possible.

 

Reasoning about concurrency is considered very difficult.

Will it be harder or easier to reason about concurrent programs using your system?

I cannot imagine how it could be harder.

If two concurrent programs do not communicate then it is easy.

In the presence of communication, however, correctness proofs may be quite tricky.

 

Tarmo Uustalu, Varmo Vene, Comonadic functional attribute evaluation

 

What are these smileys?

The greater-than signs :> are the snoc-constructors in the list representing the past of a signal; :=| is the constructor separating the past from the present and future. The less-than signs :< are the cons-constructors in the stream corresponding to the present and future.
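
For illustration, here is a minimal Haskell sketch of the kind of datatype being described; the type and constructor names are illustrative and not necessarily those used in the paper.

data Past a   = Start | Past a :> a      -- snoc list: the past of the signal
data Future a = a :< Future a            -- stream: the present and the future
data Signal a = Past a :=| Future a      -- :=| separates past from present and future

-- Example: a signal with past 0, 1, present 2 and future 3, 4, ...
sig :: Signal Integer
sig = (Start :> 0 :> 1) :=| from 2
  where from n = n :< from (n + 1)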

You seem to consider left attributed grammars. Is that right?

No, it is not. The approach handles arbitrary grammars. Haskell's lazy evaluation organizes the computation, the information can flow in a tree up, down and sideways. If the grammar is circular, then the general recursion in its specification may not terminate.

What is the purpose of this work?

The idea is to get a generic comonad-based interpreter of attribute grammar specification languages which is organized in a structured fashion. We have not implemented this fully yet because of certain limitations in Haskell's type system, but in principle this is doable.

Could this be used for some sort of context analysis?

Perhaps, but I do not see an automatic way of doing anything like this. You can manually try whether you can write your semantic equations as coKleisli arrows of the comonad for purely synthesized grammars, or whether you must go more liberal.

 

Jan van Eijck, Simona Orzan, Modelling the Epistemics of Communication with Functional Programming

 

There are many symmetric situations, but they cannot be expressed. How could this be modeled?

The symmetries should be discovered by the tool and not put in there. It would be nice if it could be encoded.

Could you model an infinite number of agents?

No.

 

10.40-11:00 coffee break

 

11:00-12.40 morning session 2 (4 * 25 minutes) Chair: Olivier Danvy

Interoperability

 

Allan Clark, Nitro Foreign Data

 

Why not link to existing C functions?

Because we have more trust in what we write ourselves.

But what if you do trust the C-stuff more?

You may still want to extend it using your own functions dealing with the data representation directly.

 

Marcin Kowalczyk, Asynchronous communication between threads

 

The semantics of this can be tricky. Haskell people have been working on this for years. Did you make a formal semantics?

No, I just implemented this library.   

Who did you expose it to? Who are your users?

I propose that it be used for practical work.

 

Martin Elsman, Type-Specialized Serialization with Sharing

 

Are functional values a problem?

We would like to distribute functional values. It cannot be done now.

Did you look at Generic Haskell?

Dealing with cycles would be hard because you cannot look into the structure.

Ralf Hinze uses memo-tables in an elegant way. Did you look at that?

I will have a look at that.

Could you deal with more complex structures?

Multiset discrimination would help, using a list of integers to point into the structure.

 

Simon Foster, HAIFA : An XML Based Interoperability Solution for Haskell

 

If you want to do Soap in a functional language, you’d better use Clean instead of Haskell, wouldn’t you?

No, I would not like to put it that way.

Does the system help to identify Workflow Patterns?

This is being investigated, I think so.

Does the type system help with that?

No, you lose your types, basically.

Do you have problems with non-determinism?

I am not an expert on the process algebra stuff. The paper will address this formal side.

Could you use the technology of Martin Elsman?

I should first read the paper. It is possible.

How does this relate to CGI-wash by Peter Thiemann?

I have not looked at it yet.

 

Phil Trinder: a Doctorate Symposium: request for ideas and suggestions.

 

12.40-14.30 lunch

14.30-16.10 afternoon session 1 (4 * 25 minutes) Chair: Doaitse Swierstra

Determining Software Quality

 

Chris Ryder, Simon Thompson, Measuring Haskell

 

How do you define the depth of an infinite list?

We treat the list at one level. It depends on how you write the pattern.

What you measure is the correlation with bug fixes. There could be bugs that are not fixed?

Indeed: we have no crystal ball; this is inevitable.

Do you incorporate suggestions on improving the programs, discriminating good from bad programs or converting bad into good programs?

Certainly, we have the idea that it would be possible to reduce the number of function calls and also to reduce the number of pattern variables.

But we did not look at refactoring.

Large libraries have a tendency to be more changeable. Suppose you use a library and the library changes; then the functions that use it would have to be adapted. Your system would now blame the functions that use the library function instead of the library function itself.

You are right. You should consider the analysis just as a warning that something could be wrong at a certain point. It is not a definite conclusion.

 

Tom Shackell and Colin Runciman, Faster production of redex trails: The Hat G-Machine

 

Is there a G-machine built-in?

Yes, it is.

Would you get the same results in GHC?

Yes, it is even worse due to the optimisations done in GHC.

How does this relate to the Freja system?

The Freja system generates evaluation-dependency trees, and they do not consider tracing. It is not the same kind of system.

Did you implement the full Hat system?

No, it is not fully done. For instance, projection is not in there.

 

Manfred Widera, Data Flow Coverage for Testing Erlang Programs

 

Does this technique extend to non-strict languages such as Haskell?

I would expect that the dataflow criteria do extend, since even in a lazy language you can always be sure that every definition is evaluated before its first use. This is all you need.

So, it is a fairly general technique?

Yes, it is.

 

Pieter Koopman and Rinus Plasmeijer, Generic Generation of Elements of Types

 

Do you mean an infinite sequence or an infinite computation?

I mean a sequence of generic things that corresponds to an infinite computation.

Restricted data types: would it also work with phantom types that restrict the values that can be generated?

I guess it will work. We can work out an example.

 

16.10-16.30 coffee break

16.30-17.45 afternoon session 2 (3 * 25 minutes) Chair: Andrew Butterfield

Applying and Extending Lambda Calculus

 

Frédéric Prost, Sort Abstraction for Static Analyzes of Mobile Processes

 

What do the curly braces with assignment mean? Does it replace just one instance?

It is a standard replacement. The value vanishes.

About the sorts you use with only top and bottom: is this relevant? Does top mean secrecy and bottom undefinedness?

No, you can do that but you can use any other sort too as long as you use different names.

 

Susumu Katayama, Systematic search for lambda expressions

 

You are taking just a single point of function space and search for a function that matches it. Could you also work with a more general specification, for instance one which specifies a result for lists of less than three elements?

I enumerate programs and just filter afterwards. What you ask for can be done, but it cannot exploit the property that is specified.

You consider only speed in your evaluation of results. How often do you get the right program?

The system is deterministic so there is no choice.

But would a constant function not also be a good answer and can you deal with functions as input?

Let's discuss this afterwards.

 

Morten Rhiger, First-Class Open and Closed Code Fragments

 

What is the difference between ‘run’ and ‘downarrow’?

The difference is in the level where you use it.

Have you looked at any of the dependently typed languages? They are kind of very similar.

Yes, this is related. Exact comparison remains to be done.

Where is non-hygiene essential? Could alpha-conversion be done differently?

It pops up in the type system. Maybe this could be done, maybe not.

 

Saturday September 24

09:00-10.40 morning session 1 (4 * 25 minutes) Chair: Hans-Wolfgang Loidl

Program analysis and transformation

 

Kevin Hammond, Roy Dyckhoff, Christian Ferdinand, Reinhold Heckmann, Martin Hofmann, Hans-Wolfgang Loidl, Greg Michaelson, Jocelyn Sérot and Andy Wallace, The EmBounded Project, Project Paper

 

Are you also considering reconfigurable systems?

Not as yet. General purpose systems are better understood than special purpose devices.

For special purpose devices we would need new models and new information fed into cost models.

On the longer term yes, we hope to develop a single formal language to deal with both.

Effective, different cost models are a challenge.

I do not believe in a few years we can have such a formal static analysis. I do believe that dynamic types will be used increasingly.

It is a matter of philosophy. We go as far as we can. A priori bounds are important in our context. For mobile phones this can be different.

I do not believe that we can have both formal expressibility and costability.

We hope to prove you wrong.

 

Hampus Weddig, Fixing the semantic

 

What is the problem you want to solve; what are the benefits of your method?

Termination. Cyclic definitions will be able to terminate.

So, if you define F = F  it will terminate?

Yes, either with answer No or False depending on the fixed point combinator used.

Shortly after the symposium the speaker provided us with a longer answer to the second question:

Well, by F = F I assume that you mean the Haskell code f x = f x. Using fixed point combinators this would be expressed as

  f = fp $ \f x ->
      f x

for some combinator fp.

 

First I will shortly explain the declarative semantics of the combinators. Suppose B is a complete lattice; then the functions of type A -> B can be viewed as a complete lattice when ordered as f <= f' if for all x in A, f x <= f' x. Then, according to a fixed point theorem of Tarski, an order-preserving function g : (A -> B) -> (A -> B) has a least and greatest fixed point of type A -> B. By order-preserving it is meant that f <= f' implies g f <= g f'. The declarative semantics that I have tried to formulate says that if A and B have (or can be considered to have) a finite number of elements (B including 0), then the application of the fixed point combinators will correspond to the theoretical fixed points.

 

Suppose that g f x = f x; then g is order-preserving because g f = f <= f' = g f', so g has a least and greatest fixed point. Because g f = f, any function f : A -> B is a fixed point of g, and so the least fixed point of g will be the least element of the lattice A -> B. This is the constant function f0 where f0 x = 0, where 0 is the least element of the lattice B. Since the computation g f x = f x only has to involve one argument in A (x) and one result in B (0), we can consider A and B to have a finite number of elements. So the application of the fixed point combinators will correspond to the theoretical fixed points.

 

For instance, if we use the non-deterministic least fixed point, then the application ndlfp g would correspond to f0, where f0 assigns zero answers to all input (here B consists of sets of answers ordered by set inclusion, so the least element 0 of B is the empty set). If B were the lattice of boolean values, then the deterministic combinators would give lfp g x = False and gfp g x = True, and so on, depending on the lattice.

 

The intuitive explanation of how the combinators can work is that they use a kind of memoisation technique that ensures that each call is only made once (and so breaks infinite cycles). To actually see that this can work, consider the following simple (but not efficient; answers might actually be recomputed) Haskell definition of the deterministic combinators.

 

fix x f = let x' = f x
          in if x == x' then x else fix x' f

gt e g t x = case lookup x t of
  Just y -> y
  Nothing -> fix e $ \y -> let t' = (x,y):t
                           in g (gt e g t') x

fp e g x = gt e g [] x

type FPC a b = ((a -> b) -> a -> b) -> a -> b

lfp :: Eq a => FPC a Bool
lfp g x = fp False g x -- Or whatever is the least element

gfp :: Eq a => FPC a Bool
gfp g x = fp True g x -- Or whatever is the greatest element

tst1 = f () where
  f = lfp $ \f x ->
      f x

tst2 = f () where
  f = gfp $ \f x ->
      f x
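
For illustration, loading these definitions into GHCi and evaluating tst1 and tst2 should terminate, giving False and True respectively (the least and greatest elements of the boolean lattice), even though the underlying recursion f x = f x is cyclic.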

 

Kevin Millikin, A New Approach to One-Pass Transformations

 

You go from two passes to one pass, but you do not precisely define what you mean by a pass.

Would the old stuff not come back again if you defined that explicitly?

We should talk about it. I disagree. The constructors are inspected only once.

I would like to see memory usage or time performance.

So, the point is that I should justify that there really is an improvement here.

We shall talk about it.

 

Huiqing Li and Simon Thompson, Formalisation of Haskell Refactorings

 

What is the difficulty in composing refactoring?

The pre- and postconditions. Some may be eliminated or fixed together.

Do you think a theorem prover might help?

It may help at this stage.

It might be interesting to look at work done in Glasgow on proof assistance for equational reasoning.

 

Formalizing modifications to text would require more work, however.

 

Will it be difficult to adapt your system to the presence of classes and overloading?

Yes, I think so.

 

10.40-11:00 coffee break

11:00-12.40 morning session 2 (4 * 25 minutes) Chair: Greg Michaelson

Static Code Analysis

 

Olha Shkaravska, Amortized Heap-Space Analysis for First-Order Functional Programs

 

The problem of higher order functions: would you think it is technical or fundamental?

I think it is technical. One can just count the size of closures.

It is partly the cost of closures but also how the functions are applied. This can make it technically undecidable.

We can adapt this predicate in this definition.

This will indeed make it undecidable.

So, type inferencing would be impossible but type checking might be ok.

We would have to think about it in detail.

Is it correct that the soundness does not guarantee the use of destructive actions?

Resource consumption has linear typing.

There are more general approximations we have been working on, that are sound with respect to benign sharing.

This might be a solution for that.

Sometimes the amortize cost varies from the true cost. Are there pathological cases?

They are not pathological. That is prohibited by the correctness condition in the introduction.

Do you have a feel for when a big difference might occur?

No, not so far. I would have to think about it.

 

Daan Leijen, Extensible records with scoped labels

 

Do you look at type inference or at type checking? Your statements do not hold for type checking.

Yes, I am looking at type inference.

You criticize ML records for being ordered, but you also introduce ordering in the end.

Your comparisons seem to be expensive. I presume they have factorial complexity. Is that true?

No, we do not use an equality predicate. We use unification. It is quadratic: not worse than usual type inference.

Many occurrences of the same label reintroduce order. Did you consider overwrite instead of that?

Yes, I skipped that in the presentation. If you update, it will not introduce a new field.

You reintroduced order in the different labels?

No, different labels are still not in any particular order; only duplicates of the same label behave like a stack. It is in the type.

I think it is a matter of unintended errors by the programmer.

We need more practice to verify this claim.

Values of the same label behave as a stack. Is there an application that uses that?

I think that would maybe be a program that uses some kind of information about the past.

Yes, it is useful. Think of an environment argument for pretty printing.

You might want to change it by extending it with a new colour field. You can still access the colour of a parent environment.
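
For illustration, here is a minimal untyped Haskell sketch of this stack-like behaviour, using a plain association list rather than Leijen's typed records; the names are illustrative only.

type Record v = [(String, v)]

extend :: String -> v -> Record v -> Record v
extend l v r = (l, v) : r                  -- extension pushes; nothing is overwritten

select :: String -> Record v -> Maybe v
select = lookup                            -- the most recent occurrence of a label wins

selectParent :: String -> Record v -> Maybe v
selectParent l r = lookup l (dropFirst r)  -- the shadowed (parent) value is still there
  where
    dropFirst [] = []
    dropFirst ((l', v) : rest)
      | l == l'   = rest
      | otherwise = (l', v) : dropFirst rest

-- Example: extending a pretty-printing environment with a new colour,
-- while the parent environment's colour remains accessible.
env, env' :: Record String
env  = extend "colour" "black" []
env' = extend "colour" "red" env
-- select "colour" env'       == Just "red"
-- selectParent "colour" env' == Just "black"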

You move around the labels when checking types. Do you also have to move at run-time?

No, that will not happen. The type system will still introduce a specific offset that is used at run-time to access the field. 

The moving is all happening at compile time.

 

Neil Mitchell and Colin Runciman, Unfailing Haskell: A Static Checker for Pattern Matching

 

What is the cost of your analysis?

It is very cheap. No difficult stuff. Usually it comes back instantly. The complexity is probably n-squared.

You remove higher order functions?

That is not always needed.

You require the complete program. What about a library?

Yes, we need that for the backwards analysis.

It can only be avoided if you would require an abstract model of the library.

What about the compositionality of safeness?

It does not compose very well.

You do not know the circumstances under which an empty list is produced for instance.

 

James Chapman, Thorsten Altenkirch, Conor McBride, Epigram Reloaded: A Standalone Typechecker for ETT

 

The terms do not mean normal forms?

Yes, this connects typing and the equality constraints.

The equality test can still run forever, right?

No, it is decidable.

I am surprised that you can make an equality test that always works.

The point is that we have this limited form of extensionality.

We cannot compare functions on all arguments but we can compare them on fresh variables.

You showed us a scary code explosion. It seemed a bit like machine code.

Yes, type theory is a bit like machine code. That is why you need Epigram to do the work.

Can you work on the high level?

Yes, exactly. This type checker makes it possible.

What is the biggest program you made with Epigram?

I have written just small ones actually, due to limitations of the prototype.

I am worried about the scalability.

A compiler will help. Internal representations are important, using sharing as much as possible.

It is still an open question.

     

Phil Trinder: Discussion on a Doctorate Symposium in TFP

 

12.40-14.30 lunch

14:00 Program Committee meeting

 

14.30-16.10 afternoon session 1 (4 * 25 minutes) Chair: Phil Trinder

Application

 

Isaac Jones, The Haskell Cabal: A Common Architecture for Building Applications and Libraries

 

Can I do refinement of programs?

Autocam is the interface; we do not have large libraries of components yet.

Would it be easy to incorporate another Haskell compiler?

It will be easy unless compilers are as different as Hugs and GHC. Probably it will be very easy.

Does Cabal allow you to do recursive modules?

It does the right thing. It does not do anything fancy like doing it automatically, though.

 

Chunxu Liu and Greg Michaelson, Towards a Hume IDE

 

Which Java semantics are you using?

We use Ginger: the Isabelle/HOL embedding of a Java subset.

Kevin developed a formal semantics for HW-Hume. We will use that semantics.

Can you use large applications?

Yes, we have run large applications through it.

How does that compare to Java?

We believe that the Hume version will be much smaller.

 

Nikolaas N. Oosterhof, Philip K.F. Hölzenspies, and Jan Kuper, Application patterns

 

I expect confusion in debugging using application patterns.

It could happen that there is an error in a function that is never called because its inverse was called.

Do you have experience with debugging?

No, we have not written many programs yet. So, we do not have much experience with debugging.

How do you distinguish between a normal argument and one that has to be inverted, for instance in f (x-2)?

We check which arguments are known. Otherwise it does not make sense.

What if they are bound? How do you know what the programmer means?

That depends on which identifiers are defined.

What about scheduling variable binding?

If both are defined globally, which one is scheduled and which one is taken as the reverse binding?

By default we take the nearest binding.

With a caret prefix we indicate that the binding must be taken from the context.

I am afraid this would be confusing but that is a technical argument.

I would like to talk about that during the break.

 

S.A. Curtis, C.E. Martin, Functional Fractal Image Compression

 

Is the right one the compressed one?

Yes.

Did you quantify whether images looked better?

We did some examples. They looked nice. We looked at it qualitatively.

 

16.10-16.30 coffee break

16.30-17.45 afternoon session 2 (3 * 25 minutes) Chair: Kevin Hammond

Program optimisation

 

Tetsuo Yokoyama, Zhenjiang Hu, and Masato Takeichi, Calculation Rules for Warming-up in Fusion Transformation

 

You mentioned efficiency: how efficient is your algorithm?

The matching you mean?

Yes.

We have done no measurement but we have derived the complexity theoretically. It runs in linear time contrary to ordinary matching.

What is the largest example you have tried it on?

The examples we have tried are still very small: the largest example is the part example I just presented.

 

Kenichi Asai, Logical Relations for Call-by-value Delimited Continuations

 

You talked about Call-by-value vs call-by-name. What about call-by-need?

Aah, I have never thought about that.

Is it characterized sufficiently by call-by-name or do you need more information?

Sorry, I am completely a call-by-value person.

Did you consider whether you can use constructive proofs?

If your proofs are constructive then you can construct a program from the proofs.

I will consider that thank you.

 

Carmen Torrano, Clara Segura, Strictness Analysis and let-to-case Transformation using Template Haskell

 

I think on the level of Haskell, let and case are almost equivalent except for typing, so why do you want to transform on that level?

You say they are the same except for the typing. I am not sure about that. When they are translated, it is not the same.

One builds a closure the other will not.
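
For illustration, here is a rough Haskell sketch of the let-to-case idea being discussed; it is not the authors' Template Haskell code, and expensive is a hypothetical function.

{-# LANGUAGE BangPatterns #-}

expensive :: Int -> Int
expensive y = y * y

-- The let form builds a thunk (closure) for x:
f :: Int -> Int
f y = let x = expensive y in x + x

-- After let-to-case, justified only when strictness analysis shows that x
-- is sure to be demanded; in source Haskell a bang pattern is needed to
-- force evaluation, whereas a Core-level case is already strict:
f' :: Int -> Int
f' y = case expensive y of !x -> x + x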

My aim is not to do this for the compiler but to test Template Haskell as a tool to do analysis.

It does not help the compiler to generate different code; it only shows the information to the user.

This depends on what is done with the analysis. It could be made such that the compiler knows about it.

Writing a phase of the compiler is not fun, I can tell you. Clara, this looks much better.

You show what you have done but not what experience you had with Template Haskell.

Generated names capturing variables, and also writing template code such that you are manipulating definitions in another module: that is a serious problem.

You write your tool, but you can apply it only in the same module.

The reification mechanism should allow you to look into the type info, but my problem was that you could not access the types.

That was my main problem.

I think that the types imply that the transformation would change a program into a program that does not compile due to the polymorphism.

Please, explain that to me later.

 

17:45 Closing:       Program Chair: Marko van Eekelen

General Symposium Chair Kevin Hammond.

17:50 Steering Committee Meeting

 

20:00 TFP dinner, Saturday September 24, at Le Paris