CS 334
Programming Languages
Spring 2002

Lecture 5

More Functional Languages

Higher-order functions

Functional languages provide a new kind of "glue" allowing the programmer to write small modules and "glue" them together into larger programs.

Can build your own glue by writing higher-order functions.

Can write the product of the elements of a list by writing

    fun prod [] = 1   
      | prod (head::rest) = head * prod rest
Similarly for
    fun sum [] = 0   
      | sum (head::rest) = head + sum rest
Notice general pattern and write higher-order "listify" function:
    fun listify oper identity [] = identity   
      | listify oper identity (fst::rest) =    
                            oper(fst,listify oper identity rest);   
    val listify = fn : ('a * 'b -> 'b) -> 'b -> 'a list -> 'b
then
    val listsum = listify (op +) 0;
   
    val listmult = listify (op *) 1;
   
    val length = let fun add1(x,y) = 1 + y    
                        in listify add1 0    
                        end;
   
    fun append a b = let fun cons(x,y) = (x::y)    
                                in listify cons b a    
                                end;
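
For intuition, listify behaves like the Basis Library's List.foldr, which has the same type. A few sample uses of the functions defined above, with the responses we would expect from an SML session:

    - listsum [1,2,3,4];
    val it = 10 : int
    - listmult [1,2,3,4];
    val it = 24 : int
    - append [1,2] [3,4];
    val it = [1,2,3,4] : int list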
Can define other higher-order functions as glue also.

Lazy vs. Eager evaluation

At some cost in execution efficiency, we can add extra power to a language by supporting lazy evaluation - also called call-by-need or normal-order evaluation.

Order of operations:

Ex.

    - fun test (x:{a:int,b:unit}) =    
             if (#a{a=2,b=print("A\n")} = 2)    
                then (#a x)    
                else (#a x);   
    val test = fn : { a:int, b:unit } -> int 
      
    - test {a = 7, b = print("B")};
If have eager evaluation, get:
    BA   
    val it = 7 : int
If have lazy evaluation, get:
    val it = 7 : int
Call-by-need is equivalent to call-by-name (see the discussion of parameter-passing techniques later in the course) in functional languages, but it can be implemented more efficiently: once an argument has been evaluated, its value can be saved, since it will never change.

Can also share different instances of parameter.

E.g.,

    fun multiple x = if x = [1,2,3] then 0::x else [4,5,6]@x
When we substitute a value for x, we don't really need to make three copies (again, since the value can't change!)

Lazy evaluation allows programmer to create infinite lists.

Ex. (in lazy dialect of ML)

    fun from n = n :: from (n+1)   
    val nats = from 1;   
    fun nth 1 (fst::rest) = fst   
      | nth n (fst::rest) = nth (n-1) rest
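For instance (still in the hypothetical lazy dialect), only as much of the infinite list is forced as the computation demands:

    nth 5 nats;    (* would yield 5, forcing only the first five cells of nats *)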
Can get an approximate square root of x by starting with an approximation a_0 and then getting successive approximations by calculating

a_{n+1} = 0.5 * (a_n + x / a_n)

Program infinite list of approximations by:

    fun approxsqrts x =
       let
          fun from approx = approx :: from (0.5 * (approx + x/approx))
       in
          from 1.0
       end;
If we want an approximation where the difference between successive approximations is < eps, write:
    fun within eps (approx1 :: approx2 :: rest) =
        if abs(approx1 - approx2) < eps
                then approx1
                else within eps (approx2::rest);
Now, to get a square root approximation in which the difference between successive terms is < eps, write:
    fun sqrtapprox x eps = within eps (approxsqrts x)
Of course, we can also do this with an eager language, but there is a bit more to worry about - we must combine the logic of both approxsqrts and within into the same function (a sketch follows below).
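
A minimal sketch of such a combined version in ordinary (eager) SML, using the made-up name sqrtapprox'; the generator and the convergence test are fused into a single loop:

    (* Eager version: compute the next approximation and test for convergence
       in the same recursive loop, rather than via an intermediate list. *)
    fun sqrtapprox' x eps =
       let fun loop approx =
              let val next = 0.5 * (approx + x / approx)
              in if abs (approx - next) < eps
                    then approx
                    else loop next
              end
       in loop 1.0
       end;

For example, sqrtapprox' 2.0 0.0001 gives roughly 1.4142.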

Can imagine functional programs as a pipeline generating, filtering, and transforming data. (Works best with lazy evaluation!)

For example, the above sqrtapprox function can be looked at as a pipeline: approxsqrts x generates a stream of approximations, and within eps consumes just enough of them to find two that are close together.

Think of program as composition of boxes, glued together by pipes (like UNIX pipes).

Lazy evaluation gives proper behavior so don't stack up lots of data between boxes.

Last box requests data from earlier boxes, etc.

In general try to write general boxes which generate, transform and filter data.

Why not just use lazy evaluation?

(This material will not be covered in lecture, but provides extra information on lazy evaluation.)

An eager language is easier and more efficient to implement with conventional techniques.

If language has side-effects, then important to know when they will occur!

Also, many optimizations involve introducing side effects into storage to save time.

In parallelizing a computation, it is often better to start a computation as soon as it is ready. With eager evaluation, the evaluation of a parameter may turn out to be wasted.

Can simulate lazy evaluation in eager language by making expressions into "parameterless" functions.

I.e., if we wish to delay evaluation of E : T, change it to fn () => E, which has type unit -> T.

Ex: Suppose we wish to implement the second parameter of f with lazy evaluation:

    fun f x y = if x = [] then [] else x @ y
Rewrite as
    fun f' x y' = if x = [] then [] else x @ (y' ())  (* note y' is applied to (), the only value of type unit *)
Where we would normally write f E1 E2, instead write f' E1 (fn () => E2).

Then E2 is evaluated only if x <> []!
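
A quick check of this behavior (the tl [] thunk is there just to show that nothing bad happens when the thunk is skipped):

    - f' ([] : int list) (fn () => tl []);   (* thunk is never forced *)
    val it = [] : int list
    - f' [1,2] (fn () => [3,4]);             (* thunk is forced *)
    val it = [1,2,3,4] : int list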

Implementing lazy lists ("suspended lists") in an eager language:

    datatype 'a susplist = Mksl of (unit -> 'a * 'a susplist) | Endsl;
Like a regular list, but we must apply the function to () before we get the components!

    (* add new elt to beginning of suspended list *)   
    fun slCons newhd slist = let fun f () = (newhd, slist)      
                             in Mksl f end;   
       
    exception empty_list;   
       
    (* extract head of suspended list *)   
    fun slHd Endsl = raise empty_list   
      | slHd (Mksl f) = let val (a,s) = f ()   
                        in a end;   
       
    (* extract tail of suspended list *)   
    fun slTl Endsl = raise empty_list   
      | slTl (Mksl f) = let val (a,s) = f()   
                        in s end;   
       
    (* Is suspended list empty? *)   
    fun slNull Endsl = true   
      | slNull(Mksl f) = false;   
         
    (* Infinite list of ones as suspended list *)   
    val ones = let fun f() = (1,Mksl f)   
               in Mksl f end;   
                  
    (* Suspended list of increasing integers starting with n *)   
    fun from n = let fun f() = (n, from(n+1))   
                 in Mksl f end;   
                    
    val nat = from 1;
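
A small usage sketch (slNth is a helper introduced here, not part of the lecture code) showing that nat really behaves like the infinite list 1, 2, 3, ...:

    (* nth element of a suspended list (1-based), forcing only what is needed *)
    fun slNth 1 slist = slHd slist
      | slNth n slist = slNth (n-1) (slTl slist);

    - slNth 5 nat;
    val it = 5 : int
    - slHd (slTl ones);
    val it = 1 : int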

Languages like LISP and SCHEME as well as lazy languages support streams for I/O.

Program Correctness

Referential transparency is key to ease of program verification, because we can replace identifiers by their values.

I.e. If have

    let val I = E in E' end;
then get same value by evaluating E'[E/I], i.e., replace all occurrences of I by E in E' and then evaluate.

Thus we can reason that:

    let val x = 2 in x + x end   
    = 2 + 2   
    = 4
If side effects are allowed then this reasoning fails:

Suppose print(n) has value n and induces a side-effect of printing n on the screen. Then

    let val x = print(2) in x + x end   
             != print(2) + print(2)

Interestingly, our proof rule only works for lazy evaluation:

    let val x = m div n in 3 end;
= 3 under eager evaluation only if n <> 0 (otherwise evaluating m div n raises an exception)!

Under lazy evaluation the equation always holds, since x is never needed.

Therefore we can use the proof rule only if we guarantee there are no side effects in the computation and all parameters and expressions converge (or we use lazy evaluation).

General theorem: Let E be a functional expression (with no side effects). If E converges to a value under eager evaluation then E converges to the same value with lazy evaluation (but not vice-versa!!)

Imperative features - references

The built-in constructor ref creates references (i.e., addresses).

Example

    - val p = ref 17   
    val p = ref 17 : int ref
Can get at value of reference by writing !p
    - !p + 3;
    val it = 20 : int
Also have assignment operator ":="
    - p := !p + 1;
    val it = () : unit
    - !p;
    val it = 18 : int
Other imperative commands:

(E1; E2; ...; En) - evaluate all expressions (for their side-effects), returning value of En

while E1 do E2 - evaluates E2 repeatedly until E1 is false (result of while always has type unit)

Writing Pascal programs in ML:

    fun decrement(counter : int ref) = counter := !counter - 1;
   
    fun fact(n) = let 
                     val counter = ref n; 
                     val total = ref 1;   
                  in 
                     while !counter > 1 do   
                        (total := !total * !counter ;   
                         decrement counter);   
                     !total   
                  end;
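A quick check in the REPL (expected response):

    - fact 5;
    val it = 120 : int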
There are restrictions on the types of references - e.g., we can't have references to polymorphic objects (e.g., nil or polymorphic functions). See the discussion of non-expansive expressions in section 5.3.1. Essentially, only function definitions (or tuples of them) can have polymorphic type. Results of function applications or values of references can never be polymorphic.
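
The classic illustration is a reference to nil. A sketch of what happens (the exact response varies by compiler; this is not verbatim SML/NJ output):

    - val r = ref nil;
    (* rejected, or given a dummy monomorphic type, because of the value
       restriction: r cannot have the polymorphic type 'a list ref, since
       assigning to it at two different types would break type safety *)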

Implementation issues

Efficiency:

Functional languages have tended not to run as fast as imperative ones. Why?

Use lists instead of arrays - linear time rather than constant to access elements

Passing around functions can be expensive; local variables must be retained for later execution, and therefore must be allocated from the heap rather than the stack.

Recursion typically uses a lot more space than iterative algorithms.
New compilers detect "tail recursion" and transform it into iteration (see the tail-recursive sketch below).

Lack of destructive updating. If a structure is changed, may have to make an entirely new copy (though sharing minimizes copying).
This generates a lot of garbage, so garbage collection needs to go on in the background.

"Listful style" - easy to write inefficient programs that pass lists around when single element would be sufficient (though optimization may reduce).

With lazy evaluation, need to check whether a parameter has been evaluated yet - this can be quite expensive to support.
Need an efficient method to do call-by-name - carry around instructions on how to evaluate the parameter, and don't evaluate it until necessary.
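
As an illustration of the tail-recursion point above, here is a sketch of the earlier prod function rewritten with an accumulating parameter (prod' is a name introduced here, not from the lecture) so that the recursive call is in tail position and can be compiled as a loop:

    (* Tail-recursive product of a list: the accumulator acc carries the
       partial result, so nothing remains to do after the recursive call. *)
    fun prod' lst =
       let fun loop ([], acc) = acc
             | loop (head::rest, acc) = loop (rest, head * acc)
       in loop (lst, 1)
       end;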

Program run with current implementation of Standard ML of New Jersey is estimated to run only 2 to 5 times slower than equivalent C program.

Lazy would be slower.

What would happen if we designed an alternative architecture based on functional programming languages?

Concurrency

One of driving forces behind development of functional languages.

Because values cannot be updated, result not dependent on order of evaluation.

Therefore don't need explicit synchronization constructs.

In a distributed environment, can make copies with no danger of the copies becoming inconsistent.

If we evaluate f(g(x),h(x)), can evaluate g(x) and h(x) simultaneously (with eager evaluation).

Two sorts of parallel architectures: data-driven and demand-driven.

Elements of these are being integrated into parallel computer designs.

Idea is programmer need not put parallel constructs into program and same program will run on single processor and multi-processor architectures.

Not quite there yet. Current efforts require hints from programmer to allocate parts of computation to different processors.

Summary

Functional programming requires alternative way of looking at algorithms.

Referential transparency supports reasoning about programs and execution on highly parallel architectures.

While lose assignment and control/sequencing commands, gain power to write own higher-order control structures (like listify, while, etc.)

Some cost in efficiency, but gains in programmer productivity since fewer details to worry about (higher-level language) and easier to reason about.

Languages like ML, Miranda, Haskell, Hope, etc. support implicit polymorphism resulting in greater reuse of code.

ML features not discussed:

ML currently being used to produce large systems. Language of choice in programming language research and implementation at CMU, Princeton, Williams, etc.

Computational biology: Human genome project at U. Pennsylvania

Lots of research into extensions. ML 2000 report.

Addition of object-oriented features?
