22C111 Spring 2008 Programming Languages: Lectures Page

This page will show, by date, some of the topics and background sources for lecture material. Reading this page is not a substitute for attending class.

  • 9 May. Some material on concurrency; and discussion of final exam topics.

  • 7 May. Continuing semantics, more detail on invariants, preconditions and postconditions (useful reference: page 164 of the textbook).

  • 5 May. A lecture on semantics.

  • 2 May. Some miscellaneous loose ends related to scoping. What motivates this topic? (The issues are not stressed by the textbook.) The complexity of scoping rules might not be worth the trouble for small programs. However, once you have large libraries and projects with teams of programmers, name conflicts are likely, and it's even natural to reuse names. Also, the modern imperative of code reuse tends to advocate (1) information hiding, and (2) modularity or independence in program components. Information hiding motivates the Object-Oriented view, where objects and interfaces hide internal state and logic from users of objects. Then things get messy when scoping, inheritance, and overriding interact: the lecture looks a little at Chapter 9, particularly the issues of static vs dynamic methods, to show why visibility (related to scoping and Chapter 3's description of modules) is important. Particular sections for this lecture:
    • An example similar to Figures 3.10 and 3.11 was shown on the blackboard, numbering each statement (like in the textbook); then, a timeline diagram was drawn, showing which variables are "active" or visible at different points in time, during program execution. Also, I pointed to the difference in Perl's my() and local(), in the perl page. The timeline diagram is a good way to see the difference between static and dynamic scoping rules.

    • I used a little of Section 3.3.4 in Chapter 3, which motivates the design in syntax for modules to resolve possible naming conflicts when programs are put together with libraries.
    • As review, I mentioned Java's visibility modifiers in class definitions (public, private, protected), and how these have a kind of "security" motivation for scope/visibility.
    • Just for fun, I presented a little of Example 9.6 in Chapter 9, which explains C# properties -- you can read this in the text, and also in the C Sharp (programming language) page; I recommend you look at the property (programming) page.

    • The only other thing from Chapter 9 we'll consider is the subject of virtual methods, on pages 500-503, including member lookup. The thing to appreciate is the difference between static and dynamic method binding: this is somewhat similar to the distinction between static and dynamic scoping, in that runtime lookup is only needed for the dynamic case.
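      A tiny Python sketch of my own (not from the book) of dynamic method binding; in Python every method behaves like a "virtual" method, so the call below is resolved from the object's runtime class:

       class Shape:
           def area(self):
               return 0.0

       class Square(Shape):
           def __init__(self, s):
               self.s = s
           def area(self):                  # overrides Shape.area
               return self.s * self.s

       def report(shape):                   # call site written against Shape...
           return shape.area()              # ...but dispatch uses the runtime class

       print(report(Shape()), report(Square(3)))   # 0.0 9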

  • 30 April. Revisit parameter conventions (formal vs actual/argument), imperative (value vs reference styles) vs functional (see evaluation strategy page for a list of terminology and styles in those languages). The remainder of the lecture followed Chapter 3 material on scoping rules. The main distinction is static versus dynamic scoping. At any point, there is a conceptual set of active bindings (could be the association list in some languages; see also the term referencing environment). In the case of continuations or returning functions, a paradigm typical of functional languages, the question of deep vs shallow bindings arises. Returning to static scoping, there are design choices for languages: whether global or local scope is the default, block scope and access to surrounding scope; and with imported modules, scoping typically uses qualifiers on names. Subsection 3.3.3 shows some interesting issues of declaration vs definition, and why declaration order can be important (and why forward declarations are needed). Dynamic scoping is illustrated by Figure 3.10 and a second figure. Dynamic scoping uses temporal rules rather than textual rules to determine what is in scope, hence a compiler can't predict what a name will refer to under dynamic scoping. Some helpful references: scope (programming), free variables and bound variables.
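    To make the static/dynamic distinction concrete, here is a minimal Python sketch of my own (not from the lecture); Python is statically (lexically) scoped, so the returned function reads the x of its defining environment rather than the x of the caller, which is the binding dynamic scoping would find:

     x = "global"

     def make_reader():
         x = "enclosing"
         def read():
             return x            # resolved by the static nesting of definitions
         return read

     def caller():
         x = "caller's local"    # a dynamically scoped language would see this one
         return make_reader()()

     print(caller())             # prints "enclosing": Python scopes statically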

  • 28 April. Now that some conventions for using the stack are established, look at some common control structures and how control is transferred, to see how control works with the stack. In the common control structures (if, for, while) the stack expands and contracts in LIFO order. This makes stack manipulation easy (just change the stack pointer). For function, subroutine, or method invocation, the frame pointer also changes in a similar LIFO manner at runtime. But one much harder kind of control transfer is the infamous goto statement -- in class, I illustrated the difficulty of a "go to" directed to a statement within a loop:

    •   if (x > 0) goto A;
        for (int i = 0; i < n; i++) {
          float x = 3.14159;
        A:  a[i] = x*a[i];
        }

    It's tough to say what should happen here! This example gives us some clue as to why the "go to" is restricted, in some modern languages like Java, to forms such as continue, break, switch, and throwing exceptions: these restricted forms have predictable effects that amount to simple manipulation of stack and frame pointers. But one important type of control structure does require a less conventional treatment of stacks: the coroutine. Coroutines are useful for implementing iterators, generators, and threads. Two interesting ways to think about coroutines:

    • Can a subroutine have multiple entry points? Of course not. However, this is roughly the idea behind a coroutine: multiple entry points and persistent state between calls. More than that, a coroutine A can "call" another coroutine B, transferring program control from A to B, with the intent that B could transfer control back to A. Initially, this seems just like a subroutine. But, once B transfers control back to A, then A resumes where it left off, with the stack just as it was before. Finally, if B again transfers control to A, then the stack for A should be the way that it was when A last had control. No simple manipulation of stack and frame pointers can achieve this! The book suggests that something called a cactus stack is needed (also, a heap/stack hybrid could work), so that A and B essentially have their own stacks. Note that these ideas generalize beyond just two coroutines, to any number of coroutines.

    • Another, more abstract way to visualize the idea of coroutines is to look at a pair of DFAs (or NFAs) that interact. Suppose we have two DFAs, DFA1 and DFA2. Each transition arc in DFA1 is labeled with an input symbol, but possibly also with a secondary label that is an output symbol. For instance, the label */^ could mean that the input symbol is *, and the output symbol is ^. (We could easily make a DFA looking for the case of ** in the input stream, converting this to ^ in the output stream.) Now, DFA2 takes for its input stream the output of DFA1.

      • Now consider the arrangement of DFA1 and DFA2 in execution. DFA1 can run for a while, recognizing an input stream and generating some output symbols. The output symbols might be fed directly to DFA2 or possibly fed into something like a unix pipe. Then DFA2 reads its input from the unix pipe. What we see is a situation where DFA1 and DFA2 are roughly running in parallel, or perhaps as activity alternating between DFA1 and DFA2. The important thing to observe is that the state of each DFA has to be preserved whenever activity switches between the two DFAs, so that each DFA will resume where it left off, once it gets control again.

    • How can transfer of control, such as used by coroutines, be programmed in conventional languages (the text does point to some languages that directly support coroutines)? One example is C: the setjmp/longjmp library functions provide something similar to the idea of coroutines. Python's generators enable coroutine-like control to be programmed directly, as the sketch below suggests.
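      Here is a rough sketch of my own (not from the lecture) of the DFA1/DFA2 arrangement above, written with Python generators: DFA1 rewrites ** in its input to ^ in its output, and DFA2 consumes that output stream; control alternates between the two, and each one's state survives across the switches:

       def dfa1(chars):
           prev_star = False                    # two states: "just saw *" or not
           for c in chars:
               if prev_star and c == '*':
                   yield '^'                    # transition labeled */^
                   prev_star = False
               elif c == '*':
                   prev_star = True             # hold the first '*', emit nothing yet
               else:
                   if prev_star:
                       yield '*'                # the pending '*' was not doubled
                       prev_star = False
                   yield c
           if prev_star:
               yield '*'

       def dfa2(symbols):
           count = 0
           for s in symbols:                    # dfa1 runs just far enough to
               if s == '^':                     # produce each symbol dfa2 asks for
                   count += 1
           return count

       print(dfa2(dfa1("a**b*c**")))            # -> 2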

    • A difference between the notions of coroutines and continuations: a coroutine needs its stack, but can change the content of its stack (and other static variables) each time it is activated; the classic notion of a continuation has a stack (in functional languages, there may be a "surrounding environment" that is treated like the stack information), but this stack associated with the continuation does not change when the continuation is activated -- instead, each time the continuation is activated, it resumes with the same stack.

  • 25 April. The goal of this lecture is to look at how memory, specifically stack memory, is used during program execution. The information and figures for the lecture come from Chapters 3 and 8:
    • Chapter 3, pages 103-111 on memory layout (but skip heap and garbage collection topics)
    • Terminology: subroutine call stack, stack frame, activation record, stack pointer, frame pointer, static link, dynamic link (links between stack frames)
    • Figure 3.1 on page 108, Figure 3.2 on page 110; the difference between frame and stack pointers
    • Figure 3.5 on page 120 shows static chains for a nested subroutine program
    • Pages 408-413: especially Figure 8.1, showing dynamic links
    • The start of Section 8.3, noting many kinds of argument/parameter styles in different languages
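    As a concrete (if language-specific) illustration, here is a small Python sketch of my own that walks the chain of activation records at run time; each frame's f_back link plays roughly the role of the dynamic link between stack frames:

     import inspect

     def a(): return b()
     def b(): return c()
     def c():
         names = []
         frame = inspect.currentframe()
         while frame is not None:            # follow the dynamic links, newest first
             names.append(frame.f_code.co_name)
             frame = frame.f_back
         return names

     print(a())                              # e.g. ['c', 'b', 'a', '<module>']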
  • 23 April. Exam (no lecture).
  • 21 April. Preparation for exam on Wednesday. Also, some discussion of macro (computer science) and how scripting languages similarly can mix different languages within the same program or file, for instance forms in Django or Rails. Also, continuing with Python examples and questions.

  • 18 April. We look at some examples of Python, including the use of generator expressions.
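    For reference, a tiny generator-expression example of my own (not the exact one from class); unlike a list comprehension, nothing is computed until the values are consumed:

     squares = (n * n for n in range(10) if n % 2 == 0)   # lazy: 0, 4, 16, 36, 64
     print(next(squares))        # 0  (forces just the first value)
     print(sum(squares))         # 120 (consumes the rest: 4 + 16 + 36 + 64)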

    • Also, in preparation for an upcoming exam, here is what to read from Chapter 13:
      • Section 13.1. Read the "What is a Scripting Language" section carefully. The "Common Characteristics" list of attributes is important. On p. 673, read the gray box inset, which talks about VBScript in relation to other scripting languages.
      • Section 13.2. Just skim this section, looking at the titles and any italicized terms to get some idea of the problem domains listed. On page 684, there is a "check your understanding" list. Problems 6, 7, and 10 are things you should know (or read enough to answer these).
      • Skip Subsection 13.2.2, because we covered Perl and won't look into Awk and Sed in this course. Skip 13.2.3. Subsection 13.2.4 is interesting once you know about Unix processes, but we covered Python in class and so this subsection is not essential. Read enough of Subsection 13.2.5 (pages 698 and 699) to know what an extension language is. Read Subsection 13.3.1 enough to know what a CGI script does. Skip Subsections 13.3.2, 13.3.3, and 13.3.4. Question 32 on page 711, however, is something you should know; I briefly discuss in class the role of scripts in web development frameworks. Skip Subsection 13.3.5 on XSLT. Read Subsection 13.4's introduction (the list of innovative features), but we will not look at Subsection 13.4.1 on Names and Scoping before the upcoming exam -- this will be part of the next part of the course. Subsection 13.4.2 is essentially covered by Perl and the previous Grep assignment; similarly, Subsections 13.4.3 and 13.4.4 are covered by the material in the course notes on Perl and Python, so we can skip those. Finally, please read the "worse is better" inset on page 749; you may want to look a bit at the link provided, to understand this interesting idea.

  • 16 April. Finished the main language features of python.

  • 14 April. First problem of new homework explained; continued with python.

  • 11 April. Finished perl for this course and started on python; fifth homework delayed.

  • 9 April. First half of lecture: answering questions about the homework. Second part of lecture, talking about globbing and indirection (see perl page for some details).

  • 7 April. Much of the perl page covered.

  • 4 April. Starting scripting languages, including perl.

  • 2 April. We viewed the first half of Philip Wadler's presentation (on Google video) "Faith, Evolution and Programming Languages".
  • 31 March. Discussion of current homework assignment. Some preparation for learning about the famous Curry-Howard isomorphism.

  • 28 March. Continue some advanced topics on functional programming: continuations and combinators.
  • 24 March. Chapter 10, review of some terminology (exam on Wednesday).
  • 17-21 March. Spring Break!
  • 14 March. Entire lecture answered questions about the homework.
  • 12 March. Continuing functions and sequences; explaining the general notions of map and reduce (which go beyond just Haskell). Also, the lecture started some coverage of lazy evaluation.
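    A small sketch of my own showing the same two ideas outside Haskell, using Python's built-in map and functools.reduce:

     from functools import reduce

     xs = [1, 2, 3, 4]
     doubled = list(map(lambda x: 2 * x, xs))           # map: apply a function elementwise
     total   = reduce(lambda acc, x: acc + x, xs, 0)    # reduce: fold the list to one value
     print(doubled, total)                              # [2, 4, 6, 8] 10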

  • 10 March. Beginning of higher-order functions, see functions and sequences.

  • 7 March. Discussed Homework 3 problems. One problem asks you to create a new datatype with multiple fields; here is an example (similar to what was done in class):
     -- define "Blob" datatype, which could have mix of Num and String
     data Blob a b = Mud a | Goo b | Dot
     -- function says whether argument is a "mud" blob:
     isMud x = case x of
                 (Mud _) -> True
                 _       -> False
     -- a list of blobs
     blobs = [ (Mud 1), (Mud 2), (Goo "ick"), (Goo "yuck"), Dot ]
     -- function says which of the blobs in a list are mud blobs:
     muds [] = []
     muds (m:x) = (isMud m) : (muds x)

    If you put all the above in a file, say blobs.hs, then execute the command "hugs blobs.hs", the Hugs shell asks for input, and you could do this:

     Main> muds blobs  
     [True,True,False,False,False]
    The other part of the lecture covered the basic computing paradigm of functional program evaluation, contrasting it with imperative programming and with Prolog. The basic idea is repeated substitution of definitions for function names, with binding of arguments, followed by simplification of operations on constants. We worked through a GCD example in class.
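    Not the example from class, but here is a quick sketch of my own of the substitution idea, written in Python for variety; the comment traces how the definition is repeatedly substituted and simplified:

     def gcd(a, b):
         # gcd(12, 8) -> gcd(8, 4) -> gcd(4, 0) -> 4
         return a if b == 0 else gcd(b, a % b)

     print(gcd(12, 8))    # 4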
  • 5 March. Finished the basic syntax for function definition. Introduced the main haskell page and how to look up things; then went through an example program defining mylist -- my own version of lists that can handle embedded lists. This will be part of the next homework.

  • 3 March. Entire lecture developed a simple Haskell program to search through a string.
  • 29 February. The first part of the lecture defined the Lambda calculus and showed how it can be used to define functions. The last part of the lecture started coverage of haskell functions and the relevance of Currying to functional programming languages.
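    A rough Python sketch of my own of the Currying idea: a two-argument function rewritten as a chain of one-argument functions, each returning the next:

     def add(x, y):               # ordinary two-argument function
         return x + y

     def curried_add(x):          # curried form: takes x, returns a function of y
         return lambda y: x + y

     add5 = curried_add(5)        # partial application falls out for free
     print(add(5, 3), curried_add(5)(3), add5(3))    # 8 8 8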

  • 27 February. This was a lecture on data types in Haskell. The colon (:) operator has two roles in the language:
    1. Colon is an operator to build on a list, for instance 7:[2,3] builds [7,2,3].
    2. Colon can serve in Haskell's binding technique, which is a kind of pattern-matching (in the lecture I called this a "picture"). For instance, given L="fantastic", and the pattern (m:n:x), binding between these would result in m='f', n='a', and x="ntastic".
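    A loose Python analogue of my own for item 2 (sequence unpacking; note that the tail comes back as a list of characters rather than a string):

     m, n, *rest = "fantastic"
     x = "".join(rest)
     print(m, n, x)     # f a ntastic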

    The lecture also worked in some Haskell syntax and built-in datatypes (Integer, Num, Char, String, lists) and function notation. Toward the end of the lecture, we looked at defining new data types in Haskell, and how the syntax actually enables you to make your own pattern-matching for binding arguments of functions to variables. See the haskell types page for more detail.

  • 25 February. First lecture on functional programming languages. We'll be following some topics in Chapter 10 of the textbook, up to section 10.4 at least. The lecture was a general introduction to functional programming languages (Lisp, Scheme, Haskell, etc). Basic concepts are:
    • referential transparency
    • recursion instead of iteration; binding instead of assignment
    • freedom from side-effects
    • functions can be first-class values
    • borrowing from mathematical function notation
    • Haskell's notation for functions: f :: [a] -> [a] -> a means that f is a function of two lists, and returns the same type as each list contains

  • 22 February. Returned graded examinations. Assigned second homework (see homework page). Last lecture on Prolog, to illustrate how regular expressions can be recognized using Prolog definitions: see prolog page for examples.

  • 20 February was the first examination.
  • 18 February. This lecture introduced Prolog lists and how definitions work using lists. The page prolog lists has some notes for the lecture.

  • 15 February. Lecture covered binding (computer science) and how Prolog goes through definitions, trying to bind to variables, in order to satisfy a goal. The lecture also spent time on Prolog's built-ins for arithmetic, and the difference between "==" and "is" in Prolog (the latter is the one that can bind a variable to an arithmetic expression, whereas "==" just tests if two things refer to the same object or same atom).

  • 13 February. Continue with Prolog, explaining how terms, clauses, Horn clauses, and constraints work in Prolog's search for solutions to constraints. The ordering of evaluation is important to Prolog: see "Order of Definitions" on the course prolog page.

  • 11 February. Finish a few topics from the previous lecture, then introduce Prolog.
    • Chapter 1 of the textbook classifies programming languages as either imperative languages or declarative languages, but the chapter does state that the distinction is sometimes unclear. Initially, the definition of a declarative language looks like little more than that of a specification language; however, declarative languages tend to rely on recursive definitions to such a degree that translating a declarative specification into a program becomes simple: in other words, declarative languages are actually programming languages.

    • Syntax versus Semantics. Basic to any study of programming languages is the distinction between these two terms. Unfortunately, the term semantics has been overloaded: it has different and somewhat contradictory meanings, so context is important. A language's syntax is specified by grammars and regular expressions, as Chapter 2 describes. In addition, a compiler runs extra checks on the abstract syntax tree and may report more "syntax" errors, which some authors call static semantics; later, when the program executes, there can be errors encountered at runtime, and some of these errors (which the compiler was unable to detect) are called runtime semantic errors. So you may encounter the word "semantics" when referring to errors that the compiler or supporting runtime software finds and reports. Arguably, this should all be called syntax, but our grammar tools are limited to CFGs, which can't express everything a proper program must satisfy (for example, a CFG cannot state that a program must not reference undefined variables). The true meaning of program semantics comes much later in the textbook.
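    To make the layers concrete, here is a small Python sketch of my own; it uses the ast module's parser as the CFG-level check, and the comments note which errors the parser cannot see (Python happens to defer the "undefined name" check to run time, where a compiled language would flag it statically):

     import ast

     samples = {
         "syntax":  "x = = 1",        # rejected by the grammar itself
         "static":  "print(y)",       # grammatical, but 'y' is never defined
         "runtime": "print(1 // 0)",  # grammatical and name-correct; fails only when run
     }
     for label, src in samples.items():
         try:
             ast.parse(src)           # the parser (CFG-level) check
             print(label, "-> parses fine; any error here is 'semantic'")
         except SyntaxError as err:
             print(label, "-> syntax error:", err.msg)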

    • Prolog is the subject of Chapter 11. Unfortunately, Chapter 11 makes reference to many terms of previous chapters, so reading Chapter 11 is a little confusing. Prolog uses a computing paradigm less familiar to most students, so I'll take some extra time in the lecture to explain this.

  • 8 February. Assorted topics, some mentioned in Chapters 1 and 2, some going outside the textbook.
    • Turing Completeness is a term every computer scientist should know. The theoretical view of what it means for a computer to be Turing Complete is about the computational power of the computer; for a language, the meaning is similar (the Wikipedia article does talk about language features and give examples).

    • Specification versus Implementation are extremes for computer languages. A Specification language allows one to precisely state what the output of a program should be, but without giving any detail on how the program should accomplish this. An implementation language, usually just called a programming language, specifies all the steps needed to produce the output.

    • The correctness problem in computer science is the problem of ensuring that a program does, in fact, output what its specification says it should.

    • Pragma is a term defined in the textbook (in your Chapter 2 reading).

    • Refinement, or program refinement, is the process of rewriting a specification or program in more detail; the inverse of refinement is abstraction. Compilers can be thought of as automatic refinement procedures.

    • Program composition is the process of taking independently developed components (which could be subroutines or previously written parts of programs) and putting them together to get a new program. The same idea works with specifications. Most languages are designed with composition in mind, to facilitate SDKs, APIs, libraries, and so forth, often using linkers or other tools.

    • Program synthesis is the problem of automatically deriving a program from a specification.

  • 27 January. Continue coverage of Chapter 2, nearly finishing NFA and DFA topics.
  • 25 January. Finish coverage of the figures of Chapter 1, and introduce some notions of parsing and grammars, taken from Chapter 2. Read Chapter 2 up through Section 2.2, but stop just before Section 2.3. This lecture started on notes for grammars and parsing.

  • 23 January (first day). Some motivations for taking the course.
    • Remarks about the syllabus.

    • Start coverage of the figures in Chapter 1 of the textbook. The lecture material (minus actual figures which are copyrighted) is compiler interpreter terminology. The lecture also made the point that source programs have at least three intended "audiences":

      • The "machine" (or run-time, which can be dynamic)
      • Other programmers (who read, maintain, and modify the source program)
      • The compiler (which does static checking, macro substitution, etc)
