Exploring Generic Haskell

Generic Haskell van alle kanten
(met een samenvatting in het Nederlands)

Dissertation for the degree of doctor at Utrecht University, to be defended publicly, by authority of the Rector Magnificus, Prof. dr. W. H. Gispen, pursuant to the decision of the Board of Doctoral Degrees, on Thursday 2 September 2004 at 10.30 in the morning

by

Andres Löh

born on 18 August 1976 in Lübeck, Germany

Promotores:
Prof. dr. Johan Th. Jeuring, Universiteit Utrecht and Open Universiteit Nederland
Prof. dr. S. Doaitse Swierstra, Universiteit Utrecht

The work in this thesis has been carried out under the auspices of the research school ipa (Institute for Programming research and Algorithmics), and has been financed by the nwo (Nederlandse Organisatie voor Wetenschappelijk Onderzoek).

Printed by Febodruk, Enschede.
All drawings by Clara Strohm.
ISBN 90-393-3765-9

Contents

1 Adventure Calls!
  1.1 From static types to generic programming
  1.2 History of Generic Haskell and contributions of this thesis
  1.3 Related work on generic programming
  1.4 Selecting a route

2 Choosing the Equipment
  2.1 Prerequisites
  2.2 The Generic Haskell compiler
  2.3 A note on notation

3 A Functional Core
  3.1 Syntax of the core language fc
  3.2 Scoping and free variables
  3.3 Types and kinds
  3.4 Well-formed programs
  3.5 Operational semantics
  3.6 Recursive let
4 Type-indexed Functions
  4.1 Exploring type-indexed functions
  4.2 Relation to type classes
  4.3 Core language with type-indexed functions fcr+tif
  4.4 Translation and specialization
  4.5 Type checking

5 Parametrized Type Patterns
  5.1 Goals
  5.2 Parametrized type patterns
  5.3 Dependencies between type-indexed functions
  5.4 Type application in type arguments

6 Dependencies
  6.1 Core language with parametrized type patterns
  6.2 Dependency variables and kinds
  6.3 Dependency types
  6.4 Types and translation
  6.5 Correctness of the translation
  6.6 Kind-indexed types?

7 Going Generic
  7.1 Unit, Sum, and Prod
  7.2 Generic enumeration
  7.3 Generic equality
  7.4 Generic compression
  7.5 Extending the language

8 Local Redefinition
  8.1 Enhanced equality
  8.2 Size of data structures
  8.3 Short notation
  8.4 Local redefinition within type-indexed functions
  8.5 Core language with local redefinition fcr+tif+par+lr

9 Types of Type-indexed Functions
  9.1 Identity and mapping
  9.2 Zipping data structures
  9.3 Generically collecting values
  9.4 More choice
  9.5 Type tuples
  9.6 Multi-argument type signatures
  9.7 Revised generic application algorithm
  9.8 Multiple dependencies on one function
  9.9 Translation and correctness

10 Embedding Datatypes
  10.1 Zero
  10.2 A few examples
  10.3 Formal translation

11 Translation by Specialization
  11.1 Problem
  11.2 Lifting isomorphisms
  11.3 Lifting isomorphisms and universal quantification
  11.4 Reflexivity of the dependency relation
  11.5 Translation of generic functions
  11.6 How to determine the required components
  11.7 Discussion of translation by specialization
  11.8 Other translation techniques

12 Generic Abstraction
  12.1 Motivation
  12.2 Generic reductions
  12.3 Cata- and anamorphisms
  12.4 Types and translation of generic abstractions
  12.5 Type indices of higher kind
  12.6 Multiple type arguments

13 Type Inference
  13.1 Type inference of type arguments
  13.2 Dependency inference
  13.3 Base type inference

14 Default Cases
  14.1 Generic Traversals
  14.2 Variants of equality
  14.3 Simulating multiple dependencies on one function
  14.4 Implementation of default cases
  14.5 Typing default cases

15 Type, Newtype, Data
  15.1 Datatype renamings
  15.2 Type synonym declarations

16 Type-indexed Datatypes
  16.1 Type-indexed tries
  16.2 Explicit specialization
  16.3 Idea of the translation
  16.4 Local redefinition on the type level
  16.5 Generic abstraction on the type level
  16.6 The Zipper
  16.7 Implementation of type-indexed datatypes
  16.8 Translation by specialization and type-indexed datatypes

17 Alternative Views on Datatypes
  17.1 Constructors and labels
  17.2 Fixpoints of regular functors
  17.3 Balanced encoding
  17.4 List-like sums and products
  17.5 Constructor cases
  17.6 A language for views

18 Modules
  18.1 A language with modules
  18.2 Translation of modules containing type-indexed entities
  18.3 Explicit specialization of type-indexed types is necessary
  18.4 Open versus closed type-indexed functions

Syntax overview
  Complete syntax
  All languages
  Metavariables used

Samenvatting in het Nederlands
  Van statische types naar generiek programmeren
  Generic Haskell

Bibliography

Index

List of Figures

2.1 Sample deduction rule
3.1 Syntax of the core language fc
3.2 Kind checking for core language of Figure 3.1
3.3 Type checking for core language of Figure 3.1
3.4 Type checking for patterns, extends Figure 3.3
3.5 Subsumption relation on core types, extends Figure 3.3
3.6 Well-formed data declarations
3.7 Well-formed programs
3.8 Syntax of values, extends Figure 3.1
3.9 Type rule for run-time failure, extends Figure 3.3
3.10 Reduction rules for core language of Figure 3.1
3.11 Pattern matching for the core language of Figure 3.1, extends Figure 3.10
3.12 Syntax of fcr, modifies the core language fc of Figure 3.1
3.13 Translation of fcr to fc
3.14 Type checking for recursive let, modifies Figure 3.3

4.1 Core language with type-indexed functions fcr+tif, extends language fcr in Figures 3.1 and 3.12
4.2 Translation of fcr+tif to fcr
4.3 Type checking for fcr+tif, extends Figure 3.3
4.4 Type checking for declarations in fcr+tif, extends Figure 4.3
4.5 Translation of fcr+tif environments to fcr type environments

5.1 Types for generic applications of add to type arguments of different form
5.2 Types for generic applications of size to type arguments of different form
6.1 Core language with type-indexed functions and parametrized type patterns fcr+tif+par, extends language fcr+tif in Figure 4.1
6.2 Kind checking for language fcr+tif+par of Figure 6.1, extends Figure 3.2
6.3 Kind checking of type patterns in language fcr+tif+par of Figure 6.1
6.4 Well-formedness of dependency constraints in language fcr+tif+par of Figure 6.1
6.5 Kind checking of qualified types in fcr+tif+par of Figure 6.1
6.6 Extracting information from the type signature of a type-indexed function
6.7 Well-formedness of type signatures for type-indexed functions
6.8 Generic application algorithm
6.9 Translation of qualified types and dependency constraints in language fcr+tif+par
6.10 Subsumption relation on qualified types, extends Figure 3.5
6.11 Conversion of dependency constraints into explicit abstractions, extends Figure 6.10
6.12 Entailment of dependency constraints, extends Figure 6.10
6.13 Translation of fcr+tif+par expressions to fcr
6.14 Revelation of dependency constraints in fcr+tif+par
6.15 Translation of fcr+tif+par declarations to fcr
6.16 Translation of fcr+tif+par kind environments to fcr
6.17 Translation of fcr+tif+par environments to fcr type environments
6.18 Translation of generic application in fcr+tif+par
6.19 Generic application algorithm extension for type arguments of higher kinds, extends Figure 6.8
8.1 Short notation for local redefinition
8.2 Alternative definition of short notation
8.3 Core language with local redefinition fcr+tif+par+lr, extends language fcr+tif+par in Figures 6.1, 4.1, and 3.1
8.4 New rule for checking and translating recursive let in fcr+tif+par+lr, replaces rule (e/tr-let) in Figure 6.13
8.5 Translation of fcr+tif+par+lr declarations to fcr, extends Figure 6.15

9.1 Example types for generic applications of map to type arguments of different form
9.2 Example types for generic applications of zipWith to type arguments of different form
9.3 Example types for generic applications of collect to type arguments of different form
9.4 Syntax of type tuples and their kinds
9.5 Kind checking of type and type argument tuples
9.6 Comparison of type tuples
9.7 Bounded type tuples
9.8 Generalized type signatures in fcr+tif+mpar, replaces type signatures from Figure 6.1
9.9 Revised base type judgment, replaces rule (base) from Figure 6.6
9.10 Dependency judgment
9.11 Revised well-formedness of type signatures for type-indexed functions, replaces rule (typesig) of Figure 6.7
9.12 Wrapper for the revised generic application algorithm
9.13 Revised generic application algorithm, replaces Figures 6.8 and 6.19
9.14 Revised generic application algorithm, continued from Figure 9.13, replaces Figures 6.8 and 6.19

10.1 Kind checking of parametrized types, extends Figure 3.2
10.2 Structural representation of datatypes
10.3 Structural representation of constructors
10.4 Generation of embedding-projection pairs for datatypes
10.5 Generation of embedding-projection pairs for constructors

11.1 Well-formed programs, replaces Figure 3.7
11.2 Translation of fcr+gf declarations to fcr, extends Figure 6.15

12.1 Core language with generic abstraction fcr+gf+gabs, extends language fcr+gf in Figures 9.4, 8.3, 6.1, 4.1, and 3.1
12.2 Well-formedness of type signatures for type-indexed functions including generic abstractions, replaces Figure 9.11
12.3 Translation of generic abstractions to fcr
12.4 Modified base type judgment for fcr+gf+gabs, replaces Figure 9.9
12.5 Types for generic applications of fsize to type arguments of different form

14.1 Core language with default cases fcr+gf+gabs+dc, extends language fcr+gf+gabs in Figures 12.1, 9.4, 8.3, 6.1, 4.1, and 3.1
14.2 Translation of default cases, extends Figures 11.2 and 6.15
14.3 Conversion of arms for a default case

15.1 Full syntax of type declarations for language fcrt, extends Figures 3.1 and 3.12

16.1 Kinds for generic applications of FMap to type arguments of different form
16.2 Syntax of type-indexed datatypes in language fcrt+gftx
16.3 Well-formedness of kind dependency constraints in fcrt+gftx of Figure 16.2, compare with Figure 6.4
16.4 Well-formedness of qualified kinds in fcrt+gftx of Figure 16.2, compare with Figure 6.5
16.5 Translation of qualified kinds and kind dependency constraints in fcrt+gftx, compare with Figure 6.9
16.6 Subsumption relation on qualified kinds, compare with Figure 6.10
16.7 Extracting information from the kind signature of a type-indexed datatype, compare with Figure 6.6
16.8 Well-formedness of kind signatures for type-indexed datatypes, compare with Figure 6.7
16.9 Generic application algorithm for type-indexed datatypes, compare with Figures 6.8 and 6.19
16.10 Translation of generic application of type-indexed datatypes in language fcr+tif+par, compare with Figure 6.18
16.11 Translation of fcrt+gftx types to fcrt
16.12 Revelation of kind dependency constraints in fcrt+gftx, compare with Figure 6.14
16.13 Translation of fcrt+gftx expressions to fcrt, extends Figures 6.13, 8.1, and 12.3
16.14 Translation of fcrt+gftx type declarations to fcrt, compare with Figure 6.15
16.15 Translation of fcrt+gftx declarations to fcrt, continued from Figure 16.14
16.16 Closed base type with respect to applications of type-indexed types
16.17 Translation of fcrt+gftx declarations to fcrt, replaces Figure 11.2

17.1 Operations on the Cardinality type

18.1 Language with modules fcrtm, extends Figures 3.1, 3.12, and 15.1
18.2 Language with modules and type-indexed entities fcrtm+gftx, extends Figure 18.1
18.3 Translation of fcrtm+gftx export list entries to fcrtm
18.4 Translation of fcrtm+gftx export list entries to fcrtm, continued from Figure 18.3
18.5 Translation of fcrtm+gftx modules to fcrtm
18.6 Conflicting components of type-indexed types
18.7 Explicit specialization in a separate module

Acknowledgements

The story of me ending up in Utrecht to do a Ph.D. on Generic Haskell is full of lucky coincidences. Therefore, first of all, I want to thank fate, if there is such a thing, for the fact that everything turned out so well.

When I arrived in Utrecht, I did not even know my supervisor, Johan Jeuring – well, okay, I had read papers authored by him. Now, four years later, I can confidently say that I could not imagine a better supervisor. He has proved to be a patient listener, encouraged me where needed, warned me when I was about to be carried away by some spontaneous idea, let me participate in his insights, and shared his experiences regarding survival in the academic world. Most of all, he has become a friend.
My whole time here would have been far less enjoyable if it were not for him. Johan, thank you very much.

Although he appears second in these acknowledgements, Doaitse Swierstra may well have deserved the first place. After all, he is “responsible” for the fact that I abandoned everything else, moved to Utrecht, and started working on a Ph.D. When I first met him at a conference, I asked him about possibilities to do a Ph.D. in Utrecht and to work in the area of functional programming. Afterwards, I had a hard time convincing him that it would not be possible for me to start immediately, but that I still had almost a year to go until finishing my “Diplom”. During my time here, Doaitse has been a constant source of interesting ideas and enlightening discussions. I envy his seemingly unshakable enthusiasm for his work, and hope that this enthusiasm will inspire many others as it did inspire me.

I want to thank Ralf Hinze, who had the dubious privilege of sharing an office with me during my first year, and who, as another German, eased my transition to the Netherlands. For me, working on generic programming, he has been the ideal person to have close by and to learn from. I thank him for showing an interest in me and my work, even after he left.

Jeremy Gibbons, Ralf Hinze, Lambert Meertens, Rinus Plasmeijer, and Peter Thiemann are the members of the reading committee. I am grateful that they took the time to read this thesis, and for several helpful comments and insights. I am sorry that I could not act on all of them due to time constraints.

One of the really amazing things about Utrecht University has been the atmosphere in the software technology group. There are so many people who are interested in each other’s work, and if there ever was a problem just outside my own area of expertise, I was very likely to find a helpful answer just a few steps away next door.
Many of my colleagues have provided interesting ideas and valuable discussions. I would like to mention Daan Leijen, Eelco Dolstra, Arjan van IJzendoorn, and Bastiaan Heeren in particular, who have become good friends over the years.

I am very grateful to Martin Bravenboer, Bastiaan Heeren, Arjan van IJzendoorn, Daan Leijen, Ganesh Sittampalam, Ian Lynagh, Shae Matijs Erisson, André Pang, the “Matthiases” Auer and Weisgerber, Carmen Schmitt, Clara Strohm, and Günter Löh for reading parts of my thesis and constructively commenting on various aspects of my work. Especially to those of you who do not have a degree in computer science: thank you for taking the time and trying to understand what I have done.

The people on the freenode #haskell channel, including, but not limited to, earthy, Heffalump, Igloo, Marvin--, ozone, shapr, and SyntaxNinja, have provided a great deal of – probably largely subconscious – mental support during the phase in which I had to write my thesis and diversions were scarce.

I am indebted to Torsten Grust and Ulrik Brandes, who, back during my time in Konstanz, managed to change my view of computer science into a positive one. Without them, I am convinced that I would never even have considered doing a Ph.D. in computer science. Thanks, Torsten, for exposing me to functional programming. I, too, once came to the university thinking that C is the only cool programming language in existence.

My parents deserve a big “Dankeschön” because they never complained about the choices I have made in my life, including the places where I have lived, and supported me in all my decisions in every way imaginable.

Words cannot describe what Clara means to me and how much impact she had on the creation of this thesis.
Not only did she read almost all of it, finding several smaller and larger mistakes, she also invested an incredible amount of time and creativity into creating all the drawings in this thesis, thereby giving it a unique character – not to mention all the mental support and patience during a time when she herself had a thesis to write. To say it in the words the “Lambda” would choose: “Schlububius? Einmal? Alle Achtung!”

1 Adventure Calls!

This thesis is an exploration – an exploration of a language extension of the functional programming language Haskell. The extension is called Generic Haskell, although the name has been used to refer to different objects over the last several years: many papers have described different proposals, features, variations, and generations of the language. One purpose of this thesis is to do away with at least part of this fuzziness: everything is described in a common notation and from a single starting point. The other purpose is simply to give a complete overview of the language: we will systematically explain the core features of Generic Haskell, and several extensions, all with motivating examples and details on how the features can be implemented.

Before we start our exploration, though, Section 1.1 will explain the idea and motivation behind generic programming, which, at the same time, is the motivation for the design of Generic Haskell. After that, Section 1.2 will give an overview of the history of Generic Haskell. In Section 1.3 we discuss other important approaches to generic programming. In the last section of this chapter, Section 1.4, we give an overview of all the chapters of this thesis, their contents, the papers they are based on, and how they are interconnected.

1.1 From static types to generic programming

Static types are used in many programming languages to facilitate the creation of error-free software.
While static types cannot guarantee the correctness of all programs written in a language, a sound static type system is capable of eliminating a certain class of runtime errors, which result in particularly nasty program crashes, because operating systems usually do not allow us to catch such errors. These errors result from the program inadvertently accessing a memory position that does not belong to the program. With static types, a program is checked at compile time to prevent this kind of behaviour. Ideally, the successful type checking process, together with the translation semantics of the language, makes up a proof that the programs cannot “go wrong” (Milner 1978).

In a less perfect world, these proofs often do not fully exist, because the translation that the compilers perform is too complex and involves too many external factors. Nevertheless, statically checked types in the user’s programs result in less error-prone programs.

The amount of information about the program that a type checker can verify is not fixed. Some static type systems can type more programs than others; some can catch more errors than others – other kinds of errors, which are less nasty, but still inconvenient enough, such as divisions by zero or array indices that are out of bounds. Nevertheless, there is a constant struggle: if too much cleverness is incorporated into the types, type checking becomes inefficient or even undecidable. If too little information is covered by the type checker, programmers find themselves fighting the type system: programs that would behave correctly are nevertheless rejected as incorrect, because the type system is not capable of finding out that a potentially unsafe construct is only used in safe contexts throughout a particular program.
The Hindley-Milner type system (Hindley 1969; Milner 1978) with the Damas-Milner inference algorithm (Damas and Milner 1982) is a great historical achievement, because it is an expressive type system which allows one to type day-to-day programs without any problems, and not only has an efficient type checking algorithm, but even permits efficient type inference! Thus, the programmer is not forced to annotate a program with types or declare the types of entities used – everything is inferred by the compiler automatically and checked for consistency. One of the highlights of this type system is the possibility for parametrically polymorphic functions. Functions that work in the same way for all datatypes need not be instantiated over and over again, making a new copy for each new datatype, but can be written once and used everywhere.

Haskell (Peyton Jones 2003), along with other languages such as sml (Milner et al. 1997) or Clean (Plasmeijer and van Eekelen 2001), is based on the Hindley-Milner type checking rules. Haskell in particular has also proved to be a testbed for various type system extensions. Even in its standard form, Haskell has several extensions over the classic Hindley-Milner algorithm. It allows explicit type signatures, if desired, to restrict a function’s type. It provides type classes, which allow one to write overloaded functions, with different functionality for different types. There is a kind system, which mirrors the type system yet again on the level of datatypes, and thus allows the systematic treatment of type constructors such as lists or tuples, which are parametrized over other types – and there is more . . .

However, Haskell – just as the other languages mentioned – suffers from a problem: there is a conflict between the use of static types to prevent type errors and the goal of code reuse.
Static types, in particular nominal types (types distinguished by name rather than by structure), make distinctions where, for the computer, there would normally be none. For instance, a date is nothing more than a triple of integers, but in a static type system, a date will be treated as a separate type, and only special functions, tailored for dates, can be used on it. Often, this is exactly what is desired, but sometimes, it is a hindrance as well.

Parametrically polymorphic functions make the situation bearable. Declaring new types is cheap, because a lot of functions, especially functions that process the structure of the data, work on any type. Everything can be put into a list, or a tree, selected, rearranged, and so forth. Parametrically polymorphic functions, however, are only good for doing things that are completely independent of the actual values. The values are put into a box and cannot be touched from within a polymorphic function.

Often, however, we would want to make use of the underlying structure of the type. Suppose we have hidden an identification number, a room number, and a year in three different datatypes. This is usually a good thing, because most likely it should be prevented that a year is suddenly used as a room number. Still, all three are in principle integers, and integers admit some operations that are not applicable to all datatypes. They can be added, compared, tested for equality, incremented, and much more. Such functionality can never be captured in a parametrically polymorphic function, because it is only applicable to integers and, maybe, other numeric types, but not to all types, and definitely not to all types in the same way!

Type classes can help here. A function can be overloaded to work on different datatypes in a different fashion. For instance, equality is already overloaded in Haskell, and so is addition or comparison.
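The tension described above can be made concrete in a small Haskell sketch. The wrapper types and the Increment class below are hypothetical, invented purely for illustration:

```haskell
-- Hypothetical wrapper types, distinguished by name, not by structure.
newtype Year   = Year   Int deriving (Eq, Ord, Show)
newtype RoomNo = RoomNo Int deriving (Eq, Ord, Show)

-- A parametrically polymorphic function may rearrange its arguments,
-- but it can never add, compare, or otherwise inspect them.
swap :: (a, b) -> (b, a)
swap (x, y) = (y, x)

-- Overloading recovers integer-like behaviour, but every new
-- wrapper type requires its own instance declaration.
class Increment a where
  increment :: a -> a

instance Increment Year   where increment (Year n)   = Year   (n + 1)
instance Increment RoomNo where increment (RoomNo n) = RoomNo (n + 1)
```

The derived Eq and Ord instances show the limited built-in support Haskell offers; the hand-written Increment instances show the boilerplate that accrues for every new wrapper type.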
We can make identification numbers, room numbers, and years instances of the appropriate classes, and then use the functions. But, as soon as we define a new datatype that is just an integer, we have to write the instance declarations all over again. Similarly, when we add a new function that works perfectly on integers, we have to overload it and write instance declarations for all the datatypes again. What we really need is a controlled way to forget the distinctions that the type system makes for a time and define functions that mainly care about the structure of a type. The possibility to define functions by analysis of the structure of datatypes is what we call generic programming in the context of this thesis. Haskell in particular offers a limited mechanism to achieve some of the desired functionality: the deriving construct allows automatic generation of type class instances for a fixed set of Haskell type classes – type classes that provide methods such as the equality test, comparison, lower and upper bounds of values, generating a canonical representation of values as a string, and reading such a representation back in. Thus, Haskell has some built-in generic programs, but does not allow you to write your own generic programs. Other generic functions, or variations of the functions that can be derived, have to be defined for each datatype by hand. 1.2 History of Generic Haskell and contributions of this thesis In this section, we present a brief summary of how Generic Haskell came to life, and which language(s) it refers to. In this context, it will also become clear on whose work this thesis is based and what the contributions of this thesis are. The first generic programming language extension that has been designed for Haskell is PolyP (Jansson and Jeuring 1997; Jansson 2000). In PolyP, generic functions are called polytypic.
The language introduces a special construct in which such polytypic functions can be defined via structural induction over the structure of the pattern functor of a regular datatype. Regular datatypes in PolyP are a subset of Haskell datatypes. A regular datatype t must be of kind ∗ → ∗, and if a is the formal type argument in the definition, then all recursive calls to t must have the form t a. These restrictions rule out higher kinded datatypes as well as nested datatypes, where the recursive calls are of a different form. In the lecture notes for a Summer School (Backhouse et al. 1999), theoretical background on generic programming is combined with an introduction to PolyP, thereby establishing generic programming as a synonym for polytypic programming in the context of Haskell. Ralf Hinze reused the term generic programming for the ability to define type-indexed functions during his own presentation of a programming language extension for Haskell (Hinze 1999a, 2000b). As in PolyP, Haskell is extended with a construct to define type-indexed functions. Type indices can be of kind ∗ (for generic equality, or showing values) or ∗ → ∗ (for mapping functions or reductions), and in principle, for all kinds of the form ∗ → · · · → ∗. The approach has two essential advantages over PolyP: first, generic functions can be defined over the structure of datatypes themselves, without falling back to pattern functors. Second, nested types do not pose a problem for Hinze’s theory. While most generic functions are easier to define over the structure of a datatype directly than via the pattern functor, some functions that make explicit use of the points of recursion, such as generic cata- and anamorphisms, become harder to define.
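The regularity restriction can be illustrated with two Haskell datatypes (our own examples): a regular datatype, where every recursive call repeats the formal type argument unchanged, and a nested datatype, where it does not.

```haskell
-- A regular datatype of kind * -> *: the recursive call (List a)
-- uses the formal type argument a unchanged, so PolyP accepts it.
data List a = Nil | Cons a (List a)

-- A nested datatype: the recursive call (Perfect (a, a)) changes
-- the type argument, so it falls outside PolyP's regular datatypes.
data Perfect a = Leaf a | Node (Perfect (a, a))

-- Functions over nested datatypes need polymorphic recursion, so
-- the type signature is mandatory here.
depth :: Perfect a -> Int
depth (Leaf _) = 0
depth (Node p) = 1 + depth p
```

At each Node, the element type doubles: a value of Perfect Int with depth 2 stores its elements at type ((Int, Int), (Int, Int)).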
Hinze’s approach still suffers from some limitations: for each kind, the type language over which generic functions are defined is a different one, thus making the extension difficult to implement. Furthermore, types of kinds that are not in the above-mentioned form are not allowed as type indices of generic functions. And even though the set of types over which a generic function is explicitly defined is variable, no types of complex kinds are allowed in that set. These limitations are overcome in Hinze’s later work (Hinze 2000c), where generic functions can be defined for datatypes of all kinds, using a single function definition. In other words, one generic function can not only be instantiated to type arguments of a fixed kind, but to type arguments of all kinds. The essence of the idea is captured in the paper’s title: “Polytypic functions possess polykinded types”. The type of the generic function (to be precise, the number of function arguments it takes) is determined by the kind of the datatype it is instantiated to. In his “Habilitationsschrift” (Hinze 2000a), Hinze presents both his approaches in parallel, comparing them and favouring the second for an implementation which he calls “Generic Haskell”. His thesis also contains some hints about how to implement an extension for Haskell, taking the peculiarities of the Haskell language into account. The translation of generic functions proceeds by specialization: specific instances of a generic function are generated for the types at which the function is used in the program. This has the advantage that type information can still be completely eliminated in the translation, allowing for an efficient translation. On the other hand, generic functions remain special constructs in the language which are not first-class: they cannot be passed as arguments to other functions.
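What translation by specialization produces can be sketched in plain Haskell. For a generic equality used at the type List Int, the compiler would emit type-specific instances roughly like the following (a simplified rendering of ours, not the Generic Haskell compiler’s actual output):

```haskell
data List a = Nil | Cons a (List a)

-- Specialized instance of a generic equality at the type Int ...
eqInt :: Int -> Int -> Bool
eqInt = (==)

-- ... and at List: the instance for a parametrized type takes the
-- instance for its element type as an extra function argument.
eqList :: (a -> a -> Bool) -> List a -> List a -> Bool
eqList _   Nil         Nil         = True
eqList eqA (Cons x xs) (Cons y ys) = eqA x y && eqList eqA xs ys
eqList _   _           _           = False
```

A call of the generic equality at List Int becomes the ordinary expression eqList eqInt, so no type information survives into the translated program; the price, as noted above, is that the generic function itself is not a first-class value.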
In another paper (Hinze and Peyton Jones 2001), a possible extension for the Glasgow Haskell Compiler ghc (ghc Team) is proposed (which has also been implemented), that is integrated into the type class system of Haskell: generic functions are not available as separate declarations, but only as class methods. Haskell allows default definitions for class methods to be given, such that if an instance is defined for a class without a new definition for a certain method, then the default definition is used. Using “derivable type classes”, generic default definitions are allowed, consisting of several cases for some basic types, thus allowing generic behaviour to be derived when a new instance is requested. In Haskell, type classes are parametrized by type arguments of fixed kinds. For this reason, the derivable type classes are more related to Hinze’s first approach, and do not have kind-indexed types. Moreover, they only work for type indices of kind ∗. Several of the classes for which Haskell provides the deriving construct can now be defined generically. Other desirable classes, such as the Functor class which provides a generic mapping function, are still out of reach. The first release of the Generic Haskell compiler (Clarke et al. 2001) – in the context of the “Generic Haskell” project at Utrecht University – was therefore separate from the ghc extension for derivable type classes, and supported type-indexed functions with kind-indexed types following Hinze’s earlier suggestions (Hinze 2000a), without integrating genericity with the type class system. The existence of the compiler made practical programming experiments possible, and these uncovered some weaknesses in expressivity which led to the extensions described in the paper “Generic Haskell, specifically” (Clarke and Löh 2003): default cases, constructor cases, and generic abstraction. These extensions were incorporated into a second release (Clarke et al.
2002) of the Generic Haskell compiler, together with support for type-indexed datatypes (Hinze et al. 2002). Such generic datatypes mirror the principle of structural induction over the language of datatypes on the type level. In other words, datatypes can be defined which have an implementation that depends on the structure of a type argument. While Hinze’s framework provided a strong and solid foundation, it turned out to have some inherent weaknesses as well: the concept of kind-indexed types, which implies that different cases of a generic definition take a different number of arguments, proved to be difficult to explain in teaching, and unwieldy in presentations. The theory forces all generic functions into the shape of catamorphisms, where recursion is not explicit; instead, the recursive calls are passed as arguments to the function. Transferred to ordinary Haskell functions, this means that we would need to write the factorial function as

fac 0           = 1
fac rec (n + 1) = (n + 1) · rec .

Note that this is an example by analogy: the factorial function does not have to be written this way in Generic Haskell; only recursion on generic functions followed this principle. An additional argument rec, which is equivalent to the recursive call fac n, is passed to the second case. Such a definition requires a clear understanding of how the mechanism works; it is not immediately obvious what is going on. It is much easier to make the recursion explicit:

fac 0       = 1
fac (n + 1) = (n + 1) · fac n .

Furthermore, if recursion is only possible via explicitly passed arguments, it is also difficult to break free of the catamorphic scheme: what if we do not want to recurse on the immediate predecessor? Maybe we would rather call fac (n − 1). Or what if we have two functions which are mutually recursive? Dependency-style Generic Haskell (Löh et al.
2003) is an attempt to alleviate this problem by providing a clearer, more flexible syntax without losing any of the generality and the features that were previously available. Dependencies also form the core of the presentation in this thesis. While the paper introduces dependencies as a modification of previous work on Generic Haskell, we present Generic Haskell from scratch, using dependencies from the very beginning. The main chapters dealing with dependencies are 5, 6, and 9. Because the introduction of dependencies addresses the foundations of Generic Haskell, it has implications on almost every aspect of it. Therefore, prior results have to be reevaluated in the new setting. We repeat fundamental aspects of Hinze’s theory in Chapters 7, 10, and 11. In later chapters, we concentrate on extensions to Generic Haskell that make it more expressive and easier to use. These results – as far as they have been published – are taken from the papers “Generic Haskell, specifically” (Clarke and Löh 2003) and “Type-indexed data types” (Hinze et al. 2002). This material is also adapted and revised to fit into the framework of Dependency-style Generic Haskell. This thesis can be seen as a description of Generic Haskell in a consistent state after four years of research as well as an explanation of how to write a compiler for a language like Generic Haskell. In several places, we point out design decisions taken and sketch other possibilities to solve problems. The current state of the Generic Haskell implementation is described in Section 2.2. A more detailed overview of the contents of this thesis is given in Section 1.4, after we have surveyed related work. 1.3 Related work on generic programming First of all, it should be mentioned that generic programming is not an ideal term for the structural polymorphism that we talk about in this thesis, because the term is used by different communities with different meanings.
Most notably, the object-oriented programming community, when talking about generic programming, means about the same as is captured by parametric polymorphism in Hindley-Milner based type systems. Nevertheless, generic programming is the term that has been used for a while now (Bird et al. 1996) in the functional programming community to refer to structural polymorphism, i.e., functions defined over the structure of datatypes, and we will continue the habit. Generic programming in functional languages has grown into a rich and diverse field, and it is hard to do justice to all the excellent work that has been done during the last years. We will pick a few examples which we think are especially related to the work presented in this thesis, knowing that we omit several noteworthy others. 1.3.1 Intensional type analysis Using intensional type analysis (Harper and Morrisett 1995), it is possible to analyze types at runtime, for instance to select a more efficient implementation of a function. The idea is very similar to the type-indexed functions that we discuss in Chapter 4. Stephanie Weirich (2002) has extended intensional type analysis to higher-kinded type arguments, thereby following some of the ideas of Hinze and making the approach usable for generic programming as well, especially in a purely structural type system. More recent work (Vytiniotis et al. 2004) throws some additional light on the relation between structural and nominal types. An intermediate language with support for both structural and nominal types and type-indexed functions which can be open (i.e., extensible with new cases) or closed is presented. The language does not provide an automatic transformation of datatypes into their underlying structure (cf. Chapters 10 and 17), but it could be used as a target language for Generic Haskell. 1.3.2 Scrap your boilerplate!
Recently, Ralf Lämmel and Simon Peyton Jones have joined forces to provide a workable generic programming extension directly in ghc. Two papers (Lämmel and Peyton Jones 2003, 2004) describe the extension. The fundamental idea is a different one than for Generic Haskell: genericity is created by extending a polymorphic, uniform traversal function with type-specific behaviour. For instance, the identity traversal can be extended with the function that increases all integers by 1, resulting in a function that still works for all datatypes, but is no longer parametrically polymorphic. Internally, a type representation is passed for such functions, and a type-safe cast is used to check if special behaviour has been specified for a datatype. The strength of the approach lies in the manipulation of large data structures, which is related to what Generic Haskell can achieve by means of default cases – the relation is described further in Chapter 14. Furthermore, generic functions defined in the “boilerplate” style are first class and require no special treatment in the Haskell type system. Even though the perspective of the approach is a different one, it can also be used to write many of the generic operations that work by structural induction over the language of types – such as equality, comparison, parsing and unparsing – which is the original motivation behind Generic Haskell. However, the “boilerplate” approach does not support type-indexed types. 1.3.3 Template Haskell Template Haskell (Sheard and Peyton Jones 2002) is a language extension of Haskell, allowing the programmer to write meta-programs that are executed at compile time. Meta-programs have access to the abstract syntax tree of the program, and can use the tree to perform reflection on already existing code and to produce new code. Using Template Haskell, it is possible to write generic programs as meta-programs.
At the call site of a generic function, a specialized version of the generic program that works for one specific type can be spliced into the program code. Template Haskell aims at being far more than just a generic programming extension, but in the area of generic programming, it suffers from a couple of disadvantages. Template Haskell does not yet have a type system, although there is ongoing work to resolve this problem (Lynagh 2004). While the code resulting from meta-programs is type checked normally by the Haskell compiler, there is no check that the template program can only produce correct code. Furthermore, the syntax trees that Template Haskell manipulates contain less information than one would need to write good generic programs. For instance, it is hard to detect recursion (for instance, between datatypes), and the syntax trees are not annotated with type information. Finally, Template Haskell does not support syntactic sugar for generic functions. 1.3.4 Generic Clean The programming language Clean (Plasmeijer and van Eekelen 2001) provides a fully integrated generic programming facility (Alimarine and Plasmeijer 2001) that is based on the same ideas (Hinze 2000c) as Generic Haskell. The Clean extension is integrated into the type class system, which is very similar to Haskell’s system of type classes. Generic functions are defined as special, kind-indexed classes: a few instances have to be defined, and others can then be derived automatically in a generic way. Generic Clean does not allow for dependencies between type-indexed functions, which makes it difficult to write generic functions that use other generic functions on variable types. Generic programming in Clean has been used for several applications, such as the generation of automatic tests (Koopman et al. 2003), in the context of dynamic types (Achten and Hinze 2002; Achten et al. 2003), and to generate components of graphical user interfaces (Achten et al. 2004).
There has also been work on optimization of generic functions (Alimarine and Smetsers 2004). 1.3.5 Pattern calculus The pattern calculus (Jay 2003), based on the constructor calculus (Jay 2001), provides a very flexible form of pattern matching that permits patterns of multiple types to occur in a single case construct. Furthermore, special patterns such as application patterns can be used to match against any constructor. Generic functions can be implemented by analyzing a value, using sufficiently general patterns. Jay’s calculus allows type inference and has been implemented in the programming language FISh2. Generic functions written in this system are first class. Still, the implementation does not rely on type information to drive the evaluation. Functions written in the pattern calculus are, however, more difficult to write: the set of patterns for which a function must be defined in order to behave generically is relatively large, and some of the patterns embody complicated concepts. Furthermore, because the pattern match works on a concrete value, not on a type, functions that produce values generically, such as parsers, are difficult to write. 1.3.6 Dependent types Lennart Augustsson (1999) was the first to suggest the use of dependent types in the context of Haskell, and proposed a language called Cayenne. While type-indexed functions, such as provided by Generic Haskell, are functions (or values) that depend on a type argument, dependent types are types that depend on a value argument. However, by using dependent types one can simulate generic functions. In addition, dependent types allow several applications beyond type-indexed functions, but at the price of significantly complicating the type system. Using some features not provided by Cayenne, a number of complex encodings of generic functions within a dependently typed language have been presented by Altenkirch and McBride (2003).
The style of dependent programming used in that paper is further developed in another article (McBride and McKinna 2004), and has led to the development of Epigram, another programming language that supports dependent types. Once the development has reached a stable point, it will be interesting to find out if generic programming can be provided in the form of a simple library in such a language, or if syntactic additions are still desirable to make generic programming practicable. 1.4 Selecting a route In Chapter 2, we make a remark about previous knowledge that we assume from readers, and discuss notational conventions. We start our tour of Generic Haskell slowly, first introducing a core language in Chapter 3. This language consists of a subset of Haskell, and is held in a syntactic style that is mostly compatible with Haskell. Haskell can be relatively easily desugared to the core language. In this chapter, we also give type checking rules for the language and present a small-step operational semantics. In most of the other chapters, we will – step by step – introduce the features that make up Generic Haskell. We usually present some examples, both to motivate the need for the features that are presented, and to show how they can be used. For the examples, we usually use full Haskell syntax, to give an impression of what actual programs look like. After this leisurely introduction of a new feature, we generally discuss its implementation. We extend the core language with new constructs as necessary, and discuss the semantics of the new constructs, usually by presenting a translation back into the original core language. In these theoretical parts, we work exclusively on the core language and its extensions. Often, such theoretical parts are marked by a “Lambda”. The “Lambdas” are friendly fellows and experts on both functional and generic programming. They accompany the reader during his or her explorations.
In this case, the “Lambda” is advising the reader who is interested more in practical usage of the language than in the gritty details that the “Lambda”-marked section could be skipped without danger (at least on a first reading). Similarly, on rare occasions, the “Lambda” warns that a certain area has not yet been explored in full detail, and that the following text describes future or ongoing work. The chapters on additions to the core language are grouped into two parts. Chapters 4 to 11 introduce a basic language for generic programming, which is already quite powerful, but in many places not as convenient as would be desirable. Therefore, the remaining chapters focus on several extensions that allow writing generic programs more easily, but also aim at enhancing the expressiveness of the language even further. Chapter 4 covers the first extension of the core language, the possibility to define type-indexed functions. Generic functions are type-indexed functions that fulfill specific conditions. It thus makes sense to introduce type-indexed functions first, in this chapter, and to discuss genericity later. The idea of dependencies between type-indexed functions, as introduced in the paper “Dependency-style Generic Haskell” (Löh et al. 2003), forms the core of this thesis. Dependencies are first discussed in a limited setting in Chapter 5, which also slightly generalizes type-indexed functions by allowing more flexible type patterns. Chapter 6 gives a theoretical account of the extensions explained in Chapter 5. Only later, in Chapter 9, will we introduce dependencies in full generality. In between, we sketch how generic functions work, and give the first few example generic functions, in Chapter 7. This is not new material, but covered in several of Hinze’s papers on generic programming in Haskell.
In Chapter 8, we discuss the principle of local redefinition, which offers the possibility to locally modify the behaviour of a type-indexed function. Local redefinition forms an essential part of Dependency-style Generic Haskell. After having presented the full story about dependencies in Chapter 9, we complete our account of generic functions in Chapters 10, which focuses on how to map datatypes to a common structural representation, and 11, which explains how to translate generic functions, in particular if they are called on datatypes for which the behaviour has to be derived in a generic way. Both chapters are based on earlier work on Generic Haskell and adapted here such that they fit into our framework. Chapter 11 at the same time marks the end of the central features of Generic Haskell. With the features discussed up to this point, one has a fully operational language at hand. Nevertheless, several extensions are extremely helpful, because they make generic programming both easier and more flexible. These additional features are introduced in the remainder of the thesis. Generic abstraction, covered by Chapter 12, is another way of defining type-indexed functions. Functions defined by generic abstraction do not perform case analysis on a type argument directly, but use other type-indexed functions and inherit their type-indexed-ness from those. Generic abstraction is one of the extensions introduced in the “Generic Haskell, specifically” (Clarke and Löh 2003) paper. It is shown here from the perspective of Dependency-style, and benefits significantly from the changed setting, allowing for a far cleaner treatment than in the paper. In Chapter 13, we discuss type inference, specifically for type-indexed functions, which can help to further reduce the burden on the programmer while writing generic functions.
Several different problems are discussed, such as inference of the type arguments in calls to generic functions or the inference of type signatures of generic functions. While the answers to the questions about type inference for generic functions are not all positive, it is possible to infer a reasonable amount of information that allows comfortable programming. This chapter is for the most part ongoing work and based on unpublished material. Chapter 14 on default cases presents a way to reuse cases from existing type-indexed functions while defining new generic functions. Frequently occurring traversal patterns can thus be captured in basic type-indexed functions, and subsequently extended to define several variations of that pattern. Default cases are covered in “Generic Haskell, specifically”, but adapted for Dependency-style here. In Chapter 15, we extend our core language to allow all of Haskell’s type declaration constructs: type and newtype as well as data. This becomes relevant in the following Chapter 16, on type-indexed datatypes, which is based on the paper of the same name (Hinze et al. 2002). Type-indexed datatypes are like type-indexed functions, but on the type level: they are datatypes that have a different implementation depending on the structure of a type argument. Again, the presentation of type-indexed datatypes is significantly different from the paper, because we extend the dependency type system to the type level. In Chapter 17, we describe a number of different encodings of datatypes into the Haskell type language, which form alternatives to the standard encoding that is discussed in Chapter 10. By changing the view on datatypes, some generic functions become easier to define, others become more difficult or even impossible to write. This chapter describes ongoing work.
Chapter 18 on modules shows how type-indexed – and in particular generic – functions and datatypes can live in programs consisting of multiple modules, and to what extent separate compilation can be achieved. This chapter draws upon unpublished knowledge gained from the implementation of Generic Haskell (Clarke et al. 2002). Modules are also the last step in making Generic Haskell a complete language on top of Haskell; therefore this chapter concludes the thesis. All in all, we present a complete and rich language for generic programming, which can, has been, and hopefully will be used for several interesting applications. I wish you a joyful exploration! 2 Choosing the Equipment In this chapter, we will prepare for our exploration. Section 2.1 briefly discusses prerequisites for the material presented in this thesis and pointers to introductory material to gain the assumed knowledge. In Section 2.2, we discuss the status of the Generic Haskell compiler. In Section 2.3, we explain several notational conventions that are used in the remainder of this thesis. 2.1 Prerequisites I have tried to write this thesis in such a way that it is understandable without a heavy background in generic programming in the context of functional languages. Because the thesis describes a language extension of Haskell, a familiarity with Haskell is very advisable. I recommend Bird’s excellent textbook (Bird 1998), but the very readable language report (Peyton Jones 2003) might do as well, especially if another statically typed functional language, such as Clean or an ml variant, is already known. It is extremely helpful if one is comfortable with the data construct in Haskell to define new datatypes, and with the kind system that classifies types in the same way as types classify expressions. In the formal discussion of the language, we use many deduction rules to define type checking algorithms and translations.
It can therefore do no harm if one has seen such concepts before. I heartily recommend Pierce’s book on “Types and programming languages” (Pierce 2002), which serves as a well-motivated general introduction into the theory of statically typed languages. If one is interested in the background of generic programming beyond Generic Haskell, I recommend the Summer School article of Backhouse et al. (1999) – which contains an introduction to the field from a theoretical perspective, without focus on a specific programming language – or some of the materials mentioned in Section 1.3 on related work. If more examples or even exercises are desired, then the later Summer School articles of Hinze and Jeuring (2003a,b) provide material. 2.2 The Generic Haskell compiler There exists a Generic Haskell compiler. There have been two formal releases, Amber (Clarke et al. 2001) and Beryl (Clarke et al. 2002), and the current development version supports a couple of features that the two released versions do not. Nevertheless, the compiler lags considerably behind the development of the theory. The compiler translates into Haskell, and leaves all type checking to the Haskell compiler. Furthermore, the support for Dependency-style Generic Haskell, as it is used in this thesis, is rudimentary at best, and type signatures have to be written using kind-indexed types (cf. Section 6.6). The syntax used in the compiler is discussed in the User’s Guides for the two releases, and the Summer School material (Hinze and Jeuring 2003a,b) contains examples and exercises that are specifically targeted at the language that is implemented by the compiler. Even with its limitations, the compiler can be used to implement a large number of the examples in this thesis and provide valuable insight into the practice of generic programming. The current version is available from http://www.generic-haskell.org/.
It is my hope that in the future this site will have a version with full support for the features discussed in this thesis. 2.3 A note on notation This section lists several notational conventions that we adhere to in this thesis. 2.3.1 Natural numbers and sequences Natural numbers start with 0. Whenever the domain of a numerical value is not explicitly given, it is a natural number. We use m . . n to denote the set of natural numbers between m and n, including both m and n. We use m . . to denote the set of natural numbers greater than or equal to m, and . . n to denote the set of natural numbers smaller than or equal to n. 2.3.2 Syntactic equality We use the symbol ≡ to denote syntactic equality or meta-equality. The symbol = is used in Haskell and our languages for declarations, and == is used as a name for the equality function in Haskell and our languages. 2.3.3 Repetition Whenever possible, the ellipsis . . . is not used in this thesis. Instead, we use repetition constructs that are defined as follows:

{X}i∈m..n      | m > n      ≡ ε
               | otherwise  ≡ X[i / m] {X}i∈m+1..n

{X}i∈m..n s    | m > n      ≡ ε
               | otherwise  ≡ X[i / m] {s X}i∈m+1..n s

Read these definitions as syntactical macros where X and s represent arbitrary syntactical content, i is a variable, and m and n are natural numbers. The substitution is syntactical: where the bound variable i literally occurs in X, the natural number m is substituted. Note that if X or s contain infix operators, the intention is that the associativity of the infix operator is interpreted after the expansion. For example, the list 1 : 2 : 3 : 4 : 5 : [ ], to be read as 1 : (2 : (3 : (4 : (5 : [ ])))), could be written as {i :}i∈1..5 [ ]. Using Haskell’s syntactic sugar for lists, we could write [ {i}i∈1..5 , ] for the same list, namely [1, 2, 3, 4, 5]. 2.3.4 Environments Environments are finite maps, associating keys with values.
For the entries we use different syntax depending on what an entry is meant to express. For demonstration, we will use entries of the form x ↦ v, mapping a key x to a value v. Most often, we treat environments as lists that extend to the right:

Environments
E ::= ε           empty environment
    | E, x ↦ v    non-empty environment .

If not otherwise stated, an environment can only contain one binding for a certain key. If a second binding is added to the right, then it overwrites the old. Environments can be reordered. Consequently, we use the notation x ↦ v ∈ E, but also E ≡ E′, x ↦ v, to express the fact that the binding x ↦ v is contained in the environment E. We use the comma (,) also as union operator for two environments, again with the rightmost binding for a key overwriting all earlier bindings for the same key. We use the same notation for sets as for environments. Sets are environments with only keys, i.e., empty values.

2.3.5 Syntax descriptions, metavariables and indices

We introduce a multitude of languages in this thesis, and an important part of languages is syntax. The syntax of environments above is an example of how we present syntax descriptions: each syntactic category is named before it is listed, and each alternative is followed by a description to the right. The symbols on the right-hand side of productions are either terminals that are part of the syntax (such as ε above), or metavariables that refer to nonterminals of different categories. The left-hand side of a production introduces the symbol that we use as metavariable for the category that is just defined, in this case E for environments. In most cases, we use only one symbol to denote all occurrences of this syntactic category, and distinguish different entities by indexing. Thus, if E is an environment, then so are E′, E2, Ei, or E′j. A list of all metavariables that we use and their meaning is shown on page 307. Often, we use variable repetition constructs in syntax descriptions.
For example, we could also have described the syntax of environments non-inductively using the following production:

Environments, alternatively
E ::= {xi ↦ vi}i∈1..n ,  (n ∈ 0 . .)    environment .

Here, we state that an environment may be any comma-separated sequence of key-value pairs of the form xi ↦ vi, where that sequence may be of any length n in the given range. In the spirit of Section 2.3.1, we drop the explicit range specification (n ∈ 0 . .) if the range is 0 . ., i.e., unrestricted.

2.3.6 Deduction rules

Most properties and algorithms in this thesis are specified by means of deduction rules. A sample deduction rule is shown in Figure 2.1.

K; Γ ⊢ e :: t

Γ ⊢ e1 :: t1 → t2    Γ ⊢ e2 :: t1
----------------------------------  (e-app-sample)
Γ ⊢ (e1 e2) :: t2

Figure 2.1: Sample deduction rule

We give the form of the judgment before the rules, between horizontal lines. In this case, the judgment is of the form K; Γ ⊢ e :: t. We use the same metavariables as in the syntactic descriptions, and distinguish multiple occurrences of objects of the same category by use of indices. Each rule has a name, such as (e-app-sample) in the example. If an environment such as K in this case is passed on unchanged, we sometimes drop it from the rule.

2.3.7 Free variables and substitutions

We define which variables are free when we describe a particular language. Throughout this thesis, we use fev(e) to refer to free variables of an expression e, whereas dv(t) refers to free type variables of a type t, and fdv(t) refers to free dependency variables (see Chapter 6) of a type t. We write e1[e2 / x] to denote the application of the substitution that replaces every free occurrence of variable x in expression e1 by expression e2. We use substitutions also on other syntactic categories, for instance types. We use the metavariables ϕ and ψ to stand for substitutions, and then write ϕ e to denote the application of a substitution ϕ to an expression e.
We usually assume that the application of substitutions does not lead to name capture, i.e., that alpha-conversion is performed in such a way that no name capture occurs.

2.3.8 Font conventions

We use bold face in the text to denote language keywords and for definitions. In the index, pages containing the definition of a certain notion are also in bold face. We emphasize important notions that are not defined elsewhere or not defined in this thesis. Again, we point to pages containing such notions using an emphasized page number in the index. We use capital Greek letters to denote environments. Note that capital Greek letters are always written upright, so E denotes a capital "epsilon", whereas E is a capital "e" – we try to avoid such clashes as much as possible, though, to avoid confusion. In code examples, next to the bold keywords, identifiers are written in italics, Type names are capitalized, Constructors are capitalized and in italics, and type Classes are capitalized and set in a sans-serif font. We also use a sans-serif font for internal operations, i.e., functions on the meta-language level, such as fev(e) to determine the free variables of an expression. The arguments of internal operations are always enclosed in parentheses.

3 A Functional Core

Before we start on our tour into the field of generic functions, we will stick to the known for a while. While we will describe generic programming in the context of Generic Haskell, an extension to the Haskell language, in an informal way, we also need a vehicle for formal analysis of the generic features, a solid core language to base our extensions on. To this end, we will introduce a basic functional core language, named fc, in this chapter, much like what the Haskell Language Report (Peyton Jones 2003) uses as the target of translation for all of the advanced constructs defined in the Haskell language.
The language fc is designed to be sufficiently rich that, in conjunction with the gradual extensions following in future chapters, it can be used to demonstrate the full power of generic programming in a formal setting. Although there are no language constructs specific to generic functions in this chapter and the language may look very familiar, this chapter also introduces some notations and mechanisms that will be used extensively throughout the rest of the thesis. This chapter is probably not necessary to understand the rest of the thesis, but is a good reference to look up unfamiliar notation or terminology. Type systems and their properties for very similar languages are given in Pierce's book (2002).

3.1 Syntax of the core language fc

The syntax of the core language is shown in Figure 3.1. A program is a list of datatype declarations followed by a special expression called main. Datatype declarations are modelled after Haskell's data construct. A datatype may be parametrized (we write the type arguments using a type-level Λ, which may only occur in a datatype definition). A datatype has zero or more constructors, each of which has a number of arguments (or fields). Kinds are the types of types. Kind ∗ is reserved for all types that can be assigned to expressions, whereas parametrized datatypes (also called type constructors) have functional kinds. The type language has variables and named types (types that have been defined as datatypes). Two types can be applied to each other. We can universally quantify over a type variable. Function types are not explicitly included in the syntax, but we assume that the function type constructor (→) is part of the named types T, and available as a built-in type constant. In expressions, one can refer to variables and constructors. Application and lambda abstraction deal with functions. An expression can be analyzed in a case statement. We call this expression the head of that case statement.
A case statement consists of multiple arms, where each arm consists of an expression that is guarded by a pattern. Patterns are a restricted form of expressions, consisting of (fully applied) constructors and variables only. All variables in a pattern must be distinct. A let statement allows us to locally define a new function via a function declaration. The value bound in the let is visible in the expression that constitutes the body of the let. It is, however, not visible in the definition of the local value itself, i.e., this is not a recursive let. The fix statement can be used to introduce recursion. Using fix and assuming that we have n-ary tuples for arbitrary n, we can define a recursive let statement letrec as a derived form. In Section 3.6, we will formally introduce recursive let as a language extension, and subsequently replace let with letrec, as in Haskell. We will often assume that certain datatypes, such as tuples and lists or integers, and some primitive functions on them, are predefined. We will also often use more convenient notation for some operations and constructs, such as the tuple and list notation that is standard in Haskell, grouping of multiple lambda abstractions, and infix operators. In particular (and probably most important), we will write t1 → t2 for the type of functions from t1 to t2.

Programs
P ::= {Di ;}i∈1..n main = e                type declarations plus main expression

Type declarations
D ::= data T = {Λai :: κi.}i∈1..ℓ {Cj {tj,k}k∈1..nj}j∈1..m |    datatype declaration

Value declarations
d ::= x = e                                function declaration

Kinds
κ ::= ∗                                    kind of manifest types
    | κ1 → κ2                              functional kind

Types
t, u ::= a, b, c, f, . . .                 type variable
       | T                                 named type
       | (t1 t2)                           type application
       | ∀a :: κ.t                         universal quantification

Expressions
e ::= x, y, z, . . .                       variable
    | C                                    constructor
    | (e1 e2)                              application
    | λx → e                               lambda abstraction
    | case e0 of {pi → ei}i∈1..n ;         case
    | let d in e                           let
    | fix e                                fixed point

Patterns
p ::= (C {pi}i∈1..n)                       constructor pattern
    | x, y, z, . . .                       variable pattern

Figure 3.1: Syntax of the core language fc

All these constructs are used to make examples more readable. It is easy to translate them away. In the following, we will define when an fc program is correct and present an operational semantics of the language.

3.2 Scoping and free variables

The language fc has the usual scoping rules. Type variables are bound only by the type-level lambda Λ in datatype declarations, and by universal quantifiers. Abstracted type variables in datatypes are visible in the entire datatype definition; quantified type variables everywhere underneath the quantifier. We use dv(t) to refer to the free type variables of a type t. Constructor and type names scope over the whole program. Value-level variables are introduced by a let statement, by a lambda abstraction, and by the patterns in a case statement. A let binds all variables that occur on the left hand side of its declarations in its body. A lambda abstraction binds its variable in its body, too. A pattern binds all variables that occur in the pattern in the expression that corresponds to the pattern. Patterns are only legal if all variables in one pattern are different. We use fev(e) to refer to the free variables of an expression e. Programs are equivalent under alpha-conversion, i.e., bound variables can be renamed without changing the meaning of the program. Substitution is generally meant to be capture-avoiding: if the substituted expression contains variables that would be captured by a binding construct, it is assumed that alpha-conversion is performed so that no name capture takes place. If name capture is intended, we will explicitly mention that fact.

3.3 Types and kinds

For the largest part of this thesis, we will not consider type inference (an exception is Chapter 13).
Nevertheless, type safety is an essential part of generic programming and thus of Generic Haskell. Therefore we present rules that specify when a program in the core language can be assigned a type. We will not, however, present algorithms to find a suitable type. We annotate all type variables with kinds; therefore finding the kind of a type is straightforward, given the kind rules that follow. On the value level, however, we do not use type annotations explicitly, but assume that they are given as needed, in addition to the program. One way to view the core language is as a variant of the polymorphic lambda calculus Fω (Girard 1972), where all the type applications and type abstractions have been separated from the program, but can be recovered during the type checking phase as needed. Leaving out the type annotations allows us to focus on the part that is central to this thesis – the generic programming – and also to write the examples in a language which is closer to what the programmer will actually write in a real programming language, which will be able to infer types at least for expressions that Haskell can infer types for. Our language is more expressive than Haskell, though. We allow universal quantification to occur everywhere in types, thereby opening the possibility to express types of arbitrary rank, as opposed to Haskell 98, which only admits rank-1 types. Rank-n types do occur frequently in the treatment of generic functions, and it is desirable that a language for generic programming supports them. They are implemented as an extension to Haskell and useful in many other areas (Peyton Jones and Shields 2003). Furthermore, we allow universally quantified types to occur within datatypes, and even datatypes parametrized with polymorphic types, but these features are rarely used and not essential. As datatypes play a central role in Generic Haskell, it is important to be familiar with the concept of kinds.
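Functional kinds can be observed directly in Haskell. The following sketch (using GHC's KindSignatures extension, an assumption beyond Haskell 98; the name IntBox is mine, in the spirit of the IntStruct example below) shows a datatype parametrized over a type constructor of kind ∗ → ∗:

```haskell
{-# LANGUAGE KindSignatures #-}

-- A container of Ints, parametrized over a type constructor f of kind * -> *;
-- IntBox itself then has kind (* -> *) -> *.
data IntBox (f :: * -> *) = IB (f Int)

listBox :: IntBox []        -- instantiate f to the list constructor
listBox = IB [1, 2, 3]

maybeBox :: IntBox Maybe    -- instantiate f to Maybe
maybeBox = IB (Just 42)
```

Applying IntBox to a type of kind ∗, such as Int, is rejected by the kind checker, just as the rules below demand.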
Let us therefore review the Haskell kind system. The kind ∗ is reserved for types that can be assigned to expressions in the language, whereas parametrized types (or type constructors) behave like functions on the type level and are assigned functional kinds. For instance, the Haskell types Int, Char and Bool are all of kind ∗, whereas Maybe, the list constructor [ ], or the input-output monad IO are of kind ∗ → ∗. The pair constructor (,) and Either are both of kind ∗ → ∗ → ∗. We will sometimes abbreviate ∗ → ∗ → ∗ to ∗2, where ∗n = {∗ →}i∈1..n ∗. Kinds need not be of the form ∗n for some natural number n, as types can be parametrized over type constructors. A simple example is

data IntStruct (f :: ∗ → ∗) = I (f Int) .

Here, f is a variable of kind ∗ → ∗ that can be instantiated, for example, to some container type. Hence, IntStruct has kind (∗ → ∗) → ∗. We will encounter more examples of types with complex kinds later. During kind checking, we make use of an environment K that contains bindings of the form a :: κ and T :: κ, associating type variables and named types with kinds. The kind checking rules are given in Figure 3.2 and are of the form

K ⊢ t :: κ ,

a :: κ ∈ K
-----------  (t-var)
K ⊢ a :: κ

T :: κ ∈ K
-----------  (t-named)
K ⊢ T :: κ

K ⊢ t1 :: κ1 → κ2    K ⊢ t2 :: κ1
----------------------------------  (t-app)
K ⊢ (t1 t2) :: κ2

K, a :: κ ⊢ t :: ∗
---------------------  (t-forall)
K ⊢ ∀a :: κ. t :: ∗

Figure 3.2: Kind checking for core language of Figure 3.1

expressing that under environment K, the type t has kind κ. The rules themselves bear no surprises: variables and named types are looked up in the environment, application eliminates functional kinds, and universal quantification works for variables of any kind, but the resulting type is always of kind ∗. The type rules that are displayed in Figure 3.3 are of the form

K; Γ ⊢ e :: t ,

which means that expression e is of type t under environment Γ and kind environment K.
The environment K is of the same form as in the kind checking rules, whereas Γ contains entries of the form x :: t and C :: t, associating a type t with a variable x or a constructor C. Recall our convention from Section 2.3.6 that we may omit environments that are not important to a rule (i.e., passed on unchanged). The kind environment K is rarely needed during type checking. Again, most rules are simple: variables and constructors are looked up in the environment. Application and lambda abstraction are simple. Also, let statements are easy to check as they are non-recursive. The rule for case statements is the most complicated rule: The head of the case, e0, must type check against a type t0. This type must match the type of the patterns. Each pattern can bind a number of variables, which is witnessed by the environments Γi. Each arm ei is now checked with an extended environment where the variables bound by the respective pattern are added. All arms must be of the same type t, which is then also the type of the whole expression. The rules for pattern matching are shown in Figure 3.4 and explained in Section 3.3.1.

K; Γ ⊢ e :: t

x :: t ∈ Γ
-----------  (e-var)
Γ ⊢ x :: t

C :: t ∈ Γ
-----------  (e-con)
Γ ⊢ C :: t

Γ ⊢ e1 :: t1 → t2    Γ ⊢ e2 :: t1
----------------------------------  (e-app)
Γ ⊢ (e1 e2) :: t2

K ⊢ t1 :: ∗    K; Γ, x :: t1 ⊢ e :: t2
---------------------------------------  (e-lam)
K; Γ ⊢ λx → e :: t1 → t2

Γ ⊢ e0 :: t0    {Γ ⊢pat pi :: t0 ⇝ Γi}i∈1..n    K ⊢ t :: ∗    {K; Γ, Γi ⊢ ei :: t}i∈1..n
------------------------------------------------------------------------------------------  (e-case)
K; Γ ⊢ case e0 of {pi → ei}i∈1..n ; :: t

Γ ⊢ e0 :: t0    Γ, x :: t0 ⊢ e :: t
------------------------------------  (e-let-val)
Γ ⊢ let x = e0 in e :: t

Γ ⊢ e :: t → t
----------------  (e-fix)
Γ ⊢ fix e :: t

a ∉ dv(Γ)    K, a :: κ; Γ ⊢ e :: t
-----------------------------------  (e-gen)
K; Γ ⊢ e :: ∀a :: κ. t

Γ ⊢ e :: t1    ⊢ t1 ≤ t2
--------------------------  (e-subs)
Γ ⊢ e :: t2

Figure 3.3: Type checking for core language of Figure 3.1

The fix behaves as a fixpoint construct. Therefore the expression must have the type of a function where domain and codomain are both of type t. The least fixpoint is then of type t.
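Rules such as (e-app) and (e-fix) can be read operationally as clauses of a type checker. The following toy fragment (my sketch, not the thesis's system; the datatype and function names are hypothetical) checks applications and fixpoints for a small monomorphic expression language:

```haskell
-- A toy checker fragment corresponding to rules (e-app) and (e-fix):
-- an application checks when the argument type matches the function's
-- domain; fix e checks when e has a type of the form t -> t.
data Ty   = TInt | TFun Ty Ty deriving (Eq, Show)
data Expr = Lit Int | App Expr Expr | Fix Expr    -- hypothetical mini syntax

check :: Expr -> Maybe Ty
check (Lit _) = Just TInt
check (App e1 e2) = do                -- (e-app): e1 :: t1 -> t2, e2 :: t1
  TFun t1 t2 <- check e1
  t1' <- check e2
  if t1 == t1' then Just t2 else Nothing
check (Fix e) = do                    -- (e-fix): e :: t -> t gives fix e :: t
  TFun t t' <- check e
  if t == t' then Just t else Nothing
```

The environments of the formal rules are omitted here because this fragment has no binders; a full checker would thread Γ through every clause.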
A type can be generalized, that is, universally quantified with respect to a type variable, if it can be type checked without any assumption about the type variable in the type environment Γ. Finally, there is a rule for subsumption. Polymorphic types can be specialized, and monomorphic types generalized under certain circumstances. The subsumption relation is shown in Figure 3.5 and described in Section 3.3.2.

Γ1 ⊢pat p :: t ⇝ Γ2

--------------------------  (p-var)
Γ ⊢pat x :: t ⇝ x :: t

C :: {ti →}i∈1..n t0 ∈ Γ    {Γ ⊢pat pi :: ti ⇝ Γi}i∈1..n
----------------------------------------------------------  (p-con)
Γ ⊢pat C {pi}i∈1..n :: t0 ⇝ {Γi}i∈1..n ,

Figure 3.4: Type checking for patterns, extends Figure 3.3

K ⊢ t1 ≤ t2

-----------  (s-refl)
⊢ t ≤ t

⊢ t3 ≤ t1    ⊢ t2 ≤ t4
---------------------------  (s-fun)
⊢ (t1 → t2) ≤ (t3 → t4)

a ∉ dv(t1)    K, a :: κ ⊢ t1 ≤ t2
-----------------------------------  (s-skol)
K ⊢ t1 ≤ ∀a :: κ. t2

K ⊢ u :: κ    K ⊢ t1[u / a] ≤ t2
----------------------------------  (s-inst)
K ⊢ ∀a :: κ. t1 ≤ t2

Figure 3.5: Subsumption relation on core types, extends Figure 3.3

3.3.1 Patterns

For patterns, we use a variation of the typing judgment that tells us about the variables that are bound by the pattern. The form of the judgment is

Γ1 ⊢pat p :: t ⇝ Γ2 ,

which means that under environment Γ1 the pattern p is of type t, binding the environment Γ2. A variable can have any type, and the variable is bound to that type. Note that pattern variables are not taken from the environment: patterns bind variables rather than refer to them. A constructor pattern is checked as follows: the constructor has to be in the environment. In the pattern, the constructor has to appear fully applied, i.e., if its type is a function with n arguments, then there must be n argument patterns pi of matching type. The pattern binds all variables bound by the arguments. We require all the bound variables to be distinct.

3.3.2 Subsumption

The subsumption judgments are of the form

K ⊢ t1 ≤ t2 ,

which expresses the fact that an expression of type t1 can behave as an expression of type t2 under kind environment K.
The relation is reflexive and furthermore has two rules that deal with adding and removing universal quantifiers. The function type constructor is contravariant in its first argument, which is reflected in the subsumption rule for functions.

3.4 Well-formed programs

Some loose ends remain to be tied: for example, constructors need to be present in the environments, but are nowhere inserted. In fact, constructors are defined by the data declarations. Figure 3.6 shows how: each type that occurs in one of the constructors (i.e., all the tj,k's) must be of kind ∗, where the kind environment is extended with the type variables that constitute the parameters of the datatype. For each constructor, a type is added to the resulting environment. A whole program can now be checked as shown in Figure 3.7: checking proceeds against two environments K0 and Γ0, which may contain external types and functions. We always assume that at least (→) :: ∗ → ∗ → ∗ is in K0, but we often add additional types and functions on them, such as tuples or integers and arithmetic operations. All datatype declarations are checked for well-formedness against an extended environment K, which consists of K0 plus all the datatypes declared in the program with their kinds. This expresses the fact that all datatypes may be mutually recursive. The Γi containing the constructors are added to Γ0 to form Γ, and using this Γ, the main function is type checked. The type of the main function is also the type of the program.

3.5 Operational semantics

We define values, which are intended to be results of programs, in Figure 3.8. Values are a subset of the expression language, built from constructors (constructors are distinguished from functions by the fact that they cannot be reduced) and lambda abstractions. In addition, there is fail, which represents run-time failure.
Failure will be used as the result of a case expression where none of the patterns matches the expression in the head. Heads of case expressions are evaluated to weak head-normal form to reveal the top-level constructor during pattern matching. The syntax for weak head-normal form expressions is like the syntax for values, only that the arguments of the top-level constructor do not have to be evaluated and can be arbitrary expressions. We extend the expression language with failure as well, as failure can occur while reducing expressions. The new type rule for fail is given in Figure 3.9: as fail represents failure, it can have any type. Figure 3.10 presents small-step reduction rules to reduce expressions to values. We write ⊢ e1 ⇝ e2 to denote that e1 reduces to e2 in one reduction step. The reduction of a program is equivalent to the reduction of the main expression. In other words, the datatype declarations are not needed for the reduction of an expression. Constructors are syntactically distinguished, and knowing that a name represents a constructor is enough to formulate the reduction rules. The reduction rules can be extended by additional rules for built-in functions. If a program is type checked under an initial type environment Γ0 containing primitive functions (not constructors), it is necessary to provide reduction rules for these functions as well.
K1 ⊢ D ⇝ K2; Γ

D ≡ data T = {Λai :: κi.}i∈1..ℓ {Cj {tj,k}k∈1..nj}j∈1..m |
{{K {, ai :: κi}i∈1..ℓ ⊢ tj,k :: ∗}k∈1..nj}j∈1..m
Γ ≡ {Cj :: {∀ai :: κi.}i∈1..ℓ {tj,k →}k∈1..nj T {ai}i∈1..ℓ}j∈1..m
------------------------------------------------------------------  (p-data)
K ⊢ D ⇝ T :: {κi →}i∈1..ℓ ∗ ; Γ

Figure 3.6: Well-formed data declarations

K; Γ ⊢ P :: t

P ≡ {Di ;}i∈1..n main = e
{K ⊢ Di ⇝ Ki; Γi}i∈1..n
K ≡ K0, {Ki}i∈1..n ,
Γ ≡ Γ0, {Γi}i∈1..n ,
K; Γ ⊢ e :: t
---------------------------  (p-prog)
K0; Γ0 ⊢ P :: t

Figure 3.7: Well-formed programs

Values
v ::= C {vi}i∈1..n    constructor
    | λx → e          function
    | fail            run-time failure

Weak head-normal form
w ::= C {ei}i∈1..n    constructor
    | λx → e          function
    | fail            run-time failure

Expressions
e ::= . . .           everything from Figure 3.1
    | fail            run-time failure

Figure 3.8: Syntax of values, extends Figure 3.1

K; Γ ⊢ e :: t

K; Γ ⊢ t :: ∗
------------------  (e-fail)
K; Γ ⊢ fail :: t

Figure 3.9: Type rule for run-time failure, extends Figure 3.3

Let us look at the reduction rules in a little bit more detail. The rule (r-con) ensures that, if a constructor is encountered, its arguments are reduced to values one by one. The rule for application implements lazy evaluation and beta reduction: the function is reduced before its argument, and when a lambda abstraction is encountered, the argument is substituted for the variable in the body of the function. Run-time failure is propagated. Most involved is, once more, the treatment of case statements. We try to match the first pattern against the head expression. If the match succeeds, as in (r-case-1), the match results in a substitution ϕ (mapping the variables bound in the pattern to the matching parts of the expression). Note that alpha-conversion may need to be performed on the arm p1 → e1 prior to the derivation of ϕ in order to avoid name capture. The substitution ϕ is then applied to the expression belonging to the first arm. If the match fails, as in (r-case-2), the first arm of
the case is discarded, i.e., matching continues with the next arm. If no arms are available anymore, as in (r-case-3), there is nothing we can do but to admit run-time failure. A let statement is reduced by substituting the expression for the variable in the body. Finally, a fixpoint statement introduces recursion. We have not yet shown how to perform a pattern match. This is detailed in Figure 3.11. These rules have conclusions of the form

⊢match p ← e ⇝ ϕ .

This is to be understood as: pattern p matches value e, yielding a substitution ϕ, mapping variables to expressions. We assume that fail is a special substitution, mapping all variables to the expression fail. First, the expression to be matched is reduced using rule (m-reduce) until it reaches weak head-normal form. We use the internal function whnf(e) to express the syntactic condition that e is in weak head-normal form. The rule (m-con) shows that if both pattern and expression are of the same constructor, the arguments are matched one by one. The resulting substitution is the reverse composition of all substitutions originating from the arguments. (The order of composition is actually not important, because the pattern variables do not occur in the substituting expressions and the domains of the substitutions will be disjoint, as we declare patterns legal only if each variable occurs at most once.) Note that if one of the substitutions is fail, then the resulting substitution is fail, too. In rule (m-var), we see that matching against a variable always succeeds, binding the variable to the expression. The other rules indicate that everything else fails: matching two different constructors against each other, matching a function against a constructor pattern, and matching the fail value against anything.
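The matching rules (m-var), (m-con), and the failure cases can be rendered as a small Haskell function. This is a simplified sketch with names of my choosing: it matches fully evaluated first-order terms, so rule (m-reduce) is omitted, and Nothing plays the role of the fail substitution:

```haskell
-- Toy pattern matcher: a successful match yields a substitution,
-- represented as an association list from variable names to terms.
data Term = Con String [Term] deriving (Eq, Show)
data Pat  = PVar String | PCon String [Pat]

match :: Pat -> Term -> Maybe [(String, Term)]
match (PVar x) t = Just [(x, t)]                 -- (m-var): always succeeds
match (PCon c ps) (Con c' ts)
  | c == c' && length ps == length ts            -- (m-con): same constructor,
  = concat <$> sequence (zipWith match ps ts)    -- match arguments one by one
  | otherwise = Nothing                          -- (m-fail-1): mismatch
```

Because sequence on Maybe propagates Nothing, a failing sub-match makes the whole match fail, mirroring the remark above that a fail substitution among the ϕi yields fail overall.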
There is a slight deviation from Haskell here, which simplifies the rules a little, but is otherwise nonessential: if the evaluation of an expression during a pattern match fails, then only this particular match fails, but the entire case statement may still succeed. We can capture the safety of the reduction relation in the following two theorems. The progress and preservation properties are easy to prove for this language, as it is a core language without any special features. Readers familiar with such proofs may want to skip ahead to the next section.

Theorem 3.1 (Progress). If e is a closed expression (i.e., fev(e) ≡ ε) and K; Γ ⊢ e :: t, then either e is a value or there is an e′ with ⊢ e ⇝ e′.

Note that there is no guarantee of termination, as we have unbounded recursion by means of the fix statement.

Proof. We prove the theorem by induction on the type derivation for e. The last rule applied cannot have been (e-var), because e is closed. If it was (e-con), then e is a constructor and thereby a value. Likewise for (e-lam) and (e-fail).

⊢ e1 ⇝ e2

n ∈ 1 . .    ⊢ e1 ⇝ e′1
-----------------------------------------------------------  (r-con)
⊢ C {vi}i∈1..m {ei}i∈1..n ⇝ C {vi}i∈1..m e′1 {ei}i∈2..n

⊢ e1 ⇝ e′1
------------------------  (r-app-1)
⊢ (e1 e2) ⇝ (e′1 e2)

----------------------------------  (r-app-2)
⊢ ((λx → e1) e2) ⇝ e1[e2 / x]

--------------------  (r-app-3)
⊢ (fail e) ⇝ fail

n ∈ 1 . .    ⊢match p1 ← e ⇝ ϕ    ϕ ≢ fail
---------------------------------------------  (r-case-1)
⊢ case e of {pi → ei}i∈1..n ; ⇝ ϕ e1

n ∈ 1 . .    ⊢match p1 ← e ⇝ fail
--------------------------------------------------------------  (r-case-2)
⊢ case e of {pi → ei}i∈1..n ; ⇝ case e of {pi → ei}i∈2..n ;

------------------------  (r-case-3)
⊢ case e of ε ⇝ fail

----------------------------------  (r-let)
⊢ let x = e1 in e2 ⇝ e2[e1 / x]

------------------------  (r-fix)
⊢ fix e ⇝ e (fix e)

Figure 3.10: Reduction rules for core language of Figure 3.1

⊢match p ← e ⇝ ϕ

¬whnf(e)    ⊢ e ⇝ e′    ⊢match p ← e′ ⇝ ϕ
---------------------------------------------  (m-reduce)
⊢match p ← e ⇝ ϕ

{⊢match pi ← ei ⇝ ϕi}i∈1..n    ϕ ≡ {ϕn+1−i}i∈1..n ·
------------------------------------------------------  (m-con)
⊢match C {pi}i∈1..n ← C {ei}i∈1..n ⇝ ϕ

------------------------------  (m-var)
⊢match x ← w ⇝ (x ↦ w)

C1 ≢ C2
-----------------------------------------------  (m-fail-1)
⊢match C1 {pi}i∈1..n ← C2 {ej}j∈1..n ⇝ fail

------------------------------------------  (m-fail-2)
⊢match C {pi}i∈1..n ← λx → e ⇝ fail

---------------------------------------  (m-fail-3)
⊢match C {pi}i∈1..n ← fail ⇝ fail

Figure 3.11: Pattern matching for the core language of Figure 3.1, extends Figure 3.10

In the case (e-app) (i.e., e ≡ (e1 e2)), by induction hypothesis e1 is either a value or can be reduced. If e1 can be reduced, then (r-app-1) applies. If e1 is a value, then e1 is either fail or a lambda abstraction or a (partially applied) constructor. Depending on that, either (r-app-3) or (r-app-2) or (r-con) can be applied. If the last type rule used is (e-case) and there is at least one arm in the statement, then (r-case-1) or (r-case-2) applies, depending on whether the pattern match succeeds or fails. We assume here that the pattern match can always be performed, which can be proved by another induction argument. Rule (r-case-3) is left for the case that there are no arms. If the type derivation ends in (e-let-val) or (e-fix), the corresponding reduction rules (r-let) and (r-fix) are possible. For both (e-gen) and (e-subs), applying the induction hypothesis yields the proposition immediately.

Theorem 3.2 (Type preservation). If K; Γ ⊢ e :: t and ⊢ e ⇝ e′, then K; Γ ⊢ e′ :: t.

This property, often called subject reduction, means in words that if an expression e can be assigned type t under some environment, and e reduces to e′, then e′ still has type t under the same environment.

Proof.
Again, we proceed by induction on the typing derivation. If the last rule is (e-var) or (e-con), then there are no reduction rules that can be applied. The same holds for rules (e-lam) and (e-fail). In the case that the last rule is (e-app), four reduction rules are possible: in the case of (r-app-3), the proposition trivially holds, and in the case of (r-app-1), the proposition follows immediately from the induction hypothesis applied to the function. The same holds for the case of (r-con); here, the induction hypothesis is applied to the first non-value argument of the constructor. If the reduction rule applied is (r-app-2), then we need the fact that a type is preserved under substitution, which we will prove as Lemma 3.3. This lemma also helps us in case (e-let-val), where the only reduction rule that applies is (r-let). If we have a case statement and the last type rule used is (e-case), there are three reduction rules that could apply: for (r-case-2), we see from the form of (e-case) that dropping an arm from a case statement still allows the same type judgment as before. For (r-case-3), the proposition holds again trivially, which leaves (r-case-1). In this situation,

e ≡ case e0 of {pi → ei}i∈1..n ; .

Let {xi}i∈1..m , be the variables contained in p1. Then, by (e-case), we have

Γ, Γ1 ⊢ e1 :: t

where Γ1 ≡ {xi :: ti}i∈1..m , (we can prove this equality using a straightforward induction on the pattern (⊢pat) judgments). From (r-case-1), we know that

⊢match p1 ← e ⇝ ϕ ,

and using a similarly straightforward induction on the pattern matching (⊢match) judgments, we can show that ϕ is a composition of simple substitutions replacing each xi with some expression of type ti. Applying Lemma 3.3 then leads to the desired result that Γ ⊢ ϕ e1 :: t. If the last step has been (e-fix), then e ≡ fix e′. Hence, we have to show that Γ ⊢ e′ (fix e′) :: t. But Γ ⊢ e′ :: t → t, and Γ ⊢ fix e′ :: t, therefore an application of (e-app) does the trick.
In the case (e-gen), we have to show that ⊢ e′ :: ∀a :: κ. t. From the induction hypothesis, applied to e :: t, we know that K, a :: κ; Γ ⊢ e′ :: t. Because a ∉ dv(Γ), we can immediately reapply (e-gen) to get the desired result.

If the last derivation step uses rule (e-subs), then there is a type t′ with ⊢ t′ ≤ t and ⊢ e :: t′. We can apply the induction hypothesis to e :: t′, resulting in ⊢ e′ :: t′. Reapplying (e-subs), which only depends on the type, not on the expression, proves the proposition.

It remains to show the promised lemma:

Lemma 3.3 (Substitution). If Γ, x :: u ⊢ e :: t and Γ ⊢ e′ :: u, then Γ ⊢ e[e′ / x] :: t.

Proof. Once more, we will prove the lemma using an induction on the type derivation for e, inspecting the last derivation step.

If the last step is (e-var), then e is a variable, i.e., e ≡ y. If y ≢ x, then e[e′ / x] ≡ e, and the proposition holds. If y ≡ x, then t ≡ u, because Γ, x :: u ⊢ x :: u. Furthermore, e[e′ / x] ≡ e′. Since Γ ⊢ e′ :: u, we are done.

If the last step is (e-con), then e is a constructor and unaffected by substitution. In the case of (e-app), we can apply the induction hypothesis to both function and argument. All other cases are similar to (e-app). We can make sure that bound variables do not interfere with the substituted variable x by renaming. After that, the substitution only affects the subexpressions that occur in the construct, and we can therefore obtain the desired result directly by applying the induction hypothesis to all subexpressions.

3.6 Recursive let

We introduce letrec as a derived syntactic construct, defined by its translation to a combination of fix, let, and case, as in Figure 3.13. We only show rule (tr-letrec) for the letrec construct – all other constructs are unaffected by the translation.
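The renaming step that the proof of Lemma 3.3 appeals to can be made concrete. The following is an illustrative sketch, not part of the thesis's formal development: the datatype Expr and the naive fresh-name choice are invented here, implementing capture-avoiding substitution e[e′/x] for a tiny lambda fragment of the core language.

```haskell
-- Sketch: capture-avoiding substitution for a minimal lambda fragment.
-- The Expr datatype and the priming scheme for fresh names are invented
-- for this example only.
import Data.List ((\\))

data Expr = Var String | App Expr Expr | Lam String Expr
  deriving (Eq, Show)

freeVars :: Expr -> [String]
freeVars (Var y)   = [y]
freeVars (App f a) = freeVars f ++ freeVars a
freeVars (Lam y b) = freeVars b \\ [y]

-- subst e' x e computes e[e'/x]; bound variables that would capture a
-- free variable of e' are renamed first (naively, by priming).
subst :: Expr -> String -> Expr -> Expr
subst e' x (Var y)
  | y == x    = e'
  | otherwise = Var y
subst e' x (App f a) = App (subst e' x f) (subst e' x a)
subst e' x (Lam y b)
  | y == x               = Lam y b           -- x is shadowed, stop here
  | y `elem` freeVars e' =                   -- rename y to avoid capture
      let y' = y ++ "'"
      in Lam y' (subst e' x (subst (Var y') y b))
  | otherwise            = Lam y (subst e' x b)
```

For instance, substituting y for x under a binder for y renames the binder: subst (Var "y") "x" (Lam "y" (Var "x")) yields Lam "y'" (Var "y").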
The translation assumes that we have tuples available in the language, and we use sel(m, n) (where 1 ≤ m ≤ n) as an abbreviation for the function that selects the m-th component of an n-tuple. It can be defined as

sel(m, n) ≡ λx → case x of ({xᵢ}ᵢ∈1..n,) → xₘ .

We call the language that contains letrec, but no (non-recursive) let, fcr. Its syntax is defined in Figure 3.12.

Expressions
e ::= . . .                         everything except let from Figure 3.1
    | letrec {dᵢ}ᵢ∈1..n; in e      recursive let

Figure 3.12: Syntax of fcr, modifies the core language fc of Figure 3.1

⟦efcr⟧rec ≡ efc

(tr-letrec)
    z fresh
    ───────────────────────────────────────────────────────────────────────
    ⟦letrec {xᵢ = eᵢ}ᵢ∈1..n; in e₀⟧rec
      ≡ let z = fix (λz → {let xᵢ = sel(i, n) z in}ᵢ∈1..n ({⟦eᵢ⟧rec}ᵢ∈1..n,))
        in case z of ({xᵢ}ᵢ∈1..n,) → ⟦e₀⟧rec

Figure 3.13: Translation of fcr to fc

We define the operational semantics of letrec via the translation into the language without letrec, given above. The translation touches only recursive lets, leaving everything else unchanged. We need a new type judgment for recursive let and prove that it coincides with the type of its translation. The rule is given in Figure 3.14: both the bound expressions and the body are checked in an extended environment, which contains the types of the bound variables.

Theorem 3.4 (Correctness of letrec typing). If e ≡ letrec {xᵢ = eᵢ}ᵢ∈1..n; in e₀, and K; Γ ⊢ e :: t can be assigned using (e-letrec-val), then K; Γ ⊢ ⟦e⟧rec :: t.

Proof. The proof is by induction on the derivation of the translated expression.

(e-letrec-val)
    Γ′ ≡ Γ {, xᵢ :: tᵢ}ᵢ∈1..n    {Γ′ ⊢ eᵢ :: tᵢ}ᵢ∈1..n    Γ′ ⊢ e :: t
    ──────────────────────────────────────────────────────────────────
    Γ ⊢ letrec {xᵢ = eᵢ}ᵢ∈1..n; in e :: t

Figure 3.14: Type checking for recursive let, modifies Figure 3.3

The translation for a recursive let statement involves tuples, thus we assume that appropriate entries are present in the environments.
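The effect of rule (tr-letrec) can be mimicked directly in Haskell, where fix is definable and a group of mutually recursive bindings is tied through a single tuple, with sel(i, n) rendered as fst and snd. A small sketch with invented names:

```haskell
-- Sketch of the (tr-letrec) translation for a two-element letrec
-- {isEven = ..., isOdd = ...}: both bindings are packed into one
-- tuple, recursion is tied with fix, and each binding is recovered
-- by projection (fst and snd play the role of sel(1,2) and sel(2,2)).
fix :: (a -> a) -> a
fix f = f (fix f)

evenOdd :: (Int -> Bool, Int -> Bool)
evenOdd = fix (\z ->
  let isEven = fst z   -- sel(1,2) z
      isOdd  = snd z   -- sel(2,2) z
  in ( \n -> n == 0 || isOdd  (n - 1)
     , \n -> n /= 0 && isEven (n - 1) ))

main :: IO ()
main = case evenOdd of
  (isEven, isOdd) -> print (isEven 10, isOdd 10)  -- (True,False)
```

Laziness is essential here, exactly as in the formal translation: the projections out of z must not force the tuple before fix has tied the knot.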
If we set t₀ ≡ t, then we know from rule (e-letrec-val) that for all eᵢ, we have Γ′ ⊢ eᵢ :: tᵢ, where Γ′ ≡ Γ, Γ″ with Γ″ ≡ {xᵢ :: tᵢ}ᵢ∈1..n,. The induction hypothesis states that

{Γ′ ⊢ ⟦eᵢ⟧rec :: tᵢ}ᵢ∈0..n .

The first goal is to make a statement about the type of the argument to the fix operator. The innermost tuple has the type

Γ′ ⊢ ({⟦eᵢ⟧rec}ᵢ∈1..n,) :: ({tᵢ}ᵢ∈1..n,) .

From the definition of sel(i, n), it is easy to see that

Γ, z :: ({tᵢ}ᵢ∈1..n,) ⊢ sel(i, n) z :: tᵢ .

Therefore, the entire nested non-recursive let statement is of type

Γ, z :: ({tᵢ}ᵢ∈1..n,) ⊢ {let xᵢ = sel(i, n) z in}ᵢ∈1..n ({⟦eᵢ⟧rec}ᵢ∈1..n,) :: ({tᵢ}ᵢ∈1..n,) ,

by repeated application of rule (e-let-val). Subsequently, we use (e-lam) followed by (e-fix) to see that

Γ ⊢ fix (λz → {let xᵢ = sel(i, n) z in}ᵢ∈1..n ({⟦eᵢ⟧rec}ᵢ∈1..n,)) :: ({tᵢ}ᵢ∈1..n,) .

In the following, let

e′ ≡ fix (λz → {let xᵢ = sel(i, n) z in}ᵢ∈1..n ({⟦eᵢ⟧rec}ᵢ∈1..n,)) ,

and we will now inspect the whole translation result

let z = e′ in case z of ({xᵢ}ᵢ∈1..n,) → ⟦e₀⟧rec .

The rule (e-let-val) states that since we already know that Γ ⊢ e′ :: ({tᵢ}ᵢ∈1..n,), it suffices to show that

Γ, z :: ({tᵢ}ᵢ∈1..n,) ⊢ case z of ({xᵢ}ᵢ∈1..n,) → ⟦e₀⟧rec :: t

to prove the theorem. The induction hypothesis stated that Γ′ ⊢ ⟦e₀⟧rec :: t, and it does not change anything if we extend the environment Γ′ with the binding for z, because z does not occur free in e₀ or ⟦e₀⟧rec. It thus remains to apply (e-case) in the right way: because the pattern is the same as above and therefore of type

Γ, z :: ({tᵢ}ᵢ∈1..n,) ⊢pat ({xᵢ}ᵢ∈1..n,) :: ({tᵢ}ᵢ∈1..n,) ↪ Γ″ ,

we are done.

In Haskell, the keyword let is used to denote recursive let. Because fcr contains only recursive let statements, we will do the same from now on and use the let keyword instead of letrec.
4 Type-indexed Functions

We are now going to leave the known territory of the functional core language. Over the next chapters, we will introduce the theory necessary to compile generic functions in several small steps. As we have learned in the introduction, generic functions are type-indexed functions. We will therefore, in this chapter, explain type-indexed functions, which are functions that take an explicit type argument and can have behaviour that depends on the type argument. The next chapter extends the class of type arguments we admit in both definition and call sites of type-indexed functions. After that, in Chapter 7, we will really make type-indexed functions generic, such that they work for a significantly larger class of type arguments than those that they are explicitly defined for.

4.1 Exploring type-indexed functions

We call a function type-indexed if it takes an explicit type argument and can have behaviour that depends on the type argument. Let us jump immediately to a first example and see what a type-indexed function looks like:

add ⟨Bool⟩ = (∨)
add ⟨Int⟩ = (+)
add ⟨Char⟩ = λx y → chr (add ⟨Int⟩ (ord x) (ord y)) .

The above function add defines an addition function that works for any of the three types Bool, Int, Char. In principle, the definition can be seen as the definition of three independent functions: one function add ⟨Bool⟩, which takes two boolean values and returns the logical "or" of its arguments; one function add ⟨Int⟩, which performs numerical addition on its two integer arguments; finally, one function add ⟨Char⟩, which returns the character whose numerical code is the sum of the numerical codes of the function's two character arguments. The three functions just happen to have a name consisting of two parts, the first of which is identical for all three of them, and the second of which is the name of a type in special parentheses.
In Generic Haskell, type arguments are always emphasized by special parentheses. One way in which this type-indexed function differs from three independent functions is its type. The type of add is

⟨a :: ∗⟩ → a → a → a .

Very often, to emphasize that add is a type-indexed function, we move the type argument to the left hand side of the type signature:

add ⟨a :: ∗⟩ :: a → a → a .

This means that add has an additional argument, its type argument a, which is of kind ∗ (indeed, Bool, Int, and Char are all three types of kind ∗), and then two arguments of the type indicated by the type argument, yielding a result of the same type. Hence, for a type-indexed function, different cases have related types.

According to this type, the function can be called by providing three arguments: one type, and two arguments of that type. Indeed, the calls

add ⟨Bool⟩ False True
add ⟨Int⟩ 2 7
add ⟨Char⟩ 'A' ' '

would evaluate to True, 9, and 'a' respectively. What happens if we call add on a different type, say Float? For

add ⟨Float⟩ 3.7 2.09 ,

we do not have an appropriate case to select, so all we can do is fail. But the program fails at compile time! The compiler can check that the function is defined for three types only, and that Float is not among them, and thus report an error. We could even decide to include the types for which add is defined in its type signature and write

add ⟨a :: ∗, a ∈ {Bool, Int, Char}⟩ :: a → a → a ,

but this would make the type language extremely verbose, and as we will see, it will get verbose enough anyway. Therefore we make the failure to specialize a call to a generic function, due to the absence of an appropriate case in the definition, another sort of error, which we call a specialization error.
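The three cases of add behave like three separate monomorphic functions, and the calls above can be pictured in plain Haskell by writing those functions out. The names addBool, addInt, and addChar are invented stand-ins for the three cases; this is only an illustration, not the Generic Haskell mechanism itself:

```haskell
-- Sketch: the three cases of add written as three ordinary,
-- independently named monomorphic functions (names invented;
-- the text's (∨) is Haskell's (||)).
import Data.Char (chr, ord)

addBool :: Bool -> Bool -> Bool
addBool = (||)

addInt :: Int -> Int -> Int
addInt = (+)

addChar :: Char -> Char -> Char
addChar x y = chr (addInt (ord x) (ord y))

main :: IO ()
main = do
  print (addBool False True)   -- True
  print (addInt 2 7)           -- 9
  print (addChar 'A' ' ')      -- 'a'
```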
Hence, if we have a Generic Haskell program containing the above definition of add and subsequently a call to add ⟨Float⟩ like the one above, we get a statically reported specialization error claiming that add ⟨Float⟩ cannot be derived, given the cases add is defined for.

4.2 Relation to type classes

It should be mentioned once more that what we have seen so far is not very exciting; everything we have done can also – probably even better – be done by using type classes. Instead of defining add as a generic function, we could also define a type class with a method named add:

class Add a where
  add :: a → a → a

instance Add Bool where
  add = (∨)

instance Add Int where
  add = (+)

instance Add Char where
  add = λx y → chr (ord x + ord y)

This code has nearly the same effect as the definition of the type-indexed function add before. The Haskell type for the class method add is

add :: (Add a) ⇒ a → a → a ,

where the class constraint Add a captures the same information as the type argument before, namely that there has to be some type a of kind ∗ (the kind is implicit in the constraint, because class Add expects an argument of kind ∗). The occurrence of class Add also encodes the fact that the function can only be called on Bool, Int, and Char. If add is called on any other type, such as Float, an "instance error" will be reported statically, which corresponds directly to the specialization error for type-indexed functions.

There are two differences. First, the type class can be extended with new instances, whereas the type-indexed function is closed, i.e., it has the cases that are present at the site of its definition, and cannot be extended later. This does not make a huge difference while we consider only single-module programs, but for large programs consisting of multiple modules it can have both advantages and disadvantages.
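Unlike the type-indexed version, the class-based variant is directly executable Haskell. A self-contained sketch reusing the Add class from the text, with example calls showing that the instance is selected from the argument types:

```haskell
-- The Add class from the text, made runnable; (∨) is spelled (||).
import Data.Char (chr, ord)

class Add a where
  add :: a -> a -> a

instance Add Bool where add = (||)
instance Add Int  where add = (+)
instance Add Char where add = \x y -> chr (ord x + ord y)

main :: IO ()
main = do
  print (add False True)    -- Bool instance: True
  print (add (2 :: Int) 7)  -- Int instance: 9
  print (add 'A' ' ')       -- Char instance: chr (65 + 32) = 'a'
```

A call on an uncovered type such as Float is rejected statically with a "no instance" error, mirroring the specialization error described above.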
We will discuss the impacts of this design decision, as well as the possibility of "open" type-indexed functions, later, in Section 18.4.

Second – and this is a clear advantage of the type classes for now – we do not have to provide the type argument at the call site of the function. We can just write add 'A' ' ' and the case for type Char will be selected automatically. In other words, the type argument is inferred for type classes. Type inference for generic functions – in different variations – is the topic of Chapter 13. For now, we assume that all type information is explicitly provided.

4.3 Core language with type-indexed functions fcr+tif

Having introduced the concept of type-indexed functions informally in the context of Haskell, we will now extend the core language of Chapter 3 with type-indexed functions and formally analyze their properties and semantics. This extended language is called fcr+tif, and the syntax modifications with respect to fcr are shown in Figure 4.1.

Value declarations
d ::= x = e                                          function declaration, from Figure 3.1
    | x ⟨a⟩ = typecase a of {Pᵢ → eᵢ}ᵢ∈1..n;       type-indexed function declaration
Expressions
e ::= . . .                                          everything from Figures 3.1 and 3.12
    | x ⟨A⟩                                          generic application
Type patterns
P ::= T                                              named type pattern
Type arguments
A ::= T                                              named type

Figure 4.1: Core language with type-indexed functions fcr+tif, extends language fcr in Figures 3.1 and 3.12

The language extension introduces a new form of value declaration, for type-indexed functions, using a type-level case statement to pattern match on types. The notation that we have seen in the beginning of this chapter serves as syntactic sugar for the typecase construct: the function

add ⟨Bool⟩ = (∨)
add ⟨Int⟩ = (+)
add ⟨Char⟩ = λx y → chr (ord x + ord y)

can be written as

add ⟨a⟩ = typecase a of
  Bool → (∨)
  Int → (+)
  Char → λx y → chr (ord x + ord y)

in fcr+tif.
Note that the typecase construct is tied to the declaration level and, unlike an ordinary case, may not occur freely in an expression. Furthermore, arms of an ordinary case statement match against elaborate patterns, whereas type patterns are for now just names of datatypes (we introduce the syntactic category of type patterns already here because type patterns will be allowed to be more complex later). We assume that a typecase is only legal if all type patterns are mutually distinct.

The expression language is extended with generic application, which makes it possible to call a type-indexed function (again, only named types are allowed as type arguments for now). Type-indexed functions are not first-class in our language: whenever they are called, a type argument must be supplied immediately. Furthermore, type-indexed functions cannot be bound to ordinary variables or passed as arguments to a function. We will explain the reasons for this restriction in Section 11.7. We use the special parentheses to mark type arguments also at the call sites of type-indexed functions.

4.4 Translation and specialization

We will define the semantics of fcr+tif not by providing additional reduction rules for the generic constructs, but by giving a translation into the original core language fcr. We will then prove that the translation is correct by proving that the translation preserves types. The translation is the topic of Figure 4.2.
⟦dfcr+tif⟧tif Σ₁ ≡ {dfcr}ᵢ∈1..n; Σ₂

(tr-fdecl)
    {dᵢ ≡ cp(x, Tᵢ) = ⟦eᵢ⟧tif Σ}ᵢ∈1..n
    ──────────────────────────────────────────────────────────────────────────
    ⟦x ⟨a⟩ = typecase a of {Tᵢ → eᵢ}ᵢ∈1..n;⟧tif Σ ≡ {dᵢ}ᵢ∈1..n; ; {x ⟨Tᵢ⟩}ᵢ∈1..n,

    ⟦x = e⟧tif Σ ≡ x = ⟦e⟧tif Σ ; ε

⟦efcr+tif⟧tif Σ ≡ efcr

(tr-let)
    Σ′ ≡ Σ {, Σᵢ}ᵢ∈1..n    {⟦dᵢ⟧tif Σ′ ≡ {dᵢ,ⱼ}ⱼ∈1..mᵢ; Σᵢ}ᵢ∈1..n
    ─────────────────────────────────────────────────────────────────
    ⟦let {dᵢ}ᵢ∈1..n; in e⟧tif Σ ≡ let {{dᵢ,ⱼ}ⱼ∈1..mᵢ;}ᵢ∈1..n; in ⟦e⟧tif Σ′

(tr-genapp)
    x ⟨T⟩ ∈ Σ
    ─────────────────────────
    ⟦x ⟨T⟩⟧tif Σ ≡ cp(x, T)

Figure 4.2: Translation of fcr+tif to fcr

We first introduce a rule to translate one fcr+tif declaration into a sequence of fcr declarations. The judgment is of the form

⟦dfcr+tif⟧tif Σ₁ ≡ {dfcr}ᵢ∈1..n; Σ₂ ,

where the environments Σ₁ and Σ₂ are signature environments. We define the signature of a generic function to be the list of types that appear in the patterns of the typecase that defines the function. At the moment, type patterns are plainly named types, thus the signature is the list of types for which there are cases in the typecase construct. For example, the add function defined in Section 4.1 has signature Bool, Int, Char. Note that the signature of a type-indexed function is something different from the type signature of a (type-indexed) function. The former is a list of named types that is relevant for the translation process, the latter assigns a type to the whole function for type checking purposes.

A signature environment contains entries of the form x ⟨T⟩, indicating that the named type T is in the signature of the type-indexed function x. In the above-mentioned judgment, the environment Σ₁ is the input environment under which the translation is to take place; Σ₂ is the output environment containing bindings of which we learn during the analysis of the declaration. The environment is only really accessed in the translation of expressions, which takes the form

⟦efcr+tif⟧tif Σ ≡ efcr .
Both translations are understood to be projections – they translate every fcr+tif declaration or expression construct for which there is no special rule and that is also valid in fcr to itself. Therefore, rules that are similar to (tr-fdecl) are left implicit in the translation of expressions.

The declaration of a type-indexed function is translated into a group of declarations with different names. The operation cp is assumed to take a variable and a type name and create a unique variable name out of the two. Here, cp stands for component, and we call the translation of a single case of a type-indexed function a component henceforth. Furthermore, the signature of the type-indexed function defined is stored in the output environment. For a normal function declaration, the expression is translated, and the empty environment is returned.

In a let construct, one or more type-indexed functions can be defined. The environment Σ′, containing all the available components, is recursively made visible for the translation of the declarations (i.e., the right hand sides of the declarations may contain recursive calls to the just-defined functions) and to the body of the let statement.

We call the process of translating a call to a type-indexed function specialization, because we replace the call to a function which depends on a type argument by a call to a specialized version of the function, for a specific type. The process of specialization thus amounts to selecting the right case of the type-indexed function, or, in other words, performing the pattern match at the type level. In rule (tr-genapp), a generic application is specialized by verifying that the type argument occurs in the signature of the function, and then using the appropriate component of the function as the translation. If the type argument is not an element of the signature of the generic function called, then the translation will fail with a specialization error.
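In plain Haskell terms, specialization turns each typecase arm into a top-level binding named by cp, and rewrites every call x ⟨T⟩ into a reference to that binding. A sketch with invented names, rendering cp(add, T) as cp_add_T:

```haskell
-- Sketch of the translation's output: one component per typecase arm
-- (names cp_add_Int, cp_add_Char are an invented rendering of cp(add,T)).
import Data.Char (chr, ord)

cp_add_Int :: Int -> Int -> Int
cp_add_Int = (+)

cp_add_Char :: Char -> Char -> Char
cp_add_Char x y = chr (cp_add_Int (ord x) (ord y))
  -- the call add<Int> in the source was specialized to cp_add_Int

-- A use site: the generic application add<Char> 'A' ' ' specializes to:
example :: Char
example = cp_add_Char 'A' ' '   -- 'a'
```

A call such as add ⟨Float⟩ has no corresponding cp binding, so specialization fails statically, which is exactly the specialization error.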
In Chapter 5, we will extend the language further to allow a larger class of generic applications to succeed.

4.5 Type checking

To make fcr+tif a "real" language extension, we extend the type rules for the functional core language fcr to cover the constructs for defining and calling type-indexed functions as well. The additional rules are shown in Figures 4.3 and 4.4. Together with the type checking rules of Figures 3.3 and 3.14, they form the type checking judgments for fcr+tif. The judgments still have the same shape, namely

K; Γ ⊢ e :: t ,

but the environment Γ can now, next to the usual entries of the form x :: t, also contain entries of the form x ⟨a :: ∗⟩ :: t, associating a type with a type-indexed function. As type-indexed and ordinary functions share the same name space, there can only be one active binding for any name x, either as a type-indexed or as an ordinary function.

(e-genapp)
    K ⊢ T :: ∗    x ⟨a :: ∗⟩ :: t′ ∈ Γ
    ────────────────────────────────────
    K; Γ ⊢ x ⟨T⟩ :: t′[T / a]

(e-let)
    Γ′ ≡ Γ {, Γᵢ}ᵢ∈1..n    {K; Γ′ ⊢decl dᵢ ↪ Γᵢ}ᵢ∈1..n    K; Γ′ ⊢ e :: t
    ──────────────────────────────────────────────────────────────────────
    K; Γ ⊢ let {dᵢ}ᵢ∈1..n; in e :: t

Figure 4.3: Type checking for fcr+tif, extends Figure 3.3

(d-val)
    K; Γ ⊢ e :: t
    ────────────────────────────
    K; Γ ⊢decl x = e ↪ x :: t

(d-tif)
    {K ⊢ Tᵢ :: ∗}ᵢ∈1..n    K, a :: ∗ ⊢ t :: ∗    {K; Γ ⊢ eᵢ :: t[Tᵢ / a]}ᵢ∈1..n
    ──────────────────────────────────────────────────────────────────────────────
    K; Γ ⊢decl x ⟨a⟩ = typecase a of {Tᵢ → eᵢ}ᵢ∈1..n; ↪ x ⟨a :: ∗⟩ :: t

Figure 4.4: Type checking for declarations in fcr+tif, extends Figure 4.3

The rule for generic application checks that the kind of the type argument is ∗ and that x is a type-indexed function in scope. The type is the type of the type-indexed function, with the formal type argument in its type substituted by the actual type argument of the application.

The former rule (e-letrec-val) is now obsolete and replaced by the more general (e-let), which allows for both value and type-indexed function declarations.
It makes use of a subsidiary judgment for declarations of the form

K; Γ₁ ⊢decl d ↪ Γ₂

that checks a declaration d under environments K and Γ₁ and results in an environment Γ₂ containing possible new bindings introduced by the declaration. The two rules for that judgment are presented in Figure 4.4.

The rule (d-val) is for value declarations. It is easy to see that the new rules for let statements are generalizations that coincide with the old rules in the case that all declarations are of the form x = e.

The second rule, (d-tif), is for declarations of type-indexed functions. All type patterns must be of kind ∗. There must be a type t of kind ∗, containing a type variable a of kind ∗, and all expressions eᵢ in the arms of the typecase must have an instance of this type t, with a substituted by the type pattern Tᵢ. The type t that has this characteristic is then returned as the type of the type-indexed function in the resulting environment.

We now want to show that the translation is correct, i.e., that it preserves type correctness, or in this case even the exact types of expressions. However, before we can proceed to the theorem, we first have to extend the translation to environments. Signature environments do not exist in fcr, and type environments cannot contain type signatures for type-indexed functions. Translations are introduced in Figure 4.5, and judgments are of the forms

⟦Γfcr+tif⟧tif ≡ Γfcr

(tif-gam-1)    ⟦ε⟧tif ≡ ε
(tif-gam-2)    ⟦Γ, x :: t⟧tif ≡ ⟦Γ⟧tif, x :: t
(tif-gam-3)    ⟦Γ, x ⟨a :: ∗⟩ :: t⟧tif ≡ ⟦Γ⟧tif

⟦Σfcr+tif⟧tif Γfcr+tif ≡ Γfcr

(tif-sig-1)    ⟦ε⟧tif Γ ≡ ε

(tif-sig-2)
    x ⟨a :: ∗⟩ :: t ∈ Γ
    ─────────────────────────────────────────────────
    ⟦Σ, x ⟨T⟩⟧tif Γ ≡ ⟦Σ⟧tif Γ, cp(x, T) :: t[T / a]

Figure 4.5: Translation of fcr+tif environments to fcr type environments

⟦Γfcr+tif⟧tif ≡ Γfcr
⟦Σfcr+tif⟧tif Γfcr+tif ≡ Γfcr .
The former filters type signatures of type-indexed functions from a type environment, whereas the latter translates a signature environment into type signatures for the associated components: every type that is in the signature of a type-indexed function is translated into a type assumption for the corresponding component of the function. We use the abbreviation

⟦Γ; Σ⟧tif ≡ ⟦Γ⟧tif, ⟦Σ⟧tif Γ .

Note that the translation of the signature environment produces bindings for components only, and they never clash with the bindings from the translated type environment. Now we have everything we need for the theorem:

Theorem 4.1 (Correctness of fcr+tif). If e is an fcr+tif expression with K; Γ ⊢ e :: t, then K; ⟦Γ; Σ⟧tif ⊢ ⟦e⟧tif Σ :: t, assuming that Σ is a signature environment such that ⟦e⟧tif Σ exists.

Corollary 4.2. If e is an fcr+tif expression with K; Γ ⊢ e :: t with no type-indexed bindings in Γ, and Σ ≡ ε, then K; Γ ⊢ ⟦e⟧tif ε :: t.

In other words, if the translation of an fcr+tif expression into fcr succeeds, the resulting expression has the same type as the original expression. The corollary, which follows immediately from the theorem, emphasizes the special case where Σ is empty. In this situation exactly the same environments can be used to assign t to both e and its translation.

Proof of the theorem. An induction on the type derivation for e will do the trick once more. Only the new cases that are possible for the last derivation step are interesting: if the last step is (e-genapp), then e ≡ x ⟨T⟩, and we know by (tr-genapp) that x ⟨T⟩ is in Σ. Furthermore, (e-genapp) ensures that x ⟨a :: ∗⟩ :: t′ is in Γ, where t ≡ t′[T / a]. Now, ⟦e⟧tif Σ ≡ cp(x, T), and cp(x, T) :: t′[T / a] is in ⟦Γ; Σ⟧tif.

If the last step of the type derivation for e is (e-let), we first need to extend the correctness theorem (and the induction) to the translation of declarations.
We will prove the following property: if ⟦d⟧tif Σ ≡ {dᵢ}ᵢ∈1..n; Σ′ and both K; Γ ⊢decl d ↪ Γ′ and {K; ⟦Γ; Σ⟧tif ⊢decl dᵢ ↪ Γᵢ}ᵢ∈1..n, then ⟦Γ′; Σ′⟧tif ≡ {Γᵢ}ᵢ∈1..n,.

If the derivation for d ends in the rule (d-val), then d ≡ x = e. In this case, Σ′ ≡ ε, n ≡ 1, and d₁ ≡ x = ⟦e⟧tif Σ. By the induction hypothesis, we know that if K; Γ ⊢ e :: t, then K; ⟦Γ; Σ⟧tif ⊢ ⟦e⟧tif Σ :: t. Therefore, the two applications of ⊢decl are

K; Γ ⊢decl x = e ↪ x :: t

and

K; ⟦Γ; Σ⟧tif ⊢decl x = ⟦e⟧tif Σ ↪ x :: t ,

thus Γ′ ≡ Γ₁ ≡ x :: t. This implies ⟦Γ′; Σ′⟧tif ≡ ⟦Γ′; ε⟧tif ≡ ⟦Γ′⟧tif ≡ Γ₁.

If the derivation for d ends in the rule (d-tif), then

d ≡ x ⟨a⟩ = typecase a of {Tᵢ → eᵢ}ᵢ∈1..n; .

Here, we apply the induction hypothesis to the eᵢ: the eᵢ and their translations ⟦eᵢ⟧tif Σ have the same type, say K; Γ ⊢ eᵢ :: tᵢ and K; ⟦Γ; Σ⟧tif ⊢ ⟦eᵢ⟧tif Σ :: tᵢ. We know from (d-tif) that there is a t such that tᵢ ≡ t[Tᵢ / a]. Furthermore, in this situation dᵢ ≡ cp(x, Tᵢ) = ⟦eᵢ⟧tif Σ. Next to that, we can conclude that Γ′ ≡ x ⟨a :: ∗⟩ :: t and Σ′ ≡ {x ⟨Tᵢ⟩}ᵢ∈1..n,, and Γᵢ ≡ cp(x, Tᵢ) :: tᵢ. We thus have to show that

⟦x ⟨a :: ∗⟩ :: t; {x ⟨Tᵢ⟩}ᵢ∈1..n,⟧tif ≡ {cp(x, Tᵢ) :: tᵢ}ᵢ∈1..n, ,

but this follows immediately from the rules in Figure 4.5 for the translation of signature environments.

We can now cover the case of the proof of the theorem where the derivation for e concludes on an application of (e-let). Here, we know that e ≡ let {dᵢ}ᵢ∈1..n; in e₀, and

⟦e⟧tif Σ ≡ let {{dᵢ,ⱼ}ⱼ∈1..mᵢ;}ᵢ∈1..n; in e₀′

where e₀′ ≡ ⟦e₀⟧tif Σ′, and Σ′ ≡ Σ {, Σᵢ}ᵢ∈1..n with ⟦dᵢ⟧tif Σ′ ≡ {dᵢ,ⱼ}ⱼ∈1..mᵢ; Σᵢ.

We now apply the induction hypothesis to e₀ and the dᵢ. For e₀, we get that both Γ′ ⊢ e₀ :: t and ⟦Γ′; Σ′⟧tif ⊢ e₀′ :: t, where Γ′ ≡ Γ {, Γᵢ}ᵢ∈1..n and Γ′ ⊢decl dᵢ ↪ Γᵢ.
For the declarations, we use the correctness property that we have proved above, which yields

{{⟦Γ′; Σ′⟧tif ⊢decl dᵢ,ⱼ ↪ Γᵢ,ⱼ}ⱼ∈1..mᵢ}ᵢ∈1..n ,

where ⟦Γᵢ; Σᵢ⟧tif ≡ {Γᵢ,ⱼ}ⱼ∈1..mᵢ,. Observing that

⟦Γ′; Σ′⟧tif ≡ ⟦Γ; Σ⟧tif {, ⟦Γᵢ; Σᵢ⟧tif}ᵢ∈1..n ,

an application of rule (e-letrec-val) results in

⟦Γ; Σ⟧tif ⊢ let {{dᵢ,ⱼ}ⱼ∈1..mᵢ;}ᵢ∈1..n; in ⟦e₀⟧tif Σ′ :: t ,

which is precisely what we need.

5 Parametrized Type Patterns

5.1 Goals

In the type-indexed functions that we have treated so far, one deficiency stands out: types of a kind other than ∗ are nowhere allowed, neither in type patterns nor in type arguments. A consequence is that composite types of kind ∗ – such as [Int], where the list type constructor [ ], of kind ∗ → ∗, is involved – are not allowed either. What if we want a type-indexed function that computes something based on a data structure of kind ∗ → ∗, for example the size of a data structure, i.e., the number of "elements" in that structure? A definition such as

size ⟨[α]⟩ x = length x
size ⟨Maybe α⟩ Nothing = 0
size ⟨Maybe α⟩ (Just _) = 1
size ⟨Tree α⟩ Leaf = 0
size ⟨Tree α⟩ (Node ℓ x r) = size ⟨Tree α⟩ ℓ + 1 + size ⟨Tree α⟩ r

might do, if we assume that Tree is a Haskell datatype defined as

data Tree (a :: ∗) = Leaf | Node (Tree a) a (Tree a) .

For the definition, we assume that patterns of the form T α are allowed, where T is a named type of kind ∗ → ∗, and α is a variable. All patterns that occur in the definition are of this form. But that is still too limiting: a type-indexed function that works on types of kind ∗ may as well work on some data structures of higher kind. Recall our example from the last chapter, the add function:

add ⟨Bool⟩ = (∨)
add ⟨Int⟩ = (+)
add ⟨Char⟩ x y = chr (ord x + ord y) .
Given that we know how to add two values of some type t, we can also add two values of type Maybe t, by treating Nothing as an exceptional value:

add ⟨Maybe α⟩ Nothing _ = Nothing
add ⟨Maybe α⟩ _ Nothing = Nothing
add ⟨Maybe α⟩ (Just x) (Just y) = Just (add ⟨α⟩ x y)

The knowledge of how to add two values of the argument type of Maybe is hidden in the reference to add ⟨α⟩ in the final case, and we still have to make sure that we get access to that information somehow. We could also extend the function add to lists as pointwise addition:

add ⟨[α]⟩ x y
  | length x == length y = map (uncurry (add ⟨α⟩)) (zip x y)
  | otherwise = error "args must have same length" .

We do not even have to restrict ourselves to datatypes of kind ∗ and ∗ → ∗. We can do pointwise addition for pairs, too:

add ⟨(α, β)⟩ (x1, x2) (y1, y2) = (add ⟨α⟩ x1 y1, add ⟨β⟩ x2 y2) .

Now our function add has arms which involve type constructors of kinds ∗, ∗ → ∗, and ∗ → ∗ → ∗, all at the same time.

Perhaps surprisingly, we need not just one, but three significant extensions to the simple type-indexed functions introduced in Chapter 4 to be able to successfully handle the above examples.

The first requirement is clear: type patterns must become more general. Instead of just named types of kind ∗, we will admit named types of arbitrary kind, applied to type variables in such a way that the resulting type is of kind ∗ again. This addition will be described in more detail in Section 5.2.

Secondly, and this is the most difficult part, we need to introduce the notion of dependencies between type-indexed functions. A dependency arises if, in the definition of one type-indexed function, another type-indexed function (including the function itself) is called with a variable type as type argument. Dependencies must be tracked by the type system, so the type system must be extended accordingly. All this is the topic of Section 5.3.
Last, to be of any use, we must also extend the specialization mechanism. Until now, we could only handle calls to type-indexed functions for named types of kind ∗. Now, we want to be able to call size ⟨[Int]⟩ or add ⟨(Char, [Bool])⟩. Thus, type arguments have to be generalized so that they may contain type applications as well. This is explained in Section 5.4.

In the rest of this chapter, we will describe all of these extensions in detail by means of examples. In the next chapter, we will formally extend our core language with the constructs necessary to support the extensions.

5.2 Parametrized type patterns

We now allow patterns such as ⟨[α]⟩ or ⟨Either α β⟩ in the definitions of type-indexed functions. A pattern must be of kind ∗, and of the form of a named type constructor applied to enough type variables to saturate it. Thus ⟨Either α⟩ is not allowed because Either is only partially applied. Of course, types of kind ∗ are a special case of this rule: the type pattern ⟨Int⟩ is the nullary type constructor Int applied to no type variables at all.

All type variables in type patterns have to be distinct, just as we require variables in ordinary patterns to be distinct. A pattern such as ⟨(α, α)⟩ is illegal. Also, we do not allow nested type patterns: ⟨[Int]⟩ is forbidden, and so is ⟨Either α Char⟩. The top-level type constructor is the only named type occurring in a type pattern; the rest are all type variables. This restriction on type patterns is not essential. One could allow nested type patterns, or multiple patterns for the same type, as they are allowed in Haskell case statements. This would, however, significantly complicate the future algorithms for the translation of type-indexed functions with issues that are not directly related to generic programming.
With our restriction, performing pattern matching on the cases of a type-indexed definition remains as simple as possible: comparing the top-level constructors is sufficient to find and select the correct branch.

We retain the notions of signature, specialization, and component, all defined in Section 4.4. The signature of a type-indexed function is the set of named types occurring in the type patterns. For size, the signature is [ ], Maybe, Tree. For add, including the new cases, the signature consists of the six types Bool, Int, Char, Maybe, [ ], (,). Specialization is the process of translating a call of a generic function. A component is the result of translating a single arm of a typecase construct.

This translation to components can be very simple in some cases. For instance, the arm of the size function for lists,

size ⟨[α]⟩ x = length x ,

can be translated into a component

cp(size, [ ]) x = length x

almost in the same way as we translate arms for types of kind ∗. Note that components are always generated for a type-indexed function and a named type. In this case, the function is size, and the named type [ ]. As the signature of a type-indexed function can contain types of different kinds (compare the example signature for add above), components can also be created for named types of different kinds.

The variables in the type patterns do not occur in the translation. In this case, the translation is easy because the variable α is not used on the right hand side. Things get more difficult if the variables in the type patterns are used in generic applications on the right hand sides; we discuss that in the following section.
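The components of size at each of its three signature types are ordinary Haskell functions. A sketch with invented names (cp(size, T) rendered as sizeList, sizeMaybe, sizeTree), using the Tree datatype from the text:

```haskell
-- Sketch: the size components as plain Haskell bindings
-- (component names invented; Tree is the datatype from the text).
data Tree a = Leaf | Node (Tree a) a (Tree a)

sizeList :: [a] -> Int
sizeList = length

sizeMaybe :: Maybe a -> Int
sizeMaybe Nothing  = 0
sizeMaybe (Just _) = 1

sizeTree :: Tree a -> Int
sizeTree Leaf         = 0
sizeTree (Node l _ r) = sizeTree l + 1 + sizeTree r

main :: IO ()
main = print ( sizeList "abc"
             , sizeMaybe (Just 'x')
             , sizeTree (Node Leaf (1 :: Int) (Node Leaf 2 Leaf)) )
-- (3,1,2)
```

These arms need no extra information because α is not used on the right hand sides, which is exactly the simple situation described above.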
5.3 Dependencies between type-indexed functions

When a type-indexed function is called within the definition of another type-indexed function, we must distinguish different sorts of calls, based on the type argument: calls with type arguments that are constant are treated differently from calls where the type argument contains variables.

Let us look at the add function once more, in particular at the Int and Char cases of the function:

  add ⟨Int⟩      = (+)
  add ⟨Char⟩ x y = chr (ord x + ord y) .

The ord of a character is an integer; therefore we could equivalently have written

  add ⟨Char⟩ x y = chr (add ⟨Int⟩ (ord x) (ord y)) .

During translation, we can generate a component cp(add, Char) as usual: we specialize the call add ⟨Int⟩ on the right hand side to refer to cp(add, Int). The type argument Int is statically known during the translation of the function definition, therefore the compiler can locate and access the appropriate component.

On the other hand, in the case for lists

  add ⟨[α]⟩ x y
    | length x == length y = map (uncurry (add ⟨α⟩)) (zip x y)
    | otherwise            = error "args must have same length" ,

we cannot simply generate a component cp(add, [ ]), because we have to specialize the call add ⟨α⟩ without knowing at the definition site of add what α will be. This information is only available where the function add is called. Nevertheless, it is desirable that we can translate the definition of a type-indexed function without having to analyze where and how the function is called.

The solution to this problem is surprisingly simple: we say that the function add is a dependency of add. A dependency makes explicit that information is missing. This information is needed during the translation. In the result of the translation, this information is provided in the form of an additional function argument passed to the components of add. The right hand sides can then access this argument.
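The two constant-type cases can be rendered directly as plain Haskell functions. The names cpAddInt and cpAddChar below are illustrative stand-ins for the generated components cp(add, Int) and cp(add, Char), not the compiler's actual output:

```haskell
import Data.Char (chr, ord)

-- Stand-in for cp(add, Int): addition on integers.
cpAddInt :: Int -> Int -> Int
cpAddInt = (+)

-- Stand-in for cp(add, Char): the specialized call add ⟨Int⟩ has become
-- a direct reference to the Int component.
cpAddChar :: Char -> Char -> Char
cpAddChar x y = chr (cpAddInt (ord x) (ord y))
```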
To be more precise: a component for a type argument involving free variables expects additional arguments – one for each combination of variable and function among the dependencies. In our example, there is one variable, α, and one dependency, add, so there is one additional argument, which we will succinctly call cp(add, α), because it tells us how to add values of type α and abstracts from an unknown component. The component that is generated will be

  cp(add, [ ]) cp(add, α) x y
    | length x == length y = map (uncurry cp(add, α)) (zip x y)
    | otherwise            = error "args must have same length" .

In this example, the type-indexed function add depends on itself – we say that it is reflexive. This reflexivity occurs frequently with type-indexed functions, because it corresponds to the common case that a function is defined using direct recursion. Still, type-indexed functions can depend on arbitrary other type-indexed functions. These can, but do not have to, include the function itself.

Dependencies of type-indexed functions are reflected in their type signatures. Previously, add had the type

  add ⟨a :: ∗⟩ :: a → a → a .

This type is no longer adequate – we use a new syntax,

  add ⟨a :: ∗⟩ :: (add) ⇒ a → a → a .

In addition to the old type, we store the dependency of add on itself in the type signature. This type signature is a formal way to encode all type information about add that is necessary. In general, the type signature of a type-indexed function consists of the name of the function and its type argument to the left of the double colon ::, and a list of function names that constitute dependencies of the function, followed by a double arrow ⇒ and the function's base type, to the right of the double colon. Sometimes, we use the term type signature to refer only to the part to the right of the double colon. According to the definitions above, the function add has one dependency: the function add itself. The base type of add is a → a → a.
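In plain Haskell, the extra argument cp(add, α) is simply an ordinary function parameter. A self-contained sketch (the name cpAddList is ours):

```haskell
-- Stand-in for cp(add, [ ]): the dependency of add on itself arrives as
-- an explicit first argument, playing the role of cp(add, α).
cpAddList :: (a -> a -> a) -> [a] -> [a] -> [a]
cpAddList cpAddAlpha x y
  | length x == length y = map (uncurry cpAddAlpha) (zip x y)
  | otherwise            = error "args must have same length"
```

Its type makes the dependency constraint visible: before we can add lists of a, we must be told how to add values of a.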
This generalized form of type signature with a list of dependencies still does not enable us to cover the types of all generic functions we would like to write. We will extend upon the material of this chapter later, in Chapter 9.

There is an algorithm that allows us – given the type signature – to determine the type of add ⟨A⟩ for any type argument (or pattern) A. This algorithm, called gapp, will be discussed in detail in Section 6.3. Now, let us look at some example types for specific applications of add: for a constant type argument such as Int or [Char], the dependencies are ignored, and the types are simply

  add ⟨Int⟩    :: Int → Int → Int
  add ⟨[Char]⟩ :: [Char] → [Char] → [Char] .

These types are for specific instances of the type-indexed function, and they can be derived automatically from the above type signature. In general, for any type argument A that is free of dependency variables, we have

  add ⟨A⟩ :: A → A → A .

Dependencies have an impact on the type of a generic application once variables occur in the type argument. For instance, if the type argument is [α], the resulting type is

  add ⟨[α]⟩ :: ∀a :: ∗. (add ⟨α⟩ :: a → a → a) ⇒ [a] → [a] → [a] .

We call the part in the parentheses to the left of the double arrow a dependency constraint. In this case, add ⟨α⟩ :: a → a → a is a dependency constraint. This means that we can assign the type [a] → [a] → [a] to the call, but only under the condition that we know how add ⟨α⟩ is defined, and that add ⟨α⟩ is of type a → a → a, which is the base type of add. Again, this is the type for the generic application add ⟨[α]⟩, and the type can be derived from the type signature for add given above using the gapp algorithm from Section 6.3.

Dependency constraints are comparable to type class constraints (Wadler and Blott 1989; Jones 1994) in Haskell, or perhaps even better to constraints for implicit parameters (Lewis et al. 2000).
A dependency constraint encodes an implicit argument that must be provided. We will see later that in the translation to the core language fcr, these implicit arguments are turned into explicit function arguments.

Recall the definition of add for [α]:

  add ⟨[α]⟩ x y
    | length x == length y = map (uncurry (add ⟨α⟩)) (zip x y)
    | otherwise            = error "args must have same length" .

On the right hand side, there is a call to add ⟨α⟩. This occurrence of add ⟨α⟩ has the type

  ∀a :: ∗. (add ⟨α⟩ :: a → a → a) ⇒ a → a → a .

The call has the type a → a → a, but at the same time introduces a dependency on add ⟨α⟩ of type a → a → a. This reflects the fact that the generic application add ⟨α⟩ only makes sense in a context where the missing information is provided somehow. The translation refers to cp(add, α), which must be in scope.

The whole right hand side of the definition – because it is the arm of add for [α] – must have the aforementioned type for add ⟨[α]⟩, which is

  ∀a :: ∗. (add ⟨α⟩ :: a → a → a) ⇒ [a] → [a] → [a] .

The right hand side thus may depend on add ⟨α⟩. The type pattern of the arm eliminates the dependency. The translation will provide the function argument cp(add, α), which is then in scope for the right hand side of the component.

In general, we say that dependency constraints are introduced by a call to a generic function on a type argument involving variables, and they are eliminated or satisfied by a type pattern in the definition of a type-indexed function. For now, type patterns are the only way to eliminate dependency constraints. This implies that there is not much sense in calling type-indexed functions on type arguments with variables except while defining a type-indexed function. We will learn about another mechanism to eliminate dependency constraints in Chapter 8.

Figure 5.1 summarizes example types for generic applications of the add function, for different sorts of type arguments. In all cases, the metavariable A is
supposed to be free of variables, but we assume that it is of a different kind in each case: ∗, ∗ → ∗, ∗ → ∗ → ∗, and finally (∗ → ∗) → ∗, applied to variables of suitable kind to make the type argument as a whole a type of kind ∗. For each variable in the type pattern, there is one dependency constraint. We call the variables in the pattern (denoted by Greek letters) dependency variables.

  add ⟨A :: ∗⟩ :: A → A → A

  add ⟨A (α :: ∗) :: ∗⟩ ::
    ∀a :: ∗. (add ⟨α⟩ :: a → a → a) ⇒ A a → A a → A a

  add ⟨A (α :: ∗) (β :: ∗) :: ∗⟩ ::
    ∀(a :: ∗) (b :: ∗). (add ⟨α⟩ :: a → a → a, add ⟨β⟩ :: b → b → b)
      ⇒ A a b → A a b → A a b

  add ⟨A (α :: ∗ → ∗) :: ∗⟩ ::
    ∀a :: ∗ → ∗. (add ⟨α (γ :: ∗)⟩ :: ∀c :: ∗. (add ⟨γ⟩ :: c → c → c) ⇒ a c → a c → a c)
      ⇒ A a → A a → A a .

Figure 5.1: Types for generic applications of add to type arguments of different form

For each dependency variable, one type variable of the same kind is introduced. In the examples, α is associated with a, β with b, and γ with c. It may seem strange that we do not just use the same variables, but rather distinguish the quantified type variables from the dependency variables (we distinguish the two sorts of variables even on a syntactic level, which is why we use Greek letters to denote dependency variables). The reason is that in the more general situation that will be discussed in Chapter 9, we will allow multiple type variables to be associated with one dependency variable. Dependency variables are only allowed in type patterns of type-indexed function definitions and in type arguments in dependency constraints and calls to type-indexed functions – everywhere between the special type parentheses ⟨·⟩. If a dependency variable is of higher kind – as can be seen in the last example for α of kind ∗ → ∗ – the associated dependency constraint is nested: it introduces local dependency variables – such as γ in the example – and the type of the dependency is itself a dependency type.
How can this nested dependency type be understood? The call add ⟨A (α :: ∗ → ∗)⟩ is of type A a → A a → A a, but depends on an implicit argument called add ⟨α γ⟩, which is of type

  ∀c :: ∗. (add ⟨γ⟩ :: c → c → c) ⇒ a c → a c → a c .

This add ⟨α γ⟩ may thus itself depend on some function add ⟨γ⟩ of some type c → c → c, and must, given this function, be of type a c → a c → a c. Note that the form of the type that the dependency has is itself very similar to the type of the second example, where A is of kind ∗ → ∗. In theory, there is no limit to the kinds that may occur in the type arguments. However, more complex kinds rarely occur in practice.

The function add represents the most common case of dependency: add depends on itself, and on nothing else. This corresponds to a normal function which is defined by means of recursion on itself. But a type-indexed function need not depend on itself, or it may depend on other type-indexed functions. We have already seen a function of the first category, namely size (defined on page 53). Its type signature is

  size ⟨a :: ∗⟩ :: () ⇒ a → Int .

We often omit the empty list of dependencies and write

  size ⟨a :: ∗⟩ :: a → Int ,

which coincides with the old form of type signatures that we used when we did not know about dependencies. Note that although size calls itself recursively in the Tree arm,

  size ⟨Tree α⟩ (Node l x r) = size ⟨Tree α⟩ l + 1 + size ⟨Tree α⟩ r ,

it does not depend on itself. The position of a dependency variable in a type argument determines whether or not (and if yes, which) dependency constraints are needed: if the type argument is of the form A0 {Ai}i∈1..n, and A0 does not contain any further application, then A0 is called the head. We also write head(A) to denote the head of a type argument. In size ⟨Tree α⟩, the head of the type argument is Tree. In add ⟨α⟩, the head of the type argument is α.
If a dependency variable α is the head of A, then a call x ⟨A⟩ introduces a dependency constraint on x ⟨α⟩. If α occurs in A, but not as the head of A, then the call x introduces dependency constraints according to the dependencies of x: for each function yk that is a dependency of x, a dependency constraint on yk ⟨α⟩ is introduced.

Let us map this abstract rule to our examples. The definition of add (see page 54) contains, for instance, a call to add ⟨α⟩ on the right hand side of the case for add ⟨[α]⟩. The dependency variable α is in the head position in this call, therefore a dependency constraint on add ⟨α⟩ is introduced (amounting to a reference to cp(add, α) in the translation). As a result, add must depend on itself, because only then can the dependency constraint be satisfied. The translation will then provide the additional function argument cp(add, α) for the component cp(add, [ ]) resulting from the case add ⟨[α]⟩.

On the other hand, in the situation of size, the head of the type argument in the call size ⟨Tree α⟩ is Tree, and α occurs somewhere else in the type argument. Therefore, this application of size does not necessarily force a dependency for size ⟨α⟩. Instead, the call introduces dependency constraints for all functions that size depends on. But this is the empty set, thus the call introduces no dependencies, and everything is fine, i.e., type correct.

Intuitively, the type of the elements does not matter anywhere in the computation of the size of the data structure (at least, according to the way we defined size). We can generate a component cp(size, Tree) and specialize the recursive call without need for an additional argument:

  cp(size, Tree) (Node l x r) = cp(size, Tree) l + 1 + cp(size, Tree) r .

(In Section 8.2, we will learn about a better way to define size, as a generic function that does depend on itself.)
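Rendered as plain Haskell, the Tree component needs no dependency argument at all, because the element type is treated fully parametrically. We assume a Leaf arm returning 0, as in the definition of size on page 53; the name cpSizeTree is our own stand-in for cp(size, Tree):

```haskell
data Tree a = Leaf | Node (Tree a) a (Tree a)

-- Stand-in for cp(size, Tree): the recursive call is specialized directly,
-- with no extra argument for the element type.
cpSizeTree :: Tree a -> Int
cpSizeTree Leaf         = 0
cpSizeTree (Node l _ r) = cpSizeTree l + 1 + cpSizeTree r
```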
The fact that the function size is restricted to type arguments constructed with type constructors of kind ∗ → ∗ stems from its arms, not from its type. Consequently, a call to size ⟨Int⟩ results in a specialization error rather than a type error. From the type signature of size, which is

  size ⟨a :: ∗⟩ :: a → Int ,

we can derive example types for applications of size to different type arguments, just as we did in Figure 5.1 for add. This time, there are no dependencies, therefore the resulting types are simpler. The examples are shown in Figure 5.2. These types can once more be calculated automatically from the type signature using the gapp algorithm from Section 6.3.

  size ⟨A :: ∗⟩                   :: A → Int
  size ⟨A (α :: ∗) :: ∗⟩          :: ∀a :: ∗. A a → Int
  size ⟨A (α :: ∗) (β :: ∗) :: ∗⟩ :: ∀(a :: ∗) (b :: ∗). A a b → Int
  size ⟨A (α :: ∗ → ∗) :: ∗⟩      :: ∀(a :: ∗ → ∗). A a → Int .

Figure 5.2: Types for generic applications of size to type arguments of different form

It is perfectly possible to define an arm of size for type patterns involving type constructors of different kinds, such as pairs,

  size ⟨(α, β)⟩ = const 2 ,

or integers,

  size ⟨Int⟩ = const 0 .

Next, let us look at an example of a function that depends on another function. Suppose we want to define a partial order that works on lists and pairs and combinations thereof. We define two lists to be comparable only if they both have the same size and the elements are comparable pointwise. If the comparison yields the same result for all elements, then that is the result of the function.
For pairs, though, two elements are only comparable if the first components are equal, the result being the result of comparing the second components:

  data CprResult = Less | More | Equal | NotComparable

  cpr ⟨[α]⟩ x y
    | size ⟨[α]⟩ x == size ⟨[α]⟩ y =
        if size ⟨[α]⟩ x == 0
          then Equal
          else let p = zipWith (cpr ⟨α⟩) x y
               in  if allEqual ⟨[CprResult]⟩ p then head p else NotComparable
    | otherwise = NotComparable

  cpr ⟨(α, β)⟩ (x1, x2) (y1, y2)
    | equal ⟨α⟩ x1 y1 = cpr ⟨β⟩ x2 y2
    | otherwise       = NotComparable .

This function deliberately uses quite a number of other type-indexed functions, which we assume to be defined: size we already know, equal is a function to test two values for equality, and allEqual is a function to check for a list (and possibly other data structures) whether all elements stored in it are equal. Finally, cpr itself is used recursively. We assume that the type signatures for equal and allEqual are as follows:

  equal    ⟨a :: ∗⟩ :: (equal) ⇒ a → a → Bool
  allEqual ⟨a :: ∗⟩ :: (equal) ⇒ a → Bool .

Whereas equal is supposed to test two values of the same type for equality, the function allEqual is intended to be used on data structures – such as a list, [CprResult], in the definition of cpr – to test whether all elements of that data structure are equal.

The question is: what is the type signature for cpr? You may try to guess the answer before reading on, as an exercise for your intuition.

The situation of the calls to size is as in the Tree arm of size itself: no dependency is needed. The type argument [α] does contain a dependency variable, but not in the head. Therefore, because size does not depend on itself, the specialization to the list arm can be made without referring to the element type α.

Even more clearly, there is no dependency on allEqual. The call has a constant type argument, and the specialization to constant type arguments is always possible without additional arguments and hence never causes a dependency.
The function equal is a different story: in the arm for pairs, we call equal ⟨α⟩, therefore there is a dependency on equal. And, because cpr is called on α in the arm for lists, and on β in the arm for pairs, there is also a dependency on cpr. Hence, the type signature of cpr is

  cpr ⟨a :: ∗⟩ :: (equal, cpr) ⇒ a → a → CprResult .

Note that the type signature is not as fine-grained as one might expect: a dependency is a global property of a type-indexed function, not attached to some arm. Although the arm for lists does not depend on equal, the type for cpr ⟨[α]⟩ that is derived from the above type signature exhibits the dependency:

  cpr ⟨[α]⟩ :: ∀a :: ∗. (equal ⟨α⟩ :: a → a → Bool,
                         cpr   ⟨α⟩ :: a → a → CprResult)
               ⇒ [a] → [a] → CprResult .

Also, there is no distinction between different dependency variables. The arm for pairs only depends on cpr for the second component. Nevertheless, the type for cpr ⟨(α, β)⟩ contains the dependency on cpr for both components:

  cpr ⟨(α, β)⟩ :: ∀(a :: ∗) (b :: ∗). (equal ⟨α⟩ :: a → a → Bool,
                                       cpr   ⟨α⟩ :: a → a → CprResult,
                                       equal ⟨β⟩ :: b → b → Bool,
                                       cpr   ⟨β⟩ :: b → b → CprResult)
                  ⇒ (a, b) → (a, b) → CprResult .

Another look at type classes reveals again a certain connection, this time between dependencies and instance rules, i.e., instances that have a condition. If we had a type class with a method size, such as

  class Size a where
    size :: a → Int ,

we could define instances for lists and trees as follows:

  instance Size [a] where
    size x = length x

  instance Size (Tree a) where
    size Leaf         = 0
    size (Node l x r) = size l + 1 + size r .

The instance definitions are universal in the sense that they work for all lists and all trees, without any condition on the element type.
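The dependency constraints in the type for cpr ⟨(α, β)⟩ translate, as before, into one explicit argument per constraint. A plain-Haskell sketch of the resulting pair component (the argument order and the name cpCprPair are our own choices):

```haskell
data CprResult = Less | More | Equal | NotComparable
  deriving (Eq, Show)

-- Stand-in for cp(cpr, (,)): four dependency arguments, one for each of
-- equal ⟨α⟩, cpr ⟨α⟩, equal ⟨β⟩ and cpr ⟨β⟩. Only two are actually used,
-- mirroring the observation that the arm for pairs needs fewer
-- dependencies than the global type signature provides.
cpCprPair :: (a -> a -> Bool) -> (a -> a -> CprResult)
          -> (b -> b -> Bool) -> (b -> b -> CprResult)
          -> (a, b) -> (a, b) -> CprResult
cpCprPair equalAlpha _cprAlpha _equalBeta cprBeta (x1, x2) (y1, y2)
  | equalAlpha x1 y1 = cprBeta x2 y2
  | otherwise        = NotComparable
```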
However, if the class for the add function, defined in Section 4.2 as

  class Add a where
    add :: a → a → a ,

needed to be extended to lists in the same way as the type-indexed function, we must write

  instance Add a ⇒ Add [a] where
    add x y
      | length x == length y = map (uncurry add) (zip x y)
      | otherwise            = error "args must have same length" .

The fact that the type-indexed function depends on itself is mirrored in the constraint on the instance declaration: an Add instance for [a] can only be defined if a is an instance of class Add as well.

5.4 Type application in type arguments

Assuming we have successfully defined a type-indexed function such as add to work on lists, it is only natural that we also want to use it somewhere. In Chapter 4, a generic application had the form x ⟨T⟩, i.e., the language of type arguments was restricted to named types.

We are going to extend the syntax for type arguments (as opposed to type patterns) to include type application. We have already used such type arguments in the examples in the previous sections; for instance, the definition of size on page 53 contains the call size ⟨Tree α⟩ on the right hand side of the case for trees. Perhaps more interesting, the function cpr uses the call allEqual ⟨[CprResult]⟩, thus an application of two named types to each other.

A comparable situation would be a call to add ⟨[Int]⟩. For example, the call add ⟨[Int]⟩ [1, 2, 3] [2, 3, 5] should have type [Int] and evaluate to [3, 5, 8]. The question is, how is add specialized to [Int]? The answer is that an application in the type argument is translated into an application of specializations. There is a definition for add ⟨[α]⟩ – we thus know how to add two lists, provided that we know how to add list elements, i.e., that we have access to add ⟨α⟩. We have also defined the case add ⟨Int⟩, therefore we also know how to add two integers.
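The class-based version really does work in standard Haskell. Here it is as a self-contained, compilable sketch; the instance for Int is an assumption of ours, matching the Int arm of the type-indexed add:

```haskell
-- The generic add recast as a type class; the self-dependency of the
-- type-indexed function reappears as the `Add a =>` instance constraint.
class Add a where
  add :: a -> a -> a

instance Add Int where
  add = (+)

instance Add a => Add [a] where
  add x y
    | length x == length y = map (uncurry add) (zip x y)
    | otherwise            = error "args must have same length"
```

Resolving the constraint Add a when the instance Add [a] is used is precisely the job that, for type-indexed functions, is done by supplying dependency arguments during specialization.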
It is pretty obvious that these two cases can be combined to get to add ⟨[Int]⟩, by supplying add ⟨Int⟩ for the dependency on add ⟨α⟩ that is required by add ⟨[α]⟩.

In the translation, dependencies are turned into explicit function arguments. It has already been sketched that the component for add on [ ] looks as follows:

  cp(add, [ ]) cp(add, α) x y
    | length x == length y = map (uncurry cp(add, α)) (zip x y)
    | otherwise            = error "args must have same length" .

The dependency of add on itself is translated into an explicit additional argument. This argument can now be supplied using the component of add for Int, such that add ⟨[Int]⟩ can be specialized to

  cp(add, [ ]) cp(add, Int) .

This is the general idea: dependencies reappear in the translation as explicit arguments. If a type-indexed function is called with a complex type argument that contains type application, the translation process automatically fills in the required dependencies. Thus, a call to cpr ⟨[Int]⟩ requires two dependencies to be supplied and is translated to

  cp(cpr, [ ]) cp(equal, Int) cp(cpr, Int) ,

under the precondition that both equal and cpr have arms for Int. If that is not the case, such as in our definition of cpr above, a specialization error is reported.

With the help of the dependencies in the types of type-indexed functions, we can prove the translation technique sketched above correct. In other words, specializing a generic application to an application of specializations will always produce a type correct term in fcr, and the type of that term is related to the dependency type the term has in the original language that supports type-indexed functions. The proof will be given in Section 6.5. For the programmer, this correctness means that type-indexed functions can be used "naturally", even for complex type arguments, because all plugging and plumbing happens under the hood.
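The specialization of add ⟨[Int]⟩ is then literally an application of one component to another. A self-contained sketch, repeating the two components under our illustrative stand-in names:

```haskell
-- Stand-ins for cp(add, Int) and cp(add, [ ]).
cpAddInt :: Int -> Int -> Int
cpAddInt = (+)

cpAddList :: (a -> a -> a) -> [a] -> [a] -> [a]
cpAddList cpAddAlpha x y
  | length x == length y = map (uncurry cpAddAlpha) (zip x y)
  | otherwise            = error "args must have same length"

-- Specialization of add ⟨[Int]⟩: supply the Int component for the
-- dependency argument expected by the list component.
addListInt :: [Int] -> [Int] -> [Int]
addListInt = cpAddList cpAddInt
```

As promised above, addListInt [1, 2, 3] [2, 3, 5] evaluates to [3, 5, 8].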
The translation method just outlined also implies a couple of restrictions that should hold for dependencies. If, for instance, the call add ⟨[Int]⟩ is translated to cp(add, [ ]) cp(add, Int), then the type of cp(add, Int) must match the type of the argument expected by cp(add, [ ]). This explains why, in the examples above, the types that appear in the dependency constraints are always related to the base type of the function in question.

Furthermore, the dependency relation is transitive. Assume that function x depends on both x and y, and function y depends on both y and z. If T1 and T2 are two named types, and the call x ⟨T1 (T2 α)⟩ occurs on the right hand side of the definition of an arm of x, this call would be – incorrectly, as we shall see below – translated to

  cp(x, T1) (cp(x, T2) cp(x, α) cp(y, α)) (cp(y, T2) cp(y, α) cp(z, α)) .

The type argument is an application of T1 to T2 α, therefore the translation is an application of the component of x for T1 to the dependencies of x on the argument. The function x has two dependencies, x and y, thus cp(x, T1) has two arguments, corresponding to the calls x ⟨T2 α⟩ and y ⟨T2 α⟩. In both cases the type argument is again an application, and in both cases the function that is called has again two dependencies, therefore both arguments are themselves applications of a component, each to two arguments. But the function y depends on z, therefore the call to cp(z, α) occurs in the translation! This means that the compiler must provide not only cp(x, α) and cp(y, α), but also cp(z, α) in this component of x. As we have said before, we choose not to have the added complexity of different dependencies depending on the arm we are defining, thus the only solution is to make x directly depend on z as well. The price we pay for this transitivity condition is that sometimes unnecessary dependencies are carried around.
The correct translation of x ⟨T1 (T2 α)⟩, for example, is

  cp(x, T1) (cp(x, T2) cp(x, α) cp(y, α) cp(z, α))
            (cp(y, T2) cp(y, α) cp(z, α))
            (cp(z, T2) cp(z, α)) ,

assuming that z depends only on itself.

6 Dependencies

This chapter constitutes a formalization of the language extensions that have been introduced in the previous Chapter 5. We will introduce a new language fcr+tif+par, based on fcr+tif, that can handle parametrized type patterns and type-indexed functions with dependencies. The syntax of the language is introduced in Section 6.1. Subsequently, we delve into the details of dependencies and discuss dependency variables in Section 6.2 and dependency types in Section 6.3. We will then explain how to translate programs in the extended language fcr+tif+par to programs in fcr, in Section 6.4. In Section 6.5, we discuss why the translation is correct. We conclude with Section 6.6, which puts the theory covered in this chapter in relation with Ralf Hinze's work on generic programming.

6.1 Core language with parametrized type patterns

We will now extend the language fcr+tif of Figure 4.1 – our functional core language with type-indexed functions – to cover the extensions necessary to handle the new forms of type patterns and type arguments, and dependencies between type-indexed functions.

  Qualified types
    q ::= {∀ai :: κi.}i∈1..n (∆) ⇒ t           qualified type

  Constraint sets
    ∆ ::= {Yi}i∈1..n                           constraint set

  Type constraints
    Y ::= x ⟨α0 {(αi :: κi)}i∈1..n⟩ :: q       dependency constraint

  Type patterns
    P ::= T {αi}i∈1..n                         parametrized named type pattern

  Type arguments
    A ::= T                                    named type, from Figure 4.1
        | α, β, γ, . . .                       dependency variable
        | (A1 A2)                              type application

  Type signatures
    σ ::= ({yk}k∈1..n,) ⇒ t                    type signature of type-indexed function

Figure 6.1: Core language with type-indexed functions and parametrized type patterns fcr+tif+par, extends language fcr+tif in Figure 4.1
The extended language is called fcr+tif+par. The additional syntactic constructs are shown in Figure 6.1.

Dependency variables are a new form of type variables used in the context of dependencies. We distinguish them by using lowercase Greek letters (α, β, γ) for them. Dependency variables are the topic of Section 6.2.

A qualified type is a type with dependency constraints. A dependency constraint is of the form

  x ⟨α0 {(αi :: κi)}i∈1..n⟩ :: q

and consists of the name of a type-indexed function x, a type argument consisting of dependency variables, and a qualified type. The constraint expresses a dependency on function x at dependency variable α0. Depending on the kind of the dependency variable α0, there may be further dependency variable parameters, with kind annotations, which are local to the constraint and scope over the qualified type q only. Dependency constraints may appear nested in the sense that the type in a particular constraint can be qualified again. However, all constraints are always at the beginning of a type. Type variables may be universally quantified in a qualified type.

A set of constraints is much like an environment. For a specific combination of type-indexed function x and dependency variable α0, there may be at most one constraint. The order of the constraints is unimportant – in other words, we treat types that differ only in the order of dependency constraints as equivalent.

To keep forthcoming algorithms deterministic, and to save us from a lot of reordering hassle during the definition of the translation to fcr, we will now define a canonical ordering on dependency constraints, and can henceforth assume that qualified types appear with their constraints ordered when needed. We assume that there is a total order