/bsp/ - bsp

Someone's Office

DL2 Thread bsp Board owner 07/11/2025 (Fri) 07:53:29 Id: a6d79a No. 12
DL2 is a functional programming language and intermediate representation based on System F-Omega. DL2 extends System F-Omega with iso-recursive types, recursive functions and sum-of-existential-product types. This variant of System F-Omega is representative of a large number of typed programming languages. This makes DL2 an attractive compilation target, as many languages can be compiled directly to DL2. DL2 is intentionally runtime-agnostic so that it can be compiled to many targets or embedded in other languages.
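(For a rough idea of the shape such an IR takes, here is a toy OCaml-style reconstruction from the descriptions in this thread; it is not DL2's actual definition, and all constructor names are illustrative.)

(* Toy reconstruction of a System-F-Omega-style IR with iso-recursive
   types, recursive functions and sum-of-existential-product types.
   Names follow the forms mentioned later in the thread. *)

type kind =
  | KStar
  | KArrow of kind * kind

type ty =
  | TVar of string
  | TCon of string
  | TArrow of ty * ty
  | TForall of string * kind * ty            (* universal quantification *)
  | TExists of string * kind * ty            (* existential, for sum-of-existential-products *)
  | TSum of ty list                          (* sum of products *)
  | TProd of ty list
  | TMu of string * ty                       (* iso-recursive type *)
  | TLam of string * kind * ty               (* type-level lambda (F-Omega) *)
  | TApp of ty * ty                          (* type-level application *)

type term =
  | Var of string
  | Const of string
  | Lambda of (string * ty) list * term              (* constructor *)
  | RecLambda of string * (string * ty) list * term  (* recursive function *)
  | Apply of term * term list                        (* destructor *)
  | Choose of int * term list * ty                   (* sum-of-products introduction *)
  | Case of term * (string list * term) list         (* sum-of-products elimination *)
  | IsoWrap of term * ty                             (* roll an iso-recursive type *)
  | IsoUnwrap of term                                (* unroll an iso-recursive type *)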
(245.65 KB lambda-prolog.pdf)

I've been looking at Hindley-Milner type inference for DL2. My initial expectation for DL2 was that macros could make it practical even without type inference. While it is possible to use DL2 without type inference, it wound up being untenable from a user perspective:
>polymorphic functions require giving the whole type, including all polymorphic constructors
>macros can only slightly help the previous issue, since they have to be included per-function
>when using stack-recursive DL2, the looping mechanism requires detailing the type
>complicated and verbose types do appear in practice
>type abbreviations quickly become excessive
The problem is that the general case of full type inference for System F-Omega, and therefore for DL2, requires an undecidable form of higher-order unification. The designs of Standard ML and Haskell are strongly influenced by their commitments to their chosen type inference mechanisms. Given this issue, I can't help but think that I made the right choice in not specifying type inference as part of the design of DL2.
My current plan for implementing DL2 type inference is to use unification that delays any reduction it can't immediately resolve. This should yield a set of constraints that is agnostic to any higher-order unification strategy. When first-order unification can't resolve all type variables, it will produce an error message instead of attempting to synthesize an appropriate term. Limiting the terms that can be inferred by avoiding such term synthesis seems like the most reasonable trade-off given the general undecidability of the problem. The expectation is that the desired type inference will almost never need a synthesized term. The Lambda Prolog paper attached seems to describe such an approach in its introduction, so I should probably read it closely.
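A toy sketch of that deferred-reduction unification (OCaml-style, illustrative names only, not the real implementation): structural first-order unification proceeds as usual, and any pair involving a type-level application is pushed onto a constraint list instead of being solved eagerly.

(* Toy sketch, not DL2's implementation: first-order unification over a
   small type language; pairs involving type-level application (the
   higher-order cases) are deferred as constraints instead of solved. *)
type ty =
  | TVar of int                (* unification variable *)
  | TCon of string             (* type constant, e.g. "Int" *)
  | TArrow of ty * ty
  | TApp of ty * ty            (* type-level application, F-Omega style *)

type pending = Defer of ty * ty

let subst : (int, ty) Hashtbl.t = Hashtbl.create 16

let rec resolve t =
  match t with
  | TVar v -> (match Hashtbl.find_opt subst v with
               | Some t' -> resolve t'
               | None -> t)
  | _ -> t

let rec occurs v t =
  match resolve t with
  | TVar v' -> v = v'
  | TArrow (a, b) | TApp (a, b) -> occurs v a || occurs v b
  | TCon _ -> false

let rec unify deferred t1 t2 =
  match resolve t1, resolve t2 with
  | TVar v, t | t, TVar v ->
      if t = TVar v then ()
      else if occurs v t then failwith "occurs check failed"
      else Hashtbl.replace subst v t
  | TCon a, TCon b when a = b -> ()
  | TArrow (a1, b1), TArrow (a2, b2) ->
      unify deferred a1 a2; unify deferred b1 b2
  | (TApp _ as a), b | a, (TApp _ as b) ->
      (* can't reduce yet: record the pair for a later strategy *)
      deferred := Defer (a, b) :: !deferred
  | _ -> failwith "type mismatch"

Whatever remains in the deferred list is exactly the residue that would need a higher-order strategy; per the plan above, it would either be handed to such a strategy or reported as an error.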
>>13
Actually, I think I can get decent leverage out of regular first-order Hindley-Milner. If I work more on modules, I won't need existential products for things like type-classes; the result will only need ordinary unification, and I can keep everything fast for the prototype. It's a half-solution, but one that I know can be easily implemented and that will leave things open enough to allow full higher-order type inference. Even in Haskell, most programs don't benefit from the full System F-Omega. If I want to have type-classes with ordinary Hindley-Milner, I can "manually" register the necessary existential products in a separate registry, the same way Haskell does. I can use constraint unification to churn through type-class instances after the fact, attempting each appropriate member and backtracking on failure. This reduces the pressure to have "real" type-classes using DL2's existential products directly. Real existentials in the front-end can wait for now.
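A toy sketch of that kind of after-the-fact instance search (OCaml-style, illustrative names; occurs checks and fresh renaming of instance variables are omitted): try each registered instance head, unify, recursively resolve the instance's own context, and backtrack by discarding the failed substitution.

(* Toy sketch of backtracking instance resolution over a first-order
   type language; not DL2's registry. *)
type ty = TVar of string | TCon of string * ty list

(* e.g. { head = Show (List a); context = [Show a] } *)
type instance = { head : ty; context : ty list }

let rec apply s t =
  match t with
  | TVar v -> (match List.assoc_opt v s with Some t' -> apply s t' | None -> t)
  | TCon (c, args) -> TCon (c, List.map (apply s) args)

let rec unify s t1 t2 =
  match apply s t1, apply s t2 with
  | TVar v, t | t, TVar v ->
      if t = TVar v then Some s else Some ((v, t) :: s)
  | TCon (c1, a1), TCon (c2, a2)
    when c1 = c2 && List.length a1 = List.length a2 ->
      List.fold_left2
        (fun acc x y -> Option.bind acc (fun s -> unify s x y))
        (Some s) a1 a2
  | _ -> None

(* Substitutions are threaded functionally, so backtracking is just
   falling through to the next registered instance. *)
let rec resolve registry s goal =
  List.find_map
    (fun inst ->
      Option.bind (unify s goal inst.head) (fun s' ->
        List.fold_left
          (fun acc c -> Option.bind acc (fun s -> resolve registry s c))
          (Some s') inst.context))
    registry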
I was thinking about how to refer to the syntactic elements of DL2. For a while, I've been using "terms" ambiguously and referring to "value-level terms" in the working documentation, and that's confusing, since "value" already has a particular, different meaning. Instead, I think I should use the following convention:
- DL2 forms: the basic syntactic elements of DL2
- DL2 terms: a form which has a type, meaning it has a judgment "[: t]" for some type t
- DL2 types: a form which has a kind, meaning it has a judgment "[:: k]" for some kind k
- DL2 kinds: a form which is a kind, meaning it has the terminal judgment ":::"
This cleanly suggests the "term : type : kind" concept, which is a very regular idea and therefore easy to remember. The word "form" is more neutral than "term", and has the connotation of "lexical" rather than "semantic", so it is less confusing. This leaves "expression" out on its own. I think the best way to use "expression" is in contrast with "value", and to let the two sit independently of the other terms, with the only relation being that expressions and values are still assumed to be forms or be represented by forms. So in addition to the above conventions:
- DL2 value: a closed, valid DL2 form which is normalized according to some appropriate definition of "value normalization", which is potentially specific to a particular DL2 system
- DL2 expression: a closed, valid DL2 form, which may or may not be normalized according to "value normalization"
This would then allow discussing "term values", "type values" and "kind values" alongside "term expressions", "type expressions" and "kind expressions". An example of this usage is the simple theorem, following immediately from the fact that there are no reduction rules for kinds: "All kind expressions are kind values." This also clears up the ugly notion of "type constructors" common in the literature, by allowing the concept of "type expressions" as an alternative.
While working on DL2's first HM type inference, I wanted to put down some notes on bidirectional typing. Bidirectional typing is a system which combines type inference with type checking. Specifically, it augments type checking with rules for exactly when types can be synthesized or inferred. The hope is to get a simple implementation of type inference which matches the straightforwardness of type checking. The attached paper goes into the theoretical reasons, but ignores some that might motivate implementers:
>the checker-inferencer's structure is obvious and easy to write: it is syntax-directed, like many functional programs, and can be written with the natural recursive structure
>it does not require unification for STLC inference
>error messages can be reported exactly where they're discovered
>as described below, a "bare lambda" is "unlikely" to occur in the function position of an apply, so STLC inference will almost always succeed, requiring only the declarations of the types of each relevant constant
The typical approach (the "Pfenning recipe", according to the paper) is to have constructors check and to have constants, variables and destructors synthesize. In the specific case of "lambda" and "apply", this means that "lambda" is checked and "apply" synthesizes. This strategy then directs the rest of the rules. In the case of apply, synthesis requires that the function's type is synthesized, but once the function type is synthesized the argument only needs to be checked. The types of all variables are known up-front, since lambda can only ever be checked.
There's an obvious issue with this approach: what do you do about instances of polymorphic constants? One of the most important uses of type inference in practice is to allow using polymorphic constants without supplying types. The recipe described does not, on its own, allow for inferring the type arguments to a polymorphic constant, even when restricting to only prefix-polymorphic constants. It's necessary to re-introduce type variables and unification. Of course, once you have type variables and unification, you already have a major component of Hindley-Milner type inference (i.e. "the hard part"). At that point, the obvious question is: "why bother if HM does everything?"
I'm a skeptic regarding the approach, but in relation to DL2 there are some interesting aspects. The work on bidirectional typing is a good hint as to what approaches are likely to work for practical type inference. Since DL2 is more similar to GHC's Core than to Haskell itself, DL2 can accept many possible type inference mechanisms. Both bidirectional typing and unification-based inference bear similarities to logic programming without non-determinism or with limited non-determinism. Bidirectional typing tends to follow the hints of logic programming, specifically in suggesting that there are multiple modes of type-checking. At no point in the "original" bidirectional typing is it necessary to have either non-determinism or unification, but modes from logic programming are still relevant. Meanwhile, unification-based inference takes complete expressions and uses only unification to derive the most general type. Bidirectional typing's error-checking utility is another reason why it's interesting from a DL2 perspective. The restricted format of bidirectional typing means that error-checking is guaranteed to involve local judgments, unless extended with unification. There may or may not be something to take from there.
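For reference, the recipe for plain STLC is small enough to sketch in full (OCaml-style toy code, illustrative names): lambda only checks, variables, constants and apply synthesize, and there is no unification anywhere.

(* Toy sketch of the check/synthesize split for STLC: constructors
   (lambda) check; constants, variables and destructors (apply)
   synthesize. *)
type ty = TCon of string | TArrow of ty * ty

type term =
  | Var of string
  | Const of string * ty          (* constants carry their declared type *)
  | Lam of string * term          (* no annotation: lambda is always checked *)
  | App of term * term

exception Type_error of string

(* synthesis: env |- term => ty *)
let rec synth env = function
  | Var x ->
      (try List.assoc x env with Not_found -> raise (Type_error ("unbound " ^ x)))
  | Const (_, t) -> t
  | App (f, a) ->
      (match synth env f with
       | TArrow (dom, cod) -> check env a dom; cod
       | _ -> raise (Type_error "applying a non-function"))
  | Lam _ -> raise (Type_error "cannot synthesize a bare lambda")

(* checking: env |- term <= ty *)
and check env tm expected =
  match tm, expected with
  | Lam (x, body), TArrow (dom, cod) -> check ((x, dom) :: env) body cod
  | Lam _, _ -> raise (Type_error "lambda checked against a non-arrow type")
  | _ ->
      (* mode switch: synthesize and compare *)
      if synth env tm <> expected then raise (Type_error "type mismatch")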
When working on DL2's HM-based macro-expander, I thought about how to work with function application. It occurred to me that, if the macro-expander took an input "return type", then it would be possible to synthesize a non-polymorphic function type from just the arguments, without generating a new type variable. This made me think about bidirectional typing and find the above paper. This idea doesn't seem entirely distinct from bidirectional typing, especially the "simultaneous input and output" system described in the attached paper. DL2 macro-expanders also inherently suggest a requirement for locality, which is one of the strengths of bidirectional typing. DL2 type-aware macro-expansion may be able to take advantage of incoming type information. It would be possible to supply variable types in binders for "match" and "case" without resorting to unification. The use of simultaneous input and output typing would allow for "un-satisfiable input type" errors, at least. Different type inference scenarios could have different specific macros. Type annotations could be used by users to direct macro behavior, even non-locally.
The idea of such a "bi-HM" DL2 is more of a curiosity than an actual long-term plan. It might end up that its only utility is in eliminating unification, or there might be some obscenely clever technique that lets it provide high-quality errors. It might suggest a "combined meta-theory" of type inference that has little utility in practice. Some reasons to think this might not be useful:
>should "checked apply" expand its arguments in a different order from "regular apply" just to avoid unification?
>would abuse of type information make the system unpredictable at a user level?
>could trying to be too clever with error messages make them worse in practice?
>does knowing extra type information help much in practice?
I'm half-considering trying a variant DL2 macro-expander and set of macros operating primarily according to Pfenning's recipe, and using it as a comparison point for error handling with other macro-expanders, although I'd have to find a decent set of "incorrect programs" to put the different systems through their paces.
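A toy sketch of the application case (OCaml-style, illustrative names, not the actual dl2-expand interface): the caller optionally passes an expected type, and apply builds the function type it needs from the argument types plus that expected result, so no fresh result variable is minted.

(* Toy sketch of "simultaneous input and output" typing for apply.
   Unification is stubbed out; only the variable-generation pattern is
   shown. *)
type ty = TVar of int | TCon of string | TArrow of ty list * ty

type term =
  | Var of string
  | App of term * term list

let fresh = let n = ref 0 in fun () -> incr n; TVar !n

(* stand-in: a real version would record the equation for later solving *)
let unify _t1 _t2 = ()

let rec infer env expected tm =
  match tm with
  | Var x ->
      let t = List.assoc x env in
      (match expected with Some e -> unify t e | None -> ());
      t
  | App (f, args) ->
      let arg_tys = List.map (infer env None) args in
      (* with an expected type supplied, no fresh variable is needed for
         the result of the application *)
      let result = match expected with Some e -> e | None -> fresh () in
      let _ = infer env (Some (TArrow (arg_tys, result))) f in
      result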
>>17
While working on DL2, I realized two very good arguments in favor of this "bi-HM" system. First, it can be used to supply binder argument judgments preemptively. This became apparent while working on case, but I believe this would also come up in pattern-matching. The issue was that the case argument types were computed separately from the non-case argument types. Second, when expanding some programs the type is known outright, and it's easier to just generate the code directly rather than rely on the call macro to do it. In this case, it's useful to have a helper variant of the expand function that lets the user supply the (sometimes partial) type for unification after the fact. I noticed this while working on the DL2 C FFI, specifically for casting C arithmetic types to and from DL2's word type. I realized that most of my existing options weren't very good for that particular case, while immediate generation would be simple with only the option to supply the appropriate type to dl2-expand. Supplying this type directly is also simpler than generating the macro call for a type-check. In other words, I'm likely to end up with a variant of dl2-expand which does take the type directly.
I also learned recently that Coq refers to its own type system as using "bidirectional typing" by name: https://rocq-prover.github.io/platform-docs/rocq_theory/explanation_bidirectionality_hints.html
That said, it still seems silly in an HM context to use bidirectional typing as Coq's documentation implies. Type variables in HM mean that every input type is a potential output type simply by providing a new variable. From this perspective, there's no difference between type inference and type checking. The whole implementation of bidirectional typing in an HM context is simply allowing an input type to be submitted. This also corresponds well with a logic programming interpretation of type-checking rules. The primary reason to separate input and output when using bidirectional typing with HM seems to be to avoid generating and unifying unnecessary type variables. I know that it's possible to optimize this generation and unification using HM-level matching, so that almost every form will have at most one type variable created for this purpose. In particular:
>lambda generates at most one variable for each argument and at most one variable for the return, and destructs its input variable
>apply generates at most one variable for each argument and re-uses the input variable
>constructors generate at most one variable for each argument and destruct the input variable
>case generates one variable for the choose type, then destructs the result for input variables to each of its bindings, then re-uses the input variable for each of those
I think that, in practice, this is likely to be a small overhead most of the time, or may be significantly faster, due to unification being skipped by sharing variables rather than unifying them. This is in addition to the potential for better error-checking due to having type information readily available. Based on this, I think I can start recommending the bi-HM approach to type inference.
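The lambda rule is the clearest instance of "destruct the input", so here's a toy sketch of it (OCaml-style, illustrative names): if the expected type is already an arrow of the right arity, its pieces are reused directly; otherwise fresh variables are minted exactly once and unified against it.

(* Toy sketch of HM-level matching on the expected type for lambda. *)
type ty = TVar of int | TCon of string | TArrow of ty list * ty

let fresh = let n = ref 0 in fun () -> incr n; TVar !n
let unify _t1 _t2 = ()   (* stand-in for real unification *)

(* Given the expected type of a lambda with n_params parameters, recover
   the parameter types and return type. *)
let destruct_arrow expected n_params =
  match expected with
  | TArrow (params, ret) when List.length params = n_params ->
      (params, ret)                      (* no fresh variables at all *)
  | _ ->
      let params = List.init n_params (fun _ -> fresh ()) in
      let ret = fresh () in
      unify expected (TArrow (params, ret));
      (params, ret)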
Now that I'm back to working on the DL2->C transpiler, I've had to think more about A-Normal Form (ANF). ANF is a program form in which expressions are divided into statements and atomic expressions. The primary result of ANF conversion is to separate and order statements, yielding sequential "blocks" of statements bound to variables, with a terminating statement or atomic expression. Some discussion and history of the name ANF: https://web.archive.org/web/20250415115410/https://www.williamjbowman.com/blog/2022/06/30/the-a-means-a/
>Parametric ANF
For DL2 there are functions which should be considered "atomic", and these vary by compilation context. Arithmetic, for example, has specific optimizations that are best done on whole expressions. For this reason, it's best for arithmetic expressions to be collected together as completely as possible for optimization. Similarly, there are functions which should never be duplicated. Inherently expensive functions like make-vector and repeat can be expected to fall in this set. Some are more ambiguous: length and index vector operations are cheap but require access to the relevant vector, and can therefore be considered important for resource semantics, so that re-ordering such expressions is questionable. However, these vectors also have their own specific optimizations. To deal with this, I suggest that DL2-ANF should be parametrically defined over DL2: the compiler's user should be able to supply a predicate to examine whether a specific DL2 expression is an atomic expression or not. If a DL2 expression is in fact atomic, then ANF will embed it freely in other expressions. (A toy sketch of this is at the end of this post.)
>DL2-ANF as Restricted DL2
Originally, I thought that DL2-ANF should stand separately from DL2, but during implementation I've come to think that DL2-ANF should actually be a form of DL2. Specifically, DL2-ANF should come from allowing "block-form" bodies in lambda expressions.
- Block-form bodies consist of nested "lambda-apply" constructs, where each application's arguments consist of statements, terminating in either a statement or an atomic expression.
- Statements are either a case of some atomic expression over some blocks, or a non-atomic application whose arguments consist only of atomic expressions.
- Atomic expressions are variables, constants, kinds, types, choose constructions and a subset of additional expressions specified by the user.
>ANF Types Pain Point
One of the more painful issues in ANF is the practical necessity to provide types for statements. This means that ANF needs to carry and return type information as it operates. Although it is technically possible to avoid this, doing so carries its own troubles. Deriving types for intermediate terms has to happen somewhere in many compilation pipelines; for a C back-end in particular, it is unavoidable. In the case of C, there's an argument for pushing this work toward the end of the pipeline, simply to avoid type substitution and normalization. In the particular case of ANF, it isn't sufficient to use the regular procedures. The problem is that the easiest way to implement ANF involves temporarily using a variant of DL2 with unique IDs for variables, which I call DL2-ID. It is also preferable that type normalization be as fast as possible.
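Here's the toy sketch of parametric ANF promised above (OCaml-style, illustrative names, over a small expression language rather than DL2): the conversion takes the atomicity predicate as a parameter and only lifts non-atomic calls into named statements.

(* Toy sketch of ANF conversion parameterized by an is_atomic predicate. *)
type expr =
  | Var of string
  | Const of int
  | App of string * expr list                 (* call to a named function *)
  | Let of string * expr * expr               (* the "statement binding" form *)

let counter = ref 0
let gensym () = incr counter; Printf.sprintf "tmp%d" !counter

(* anf is_atomic e k: convert e, pass an atomic expression for its value
   to the continuation k, and wrap the result in Lets as needed. *)
let rec anf is_atomic e k =
  match e with
  | Var _ | Const _ -> k e
  | App (f, args) ->
      anf_list is_atomic args (fun atomic_args ->
        let call = App (f, atomic_args) in
        if is_atomic call then k call          (* embed freely *)
        else
          let x = gensym () in
          Let (x, call, k (Var x)))            (* lift into a statement *)
  | Let (x, rhs, body) ->
      anf is_atomic rhs (fun rhs' ->
        Let (x, rhs', anf is_atomic body k))
and anf_list is_atomic es k =
  match es with
  | [] -> k []
  | e :: rest ->
      anf is_atomic e (fun e' ->
        anf_list is_atomic rest (fun rest' -> k (e' :: rest')))

(* Example predicate: treat arithmetic primitives as atomic so whole
   arithmetic expressions stay together for later optimization. *)
let is_arith = function App (("add" | "mul"), _) -> true | _ -> false
let normalize e = anf is_arith e (fun a -> a)

With is_arith as the predicate, nested add/mul calls stay embedded as whole expressions, while anything else gets bound to a temporary in evaluation order.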
My current plan here is to stick with doing everything inside DL2-ANF as described above, take on the pain of deriving the necessary types, and implement a "DL2-ID normalizing substitution" procedure. This is also necessary to implement another technique which I will discuss later. A pleasant side effect of all of this is that the ANF conversion implemented this way also acts as an up-front type in-lining and normalization procedure. This allows accepting a larger subset of DL2 based on type in-lining without a separate pass. My current HM front-end produces code based on naively in-lining prefix-polymorphic functions, so this lets me conveniently support a "DL2-inline-boxed" variant of the language.
>Closure Conversion as ANF
In DL2, there are 9 term forms:
>var, constant
>constructors lambda, rec-lambda and destructor apply
>constructor choose and destructor case
>constructor iso-wrap and destructor iso-unwrap
Variables are optimized by in-lining, and constants can be considered "optimal" in themselves. It's clear how to optimize choose and case in an ANF context:
>Treat choose as an expression so it can get in-lined.
>Case terms can check if they resolve to a known choose expression. If so, eliminate the case statement by evaluating the resulting branch instead.
The iso-wrap and iso-unwrap forms are handled the same way: iso-wrap is an expression, and iso-unwrap can be eliminated if its argument is a direct iso-wrap form. That leaves lambda and rec-lambda themselves. The problem is that lambda and rec-lambda can't be so comfortably in-lined, since doing so naively duplicates compilation. However, lambda and rec-lambda are core components of DL2 programs. Given their prominence in the core language, there is a strong argument that lambda and rec-lambda should be treated as expressions too. Users will strongly appreciate being able to freely use lambda and rec-lambda for their own higher-order constructs, so it makes sense to provide "guaranteed optimizations" for lambda and rec-lambda so that users can treat them as ordinary expressions. If users are unnecessarily worried about the costs of closure construction and closure calls, they may resort to excessive techniques to avoid the use of lambda, limiting the practical usefulness of the language. In particular, in-lined control flow for loops, monadic and effect mechanisms would take a major performance hit, whether those mechanisms are user-defined or provided by the runtime. Users especially shouldn't be afraid to write repeat loops just because they use lambda internally, since avoiding them would take out an important branch of DL2 and render the whole project pointless. There's a strong argument not just for optimizing the use of lambda, but for aggressively in-lining lambda and rec-lambda forms as ordinary expressions, if the code duplication can be avoided. A principled approach could also significantly reduce pressure on heuristic-based function call in-lining, allowing for more consistent performance. There are two "obvious strategies":
>closure conversion then ANF, treating closures as expressions
>ANF, closure conversion, then an additional closure optimization pass
Neither strategy is completely satisfying. Ideally, one would want to in-line all instances of closures, but both strategies fail to in-line known closures which are used by other closures. It would be better if ANF's own in-lining were used for closures.
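The choose/case and iso-wrap/iso-unwrap eliminations above are simple enough to sketch on a toy AST (OCaml-style, illustrative names; the real pass would operate on the unique-ID form and carry types):

(* Toy sketch: case over a known choose reduces to the matching branch,
   and iso-unwrap of a direct iso-wrap cancels. *)
type term =
  | Var of string
  | Choose of int * term list                  (* tag, payload *)
  | Case of term * (string list * term) list   (* per-tag binders and branch *)
  | IsoWrap of term
  | IsoUnwrap of term

(* naive substitution placeholder: a real pass would work on the
   unique-ID form (DL2-ID above), so capture is a non-issue *)
let rec subst x v t =
  match t with
  | Var y -> if x = y then v else t
  | Choose (tag, args) -> Choose (tag, List.map (subst x v) args)
  | Case (scrut, branches) ->
      Case (subst x v scrut,
            List.map (fun (bs, body) ->
                if List.mem x bs then (bs, body) else (bs, subst x v body))
              branches)
  | IsoWrap t' -> IsoWrap (subst x v t')
  | IsoUnwrap t' -> IsoUnwrap (subst x v t')

let rec simplify t =
  match t with
  | Case (scrut, branches) ->
      (match simplify scrut with
       | Choose (tag, args) ->
           (* known choose: pick the branch and bind its arguments *)
           let binders, body = List.nth branches tag in
           simplify (List.fold_left2 (fun b x v -> subst x v b) body binders args)
       | scrut' -> Case (scrut', List.map (fun (bs, b) -> (bs, simplify b)) branches))
  | IsoUnwrap t' ->
      (match simplify t' with
       | IsoWrap inner -> inner                (* cancel the roll/unroll pair *)
       | t'' -> IsoUnwrap t'')
  | Choose (tag, args) -> Choose (tag, List.map simplify args)
  | IsoWrap t' -> IsoWrap (simplify t')
  | Var _ -> t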
Since ANF is "optimal" in terms of in-lining, inherently in-lining every atomic expression, treating closures this way would yield similarly "optimal" closure use, avoiding closure construction wherever possible. For these reasons, I suggest that rather than doing closure conversion as a separate pass, closure conversion is instead fused with ANF into a single pass. This will allow exactly the sort of aggressive closure in-lining that I want. In fact, I believe this style of in-lining should be considered the standard for DL2, which is another reason to implement ANF this way. This fused ANF and closure conversion could act as a "standard transformation" which provides an intuitive resource semantics: closures only need to be actually constructed when their call sites are unknown, alongside any other key forms of in-lining. With call sites resolved as completely as possible, the fusion acts as a principled standard optimization pass and produces a form from which back-ends can conveniently generate high-quality code resembling the corresponding program in their target language.

