/bsp/ - bsp

Someone's Office



This Board BSP 04/18/2025 (Fri) 19:31:12 Id: 34e5e6 No. 1
Welcome to this board. This board is for my personal use. I mostly work on technology, but I have other interests. I encourage anyone interested in imageboard technology or programming languages to read through and see if anything is interesting to you.
Edited last time by bsp on 08/29/2025 (Fri) 18:07:24.
Right now, my work mostly focuses on dl-distro.
>dl-distro repository: https://gitgud.io/bsp/dl-distro/

Programming Languages Bibliography bsp 10/24/2025 (Fri) 20:03:47 Id: 7fd227 No. 40
This thread is for collecting material on programming languages. Language design, compiler and interpreter design and implementation, runtime design and implementation, and type theory are all appropriate for this thread.
4 posts and 5 images omitted.
A paper on the theory of monomorphization. I'd been thinking of trying to figure out a "destructor monomorphization" for a while, but it seems the authors of this paper have already done something similar to what I was thinking. Notably, their method does not support polymorphic recursion. This isn't surprising: practical polymorphic recursion can yield unbounded encodings.
This is actually a very encouraging result for DL2. Ideally, DL2 can take this result, add box polymorphism and get a very rich form of mono-box polymorphism. My biggest problem is that I'm not sure this really is the best approach. The paper focuses on a constraint system. I have to wonder if it would be better to use a more type-directed approach, where each polymorphic variable is directly annotated based on its uses. The idea I have in mind is to traverse the AST with an environment, returning an annotated AST and environment. It's hard to say whether this strategy will work without implementing the idea in full, and it might turn out to be equivalent in any case.
I am very pleased to hear that it is in fact possible. One of my primary worries about DL2 was that type-classes and other similar features would be complicated by the absence of an appropriate monomorphization mechanism. This paper shows that it's possible to use the natural encoding. Having a standard monomorphization transformation would make it easy to write code without worrying about these issues. Hacks to escape the issues of type-class instantiation would be unnecessary, and, with the exception of unbounded polymorphic recursion, it would be possible to use full DL2 without worry.
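As a rough sketch of the type-directed alternative described above (toy types and names, not DL2's actual representation): traverse the AST with an environment, record the concrete instantiation at each use of a polymorphic definition, and return the rewritten tree along with the accumulated instances that a monomorphizer would then specialize.

(* Toy AST: calls to polymorphic definitions carry concrete type arguments. *)
type ty = TInt | TBool | TList of ty

type expr =
  | Lit of int
  | Var of string
  | Call of string * ty list * expr list

module M = Map.Make (String)

(* The environment maps each polymorphic name to the instantiations seen so far. *)
type env = ty list list M.t

let record name tyargs (env : env) =
  let seen = Option.value ~default:[] (M.find_opt name env) in
  if List.mem tyargs seen then env else M.add name (tyargs :: seen) env

let rec show_ty = function
  | TInt -> "int"
  | TBool -> "bool"
  | TList t -> "list_" ^ show_ty t

let mangle name tyargs = String.concat "_" (name :: List.map show_ty tyargs)

(* Walk the tree with the environment, renaming each call to its specialized
   instance and collecting the set of instances that need to be generated. *)
let rec annotate (env : env) (e : expr) : expr * env =
  match e with
  | Lit _ | Var _ -> (e, env)
  | Call (name, tyargs, args) ->
      let env = record name tyargs env in
      let args, env =
        List.fold_left
          (fun (acc, env) a ->
            let a, env = annotate env a in
            (a :: acc, env))
          ([], env) args
      in
      (Call (mangle name tyargs, [], List.rev args), env)

(* Example: a use of "map" at [int] becomes a call to "map_int", and the
   returned environment records that "map" needs an int instance. *)
let _ = annotate M.empty (Call ("map", [ TInt ], [ Var "xs" ]))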
(1.30 MB SML-history.pdf)

(171.21 KB 1200x643 haskell-dead-end.jpeg)

https://smlfamily.github.io/history/SML-history.pdf
>This paper focuses on the history of Standard ML, which plays a central role in this family of languages, as it was the first to include the complete set of features that we now associate with the name “ML” (i.e., polymorphic type inference, datatypes with pattern matching, modules, exceptions, and mutable state).
>Standard ML, and the ML family of languages, have had enormous influence on the world of programming language design and theory. ML is the foremost exemplar of a functional programming language with strict evaluation (call-by-value) and static typing. The use of parametric polymorphism in its type system, together with the automatic inference of such types, has influenced a wide variety of modern languages (where polymorphism is often referred to as generics). It has popularized the idea of datatypes with associated case analysis by pattern matching.
>Standard ML also set a precedent by being a language whose design included a formal definition with an associated metatheory of mathematical proofs (such as soundness of the type system). A formal definition was one of the explicit goals from the beginning of the project. While some previous languages had rigorous definitions, these definitions were not integral to the design process, and the formal part was limited to the language syntax and possibly dynamic semantics or static semantics, but not both.
>More particularly, it is worth understanding how Standard ML pioneered and integrated its most characteristic features.
>Static typing with type inference and polymorphic types, now commonly known as the Hindley-Milner type system.
>Datatypes with the corresponding use of pattern matching for case analysis and destructuring over values of those types.
>A module sub-language that itself is a functional language with a substantial extension of the basic type system of the core language.
>This original ML was an embedded language within the interactive theorem proving system LCF, serving as a logically-secure scripting language.
>Standard ML popularized a number of features that functional-language programmers take for granted today. The most important of these being datatypes with pattern matching, Hindley-Milner type inference, and parametric polymorphic (or “generic”) types.
>What makes Standard ML (SML) an interesting and important programming language? For one thing, it is an exemplar of a strict, statically typed functional language. In this role, it has had a substantial influence on the design of many modern programming languages, including other statically-typed functional languages (e.g., OCaml, F#, Haskell, and Scala).
>One of the most significant impacts of SML has been the exploration of a wide and diverse range of implementation techniques in SML compilers.
>While the Standard ML language has not changed over the past 20+ years, there continue to be at least five actively supported implementations with an active user community.
>Standard ML occupies a “sweet spot” in the design space; it is flexible and expressive while not being overly complicated — in contrast with many functional languages (e.g., Haskell, OCaml, and Scala) that have very rich, but complicated, type systems, and so many features that most programmers use only a subset of the language.
>In both teaching and research, its stability allows its users to focus attention on their own work and avoid the distractions that arise when a language is defined by an ever-changing implementation.

[Message truncated]

This is a paper with some similarities to >>45. This paper details Morphic's approach to eliminating higher-order functions from a program by determining a set of closures that might be passed to each higher-order function, re-constructing a type for each set representing the particular selection, then using it to branch to one of several specialized functions.
>Restricted versions of specializing defunctionalization, such as C++ and Rust’s aggressive monomorphizing strategy and Julia JIT-enhanced approach, are already in widespread use [Bezanson et al. 2017; Hovgaard et al. 2018; Klabnik and Nichols 2019, Ch. 10; Paszke et al. 2021; Stroustrup 2013, Ch. 26]. However, all existing specializing techniques suffer from limitations of expressiveness—either not supporting truly first-class functions (as in [Hovgaard et al. 2018; Paszke et al. 2021]), or forcing the programmer to fall back to traditional, slow virtually-dispatched closures in the general case (as in C++ and Rust [Klabnik and Nichols 2019, Ch. 13; Stroustrup 2013, Ch. 11]).
Monomorphization can be seen as a form of specialization-by-substitution. The type arguments are substituted and the function called with a matching ABI. Defunctionalization in general, and this paper in particular, argue that this isn't enough: it would be better to inline both type and function arguments.
>We designed our benchmarks to make significant use of combinator-based library abstractions, as we believe such abstractions represent a common and important source of higher-order control flow in many functional programs. To this end, we wrote Morphic code to define a lazy iterator library, a state monad library, and a parser combinator library, all making heavy use of higher-order functions in both their implementations and their public APIs. The iterator library is used in the benchmarked sections of PrimesIter, PrimesSieve, Quicksort and Unify, the state monad library is used in Unify, and the parser combinator library is used in Calc and ParseJSON.
The benchmarks seem somewhat artificial. They designed their benchmarks to use higher-order functions wherever possible. The worst is that they were quick to use monadic programs in their benchmarks. Monadic programs strongly favor inlining and defunctionalization, as they involve "continuation" functions that are used by small, simple "bind" wrappers whose behavior is simple enough to justify inlining. Further compilation is then likely to be extremely favorable to this style of defunctionalization. In this context, the results don't appear to reflect an overwhelming success. It's expected that MLton's monovariant defunctionalization will perform poorly on such benchmarks. OCaml does very well on the iterator-based programs, indicating that its ordinary compilation procedures work well for this style of program.
The paper is probably picking its fights. If Haskell's ordinary compilation is sufficient to get the same benefits, then the paper might have to argue as if Haskell is somehow "cheating" its results. Admittedly, it's not a bad argument: Haskell's compiler is very aggressive and allows for such oddities as the MTL. The idiomatic Haskell code would use the list monad instead of iterators, as well as the state monad on one of its benchmarks, and Haskell is all but guaranteed to do very well on those four of the six benchmarks. At that point, it would be a question of how well they handle parser combinators, and Haskell would likely be competitive in that context. It's telling that OCaml only chokes on the parser combinator programs.
>Across all compilers, we additionally remark that our largest and most realistic benchmark program, ParseJSON, is also the program in which we see the largest performance gains due to LSS. We take this as evidence in support of the hypothesis that the benefits of specialization become more significant with increasing scale.
I would disagree strongly here. I think that what ParseJSON proves is the insufficiency of contemporary compilers in the face of higher-order constructs. This paper needs strong follow-ups that evaluate its effectiveness alongside inlining for contrast. The paper also needs follow-ups establishing worst-case behavior or the absence of worst-case behavior.
While I'm skeptical of the results, I think they are encouraging either way for DL2. Currently, the DL2 documentation argues in favor of a systematic and synergistic approach to compilation. If Morphic's approach is appropriate for general use, then working well with even the simplest DL2 compilers would be strong evidence towards the development of such systematic approaches. If this is a fluke of a result, then DL2's argument can similarly work the other way in favor of the development of more sophisticated systematic approaches to inlining.
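For contrast with the virtually-dispatched closures mentioned in the quote, here is a minimal sketch of plain (Reynolds-style) defunctionalization, which is the starting point the paper's specialization builds on; the closure set and names are invented for illustration, and this does not reproduce Morphic's full lambda-set analysis.

(* Suppose analysis shows only two closures ever reach this higher-order map:
   "double" and "add a constant c". Defunctionalization reifies that set as data. *)
type arith_fn =
  | Double
  | AddConst of int  (* the captured environment becomes a constructor argument *)

(* A single apply function replaces indirect calls with a branch. *)
let apply_arith f x =
  match f with
  | Double -> x * 2
  | AddConst c -> x + c

(* The higher-order function now takes first-order data instead of a closure. *)
let rec map_arith f xs =
  match xs with
  | [] -> []
  | x :: rest -> apply_arith f x :: map_arith f rest

(* Usage: call sites pass constructors where they used to pass closures. *)
let doubled = map_arith Double [ 1; 2; 3 ]
let shifted = map_arith (AddConst 10) [ 1; 2; 3 ]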

DL2 Thread bsp Board owner 07/11/2025 (Fri) 07:53:29 Id: a6d79a No. 12
DL2 is a functional programming language and intermediate representation based on System F-Omega. DL2 extends System F-Omega with iso-recursive types, recursive functions and sum-of-existential-product types. This variant of System F-Omega is representative of a large number of typed programming languages. This makes DL2 an attractive compilation target, as many languages can be compiled directly to DL2. DL2 is intentionally runtime-agnostic so that it can be compiled to many targets or embedded in other languages.
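To make the description above concrete, here is a hypothetical sketch of the kind of type language involved: System F-Omega's kinds and type-level functions, plus iso-recursive types and sum-of-existential-product types. The constructor names are illustrative and are not DL2's actual syntax.

(* Kinds classify type constructors (the "omega" part). *)
type kind =
  | KStar                  (* kind of ordinary types *)
  | KArrow of kind * kind  (* kind of type-level functions *)

type ty =
  | TVar of string
  | TArrow of ty * ty                (* value-level functions *)
  | TForall of string * kind * ty    (* universal quantification *)
  | TLam of string * kind * ty       (* type-level lambda (F-omega) *)
  | TApp of ty * ty                  (* type-level application *)
  | TMu of string * ty               (* iso-recursive types, used via fold/unfold *)
  | TChoice of (string * (string * kind) list * ty list) list
      (* sum of existential products: each case binds hidden type variables
         and carries a product of fields *)

(* Example: a list of 'a as an iso-recursive sum,
   roughly  lambda a. mu L. < nil : {} | cons : {a, L} >  *)
let list_ty =
  TLam ("a", KStar,
    TMu ("L", TChoice [ ("nil", [], []); ("cons", [], [ TVar "a"; TVar "L" ]) ]))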
4 posts and 2 images omitted.
>>17 While working on DL2, I realized two very good arguments in favor of this "bi-HM" system.
First, it can be used to supply binder argument judgments preemptively. This became apparent while working on case, but I believe this would also come up in pattern-matching. The issue was that the case argument types were computed separately from the non-case argument types.
Second, when expanding some programs the type is known outright, and it's easier to just generate the code directly rather than rely on the call macro to do it. In this case, it's useful to have a helper variant of the expand function that lets the user supply the (sometimes-partial) type for unification after the fact. I noticed this while working on the DL2 C FFI, specifically for casting C arithmetic types to and from DL2's word type. I realized that most of my existing options weren't very good for that particular case, while immediate generation would be simple with only the option to supply the appropriate type to dl2-expand. Supplying this type directly is also simpler than generating the macro call for a type-check. In other words, I'm likely to end up with a variant of dl2-expand which does take the type directly.
I also learned recently that Coq referred to its own type system as using "bidirectional typing" by name: https://rocq-prover.github.io/platform-docs/rocq_theory/explanation_bidirectionality_hints.html
That said, it still seems silly in an HM context to use bidirectional typing as Coq's documentation implies. Type variables in HM mean that every input type is a potential output type simply by providing a new variable. From this perspective, there's no difference between type inference and type checking. The whole implementation of bidirectional typing in an HM context is simply allowing an input type to be submitted. This also corresponds well with a logic programming interpretation of type-checking rules.
The primary reason to separate input and output when using bidirectional typing with HM seems to be to avoid generating and unifying unnecessary type variables. I know that it's possible to optimize this generation and unification using HM-level matching, so that almost every form will have at most one type variable created for this purpose. In particular:
>lambda generates at most one variable for each argument and at most one variable for the return and destructs its input variable
>apply generates at most one variable for each argument and re-uses the input variable
>constructors generate at most one variable for each argument and destruct the input variable

[Message truncated]
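A toy sketch of the "check is just inference with an input type" point above: the inference function accepts an optional expected type and unifies against it, so no second judgment is needed. The representation and names here are stand-ins, not DL2's expander.

(* Types: constructors plus unifiable metavariables. *)
type ty =
  | TCon of string * ty list  (* e.g. TCon ("->", [a; b]), TCon ("int", []) *)
  | TMeta of meta ref
and meta = Unbound of int | Bound of ty

let counter = ref 0
let fresh () = incr counter; TMeta (ref (Unbound !counter))

let rec resolve t =
  match t with
  | TMeta { contents = Bound t' } -> resolve t'
  | _ -> t

let rec unify a b =
  let a = resolve a and b = resolve b in
  if a == b then ()
  else
    match (a, b) with
    | TMeta r, t | t, TMeta r -> r := Bound t  (* occurs check omitted in this sketch *)
    | TCon (c1, xs), TCon (c2, ys)
      when c1 = c2 && List.length xs = List.length ys ->
        List.iter2 unify xs ys
    | _ -> failwith "type error"

type expr = Int of int | Var of string | Lam of string * expr | App of expr * expr

(* ?expected turns inference into checking when supplied. A further optimization,
   as described above, would thread the expected type inward so that most forms
   allocate at most one metavariable. *)
let rec infer ?expected env e =
  let t =
    match e with
    | Int _ -> TCon ("int", [])
    | Var x -> List.assoc x env  (* raises Not_found for unbound variables *)
    | Lam (x, body) ->
        let a = fresh () in
        let b = infer ((x, a) :: env) body in
        TCon ("->", [ a; b ])
    | App (f, arg) ->
        let res = fresh () in
        let arg_ty = infer env arg in
        let _ = infer ~expected:(TCon ("->", [ arg_ty; res ])) env f in
        res
  in
  (match expected with Some want -> unify want t | None -> ());
  t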

Now that I'm back to working on the DL2->C transpiler, I've had to think more about A-Normal Form (ANF). ANF is a program form in which expressions are divided into statements and atomic expressions. The primary result of ANF conversion is to separate and order statements, yielding sequential "blocks" of statements bound to variables with a terminating statement or atomic expression. Some discussion and history of the name ANF: https://web.archive.org/web/20250415115410/https://www.williamjbowman.com/blog/2022/06/30/the-a-means-a/
>Parametric ANF
For DL2 there are functions which should be considered "atomic", and these vary by compilation context. Arithmetic, for example, has specific optimizations that are best done on whole expressions. For this reason, it's best for arithmetic expressions to be collected together as completely as possible for optimization. Similarly, there are functions which should never be duplicated. Inherently expensive functions like make-vector and repeat can be expected to fall in this set. Some are more ambiguous: length and index vector operations are cheap but require access to the relevant vector, and can therefore be considered important for resource semantics, so that re-ordering such expressions is questionable. However, these vectors also have their own specific optimizations. To deal with this, I suggest that DL2-ANF should be parametrically defined over DL2: the compiler's user should be able to supply a predicate to examine whether a specific DL2 expression is an atomic expression or not. If a DL2 expression is in fact atomic, then ANF will embed it freely in other expressions.
>DL2-ANF as Restricted DL2
Originally, I thought that DL2-ANF should stand separately from DL2, but during implementation I've come to think that DL2-ANF should actually be a form of DL2. Specifically, DL2-ANF should come from allowing "block-form" bodies in lambda expressions.
- Block-form bodies consist of nested "lambda-apply" constructs, where each application's arguments consist of statements, and terminating in either a statement or atomic expression.
- Statements are either a case of some atomic expression to some blocks or a non-atomic application whose arguments consist only of atomic expressions.
- Atomic expressions are variables, constants, kinds, types, choose constructions and a subset of additional expressions specified by the user.
>ANF Types Pain Point

[Message truncated]
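Here is a generic sketch of the parametric-ANF idea above over a toy expression type: the conversion takes a caller-supplied predicate deciding which expressions count as atomic, embeds those freely, and binds everything else to fresh variables. This only illustrates the shape of the pass, not the DL2->C transpiler.

type expr =
  | Var of string
  | Int of int
  | Prim of string * expr list  (* arithmetic, vector operations, calls, ... *)
  | Let of string * expr * expr

let gensym =
  let n = ref 0 in
  fun () -> incr n; Printf.sprintf "tmp%d" !n

(* Continuation-passing ANF conversion, parameterized by `is_atomic`. *)
let rec anf is_atomic e (k : expr -> expr) : expr =
  match e with
  | Var _ | Int _ -> k e       (* variables and constants are always atomic *)
  | _ when is_atomic e -> k e  (* caller-designated atomic forms stay whole *)
  | Let (x, rhs, body) ->
      anf is_atomic rhs (fun rhs' -> Let (x, rhs', anf is_atomic body k))
  | Prim (op, args) ->
      anf_list is_atomic args (fun args' ->
          let tmp = gensym () in
          Let (tmp, Prim (op, args'), k (Var tmp)))

and anf_list is_atomic es k =
  match es with
  | [] -> k []
  | e :: rest ->
      anf is_atomic e (fun e' ->
          anf_list is_atomic rest (fun rest' -> k (e' :: rest')))

(* Example predicate: keep whole arithmetic expressions together so later
   passes can optimize them, per the discussion above. *)
let arith_atomic = function
  | Prim (("+" | "*"), _) -> true
  | _ -> false

let _ = anf arith_atomic (Prim ("f", [ Prim ("+", [ Int 1; Int 2 ]); Int 3 ])) (fun x -> x)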

>System F-Omega and DL2 Modules
One of the nice things about basing DL2 on System F-Omega is that System F-Omega offers a "free" module system that is extensive in its capabilities. Modules can be constructed of ordinary existential products and functions in System F-Omega. It is possible to create and use types with an opaque internal representation to create "objects" or to supply a standard set of operations applicable to an existing type. The two techniques may be combined to describe derived instances of standard sets of operations. In this way, System F-Omega might be considered a "calculus of modules" in which modules are represented as first-class objects.
>Explicit DL2 Modules
Right now, I also have an explicit module system as an additional layer on top of DL2's inherent module system. This is partly for convenience on the front-end, by providing a "basic standard" for definitions with names, and partly for performance on the back-end, by allowing separate compilation. I can also provide convenience functions for especially unpleasant code generation, such as schema definitions, that front-ends can take advantage of to establish their baseline functionality. There are also back-end convenience functions which isolate only the relevant definitions for a program from a set of definitions.
>Schema and Choice Module Convenience Functions
Included in the explicit module system are two convenience functions for defining ADTs: one for recursive "schema" definitions and one for non-recursive "choice" definitions. This supplies automatic code generation for both, given the appropriate names and a set of binder terms. I also want to supply functions for "transparent" variants that allow in-lining each definition. This should allow more performant types to be used in the absence of prefix polymorphism for my current source subset of DL2, which only supports boxed polymorphism after type in-lining.
>Schema Type-checking Performance
I made the unfortunate mistake of supplying a constructor for each constructor case in both schema and choice. Large iso-recursive schemas constructed this way are extremely slow to type-check. Instead, I want to supply a single constructor and destructor to and from each appropriate DL2 schema type and choice type, plus the corresponding set of convenience constructors. Once that's done, it should be clearer how much this affects type-checking performance. If the type checker is fast enough after this change, I should be able to represent DL1's core types as one large iso-recursive schema, which shouldn't affect the compiled interpreter but will result in a cleaner presentation.
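As a small illustration of the "modules as existential products" point, here is what one such package looks like when written with an OCaml GADT, which hides the representation type the way an F-omega existential would. The signature is invented for the example.

(* An existential package: exists rep. { empty : rep; incr : rep -> rep; read : rep -> int } *)
type counter_mod =
  | Counter : { empty : 'rep; incr : 'rep -> 'rep; read : 'rep -> int } -> counter_mod

(* Two "modules" with the same interface but different hidden representations. *)
let int_counter =
  Counter { empty = 0; incr = (fun n -> n + 1); read = (fun n -> n) }

let list_counter =
  Counter { empty = []; incr = (fun l -> () :: l); read = List.length }

(* A client can use either without ever learning the representation type. *)
let demo m =
  match m with
  | Counter c -> c.read (c.incr (c.incr c.empty))

let _ = (demo int_counter, demo list_counter)  (* both evaluate to 2 *)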

DL1 Thread bsp Board owner 07/11/2025 (Fri) 01:08:00 Id: 97de93 No. 11
DL1 is a computing model, virtual machine, intermediate representation and programming language. DL1 most closely resembles an untyped functional programming language with effects that corresponds to stored-program computers based on a pattern-matching variant of the lambda calculus. DL1 has three primary features distinguishing it from other functional languages:
- Explicit evaluation: an eval function that is completely specified, does not perform macro-expansion, operates specifically on a small well-defined structured language and has the same semantics as a JIT compiler.
- Meta-execution: a set of meta-execution primitives that allows for safe virtual execution of DL1 code.
- Copy-on-write semantics: DL1 vectors include an update function that uses copy-on-write to guarantee constant-time mutation when possible and produces a copy of a vector otherwise.
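A rough sketch of the copy-on-write update described above, with an explicit reference count standing in for whatever DL1's runtime actually tracks: update mutates in place when the vector is uniquely referenced and copies otherwise.

type 'a cow_vec = {
  mutable refs : int;  (* how many live references point at this vector *)
  data : 'a array;
}

let make n x = { refs = 1; data = Array.make n x }
let share v = v.refs <- v.refs + 1; v

(* Constant-time in-place update when this is the only reference,
   otherwise copy the underlying array and leave the original untouched. *)
let update v i x =
  if v.refs = 1 then (v.data.(i) <- x; v)
  else begin
    let data = Array.copy v.data in
    data.(i) <- x;
    v.refs <- v.refs - 1;  (* the updater gives up its reference to the original *)
    { refs = 1; data }
  end

(* Usage: unshared updates mutate; shared updates copy. *)
let a = make 3 0
let a' = update a 0 42   (* in place: a' is the same vector as a *)
let b = share a'
let b' = update b 1 7    (* copied: b' is fresh, a' is unchanged at index 1 *)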
3 posts and 1 image omitted.
For both DL1 and DL2 I'm unclear about what concurrency model would be appropriate for the two. I have had a few ideas, but I'm not satisfied with any of them yet. DL1's copy-on-write semantics effectively requires eager reference counting, and DL2's runtime-agnosticism means that in general it's preferable to have solutions for everything, including whatever memory model is appropriate for concurrent DL1.
>Just copy everything
I am seriously considering this. A good reason NOT to do this is large allocations. Making specific exceptions to allow for sharing such allocations does not strike me as entirely unreasonable.
>Just use Software Transactional Memory (STM)
This is almost my favorite solution. It is not a complete solution, but this might become part of a complete solution. In particular, it doesn't provide a solution to shared references, which are the main problem. It assumes a preexisting ability to share immutable structures.
>Just use Atomic Reference Counts (ARC)
This would be my favorite solution if it weren't a performance disaster under the hood. First, there is the straw solution of using a single atomic reference count for each object. This would be extremely slow, especially on older computers, even for unshared data with uncontended reference counts. More likely, I would have to use a hybrid atomic reference count, which is more common in practice anyway when sharing is not guaranteed. There are two issues: one is just a matter of performance, but the other is semantic. The first issue is that using hybrid reference counts means that for shared objects I have to either accept the time overhead of checking whether an object's reference count needs an atomic update or accept the memory overhead of an additional reference count when decrementing the reference count.

[Message truncated]
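For the hybrid option above, here is a sketch of the usual "biased" layout it implies: the owning thread updates a plain count, every other thread updates an atomic count, which buys back speed at the cost of the extra count and the ownership check. This only illustrates the trade-off, not a proposed DL1 design.

type 'a shared = {
  owner : int;             (* id of the domain/thread that created the object *)
  mutable biased : int;    (* fast, non-atomic count, touched only by the owner *)
  foreign : int Atomic.t;  (* slower atomic count for every other thread *)
  value : 'a;
}

let make ~self value =
  { owner = self; biased = 1; foreign = Atomic.make 0; value }

(* The per-operation ownership check is the "time overhead" mentioned above;
   the second counter is the "memory overhead". *)
let retain ~self obj =
  if self = obj.owner then obj.biased <- obj.biased + 1
  else Atomic.incr obj.foreign

let release ~self obj =
  if self = obj.owner then obj.biased <- obj.biased - 1
  else Atomic.decr obj.foreign
(* A real implementation also has to notice when both counts reach zero,
   and merge the counts when the owning thread itself drops out. *)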

In the process of writing DL1, I also wrote two extension systems, which eventually converged into one extension system called Octext. The basic premise of Octext is based on some observations:
>difficulties in library use stem from various type incompatibilities between languages
>if only a few fixed types are used, there's no need to write FFI bindings for untyped languages
>statically-typed languages can use it directly, layer their own types based on specifications or expose library access to other programs
>the overhead of library calls is proportional to the number of times libraries are called divided by the work done by those libraries per call
>structured binary data is a universal data format
>adding domain elements tends to be sufficient for most practical tasks
>if the user provides a runtime themselves, their own data structures can be used directly by the extension, saving a pass of data marshaling
More obscurely:
>C libraries give the illusion of control by exposing every element of their domain as its own call, while Haskell libraries will casually include DSLs promoting correct use
>the Haskell approach puts less pressure on the API proper and promotes both internal and external correctness
Octext allows a different mode of language and program extension, providing a single agreed interface between language and extension based on structured binary data alongside extension-provided types that remain opaque to the extension's user. Octext takes an additional step to make this practical: it allows the extension user to provide its own runtime to the extension, allowing the native types of the extension user to supply the interface of the extension. My intention is to allow the user to supply their runtime as one of several vtables, with method gaps filled in programmatically. This way, the extension user can specify its most appropriate set of methods and the extension can use its most appropriate set of methods, with Octext bridging the gap.
Currently, Octext is undergoing some re-design, which I'm hoping will be its last. I have two major goals. First, I want to include a direct memory access (DMA) interface, allowing the user runtime to lend it direct pointers to its data structures. Second, I want to update the system so that it can handle concurrency.

[Message truncated]
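A sketch of the "vtable with gaps filled in programmatically" idea: the user runtime supplies whatever methods it has, and missing ones are derived from the required primitives. The operation names here are invented for illustration; Octext's actual interface is not shown.

(* Required primitives plus optional methods that can be derived from them. *)
type runtime = {
  alloc_bytes : int -> bytes;                (* required *)
  write_byte : bytes -> int -> int -> unit;  (* required *)
  fill : (bytes -> int -> unit) option;      (* optional, derivable *)
}

(* Fill the gaps: if the user runtime lacks a method, synthesize it from primitives. *)
let complete (r : runtime) : runtime =
  match r.fill with
  | Some _ -> r
  | None ->
      let derived buf byte =
        for i = 0 to Bytes.length buf - 1 do
          r.write_byte buf i byte
        done
      in
      { r with fill = Some derived }

(* A minimal user runtime that only provides the primitives. *)
let basic =
  complete
    {
      alloc_bytes = Bytes.create;
      write_byte = (fun buf i byte -> Bytes.set buf i (Char.chr byte));
      fill = None;
    }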

I've been thinking about implementing a proper module system for DL1. Originally, DL1's "module system" amounted to loading imported code directly in the top-level. I later changed this to only import files that have not already been imported so that dependencies can be taken without worrying about duplicates in the dependency graph. There are a few basic issues with this approach:
>loading a module has it run every effect the module executes as it loads
>there's no way to get only a portion of the module's definitions
>variable shadowing will occur based off of load order, making the overall semantics sensitive to the final load order
>there's no standard way to load a module multiple times
>the module system does not admit the explicit notion of a packaged DL1 program
The "obvious" answer is to export an association list or some more complete notion of a standardized module. As I use it, this would not be acceptable for DL1:
>a simple association list can't handle the explicit namespaces that I'm using to develop multiple co-existing macro systems in DL1, unless I add it as an explicit feature
>standardizing a module system means committing to a set of features as well as adding to that set if that set of features is found insufficient
>any preexisting module system will get in the way of users implementing their own (likely better) module systems
There are some other issues:
>in general, it would be better if modules themselves could be represented as ordinary data
>there are also top-level considerations: how should a module interact with a top-level that loads it?

[Message truncated]
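A small sketch of the "modules as ordinary data" direction mentioned above: a module value is just a list of namespaced definitions, so selective import is ordinary list processing. The value type and helpers are hypothetical, not a proposal for DL1's final design.

type value = Num of int | Str of string | Fn of (value -> value)

(* A module is plain data: (namespace, name, value) triples. *)
type modval = { defs : (string * string * value) list }

let lookup m ~ns ~name =
  List.find_opt (fun (ns', name', _) -> ns' = ns && name' = name) m.defs
  |> Option.map (fun (_, _, v) -> v)

(* Selective import: take only the requested names, so nothing is shadowed by accident. *)
let import_only m names =
  { defs = List.filter (fun (_, name, _) -> List.mem name names) m.defs }

(* Loading twice, renaming, or packaging are all just more functions over this data. *)
let example =
  { defs = [ ("core", "version", Num 1); ("macros", "when", Fn (fun v -> v)) ] }

let _ = lookup (import_only example [ "version" ]) ~ns:"core" ~name:"version"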


Esoteric and Political bsp Board owner 08/27/2025 (Wed) 17:24:06 Id: 4daf85 No. 28
Separately from my technical work, I am very invested in the esoteric and the political. As I understand it, I have been "given" an unusual role in my life largely having to do with these in combination.
For various reasons, I've been thinking about Druidry and its potential revival as an option to help handle a religious crisis, as well as potentially being beneficial to many Western cultures. Much of what was known is said to have been lost to time due to a combination of oral tradition and a doctrine of secrecy within Druidry. Although Druids are frequently depicted primarily by their closeness to nature, it is said that Druids actually formed a priestly, judicial and professional class in Celtic cultures, contrasting with the contemporary world's professional specializations. Druids also believed in immortal human souls with re-incarnation.
The idea of a Druidic revival sounds interesting, but the oral restriction of past Druidry means that any attempt at a revival is inherently limited, constraining new Druidic traditions so that they must start effectively disconnected from the original Druids. The issue is compounded by the contemporary tendency towards a merely aesthetic Druidry. However, I do not think either issue is entirely fatal.
If going this route, or going a similar route with another system, it may be worthwhile to take advantage of contemporary factors rather than shun them. In particular, I would suggest that a Druid should be familiar with technology and the philosophical work that went into its development. This does not contradict any concept of being close to Nature: technology is fundamentally the application of Natural laws, and it strikes me as an unusual concept that the use or sophistication of these techniques is the issue rather than the correct or incorrect nature of their application. The practice of computing in particular is closest to mathematics and logic studied properly. It is my opinion that despite how it may seem there are sacred things in computing. When the ugliness is stripped away and a beautiful core kept, computational truth reveals itself over and over. The practice of programming challenges the mind with a demand for precision, and the same technology and practice graduates naturally into techniques such as theorem proving, which allow for obtaining and verifying truth directly.
I listened to Grace and the Abyss by Murdoch Murdoch. I need to respond properly later but can't say some of the most important parts yet, so I'm making some notes now. Summary:
>MM grew up Christian, played video games and D&D, in what sounds like a typical American suburban childhood
>MM had an abused childhood friend Sam, who introduced MM to atheism
>MM's atheism deepened during college, turning into a doctrine that he refers to as "biological determinism"
>Sam became a volatile alcoholic Christian in the meantime, MM starts to drift away
>One day, Sam confesses consideration of suicide and a lack of faith
>Sam "only wanted to believe, but he couldn't"
>MM, thinking it in line with a doctrine of strength, simply reflects Sam's choice back at him: "stop whining, either choose life or choose death"
>Sam commits suicide, MM blames himself
>MM shops around for mystical doctrines, especially among Eastern religions
>At some point he writes his book featuring a clown fighting nihilism and always chasing the sun
>MM currently believes in neg-entropy doctrine and gobbledygook quantum anti-determinism
>MM effectively believes in Christianity, but can't deny the criticisms of it
One thing that I'm uncertain of is how deeply he came to understand the various intermediate philosophies he looked at. The impression I get is that he tends to only pick up on the most prominent concepts, but then fails to truly apply the various philosophies he looks at. His current object of religious affection seems entirely impracticable.

[Message truncated]

>>36 I did find the Murdoch Murdoch Grace and the Abyss PDF. It did not become any better when read instead of listened to.

General Thread bsp Board owner 08/28/2025 (Thu) 18:43:33 Id: 18e3f0 No. 30
This is a general thread, for any discussion which does not fit into any of the other threads.
Here's a copy of the software and sites text that was periodically posted to the GamerGate thread:
Software and sites to give attention to!
Operating systems
>AROS: http://aros.sourceforge.net/
>BSD
<Net: https://www.netbsd.org/
<Open: https://www.openbsd.org/
<DragonFly: https://www.dragonflybsd.org/
>GrapheneOS: https://grapheneos.org/
>Linux
<Artix Linux: https://artixlinux.org/
<CLIP OS: https://clip-os.org/
<Devuan: https://www.devuan.org/
<EndeavourOS: https://endeavouros.com/
<OpenMandriva: https://www.openmandriva.org/
<Rocky Linux: https://rockylinux.org/

[Message truncated]


(981.28 KB 1000x1500 6d1030c9ee57fd2e3fb8903b5d3e7fcc.jpg)

(379.00 B bsp-challenge.txt)

(697.00 B my-key.txt)

Public Key Thread bsp Board owner 08/28/2025 (Thu) 21:45:31 Id: d82aac No. 32
This thread is for collecting public keys. This solution will be temporary until a suitable site-wide PKI is available. Please remember:
>this key collection is for ALL ANONS, regardless of board or even site
>public keys are OPTIONAL, and should remain so
>you may OPTIONALLY volunteer as much or as little additional information about yourself alongside your public key, the only recommendation I make is that visitors, tourists and other arrivals declare themselves as such
>WHEN submitting public keys, the challenge is REQUIRED, and only public keys which submit a signed copy of the challenge text are likely to be respected
>public keys NOT submitted with the corresponding challenge will be assumed to belong to somebody who is not actually present to submit their public key, while this is permitted it is recommended to please say so explicitly
>there will be more opportunities to collect and receive public keys, do not worry if you miss this opportunity
>it is expected that public keys
>I may archive this thread later as a hard cutoff
>it is my intention and expectation that anons with public keys will be able to help anons without public keys, in various ways to be discussed later
>I am choosing the challenge text with some good-faith measures in mind, those who are interested in key collection are advised to discuss this with me later
>if you realize later that you failed to generate a public key of the correct kind, do not panic, instead submit a corrected key at the next opportunity, and link the two keys together using mutual cryptographic signatures
>if you wish to submit a public key of a different kind for any reason, use the same procedure, linking the two keys using mutual cryptographic signatures, and sign the challenge text with both
>you may use the same key in multiple rounds of key collection

[Message truncated]


Imageboard Technology Bibliography bsp Board owner 05/20/2025 (Tue) 00:54:40 Id: 4971bb No. 4
This thread is for discussion of technology and software specifically in the context of imageboards. This can include technology purpose-built for imageboards or technology in the context of being re-purposed for imageboard use.
Imageboards are an anonymous equivalent to bulletin boards with support for images and other files. Imageboards represent a particular context in networking:
>Accountless: Typical imageboard use foregoes accounts entirely. Users can instead post directly without providing any details or confirming any off-site identity.
>Anonymous: Typical imageboard use is anonymous, even when accounts are involved. Imageboards are known to have user cultures which discourage taking on an identity when there's no reason to do so.
>Asynchronous: Imageboards are asynchronous. They are intended to be used by any number of users, and the contents posted on imageboards are stored for later reading and replies by other users.
Imageboards involve a minimal-trust non-ephemeral networking context.
An old thread on zzzchan about decentralized imageboards and forums:
https://zzzchan.xyz/tech/thread/845.html
https://archive.is/MTVrO
>Lately I've been interested in looking for a final solution to the imageboard problem, deplatforming and relying on centralized authorities for hosting. P2P through TOR seems like the most logical path forward. But the software would also need to be accessible, easily installed and understood by just about anyone, and easily secure/private by default.
>Retroshare seemed like a decent choice, but unfortunately its forum function is significantly lacking in features. I haven't investigate too much into zeronet either but from what I recall that was a very bloated piece of software and I'm looking for something that's light and simple. Then there's BitChan (>507) which fits most of the bill but contrasted with Retroshare is not simple to setup.
>I know there is essentially nothing else out there so this thread isn't necessarily asking to be spoonfed some unknown piece of software that went under the radar of anons. But I think the concept of P2P imageboards should be further explored even though the failure of zeronet soured a lot of peoples perspective on the concept. Imageboards are so simple by nature I feel this shouldn't be as difficult as it is. Retroshare comes close but as I understand it you can't really moderate the forums that you create. Plus the media integration is basically non-existent, though media is a lesser concern. But having everything routed through tor and being able to mail, message, and have public forums all in a single small client available on every operating system is the kind of seamlessness that a program needs for widespread adoption.
[Edited to prevent invalid links]
A decent amount of the discussion is technologically obsolete, especially regarding cryptography, but if there are aspects anons want to pull out of that, I'd like to know. For self-serving reasons, I'm going to pull out this later post (14501) regarding the web:
>The main problem is the web is centralized by nature. They later added stupid kludges like cuckflare, amazon, etc. to offload traffic, but that doesn't change the fact there is a single point of failure (when the origin web site goes offline, or gets "deplatformed" like 8chan).
>On top of that, the web is attrociously bloated, which is another reason the kludges got the traction they did. And all that bloat for what exactly, when you don't even have a stardard/easy means to do what >14496 is asking. No, instead you have to build more shit on top of the huge steaming pile of shit.
>But all this used to be really simple. On Usenet you'd subscribe to a newsgroup, and then every time you connect to your local server, it downloads the new messages. Then when you open your newsreader to one of those groups, you see all the new posts, threaded in whichever way *you* want them to be. You could even download all the new posts to your computer, a bit like POP3 for email (because they're also just text messages with headers). Now you have a local archive, whithout having to write convoluted scripts for parsing/scraping html/js and updating them when something changes on the server (and they don't get blocked one day when cuckflare decides your script is a bot).
>And threads never expired! You could reply to a thread from a year ago, or even longer, if you had archived one of its posts (technically all you need are the headers).
>And if you think a discussion is getting off-topic, you can split it into a new thread! (or even cross-post to a different newsgroup). Yeah that doesn't work at all on web imageboard or forum, despite their huge code size. So what really are they even good for, except taking up resources? Oh right, they're excellent at tracking and spying on you, and also good for hacking into your computer via one of countless bugs.
It's hard to say whether or not the later ideas are right or wrong, since more options aren't always better, but some of the people working on Lynxchan may have interesting things to say about the web.

Cryptography Bibliography bsp Board owner 07/04/2025 (Fri) 01:10:12 Id: 2c5f87 No. 6
This thread is for collecting cryptographic techniques, protocols and implementations.
2 posts and 1 image omitted.
>>8 It should be pointed out that zkVMs emulating CPUs and circuit-based ZKP systems suggest very different ZKP machinery. Notably, most CPUs are "stored-program" machines while circuit-based systems are "static-program" machines. An obvious corollary is that any ZKP system supporting the implementation of zkCPUs is generally recursive, and can therefore execute a description of itself. Two more corollaries seem obvious to me:
>the zkCPU system running the original ZKP system is one such self-description of that original ZKP system
>the performance of an optimal such self-description of a ZKP system is at least as good as that of the zkCPU self-description, no matter the performance metric used
There's an obvious question: what performance spectrum can be expected from "zkFPGA" systems based directly on using field operations for circuit descriptions and the rest of the necessary ZKP machinery? zkFPGA seems like it should be a reasonable "baseline" mechanism. If this can be assumed and zkFPGA self-emulation is poor, then one could expect that zkCPU self-emulation is similarly poor. Since zkCPU self-emulation would consist of the composition of two virtual machines, a ZKP on top of a CPU and a CPU on top of a ZKP, one would intuitively expect comparable results from the reverse composition of a CPU-on-ZKP-on-CPU system. If the intuition holds, poor performance for a "perfect" self-emulator should translate to poor performance for CPU-on-ZKP or for ZKP-on-CPU. If a practical ZKP has poor self-emulation, one would expect decent ZKP-on-CPU performance and therefore lousy CPU-on-ZKP performance. So then why bother with a zkCPU? The only way I can see zkCPU being sensible is if it has an acceptably-small factor overhead compared to zkFPGA. I see two questions worth answering:
>What is the performance of the zkFPGA approach?

[Message truncated]
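One rough way to make the composition intuition above concrete, under the simplifying assumption that overheads compose multiplicatively: write z for the slowdown of running the ZKP prover natively (ZKP-on-CPU) and c for the slowdown of emulating a CPU cycle inside the ZKP (CPU-on-ZKP). Then the zkCPU self-description costs roughly

\[
S_{\mathrm{self}} \approx c \cdot z
\quad\Longrightarrow\quad
c \approx \frac{S_{\mathrm{self}}}{z},
\]

so a poor self-emulation figure together with a modest native prover overhead z forces a large CPU-on-ZKP factor c, which is the conclusion drawn above.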

Two papers on lookup arguments and VMs built using them: Lasso and Jolt. I'll need to read these properly later. Particularly notable is the table on page 12 giving the costs of lookup arguments. I have to wonder what leads to the square root verification time.
>>8 >>9 I'm clearer now on why exactly zkVMs have been such a popular topic. The first issue is actually zero-knowledge itself. The zero-knowledge aspect of zk-SNARKs tends to be an expense in itself. However, doing a zero-knowledge proof directly inside a zero-knowledge proof is pointless: a non-ZK proof would be equally obscured at lower cost. An "optimal universal circuit" for a zk-SNARK would likely just be a ZKP wrapper checking a non-ZK SNARK. From that perspective, the real question isn't about efficient ZKP but about efficient proof in general.
The second issue is special techniques such as "folding" and "lookup arguments" for transforming proofs in particular forms. If folding is effective but specifically applicable to zkCPU-styled designs, then the zkCPU does make sense. I'm not especially clear on the details of such techniques. This article discusses it in more detail: https://a16zcrypto.com/posts/article/jolt-6x-speedup/
>Here’s another way to frame Jolt’s performance: The Jolt prover now performs under 800 field multiplications per RISC-V cycle. In contrast, the Plonk prover applied to an arithmetic circuit commits to ~8 random values per gate, translating to at least 800 field multiplications per gate. This makes the Jolt prover faster per RISC-V cycle than the Plonk prover per gate. For most applications, this means the Jolt prover is much faster than popular SNARKs applied to hand-optimized circuits.
>Why is this true? Because SNARKs like Plonk and Groth16 fail to exploit repeated structure at multiple levels. First, their prover costs don’t improve when applied to circuits with internal structure — they’re just as slow on structured circuits as on arbitrary ones. Second, encoding computations as circuits is already a mistake in many contexts. Batch-evaluation arguments like Shout do the opposite: They exploit repeated evaluation of the same function to dramatically reduce prover cost. And batch-evaluation is exactly what VMs do: They execute the same small set of primitive instructions over and over, one per cycle. So while a VM abstraction introduces some overhead compared to hand-optimized circuits, the structure it imposes enables optimizations that more than make up for it.
For now, I don't think it's worth pursuing this particular approach. I think there will be better opportunities to exploit the same techniques later without resorting to the heavy zkVM approach, and Google's work on the topic both benchmarks well and is likely to be the subject of ongoing scrutiny and auditing in a way that Jolt can't be while it's still a moving target. Auditability is a priority for my work, and some of the designs I have in mind are intended to take full advantage of such mechanisms when they are mature. In addition, there are "in-house" elements that work better with Google's work than with Jolt, as well as other issues with Jolt and its dependencies. I've considered the issue of using a zkVM and my conclusion is that it would ultimately end up a distraction in practice.

[Message truncated]


STGL Thread bsp Board owner 08/25/2025 (Mon) 22:18:52 Id: d0c07b No. 23
This is a thread about STGL and its currently-existing prototype, PGL. STGL is the simply-typed graphics language, and will be a rough equivalent to OpenGL 3.3/WebGL 2 based on the simply-typed lambda calculus. STGL will simplify graphics programming by providing a type system directly to the programmer to immediately detect and prevent type errors. STGL will also generalize immediate processing, batch processing, display lists and command buffers by allowing render programs to be written as functions acting at any granularity and executed with any arguments. These functionalities already exist in a weaker form in the prototype PGL. This thread will likely be updated eventually with more/better images, once they exist.
I would appreciate comments on STGL. Which operations should it have? Which types should it have? Are there pitfalls I should be aware of? I already have a decent idea of what it should look like via my work on the prototype PGL. I'd like to make sure that STGL is a mostly-complete representation of OpenGL 3.3 Core/WebGL 2, maybe without some known-bad misfeatures. I've only recently become a graphics programmer, but I'd like it to be capable of decently advanced graphics. From a user perspective, it would also be nice to collect some techniques that can be distilled into code. In particular, I would really like to see advanced capabilities such as SDF-based alpha, impostors and weighted-blended order-independent transparency at some point. SDF-based alpha and impostors are likely possible in PGL already, but WB-OIT is almost certainly not. Additionally, are there any features that users want to have, either in the STGL core or as a "standard macro" provided by front-ends to STGL?
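To make the "render programs as functions" idea concrete, here is a hypothetical sketch of the shape such an API could take; none of these names come from PGL or STGL, and the real command set would mirror OpenGL 3.3/WebGL 2 rather than this toy.

type vec3 = { x : float; y : float; z : float }
type mesh = { vertices : vec3 array; indices : int array }
type uniform = Vec3U of vec3 | FloatU of float

(* A render command is plain data; a render program is just a function producing commands. *)
type cmd =
  | Clear of vec3
  | Draw of { mesh : mesh; shader : string; uniforms : (string * uniform) list }

type 'a render_program = 'a -> cmd list

(* Because programs are ordinary functions, the same program serves as an immediate
   call, a batch over many arguments, or a captured command list, at any granularity. *)
let draw_at (m : mesh) : vec3 render_program =
 fun pos ->
  [ Draw
      { mesh = m;
        shader = "basic";
        uniforms = [ ("offset", Vec3U pos); ("alpha", FloatU 1.0) ] } ]

let batch (prog : 'a render_program) (args : 'a list) : cmd list =
  List.concat_map prog args

let frame (m : mesh) (positions : vec3 list) : cmd list =
  Clear { x = 0.; y = 0.; z = 0. } :: batch (draw_at m) positions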

Networking and Protocols Bibliography bsp Board owner 05/12/2025 (Mon) 21:10:24 Id: 3e1465 No. 2
This thread is for collecting and evaluating papers and software related to networking and network protocols.
Project website is here: https://reticulum.network
From the PDF manual:
>Reticulum is a cryptography-based networking stack for building both local and wide-area networks with readily available hardware, that can continue to operate under adverse conditions, such as extremely low bandwidth and very high latency.
>Reticulum allows you to build wide-area networks with off-the-shelf tools, and offers end-to-end encryption, forward secrecy, autoconfiguring cryptographically backed multi-hop transport, efficient addressing, unforgeable packet acknowledgements and more.
>Reticulum enables the construction of both small and potentially planetary-scale networks, without any need for hierarchical or bureaucratic structures to control or manage them, while ensuring individuals and communities full sovereignty over their own network segments.
>Reticulum is a complete networking stack, and does not need IP or higher layers, although it is easy to utilise IP (with TCP or UDP) as the underlying carrier for Reticulum. It is therefore trivial to tunnel Reticulum over the Internet or private IP networks. Reticulum is built directly on cryptographic principles, allowing resilience and stable functionality in open and trustless networks.
>No kernel modules or drivers are required. Reticulum can run completely in userland, and will run on practically any system that runs Python 3. Reticulum runs well even on small single-board computers like the Pi Zero.
More technical details:
>The network unit is a "destination"
>Destinations are represented as 16 byte hashes of their properties
>4 types of destinations:

[Message truncated]

Edited last time by bsp on 05/13/2025 (Tue) 01:12:28.
