/bsp/ - bsp

Someone's Office



This Board BSP 04/18/2025 (Fri) 19:31:12 Id: 34e5e6 No. 1 [Reply]
Welcome to this board. This board is for my personal use. I mostly work on technology, but I have other interests. I encourage anyone interested in imageboard technology or programming languages to read through and see if anything interests them.
Edited last time by bsp on 08/29/2025 (Fri) 18:07:24.

General Thread bsp Board owner 08/28/2025 (Thu) 18:43:33 Id: 18e3f0 No. 30 [Reply]
This is a general thread, for any discussion which does not fit into any of the other threads.
Here's a copy of the software and sites text that was periodically posted to the GamerGate thread:

Software and sites to give attention to!
Operating systems
>AROS: http://aros.sourceforge.net/
>BSD
<Net: https://www.netbsd.org/
<Open: https://www.openbsd.org/
<DragonFly: https://www.dragonflybsd.org/
>GrapheneOS: https://grapheneos.org/
>Linux
<Artix Linux: https://artixlinux.org/
<CLIP OS: https://clip-os.org/
<Devuan: https://www.devuan.org/
<EndeavourOS: https://endeavouros.com/
<OpenMandriva: https://www.openmandriva.org/
<Rocky Linux: https://rockylinux.org/



(981.28 KB 1000x1500 6d1030c9ee57fd2e3fb8903b5d3e7fcc.jpg)

(379.00 B bsp-challenge.txt)

(697.00 B my-key.txt)

Public Key Thread bsp Board owner 08/28/2025 (Thu) 21:45:31 Id: d82aac No. 32 [Reply]
This thread is for collecting public keys. This solution will be temporary until a suitable site-wide PKI is available. Please remember:
>this key collection is for ALL ANONS, regardless of board or even site
>public keys are OPTIONAL, and should remain so
>you may OPTIONALLY volunteer as much or as little additional information about yourself alongside your public key, the only recommendation I make is that visitors, tourists and other arrivals declare themselves as such
>WHEN submitting public keys, the challenge is REQUIRED, and only public keys which submit a signed copy of the challenge text are likely to be respected
>public keys NOT submitted with the corresponding challenge will be assumed to belong to somebody who is not actually present to submit their public key, while this is permitted it is recommended to please say so explicitly
>there will be more opportunities to collect and receive public keys, do not worry if you miss this opportunity
>it is expected that public keys
>I may archive this thread later as a hard cutoff
>it is my intention and expectation that anons with public keys will be able to help anons without public keys, in various ways to be discussed later
>I am choosing the challenge text with some good-faith measures in mind, those who are interested in key collection are advised to discuss this with me later
>if you realize later that you failed to generate a public key of the correct kind, do not panic, instead submit a corrected key at the next opportunity, and link the two keys together using mutual cryptographic signatures
>if you wish to submit a public key of a different kind for any reason, use the same procedure, linking the two keys using mutual cryptographic signatures, and sign the challenge text with both
>you may use the same key in multiple rounds of key collection



Esoteric and Political bsp Board owner 08/27/2025 (Wed) 17:24:06 Id: 4daf85 No. 28 [Reply]
Separately from my technical work, I am very invested in the esoteric and the political. As I understand it, I have been "given" an unusual role in my life largely having to do with these in combination.
For various reasons, I've been thinking about Druidry and its potential revival as an option to help handle a religious crisis, as well as potentially being beneficial to many Western cultures. Much of what was known is said to have been lost to time due to a combination of oral tradition and a doctrine of secrecy within Druidry. Although Druids are frequently depicted primarily by their closeness to nature, it is said that Druids actually formed a priestly, judicial and professional class in Celtic cultures, contrasting with the contemporary world's professional specializations. Druids also believed in immortal human souls with reincarnation.
The idea of a Druidic revival sounds interesting, but the oral restriction of past Druidry means that any attempt at a revival is inherently limited, constraining new Druidic traditions so that they must start effectively disconnected from the original Druids. The issue is compounded by the contemporary tendency towards a merely aesthetic Druidry. However, I do not think either issue is entirely fatal. If going this route, or going a similar route with another system, it may be worthwhile to take advantage of contemporary factors rather than shun them.
In particular, I would suggest that a Druid should be familiar with technology and the philosophical work that went into its development. This does not contradict any concept of being close to Nature: technology is fundamentally the application of Natural laws, and it strikes me as an unusual concept that the use or sophistication of these techniques is the issue rather than the correct or incorrect nature of their application. The practice of computing in particular is closest to mathematics and logic studied properly. It is my opinion that, despite how it may seem, there are sacred things in computing. When the ugliness is stripped away and a beautiful core kept, computational truth reveals itself over and over. The practice of programming challenges the mind with a demand for precision, and the same technology and practice graduates naturally into techniques such as theorem proving, which allow for obtaining and verifying truth directly.

Imageboard Technology Bibliography bsp Board owner 05/20/2025 (Tue) 00:54:40 Id: 4971bb No. 4 [Reply]
This thread is for discussion of technology and software specifically in the context of imageboards. This can include technology purpose-built for imageboards or technology in the context of being re-purposed for imageboard use.
Imageboards are an anonymous equivalent to bulletin boards with support for images and other files. Imageboards represent a particular context in networking:
>Accountless: Typical imageboard use foregoes accounts entirely. Users can instead post directly without providing any details or confirming any off-site identity.
>Anonymous: Typical imageboard use is anonymous, even when accounts are involved. Imageboards are known to have user cultures which discourage taking on an identity when there's no reason to do so.
>Asynchronous: Imageboards are asynchronous. They are intended to be used by any number of users, with the contents posted on imageboards stored for later reading and replies by other users.
Imageboards involve a minimal-trust, non-ephemeral networking context.
An old thread on zzzchan about decentralized imageboards and forums:
https://zzzchan.xyz/tech/thread/845.html
https://archive.is/MTVrO
>Lately I've been interested in looking for a final solution to the imageboard problem: deplatforming and relying on centralized authorities for hosting. P2P through TOR seems like the most logical path forward. But the software would also need to be accessible, easily installed and understood by just about anyone, and easily secure/private by default.
>Retroshare seemed like a decent choice, but unfortunately its forum function is significantly lacking in features. I haven't investigated too much into zeronet either, but from what I recall that was a very bloated piece of software and I'm looking for something that's light and simple. Then there's BitChan (>507) which fits most of the bill but contrasted with Retroshare is not simple to set up.
>I know there is essentially nothing else out there so this thread isn't necessarily asking to be spoonfed some unknown piece of software that went under the radar of anons. But I think the concept of P2P imageboards should be further explored even though the failure of zeronet soured a lot of people's perspectives on the concept. Imageboards are so simple by nature I feel this shouldn't be as difficult as it is. Retroshare comes close but as I understand it you can't really moderate the forums that you create. Plus the media integration is basically non-existent, though media is a lesser concern. But having everything routed through tor and being able to mail, message, and have public forums all in a single small client available on every operating system is the kind of seamlessness that a program needs for widespread adoption.
[Edited to prevent invalid links]
A decent amount of the discussion is technologically obsolete, especially regarding cryptography, but if there are aspects anons want to pull out of that, I'd like to know. For self-serving reasons, I'm going to pull out this later post (14501) regarding the web:
>The main problem is the web is centralized by nature. They later added stupid kludges like cuckflare, amazon, etc. to offload traffic, but that doesn't change the fact there is a single point of failure (when the origin web site goes offline, or gets "deplatformed" like 8chan).
>On top of that, the web is atrociously bloated, which is another reason the kludges got the traction they did. And all that bloat for what exactly, when you don't even have a standard/easy means to do what >14496 is asking. No, instead you have to build more shit on top of the huge steaming pile of shit.
>But all this used to be really simple. On Usenet you'd subscribe to a newsgroup, and then every time you connect to your local server, it downloads the new messages. Then when you open your newsreader to one of those groups, you see all the new posts, threaded in whichever way *you* want them to be. You could even download all the new posts to your computer, a bit like POP3 for email (because they're also just text messages with headers). Now you have a local archive, without having to write convoluted scripts for parsing/scraping html/js and updating them when something changes on the server (and they don't get blocked one day when cuckflare decides your script is a bot).
>And threads never expired! You could reply to a thread from a year ago, or even longer, if you had archived one of its posts (technically all you need are the headers).
>And if you think a discussion is getting off-topic, you can split it into a new thread! (or even cross-post to a different newsgroup). Yeah that doesn't work at all on web imageboards or forums, despite their huge code size. So what really are they even good for, except taking up resources? Oh right, they're excellent at tracking and spying on you, and also good for hacking into your computer via one of countless bugs.
It's hard to say whether the later ideas are right or wrong, since more options aren't always better, but some of the people working on Lynxchan may have interesting things to say about the web.

Cryptography Bibliography bsp Board owner 07/04/2025 (Fri) 01:10:12 Id: 2c5f87 No. 6 [Reply]
This thread is for collecting cryptographic techniques, protocols and implementations.
2 posts and 1 image omitted.
>>8 It should be pointed out that zkVMs emulating CPUs and circuit-based ZKP systems suggest very different ZKP machinery. Notably, most CPUs are "stored-program" machines while circuit-based systems are "static-program" machines. An obvious corollary is that any ZKP system supporting the implementation of zkCPUs is generally recursive, and can therefore execute a description of itself. Two more corollaries seem obvious to me:
>the zkCPU system running the original ZKP system is one such self-description of that original ZKP system
>the performance of an optimal such self-description of a ZKP system is at least as good as that of the zkCPU self-description, no matter the performance metric used
There's an obvious question: what performance spectrum can be expected from "zkFPGA" systems based directly on using field operations for circuit descriptions and the rest of the necessary ZKP machinery? zkFPGA seems like it should be a reasonable "baseline" mechanism. If this can be assumed and zkFPGA self-emulation is poor, then one could expect that zkCPU self-emulation is similarly poor. Since zkCPU self-emulation would consist of the composition of two virtual machines, a ZKP on top of a CPU and a CPU on top of a ZKP, one would intuitively expect comparable results from the reverse composition of a CPU-on-ZKP-on-CPU system. If the intuition holds, poor performance for a "perfect" self-emulator should translate to poor performance for CPU-on-ZKP or for ZKP-on-CPU. If a practical ZKP has poor self-emulation, one would expect decent ZKP-on-CPU performance and therefore lousy CPU-on-ZKP performance. So then why bother with a zkCPU? The only way I can see zkCPU being sensible is if it has an acceptably-small factor overhead compared to zkFPGA. I see two questions worth answering:
>What is the performance of the zkFPGA approach?
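To make the composition intuition explicit: write o(A/B) for the slowdown factor of running system A emulated on system B, relative to running A natively. Assuming, as a rough model, that overheads compose multiplicatively, self-emulation decomposes as

o(ZKP/ZKP) ≈ o(ZKP/CPU) × o(CPU/ZKP)

so if ZKP-on-CPU is efficient (small o(ZKP/CPU)) while self-emulation is poor (large o(ZKP/ZKP)), then the CPU-on-ZKP factor o(CPU/ZKP) must be correspondingly large, which is the case against the zkCPU sketched above.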


Two papers on lookup arguments and VMs built using them: Lasso and Jolt. I'll need to read these properly later. Particularly notable is the table on page 12 giving the costs of lookup arguments. I have to wonder what leads to the square root verification time.
>>8 >>9 I'm clearer now on why exactly zkVMs have been such a popular topic.
The first issue is actually zero-knowledge itself. The zero-knowledge aspect of zk-SNARKs tends to be an expense in itself. However, doing a zero-knowledge proof directly inside a zero-knowledge proof is pointless: a non-ZK proof would be equally obscured at lower cost. An "optimal universal circuit" for a zk-SNARK would likely just be a ZKP wrapper checking a non-ZK SNARK. From that perspective, the real question isn't about efficient ZKP but about efficient proof in general.
The second issue is special techniques such as "folding" and "lookup arguments" for transforming proofs in particular forms. If folding is effective but specifically applicable to zkCPU-styled designs, then the zkCPU does make sense. I'm not especially clear on the details of such techniques. This article discusses them in more detail: https://a16zcrypto.com/posts/article/jolt-6x-speedup/
>Here’s another way to frame Jolt’s performance: The Jolt prover now performs under 800 field multiplications per RISC-V cycle. In contrast, the Plonk prover applied to an arithmetic circuit commits to ~8 random values per gate, translating to at least 800 field multiplications per gate. This makes the Jolt prover faster per RISC-V cycle than the Plonk prover per gate. For most applications, this means the Jolt prover is much faster than popular SNARKs applied to hand-optimized circuits.
>Why is this true? Because SNARKs like Plonk and Groth16 fail to exploit repeated structure at multiple levels. First, their prover costs don’t improve when applied to circuits with internal structure — they’re just as slow on structured circuits as on arbitrary ones. Second, encoding computations as circuits is already a mistake in many contexts. Batch-evaluation arguments like Shout do the opposite: They exploit repeated evaluation of the same function to dramatically reduce prover cost. And batch-evaluation is exactly what VMs do: They execute the same small set of primitive instructions over and over, one per cycle. So while a VM abstraction introduces some overhead compared to hand-optimized circuits, the structure it imposes enables optimizations that more than make up for it.
For now, I don't think it's worth pursuing this particular approach. I think there will be better opportunities to exploit the same techniques later without resorting to the heavy zkVM approach, and Google's work on the topic both benchmarks well and is likely to be the subject of ongoing scrutiny and auditing in a way that Jolt can't be while it's still a moving target. Auditability is a priority for my work, and some of the designs I have in mind are intended to take full advantage of such mechanisms when they are mature. In addition, there are "in-house" elements that work better with Google's work than with Jolt, as well as other issues with Jolt and its dependencies. I've considered the issue of using a zkVM and my conclusion is that it would ultimately end up a distraction in practice.



STGL Thread bsp Board owner 08/25/2025 (Mon) 22:18:52 Id: d0c07b No. 23 [Reply]
This is a thread about STGL and its currently-existing prototype, PGL. STGL is the simply-typed graphics language, and will be a rough equivalent to OpenGL 3.3/WebGL 2 based on the simply-typed lambda calculus. STGL will simplify graphics programming by providing a type system directly to the programmer to immediately detect and prevent type errors. STGL will also generalize immediate processing, batch processing, display lists and command buffers by allowing render programs to be written as functions acting at any granularity and executed with any arguments. These functionalities already exist in a weaker form in the prototype PGL. This thread will likely be updated eventually with more/better images, once they exist.
I would appreciate comments on STGL. Which operations should it have? Which types should it have? Are there pitfalls I should be aware of? I already have a decent idea of what it should look like via my work on the prototype PGL. I'd like to make sure that STGL is a mostly-complete representation of OpenGL 3.3 Core/WebGL 2, maybe without some known-bad misfeatures. I'm only recently a graphics programmer, but I'd like it to be capable of decently advanced graphics. From a user perspective, it would also be nice to collect some techniques that can be distilled into code. In particular, I would really like to see advanced capabilities such as SDF-based alpha, impostors and weighted-blended order-independent transparency at some point. SDF-based alpha and impostors are likely possible in PGL already, but WB-OIT is almost certainly not. Additionally, are there any features that users want to have, either in the STGL core or as a "standard macro" provided by front-ends to STGL?
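Since WB-OIT is on the wish list, it's worth recording the composite from McGuire and Bavoil's weighted-blended OIT as I understand it (eps is a small guard against division by zero). One pass accumulates sum_i(w_i*a_i*C_i) and sum_i(w_i*a_i) in one target and the revealage R = prod_i(1 - a_i) in a second, and a final pass composites over the background color C_bg:

C = (sum_i(w_i*a_i*C_i) / max(sum_i(w_i*a_i), eps)) * (1 - R) + C_bg * R

The technique needs multiple render targets with separate blend setups in a single pass, which may be a useful data point for deciding which features STGL should support.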

DL1 Thread bsp Board owner 07/11/2025 (Fri) 01:08:00 Id: 97de93 No. 11 [Reply]
DL1 is a computing model, virtual machine, intermediate representation and programming language. DL1 most closely resembles an untyped functional programming language with effects that corresponds to stored-program computers based on a pattern-matching variant of the lambda calculus. DL1 has three primary features distinguishing it from other functional languages:
- Explicit evaluation: an eval function that is completely specified, does not perform macro-expansion, operates specifically on a small well-defined structured language and has the same semantics as a JIT compiler.
- Meta-execution: a set of meta-execution primitives that allows for safe virtual execution of DL1 code.
- Copy-on-write semantics: DL1 vectors include an update function that uses copy-on-write to guarantee constant-time mutation when possible and produces a copy of a vector otherwise.
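To illustrate the copy-on-write update, here is a hedged sketch of the intended semantics in Racket (not DL1's actual implementation; the explicit count box stands in for the reference counts a real runtime maintains itself):

(require racket/vector) ; for vector-copy

(struct cow (count vec)) ; count : box of exact integer

(define (cow-make n fill) (cow (box 1) (make-vector n fill)))

;; Sharing a vector bumps its reference count.
(define (cow-share! c)
  (set-box! (cow-count c) (add1 (unbox (cow-count c))))
  c)

;; cow-update: mutate in place when we hold the only reference
;; (constant time), otherwise release our reference and return a
;; modified copy.
(define (cow-update c i x)
  (if (= 1 (unbox (cow-count c)))
      (begin (vector-set! (cow-vec c) i x) c)
      (begin
        (set-box! (cow-count c) (sub1 (unbox (cow-count c))))
        (let ((v (vector-copy (cow-vec c))))
          (vector-set! v i x)
          (cow (box 1) v)))))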
2 posts omitted.
>>18 For comparison, I have also implemented an analog of repeat and recurse in Racket.

(require racket/match) ; for match and match-lambda**

(define-syntax-rule (recurse name use clauses ...)
  (letrec ((name (match-lambda** clauses ...)))
    use))

(struct rs-next (x) #:transparent)
(struct rs-finish (x) #:transparent)

(define (run-repeat step st)
  (match st
    ((struct rs-finish (x)) x)
    ((struct rs-next (x)) (run-repeat step (apply step x)))
    (_ (error 'run-repeat "run-repeat invalid state ~a" st))))

(define finish rs-finish)
(define-syntax-rule (next x ...) (rs-next (list x ...)))
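For example, an iterative factorial can be written with either form:

(run-repeat
 (lambda (n acc)
   (if (zero? n)
       (finish acc)
       (next (sub1 n) (* n acc))))
 (next 5 1)) ; => 120

(recurse fact
  (fact 5 1)
  ((0 acc) acc)
  ((n acc) (fact (sub1 n) (* n acc)))) ; => 120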


For both DL1 and DL2 I'm unclear about what concurrency model would be appropriate. I have had a few ideas, but I'm not satisfied with any of them yet. DL1's copy-on-write semantics effectively requires eager reference counting, and DL2's runtime-agnosticism means that in general it's preferable to have solutions for everything, including whatever memory model is appropriate for concurrent DL1.
>Just copy everything
I am seriously considering this. A good reason NOT to do this is large allocations. Making specific exceptions to allow for sharing such allocations does not strike me as entirely unreasonable.
>Just use Software Transactional Memory (STM)
This is almost my favorite solution. It is not a complete solution, but it might become part of a complete solution. In particular, it doesn't provide a solution to shared references, which are the main problem, and it assumes a preexisting ability to share immutable structures.
>Just use Atomic Reference Counts (ARC)
This would be my favorite solution if it weren't a performance disaster under the hood. First, there is the straw solution of using a single atomic reference count for each object. This would be extremely slow, especially on older computers, even for unshared data with uncontended reference counts. More likely I would have to use a hybrid atomic reference count, which is more common in practice anyways in cases when sharing is not guaranteed. There are two issues, one just a matter of performance but one which is semantic. The first issue is that using hybrid reference counts means that for shared objects I have to either accept the time overhead of checking whether an object's reference count needs an atomic update or accept the memory overhead of an additional reference count when decrementing the reference count.
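To illustrate the hybrid shape (a sketch only, not DL1's implementation; box-cas! is Racket's compare-and-swap on boxes):

(struct rc (shared? count)) ; count : box of exact integer

;; Fast path: a plain increment while the object is still thread-local.
;; Slow path: a CAS retry loop once the object has been shared.
;; The shared? branch on every retain/release is the per-operation time
;; overhead discussed above.
(define (retain! o)
  (if (rc-shared? o)
      (atomic-add! (rc-count o) 1)
      (set-box! (rc-count o) (add1 (unbox (rc-count o))))))

(define (atomic-add! b delta)
  (let loop ()
    (define old (unbox b))
    (unless (box-cas! b old (+ old delta))
      (loop))))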


In the process of writing DL1, I also wrote two extension systems, which eventually converged into one extension system called Octext. The basic premise of Octext is based on some observations:
>difficulties in library use stem from various type incompatibilities between languages
>if only a few fixed types are used, there's no need to write FFI bindings for untyped languages
>statically-typed languages can use it directly, layer their own types based on specifications or expose library access to other programs
>the overhead of library calls is proportional to the number of times libraries are called divided by the work done by those libraries per call
>structured binary data is a universal data format
>adding domain elements tends to be sufficient for most practical tasks
>if the user provides a runtime themselves, their own data structures can be used directly by the extension, saving a pass of data marshaling
More obscurely:
>C libraries give the illusion of control by exposing every element of their domain as its own call, while Haskell libraries will casually include DSLs promoting correct use
>the Haskell approach puts less pressure on the API proper and promotes both internal and external correctness
Octext allows a different mode of language and program extension, providing a single agreed interface between language and extension based on structured binary data alongside extension-provided types that remain opaque to the extension's user. Octext takes an additional step to make this practical: it allows the extension user to provide its own runtime to the extension, allowing the native types of the extension user to supply the interface of the extension. My intention is to allow the user to supply their runtime as one of several vtables, with method gaps filled in programmatically. This way, the extension user can specify its most appropriate set of methods and the extension can use its most appropriate set of methods, with Octext bridging the gap.
Octext is currently undergoing a re-design, which I hope will be its last. I have two major goals. First, I want to include a direct memory access (DMA) interface, allowing the user runtime to lend the extension direct pointers to its data structures. Second, I want to update the system so that it can handle concurrency.
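As an illustration of the gap-filling idea, here is a hedged Racket sketch (method names invented for the example, not Octext's actual interface). The user runtime supplies whichever methods it has as a table, and missing methods are derived from the ones present:

;; A runtime vtable is a hash from method names to procedures.
;; fill-gaps derives 'copy from 'make, 'length, 'ref and 'set! when absent.
(define (fill-gaps vt)
  (define (get k) (hash-ref vt k #f))
  (if (or (get 'copy)
          (not (and (get 'make) (get 'length) (get 'ref) (get 'set!))))
      vt
      (hash-set vt 'copy
                (lambda (v)
                  (define n ((get 'length) v))
                  (define w ((get 'make) n))
                  (for ((i (in-range n)))
                    ((get 'set!) w i ((get 'ref) v i)))
                  w))))

;; Example: a user runtime backed by ordinary Racket vectors.
(define vt
  (fill-gaps (hash 'make (lambda (n) (make-vector n 0))
                   'length vector-length
                   'ref vector-ref
                   'set! vector-set!)))
((hash-ref vt 'copy) (vector 1 2 3)) ; => a fresh #(1 2 3)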



DL2 Thread bsp Board owner 07/11/2025 (Fri) 07:53:29 Id: a6d79a No. 12 [Reply]
DL2 is a functional programming language and intermediate representation based on System F-Omega. DL2 extends System F-Omega with iso-recursive types, recursive functions and sum-of-existential-product types. This variant of System F-Omega is representative of a large number of typed programming languages. This makes DL2 an attractive compilation target, as many languages can be compiled directly to DL2. DL2 is intentionally runtime-agnostic so that it can be compiled to many targets or embedded in other languages.
2 posts and 1 image omitted.
I was thinking about how to refer to the syntactic elements of DL2. For a while, I've been using "terms" ambiguously and referring to "value-level terms" in the working documentation, and that's confusing, since "value" already has a particular, different meaning. Instead, I think I should use the following convention:
- DL2 forms: the basic syntactic elements of DL2
- DL2 terms: a form which has a type, meaning it has a judgment "[: t]" for some type t
- DL2 types: a form which has a kind, meaning it has a judgment "[:: k]" for some kind k
- DL2 kinds: a form which is a kind, meaning it has the terminal judgment ":::"
This cleanly suggests the "term : type : kind" concept, which is a very regular idea and therefore easy to remember. The word "form" is more neutral than "term", and has the connotation of "lexical" rather than "semantic", so is less confusing. This leaves "expression" out on its own. I think the best way to use "expression" is in contrast with "value", and to let the pair sit independent of the other terms, with the only relation being that expressions and values are still assumed to be forms or be represented by forms. So in addition to the above conventions:
- DL2 value: a closed, valid DL2 form which is normalized according to some appropriate definition of "value normalization", which is potentially specific to a particular DL2 system
- DL2 expression: a closed, valid DL2 form, which may or may not be normalized according to "value normalization"
This would then allow discussing "term values", "type values" and "kind values" alongside "term expressions", "type expressions" and "kind expressions". An example of this usage would be this simple theorem, resulting immediately from the fact that there are no reduction rules for kinds: "All kind expressions are kind values." This also clears up the ugly notion of "type constructors" common in the literature, by allowing the concept of "type expressions" as an alternative.
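For illustration, the chain in this notation looks like the following (hypothetical concrete syntax, just to show the three judgments):

(lambda (x) x) [: (-> a a)]    a term, judged by a type
(-> a a) [:: *]                a type, judged by a kind
* :::                          a kind, terminally judged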
While working on DL2's first HM type inference, I wanted to put down some notes on bidirectional typing.
Bidirectional typing is a system which combines type inference with type checking. Specifically, it augments type checking with rules for exactly when types can be synthesized or inferred. The hope is to get a simple implementation of type inference which matches the straightforwardness of type checking. The attached paper goes into the theoretical reasons, but ignores some that might motivate implementers:
>the checker-inferencer's structure is easy to write and obvious: it is syntax-directed, like many functional programs, and can be written with the natural recursive structure
>it does not require unification for STLC inference
>error messages can be reported exactly where they're discovered
>as described below, a "bare lambda" is "unlikely" to occur in the function position of an apply, so STLC inference will almost always succeed, requiring only the declarations of the types of each relevant constant
The typical approach (according to the paper, the "Pfenning recipe") is to have constructors check and to have constants, variables and destructors synthesize. In the specific case of "lambda" and "apply", this means that "lambda" is checked and "apply" synthesizes. This strategy then directs the rest of the rules. In the case of apply, synthesis requires that the function's type is synthesized, but once the function type is synthesized the argument only needs to be checked. The types of all variables are known up-front, since lambda can only ever be checked. (A code sketch of this recipe follows below.)
There's an obvious issue with this approach: what do you do about instances of polymorphic constants? One of the most important uses of type inference in practice is to allow for using polymorphic constants without supplying types. The recipe described does not, on its own, allow for inferring the type arguments to a polymorphic constant, even when restricting to only prefix-polymorphic constants. It's necessary to re-introduce type variables and unification. Of course, once you have type variables and unification, you already have a major component of Hindley-Milner type inference (i.e. "the hard part"). At that point, the obvious question is: "why bother if HM does everything?"
I'm a skeptic regarding the approach, but in relation to DL2 there are some interesting aspects. The work on bidirectional typing is a good hint as to what approaches are likely to work for practical type inference. Since DL2 is more similar to GHC's Core than to Haskell itself, DL2 can accept many possible type inference mechanisms. Both bidirectional typing and unification-based inference bear similarities to logic programming without non-determinism or with limited non-determinism. Bidirectional typing tends to follow the hints of logic programming, specifically in suggesting that there are multiple modes of type-checking. At no point in the "original" bidirectional typing is it necessary to have either non-determinism or unification, but modes from logic programming are still relevant. Meanwhile, unification-based inference takes complete expressions and uses only unification to derive the most general type.
Bidirectional typing's error-checking utility is another reason why it's interesting from a DL2 perspective. The restricted format of bidirectional typing means that error-checking is guaranteed to involve local judgments, unless extended with unification. There may or may not be something to take there.
When working on DL2's HM-based macro-expander, I thought about how to work with function application. It occurred to me that, if the macro-expander took an input "return type", then it would be possible to synthesize a non-polymorphic function type from just the arguments and without generating a new type variable. This made me think about bidirectional typing and find the above paper. This idea doesn't seem entirely distinct from bidirectional typing, especially the "simultaneous input and output" system described in the attached paper. DL2 macro-expanders also inherently suggest a requirement for locality, which is one of the strengths of bidirectional types.
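Here is the promised minimal sketch of the Pfenning recipe for the STLC in Racket (illustrative only, not DL2's implementation; the ann form and the type representation are invented for the example). Constructors (lambda) check; variables, annotations and destructors (apply) synthesize.

(require racket/match)

;; Types are 'base or `(-> ,dom ,cod); contexts are association lists.

;; infer: variables and destructors synthesize their type.
(define (infer ctx e)
  (match e
    ((? symbol? x)
     (cond ((assq x ctx) => cdr)
           (else (error 'infer "unbound variable ~a" x))))
    (`(ann ,e1 ,t)                      ; explicit annotation
     (check ctx e1 t)
     t)
    (`(,f ,a)                           ; apply: synthesize f, check a
     (match (infer ctx f)
       (`(-> ,dom ,cod) (check ctx a dom) cod)
       (t (error 'infer "applied non-function of type ~a" t))))
    (_ (error 'infer "cannot synthesize a type for ~a" e))))

;; check: constructors are checked against a known type.
(define (check ctx e t)
  (match* (e t)
    ((`(lambda (,x) ,body) `(-> ,dom ,cod))
     (check (cons (cons x dom) ctx) body cod)) ; variable types known up-front
    ((_ _)
     (let ((t2 (infer ctx e)))                 ; fall back to synthesis
       (unless (equal? t t2)
         (error 'check "expected ~a, got ~a" t t2))))))

;; Example: the identity checks against (-> base base), but a bare
;; lambda in function position cannot be synthesized.
(check '() '(lambda (x) x) '(-> base base))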


>>17 While working on DL2, I realized two very good arguments in favor of this "bi-HM" system.
First, it can be used to supply binder argument judgments preemptively. This became apparent while working on case, but I believe this would also come up in pattern-matching. The issue was that the case argument types were computed separately from the non-case argument types.
Second, when expanding some programs the type is known outright, and it's easier to just generate the code directly rather than rely on the call macro to do it. In this case, it's useful to have a helper variant of the expand function that lets the user supply the (sometimes-partial) type for unification after the fact. I noticed this while working on the DL2 C FFI, specifically for casting C arithmetic types to and from DL2's word type. I realized that most of my existing options weren't very good for that particular case, while immediate generation would be simple with only the option to supply the appropriate type to dl2-expand. Supplying this type directly is also simpler than generating the macro call for a type-check. In other words, I'm likely to end up with a variant of dl2-expand which does take the type directly.
I also learned recently that Coq refers to its own type system as using "bidirectional typing" by name: https://rocq-prover.github.io/platform-docs/rocq_theory/explanation_bidirectionality_hints.html
That said, it still seems silly in an HM context to use bidirectional typing as Coq's documentation implies. Type variables in HM mean that every input type is a potential output type simply by providing a new variable. From this perspective, there's no difference between type inference and type checking. The whole implementation of bidirectional typing in an HM context is simply allowing an input type to be submitted. This also corresponds well with a logic programming interpretation of type-checking rules. The primary reason to separate input and output when using bidirectional typing with HM seems to be to avoid generating and unifying unnecessary type variables. I know that it's possible to optimize this generation and unification using HM-level matching, so that almost every form will have at most one type variable created for this purpose. In particular:
>lambda generates at most one variable for each argument and at most one variable for the return, and destructs its input variable
>apply generates at most one variable for each argument and re-uses the input variable
>constructors generate at most one variable for each argument and destruct the input variable



Networking and Protocols Bibliography bsp Board owner 05/12/2025 (Mon) 21:10:24 Id: 3e1465 No. 2 [Reply]
This thread is for collecting and evaluating papers and software related to networking and network protocols.
Project website is here: https://reticulum.network
From the PDF manual:
>Reticulum is a cryptography-based networking stack for building both local and wide-area networks with readily available hardware, that can continue to operate under adverse conditions, such as extremely low bandwidth and very high latency.
>Reticulum allows you to build wide-area networks with off-the-shelf tools, and offers end-to-end encryption, forward secrecy, autoconfiguring cryptographically backed multi-hop transport, efficient addressing, unforgeable packet acknowledgements and more.
>Reticulum enables the construction of both small and potentially planetary-scale networks, without any need for hierarchical or bureaucratic structures to control or manage them, while ensuring individuals and communities full sovereignty over their own network segments.
>Reticulum is a complete networking stack, and does not need IP or higher layers, although it is easy to utilise IP (with TCP or UDP) as the underlying carrier for Reticulum. It is therefore trivial to tunnel Reticulum over the Internet or private IP networks. Reticulum is built directly on cryptographic principles, allowing resilience and stable functionality in open and trustless networks.
>No kernel modules or drivers are required. Reticulum can run completely in userland, and will run on practically any system that runs Python 3. Reticulum runs well even on small single-board computers like the Pi Zero.
More technical details:
>The network unit is a "destination"
>Destinations are represented as 16 byte hashes of their properties
>4 types of destinations:


Edited last time by bsp on 05/13/2025 (Tue) 01:12:28.
