forgotpwd16 | 3 days ago
There's no target. Someone is just experimenting with Claude. 2026 is gonna be the year of slop. And note this project is not FOSS. (Not sure what the author is thinking. Don't they know that nowadays someone can code-launder it through Claude?)

P.S. The English-to-AST part though could be useful to other projects that want natural-ish language input without having to resort to an LLM, e.g. a tool for modifying CSVs in natural language like the one posted yesterday.
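For that kind of tool, even a tiny rule-based parser goes a long way. A toy sketch in Python (the command grammar and file name here are made up, not from any real project):

    import csv
    import re

    def parse_command(text):
        # "drop the column price" -> ("drop_column", "price"); a one-node AST.
        m = re.match(r"drop (?:the )?column (\w+)", text.strip(), re.IGNORECASE)
        if m:
            return ("drop_column", m.group(1))
        raise ValueError(f"unrecognized command: {text!r}")

    def apply_command(rows, command):
        op, col = command
        if op == "drop_column":
            return [{k: v for k, v in r.items() if k != col} for r in rows]
        raise ValueError(f"unknown op: {op}")

    with open("data.csv", newline="") as f:
        rows = list(csv.DictReader(f))
    rows = apply_command(rows, parse_command("drop the column price"))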
tristenharr | 3 days ago
Hello, author here! The license is BSL 1.1, based on the MariaDB license; the source transitions to MIT on December 24th, 2029. We're a small bootstrapped team, and I was worried that if I went full FOSS from the get-go, a big player might resell it with an easy one-click button to deploy things like the playground that's coming soon, and I'd struggle to feed myself while maintaining a potentially growing project while others reaped the fruits of the labor. I've seen that kind of thing happen a lot in recent years. I'm also aware somebody could code-launder things, but personally I'd take that as a compliment: if somebody truly wants to copy my programming language, then I'd be glad to have inspired someone haha! We're tiny, bootstrapped, and nobody has ever heard of us, so that kind of attention alone would be awesome!

It's free for individuals, orgs with < 25 people, educators, students, and non-profits. I'm still working through monetization, but I'm thinking of taking two paths: one is payment for the Z3 verification feature that lets you mathematically verify that the code won't panic at runtime; the other is payment to use the tokenizer that will be built with this.

If you look here you can see the lexicon, to get a better idea of how the English compile pipeline works: https://github.com/Brahmastra-Labs/logicaffeine/blob/main/as... This also makes the language highly configurable, as you can change any of the keywords to better suit your brain if you so choose.

Current LLMs' biggest bottlenecks, in my personal opinion, are the tokenizers and the way they get their info. Imagine if you got fed random chunks of tokens the way they do. If you could build an AST of the English and use that to tokenize things instead... well, at least I have some hare-brained theories here I want to test out. Standard LLM tokenizers are statistical: they chop words into chunks based on frequency, often breaking semantic units. This lexer could perform morphological normalization on the fly. An LLM spends millions of parameters learning that the word "The" usually precedes a noun, but this parser knows that deterministically. This could be used to break things into clauses rather than arbitrary windows. Even just as a tool for compaction, goal tracking, and rule following, my theory is this could be super useful. A semantic tokenizer could potentially feed an LLM all parse trees to teach it ambiguity.
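Roughly the kind of thing I mean, as a toy sketch in Python (this is just to illustrate the idea, not the actual lexer; the clause boundaries and suffix rules are made up):

    import re

    # Deterministic clause boundaries instead of frequency-learned subword splits.
    CLAUSE_BOUNDARIES = re.compile(r"\b(?:and|but|because|while|if|then)\b|[,.;:]")

    def clauses(text):
        # Each chunk is a clause, not an arbitrary fixed-size token window.
        parts = CLAUSE_BOUNDARIES.split(text)
        return [p.strip() for p in parts if p and p.strip()]

    def normalize(word):
        # Very crude stand-in for morphological normalization.
        w = word.lower()
        for suffix in ("ing", "ed", "s"):
            if w.endswith(suffix) and len(w) > len(suffix) + 2:
                return w[: -len(suffix)]
        return w

    text = "The parser reads the sentence, and it emits clauses while tracking goals."
    for clause in clauses(text):
        print([normalize(w) for w in clause.split()])

The point is that chunk boundaries fall on clauses and the normalization is deterministic, instead of both being learned statistically.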
There is a test suite of over 1,500 passing tests. I do utilize Claude, but I try really hard to prevent it from becoming slop. Development follows a strict RED/GREEN TDD cycle: the feature gets specced out first, the plan and spec get refined and tests get designed, then the tests get written, and then implementation occurs. It is somewhat true that I can't make as many promises about untested behavior, but I can make promises about the things that have been tested. The test suite is wired directly into CI.

I guess it is fair that some people will feel any code written with the assistance of an LLM is slop, but everyone is still working out their workflows, and you can find mine here: https://github.com/Brahmastra-Labs/logicaffeine/blob/main/Tr...

TLDR of it would be:

1. Don't vibe-code.
2. One-shot things in a loop, and if you fail, use git stash.
3. Spend 95% of the time cleaning the project and writing specifications; spend 5% of the time implementing.
4. Create a generate-docs.sh script that dumps your entire project into a single markdown file (a rough sketch is at the end of this comment).
5. Summon a council of experts and have them roleplay.
6. Use the council to create a specification for the thing you are working on.
7. Iterate and refine the specification until it is pristine.
8. Only begin to code when the specification is ready. Use TDD with red/green tests.

I'm always learning though, so please, if you've got suggestions on better ways, share them!
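For step 4, here's roughly what that script does, sketched in Python instead of shell (the extensions, skip list, and output name are just placeholders to adapt for your own project):

    from pathlib import Path

    EXTENSIONS = {".rs", ".toml", ".md"}
    SKIP_DIRS = {"target", ".git", "node_modules"}

    def dump_project(root: Path, out: Path) -> None:
        # Concatenate the project's source files into one markdown file,
        # each file under its own heading and fenced code block.
        with out.open("w", encoding="utf-8") as f:
            for path in sorted(root.rglob("*")):
                if not path.is_file() or path.suffix not in EXTENSIONS:
                    continue
                if any(part in SKIP_DIRS for part in path.parts):
                    continue
                f.write(f"## {path.relative_to(root)}\n\n")
                f.write("```" + path.suffix.lstrip(".") + "\n")
                f.write(path.read_text(encoding="utf-8", errors="replace"))
                f.write("\n```\n\n")

    dump_project(Path("."), Path("project-dump.md"))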