jongjong 6 hours ago
I don't see the point of Elixir now. LLMs work better with mainstream languages, which make up a bigger portion of their training set. I don't see the point of TypeScript either: I can make the LLM output JavaScript, and the tokens saved by not having to add types can be used to write additional tests. The aesthetics or safety features of a language no longer matter, IMO. Succinctness, functionality, and popularity of the language are now much more important factors.
HorizonXP 6 hours ago
So I know these are just benchmarks, but apparently Elixir is one of the best languages to use with AI, despite having a smaller training dataset: https://www.youtube.com/watch?v=iV1EcfZSdCM and https://github.com/Tencent-Hunyuan/AutoCodeBenchmark/tree/ma...

Furthermore, it's actually kind of annoying that the LLMs are not better than us, and still benefit from having code properly typed, well-architected, and split into modules/files. I was lamenting this the other day: the only reason we moved away from Assembly and BASIC, with GOTOs in a single huge file, was that we humans needed the organization to help us maintain context. Turns out, because of how they're trained, so do the LLMs.

So TypeScript types and tests actually do help a lot, simply because they're deterministic guardrails that the LLM can use to check its work, steering it toward producing code that actually works.
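The "deterministic guardrail" point can be sketched with a minimal example (illustrative, not from the thread): a declared type turns a plausible-but-wrong completion into a compile-time error that `tsc` reports, instead of a runtime surprise.

```typescript
// A typed helper: tsc rejects any call that passes the wrong shape.
interface User {
  id: number;
  name: string;
}

function greet(user: User): string {
  return `Hello, ${user.name} (#${user.id})`;
}

// OK: the argument matches the declared shape.
const ok = greet({ id: 1, name: "Ada" });

// A typical LLM slip — renaming a field — fails type-checking:
// greet({ id: 1, username: "Ada" });
//   error TS2561: 'username' does not exist in type 'User'.

console.log(ok); // "Hello, Ada (#1)"
```

In plain JavaScript the mistaken call would run and print `Hello, undefined`, which is exactly the class of error the commenter says types catch deterministically.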
| ||||||||||||||||||||||||||
cloud8421 6 hours ago
> I don't see the point of Elixir now. LLMs work better with mainstream languages which make up a bigger portion of their training set.

I can't say whether it works better with other languages, but I can definitely say both Opus and Codex work really well with Elixir. I work on a fairly large application, and they consistently produce well-structured working code, and are able to review existing code to find issues that are very easy to miss.

The LLM needs guidance around general patterns, e.g. "Let's use a state machine to implement this functionality", but it writes code that uses language idioms, leverages immutability and concurrency, and is generally much better than any first pass I would do manually.

I have my ethical concerns, but it would be foolish of me to claim it works poorly; if anything it makes me question my own abilities and focus in comparison (which is a whole different topic).
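The "use a state machine" guidance mentioned above can be sketched in TypeScript with a discriminated union (a hypothetical order workflow, not from the commenter's Elixir codebase), so that impossible transitions fail type-checking rather than review:

```typescript
// Hypothetical order workflow: each state carries only the data
// that can exist in that state.
type OrderState =
  | { status: "pending" }
  | { status: "paid"; paidAt: string }
  | { status: "shipped"; paidAt: string; trackingId: string };

// Transitions are plain functions whose parameter types encode the
// legal source state; tsc rejects e.g. shipping an unpaid order.
function pay(order: { status: "pending" }, paidAt: string): OrderState {
  return { status: "paid", paidAt };
}

function ship(
  order: { status: "paid"; paidAt: string },
  trackingId: string
): OrderState {
  return { ...order, status: "shipped", trackingId };
}

const paid = pay({ status: "pending" }, "2024-01-01");
if (paid.status === "paid") {
  const shipped = ship(paid, "TRK123");
  console.log(shipped.status); // "shipped"
}
```

This is the kind of pattern-level steer the comment describes: the human names the pattern, and the structure then constrains what the LLM can generate.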
jakejohnson 5 hours ago
LLMs work great with Elixir. Running tsc in a loop while generating code still catches type errors introduced by an LLM, and it's faster than generating additional tests. Elixir is also succinct and highly functional. If you can't find a specific library, it's easier than ever to build out the bare-bones functionality you need yourself, or to use NIFs, ports, etc.
dnautics 5 hours ago
> Succinctness, functionality and popularity of the language are now much more important factors.

No. I would argue that popularity per se is irrelevant: if there are a billion examples of crap code, the LLMs learn crap code. Conversely, as few as 250 documents can poison an LLM, independent of model size. [Cite anthropic paper here]

The most important thing is to conserve context. Succinctness is not really what you want, because most context is burned on thinking and tool calls (I think), not codegen.

Here is what I think is not important: strong typing; it requires a tool call anyway to fetch the type.

Here is what I think is important:

- fewer footguns
- great testing (and great testing examples)
- strong language conventions (local indicators for types, argument order conventions, etc.)
- no weird shit like __init__.py that can do literally anything, invisible to the standard code flow
techpression 4 hours ago
Your code doesn't run anywhere? Running on the BEAM is extremely helpful for a lot of things. Also, I review my LLM output, and I want that experience to be enjoyable.
WolfeReader 5 hours ago
I'm starting to see a new genre of post here in the AI bubble, where people go to topics that aren't about AI at all and comment something like "this doesn't matter because it's not AI". This is the third I've seen in a week.