| ▲ | bnchrch 13 hours ago |
| Gleam is a beautiful language, and what I wish Elixir would become (re: typing). For those that don't know, it's also built upon OTP, the Erlang VM that makes concurrency and queues a trivial problem in my opinion. Absolutely wonderful ecosystem. I've been wanting to make Gleam my primary language, but I fear LLMs have frozen programming language advancement and adoption for anything past 2021. But I am hopeful that Gleam has slid just under the closing door and that LLMs will get up to speed on it fast. |
|
| ▲ | Uehreka 12 hours ago | parent | next [-] |
| > I fear LLMs have frozen programming language advancement and adoption for anything past 2021. Why would that be the case? Many models have knowledge cutoffs in this calendar year. Furthermore I’ve found that LLMs are generally pretty good at picking up new (or just obscure) languages as long as you have a few examples. As wide and varied as programming languages are, syntactically and ideologically they can only be so different. |
| |
| ▲ | miki123211 10 hours ago | parent | next [-] | | There's a flywheel where programmers choose languages that LLMs already understand, but LLMs can only learn languages that programmers write a sufficient amount of code in. Because LLMs make it that much faster to develop software, any potential advantage you may get from adopting a very niche language is overshadowed by the fact that you can't use it with an LLM. This makes it that much harder for your new language to gain traction. If your new language doesn't gain enough traction, it'll never end up in LLM datasets, so programmers are never going to pick it up. | | |
| ▲ | crystal_revenge 9 hours ago | parent | next [-] | | > Because LLMs make it that much faster to develop software I feel as though "facts" such as this are presented to me all the time on HN, but in my everyday job I encounter devs creating piles of slop that even the most die-hard AI enthusiasts in my office can't stand and have started to push against. I know, I know, "they just don't know how to use LLMs the right way!!!", but all of the better engineers I know, the ones capable of quickly assessing the output of an LLM, tend to use LLMs much more sparingly in their code. Meanwhile, the ones that never really understood software that well in the first place are the ones building agent-based Rube Goldberg machines that ultimately slow everyone down. If we can continue living in this AI hallucination for 5 more years, I think the only people capable of producing anything of use or value will be devs that continued to devote some of their free time to coding in languages like Gleam, and continued to maintain and sharpen their ability to understand and reason about code. | | |
| ▲ | Verdex 6 hours ago | parent [-] | | This last week:
* One developer tried to refactor a bunch of GraphQL with an LLM and ended up checking in a bunch of completely broken code. Thankfully there were API tests.
* One developer has an LLM making his PRs. He slurped up my unfinished branch, PRed it, and merged (!) it. One can only guess that the approver was also using an LLM. When I asked him why he did it, he was completely baffled and assured me he would never. Source control tells a different story.
* And I forgot to turn off LLM autocomplete after setting up my new machine. The LLM wouldn't stop hallucinating non-existent constructors for non-existent classes. Bog-standard intellisense did in seconds what I needed after turning off LLM autocomplete.
LLMs sometimes save me some time. But overall I'm sitting at a pretty big amount of time wasted by them that the savings have not yet offset. |
| |
| ▲ | treyd 4 hours ago | parent | prev | next [-] | | I don't think this is actually true. LLMs have an impressive ability to do knowledge transfer between domains, and it only makes sense that that would also apply to programming languages, since the basic underlying concepts (functions, data structures, etc.) exist nearly everywhere. If this does appear to become a problem, it's not hard to apply the same RLHF infrastructure that's used to get LLMs effective at writing syntactically correct code that accomplishes sets of goals in existing programming languages to new ones. | | |
| ▲ | troupo 2 hours ago | parent [-] | | > LLMs have an impressive ability to do knowledge transfer between domains, and it only makes sense that that would also apply to programming languages, since the basic underlying concepts (functions, data structures, etc.) exist nearly everywhere. That would make sense if LLMs understood the domains and the concepts. They don't. They need a lot of training data to "map" the "knowledge transfer". Personal anecdote: Claude stopped writing Java-like Elixir only some time around summer this year (Elixir is 13 years old), and is still incapable of writing "modern HEEx", which changed some of the templating syntax in Phoenix almost two years ago. |
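For context, a minimal sketch of the HEEx change being referred to, assuming Phoenix LiveView 1.0+'s curly-brace interpolation (the module and assign names here are hypothetical):

```elixir
defmodule MyAppWeb.HelloComponent do
  use Phoenix.Component

  # "Modern" HEEx (LiveView 1.0+): curly-brace interpolation inside tag bodies,
  # where older templates used EEx-style tags such as <%= @name %>.
  def hello(assigns) do
    ~H"""
    <span>Hello, {@name}!</span>
    """
  end
end
```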
| |
| ▲ | croes 6 hours ago | parent | prev [-] | | I bet LLMs create their own version of the Jevons paradox: more trial and error because trial is cheap; in the end, less typing but hardly faster end results. |
| |
| ▲ | schrodinger 11 hours ago | parent | prev [-] | | The motivation isn’t there to create new languages for humans when you’re programming at a higher level of abstraction now (AI prompting). It’d be like inventing a new assembly language when everyone is writing code in higher level languages that compile to assembly. I hope it’s not true, but I believe that’s what OP meant and I think the concern is valid! | | |
| ▲ | abound 10 hours ago | parent | next [-] | | I would argue it's more important than ever to make new languages with new ideas as we move towards new programming paradigms. I think the existence of modern LLMs encourages designing a language with all of the following attributes:
- Simple semantics (e.g. easy to understand for developers + LLMs, code is "obviously" correct)
- Very strongly typed, so you can model even very complex domains in a way the compiler can verify
- Really good error messages, to make agent loops more productive
- [Maybe] Easily integrates with existing languages, or at least makes it easy to port from existing languages
We may get to a point where humans don't need to look at the code at all, but we aren't there yet, so making the code easy to vet is important. Plus, there's also a few bajillion lines of legacy code that we need to deal with; wouldn't it be cool if you could port it (or at least extend it) into some standardized, performant, LLM-friendly language for future development? | |
| ▲ | kevindamm 10 hours ago | parent | next [-] | | I think that LLMs will be complemented best with a declarative language, as inserting new conditions/effects in them can be done without modifying much (if any!) of the existing code. Especially if the declarative language is a logic and/or constraint-based language. We're still in early days with LLMs! I don't think we're anywhere near the global optimum yet. | |
| ▲ | aaronblohowiak 9 hours ago | parent | prev [-] | | This is why I use Rust for everything practicable now. LLMs make the tedious bits go away and I can just enjoy the fun bits. |
| |
| ▲ | pxc 11 hours ago | parent | prev | next [-] | | > It’d be like inventing a new assembly language when everyone is writing code in higher level languages that compile to assembly. Isn't that what WASM is? Or more or less what is going on when people devise a new intermediate representation for a new virtual machine? Creating new assembly languages is a useful thing that people continue to do! | |
| ▲ | merlincorey 10 hours ago | parent | prev | next [-] | | I believe prompting an AI is more like delegation than abstraction, especially considering the non-deterministic nature of the results. | |
| ▲ | sarchertech 10 hours ago | parent [-] | | It goes further than non-determinism. LLM output is chaotic: two nearly identical prompts with a single minor difference can result in two radically different outputs. |
| |
| ▲ | rapind 11 hours ago | parent | prev [-] | | We may end up using AI to create simplified bespoke subset languages that fit our preferences. Like a DSL of sorts but with better performance characteristics than a traditional DSL and a small enough surface area. |
|
|
|
| ▲ | sbuttgereit 12 hours ago | parent | prev | next [-] |
| > For those that don't know, it's also built upon OTP, the Erlang VM This isn't correct. It can compile to run on the BEAM: that is the Erlang VM. OTP isn't the Erlang VM; rather, "OTP is a set of Erlang libraries and design principles providing middle-ware to develop [concurrent/distributed/fault tolerant] systems." Gleam itself provides what I believe is a substantial subset of OTP support via a library: https://github.com/gleam-lang/otp Importantly: "Gleam has its own version of OTP which is type safe, but has a smaller feature set. [vs. Elixir, another BEAM language with OTP support]" |
| |
| ▲ | lpil 12 hours ago | parent [-] | | Hi, I’m the creator of Gleam! The comment you are replying to is correct, and you are incorrect. All OTP APIs are usable as normal within Gleam, the language is designed with it in mind, and there’s an additional set of Gleam specific additions to OTP (which you have linked there). Gleam does not have access to only a subset of OTP, and it does not have its own distinct OTP inspired OTP. It uses the OTP framework. | | |
| ▲ | tazjin 11 hours ago | parent | next [-] | | (I know Erlang well, but haven't used Gleam) The library the parent links to says this: > Not all Erlang/OTP functionality is included in this library. Some is not possible to represent in a type safe way, so it is not included. Does this mean in practice that you can use all parts of OTP, but you might lose type checking for the parts the library doesn't cover? | | |
| ▲ | lpil 10 hours ago | parent [-] | | No, it means that one specific package only offers bindings to certain parts. It’s the documentation for one library, not the language. |
| |
| ▲ | miki123211 10 hours ago | parent | prev | next [-] | | > Hi, I’m the creator of Gleam! What's the state of Gleam's JSON parsing / serialization capabilities right now? I find it to be a lovely little language, but having to essentially write every type three times (once for the type definition, once for the serializer, once for the deserializer) isn't something I'm looking forward to. A functional language that can run both on the backend (BEAM) and frontend (JS) lets one do a lot of cool stuff, like optimistic updates, server reconciliation, easy rollback on failure, etc., but that requires making actions (and likely also states) easily serializable and deserializable. | |
| ▲ | lpil 10 hours ago | parent | next [-] | | You can generate those conversions, most people do. But also, you shouldn’t think of it as writing the same type twice! If you couple your external API and your internal data model you are greatly restricting your domain modelling capability. Even in languages where JSON serialisation works with reflection, I would recommend having a distinct definition for the internal and external structure so you can have the optimal structure for each context, dodging the “lowest common denominator” problem. | |
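A minimal Elixir sketch of the decoupling described above (Gleam would look different; the `Account` struct, its fields, and the use of Jason are illustrative assumptions):

```elixir
# Internal model: shaped for domain logic, with rich types such as DateTime.
defmodule Account do
  defstruct [:id, :email, :inserted_at]
end

# External representation: defined separately and shaped for the wire, so the
# API can evolve independently of the internal struct.
defmodule AccountJSON do
  def to_wire(%Account{} = account) do
    %{
      "id" => account.id,
      "email" => account.email,
      "created" => DateTime.to_iso8601(account.inserted_at)
    }
  end

  # Jason is one common JSON encoder; any would do here.
  def encode(%Account{} = account) do
    account |> to_wire() |> Jason.encode!()
  end
end
```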
| ▲ | miki123211 10 hours ago | parent | next [-] | | I understand your point, and I agree with it in most contexts! However, for the specific use case where one assumes that the client and server are running the exact same code (and the client auto-refreshes if this isn't the case), and where serialization is only used for synchronizing between the two, decoupling the state from its representation on the wire doesn't really make sense. | | | |
| ▲ | premek 8 hours ago | parent | prev [-] | | > You can generate those conversions, most people do. Hi, what do people use to generate them? I found gserde (edit: and glerd-json). | |
| |
| ▲ | worthless-trash 13 minutes ago | parent | prev | next [-] | | Also, the LSP can now generate serialisers and deserialisers for some types IIRC. |
| ▲ | lawn 10 hours ago | parent | prev [-] | | This is also what really annoyed me when I tried out Gleam. I'm waiting for something similar to serde in Rust, where you simply tag your type and it'll generate type-safe serialization and deserialization for you. Gleam has some feature to generate the code for you via the LSP, but it's just not good enough IMHO. | | |
| ▲ | lpil 10 hours ago | parent [-] | | Multiple of such tools exist and have done for years. Serde isn’t a Rust-core project, and similarly the Gleam alternatives are not Gleam-core. | | |
| ▲ | lawn 10 hours ago | parent [-] | | Rust has macros that make serde very convenient, which Gleam doesn't have. Could you point to a solution that provides serde's level of convenience? Edit: The difference between generating code (like with Gleam) and having macros generate the code from a few tags is quite big. Small tweaks are immediately obvious with serde in Rust, but they drown in the noise of the complete serialization code with the Gleam tools. | |
| ▲ | sshine 7 hours ago | parent | next [-] | | > Rust has macros that make serde very convenient, which Gleam doesn't have. To be fair, Rust's proc macros are only locally optimal: while they're great to use, they're only okay to program. Your proc macro needs to live in another crate, and writing proc macros is difficult. Compare this to dependently typed languages or Zig's comptime: it should be easier to implement derive(Serialize, Deserialize) as a compile-time feature inside the host language. Since Gleam doesn't have Rust's derivation, it leaves room for a future where this is solved even better. | |
| ▲ | lpil 9 hours ago | parent | prev [-] | | In Gleam, code generators are most commonly used, similar to C#, Go, or Elm. | |
|
|
|
| |
| ▲ | sbuttgereit 9 hours ago | parent | prev [-] | | Fair enough, but to be fair to my statements, the quotes I chose were largely from gleam-lang.org or the Gleam OTP library. Take, for example, this section of the Gleam website's FAQ: https://gleam.run/frequently-asked-questions/#how-does-gleam... "Elixir has better support for the OTP actor framework. Gleam has its own version of OTP which is type safe, but has a smaller feature set." At least on the surface, "but has a smaller feature set" suggests that there are features left off the table, which I think it would be fair to read as a subset of support.

If I look at this statement from the Gleam OTP library `readme.md`: "Not all Erlang/OTP functionality is included in this library. Some is not possible to represent in a type safe way, so it is not included. Other features are still in development, such as further process supervision strategies." That quote leaves the impression that OTP is not fully supported and therefore only a subset is. It doesn't expound further to say unsupported OTP functionality is alternatively available by accessing the Erlang modules/functions directly or through other mechanisms.

In all of this I'll take your word for it over the website and readme files; these things are often not written directly by the principals and are often not kept as up-to-date as you'd probably like. Still, even taking that at face value, I think it leaves some questions open. What is meant by supporting all of OTP? Where the documentation and library readme equivocate about full OTP support, are there trade-offs? Is "usable as normal" usable as normal for Erlang or as normal for Gleam? For example, are the parts left out of the library available via directly accessing the Erlang modules/functions, but only at the cost of abandoning the Gleam type safety guarantees for those of Erlang? How does this hold for Gleam's JavaScript compilation target?

As you know, Elixir also provides for much OTP functionality via direct access to the Erlang libraries. However, there I expect the distinction between Elixir support and the Erlang functionality to be substantially more seamless than with Gleam: Elixir integrates the Erlang concepts of typing (etc.) much more directly than Gleam does.

If, however, we're really talking about full OTP support in Gleam while not losing the reasons you might choose Gleam over Elixir or Erlang, which I think is mostly going to be about the static typing... then yes, I'm very wrong. If not... I could see how, strictly speaking, I'm wrong, but perhaps not completely wrong in spirit. | |
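As a side note on the "direct access" pattern mentioned here, a minimal Elixir sketch: Erlang/OTP modules are called through their atom names with no wrapper needed (`MyServer` below is a hypothetical callback module):

```elixir
# Erlang/OTP modules are addressed as atoms (:queue, :timer, :gen_server, ...)
# and called directly from Elixir.
q = :queue.new()
q = :queue.in("hello", q)
{{:value, item}, _rest} = :queue.out(q)
IO.puts(item)
# => hello

# The same applies to OTP behaviours, e.g. starting a gen_server through the
# Erlang API rather than Elixir's GenServer wrapper:
# {:ok, pid} = :gen_server.start_link(MyServer, :initial_state, [])
```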
| ▲ | lpil 9 hours ago | parent [-] | | Ah, that’s good feedback. I agree that those docs are misleading; I’ll fix them ASAP. > Elixir also provides for much OTP functionality via direct access to the Erlang libraries. This is the norm in Gleam too! Gleam’s primary design constraint is interop with Erlang code, so using these libraries is straightforward and commonplace. |
|
|
|
|
| ▲ | agos 12 hours ago | parent | prev | next [-] |
| The Erlang VM is called BEAM, not OTP. Sadly, Gleam's implementation of OTP is not at the same level as Elixir's or Erlang's. |
| |
| ▲ | lpil 12 hours ago | parent [-] | | Gleam uses regular OTP; it doesn’t have a distinct OTP-inspired framework. Source: I’m the author of Gleam. | |
| ▲ | girvo 10 hours ago | parent | next [-] | | I wonder why so many have got this wrong across this thread? Was it true once upon a time or something, or have people just misunderstood your docs or similar? | | |
| ▲ | lpil 10 hours ago | parent [-] | | OTP is a very complex subject and quite unusual in its scope, and it’s not overly clear what it even is. Even in Erlang and Elixir it’s commonly confused, so I think it’s understandable that Gleam has the same problem, even more so given its more distinct programming style. |
| |
| ▲ | andy_ppp 10 hours ago | parent | prev [-] | | "Big Elixir" must be paying people to misunderstand Gleam today eh ;-) |
|
|
|
| ▲ | market_hacker 3 hours ago | parent | prev | next [-] |
| I just implemented a project in Elixir with LLM support and would never have considered that before (I had never used Elixir before). So who knows, maybe it will help adoption? |
|
| ▲ | troupo 2 hours ago | parent | prev | next [-] |
| > what I wish Elixir would become (re: typing). Elixir is slowly rolling out set-theoretic typing: https://hexdocs.pm/elixir/main/gradual-set-theoretic-types.h... |
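For a sense of what that looks like in practice, a minimal sketch of the kind of mismatch the gradual checker can already flag at compile time (assuming a recent Elixir release, roughly 1.18+, where argument types are inferred from patterns and guards; exact warning text varies):

```elixir
defmodule Example do
  # The checker infers that x must be an integer from the guard.
  def double(x) when is_integer(x), do: x * 2

  def run do
    # A binary can never satisfy that clause, so the compiler emits an
    # "incompatible types" warning instead of waiting for a
    # FunctionClauseError at runtime.
    double("two")
  end
end
```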
|
| ▲ | innocentoldguy 6 hours ago | parent | prev [-] |
| I don’t mean to minimize the huge effort by the Gleam team; however, Elixir cannot become Gleam without breaking OTP/BEAM in the same ways Gleam does. As it stands now, Elixir is the superior language between the two, if using the full Erlang VM is your goal. |
| |
| ▲ | worthless-trash 8 minutes ago | parent [-] | | I use many of the OTP functions in Gleam on the regular; what functionality can't I call? Gleam can call any Erlang function, and can somewhat handle the idc types [I'm sure it has another name]. Did I miss something that Gleam fails on? Because this is one of my concerns. |
|