| ▲ | throwawa14223 4 days ago |
| It's getting harder to find IDEs that properly boycott LLMs. |
|
| ▲ | ants_everywhere 4 days ago | parent | next [-] |
| In a similar vein I can barely find an OS that refuses to connect to the internet |
| |
|
| ▲ | jama211 4 days ago | parent | prev | next [-] |
| “Boycott” is a pretty strong term. I’m sensing a strong dislike of ai from you which is fine but if you dislike a feature most people like it shouldn’t be surprising to you that you’ll find yourself mostly catered to by more niche editors. |
| |
| ▲ | isodev 4 days ago | parent [-] | | I think it's a pretty good word, let's not forget how LLMs learned about code in the first place... by "stealing" all the snippets they can get their curl hands on. | | |
| ▲ | astrange 4 days ago | parent | next [-] | | And by reading the docs, and by autogenerating code samples and testing them against verifiers, and by paying a lot of people to write sample code for sample questions. | | |
| ▲ | troupo 4 days ago | parent [-] | | Yeah, none of that happened with LLMs | | |
| ▲ | khafra 4 days ago | parent [-] | | https://openai.com/index/prover-verifier-games-improve-legib... OpenAI has been doing verifier-guided training since last year. No SOTA model was trained without verified reward training for math and programming. | | |
| ▲ | troupo 4 days ago | parent [-] | | Your claim: "by reading the docs, and by autogenerating code samples and testing them against verifiers, and by paying a lot of people to write sample code for sample questions." Your link: "Grade school math problems from a hardcoded dataset with hardcoded answers" [1] It really is the same thing. [1] https://openai.com/index/solving-math-word-problems/ --- start quote --- GSM8K consists of 8.5K high quality grade school math word problems. Each problem takes between 2 and 8 steps to solve, and solutions primarily involve performing a sequence of elementary calculations using basic arithmetic operations (+ − × ÷) to reach the final answer. --- end quote --- | | |
| ▲ | khafra 4 days ago | parent [-] | | My two claims: 1. OpenAI has been doing verifier-guided training since last year. 2. No SOTA model was trained without verified reward training for math and programming. I supported the first claim with a document describing what OpenAI was doing last year; the extrapolation should have been straightforward, but it's easy for people who aren't tracking AI progress to underestimate the rate at which it occurs. So, here's some support for my second claim: https://arxiv.org/abs/2507.06920
https://arxiv.org/abs/2506.11425
https://arxiv.org/abs/2502.06807 | | |
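The verified-reward setup being debated above can be sketched roughly like this (a toy illustration using the GSM8K `####` answer delimiter; the function names are hypothetical and this is nothing like any lab's actual pipeline):

```python
# Minimal sketch of a verifier-guided reward for math answers, GSM8K-style.
# The verifier is trivial here: exact match against a known gold answer.

def extract_final_answer(completion: str) -> str:
    """Pull out whatever follows the '####' delimiter GSM8K uses for answers."""
    return completion.split("####")[-1].strip()

def verified_reward(completion: str, gold_answer: str) -> float:
    """Reward 1.0 only when the model's final answer matches the gold answer."""
    return 1.0 if extract_final_answer(completion) == gold_answer else 0.0

# A (made-up) sampled completion and its dataset label:
sample = "Tom has 3 boxes of 4 apples, so 3 * 4 = 12. #### 12"
print(verified_reward(sample, "12"))  # -> 1.0
```

For code instead of math, the exact-match check is typically replaced by running the generated program against unit tests, but the reward shape is the same.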
| ▲ | troupo 4 days ago | parent [-] | | > the extrapolation should have been straightforward, Indeed. "By late next month you'll have over four dozen husbands" https://xkcd.com/605/ > So, here's some support for my second claim: I don't think any of these links support the claim that "No SOTA model was trained without verified reward training for math and programming" https://arxiv.org/abs/2507.06920: "We hope this work contributes to building a scalable foundation for reliable LLM code evaluation" https://arxiv.org/abs/2506.11425: A custom agent with a custom environment and a custom training dataset on ~800 predetermined problems. Also "Our work is limited to Python" https://arxiv.org/abs/2502.06807: The only one that somewhat obliquely refers to your claim |
|
|
|
|
| |
| ▲ | jama211 3 days ago | parent | prev [-] | | Ah the classic “I don’t want to acknowledge how right that person is about their point, so instead I’ll ignore what they said and divert attention to another point entirely”. You’re just angry and adding no value to this conversation because of it |
|
|
|
| ▲ | armadyl 4 days ago | parent | prev | next [-] |
| If you're on macOS there's Code Edit as a native solution (fully open source, not VC backed, MIT licensed), but it's currently in active development: https://www.codeedit.app/. Otherwise there's VSCodium which is what I'm using until I can make the jump to Code Edit. |
| |
| ▲ | yycettesi 4 days ago | parent [-] | | Okay, then run the rack through without dough first, then you can run it with dough. If you do a handover between 13:30 and 14:00, please let the shift leader know. Bye |
|
|
| ▲ | internet2000 4 days ago | parent | prev | next [-] |
| Just don't use the features. |
|
| ▲ | computerliker 4 days ago | parent | prev | next [-] |
| https://kate-editor.org/ |
| |
| ▲ | gary_0 4 days ago | parent | next [-] | | I couldn't get it to properly syntax highlight and autosuggest even after spending over an hour hunting through all sorts of terrible documentation for kate, clangd, etc. It also completely hides all project files that aren't in source control, and the only way to stop it is to disable the git plugin. What a nightmare. Maybe I'll try VSCodium next. | | |
| ▲ | typpilol 4 days ago | parent [-] | | I thought vscodium was just vscode but open source. Won't any issues in vscode also be present in vscodium? | | |
| ▲ | gary_0 4 days ago | parent | next [-] | | It can't access most Microsoft online services including Copilot, which happens to disable most of the features I don't want. (I understand this is both by design and because Microsoft forbids unofficial forks from accessing those services.) | | |
| ▲ | ygritte 4 days ago | parent [-] | | However, MS do everything they can to make plugins not work in VSCodium. And the plugin marketplaces are separate now. |
| |
| ▲ | sneak 4 days ago | parent | prev [-] | | Many of the popular features in VS Code are provided by plugins that are not open source and thus not provided with VSCodium. |
|
| |
| ▲ | isodev 4 days ago | parent | prev [-] | | Kate is brilliant. |
|
|
| ▲ | qbane 4 days ago | parent | prev | next [-] |
| How about Sublime Text? (Not really an IDE, just a text editor.) |
|
| ▲ | kristopolous 4 days ago | parent | prev | next [-] |
| Neovim, emacs? |
| |
| ▲ | mr_toad 4 days ago | parent | next [-] | | Amusing that Emacs, which came out of the MIT AI lab, heavily uses Lisp, a language that used to be en vogue for AI research. | | |
| ▲ | PessimalDecimal 4 days ago | parent | next [-] | | Amusing is one word for it. Expert systems were all the rage until they weren't. We'll see how LLMs do by comparison. | | |
| ▲ | vrighter 4 days ago | parent | next [-] | | The so-called "guardrails" used for LLMs are very close to expert systems, imo. The landscape of potentially malicious inputs in plain English is practically infinite, with no enforced structure for the queries you make, which means those "guardrails" are, in effect, an expert system: an ever-growing pile of if-then statements. Didn't work then, won't work now. | |
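The "pile of if-then statements" style of guardrail described above might look like this in miniature (a toy illustration with hypothetical rules; production systems also layer on learned classifiers):

```python
# Toy rule-based guardrail: each blocked pattern is one more hardcoded "if".
# The list only ever grows, which is the expert-system failure mode in question.

BLOCKED_PATTERNS = [
    "ignore previous instructions",
    "reveal your system prompt",
]

def guardrail_allows(user_input: str) -> bool:
    """Return True if the input passes, False if any hardcoded rule fires."""
    lowered = user_input.lower()
    for pattern in BLOCKED_PATTERNS:
        if pattern in lowered:
            return False
    return True

print(guardrail_allows("What's the weather like?"))       # -> True
print(guardrail_allows("Ignore previous instructions."))  # -> False
```

Any phrasing not anticipated by the rule list sails through, which is the point of the comparison.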
| ▲ | kristopolous 3 days ago | parent | prev [-] | | People are trying to achieve the same thing - rules based systems with decision trees. That's still one of the most lucrative use cases. |
| |
| ▲ | monkeyelite 4 days ago | parent | prev [-] | | You are word associating. The ideas in each part of that chain are unrelated. |
| |
| ▲ | guluarte 4 days ago | parent | prev [-] | | neovim will support llms natively (through a language server) https://github.com/neovim/neovim/pull/33972 | | |
| ▲ | what 4 days ago | parent | next [-] | | That’s not really native support for LLMs? It’s supporting some LSP feature for completions. | |
| ▲ | justatdotin 4 days ago | parent | prev | next [-] | | LSP != LLM | |
| ▲ | brigandish 4 days ago | parent | prev | next [-] | | You have to enable it and install a language server, that's not the same as an LLM being baked in. | | |
| ▲ | simonh 4 days ago | parent [-] | | It’s not baked in, in that sense. You still have to enable it in XCode and link it to a Claude account. It’s basically the same. | | |
| ▲ | brigandish 4 days ago | parent [-] | | At the level of "Having to configure something to use it", they're the same, but then that's the same as the hundreds of other config options then. I think we can be slightly more precise than that. In Neovim the choice of language server and the choice of LLM is up to the user, (possibly even the choice of this API, I believe, having only skimmed the PR) while both of those choices are baked in to XCode, so they're not the same thing. | | |
| ▲ | simonh 4 days ago | parent [-] | | That's fair enough, but it's the opposite complaint, that XCode's LLM support is more limited because it is proprietary. That's a perfectly valid and reasonable objection, of course. |
|
|
| |
| ▲ | vrighter 4 days ago | parent | prev [-] | | Neovim already supports LSP servers. The fact that a language server exists for something doesn't make neovim (or any other editor) "support" that technology. It doesn't; what it supports is LSP, and it neither knows nor cares what language/slop the LSP is working with. |
|
|
|
| ▲ | carstenhag 4 days ago | parent | prev | next [-] |
| Just disable the feature/plugin in your IDE of choice. Android Studio/IntelliJ: https://i.imgur.com/RvRMvvK.png |
|
| ▲ | ssk42 4 days ago | parent | prev | next [-] |
| Gosh, it's almost like a proper IDE has features synonymous with LLMs |
|
| ▲ | drusepth 4 days ago | parent | prev | next [-] |
| Ironically, you could probably vibe code your own. |
| |
| ▲ | 4 days ago | parent | next [-] | | [deleted] | |
| ▲ | rs186 4 days ago | parent | prev | next [-] | | Good luck getting just the scroll bar right with vibe coding. You'd be surprised how much engineering goes into making that part work smoothly. | | |
| ▲ | CamperBob2 4 days ago | parent [-] | | If enough examples are in-distribution, the model's scroll bar implementation will work just fine. (Eventually, after the human learns what to ask for and how to ask for it.) Why wouldn't it? | | |
| ▲ | shakna 4 days ago | parent [-] | | Most programs today regularly have bugs with scrolling. Thus, an LLM will produce for you... A buggy piece of code. | | |
| ▲ | adastra22 4 days ago | parent [-] | | LLMs are not Xerox machines. They can, in fact, produce better code than is in their training set. | | |
| ▲ | mirkodrummer 4 days ago | parent [-] | | That is funny for how much is wrong. Ask the LLMs to vibe code a text editor and you'll get a React app using Supabase. Engineering !== Token prediction | | |
| ▲ | adastra22 4 days ago | parent | next [-] | | Non sequitur? I have used agentic coding tools to solve problems that have literally never been solved before, and it was the AI, not me, that came up with the answer. If you look under the hood, the multi-layered perceptrons in the attention heads of the LLM are able to encode quite complex world models, derived from compressing its training set in a way which is formally as powerful as reasoning. These compressed model representations are accessible when prompted correctly, and can express genuinely new and innovative thoughts NOT in the training set. | | |
| ▲ | mirkodrummer 4 days ago | parent [-] | | > I have used agentic coding tools to solve problems that have literally never been solved before, and it was the AI, not me, that came up with the answer. Would you show us? Genuinely asking | | |
| ▲ | adastra22 3 days ago | parent [-] | | Unfortunately confidentiality prevents me from doing so—this was for work. I know it is something new that hasn’t been done before because we’re operating in a very niche scientific field where everyone knows everyone, and one person (me, or a member of my team) can be up to speed on what everyone else is doing. It’s happened a couple of times now that it pops out novel results. In computational chemistry, machine-learned potentials trained with transformer models have already resulted in publishable new chemistry. Those papers aren't out yet, but expect them within a year. | | |
| ▲ | mirkodrummer 3 days ago | parent [-] | | [flagged] | | |
| ▲ | adastra22 3 days ago | parent [-] | | I'm sorry you're so sour on this. It's an amazing and powerful technology, but you have to be able to adjust your own development style to make any use of it. |
|
|
|
| |
| ▲ | CamperBob2 3 days ago | parent | prev | next [-] | | Ask the LLMs to vibe code a text editor, and you'll get pretty much what you deserve in return for zero effort of your own. Ask the best available models -- emphasis on models -- for help designing the text editor at a structural rather than functional level first, being specific about what you want and emphasizing component-level test whenever possible, and only then follow up with actual code generation, and you'll get much better results. | |
| ▲ | drusepth 3 days ago | parent | prev [-] | | I think this comment exposes an important point to make: people have different opinions of what "vibe coding" even means. If I were to ask an LLM to vibe code a text editor, I guarantee you I wouldn't get a React app using Supabase -- because I'd give it pages of requirements documentation and tell it not only what I want, but the important decisions on how to make it. Obviously no model is going to one-shot something like a full text editor, but there's an ocean of difference between defining vibe coding as prompting "Make me a text editor" versus spending days/weeks going back and forth on architecture and implementation with a model while it's implementing things bottom-up. Both seem like common definitions of the term, but only one of them will _actually_ work here. |
|
|
|
|
| |
| ▲ | ZYbCRq22HbJ2y7 4 days ago | parent | prev [-] | | Do you really think so? Have you ever explored the source of something like: https://github.com/JetBrains/intellij-community | | |
| ▲ | PessimalDecimal 4 days ago | parent | next [-] | | Doesn't have to. The LLM will do it! We're done with code, aren't we? | | |
| ▲ | wfhrto 4 days ago | parent [-] | | Code is still there, but humans are done dealing with it. We're at a higher level of abstraction now. LLMs are like compilers, operating at a higher level. Nobody programs assembly language any more, much less machine language, even though the machine language is still down there in the end. | | |
| ▲ | ZYbCRq22HbJ2y7 4 days ago | parent [-] | | > Nobody programs assembly language They certainly do, and I can't really follow the analogy you are building. > We're at a higher level of abstraction now. To me, an abstraction higher than a programming language would be natural language or some DSL that approximates it. At the moment, I don't think most people using LLMs are reading paragraphs to maintain code. And LLMs aren't producing code in natural language. That isn't an abstraction over language; it is an abstraction over your computer use to produce the code in a language. If anything, you are abstracting yourself away. Furthermore, if I am following you, you are basically saying you have to make a call to a (free or paid) model to explain your code every time you want to alter it. I don't know how insane that sounds to most people, but to me, it sounds bat-shit. |
|
| |
| ▲ | drusepth 3 days ago | parent | prev [-] | | I've worked in 3 different WYSIWYG editors for web and desktop applications over the years, lightly contributed to a handful of other open-source editors, and spent plenty of time building my own personal editors from scratch (and am currently using gpt-5 to fix my own human bugs in a rewrite of the Notebook.ai text editor that I re-re-implemented ~8 years ago). Editors are incredibly complex and require domain knowledge to guide agents toward the correct architecture and implementation (and away from the usual naive pitfalls), but in my experience the latest models reason about and implement features/changes just fine. |
|
|
|
| ▲ | matthewmacleod 4 days ago | parent | prev | next [-] |
| Of course it is, because that would be an aggressively stupid thing to do. Like boycotting syntax highlighting, spellchecking, VCS integration or a dozen other features that are the whole point of IDEs. If you don’t want to use LLM coding assistants – or if you can’t, or it’s not a technology suitable for your work – nobody cares. It’s totally fine. You don’t need to get performatively enraged about it. |
|
| ▲ | guluarte 4 days ago | parent | prev [-] |
| even nvim is getting native support for llms |
| |
| ▲ | __MatrixMan__ 4 days ago | parent | next [-] | | It doesn't matter how they feel about LLMs, ignoring their battle hardened plugin system and going native would be bad architecture. | | |
| ▲ | tcoff91 4 days ago | parent [-] | | It’s just native support for ghost text. It’s not llm specific |
| |
| ▲ | tcoff91 4 days ago | parent | prev | next [-] | | You have to opt in and set up a language server | |
| ▲ | Insanity 4 days ago | parent | prev [-] | | Is it? Link? | | |
|