| ▲ | whoknowsidont 3 days ago |
| It's not. And if your team is doing this you're not "advanced." Lots of people are outing themselves these days about the complexity of their jobs, or lack thereof. Which is great! But it's not a +1 for AI, it's a -1 for them. |
|
| ▲ | NewsaHackO 3 days ago | parent | next [-] |
| Part of the issue is that I think you are underestimating the number of people not doing "advanced" programming. If it's around 80-90%, then that's a lot of +1s for AI. |
| |
| ▲ | friendzis 2 days ago | parent | next [-] | | Wrong. 80% of code not being advanced is strictly not the same as 80% of people not doing advanced programming. | | |
| ▲ | NewsaHackO 2 days ago | parent [-] | | I completely understand the difference, and I am standing by my statement that 80-90% of programmers are not doing advanced programming at all. |
| |
| ▲ | whoknowsidont 3 days ago | parent | prev | next [-] | | Why do you feel like I'm underestimating the # of people not doing advanced programming? | | |
| ▲ | NewsaHackO 3 days ago | parent [-] | | Theoretically, if AI can do 80-90% of programming jobs (the ones not in the "advanced" group), that would be an unequivocal +1 for AI. | | |
| ▲ | whoknowsidont 3 days ago | parent [-] | | I think you're crossing some threads here. | | |
| ▲ | NewsaHackO 3 days ago | parent [-] | | "It's not. And if your team is doing this you're not "advanced."
Lots of people are outing themselves these days about the complexity of their jobs, or lack thereof. Which is great! But it's not a +1 for AI, it's a -1 for them. "
Is you, right? | | |
|
|
| |
| ▲ | 3 days ago | parent | prev [-] | | [deleted] |
|
|
| ▲ | 9rx 3 days ago | parent | prev | next [-] |
| It's true for me. I type in what I want and then the AI system (compiler) generates the code. Doesn't everyone work that way? |
| |
| ▲ | zahlman 3 days ago | parent | next [-] | | Describing a compiler as "AI" is certainly a take. | | |
| ▲ | conradev 2 days ago | parent | next [-] | | I used to hand-roll the assembly, but now I delegate that work to my agent, clang. I occasionally override clang or give it hints, but it gets it right most of the time. clang doesn't "understand" the hints, because it doesn't "understand" anything, but it knows what to do with them! Just like codex. | | |
| ▲ | lm28469 2 days ago | parent [-] | | Given an input, clang will always give the same output; not quite the same for LLMs. Also, nobody ever claimed compilers were intelligent or that they "understood" things. | | |
| ▲ | conradev 2 days ago | parent | next [-] | | The determinism depends on the architecture of the model! Symbolica is working on more deterministic/quicker models: https://www.symbolica.ai — I wish it were that easy, but compiler determinism is hard, too: https://reproducible-builds.org | |
| ▲ | 9rx 2 days ago | parent | prev | next [-] | | An LLM will also give the same output for the same input when the temperature is zero[1]. It only becomes non-deterministic if you choose for it to be. Which is the same for a C compiler. You can choose to add as many random conditionals as you so please. But there is nothing about a compiler that implies determinism. A compiler is defined by function (taking input on how you want something to work and outputting code), not design. Implementation details are irrelevant. If you use a neural network to compile C source into machine code instead of more traditional approaches, it most definitely remains a compiler. The function is unchanged. [1] "Faulty" hardware found in the real world can sometimes break this assumption. But a C compiler running on faulty hardware can change the assumption too. | | |
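To make that concrete, here is a minimal Python sketch of the sampling step (the toy logits stand in for a real model's forward pass; this illustrates the principle, not a production inference stack):

    import math
    import random

    def pick_token(logits, temperature=0.0):
        # Temperature 0 means greedy decoding: a pure argmax, so
        # identical logits always yield the identical token.
        if temperature == 0.0:
            return max(range(len(logits)), key=logits.__getitem__)
        # Randomness enters only here, when you opt into sampling.
        weights = [math.exp(x / temperature) for x in logits]
        return random.choices(range(len(logits)), weights=weights)[0]

    logits = [0.1, 2.3, -1.0, 2.2]  # pretend model output
    assert all(pick_token(logits) == 1 for _ in range(1000))  # always token 1

The determinism question lives entirely in that sampling step, which is exactly where temperature applies.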
| ▲ | whimsicalism 2 days ago | parent | next [-] | | Currently, LLMs from major providers are not deterministic with temp=0; there are startups focusing on this issue (among others): https://thinkingmachines.ai/blog/defeating-nondeterminism-in... |
| ▲ | lm28469 2 days ago | parent | prev [-] | | You can test that yourself in 5 seconds and see that even at a temp of 0 you never get the same output | | |
| ▲ | 9rx 2 days ago | parent [-] | | Works perfectly fine for me. Did you do that stupid HN thing where you failed to read the entire comment and then went off to try it on faulty hardware? | | |
| ▲ | lm28469 2 days ago | parent [-] | | No I did that HN thing where I went to an LLM, set temp to 0, pasted your comments in and got widely different outputs every single time I did so | | |
| ▲ | 9rx 2 days ago | parent | next [-] | | "Went" is a curious turn of phrase, but I take it to mean that you used an LLM on someone else's hardware of unknown origin? How are you ensuring that said hardware isn't faulty? It is a known condition. After all, I already warned you of it. Now try it on deterministic hardware. | | |
| ▲ | lm28469 a day ago | parent [-] | | Feel free to share your experiments. I cannot reproduce them, but you seem very sure about your stance, so I am convinced you gave it a try, right? | | |
| ▲ | 9rx a day ago | parent [-] | | Do you need to reproduce them? You can simply look at how an LLM is built, no? It is not exactly magic. But what are you asking for, exactly? Do you want me to copy and paste the output (so you can say it isn't real)? Are you asking for access to my hardware? What does sharing mean here? |
|
| |
| ▲ | 2 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | NewsaHackO 2 days ago | parent | prev [-] | | Was the seed set to the same value every time? | | |
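For illustration, a minimal standard-library Python sketch of what a fixed seed buys you (real inference stacks add hardware-level reduction-order effects on top of this):

    import random

    def draws(seed, n=5):
        # The same seed replays exactly the same "random" draws.
        rng = random.Random(seed)
        return [rng.random() for _ in range(n)]

    assert draws(42) == draws(42)  # same seed, same output
    assert draws(42) != draws(43)  # different seed, different output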
|
|
|
| |
| ▲ | bewo001 2 days ago | parent | prev [-] | | Hm, some things compilers do during optimization would have been labelled AI during the last AI bubble. |
|
| |
| ▲ | 3 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | agumonkey 3 days ago | parent | prev | next [-] | | It's something that crossed my mind too, honestly: natural-language-to-code translation. | | |
| ▲ | skydhash 3 days ago | parent [-] | | You can also do search query to code translation by using GitHub or StackOverflow. |
| |
| ▲ | parliament32 3 days ago | parent | prev [-] | | Compilers are probably closer to "intelligence" than LLMs. |
| |
| ▲ | rfrey 3 days ago | parent | prev [-] | | I understand what you're getting at, but compilers are deterministic. AI isn't just another tool, or just a higher level of program specification. | | |
| ▲ | 7952 2 days ago | parent | next [-] | | This is all a bit above my head. But the effects a compiler has on the computer are certainly not deterministic. It might do what you want, or it might hit a weird driver bug or set off a false positive in some security software. And the more complex stacks get, the more this happens. |
| ▲ | dust42 2 days ago | parent | prev | next [-] | | And so is "AI". Unless you add randomness AKA raise the temperature. | | |
| ▲ | rfrey 2 days ago | parent | next [-] | | If you and I put the same input into GCC, we will get the same output (counting flags and config as input). The same is not true for an LLM. | | |
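One quick way to check this claim yourself, sketched in Python and assuming gcc is on your PATH (reproducible-builds problems usually stem from embedded timestamps and build paths, not the compilation step itself):

    import hashlib
    import pathlib
    import subprocess

    pathlib.Path("t.c").write_text("int add(int a, int b) { return a + b; }\n")

    def build(out):
        # Same source, same flags: compare hashes of the resulting objects.
        subprocess.run(["gcc", "-O2", "-c", "t.c", "-o", out], check=True)
        return hashlib.sha256(pathlib.Path(out).read_bytes()).hexdigest()

    print(build("a.o") == build("b.o"))  # True on a typical setup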
| ▲ | 9rx 2 days ago | parent [-] | | > The same is not true for an LLM. Incorrect. LLMs are designed to be deterministic (when temperature=0). Only if you choose for them to be non-deterministic are they so. Which is no different in the case of GCC. You can add all kinds of random conditionals if you had some reason to want to make it non-deterministic. You never would, but you could. There are some known flaws in GPUs that can break that assumption in the real world, but in theory (and where you have working, deterministic hardware) LLMs are absolutely deterministic. GCC also stops being deterministic when the hardware breaks down. A cosmic bit flip is all it takes to completely defy your assertion. |
| |
| ▲ | 2 days ago | parent | prev [-] | | [deleted] |
| |
| ▲ | 9rx 3 days ago | parent | prev [-] | | [flagged] | | |
| ▲ | rfrey 2 days ago | parent | next [-] | | > Nobody was ever talking about AI. If you want to participate in the discussions actually taking place, not just the one you imagined in your head Wow. No, I actually don't want to participate in a discussion where the default is random hostility and immediate personal attack. Sheesh. | | |
| ▲ | 9rx 2 days ago | parent [-] | | [flagged] | | |
| ▲ | tomhow a day ago | parent [-] | | What the hell? You can't comment like this on HN, no matter how right you are or feel you are. The guidelines make it clear we're trying for something better here. These guidelines are particularly relevant:

Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."

Please don't fulminate. Please don't sneer, including at the rest of the community.

Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.

Please don't post shallow dismissals...

Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that".

HN is only a place where people want to participate because others make the effort to keep the standards up. Please do your part to make this a welcoming place rather than a mean one. https://news.ycombinator.com/newsguidelines.html |
|
| |
| ▲ | 2 days ago | parent | prev [-] | | [deleted] |
|
|
|
|
| ▲ | XenophileJKO 3 days ago | parent | prev [-] |
I'm beginning to think most "advanced" programmers are just poor communicators. It mostly comes down to being able to concisely and eloquently define what you want done. It is also important to understand the default tendencies and biases of the model, so you know where to lean in a little. Occasionally you need to provide reference material.

The capabilities have grown dramatically in the last 6 months. I have an advantage because I have been building LLM-powered products, so I know mechanically what they are and are not good with. For example: want it to wire up an API with 250+ endpoints with a harness? You had better create (or have it create) a way to cluster and audit coverage.

Generally the failures I hear about from "advanced" programmers are things like algorithmic complexity, concurrency, etc., and these models can do this stuff given the right motivation/context. You just need to understand what "assumptions" the model is making and know when you need to be explicit.

Actually, one thing most people don't understand: they try to say "Do (A), don't do (B)," etc., defining granular behavior, which is a fundamentally brittle way to interact with the models. Far more effective is defining the persona and motivation for the agent. This creates the baseline behavior profile for the model in that context. Not "don't make race conditions", more like "You value and appreciate elegant concurrent code." |
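To sketch the contrast in Python (the prompt text here is illustrative, not a tested recipe):

    # Brittle: enumerating granular do/don't rules.
    granular = ("Do: guard shared state with a mutex. "
                "Don't: create race conditions. "
                "Don't: block the event loop.")

    # More robust: a persona that sets a baseline behavior profile.
    persona = ("You are a senior systems engineer. You value and appreciate "
               "elegant concurrent code, and you state the assumptions you make.")

    messages = [{"role": "system", "content": persona},
                {"role": "user", "content": "Wire up the worker pool."}]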
| |
| ▲ | tjr 3 days ago | parent | next [-] | | Some of the best programmers I know are very good at writing and/or speaking and teaching. I struggle to believe that “advanced programmers” are poor communicators. | | |
| ▲ | XenophileJKO 2 days ago | parent [-] | | Genuine reflection question: are these excellent communicators good at using LLMs to write code? My supposition was that many programmers who say their programming domain is too advanced and that LLMs don't work for their kind of code are simply bad at concisely describing what is required. | | |
| ▲ | tjr 2 days ago | parent [-] | | Most good programmers that I know personally work, as do I, in aerospace, where LLMs have not been adopted as quickly as some other fields, so I honestly couldn’t say. |
|
| |
| ▲ | interstice 3 days ago | parent | prev | next [-] | | > I'm beginning to think most "advanced" programmers are just poor communicators. This is an interesting take, considering that programmers are experts in translating what someone has asked for (however vaguely) into code. I think what you're referring to is the transition from "write code that does X," which is very concrete, to "trick an AI into writing the code I would have written, only faster," which feels like work that's somewhere between an art form and asking a magic box to fix things over and over again until it stops being broken (in obvious ways, at least). Understandably, people who prefer engineered solutions do not like the idea of working this way very much. | | |
| ▲ | XenophileJKO 2 days ago | parent [-] | | When you oversee a team technically as a tech lead or an architect, you need communication skills:

1. Based on how the engineer just responded to my comment, what is the understanding gap?

2. How do I describe what I want in a concise and intuitive way?

3. How do I tell an engineer what is important in this system and what the constraints are?

4. What assumptions will an engineer likely make that will cause me to have to make a lot of corrections?

Etc. This is all human to human. These skills are all transferable to working with an LLM. So I guess if you are not used to technical leadership, you may not have used those skills as much. | | |
| ▲ | interstice 2 days ago | parent [-] | | The issue here is that LLMs are not human, so having a human mental model of how to communicate doesn't really work. If I ask my engineer to do X, I know all kinds of things about them: their coding style, strengths and weaknesses, and that they have some familiarity with the code they are working with and won't bring the entirety of Stack Overflow answers into the context we are working in. LLMs are nothing like this: even when working with large amounts of context, they fail in extremely unpredictable ways from one prompt to the next. If you disagree, I'd be interested in what stack or prompting you are using that avoids this. |
|
| |
| ▲ | mjr00 3 days ago | parent | prev | next [-] | | > It really comes mostly down to being able to concisely and eloquently define what you want done. We had a method for this before LLMs; it was called "Haskell". | |
| ▲ | XenophileJKO 3 days ago | parent | prev [-] | | One added note: this rigidity of instruction is a real problem that the models themselves will magnify, and you need to be aware of it. For example, if you ask a Claude-family model to write a sub-agent for you in Claude Code, 99% of the time it will define a rigid process with steps and conditions instead of creating a persona with motivations (and, if you need it, suggested courses of action). |
|