mudkipdev 11 hours ago

There can't be any interesting discussion about AI programming. Every conversation boils down to what skill files you use, or how Opus 4.6 compares to Codex, or how well you can manage 16 parallel agents.

johnfn 10 hours ago | parent | next [-]

There genuinely is a lot of interesting discussion to be had about LLMs, and I know this is true because I discuss things with my coworkers daily and learn a lot. I do admit that conversation online about LLMs is frequently lacking. I think it's a bit like politics - everyone has an opinion about it, so unfortunately online discourse devolves to the lowest common denominator. Hey guys, have you noticed that if you use LLMs frequently it's possible you'll forget to think critically?

But "there can't be any interesting discussion about AI programming" is completely false.

mrcsharp 9 hours ago | parent | next [-]

For me, almost every single time a conversation like this happens in real life, it boils down to one side claiming "This is the future" and "Don't get left behind," followed by a torrent of hype and buzzwords. So no, there are no interesting conversations to be had about LLM programming anymore.

johnfn an hour ago | parent | next [-]

Maybe you struggle to have good conversations because I just provided an anecdote and you immediately stated that my anecdote is false? If this is how you typically interact with people I’m not surprised you’re not having interesting conversations.

arw0n 6 hours ago | parent | prev [-]

> "This is the future"

Yeah, that's silly, it is already the present!

Some interesting conversation one can have with coworkers specifically:

1. How should code review and responsibility for code be updated to (a) increase velocity, (b) maintain quality, and (c) keep reviewers from burning out? There are plenty of scenarios in which vibe coding a component in an afternoon is the correct choice, even if it is buggy, insecure, and no one really understands it.

2. Which parts of the codebase work well with code assistants, which don't? Why? What could be changed to make it easier? In my experience, Claude Code sometimes loses its mind on infra topics. It is also not very good at complex, interconnected services (humans aren't either).

3. Which tasks could be offloaded to agents to save everyone time and sanity? Creating Jira tickets from meeting transcripts is an obvious one; collecting and curating bug reports is another.

4. How should we design systems to better work for coding agents? Does it influence our tech choices? Should it influence them?

5. Is AI a net positive or negative for security?

And so much more. The last topic in particular is incredibly important, and things are developing so fast that you can probably have a new conversation on it every two weeks.
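As a toy illustration of item 3, here is a minimal sketch (the `ACTION:`/`owner:` transcript convention is entirely hypothetical) of the kind of deterministic first pass an agent pipeline might run over a meeting transcript before an LLM turns the hits into ticket drafts:

```python
import re

# Hypothetical convention: action items appear in the transcript as lines like
#   "ACTION: fix the login timeout (owner: dana)"
ACTION_RE = re.compile(
    r"ACTION:\s*(?P<summary>.+?)\s*\(owner:\s*(?P<owner>\w+)\)",
    re.IGNORECASE,
)

def extract_tickets(transcript: str) -> list[dict]:
    """Return Jira-style ticket drafts found in a meeting transcript."""
    tickets = []
    for line in transcript.splitlines():
        m = ACTION_RE.search(line)
        if m:
            tickets.append({"summary": m.group("summary"),
                            "assignee": m.group("owner")})
    return tickets

transcript = """\
alice: ACTION: fix the login timeout (owner: dana)
bob: let's circle back next week
carol: ACTION: collect the open bug reports (owner: bob)
"""
print(extract_tickets(transcript))
```

In practice the extraction itself would be the LLM's job (free-form speech rarely follows a marker convention); the sketch just shows why the "curate, then file" step is easy to automate once the items are structured.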

imp0cat 10 hours ago | parent | prev [-]

> if you use LLMs frequently it's possible you'll forget to think critically?

Nowadays, you can have a sub-agent to think critically for you. ;)

geraneum 10 hours ago | parent | prev | next [-]

My pet peeve with all LLM discourse is that whenever someone mentions any problem they experience with LLMs, or any mistake the models make, someone comments that humans make the same mistake.

hperrin 9 hours ago | parent [-]

And the difference is that humans will learn not to make that mistake again.

zer0tonin 9 hours ago | parent [-]

That's very optimistic.

wuiheerfoj 10 hours ago | parent | prev | next [-]

I disagree, and you could reduce basically anything to this: "there can't be any interesting discussion about React. Every conversation boils down to which framework you use, or how you manage state, or whether you use TypeScript or JavaScript."

kennywinker 10 hours ago | parent | next [-]

All of those are opinions about programming. Which framework, which language, etc.

Conversations about which model to use aren’t conversations about programming.

A better analogy would be some topic that you can’t discuss without it boiling down to which text editor you should use. It’s related to programming, a little. But it’s not programming.

austin-cheney 10 hours ago | parent | prev [-]

That is exactly why I left reddit. r/javascript had almost completely abandoned JavaScript discussions for React and Angular while r/programming was half filled with irrational JavaScript fear nonsense.

minimaxir 10 hours ago | parent | prev | next [-]

That isn't why /r/programming banned it. They banned it because every discussion about LLMs inevitably devolves into discussions about AI slop in varying levels of civility, and the rare good LLM submissions/discussions do not offset it.

Other tech-adjacent subreddits such as /r/rust have banned LLM discussion for similar, more pragmatic reasons.

samrus 10 hours ago | parent | prev | next [-]

This is far too negative and reductionist.

It's like saying there are no interesting discussions about programming: just whether OOP is overhyped, whether Python is slow, or how well you can convert a C codebase to Rust.

amarant 10 hours ago | parent | prev | next [-]

You have not seen my recent WhatsApp chats. A pal and I are talking about what we're doing with Claude Code, and it's quite interesting!

Just like discussions about traditional programming were never only about syntax and type systems, AI discussions aren't only about prompts and harnesses. I find there's quite a bit of overlap, actually! "How do you approach this problem?" is a question that is valid in both discussions, for example.

maplethorpe 7 hours ago | parent | prev | next [-]

In my experience Opus 4.6 is the best.

vova_hn2 9 hours ago | parent | prev | next [-]

Genuine question: how do you distinguish yourself from the stream of slop?

I am also annoyed by the endless stream of articles and projects related to LLM-assisted coding. Not because I dislike LLM-assisted coding as an idea, but because it's all more of the same (as you said). I think there is still a lot of low-hanging fruit in improving LLM harnesses that no one is working on, because everyone seems to be chasing the latest trends ("agentic", "multiagentic", "skills") without thinking bigger.

But I'm afraid that if I finally invest time and implement some of my ideas on making LLM-assisted coding better (reliable, safer, easier for humans to interpret and understand generated code), I won't be able to gather any feedback. People will simply dismiss it as "yet another slop for creating more slop" and that's it.

What is the way out of this conundrum?

eru 10 hours ago | parent | prev [-]

> or how well you can manage 16 parallel agents.

Claude does that for me. :)