embedding-shape 13 hours ago

Agree, and it's also such a shame that none of the AI companies actually focus on that way of using AI.

All of them are moving in the direction of "less human involved and agents do more", while what I really want is better tooling to work closer with the AI, be better at reviewing/steering it, and be more involved. I don't want "fire one prompt and get somewhat-working code"; I want a UX tailored for long sessions of back and forth, letting me leverage my skills, rather than agents trying to emulate what I can already do myself.

It was said a long time ago about computing in general, but it's more fitting than ever: "augmenting the human intellect" is what we should aim for, not replacing the human intellect. IA ("intelligence amplification") rather than AI.

But I'm guessing the target market for such tools would be much smaller; it would basically require you to already understand software development and know what you want, while all the AI companies seem to be targeting non-developers who want to build software. It's essentially no-code all over again.

dsr_ 12 hours ago | parent | next [-]

Is it any surprise that the cocaine cartels really want you to buy more cocaine, so they don't focus on its usefulness in pain relief and they refine it and cut it with the cheapest substances that will work rather than medical-grade reagents?

Same thing.

embedding-shape 12 hours ago | parent | next [-]

It's surprising that the ones producing the cocaine don't try to find the best use for it, yes. But these are VC-fueled businesses, so it all goes out the window, unfortunately. Otherwise they'd actually focus on usefulness, not just "usage" or whatever KPI they go by and share with their investors.

Barbing 7 hours ago | parent | prev [-]

LLMs are drugs because they’re addictive and sap your abilities, is that it?

(or generally: “Is the cocaine cartel comparison fair or unfair?”)

dsr_ 22 minutes ago | parent [-]

LLMs are fair to compare to cocaine because while there certainly could be ethical producers who follow reasonable laws and work to develop good uses, the market is completely dominated by organizations that don't.

And in my experience potheads offer you a toke and if you politely refuse, no problem at all. Coke addicts don't want to take no for an answer and insist that everybody should do it, they get so much more done, decisions are faster and better and what the hell is wrong with you if you don't want some?

So, the users are similar too.

freeopinion 11 hours ago | parent | prev | next [-]

Of course there are tools focusing on this. It takes a little getting used to how prevalent it is. My editor now can anticipate the next three lines of code I intend to write complete with what values I want to feed to the function I was about to invoke. It all shows up in an autocomplete annotation for me. I just type the first two or three characters and press tab to get everything exactly how I was about to type it in--including an accurate comment worded exactly in my voice.

Is that what you mean by IA?

For example, I type "for" and my editor guesses I want to iterate over the list that is the second argument of the function whose body I am currently writing, so it offers to complete the rest of the loop header for me. Not only does it anticipate that I am writing a for loop, it figures out what I want to iterate over, and perhaps even that I want to enumerate the iteration so I have both the index and the value. Imagine if I had written a comment to explain my intent for the function before I started writing the function body. How much better could it augment my intellect?
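As a sketch of the scenario above (function and variable names are made up for illustration): typing "for" inside a function like this one, a completion engine that has seen the signature could plausibly suggest the entire loop line, enumerate() included.

```python
def label_items(prefix, items):
    """Attach a numbered prefix to each item."""
    labels = []
    # Typing "for" here, the editor might complete the whole line below,
    # having inferred both the iterable (items, the second argument) and
    # that an index is wanted alongside the value:
    for i, item in enumerate(items):
        labels.append(f"{prefix}{i}: {item}")
    return labels
```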

eikenberry 6 hours ago | parent | next [-]

I think this could be a decent interface with one addition, a way to comment on the completion being suggested. You could ask it for a different completion or to extend the completion, do something different, do a specific thing, whatever. An active way to "explain my intent" with the AI (besides leaving comments hinting at what you want) in addition to the passive completion system.

embedding-shape 10 hours ago | parent | prev | next [-]

To be honest, I'm not quite sure what the ideal UX looks like yet. AI-assisted autocomplete is too little, but the idea of saying "build X for purpose Y" is too high-level. Maintaining Markdown documents that the AI implements also feels too high-level, while letting the human fully drive the implementation is probably again too low-level.

I'm guessing the direction I'd prefer, would be tooling built to accept and be driven by humans, but allowed to be extended/corrected by AI, or something like that, maybe.

Maybe a slight contradiction, and very wishy-washy/hand-wavy, but I haven't personally figured out what I think would be best yet either, or what the right level actually is, so that's probably the best I can say right now :) Sorry!

zozbot234 10 hours ago | parent [-]

The Markdown documents can be at any level. Just keep asking the AI to break each individual step in the plan down into substeps, then ask it to implement after you review. It's great for the opposite flow too - reverse engineering from working legacy code into mid-level and high-level designs, then proposing good refactors.

embedding-shape 9 hours ago | parent [-]

Yes, I'm talking about a UX that could handle that for the programmer instead, as an example. Zoom out a bit :)

Barbing 7 hours ago | parent | prev | next [-]

Still magical a few years in?

>Imagine if I had written a comment to explain my intent for the function before I started writing the function body.

This in particular is not dissimilar from opening a chat with a model and giving it a prompt as usual but then adding at the end:

Begin your response below:

  { func
jibal 7 hours ago | parent | prev [-]

Which editor?

> Imagine if I had written a comment to explain my intent for the function before I started writing the function body.

The loon programming language (a Lisp) has "semantic functions", where the body is just the doc comment.

JetSetIlly 5 hours ago | parent | prev | next [-]

"All of them are moving into the direction of "less human involved and agents do more", while what I really want is better tooling for me to work closer with AI and be better at reviewing/steering it, and be more involved."

I want less ambitious LLM-powered tools than what's being offered. For example, I'd love a tool that can analyse whether comments have been kept up to date with the code they refer to. I don't want it to change anything; I just want it to tell me about any problems. A linter, basically. I imagine LLMs would be a good foundation for this.
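As a rough sketch, the non-LLM half of such a linter might just pair each comment with the code that follows it; the actual staleness judgment would be a separate model call per pair, which is omitted here. All names are hypothetical.

```python
import io
import tokenize

def comment_code_pairs(source):
    """Pair each comment with the next line of code after it.

    This is only the extraction step of a hypothetical comment-staleness
    linter: each (comment, code) pair would then be sent to an LLM to
    judge whether the comment still matches the code. Inline comments
    are paired with the following line, which is a simplification.
    """
    lines = source.splitlines()
    pairs = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.COMMENT:
            row = tok.start[0]  # 1-indexed line of the comment
            # Find the next non-blank, non-comment line after it.
            for follow in lines[row:]:
                stripped = follow.strip()
                if stripped and not stripped.startswith("#"):
                    pairs.append((tok.string, stripped))
                    break
    return pairs
```

A model-backed judgment step could then iterate over these pairs and report only the suspicious ones, keeping the tool read-only as described above.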

refsys 4 hours ago | parent [-]

Any terminal tool like Claude Code or Codex (I assume OpenCode too, but I haven't tried) can do it, by using as a prompt pretty much exactly what you wrote, and if it still wants to edit, just don't approve the tool calls.

One problem I've noticed is that both the Claude models and the GPT codex variants make absolutely deranged tool calls (like a `cat <<'EOF' >> foo...EOF` pattern to create a file, or sed to read a couple of lines), so it's sometimes hard to see what it's even trying to do.

JetSetIlly 4 hours ago | parent | next [-]

"Any terminal tool like Claude Code or Codex (I assume OpenCode too, but I haven't tried) can do it, by using as a prompt pretty much exactly what you wrote, and if it still wants to edit, just don't approve the tool calls."

I'm sure it can. I'd still like a single use tool though.

But that's just my taste. I'm very simple. I don't even use an IDE.

edit: to expand on what I mean. I would love it if there was a tool that has conquered the problem and doesn't require me to chat with it. I'm all for LLMs helping and facilitating the coding process, but I'm so far disappointed in the experience. I want something more like the traditional process but using LLMs to solve problems that would be otherwise difficult to solve computationally.

vunderba 4 hours ago | parent | prev [-]

I’m glad I’m not the only one who’s noticed these seemingly arbitrary calls to write files using the cat command instead of the native file edit capabilities of the agent.

Thanemate 11 hours ago | parent | prev | next [-]

>Agree, and it's also such a shame that none of the AI companies actually focus on that way of using AI.

This is because, regardless of the current state of things, the endgame which will justify all the upfront investment is autonomous, self-improving, self-maintaining systems.

mghackerlady 11 hours ago | parent | prev | next [-]

I think it was Steve Jobs who said computers should be like a bicycle for the mind, I tend to agree

embedding-shape 10 hours ago | parent | next [-]

Yeah, Douglas Engelbart was also a huge believer in that, and various stuff I've read from him and the Augmentation Research Center put me on this track of really agreeing with it.

"Bicycle for the mind", as always when it involves Jobs, sounds more fitting for the masses though, so thanks for sharing that :)

axus 8 hours ago | parent | prev | next [-]

Agents are a "self-driving car for the mind". I don't enjoy or dislike driving, but lots of Americans love to drive. In the future they will lament their driving skills' decline.

ivell 8 hours ago | parent [-]

We as the general population have consistently lost lots of skills from just 200 years back. Most likely we will not miss them (though coding used to be my hobby).

Though if an apocalypse happens and all of our built tech goes away, we are in for a serious survival issue.

Barrin92 4 hours ago | parent [-]

>Most likely we will not miss them

Given that we've also lost the faculty to look at the past with anything other than contempt, most people wouldn't even know what they were missing. The problem with losing the 'general cognition' department, just like with broad social or cultural decline, is that you lose the ability to even judge what you're losing, because the thing you just lost was doing the judging.

jcgrillo 7 hours ago | parent | prev [-]

I love this Jobs quote for two reasons:

(1) It captures the ideal so well

(2) The bitter irony of how thoroughly pre-OS X Macintosh computers failed to live up to it

I feel like there's a similar dichotomy in LLM tools now

blibble 10 hours ago | parent | prev | next [-]

> Agree, and it's also such a shame that none of the AI companies actually focus on that way of using AI.

their valuations are predicated on getting rid of you entirely, along with everyone else

the "humans can use it to increase their productivity" is an interim step
