| ▲ | freeopinion 11 hours ago |
Of course there are tools focusing on this. It takes a little getting used to how prevalent it is. My editor can now anticipate the next three lines of code I intend to write, complete with the values I want to feed to the function I was about to invoke. It all shows up as an autocomplete annotation. I just type the first two or three characters and press tab to get everything exactly how I was about to type it in--including an accurate comment worded exactly in my voice. Is that what you mean by IA?

For example, I type "for" and my editor guesses I want to iterate over the list that is the second argument of the function whose body I am currently writing, so it offers to complete the rest of the loop condition for me. Not only did it anticipate that I was writing a for loop; it figured out what I wanted to iterate over, and perhaps even that I wanted to enumerate the iteration so I have both the index and the value.

Imagine if I had written a comment to explain my intent for the function before I started writing the function body. How much better could it augment my intellect?
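A sketch of the kind of completion being described, in Python (the function name, arguments, and completed loop are all hypothetical, just to make the scenario concrete):

```python
def render_rows(template: str, rows: list[str]) -> list[str]:
    """Render each row through the template, keeping its position."""
    output = []
    # After typing "for" inside this body, the editor might suggest the
    # whole loop below: it guesses the target is the second argument
    # (rows) and that the index is wanted, so it reaches for enumerate.
    for i, row in enumerate(rows):
        output.append(f"{i}: {template.format(row)}")
    return output
```

The suggestion here is the entire `for i, row in enumerate(rows):` line plus the body, accepted with a single tab.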
| ▲ | eikenberry 6 hours ago | parent | next [-] |
I think this could be a decent interface with one addition: a way to comment on the completion being suggested. You could ask it for a different completion, ask it to extend the completion, do something different, do a specific thing, whatever. An active way to "explain my intent" to the AI (besides leaving comments hinting at what you want) in addition to the passive completion system.
| ▲ | embedding-shape 10 hours ago | parent | prev | next [-] |
To be honest, I'm not quite sure what the ideal UX looks like yet. AI-assisted autocomplete is too little, but the idea of saying "Build X for purpose Y" is too high-level. Maintaining Markdown documents that the AI implements also feels too high-level, but letting the human fully drive the implementation is probably again too low-level.

I'm guessing the direction I'd prefer would be tooling built to accept and be driven by humans, but allowed to be extended/corrected by AI, or something like that. Maybe a slight contradiction, and very wishy-washy/hand-wavey, but I haven't personally figured out what I think would be best yet either, what the right level actually is, so that's probably the best I can say right now :) Sorry!
| ▲ | Barbing 7 hours ago | parent | prev | next [-] |
Still magical a few years in?

> Imagine if I had written a comment to explain my intent for the function before I started writing the function body.

This in particular is not dissimilar from opening a chat with a model and giving it a prompt as usual, but then adding at the end: "Begin your response below:"
| ▲ | jibal 7 hours ago | parent | prev [-] |
Which editor?

> Imagine if I had written a comment to explain my intent for the function before I started writing the function body.

The loon programming language (a Lisp) has "semantic functions", where the body is just the doc comment.