Cycl0ps 4 hours ago

I'm in agreement with the blog post. I've been treating AI more like a tool and less like a science experiment, and I've gotten some good results when working on my various side projects. In the past, much of my time was taken up by research and learning the various little parts of how everything works. What starts as a little Python project to play around with APIs ends with me spending 5 hours learning tkinter and barely making any API calls.

danielbln 4 hours ago | parent

LLMs have finally freed me from the shackles of yak shaving. Some dumb, inconsequential tooling thing doesn't work? An agent will take care of it in a background session, and I can get back to building the things I do care about.

mikelevins 3 hours ago | parent

I'm finding that in several kinds of projects, ranging from spare-time amusements to serious work, LLMs have become useful to me in two ways: (1) engaging me in a conversation that elicits thoughts and ideas from me more quickly than I would come up with them on my own, and (2) pointing me at places where I can get answers to technical questions, so that the research part of my work gets done more quickly.

Talking with other knowledgeable humans works just as well for the first, but suitable humans aren't as readily available at all hours as an LLM is, and a suitably chosen LLM does a pretty good job of engaging whatever part of my brain or personality is stimulated by conversation into thinking inventively.

For the second, LLMs can answer most of the questions I ask outright, but I don't trust their answers for reasons we all know very well. So instead I ask them to point me at technical sources as well, and that often gets me to the information more quickly than starting from a relatively uninformed Google search would (though Google is getting better at the same job, too).