gamewithnoname 4 days ago
Exactly. What makes it even odder for me is that they are mostly describing doing nothing when using their agents. I see the "providing important context, setting guardrails, orchestration" bits appended, and it seems like the shallowest, narrowest moat one can imagine. Why do people believe this part is any less tractable for future LLMs? Is it because they spent years gaining that experience? Some imagined fuzziness, or other hand-waving while muttering about the nature of "problem spaces"? That is the case for everything the LLMs are toppling at the moment. What is to say some new pre-training magic, post-training trick, or ingenious harness won't come along and drive some precious block of your engineering identity into obsolescence? The bits about "the future is the product" are even stranger (isn't the present already the product?). To paraphrase theophite on Bluesky, people seem to believe that if there is a well everyone can draw from for free, there will still be a substantial market willing to pay them to draw from it.
fartfeatures 4 days ago | parent
Having AI working with and for me is hugely exciting. My creativity is not something an AI can outmode; it will augment it. Right now ideas are cheap and implementation is expensive. Soon, ideas will be more valuable and implementation will be cheap. The economy is not zero-sum, and neither is creativity.
| ||||||||