| ▲ | scottlawson 5 hours ago |
The thesis that in the past it was safe to share ideas and projects because the execution was hard, and that now things have changed because of AI, is an interesting idea, but I wonder if it is really true. It certainly seems true that AI can easily replicate small projects and relatively narrowly scoped work. I'm thinking specifically about blog posts where people share their first steps and simple programs as they learn something new, like "here is how I set up a Flask website" or "here is how I trained a neural network on MNIST".

But if AI is empowering people to take on more complex projects, perhaps it takes the same amount of time to replicate the execution of a more advanced project? In other words, maybe in the past it would take me 10 hours to do a "small" project, which today I could do in 1 hour with the assistance of AI. And now, with the assistance of AI, I can go much farther in 10 hours and deliver a more complex project. But that means someone else trying to replicate this execution is still going to need around 10 hours.

Basically, I'm agreeing that AI can reduce the barrier to replicating the execution of another person's project, but at the same time, we can now make more complex projects that are harder to replicate. So a basic SaaS CRUD app is trivial now, but a multi-disciplinary, domain-specific app that integrates multiple systems is still going to be hard to replicate.
| ▲ | nicbou 5 hours ago | parent | next |
The problem for me is that I'm competing with the AI results that Google trained on my work. I'm losing the majority of my traffic to them, so at some point I'll have to give up because the work no longer supports me and no longer has an audience.
| ▲ | jandrewrogers 3 hours ago | parent | prev | next |
It isn't just about AI. Some R&D domains started disappearing from the literature and the public internet a decade before the first LLMs. The incentives to go dark emerged even when the adversary was other humans. AI is just accelerating a trend that was already there.

Some areas of frontier computer science research have largely been dark for decades. The strategy is to quietly do several years of iterated hardcore R&D. The cumulative advances are such a step change when seen by would-be fast-followers that they obscure the insights that allowed the individual advances to occur. As an exaggerated case, imagine if the public history of powered flight skipped from the Wright Brothers to the Boeing 737.

In practice, this strategy has a major failure mode that people overlook. The sharp discontinuity in capability means that almost nothing that exists in the market is prepared to integrate with it. This is a large impediment to adoption even if the technology is objectively incredible and the market will inevitably get on board. In short, it looks a lot like being too early to market. This is surmountable with clever execution, but with this strategy you've traded one problem for a different one.
| ▲ | MattDamonSpace 5 hours ago | parent | prev |
Sure, but the Forest point still stands: whatever you can hide from the Forest slows it down and gives you some moat, even if only a brief one?