trollbridge 8 hours ago
If AI actually has hit the levels that Sequoia, Anthropic, et al. claim it has, then autonomous AI agents should be forking projects and making them so much better that we'd all be using their vastly improved forks. Why isn't this happening?
Kerrick 8 hours ago
I dunno about autonomous, but it is happening at least a bit from human pilots. I've got a fork of a popular DevOps tool with changes I doubt the maintainers would want upstream, so I'm not making a PR. I wouldn't have bothered before, but I believe LLMs can help me manage the deluge of rebases onto upstream.
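The rebase chore this comment describes is the standard fork-maintenance loop in git. A minimal sketch, using throwaway sandbox repos; the repo paths, file names, and commit messages here are invented for illustration, not taken from the actual fork:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"

# Stand-in for the upstream project.
git init -q -b main upstream
cd upstream
echo core > core.txt
git add core.txt
git -c user.email=x@x -c user.name=x commit -q -m "upstream: base"

# The fork carries a local patch upstream wouldn't take.
cd ..
git clone -q upstream fork
cd fork
echo patch > local-patch.txt
git add local-patch.txt
git -c user.email=x@x -c user.name=x commit -q -m "fork: local patch"

# Meanwhile, upstream keeps moving.
cd ../upstream
echo feature > feature.txt
git add feature.txt
git -c user.email=x@x -c user.name=x commit -q -m "upstream: new feature"

# The recurring chore: fetch upstream and replay the fork's patch on top.
cd ../fork
git fetch -q origin
git -c user.email=x@x -c user.name=x rebase -q origin/main
git log --format=%s   # fork: local patch / upstream: new feature / upstream: base
```

Each upstream release repeats only the last three commands; the pain the commenter cites is resolving the conflicts that rebase surfaces, which is the part being delegated to an LLM.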
redox99 5 hours ago
The agents are not that good yet, but with human supervision they are there already. I've forked a couple of npm packages and have agents implement the changes I want, plus keep them in sync with upstream. Without agents I wouldn't have done that because it's too much of a hassle.
chrisjj 7 hours ago
Because those levels are pure PR fiction.
hxugufjfjf 4 hours ago
I do this all the time. I just keep them to myself. Nobody wants my AI slop fork, even if it fixes the issues of the original.