| ▲ | crystal_revenge 6 hours ago |
> Either AI use surged massively in the last quarter

December 2025 is considered by many to be a major step function in agentic coding (due to improvements in both harnesses and the LLMs themselves). I know my own coding has changed forever since then. Before, I was basically always hands on the keyboard while working with AI. Now I run experiments with multiple agents over the weekend, only periodically checking in to see if they have questions or need further instruction. The last quarter is when I personally first started to see how this was all going to change things (despite having worked on both the research and product sides of AI for the last few years).

> I have a feeling they've tuned the models recently to commit more often, which gives the illusion of more work being done.

Agents certainly are committing more often, but I know that, at least for these projects, real work is being done. An example: I had an agent auto-researching a forecast I was working on. This is something I've done manually for over a decade. The iteration process is tedious and time-consuming, and would often take weeks of setting up, and ultimately poorly documenting, many, many experiments to see what works. Now I can "set it and forget it" and get the same results in hours (with much more surface area covered and much better documentation). Each experiment is a branch (or worktree), so yes, there are a lot of commits happening, but the results are measurably real.

I often think the big divide in success with agents is whether or not the quality of one's work can be objectively measured. For those of us doing work that can be measured, the impact of agents is still hard to comprehend.
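The branch-per-experiment workflow described above can be sketched with `git worktree`, which gives each agent its own checkout so commits on one branch never clobber another. This is a minimal illustration, not the commenter's actual setup; the scratch repo and the experiment/branch names (`exp/a`, `exp/b`) are made up for the example:

```shell
set -e
# Scratch repo so the sketch is self-contained; in practice you would run
# the worktree commands inside your existing project repo.
cd "$(mktemp -d)"
git init -q main && cd main
git -c user.name=a -c user.email=a@b commit -q --allow-empty -m "init"

# One worktree per experiment: each agent works and commits on its own
# branch in its own directory, isolated from the others.
git worktree add -q ../exp-a -b exp/a
git worktree add -q ../exp-b -b exp/b
git worktree list                     # review active experiments

# Discarding a failed experiment is cheap:
git worktree remove ../exp-a
git branch -q -D exp/a
```

Since each worktree is a separate directory sharing one object store, the "lots of commits" the parent mentions stay cheap to create and cheap to throw away.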
| ▲ | overfeed 4 hours ago | parent | next [-] |
> Each experiment is a branch (or work-tree) so yes there are a lot of commits happening, but the results are measurably real.

If you are correct, and GitHub is scaling its compute mostly as a reaction to this externality (agents churning through code that will mostly be discarded), then you can look forward to getting billed for your usage. After all, it is hard to build a scalable system without back-pressure.
| ||||||||||||||||||||||||||
| ▲ | iknowstuff 4 hours ago | parent | prev [-] |
What's your setup? How do your agents avoid running out of context and becoming dumb as a rock after ~100k tokens? Do you have some heartbeat process that spawns fresh agents each time?
| ||||||||||||||||||||||||||