wwweston, 5 hours ago:
Sometimes it strikes me that something like this might be one of the better litmus tests for AI: if it's really good enough to start 10x-ing engineers (let alone replacing them), it should become more common for projects like this to accelerate to practical usability. If not, maybe the productivity dividends are mostly shallow.
atherton94027, 32 minutes ago:
The problem is that many of these clean-room reimplementations require contributors to have never seen any of the proprietary source. You can't guarantee that with AI, because who knows what training data was used.
| ||||||||
adastra22, 2 hours ago:
This was my thought here as well. Getting one piece of software to match another piece of software is something agentic AI tools are really good at — arguably the one area where they are truly better than humans. I expect that with the right testing framework set up and made accessible to Claude Code or Codex, you could iterate your way to full system compatibility in a mostly automated way. If anyone on the team is interested in doing this, I'd love to speak to them.
MangoToupe, 2 hours ago:
Sure. In the meantime, the productivity is still useful.