zjp (10 hours ago):
Yes. Agents are good at solving densely represented (embarrassingly solved) problems, and a surprising and disturbing number of the problems we have are, at least at the decomposed level, well represented. They can even compose them in new ways. But for the same reason they would be unable to derive general relativity, they cannot use insight to reformulate problems.

I base this claim on my experience trying to get them to implement Flying Edges, a parallel isosurface extraction algorithm. It's a reformulation of marching cubes, a serial algorithm that iterates over voxels, to work over edges instead. If they're not shown known-good code, models will try to implement marching cubes superficially shaped like Flying Edges. You are still necessary to push the frontier forward. Though, given the way some models will catch themselves making a conceptual error and correct it in real time, we should be nervous.
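The structural shift the comment describes can be sketched in a toy Python example (this is an illustration of the per-cube vs. per-edge distinction only, not either algorithm in full; the grid size, the `scalar` field, and the isovalue are all invented for the demo). A marching-cubes-style traversal visits every cube and tests each of its x-aligned edges independently, so an edge shared by neighboring cubes is tested up to four times; a Flying-Edges-style first pass sweeps each x-row of the grid once, classifying every x-edge exactly once:

```python
def scalar(i, j, k):
    # Hypothetical scalar field for the demo: distance from the grid center.
    return ((i - 1.5) ** 2 + (j - 1.5) ** 2 + (k - 1.5) ** 2) ** 0.5

def x_edge_tests_per_cube(nx, ny, nz, iso):
    """Marching-cubes style: each cube tests its own four x-aligned
    edges, so edges shared with neighboring cubes are re-tested."""
    tests = crossings = 0
    for k in range(nz - 1):
        for j in range(ny - 1):
            for i in range(nx - 1):
                # the four x-edges of cube (i, j, k)
                for dj, dk in ((0, 0), (1, 0), (0, 1), (1, 1)):
                    a = scalar(i, j + dj, k + dk)
                    b = scalar(i + 1, j + dj, k + dk)
                    tests += 1
                    crossings += (a < iso) != (b < iso)
    return tests, crossings

def x_edge_tests_per_edge(nx, ny, nz, iso):
    """Flying-edges style first pass: sweep each x-row once, so every
    x-aligned edge in the grid is classified exactly once."""
    tests = crossings = 0
    for k in range(nz):
        for j in range(ny):
            for i in range(nx - 1):
                tests += 1
                crossings += (scalar(i, j, k) < iso) != (scalar(i + 1, j, k) < iso)
    return tests, crossings

# On a 4x4x4 grid the per-cube traversal performs 3*3*3*4 = 108 edge
# tests, while the per-edge sweep performs 3*4*4 = 48, one per x-edge.
```

The real algorithms involve much more (case tables, trim bounds, prefix sums to allocate parallel output), but this independent row-wise edge pass is the reformulation that, per the comment, models fail to reproduce without seeing known-good code.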
operation_moose (10 hours ago), in reply:
I've had the same experience. I do a lot of automation of two engineering software packages through Python and Java APIs that are not terribly well documented, and existing discussion of them on the wider web is practically nonexistent. The models are completely, 100% useless there, no matter what I do. Add another layer of abstraction, like "give me a function to calculate <engineering value>", and they get even worse.

I had a small amount of luck getting one to refactor some really terrible code I wrote while under the gun, but it made tons of errors I had to go back and fix. Luckily I had a fairly comprehensive test suite by that point, so finding the mistakes wasn't too hard.

(I've tried all of the "just point them at the documentation" replies I'm sure are coming. It doesn't help.)