embedding-shape 15 hours ago
As always, the answer is "divide & conquer". Works for humans, works for LLMs. Divide the task into steps that are as small and as easy to verify as possible, ideally steps you can verify automatically by running a single command. Once that's done, either do it yourself or offload it to an LLM; if the design and task splitting are done properly, it shouldn't really matter which. Task too difficult? Divide it into smaller steps.
fulafel an hour ago
Judging from this, an approach might have been to port the 28 modules individually and check that each returns the same data in the Perl and TS versions: "I took a long-overdue peek at the source codebase. Over 30,000 lines of battle-tested Perl across 28 modules. A* pathfinding for edge routing, hierarchical group rendering, port configurations for node connections, bidirectional edges, collapsing multi-edges. I hadn’t expected the sheer interwoven complexity."
eru 12 hours ago
Well, ideally we teach the AIs how to divide-and-conquer. I don't care whether my AI coding assistant is multiple LLMs (or other models) working together.
lomase 8 hours ago
I ask the LLM to split the task for me. It shines at that.