| ▲ | dboreham 9 hours ago |
> who prompts the AI

LLMs are a box where the input has to be generated by someone/something, but also the output has to be verified somehow (because, like humans, it isn't always correct). So you either need a human at "both ends", or some very clever AI filling those roles. But I think the human doing those things probably needs slightly different skills and experience than the average legacy developer.
| ▲ | reactordev 9 hours ago | parent |
Rules engines were designed for just such a thing: validating input/output. You don’t need a human to prompt the AI, you need a pipeline. A single LLM won’t replace you, but a well-designed system of flows for software engineering using LLMs will.
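For concreteness, here is a minimal sketch of what a rules-engine-style gate on LLM output could look like inside one pipeline stage. The specific rules and the `call_llm` stub are hypothetical placeholders, not anything the comment names or any particular framework:

```python
# Sketch: rule-based validation of LLM output inside a pipeline stage.
# The rule set and call_llm are illustrative placeholders.
import ast
from dataclasses import dataclass
from typing import Callable


@dataclass
class Rule:
    name: str
    check: Callable[[str], bool]  # True means the output passes this rule


def _parses_as_python(source: str) -> bool:
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False


RULES = [
    Rule("non_empty", lambda out: bool(out.strip())),
    Rule("parses_as_python", _parses_as_python),
    Rule("no_todo_markers", lambda out: "TODO" not in out),
]


def violations(output: str) -> list[str]:
    """Names of the rules this output breaks."""
    return [r.name for r in RULES if not r.check(output)]


def pipeline_step(prompt: str, call_llm: Callable[[str], str], retries: int = 2) -> str:
    """Prompt -> LLM -> rule check, feeding failures back into the prompt on retry."""
    failed: list[str] = []
    for _ in range(retries + 1):
        output = call_llm(prompt)
        failed = violations(output)
        if not failed:
            return output
        prompt += "\n# Previous attempt violated rules: " + ", ".join(failed)
    raise RuntimeError(f"Output still violates rules after retries: {failed}")
```

The same pattern can sit on the input side as well, checking generated prompts or upstream artifacts before they ever reach the model.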