Spivak · 3 days ago
These models have turned a bunch of NLP problems that were previously impossible into something trivial. I have personally built extremely reliable systems on top of what is, at heart, a biased random number generator. Our F-score went from 20% with "classic" NLP to 99% using LLMs.
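The pattern the comment describes, using an LLM as a drop-in replacement for a classic NLP classifier, can be sketched roughly as below. This is a minimal illustration, not the commenter's actual system: the `complete` callable stands in for any chat-completion API, and all names here are hypothetical.

```python
def classify(text, labels, complete):
    """Ask a language model to assign exactly one label to `text`.

    `complete` is any callable that takes a prompt string and returns
    the model's text response (injected so the logic is testable
    without a live API).
    """
    prompt = (
        "Classify the following text into exactly one of these labels: "
        + ", ".join(labels)
        + ".\nRespond with the label only.\n\nText: "
        + text
    )
    answer = complete(prompt).strip().lower()
    # The model is a biased random number generator: fall back to the
    # first label if it answers off-script.
    return answer if answer in labels else labels[0]


# Stubbed model for demonstration; a real system would call an LLM here.
def fake_complete(prompt):
    return "complaint" if "refund" in prompt else "praise"


print(classify("I want a refund now.", ["complaint", "praise"], fake_complete))
```

The dependency-injected `complete` also makes it easy to measure an F-score offline against a labeled set before swapping in a real model.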
no_wizard · 3 days ago
NLP (natural language processing, for the unfamiliar). LLMs are tailor-made for this kind of work in particular. They're great at tokenizing and applying structured rules, which is also why they're halfway decent at generating code in some situations. Where I see them fall down is in logical domains that rely on relative complexity and contextual awareness in a different way. I've had less luck, for example, getting AI systems to parse and break down a spreadsheet with complex rules. That's just from recent memory.