poisonborz | 10 hours ago
There are some services that notify on replies :) Despite your intent, your comment is somewhat meanly worded, and it is perhaps you who did not read mine, or at least mixed up terms. In my parent comment I did not mention documentation but tutorials, as in guide articles: devs not associated with a project writing about how to achieve some goal.

To be more specific, I think there are two distinct types of text that get written about a project during its lifecycle, in parallel:

#1 Documentation, written by the maintainers. This will always contain all the functions, methods, API endpoints, components, whatever. It's the complete description of the full extent of the features. It may or may not also contain the second type. An LLM can theoretically interpret and use the whole project based on this info.

#2 Guides, tutorials, reviews, forum posts: describing or giving tips on the whole project or specific features, or describing methods that use the project ("Use x and y to process queries 20x faster on z!"). These writings were essential for the spread and marketing of any given project. I think this is what the OP article was about.

My argument was that these would no longer be sought out; devs would just ask LLMs "how to achieve x with this tool".
creesch | 4 hours ago | parent
> An LLM can theoretically interpret and use the whole project based on this info.

That's the thing, though: LLMs really can't. At least not to a degree where they can act on it at the same level as when trained on everything else, tutorials and the like included. The languages and technologies that LLMs excel at are the ones that are widely used, with numerous examples in the wild. Plain documentation listing just the API calls isn't enough to train an LLM on; they effectively learn from examples.

So with just #1, and no more #2 aimed at humans, you will never get to a point where you can ask an LLM about the technology. This is what prompted me to remark that I feel you haven't thought this through. Perhaps you have, but then I think you have an overly optimistic view of how little data is enough to reliably train LLMs. Again, to stress the point: documentation alone isn't enough. You really do need humans adopting the technology first, widening the base of examples to train on.