DHRicoF 3 days ago |
I don't think there is enough (non-synthetic) data available to get near what we are used to. The big breakthrough of GPT was exactly that: you could train a model on a (for that time) stupidly high amount of data and make it decent at a lot of tasks it was never explicitly trained on.
torginus 3 days ago | parent
You can make GPT rewrite all existing textual info into chatbot format, so there's no loss there. With newer techniques, such as chain of thought and self-checking, you can also generate a ton of high-quality training data that won't degrade the output of the LLM, though the degree to which you can do that is not clear to me. IMO it makes sense to train an LLM as a chatbot from the start.
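A minimal sketch of the self-checking idea described above: sample chain-of-thought answers, keep only the ones a verifier accepts, and use the survivors as fine-tuning data. The `generate` and `verify` callables here are hypothetical placeholders for whatever model calls you'd actually use, not a real API.

    # Hypothetical pipeline: generate chain-of-thought answers and keep only
    # the ones that pass a verification step, so the resulting synthetic data
    # is less likely to degrade the model trained on it.
    from typing import Callable

    def build_synthetic_dataset(
        questions: list[str],
        generate: Callable[[str], str],      # prompt -> chain-of-thought answer (placeholder)
        verify: Callable[[str, str], bool],  # (question, answer) -> passes check? (placeholder)
        samples_per_question: int = 4,
    ) -> list[dict]:
        dataset = []
        for q in questions:
            for _ in range(samples_per_question):
                answer = generate(f"Think step by step, then answer:\n{q}")
                # Self-check: discard answers the verifier rejects.
                if verify(q, answer):
                    dataset.append({"prompt": q, "completion": answer})
        return dataset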