torginus 4 days ago

Sorry - I have a somewhat tangential question: is it possible to train models as instruct models straight away? Previously LLMs were trained on raw text data, but now we can generate instruct data directly, either from 'teacher' LLMs or by asking existing LLMs to convert raw data into instruct format.
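Roughly what I mean by the conversion step, as a sketch (assuming an OpenAI-style chat client; the model name and prompt are placeholders I made up):

    # Feed raw passages to an existing chat model and ask it to emit
    # (instruction, response) pairs you can instruct-tune on.
    from openai import OpenAI

    client = OpenAI()

    def to_instruct_pair(raw_passage: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder: any capable chat model
            messages=[
                {"role": "system",
                 "content": "Turn the passage into one instruction/response "
                            "pair, formatted as 'Q: ...\\nA: ...'."},
                {"role": "user", "content": raw_passage},
            ],
        )
        return resp.choices[0].message.content

    # Each returned pair becomes one instruct-tuning example.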

Or alternatively - if chat tuning diminishes some of the model's capabilities, would it make sense to have a smaller chat model prompt a large base model, and convert the outputs back?
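Something like this, very hand-wavy (all three model names are placeholders, and the middle call assumes a plain text-completion endpoint for the base model):

    from openai import OpenAI

    client = OpenAI()

    def answer(user_msg: str) -> str:
        # 1. Small chat model rewrites the request as a base-model prompt.
        prompt = client.chat.completions.create(
            model="small-chat-model",  # placeholder
            messages=[{"role": "user",
                       "content": "Rewrite as a text-completion prompt whose "
                                  f"natural continuation answers: {user_msg}"}],
        ).choices[0].message.content

        # 2. Large base model does plain next-token completion.
        raw = client.completions.create(
            model="large-base-model",  # placeholder
            prompt=prompt, max_tokens=512,
        ).choices[0].text

        # 3. Small chat model converts the raw continuation back to a reply.
        return client.chat.completions.create(
            model="small-chat-model",  # placeholder
            messages=[{"role": "user",
                       "content": "Clean this up into a direct answer to "
                                  f"'{user_msg}':\n{raw}"}],
        ).choices[0].message.content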

DHRicoF 3 days ago | parent

I don't think there is enough (non-synthetic) data available to get near what we are used to.

The big breakthrough of GPT was exactly that: you could train a model on a (for the time) stupidly large amount of data and have it come out okay at a lot of tasks it was never explicitly trained on.

torginus 3 days ago | parent

You can make GPT rewrite all existing textual info into chatbot format, so there's no loss there.

With newer techniques, such as chain of thought and self-checking, you can also generate a ton of high-quality training data that won't degrade the LLM's output. Though the degree to which you can do that isn't clear to me.
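E.g. something like: sample a step-by-step answer, have the model grade its own answer, and keep only the ones that pass. Everything here is illustrative; for math-style questions you could check the final answer programmatically instead of asking the model:

    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        return client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content

    def make_example(question: str) -> str | None:
        # Generate a chain-of-thought answer, then self-check it.
        answer = ask(f"Answer step by step:\n{question}")
        verdict = ask(f"Question: {question}\nAnswer: {answer}\n"
                      "Is the answer correct? Reply YES or NO.")
        return answer if verdict.strip().upper().startswith("YES") else None

    questions = ["What is 17 * 24?"]  # toy input
    # Keep only the examples that pass their own check.
    dataset = [ex for q in questions if (ex := make_example(q))]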

Imo it makes sense to train an LLM as a chatbot from the start.