vunderba 3 days ago
Everybody trots out this argument. GPT-style LLMs were introduced back in 2018, so SIX years ago. Have they gotten more COHERENT? Absolutely. Is coherence the same thing as NOVELTY? NOT EVEN REMOTELY. I played with Markov chains in the '90s that were capable of producing surprising content. Unless there is a radical advancement in the underlying tech, I don't see any indication that they'll be capable of genuine novelty any time in the near future.

Take satire, for example. I have yet to see anything come out of an LLM that felt particularly funny. I suppose if the height of humor for you is dad jokes, Reddit-level word punnery, and the backs of Snapple lids, though, that might be different.
CuriouslyC 3 days ago | parent | next
If you have a particular style of witty observational humor that you prefer, providing the model with some examples of that style will help it generate better output. It's capable of generating pretty much anything if you prompt it the right way. For truly nuanced or novel things, you have to give it a nucleus of novelty and an axis of nuance for it to start generating things in your desired space.
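A minimal sketch of the few-shot approach described above: seed the prompt with examples of the target style before asking for new material. The `generate` call and the example jokes are hypothetical placeholders; only the prompt assembly is shown.

```python
def build_few_shot_prompt(examples, topic):
    """Assemble a prompt that anchors the model in a target comic style
    by listing style examples before the actual request."""
    lines = ["Here are jokes in the style I want:"]
    lines += [f"- {joke}" for joke in examples]
    lines.append(f"Write one new joke in this style, about {topic}.")
    return "\n".join(lines)

# Hypothetical style examples -- substitute real ones you actually like.
EXAMPLES = [
    "Example joke one in the desired observational style.",
    "Example joke two in the desired observational style.",
]

prompt = build_few_shot_prompt(EXAMPLES, "LLMs")
# Then send `prompt` to whatever model API you use, e.g.:
# response = generate(prompt)   # hypothetical call
```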
sumedh 3 days ago | parent | prev
> I don't see any indication that they'll be capable of genuine novelty any time in the near future.

That is like saying the plane invented by the Wright brothers will never go to the moon.