yanosh_kunsh 4 hours ago

So does that mean LLM inference could get significantly cheaper, and/or that context lengths could increase dramatically?