snek_case 4 hours ago

You could also work on building LLMs that use less compute and run locally. There are some pretty good open models already, and they could probably be made even more computationally efficient.
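For what it's worth, running an open model locally is already pretty accessible. Here's a minimal sketch using the llama-cpp-python bindings with a quantized GGUF model on CPU; the model filename is just a placeholder, any quantized open-weight model would work:

```python
# Minimal local inference sketch with llama-cpp-python.
# The model path below is hypothetical; substitute any GGUF file.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/open-model.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=2048,    # context window; smaller values use less memory
    n_threads=4,   # CPU threads; no GPU required
)

output = llm(
    "Explain quantization in one sentence.",
    max_tokens=64,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```

Quantizing weights down to 4 bits is most of why this fits in a few GB of RAM, which is also why there's likely more headroom for efficiency gains.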