999900000999 2 days ago

Kiro’s main advantage is Amazon is paying for my LLM usage instead of me.

For the most part it’s unlimited right now. VS Code’s Copilot Agent mode is basically the same thing (tell it to write a list of tasks), but I have to pay for it.

I’m much happier with both of these options; both are much cheaper than Claude Code.

IMO the real race is to get LLM cost down. Someone smarter than me is going to figure out how to run a top LLM for next to nothing.

This person will be a billionaire. Nvidia and AMD are probably already working on it. I want DeepSeek running on a $100 computer that uses a nominal amount of power.
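One known lever for driving inference cost down is quantization: storing weights in fewer bits so a big model fits in cheap memory. A minimal NumPy sketch of symmetric per-tensor int8 quantization on a toy weight matrix (the matrix and sizes are illustrative, not from any real model):

```python
import numpy as np

# Toy weight matrix standing in for one layer of an LLM (fp32).
rng = np.random.default_rng(0)
w = rng.standard_normal((1024, 1024)).astype(np.float32)

# Symmetric per-tensor int8 quantization: map [-max|w|, max|w|] to [-127, 127].
scale = np.abs(w).max() / 127.0
w_q = np.round(w / scale).astype(np.int8)

# Dequantize on the fly at inference time.
w_deq = w_q.astype(np.float32) * scale

print(w.nbytes // w_q.nbytes)          # 4 -- int8 weights take 4x less memory
print(float(np.abs(w - w_deq).max()))  # worst-case error is bounded by scale/2
```

Real deployments (4-bit and below, per-channel scales, quantization-aware training) get much more aggressive than this, but the memory/accuracy trade shown here is the core idea.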

brokegrammer 2 days ago | parent [-]

My thoughts exactly. Inference should be dirt cheap for LLMs to truly become powerful.

It's similar to how computing used to be restricted to mega corps: a hundred years on, a smartphone has more computing power than any mainframe of that era. Today we need Elon Musk to buy 5 million GPUs to train a model. Tomorrow, we should be able to train a top-of-the-line model on a budget RTX card.

999900000999 2 days ago | parent [-]

Tbh, if the model is small enough you can train locally.

I don't need my code assistant to be an expert on Greek myths. The future is probably highly specialized mini LLMs. I might train a model to code my way.
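The "train a model on my own code" idea can be illustrated at toy scale. This sketch fits a character-bigram model to a tiny made-up code snippet; a real mini-LLM would be a small transformer, but the workflow (your own data in, a specialized predictor out) has the same shape:

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus for "my own code" (purely illustrative).
corpus = "def add(a, b):\n    return a + b\n\ndef sub(a, b):\n    return a - b\n"

# "Train": count which character follows which.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(prev_char):
    """Most likely next character under the bigram model."""
    return counts[prev_char].most_common(1)[0][0]

print(repr(predict("d")))  # 'e' -- in this corpus 'd' is most often followed by 'e'
```

Counting bigrams obviously isn't an LLM, but it makes the point: a model fit only to your own narrow data can be trivially small and cheap, because it never has to know about Greek myths.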

I'm not smart enough to figure this out, but the solution can't be to just brute-force training with more GPUs.

There is another answer.