vidarh 16 hours ago

If you're going to rent a few EC2 GPU instances, you might as well funnel things through OpenRouter. Not that many of us have workflows where trusting an LLM provider is a problem but sending the data to EC2 is not.
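
For the curious, it's mostly a base-URL swap, since OpenRouter exposes an OpenAI-compatible API. Rough sketch; the model slug is just an example:

    import os
    from openai import OpenAI

    # Point the standard OpenAI client at OpenRouter instead of a
    # self-hosted EC2 endpoint; only base_url and the key change.
    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key=os.environ["OPENROUTER_API_KEY"],
    )
    resp = client.chat.completions.create(
        model="meta-llama/llama-3.1-70b-instruct",  # example slug
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(resp.choices[0].message.content)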

As for why, why would you not? Sitting around waiting for a single assistant is an inefficient use of time; I tend to have more like 4-10 instances running in parallel.
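
The parallel setup is nothing fancy. A sketch, where `agent` stands in for whichever CLI you use and the worktree paths are made up:

    import subprocess

    # Fan tasks out to one agent process per git worktree so the
    # instances don't step on each other's checkouts.
    tasks = ["fix-login-bug", "add-csv-export", "bump-deps"]
    procs = [
        subprocess.Popen(["agent", "run", task], cwd=f"../worktrees/{task}")
        for task in tasks
    ]
    for p in procs:
        p.wait()  # come back and review each result in turn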

2ndorderthought 15 hours ago | parent | next [-]

I see absolutely no reason to send company IP, future plans, and our current code base to any other company.

I also do not run 10 agents at the same time. There's no way I could keep up with that volume of work in any meaningful way.

killingtime74 15 hours ago | parent | next [-]

Does your company self-host everything, though? Many are already in the cloud, so why single out LLMs as the one thing not to use the cloud for?

2ndorderthought 12 hours ago | parent [-]

I trust most cloud providers more than most LLM providers, but I still don't trust them much. Anything I can keep safeguarded on premises, I do.

killingtime74 5 hours ago | parent [-]

My understanding is that most of the cloud providers run the LLMs on their own infra, like AWS Bedrock: https://aws.amazon.com/bedrock/pricing/
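
A sketch with boto3; the request goes to an AWS service endpoint rather than to the model vendor, and the model ID here is just an example:

    import boto3

    # Bedrock's runtime API; the model runs on AWS infrastructure.
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    resp = client.converse(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example
        messages=[{"role": "user", "content": [{"text": "Hello"}]}],
    )
    print(resp["output"]["message"]["content"][0]["text"])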

gowld 10 hours ago | parent | prev [-]

Nobody wants or needs your company IP, future plans, or current code base.

You don't run 10 agents to get a greater volume of work. You run 10 agents to get better-quality work.

jen20 16 hours ago | parent | prev [-]

> Not that many of us have workflows where trusting an LLM provider is a problem but sending the data to EC2 is not.

I'd imagine plenty of people have a problem trusting fly-by-night inference providers, or model owners with opt-out policies [1] [2] around training on your data, yet would be more than happy to send data to EC2, or even to the same models in Amazon Bedrock.

[1]: https://github.blog/news-insights/company-news/updates-to-gi...

[2]: https://help.openai.com/en/articles/5722486-how-your-data-is...