▲ vidarh 16 hours ago
If you're going to rent a few EC2 GPU instances, you might as well funnel things through OpenRouter. Not that many of us have workflows where trusting an LLM provider is a problem but sending the data to EC2 is not. As for why: why would you not? Sitting around waiting for a single assistant is an inefficient use of time; I tend to have more like 4-10 instances running in parallel.
▲ 2ndorderthought 15 hours ago | parent | next
I see absolutely no reason to send company IP, future plans, and our current code base to another company. I also don't run 10 agents at the same time: there's no way I could keep up with the volume of work from doing that in any meaningful way.
▲ jen20 16 hours ago | parent | prev
> Not that many of us have workflows where trusting an LLM provider is a problem but sending the data to EC2 is not.

I'd imagine plenty of people have a problem with trusting fly-by-night inference providers, or model owners with opt-out policies [1] [2] about training on your data, who would be more than happy to send data to EC2, or even to the same models in Amazon Bedrock.

[1]: https://github.blog/news-insights/company-news/updates-to-gi...

[2]: https://help.openai.com/en/articles/5722486-how-your-data-is...