2ndorderthought 16 hours ago
Why are you running 2 instances anyways? If you want that workflow, just rent a few EC2 GPU instances and fire away?
vidarh 16 hours ago | parent
If you're going to rent a few EC2 GPU instances, you might as well funnel things through OpenRouter. Not many of us have workflows where trusting an LLM provider is a problem but sending the data to EC2 is not. As for why: why would you not? Sitting around waiting for a single assistant is an inefficient use of time; I tend to have more like 4-10 instances running in parallel.