▲ alaudet 2 hours ago
I had a conversation with Claude yesterday about this very topic. The AI was pretty candid about the issue and said many of the same things the author said. Now I'm not sure if I went in with an unintended bias and it just went into full sycophant mode; I tried to keep my prompts neutral, along the lines of asking about the implications of integrating AI into processes when the true cost is not being charged. But it was obvious that even moderate usage is a loss leader, so heavy users with agentic workloads are in a risky position and should think long and hard about their business model if costs slowly trickle up into the triple- or quadruple-digit range. I will continue to use it as an assistant that does the menial stuff quicker than I ever could, but it's just too early to let it do anything that would hurt if it disappeared. Enjoy it while it lasts.
▲ niekkamer 2 hours ago | parent
I think a solution could be local hardware acceleration. The difficult thing to achieve is not leaking model data, since that is obviously a no-go for Anthropic, OpenAI, etc.