eulgro 5 hours ago

> The report estimates that carbon emissions from models with the least efficient inference are over 10 times as high as those with the most efficient inference. DeepSeek’s V3 models were estimated to consume around 23 watts when responding to a “medium-length” prompt, while Claude 4 Opus was estimated to consume about 5 watts.

This makes absolutely no sense: watts measure power, not energy, so a per-prompt figure in watts is meaningless. I suppose they meant watt-hours, and even then that's a weird way to explain carbon emissions...
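
To make the units concrete, here's a minimal sketch of how you'd go from energy per response to carbon emissions. The grid-intensity figure and the reinterpretation of the quoted numbers as watt-hours are my own illustrative assumptions, not anything from the report:

```python
# Watts are power (J/s); energy per response = power * time, i.e. watt-hours.
# To get carbon, you need energy times a grid carbon intensity.

def response_emissions_g(watt_hours: float, grid_g_co2_per_kwh: float = 400.0) -> float:
    """Grams of CO2 for one response, given energy in Wh.

    grid_g_co2_per_kwh is an assumed grid carbon intensity (g CO2 per kWh);
    400 is a rough placeholder, not a figure from the report.
    """
    return watt_hours / 1000.0 * grid_g_co2_per_kwh

# If the quoted "23 watts" and "5 watts" were actually watt-hours per response:
print(response_emissions_g(23.0))  # 23 Wh at 400 g/kWh -> 9.2 g CO2
print(response_emissions_g(5.0))   # 5 Wh at 400 g/kWh -> 2.0 g CO2
```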