▲ Insimwytim 4 hours ago
We also need to take into account that CGI only consumes energy when the actual creation of a particular video happens. "AI" consumes energy before the user has even started (during training). That is on top of the comparison for each particular case.
▲ sdenton4 4 hours ago
Right idea, but the application is incorrect. Model training is analogous to creating the CGI for a movie: both happen before anyone consumes the output, and both represent an up-front cost for the producer. A blockbuster movie and a large language model can each cost tens or hundreds of millions of dollars to produce.

In both cases additional infrastructure is needed for efficient delivery: movie theaters or streaming platforms for movies, and data centers with GPUs for LLMs. These are also up-front (capex) costs.

At consumption time, the movie requires some additional resources per viewing, whether in a theater or via streaming. Likewise, an LLM consumes some resources at inference time. These are opex. In both cases, the marginal cost per inference/viewing is quite low.
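The capex/opex split above amounts to simple amortization arithmetic: a large up-front cost divided across many consumption events, plus a small per-event cost. A minimal sketch (all dollar figures are made-up placeholders, not numbers from this thread):

```python
# Amortized cost per consumption event: share of the up-front (capex) cost
# plus the marginal (opex) cost of each viewing or inference request.
def amortized_cost(upfront_usd, num_events, opex_per_event_usd):
    return upfront_usd / num_events + opex_per_event_usd

# Hypothetical movie: $100M production, 50M viewings, $0.50 delivery cost each.
movie = amortized_cost(100e6, 50e6, 0.50)

# Hypothetical LLM: $50M training run, 1B requests, $0.002 inference cost each.
llm = amortized_cost(50e6, 1e9, 0.002)

print(f"movie: ${movie:.3f} per viewing")
print(f"llm:   ${llm:.3f} per request")
```

The point the numbers illustrate: once the up-front cost is spread over enough consumption events, the per-event cost is dominated by (low) marginal opex, for movies and models alike.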