nwah1 18 hours ago
My understanding is that inference (running existing models) accounts for around a quarter of the average AI company's compute budget, and training new models for the other three quarters. So using only 11% of their GPUs suggests they've elected not to do as much training as they're capable of.
brazukadev 11 hours ago | parent
If they "elected" to do that with such a terrible model, they are the most incompetent AI lab ever.