But with new hardware coming out, and with models perhaps becoming smart enough to help optimize themselves and reduce inference costs even further, I think we should still expect costs to go down.