That’s an odd claim, given that we have so-called thinking models. Is there a specific sense in which, in your view, LLMs are not thinking processes?
I can call my cat an elephant; it doesn't make him one.