It largely depends on what we mean by "reason," doesn't it? To be clear, that's not a pro argument from me; in my view, LLMs are stochastic parrots.