fumeux_fume · a day ago
Aside from the fact that you seem to be demanding a lot from someone who's informally sharing their experience online, I think the effectiveness really depends on what you're writing code for. With straightforward use cases that have ample documented examples, you can generally expect decent or even excellent results. However, the more novel the task or the more esoteric the software library, the likelier you are to encounter issues and feel dissatisfied with the outcomes. Additionally, some people are simply pickier about code quality and won't accept suboptimal results.

Where I work, I regularly encounter wildly enthusiastic opinions about GenAI that lack any supporting evidence. Dissenting from the mainstream belief that AI is transforming every industry is treated as heresy, so such skepticism is best kept close to the chest, or better yet, completely to oneself.
BeetleB · a day ago
> Aside from the fact that you seem to be demanding a lot from someone who's informally sharing their experience online

Looking at isolated comments, you are right. My point was that it was a trend. I don't expect everyone to go into details, but I notice almost none do. Even what you pointed out ("great for some things, crappy for others") has much higher entropy than the typical comment.

Consider this: if every C++-related submission had comments saying the equivalent of "After using C++ for a few weeks, my verdict is that its performance capabilities are unimpressive," without any details about what led them to think that, I think you'd find my analogous criticism of such comments fair.