vdupras | 3 days ago
I have trouble understanding how that guideline applies here. The original article shows how it's possible that we're about to see an AI bubble pop, the parent comment shows generic American arrogance[1], and I came up with a historical example of such a mix of hubris and arrogance. If my comment can be characterized as flamebait, it has to be so to a lesser degree than the parent, right?

And I'm not even claiming that the situation applies. If you take the strongest plausible interpretation of my comment, it says that if indeed this whole AI bubble is hubris, and if indeed there's a huge fallout, then the leaders of this merry adventure, right now, must feel like Napoleon entering Moscow.

But well, anyways, cheers dang, it's a tough job.

[1]: the strongest possible interpretation of "This is how America ends up being ahead of the rest of world with every new technology breakthrough" is arrogance, right?
dang | 3 days ago | parent
By generic tangent I just meant that we ended up arguing about Napoleon of all things! And the flamebait part was the sarcastic/snarky bit.

But I totally get how the GP comment landed the way you describe, which is why we have guidelines like these:

"Please don't pick the most provocative thing in an article or post to complain about in the thread. Find something interesting to respond to instead."

and (repeating this one):

"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

Applying those to the GP comment (https://news.ycombinator.com/item?id=44974675): while it's true that the first sentence could sound like chest-beating, the rest of the comment was making an interesting point about risk tolerance. The 'strongest plausible interpretation' might go something like this:

"Even if the article is correct that 95% of companies are seeing zero return on AI spend so far, that by no means proves that they're on the wrong track. With a major technical wave like AI, it's to be expected that early efforts will involve a lot of losses. Long-term success may require taking early risk, and those with lesser risk tolerance, who aren't willing to sustain the losses associated with these pathfinding efforts, may find themselves losing out in the long run."

I have no idea whether that's right or not, but it would make for a more interesting and less hostile conversation! Which is basically what we're shooting for here.
eldenring | 3 days ago | parent
I'm not American. I think it's sad to see my country dismiss AI and continue to fall behind.