emp17344 3 days ago

[flagged]

dang 3 days ago | parent | next [-]

Please don't start generic flamewars on HN or impugn people who take an opposing view to yours. Both these vectors lead to tedious, unenlightening threads.

There's plenty of rage to go around on literally every divisive topic, and it's not the place we want discussions to come from here.

"Eschew flamebait. Avoid generic tangents."

"Comments should get more thoughtful and substantive, not less, as a topic gets more divisive."

https://news.ycombinator.com/newsguidelines.html

emp17344 3 days ago | parent [-]

There are other users in this very thread using inflammatory language to attack this paper and those who find the paper compelling. One user says, quote: “You just can't reason with the anti-LLM group.”

In light of this, why was my comment - which was in large part a reaction to the behavior of the users described above - the only one called out here?

dang 3 days ago | parent [-]

Purely because I didn't see the others.

emp17344 3 days ago | parent [-]

Fair enough

dang 2 days ago | parent [-]

Thanks! You might be surprised at how meaningful that response is to me.

Topfi 3 days ago | parent | prev | next [-]

No disrespect to them, but unless there is a financial incentive at stake for them (beyond S&P 500 exposure), I've come to view this through the lens of sports teams, gaming consoles, and religions. You pick your side early, guided by hype, and from then on that choice can never have been wrong (just as the Wii U, Dreamcast, etc. was the best console).

Their viewpoint on this technology has unfortunately become part of some people's identity, and any position that isn't either "AGI imminent" or "this is useless" can provoke some major emotions.

Thing is, even if this finding holds (along with all the other LLM limits), it does not mean these models aren't impactful or shouldn't be scrutinised, nor does it mean they are useless. The truth is likely just a bit more nuanced than either narrow extreme.

Also, the mental health impact, job losses for white-collar workers, privacy issues, and rights holders' concerns about training-data collection, all the present-day impacts of LLMs, are easily brushed aside by someone who believes LLMs are near the "everyone dies" stage, which just so happens to be helpful if one were to run a lab. The same goes if you believe these are useless and will never get better: any discussion about real-life impacts is seen as an attempt to slowly get you to accept LLMs as a reality, when to you they never were and never will be.

entropicdrifter 3 days ago | parent [-]

I have a friend who is a Microsoft stan who feels this way about LLMs too. He's convinced he'll become the most powerful, creative and productive genius of all time if he just manages to master the LLM workflow just right.

He's retired, so I guess there's no harm in letting him try.

stratos123 3 days ago | parent | prev | next [-]

I tend to be annoyed whenever I see a paper with a scandalous title like that, because all such papers that I've seen previously were (charitably) bad or (uncharitably) intentionally misleading. Like that infamous Apple paper "The Illusion of Thinking", where the researchers didn't care that the solution to the problem they posed (Towers of Hanoi with N up to 20) couldn't possibly fit in the allotted output space.
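For context on the Towers of Hanoi claim: the optimal solution for N disks takes 2^N − 1 moves, so the arithmetic behind "couldn't possibly fit" can be sketched as follows (this sketch is mine, not from the thread; the exact token budget in the Apple paper is an assumption I'm not reproducing here):

```python
# The optimal Towers of Hanoi solution for N disks requires 2**N - 1 moves.
def hanoi_moves(n: int) -> int:
    return 2 ** n - 1

for n in (10, 15, 20):
    print(n, hanoi_moves(n))
# At N = 20 the solution is 1,048,575 moves; even at a few tokens per
# move, writing out the full move list exceeds typical model output limits.
```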

simianwords 3 days ago | parent [-]

I checked the paper and found that absolutely no reasoning was used for the experiments, so it was as good as using an instant model. We already know that reasoning is necessary to solve anything even slightly complicated.

In this case your intuition is completely valid, and this is yet another case of a misleading paper.

ticulatedspline 3 days ago | parent | prev | next [-]

> There’s a certain type of person who reacts with rage when anyone points out flaws with <thing>. Why is that?

FIFY: it's not endemic to HN or to LLMs. Point out Mac issues to an Apple fan, problems with a vehicle to a <insert car/brand/model> fan, that their favorite band sucks, or that their voted-for representative is a PoS.

Most people aren't completely objective about everything and thus have some non-objective emotional attachment to things they like. A subset of those people perceive criticism as a personal attack, are compelled to defend their position, or are otherwise unable to accept/internalize that criticism so they respond with anger or rage.

simianwords 3 days ago | parent | prev | next [-]

This paper itself is flawed, misleading, and unethical to publish, because the prompts they used resulted in zero reasoning tokens. It's like asking a person, point blank and without letting them think, to evaluate whether a string is balanced. Why do this? And the worst part is that most people in this thread bought the headline as-is from a flawed article. What does it say about you that you bought it without any skepticism?
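The balanced-string task the comment alludes to is the classic stack check. A minimal illustrative sketch (mine, not the paper's exact formulation, which may differ):

```python
# Illustrative balanced-bracket check using a stack; the paper's actual
# task setup is assumed, not reproduced, here.
def is_balanced(s: str) -> bool:
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in s:
        if ch in '([{':
            stack.append(ch)
        elif ch in pairs:
            # A closer must match the most recent unclosed opener.
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack  # balanced only if every opener was closed

print(is_balanced("([]{})"))  # True
print(is_balanced("([)]"))    # False
```

The point of the analogy: a human answers this reliably only by tracking state step by step, which is what a no-reasoning prompt denies the model.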

nonameiguess 3 days ago | parent | prev | next [-]

It's bizarre as hell. Another response compares it to sports fandom, which tracks. It reminds me of the "flair up" ethos of r/CFB, where they believe you're not allowed to comment on anything unless you declare which NCAA American football team you're a fan of, because once you do, anything you ever say can be dismissed with "ah, rich coming from a fan of team X", as if no discussion can ever be had that might be construed as criticism unless your own tribe is itself perfect and beyond critique.

This is stupid enough even in the realm of sports fandom, but how does it make any sense in science? Imagine if, every time we studied or enumerated the cognitive biases and logical fallacies in human thinking, the gut response of these same people was an immediate "yeah, well dogs are even stupider!" No shit, but it's a non sequitur. Are we forever banned from studying the capabilities and limitations of software systems because humans also have limitations?

ziml77 3 days ago | parent | prev | next [-]

I suspect they're afraid that if the hype dies, so will the pace of progress on LLMs, along with their cheap or free access to them.

3 days ago | parent | prev [-]
[deleted]