zarzavat | 7 hours ago

I swear that AI could independently develop a cure for cancer and people would still say that it's not actually intelligent, just matrix multiplications giving a statistically probable answer!

LLMs are at least designed to be intelligent. Our monkey brains have much less reason to be intelligent, since we only evolved to survive nature, not to understand it.

We are at this moment extremely deep into what most people would have considered actual artificial intelligence a mere 15 years ago. We're not quite at human levels of intelligence, but it's close.
qsera | 7 hours ago

> AI could independently develop a cure for cancer

All the answers to all your questions are contained in randomness. If you have a random sentence generator, there is a chance that it will output the answer to this question every time it is invoked. But that does not actually make it intelligent, does it?
famouswaffles | 7 hours ago

You are arguing a point no one is making. LLMs are not random sentence generators; their probability distributions are anything but random. You could build an actual random sentence generator, but no one would argue about its intelligence.
graemefawcett | 7 hours ago

This is exactly how problem solving works, regardless of the substrate of cognition. Start with "all the answers to your questions are contained in randomness" -> the unconstrained solution space. The game is whether you can inject enough constraints to collapse the solution space into one that can be searched before your TTL expires.

In software, that's generally handled by writing efficient algorithms. With LLMs, apparently the SOTA for this is just "more data centers, 6 months, keep pulling the handle until the right tokens fall out."

Intelligence is just knowing which constraints to apply, and in what order, such that the search space is effectively partitioned; it's the same thing the "reasoning" traces do. Same thing thermostats, bacteria, sorting algorithms, and rivers do, given enough timescale. You can do the same thing with effective prompting.

The LLM has no grounding, no experience, and no context other than what is provided to it. You either need to build that or be that in order for the LLM to work effectively.

Yes, the answers to all your questions are contained. No, it's not randomness. It's probability, and that can be navigated if you know how.
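The "collapse the solution space with constraints" point can be made concrete with a toy sketch. Everything here (the integer puzzle, the modular constraints, the TTL value) is invented purely for illustration and does not come from the thread:

```python
import random

# Toy version of the argument: the answer is "contained in randomness",
# and injecting constraints collapses the space so the search finishes
# before the TTL expires.

SPACE = 10**6  # unconstrained solution space: integers in [0, SPACE)

constraints = [
    lambda x: x % 7 == 3,
    lambda x: x % 11 == 5,
    lambda x: x % 13 == 8,
]

def handle_pulling(rng, ttl=200_000):
    """Unconstrained sampling: keep pulling until the right value falls out
    or the TTL expires."""
    for _ in range(ttl):
        x = rng.randrange(SPACE)
        if all(c(x) for c in constraints):
            return x
    return None  # TTL expired before a solution appeared

def constrained_search():
    """Inject the first constraint up front: enumerate only x = 7k + 3."""
    for x in range(3, SPACE, 7):  # solution space already cut 7-fold
        if all(c(x) for c in constraints):
            return x
    return None

print(constrained_search())  # smallest solution: 346 (solutions repeat mod 1001)
```

Both searches find an answer; the difference is how much of the space each one has to touch, which is the comment's point about algorithms versus handle-pulling.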
qsera | 4 hours ago

You can constrain the solution space all you want, but if you don't have a method for coming up with candidate solutions that might satisfy the constraints, you'll just be sitting there all day waiting for the machine to produce results. So intelligence is not "just knowing which constraints to apply"; it is also the ability to come up with solutions within the constraints without going through a lot of trial and error.

But hey, if LLMs can go through a lot of trial and error, they might produce useful results. That still isn't intelligence, though. It's just a highly constrained random solution generator.
graemefawcett | 4 hours ago

I believe that's what I and the paper are both saying as well. The LLM is pure routing; the constraints currently live elsewhere in the system. In this case, both the constraints and the motivation to perform the work are located in Knuth and his assistant.

Routing is important; it's why we keep building systems that do it faster and over more degrees of freedom. LLMs aren't intelligent on their own, but it's not because they don't have enough parameters.
|
|
wang_li | 7 hours ago

Last week I put "was val kilmer in heat" into the search box on my browser. The AI answer came back with: "No, Val Kilmer was not in heat. Val Kilmer played Chris Shiherlis in the movie Heat but the film did not indicate that he was pregnant or in heat. His performance was nuanced and skilled and represents a high point of the film." I was not curious about whether he was pregnant.

We are not only not close to human levels of intelligence, we are not even at dog, cat, or mouse levels of intelligence. We are not actually at any level of intelligence. Devices that produce text, images, or code do not demonstrate intelligence any more than a printer producing pages of beautiful art demonstrates intelligence.
DennisP | 6 hours ago

Honestly, when I read your first sentence, given the lack of a capital H, my brain initially went the same direction the AI did. Then I realized what you meant, but since I'd already gone there, I might have made a similar response as a joke. For the sake of my ego, I'm forced to reject your claim that this is evidence of stupidity.
logicprog | 2 hours ago

> I was not curious about whether he was pregnant.

I interpreted the question the same way the AI did.
sosodev | 6 hours ago

The model that processes search results is tiny and dumb. You shouldn't compare it to the frontier models that are solving complex math problems.
StilesCrisis | 4 hours ago

On Google, just clicking "AI Mode" gives you a substantially smarter model, and it's still pretty weak. But I assume the OP wasn't talking about Google, because Google doesn't seem to make this mistake even in a plain search.
wang_li | 1 hour ago

It was Bing, as that is the default for Edge as supplied on my work laptop. It doesn't do this now, but it does do something else quite weird.

Search: "was val kilmer pregnant or in heat"

Answer: "Not pregnant. Val Kilmer was not pregnant or in heat during the events of 'Heat.' His character, Chris Shiherlis, is involved in a shootout and is shot, which indicates he is not in a reproductive or mating state at that time." It then cites Wikipedia as the source of the information.

In terms of cognition, the answer is meaningless. Nothing in the question implies or suggests that it has to do with a movie. Additionally, "involved in a shootout and is shot, which indicates he is not in a reproductive or mating state" makes no sense at all. AI as deployed shows no intelligence.
|
|
worldsavior | 7 hours ago

That's wrong. Humans evolved to have big brains so they could better understand the environment and use it to their advantage.

I still see AI making stupid, silly mistakes. I'd rather think for myself than waste time on something that only remembers data and doesn't even understand it. Reasoning in AI is only about finding contradictions between its "thoughts," not actually understanding them.
someplaceguy | 7 hours ago

> I still see AI making stupid silly mistakes.

In contrast with humans, who are famously known for never making stupid, silly mistakes...
_fizz_buzz_ | 7 hours ago

> I still see AI making stupid silly mistakes.

Humans also make silly mistakes.
|
whimsicalism | 2 hours ago

It only took four years, but it appears that this view is finally dying out on HN. I would advise everyone who found this viewpoint compelling to think about how those same blinders might be affecting what you imagine the future will look like.
rustyhancock | 8 hours ago

I don't even think that's the issue. The issue, to my mind, is a lack of data at the meeting point of QFT and GR. After all, few humans historically have been capable of the initial true leap between ontologies. But humans are pretty smart, so we can't say that is a requirement for AGI.
worldsavior | 8 hours ago

When it comes to revolutionary or unsolved subjects, there will never be enough data. That's why they're revolutionary or unsolved.
cjcole | 6 hours ago

Maybe.

"The laws of nature should be expressed in beautiful equations." - Paul Dirac

"It is, indeed, an incredible fact that what the human mind, at its deepest and most profound, perceives as beautiful finds its realisation in external nature. What is intelligible is also beautiful. We may well ask: how does it happen that beauty in the exact sciences becomes recognizable even before it is understood in detail and before it can be rationally demonstrated? In what does this power of illumination consist?" - Subrahmanyan Chandrasekhar

"I often follow Plato's strategy, proposing objects of mathematical beauty as models for Nature." "It was beauty and symmetry that guided Maxwell and his followers." - Frank Wilczek

"Beauty is bound up with symmetry." - Hermann Weyl

"Still twice in the history of exact natural science has this shining-up of the great interconnection become the decisive signal for significant progress. I am thinking here of two events in the physics of our century: the rise of the theory of relativity and that of the quantum theory. In both cases, after yearlong unsuccessful striving for understanding, a bewildering abundance of details was almost suddenly ordered. This took place when an interconnection emerged which, though largely unvisualizable, was finally simple in its substance. It convinced through its compactness and abstract beauty; it convinced all those who can understand and speak such an abstract language." - Werner Heisenberg

Maybe (just maybe) these things (whatever you want to call them) will (somehow) gain access to some "compact", beautiful, "largely unvisualizable" "interconnection" which will be the self-evident solution. And if they do, many will be sure to label it a statistical accident from a stochastic parrot. And they'll be right, for some definitions of "statistical", "accident", "stochastic", and "parrot".
bobbylarrybobby | 7 hours ago

Did you read the linked paper? Claude out-reasoned humans on a challenging (or at least, unsolved) math problem.
cjcole | 7 hours ago

"Humans." Donald Knuth is an extremal outlier among humans, and the problem is squarely in his field of expertise. Claude, guided by Filip Stappers, a friend of Knuth, solved a problem that Knuth and Stappers had been working on for several weeks. Unfortunately, it doesn't seem (from my quick scan) to have been stated how long (or how many tokens or dollars) it took for Claude and Stappers to complete the proof.

In response, Knuth said: "It seems that I'll have to revise my opinions about 'generative AI' one of these days."

Seems like good advice. From reading elsewhere in this comment section, the goalposts seem to be approaching the infrared and will soon disappear from view entirely, given the extreme redshift caused by the rate at which they recede with each new achievement.
emp17344 | 6 hours ago

What goalposts do you think are being moved? I constantly see AI enthusiasts use this phrase, but it's not clear what goalposts they have in mind. Specifically, what is it that you want opponents to recognize that you believe they aren't currently recognizing?

We now have a tool that can be useful in some narrow domains in some narrow cases. It's pretty neat that our tools have new capabilities, but it's also pretty far from AGI.
cjcole | 6 hours ago

I'm not an enthusiast; I'm a Butlerian. Imagine hearing, pre-attention-is-all-you-need, that "AI" could do something that Donald Knuth could not (quickly solve the stated problem in collaboration with his friend). The idea that this (Putnam perfect, IMO gold, etc.) is all just "statistical parrot" stuff is wearing a little thin.
whimsicalism | 2 hours ago

You must have forgotten the /s at the end of your comment?
worldsavior | 7 hours ago

Merely luck, in my opinion. There could also have been multiple runs where it didn't solve it.