| ▲ | byteknight 7 days ago |
| I have to disagree. People who say LLMs do not qualify as AI are the same people who will continue to move the goalposts for AGI. "Well, it doesn't do this!" No one here is trying to replicate a human brain or the human condition in its entirety; they just want to replicate the thinking ability of one. LLMs represent the closest parallel to that goal we have seen thus far. Saying that LLMs are not AI feels disingenuous at best and purposely dishonest at worst (perhaps as a way of staving off the impending demise of a profession). The sooner people stop worrying about which label they feel fits LLMs best, the sooner they can find the things they (LLMs) absolutely excel at and improve their (the users') workflows. Stop fighting the future. It's not replacing anyone right now. Later? Maybe. But right now the developers and users fully embracing it are experiencing productivity boosts unseen previously. Language means whatever people use it to mean. |
|
| ▲ | sarchertech 7 days ago | parent | next [-] |
| > the developers and users fully embracing it are experiencing productivity boosts unseen previously This is the kind of thing that I disagree with. Over the last 75 years we’ve seen enormous productivity gains. Do you think that LLMs are a bigger productivity boost than moving from physically rewiring computers to using punch cards, from running programs as batch processes with printed output to getting immediate output, from programming in assembly to higher-level languages, or even just moving from enterprise Java to Rails? |
| |
| ▲ | skydhash 6 days ago | parent | next [-] | | Even learning your current $EDITOR and $SHELL can be a great productivity booster. I see people claiming AI is helping them while they hunt for files in the file manager tree instead of using `grep` or `find` (Unix). | |
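As a rough sketch of the kind of shell usage the comment above is pointing at (the file names, pattern, and directory below are made-up placeholders, not anything from the thread):

```sh
# Find files by name anywhere under the current directory
find . -type f -name '*.config.js'

# Search file contents recursively, printing file names and line numbers
grep -rn 'TODO' src/
```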
| ▲ | Espressosaurus 7 days ago | parent | prev | next [-] | | Or the invention of the container, or hell, the invention of the filing cabinet (back when "computer" was a job title). | |
|
|
| ▲ | overgard 7 days ago | parent | prev | next [-] |
| The studies I've seen on AI actually improving productivity show gains a lot more modest than what the hype would have you believe. For example: https://www.youtube.com/watch?v=tbDDYKRFjhk Skepticism isn't the same thing as fighting the future. I will call something AGI when it can reliably solve novel problems it hasn't been pre-trained on. That's my goalpost and I haven't moved it. |
|
| ▲ | jerf 6 days ago | parent | prev | next [-] |
| != is "not equal". The symbol for "not a subset of" is ⊄, which, you will note, I did not use. |
| |
| ▲ | byteknight 6 days ago | parent [-] | | I think you replied in the wrong place, bud. All the best. EDIT - I see now. Sorry. For all intents and purposes of the public, AI == LLM. End of story. Doesn't matter what developers say. | | |
| ▲ | marcus_holmes 6 days ago | parent [-] | | > For all intents and purposes of the public, AI == LLM. End of story. Doesn't matter what developers say. This is interesting, because it's so clearly wrong. The developers are also the people who build the LLMs, so what they say is actually the factual matter of the situation. It absolutely does matter what they say. But the public perception is that AI == LLM, agreed. Until the next development comes along, when public perception will suddenly change: LLMs will be old news, obviously not AI, and the new shiny will be AI. So not End of Story. People are morons. Individuals are smart, intelligent, funny, interesting, etc. But in groups we're moronic. |
|
|
|
| ▲ | oinfoalgo 6 days ago | parent | prev | next [-] |
| In cybernetics, this label has existed for a long time. Unfortunately, discourse has followed an epistemic trajectory influenced by Hollywood and science fiction, making clear communication on the subject nearly impossible without substantial misunderstanding. |
|
| ▲ | leptons 6 days ago | parent | prev | next [-] |
| So when an LLM all too often produces garbage, can we then call it "Artificial Stupidity"? |
| |
| ▲ | byteknight 6 days ago | parent [-] | | Not sure how that fits. Do you produce good results every time, first try? Didn't think so. | | |
| ▲ | leptons 6 days ago | parent [-] | | > Do you produce good results every time, first try? Almost always, yes, because I know what I'm doing and I have a brain that can think. I actually think before I do anything, which leads to good results. Don't assume everyone is a junior. > Didn't think so. You don't know me at all. | | |
| ▲ | neoromantique 6 days ago | parent | next [-] | | Sr. "human" here. If you always use your first output then you are not a senior engineer: either your problem space is THAT simple that you can fit all your context in your head at once, first try, or quite frankly you just bodge things together in a non-optimal way. It always takes a few tries at a problem to grasp the edge cases and visualize the problem space more easily. | |
| ▲ | Jensson 6 days ago | parent [-] | | Depends on how you define "try". If someone asks me to do something, I don't come back with a buggy piece of garbage and say "here, I'm done!"; the first deliverable will be a valid one, or I'll say I need more to do it. |
| |
| ▲ | danielbln 6 days ago | parent | prev [-] | | Here you have it, folks: seniors don't make mistakes. | |
| ▲ | Jensson 6 days ago | parent [-] | | When I'm confident something will work, it almost always works; that is very different from these models. Sure, sometimes I do stuff I am not confident about in order to learn, but then I don't say "here, I solved the problem for you" without building confidence around the solution first. Every competent senior engineer should be like this; if you aren't, then you aren't competent. If you are confident in a solution then it should almost always work, else you are overconfident and thus not competent. LLMs are confident in solutions that are shit. |
|
|
|
|
|
| ▲ | parineum 6 days ago | parent | prev | next [-] |
| > People who say LLMs do not qualify as AI are the same people who will continue to move the goalposts for AGI. I have the complete opposite feeling. The layman's understanding of the term "AI" is AGI, a term that only needs to exist because researchers and businessmen hype their latest creations as AI. The goalposts for AI don't move; the definition isn't precise, but we know it when we see it. AI, to the layman, is Skynet/Terminator, Asimov's robots, Data, etc. The goalpost-moving you're seeing is when something the tech bubble calls AI escapes the tech bubble and everyone else looks at it and says, no, that's not AI. The problem is that the tech industry calls everything that comes out of the research efforts toward AI "AI", despite it not achieving that goal by the common understanding of the term. LLMs were/are a hopeful AI candidate but, as of today, they aren't, though that doesn't stop OpenAI from trying to raise money using the term. |
| |
| ▲ | shkkmo 6 days ago | parent | next [-] | | AI has had many, many lay meanings over the years. Simplistic decision trees and heuristics for video games are called AI. It is a loose term, and trying to apply it with semantic rigour is useless, as is trying to tell people that it should only be used to match one of its many meanings. If you want some semantic rigour, use more specific terms like AGI, human-equivalent AGI, superhuman AGI, exponentially self-improving AGI, etc. Even those labels lack rigour, but at least they are less ambiguous. LLMs are pretty clearly AI and AGI under commonly understood, lay definitions. LLMs are not human-level AGI and perhaps will never be by themselves. | |
| ▲ | parineum 5 days ago | parent [-] | | > LLMs are pretty clearly AI and AGI under commonly understood, lay definitions. That's certainly not clear. For starters, I don't think there is a lay definition of AGI, which is largely my point. The only reason people are willing to call LLMs AI is because that's how they are being sold and the shine isn't yet off the rose. How many people call Siri AI? It used to be called that, but people have had time to feel around the edges where it fails to meet their expectations of AI. You can tell what people think of AI by the kind of clickbait surrounding LLMs. I read an article not too long ago with a headline about an LLM lying to avoid being turned off. It turns out it was intentionally prompted to do that, but the point is that that kind of self-preservation is what people expect of AI. Implicitly, they expect that AI has a "self". ChatGPT doesn't have a self. | |
| ▲ | shkkmo 5 days ago | parent [-] | | AI and AGI are broad umbrella terms. Stuff like AlphaZero is AI but not AGI, while LLMs are both. Engaging in semantic battles to try to change the meanings of those terms is just going to create more confusion, not less. Instead, why not use more specific and descriptive labels to be clear about what you are saying? Self-aware AGI, human-level AGI, and superhuman ANI are all much more useful than trying to force a general label to be used a specific way. | |
| ▲ | parineum 5 days ago | parent [-] | | > Engaging in semantic battles to try to change the meanings of those terms is just going to create more confusion You're doing that. I've never seen someone state, as fact, that LLMs are AGI before now. Go ask someone on the street what "superhuman ANI" means. | |
| ▲ | shkkmo 5 days ago | parent [-] | | > I've never seen someone state, as fact, that LLMs are AGI before now. Then you probably haven't been paying attention. https://deepmind.google/research/publications/66938/ Many LLMs are AI that weren't designed/trained to solve a narrow problem scope. They can complete a wide range of tasks with varying levels of proficiency. That makes them artificial general intelligence, or AGI. You are confused because lots of people use "AGI" as shorthand for "human-level" AGI that isn't limited to a narrow problem scope. It's not wrong to use the term this way, but it is ambiguous and vague. Even the term "human level" is poorly defined, and if I wanted to use "human-level AGI" in any discussion of what qualifies, I'd need to specify how I was defining it. | |
| ▲ | parineum 5 days ago | parent [-] | | I'm not confused at all. Your own personal definitions just further my point that tech people have a much different classification system than the general populace, and that the need for those excessive classifications exists because ambitious CEOs keep using the term incorrectly in order to increase share prices. It's actually very funny to me that you are stating these definitions so authoritatively despite the terms not having any sort of rigor attached to either their definition or usage. | |
| ▲ | shkkmo 3 days ago | parent [-] | | > It's actually very funny to me that you are stating these definitions so authoritatively despite the terms not having any sort of rigor attached to either their definition or usage. Huh? My entire point was that AI and AGI are loose, vague terms, and if you want to be clear about what you are talking about, you should use more specific terms. |
|
|
|
|
|
| |
| ▲ | byteknight 6 days ago | parent | prev [-] | | "Just ask AI" is a phrase you will hear around enterprises now. You hear "Google it" less often; you hear "ChatGPT it". |
|
|
| ▲ | imiric 6 days ago | parent | prev [-] |
| > The sooner people stop worrying about which label they feel fits LLMs best, the sooner they can find the things they (LLMs) absolutely excel at and improve their (the users') workflows. This is not a fault of the users. These labels are pushed primarily by "AI" companies in order to hype their products as far more capable than they are, which in turn increases their financial valuation. It starts with "AI" itself and continues with "superintelligence", "reasoning", "chain of thought", "mixture of experts", and a bunch of other labels that anthropomorphize and aggrandize their products. This is a grifting tactic as old as time itself. From Sam Altman[1]: > We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence Apologists will say "they're just words that best describe these products", or repeat Dijkstra's "submarines don't swim" quote, but all of this misses the point. These words are used deliberately because of their association with human concepts, when in reality the way the products work is not even close to what those words mean. In fact, the fuzzier the word's definition ("intelligence", "reasoning", "thought"), the more valuable it is, since it makes the product sound mysterious and magical and makes it easier to shake off critics. This is an absolutely insidious marketing tactic. The sooner companies start promoting their products honestly, the sooner their products will actually benefit humanity. Until then, we'll keep drowning in disinformation and reaping the consequences of an unregulated marketplace of grifters. [1]: https://blog.samaltman.com/the-gentle-singularity |