| ▲ | EA-3167 2 days ago |
| I feel like it's already been pretty well digested and excreted for the most part, now we're into the re-ingestion phase until the bubble bursts. |
|
| ▲ | jdross 2 days ago | parent | next [-] |
| I am a tech founder who spends most of my day in my own startup deploying LLM-based tools into our operations, and I'm maybe 1% of the way through the roadmap I'd like to build with what exists and is possible to do today. |
| |
| ▲ | croes 2 days ago | parent | next [-] | | What does your roadmap have to do with current capabilities? LLMs still hallucinate and make simple mistakes, and the progress seems to be in the benchmarks only: https://news.ycombinator.com/item?id=43603453 | | |
| ▲ | edanm 2 days ago | parent [-] | | The parent was contradicting the idea that existing AI capabilities have already been "digested". I agree with them, btw. > And the progress seems to be in the benchmarks only This seems to be mostly wrong given people's reactions to e.g. o3, which was released today. Either way, even if progress had stalled for the last year, that wouldn't mean much considering how much progress there has been over the previous 15-20 years. |
| |
| ▲ | Jensson 2 days ago | parent | prev | next [-] | | > and I'm maybe 1% of the way through the roadmap I'd like to build with what exists and is possible to do today. How do you know they are possible to do today? Errors get much worse at scale, especially when systems start to depend on each other, so it is hard to say what can be automated and what can't. If you have a process A->B, automating A might be fine as long as a human does B, and vice versa, but automating both might not be. | |
| ▲ | danielmarkbruce 2 days ago | parent | prev [-] | | 100% this. The rearrangement of internal operations has only started and there is just sooo much to do. |
|
|
| ▲ | dicroce 2 days ago | parent | prev | next [-] |
| Not even close. Software can now understand human language... this is going to mean computers can be in a lot more places than they ever could before. Furthermore, software can now understand the content of images... eventually this will have a wild impact on nearly everything. |
| |
| ▲ | burnte 2 days ago | parent | next [-] | | It doesn't understand anything, there is no understanding going on in these models. It takes input and generates output based on the statistical math created from its training set. It's Bayesian statistics and vector/matrix math. There is no cogitation or actual understanding. | | |
| ▲ | abletonlive 2 days ago | parent [-] | | This is insanely reductionist, a mindless regurgitation of what we already know about how the models work. Understanding is a spectrum; it's not binary. We can measurably show that there is, in fact, some kind of understanding. If you explain a concept to a child, you check for understanding by seeing if the output they produce checks out with your understanding of the concept. You don't peer into their brain and see if there are neurons and consciousness happening. | | |
| ▲ | burnte a day ago | parent [-] | | The method of verification has no bearing on the validity of the conclusion. I don't open a child's head because there are side effects on the functioning of the child post brain-opening. However, I can look into the brain of an AI with no such side effects. This is an example I saw 2 days ago without even searching. Here ChatGPT is telling someone that it independently ran a benchmark on its MacBook: https://pbs.twimg.com/media/Goq-D9macAApuHy?format=jpg I'm reasonably sure ChatGPT doesn't have a MacBook, and didn't really run the benchmarks. But it DID produce exactly what you would expect a human to say, which is what it is programmed to do. No understanding, just rote repetition. I won't post more because there are a billion of them. LLMs are great, but they're not intelligent, they don't understand, and the output still needs to be validated before use. We have a long way to go, and that's ok. |
|
| |
| ▲ | AstralStorm 2 days ago | parent | prev [-] | | Understand? It fails to understand a rephrasing of a math problem a five-year-old can solve...
The bigger they get, the better they get at training to the test from memory. Likewise, you can get some emergent properties out of them. But it really does not understand a thing, sadly. It can barely analyze language and spew out a matching response chain. To actually understand something, it must be capable of breaking it down into constituent parts, synthesizing a solution, and then phrasing the solution correctly while explaining the steps it took. And that's not something even a huge LLM with a chain-of-thought notepad (like o3, GPT-4.1, or Claude 3.7) can really do properly. Further, it has to be able to operate at the sub-token level. Say, what happens if I run together truncated versions of words or sentences?
Even a chimpanzee can handle that (in sign language). It cannot do true multimodal IO either. You cannot ask it to respond with at least two matching syllables per word and two pictures of syllables per word, in addition to letters. This is a task a 4-year-old can do. Prediction alone is not indicative of understanding. Pasting together answers like Lego is also not indicative of understanding.
(Afterwards, ask it how it felt about the task. And to spot and explain some patterns in a picture of clouds.) |
|
|
| ▲ | kokanee 2 days ago | parent | prev | next [-] |
| To push this metaphor, I'm very curious to see what happens as new organic training material becomes increasingly rare, and AI is fed nothing but its own excrement. What happens as hallucinations become actual training data? Will Google start citing sources for their AI overviews that were in turn AI-generated? Is this already happening? I figure this problem is why the billionaires are chasing social media dominance, but even on social media I don't know how they'll differentiate organic content from AI content. |
|
| ▲ | tough 2 days ago | parent | prev [-] |
| maybe Silicon Valley and the world move at basically different rates, idk. AI is still just a speck outside of the HN and SV info-bubbles: early in mass adoption, like the smartphone or the internet once were,
mostly nerds playing w/ it |
| |
| ▲ | azinman2 2 days ago | parent | next [-] | | I really disagree. I had a masseuse tell me how he uses ChatGPT; he told it a ton of info about himself, and now he uses it for personalized nutrition recommendations. I was in Atlanta over the weekend recently, at a random brunch spot, and overheard some _very_ not SV/tech folks talk about how they use it every day. Their user growth rate shows this -- you don't hit hundreds of millions of people and have them all be HN/SV info-bubble folks. | | |
| ▲ | tough 2 days ago | parent [-] | | I see ChatGPT as the new Google, not the new Nuclear Power Source. maybe I'm naive | |
| |
| ▲ | acdha 2 days ago | parent | prev | next [-] | | That doesn’t match what I hear from teachers, academics, or the librarians complaining that they are regularly getting requests for things which don’t exist. Everyone I know who’s been hiring has mentioned spammy applications with telltale LLM droppings, too. | | |
| ▲ | tough 2 days ago | parent [-] | | I can see how students would be first users of this kind of tech, but I'm not in those spheres; I believe you, though. As for spammy applications, hasn't that always been the case, just made worse now by the cheapness of -generating- plausible data? Ghost applicants existed before AI, where consulting companies would pool people to try to get a position in a high-paying job and just do consultancy/outsourcing things underneath; many such cases before the advent of AI. AI just accelerates it, no? | |
| ▲ | acdha 2 days ago | parent [-] | | Yes, AI is effectively a very strong catalyst because it drives down the cost so much. Kids cheated before but it was more work and higher risk, people faked images before but most were too lazy to make high quality fakes, etc. |
|
| |
| ▲ | aleph_minus_one 2 days ago | parent | prev | next [-] | | > idk AI is just a speck outside of the HN and SV info-bubbles > still early to mass adoption like the smartphone or the internet, mostly nerds playing w it Rather: outside of the HN and SV bubbles, the A"I"s, and how one can fall for this kind of hype and dupery, are commonly ridiculed. | |
| ▲ | EA-3167 2 days ago | parent | next [-] | | This is accurate, doubly so for the people who treat it like a religion and fear the coming of their machine god. This, when what we actually have are (admittedly sometimes impressive) next-token predictors that you MUST double-check because they routinely hallucinate. Then again I remember when people here were convinced that crypto was going to change the world, democratize money, end fiat currency, and that was just the start! Programs of enormous complexity and freedom would run on the blockchain, games and hell even societies would be built on the chain. A lot of people here are easily blinded by promises of big money coming their way, and there's money in loudly falling for successive hype storms. | |
| ▲ | umeshunni 2 days ago | parent | prev [-] | | Yeah, I'm old enough to remember all the masses who mocked the Internet and smartphones too. | | |
| ▲ | tough 2 days ago | parent [-] | | I'm not mocking AI. The internet and smartphones fundamentally changed how societies operate, and AI will probably do so too, but why the doomerism? Isn't that how tech works? We invent new tech and use it, and so on. What makes AI fundamentally different from smartphones or the internet? Will it change the world? Probably; it already has. Will it end it as we know it? Probably not? | |
|
| |
| ▲ | kadushka 2 days ago | parent | prev [-] | | ChatGPT has 400M weekly users. https://backlinko.com/chatgpt-stats | | |
| ▲ | tough 2 days ago | parent [-] | | have you wondered how many of these are bots leveraging free chatgpt with proxied VPN IPs? I'm a ChatGPT paying user, but no one in my personal circles who isn't a developer is one too. maybe I'm an exception edit:
I guess 400M global users isn't out of scope for such a highly used product among a 7B population, when the US alone has ~300M citizens. But social media like Instagram or FB felt like they had network effects going for them, making their growth faster, and maybe that's why openai is exploring that idea idk | |
| ▲ | kadushka 2 days ago | parent [-] | | Pretty much everyone in high school or college is using them. Also everyone whose job is to produce some kind of content or data analysis. That's already a lot of people. |
|
|
|