| ▲ | bayindirh 2 days ago |
| Well, the first 90% is easy; the hard part is the second 90%. Case in point: self-driving cars. Also, consider that we needed to pirate the whole internet to be able to do this, so these models are not creative. They are just directed blenders. |
|
| ▲ | throwthrowuknow 2 days ago | parent | next [-] |
| Even if Opus 4.5 is the limit, it’s still a massively useful tool. I don’t believe it’s the limit, though, for the simple fact that a lot could be done by creating more specialized models for each subdomain: they’ve focused mostly on web-based development but could do the same for any other paradigm. |
| |
| ▲ | emodendroket 2 days ago | parent | next [-] | | That's a massive shift in the claim, though... I don't think anyone is disputing that it's a useful tool; just the implication that because it's a useful tool that has seen rapid improvement, they're going to "get all the way there," so to speak. | |
| ▲ | bayindirh 2 days ago | parent | prev [-] | | Personally, I'm not against LLMs or AI itself, but considering how these models are built and trained, I refuse to use tools built on others' work without, or against, their consent (esp. GPL/LGPL/AGPL, Non-Commercial / No-Derivatives CC licenses, and source-available licenses). Of course, the tech will be useful and ethical if these problems are solved, or a decision is made to solve them the right way. | | |
| ▲ | ForHackernews 2 days ago | parent [-] | | We just need to tax the hell out of the AI companies (assuming they are ever profitable) since all their gains are built on plundering the collective wisdom of humanity. | | |
| ▲ | thfuran 2 days ago | parent | next [-] | | I don’t think waiting for profitability makes sense. They can be massively disruptive without much profit as long as they spend enough money. | |
| ▲ | encyclopedism 2 days ago | parent | prev [-] | | AI companies and corporations in general control your politicians so taxing isn't going to happen. |
|
|
|
|
| ▲ | literalAardvark 2 days ago | parent | prev | next [-] |
| They're not blenders. This is clear from the fact that you can distill the logic ability of a 700B-parameter model into a 14B model and keep almost all of it. You just lose knowledge, which can be provided externally, and which is the actual "pirated" part. The logic is _learned_. |
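For readers unfamiliar with the distillation being referenced: the standard (Hinton-style) recipe trains a small "student" model to match a large "teacher" model's temperature-softened output distribution. A minimal sketch of that objective, with toy logits rather than a real 700B/14B setup:

```python
import math

def softmax(logits, temperature=1.0):
    # Soften logits into probabilities; higher temperature flattens the distribution
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) over temperature-softened distributions:
    # the core objective of Hinton-style knowledge distillation
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that matches the teacher incurs zero loss
print(round(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]), 6))  # 0.0
# A mismatched student incurs a positive loss
print(distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0]) > 0)  # True
```

In practice this term is summed over the training set (often alongside a standard cross-entropy term on hard labels); the sketch above only shows the per-example objective.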
| |
| ▲ | encyclopedism 2 days ago | parent | next [-] | | It hasn't learned any LOGIC. It has 'learned' patterns from the input. | | | |
| ▲ | bayindirh 2 days ago | parent | prev [-] | | Are there any recent publications about it so I can refresh myself on the matter? | | |
| ▲ | D-Machine 2 days ago | parent [-] | | You won't find any trustworthy papers on the topic because GP is simply wrong here. That models can be distilled has no bearing whatsoever on whether a model has learned actual knowledge or understanding ("logic"). Models have always learned sparse/approximately-sparse and/or redundant weights, but they are still all doing manifold-fitting. The resulting embeddings from such fitting reflect semantics and semantic patterns. For LLMs trained on the internet, the semantic patterns learned are linguistic, which are not just strictly logical, but also reflect emotional, connotational, conventional, and frequent patterns, all of which can be illogical or just wrong. While linguistic semantic patterns are correlated with logical patterns in some cases, this is simply not true in general. |
|
|
|
|
| ▲ | mcfedr 2 days ago | parent | prev | next [-] |
| I like to think of LLMs as random number generators with a filter. |
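That metaphor maps loosely onto how decoding actually works: the model emits a probability distribution over tokens, a filter such as top-k prunes it, and a random draw picks from what remains. A toy sketch with made-up token probabilities:

```python
import random

def top_k_sample(token_probs, k=2, rng=random):
    # "Random number generator with a filter": keep only the k most
    # likely tokens, renormalize, then draw one at random among them.
    top = sorted(token_probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    total = sum(p for _, p in top)
    r = rng.random() * total
    for token, p in top:
        r -= p
        if r <= 0:
            return token
    return top[-1][0]  # guard against floating-point leftovers

probs = {"cat": 0.5, "dog": 0.3, "axolotl": 0.2}
# With k=2, "axolotl" is filtered out and can never be sampled
samples = {top_k_sample(probs, k=2) for _ in range(1000)}
print(samples <= {"cat", "dog"})  # True
```

Real decoders layer more filters on top (top-p, temperature, repetition penalties), but the shape is the same: distribution, filter, random draw.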
|
| ▲ | rat9988 2 days ago | parent | prev [-] |
| > Well, the first 90% is easy, the hard part is the second 90%. You'd need to prove that this assertion applies here. I understand that you can't deduce the future rate of gains from the past, but you also can't state this as a universal truth. |
| |
| ▲ | bayindirh 2 days ago | parent | next [-] | | No, I don't need to. Self-driving cars are the most recent and biggest example outside LLMs. The saying I quoted (which has different forms) holds for programming, construction, and even cooking, so it's a simple, well-understood baseline. Knowledge engineering has a notion called "covered/invisible knowledge," which points to the small things we do unknowingly that change the whole outcome. None of the models (AI in general, even) can capture this. We can say it's the essence of being human, or the tribal knowledge that makes experienced workers who they are, or makes mom's rice taste that good. Considering these are highly individualized and unique behaviors, a model based on averaging everything can't capture this essence easily, if it ever can, without extensive fine-tuning for/with that particular person. | | |
| ▲ | enraged_camel 2 days ago | parent | next [-] | | >> No, I don't need to. Self driving cars is the most recent and biggest example sans LLMs. Self-driving cars don't use LLMs, so I don't know how any rational analysis can claim that the analogy is valid. >> The saying I have quoted (which has different forms) is valid for programming, construction and even cooking. So it's a simple, well understood baseline. Sure, but the question is not "how long does it take for LLMs to get to 100%". The question is, how long does it take for them to become as good as, or better than, humans. And that threshold happens way before 100%. | | |
| ▲ | bayindirh 2 days ago | parent [-] | | >> Self-driving cars don't use LLMs, so I don't know how any rational analysis can claim that the analogy is valid. It doesn't matter, because if we're talking about AI models, no (type of) model reaches 100% linearly, or ever reaches 100% at all. For example, recognition models run on probabilities. Like Tesla's Autopilot (TM), which loves to hit rolled-over vehicles because it has not seen enough vehicle underbodies to classify them. The same goes for scientific classification models: they emit probabilities, not certain results. >> Sure, but the question is not "how long does it take for LLMs to get to 100%" I never claimed that a model needs to reach a proverbial 100%. >> The question is, how long does it take for them to become as good as, or better than, humans. They can be better than humans at certain tasks. They have actually been better than humans at some tasks since the '70s, but we like to disregard that to romanticize current improvements. Still, I don't believe the current or any generation of AIs can be better than humans at anything and everything at once. Remember: no machine can construct something more complex than itself. >> And that threshold happens way before 100%. Yes, and I consider that "threshold" to be "complete," if they can ever reach it, for certain tasks, not "any" task. |
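To illustrate the point about classifiers emitting probabilities rather than certainties: a softmax head has to spread all of its probability mass across the classes it knows, so it will confidently label even an input unlike anything in training. A toy sketch with hypothetical class labels:

```python
import math

def softmax(logits):
    # Convert raw scores into a probability distribution summing to 1
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# The classifier must assign its probability mass to known classes,
# even for an out-of-distribution input (labels here are hypothetical).
classes = ["car", "truck", "pedestrian"]
out_of_distribution_logits = [3.1, 0.2, -1.0]  # e.g. a rolled-over vehicle
probs = softmax(out_of_distribution_logits)
best = classes[probs.index(max(probs))]
print(best, round(max(probs), 2))  # → car 0.93
```

A high probability here only means "most car-like of the options I know," not that the prediction is correct; that gap is exactly why such models fail on inputs they have never seen.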
| |
| ▲ | rat9988 2 days ago | parent | prev | next [-] | | Self-driving cars are not proof. They only show that quick early gains don't necessarily mean you'll get to 100% fast; they don't prove the same will necessarily happen here. | |
| ▲ | damethos 2 days ago | parent | prev | next [-] | | "covered/invisible knowledge" aka tacit knowledge | | |
| ▲ | bayindirh 2 days ago | parent [-] | | Yeah, I failed to remember the term while writing the comment. Thanks! |
| |
| ▲ | thfuran 2 days ago | parent | prev [-] | | >None of the models (even AI in general) can capture this None of the current models maybe, but not AI in general? There’s nothing magical about brains. In fact, they’re pretty shit in many ways. | | |
| ▲ | bayindirh 2 days ago | parent | next [-] | | A model trained on a very large corpus can't, because these behaviors are different or specialized enough that they cancel each other out in most cases. You can forcefully fine-tune a model on a single person's behavior up to a certain point, but I'm not sure even that can capture the subtlest behaviors or decision mechanisms, which are generally the most important ones (the ones we call gut feeling or instinct). OTOH, while I won't call the human brain perfect, the things we label "shit" generally turn out to be very clever and useful optimizations to work around its own limitations, so I regard the human brain more highly than most AI proponents do. Also, we shouldn't forget that we don't know much about how that thing works; we only guess and try to model it. Lastly, searching for perfection in numbers and charts, or in an engineering sense, is misunderstanding nature and doing it a great disservice, but that's a subject for another day. | |
| ▲ | emodendroket 2 days ago | parent | prev [-] | | The understanding of the brain is far from complete whether they're "magical" or "shit." | | |
|
| |
| ▲ | sanderjd 2 days ago | parent | prev | next [-] | | I read the comment more as "based on past experience, it is usually the case that the first 90% is easier than the last 10%," which is the right base-case expectation, I think. That doesn't mean it will definitely play out that way, but you don't have to "prove" things like this. You can just say that they tend to be true, so it's a reasonable expectation that it will probably be true again. | |
| ▲ | rybosworld 2 days ago | parent | prev [-] | | The saying is more or less treated as a truism at this point. OP isn't claiming something original, and the onus of proving it isn't on them, imo. I've heard this same thing repeated dozens of times, and for different domains/industries. It's really just a variation of the 80/20 rule. |
|