| ▲ | lisper 6 days ago |
| > So how many gates are we talking to factor some "cryptographically useful" number? That is a hard question to answer for two reasons. First, there is no bright line that delineates "cryptographically useful". And second, the exact design of a QC that could do such a calculation is not yet known. It's kind of like trying to estimate how many traditional gates would be needed to build a "semantically useful" neural network back in 1985. But the answer is almost certainly in the millions. [UPDATE] There is a third reason this is hard to predict: for quantum error correction, there is a tradeoff between the error rate in the raw qubit and the number of gates needed to build a reliable error-corrected virtual qubit. The lower the error rate in the raw qubit, the fewer gates are needed. And there is no way to know at this point what kind of raw error rates can be achieved. > Is there some pathway that makes quantum computers useful this century? This century has 75 years left in it, and that is an eternity in tech-time. 75 years ago the state of the art in classical computers was (I'll be generous here) the Univac [1]. Figuring out how much less powerful it was than a modern computer makes an interesting exercise, especially if you do it in terms of ops/watt. I haven't done the math, but it's many, many, many orders of magnitude. If the same progress can be achieved in quantum computing, then pre-quantum encryption is definitely toast by 2100. And it pretty much took only one breakthrough, the transistor, to achieve the improvement in classical computing that we enjoy today. We still don't have the equivalent of that for QC, but who knows when or if it will happen. Everything seems impossible until someone figures it out for the first time. --- [1] https://en.wikipedia.org/wiki/UNIVAC_I#Technical_description |
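A minimal back-of-the-envelope sketch of that ops/watt exercise, using the UNIVAC I figures from the linked Wikipedia article (roughly 1,905 operations per second at about 125 kW) and assumed round numbers for a current datacenter GPU (the ~100 TFLOP/s and ~700 W below are placeholders, not the spec of any particular part):

    # Rough ops-per-watt comparison: UNIVAC I vs. an assumed modern datacenter GPU.
    import math

    univac_ops_per_sec = 1.9e3   # ~1,905 operations per second (per Wikipedia)
    univac_watts = 1.25e5        # ~125 kW power draw (per Wikipedia)

    gpu_ops_per_sec = 1e14       # assumed ~100 TFLOP/s
    gpu_watts = 7e2              # assumed ~700 W board power

    univac_eff = univac_ops_per_sec / univac_watts   # ~0.015 ops/W
    gpu_eff = gpu_ops_per_sec / gpu_watts            # ~1.4e11 ops/W

    print(f"gap: ~10^{math.log10(gpu_eff / univac_eff):.0f} in ops/watt")  # ~10^13

That works out to roughly thirteen orders of magnitude, consistent with "many, many, many."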
|
| ▲ | TheOtherHobbes 6 days ago | parent | next [-] |
| It's not an eternity because QC is a low-headroom tech which is already pushing its limits. What made computing-at-scale possible wasn't the transistor, it was the precursor technologies that made transistor manufacturing possible - precise control of semiconductor doping, and precision optical lithography. Without those the transistor would have remained a lab curiosity. QC has no hint of any equivalent breakthrough tech waiting to kick start a revolution. There are plenty of maybe-perhaps technologies like Diamond Defects and Photonics, but packing density and connectivity are always going to be huge problems, in addition to noise and error rate issues. Basically you need high densities to do anything truly useful, but error rates have to go down as packing densities go up - which is stretching optimism a little. Silicon is a very forgiving technology in comparison. As long as your logic levels have a decent headroom over the noise floor, and you allow for switching transients (...the hard part) your circuit will be deterministic and you can keep packing more and more circuitry into smaller and smaller spaces. (Subject to lithography precision.) Of course it's not that simple, but it is basically just extremely complex and sophisticated plumbing of electron flows. Current takes on QC are the opposite. There's a lot more noise than signal, and adding more complexity makes the problem worse in non-linear ways. |
| |
| ▲ | lisper 6 days ago | parent [-] | | I'm sympathetic to this argument, but nearly every technological breakthrough in history has been accompanied by plausible-sounding arguments as to why it should have been impossible. I myself left my career as an AI researcher about 20 years ago because I was convinced the field was moribund and there would be no major breakthroughs in my lifetime. That was about as well-informed a prediction as you could hope to find at the time and it was obviously very wrong. It is in the nature of breakthroughs that they are rare and unpredictable. Nothing you say is wrong. I would bet against QC in 5 years (and even then I would not stake my life savings) but not 75. | | |
| ▲ | lqstuart 6 days ago | parent | next [-] | | In fairness, the biggest breakthrough in AI has been calling more and more things “AI.” Before LLMs it was content-based collaborative filtering. | | |
| ▲ | lisper 6 days ago | parent [-] | | No, LLMs are a real breakthrough even if they are not by themselves reliable enough to produce a commercially viable application. Before LLMs, no one knew how to even convincingly fake a natural language interaction. I see LLMs as analogous to Rodney Brooks's subsumption architecture. Subsumption by itself was not enough, but it broke the logjam on the then-dominant planner-centric approach, which was doomed to fail. In that respect, subsumption was the precursor to Waymo, and that took less than 40 years. I was once a skeptic, but I now see a pretty clear path to AGI. It won't happen right away, but I'd be a little surprised if we didn't see it within 10 years. | | |
| ▲ | Retric 6 days ago | parent | next [-] | | > no one knew how to even convincingly fake a natural language interaction. There were some decent attempts at the Turing test, given limited subject matter, long before LLMs. As in, people looking at the conversation were unsure if one of the parties was a computer. It’s really interesting to read some of those transcripts. LLMs actually score worse on some of those tests. Of course they do a huge range of other things, but it’s worth understanding both their strengths and many weaknesses. | |
| ▲ | kibwen 6 days ago | parent | prev | next [-] | | > It won't happen right away, but I'd be a little surprised if we didn't see it within 10 years. Meanwhile, even after the infamous LK-99 fiasco (which gripped this forum almost more than anywhere else) was exposed as an overblown nothingburger, I still had seemingly-intelligent people telling me with all seriousness that the superconductor breakthrough had a 50% chance of happening within the next year. People are absolutely, terminally terrible at estimating the odds of future events that are surrounded by hype. | |
| ▲ | seanmcdirmid 6 days ago | parent | prev | next [-] | | I thought Waymo was much more ML than logical rules-based subsumption? I’m not sure it’s possible to do more than simple robotics without jumping into ML; I guess maybe if you had high-level rules prioritized via subsumption but manipulating complex ML-trained sensors and actuators. | |
| ▲ | lisper 6 days ago | parent [-] | | Yes, that's right. The ostensible idea behind subsumption is dead (because it was wrong). But what subsumption did was open up the possibility of putting the AI into the run-time feedback loop rather than the deliberative planning, and that is what all robotic control architectures do today. |
| |
| ▲ | zppln 6 days ago | parent | prev [-] | | > clear path to AGI What are the steps? | | |
| ▲ | lisper 5 days ago | parent [-] | | It's not really about "steps", it's about getting the architecture right. LLMs by themselves are missing two crucial ingredients: embodiment and feedback. The reason they hallucinate is that they have no idea what the words they are saying mean. They are like children mimicking other people. They need to be able to associate the words with some kind of external reality. This could be either the real world, or a virtual world, but they need something that establishes an objective reality. And then they need to be able to interact with that world, poke at it and see what it does and how it behaves, and get feedback regarding whether their actions were appropriate or not. If I were doing this work, I'd look at a rich virtual environment like Minecraft or SimCity or something like that. But it could also be Coq or a code development environment. | | |
| ▲ | bryanrasmussen 5 days ago | parent [-] | | If they were able to associate with some sort of external reality, would that prevent hallucination, or just being wrong? Humans hallucinate and humans are wrong; perhaps being able to have intelligence without these qualities is the impossibility. | |
| ▲ | lisper 5 days ago | parent [-] | | It's certainly possible that computers will suffer from all the same foibles that humans do, but we have a lot of evolutionary baggage that computers don't, so I don't see any fundamental reason why AGIs could not transcend those limitations. The only way to know is to do the experiment. |
|
|
|
|
| |
| ▲ | kibwen 6 days ago | parent | prev [-] | | > nearly every technological breakthrough in history has been accompanied by plausible-sounding arguments as to why it should have been impossible Indeed, and at the same time the breakthroughs are vastly outnumbered by ideas which had plausible-sounding counterarguments which turned out to be correct. Which is to say, the burden of proof is on the people making claims that something implausible-sounding is plausible. | |
| ▲ | lisper 6 days ago | parent [-] | | But QC is quite plausible. There is no theoretical constraint that makes it impossible. It really is just an engineering problem at this point. | | |
| ▲ | kibwen 4 days ago | parent [-] | | But the distinction that we're trying to make here is that people hear "plausible in theory" and think "plausible in practice within the timespan of human civilization", which does not follow. I'm not trying to say anything about whether or not a CRQC will ever be built. I'm also not trying to say that pursuing PQC in the short term is a bad idea. But what I am saying is that the burden of proof remains on the believers to show that the engineering challenges are more than theoretically surmountable. | | |
| ▲ | lisper 4 days ago | parent [-] | | Yes, of course that is true. When I said that QC is "just an engineering problem" I did not mean to imply that it was straightforward. It's not. It's a Really Really Hard engineering problem with a lot of unknowns. It might turn out to be like fusion, perpetually 10-20 years away. Or it might turn out to be like the blue LED, seemingly impossible until someone figured out how to do it. I think you'd be foolish to bet your life savings on it either way. |
|
|
|
|
|
|
| ▲ | fhdkweig 6 days ago | parent | prev | next [-] |
| >> Is there some pathway that makes quantum computers useful this century? > This century has 75 years left in it, and that is an eternity in tech-time. As a comparison, we went from the first heavier-than-air flight to man walking on the moon in only 66 years. |
| |
| ▲ | manquer 6 days ago | parent | next [-] | | > walking on the moon in only 66 years. Yet it has been 53 years since we have been able to send a manned mission to the moon. No other program has come close, or is likely to in the next 13 years, including the current US one. By 2038 the moon landings will be closer to the Wright brothers than to us. The curve of progress is only smooth and exponential when you squint hard. A narrow few decades of exponential growth can hardly be expected to last for 100+ years. It is for the same reason you cannot keep doubling grains on a chessboard just because you did it quickly for the first 10-20 steps. Fusion power and quantum computing are always two decades away for a reason, despite the money being spent. AI has gone through 3-4 golden ages in living memory, and yet too many keep believing this one will last. The reality is that rapid innovation happens only when the conditions are right, i.e. when the groundwork has been done for decades or centuries beforehand, and then only for a short time (a few decades at best). | |
| ▲ | decimalenough 6 days ago | parent | next [-] | | > No other program has come close, or is likely to in the next 13 years, including the current US one. The Chinese are planning manned lunar landings in 2029-2030, and this is not a pipe dream; they've been systematically working at this for several decades now. They have already completed 6 out of 8 preparatory missions plus placed comms satellites in lunar orbit, and the final two are scheduled for 2026 and 2028. https://en.wikipedia.org/wiki/Chinese_Lunar_Exploration_Prog... | |
| ▲ | manquer 5 days ago | parent [-] | | It does not look like CMSA is planning any human lunar orbital missions or a human lander (Lanyue) return flight test before attempting to land with humans in 2030, just two missions from now; that is very ambitious. Perhaps the milestones are being set to compete with Artemis. When NASA gets delayed or reduced in scope, CNSA might reset to a more achievable date. That is just engineering risk on the dates; there are other classes of risk in geopolitics, economics, etc. Bottom line, I am skeptical that a successful landing and return can be attempted in 2030. 2035 is a more realistic target, I think. |
| |
| ▲ | Nevermark 6 days ago | parent | prev [-] | | > Yet it has been 53 years since we have been able to send a manned mission to the moon A near-total lack of demand explains that impressive stall. Even if the shuttle had worked out as well as its designers hoped, it was still envisioned as a major retreat, while sucking all the dollars out of the room. And today, the market for lunar landings is still very small. I think what it shows is that many technologies might have come earlier from a research and development standpoint, if we had enough money to burn. But that was an unusual situation. | |
| ▲ | manquer 5 days ago | parent [-] | | Yes, economics is a key factor for innovation. However, it alone is not sufficient. At times you simply need other foundational breakthroughs to happen, and they have to happen in sequence, i.e. one breakthrough has to happen and become widespread before work on the next one can progress, before you can achieve meaningful progress on the end goal. It is not like fusion or quantum computing has lacked serious or continuous funding over the last 20-30 years. Foundational model development is a classic current example. The returns are diminishing significantly, despite the tens of billions each quarter being thrown at the problem. No other R&D effort in our history has had this many resources allocated to it, perhaps including even the Moon landings. However, the ability to allocate resources has limits. Big tech can spend a few hundred billion a year, a number that would have been unimaginable even a decade ago, but even they cannot spend a few trillion dollars a year. |
|
| |
| ▲ | thechao 6 days ago | parent | prev | next [-] | | My great grandmother, who was born in 1891, asserted ca. 1990 that her favorite invention was large print novels. More importantly: the social right to read trashy novels. But, yeah, computers, planes, starships, nuclear power, etc etc. | |
| ▲ | sokoloff 6 days ago | parent | prev | next [-] | | Amara’s Law – “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” Related to your observation: A piece of the original Wright Flyer was landed on Mars just a bit over 117 years after the first flight. | |
| ▲ | Ekaros 5 days ago | parent | prev | next [-] | | On the other hand, from the first rocket being used it took 729 years for the Soviets to win the true space race of putting the first person in orbit around the Earth. | |
| ▲ | 6 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | saati 6 days ago | parent | prev | next [-] | | That's only true if you totally ignore hot air balloons; the actual first manned flight was in 1783. | |
| ▲ | lisper 6 days ago | parent [-] | | The comment you're responding to specified heavier-than-air. (And it should have been even more constrained: the real milestone was heavier-than-air powered flight.) |
| |
| ▲ | hangonhn 6 days ago | parent | prev | next [-] | | We went from the neutron being discovered to nuclear weapons in just over a decade. | |
| ▲ | eastbound 6 days ago | parent | prev [-] | | > to man walking on the moon in only 66 years And that was before the Unix epoch (1969; Unix time started in 1970). We went from calculator to AI in 55 years, which is, actually, extremely long. It took exactly the time needed to miniaturize CPUs enough that you could hold as many gates in a GPU as there are neurones in a human’s brain. The moment we could give enough transistors to a single program, AI appeared. It’s like it’s just an emergent behavior. | |
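A quick order-of-magnitude check on that gates-vs-neurons claim, using round published figures (a recent datacenter GPU has on the order of 80 billion transistors; the human brain has roughly 86 billion neurons) and an assumed transistors-per-gate ratio:

    # Sanity check: GPU logic gates vs. neurons in a human brain (rough figures only).
    gpu_transistors = 8e10       # ~80 billion transistors in a recent datacenter GPU
    brain_neurons = 8.6e10       # ~86 billion neurons in a human brain
    transistors_per_gate = 4     # assumed, e.g. a 2-input CMOS NAND gate

    print(f"GPU gates : {gpu_transistors / transistors_per_gate:.1e}")  # ~2e10
    print(f"Neurons   : {brain_neurons:.1e}")                           # ~8.6e10

Same ballpark, though counting gates rather than transistors leaves the GPU a few times short.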
| ▲ | jacquesm 6 days ago | parent | next [-] | | > We went from calculator to AI in 55 years, which is, actually, extremely long. I think it is insanely fast. Think about it: that planet has been here for billions of years. Modern humanity has been here for 200,000 years, give or take. It took 199700 years and change to get to a working steam engine. 266 years later men were walking on the moon and another 55 years and we had a good facsimile of what an AI looks like in practice. That's insane progress. The next 75 years are going to be very interesting, assuming we don't fuck it all up, the chances of which are right now probably 50/50 or so. | | |
| ▲ | lttlrck 6 days ago | parent | next [-] | | If this is what AI is going to look like in practice it's a big letdown. Science fiction has been predicting what an AI would be like for over a hundred years, there was even one in a movie in 1927. We're so far from what we dream that, to me, it feels like a mere leaf blowing in the wind compared to the Wright Flyer. | | |
| ▲ | jacquesm 6 days ago | parent | next [-] | | It's not what it can do today (which is already pretty impressive) it is what it can do in another century, which too is a relatively short time. The Wright Flyer was a complete aircraft but small, awkward and not very practical. But it had all of the parts and that was the bit that mattered. LLMs are not a 'complete AI' at all, they are just a very slick imitation of one through a completely different pathway. Useful, but not AI (at least, not to me). Meanwhile, a very large fraction of the users of OpenAI, Claude etc all think that AI has arrived and from that perspective it is mostly the tech crowd that is disappointed. For the rest of the people the thing is nothing short of magic compared to what they were able to do with a computer not so long ago. And for people like translators it is a massive threat to their jobs, assuming they still have one. It is both revolutionary and a letdown, depending on your viewpoint and expectations. | |
| ▲ | thephyber 6 days ago | parent | prev [-] | | This rhymes with “we were promised The Jetsons and all we got was Facebook.” Sci-fi is fanciful and doesn’t take into account psychology. What we got is the local maximum of what entrepreneurs think they can build and what people are willing to pay for. Sci-fi is not a prediction. It is a hypothetical vision for what humanity could be in a distant future. The writer doesn’t have to grapple with limitations of physics (note FTL travel is frequently a plot device, not a plausible technology) or limitations on what product-market fit the market will adopt. And, of course, sci-fi dates are rarely close or accurate. That’s probably by design (most Star Trek space technologies would be unbelievable if the timeline was 2030, but more easily believable if you add a few thousand years for innovation). | |
| ▲ | jacquesm 6 days ago | parent [-] | | And yet, a mobile phone is quite close to a Star Trek communicator and in many ways already much more powerful. Ok, you can ask to be beamed up by your friend Scotty and it likely won't happen (call me if it does) but other than that it is an impressive feat of engineering. | | |
| ▲ | sugarkjube 5 days ago | parent | next [-] | | > Star Trek communicator As a trekkie this was a dream come true. Unfortunately we still don't have a tricorder yet (despite Elizabeth Holmes's promise). But we do have the apps and the games; they didn't have those in Star Trek. My phone is loaded with these (apps, not games). | |
| ▲ | ykonstant 5 days ago | parent | prev [-] | | >(call me if it does) They can just tell you in person! |
|
|
| |
| ▲ | billforsternz 6 days ago | parent | prev [-] | | I agree with everything you say but I'm still exceptionally triggered by you going 2x10^5 - 300 = 199700 and change. | | |
| ▲ | jacquesm 6 days ago | parent [-] | | Well... the 200K is so loosely defined that it could well be 210000 or 190000 (or even further out) so I figured it would be funny to be exact. But you're right that that doesn't carry well online. |
|
| |
| ▲ | galangalalgol 6 days ago | parent | prev [-] | | It just seems that way because people had been researching neural networks from before the time processors had floating point units. So there were all these ideas people were waiting to try when we finally had the speed. Then it was a matter of trying them all to see which worked the best. But yes, there is the point that even a bad AI model can learn most anything if you give it enough parameters. So the emergent property isn't far off either. |
|
|
|
| ▲ | ted_dunning 6 days ago | parent | prev | next [-] |
| Good reminder on the time scale. On the other hand, the Univac could do more useful work than current quantum computers. |
|
| ▲ | 6 days ago | parent | prev [-] |
| [deleted] |