| ▲ | nusl 10 hours ago |
Legit feels like Nvidia is just buying out the competition to maintain its position and power in the industry. I sincerely hope they fall flat on their face.
|
| ▲ | A_D_E_P_T 10 hours ago | parent | next [-] |
> Legit feels like Nvidia is just buying out the competition to maintain its position and power

Well, I mean, isn't that exactly what they should be doing? (I'm not talking about whether it benefits society; this is more about how they're incentivized.) Put yourself in their shoes. If you had all that cash, and you were hearing people talk about an "AI bubble" daily, and you wanted to ride the wave without ever crashing, the only rational thing to do is use the money to cover all your bases. That means buying competitors, and it also means diversifying a little.
| |
| ▲ | zapnuk 10 hours ago | parent | next [-] |
No one is claiming that it's a bad move. It's just an anti-competitive move that could be very bad for the consumer, as it makes the inference market less competitive.
| ▲ | BoredPositron 10 hours ago | parent | prev | next [-] |
Dunno, I thought AGI was going to make everything obsolete and was just around the corner? It looks more like it's dawning on everyone that transformers won't bring salvation. This is a show of weakness.
|
|
| ▲ | cmrdporcupine 6 hours ago | parent | prev | next [-] |
| That's unfortunately what most acquisitions are. |
|
| ▲ | _zoltan_ 10 hours ago | parent | prev | next [-] |
Which is exactly what a business should do. It's not like Nvidia doesn't invest a ton into R&D, but hey, they have the cash, so why not use it? Like a good business.
| |
| ▲ | moffkalast 9 hours ago | parent [-] |
In a normal world, this is where Nvidia gets trust-busted. But that world is long behind us now.
|
|
| ▲ | piskov 10 hours ago | parent | prev [-] |
Stuff like tinygrad will change this. Geohot already got Nvidia GPUs running on Macs via Thunderbolt. Also: https://x.com/__tinygrad__/status/1983469817895198783
| |
| ▲ | bri3d 10 hours ago | parent | next [-] |
The bottleneck in training and inference isn't matmul, and once a chip isn't a kindergarten toy you don't go from FPGA to tape-out by clicking a button. For local memory he's going to have to learn to either stack DRAM (not "3000 lines of Verilog", and it requires a supply chain which OpenAI just destroyed) or diffuse block RAM / SRAM like Groq, which is astronomically expensive bit for bit and torpedoes yields, compounding the issue. Then comes interconnect.
| ▲ | piskov 9 hours ago | parent [-] |
The main point is that it won't be Nvidia's monopoly for much longer.
| |
| ▲ | password54321 10 hours ago | parent | prev | next [-] |
This guy is the greatest Dunning-Kruger case of all time. Lots of smoke and mirrors.
| ▲ | refulgentis 10 hours ago | parent | prev [-] |
There's this curious experience of people bringing up geohot / tinygrad where you can tell they've been sold into a personality cult. I don't mean that pejoratively, and I apologize for the bluntness. It's just that I've been dealing with his nonsense since iPhone OS 1.0 jailbreaking, and I hate seeing people taken advantage of.

(Nvidia x Macs x Thunderbolt has been a thing for years and years, well before geohot.)

(The tweet is a non sequitur beyond bog-standard geohot tells: an odd obsession with LoC, and we're two years away from Changing The Game, just like we were two years ago.)
| ▲ | piskov 9 hours ago | parent [-] |
Can you show any other thing that runs an Nvidia GPU under M-series Macs?
| ▲ | MrDarcy 9 hours ago | parent [-] |
Who cares? Nobody is building large-scale inference services with Macs.
| ▲ | piskov 9 hours ago | parent [-] |
Because this is exactly a demonstration of the abstraction: the same stack allows direct GPU communication, which is why even the Mac + Nvidia thing is possible. And it isn't tied to Nvidia either. That's the power of tinygrad.
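For what it's worth, here's a minimal, illustrative sketch of what that device-agnostic interface looks like from user code (assuming a recent tinygrad install; the backend names in the comments, like "METAL" and "NV", are tinygrad device identifiers, and this is not the Thunderbolt demo itself):

    # Minimal tinygrad sketch: the same Tensor program runs on whichever
    # backend tinygrad selects on this machine.
    from tinygrad import Tensor, Device

    print(Device.DEFAULT)   # e.g. "METAL" on an M-series Mac, "NV"/"CUDA" with an Nvidia GPU
    x = Tensor.randn(1024, 1024)
    y = Tensor.randn(1024, 1024)
    z = (x @ y).relu()      # kernels are generated for the selected backend
    print(z.numpy().shape)  # (1024, 1024) regardless of which device ran it

If the upthread claim holds, the Mac-plus-Nvidia demo is essentially another backend plumbed in under this same interface.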
| ▲ | refulgentis 7 hours ago | parent [-] |
My deepest apologies, I can't parse this, and I earnestly tried: 5 minutes of my own thinking, then 3 LLMs, then a 10-minute timer of my own thinking over the whole thing.

My guess is you're trying to communicate "tinygrad doesn't need GPU drivers", which maybe gets transmuted into "tinygrad replaces CUDA", and you think "CUDA means other GPUs can't be used for LLMs, thus Nvidia has a stranglehold". I know George has pushed this idea for years now, but you need look no further than AMD/Google making massive deals to understand how it works on the ground.

I hope he doesn't victimize you further with his rants. It's cruel of him to use people to assuage his own ego and make them look silly in public.

Re: has someone else done this? https://github.com/albertstarfield/apple-slick-rtx (May 2024, 19 months ago; I didn't bother looking further than the 4th Google result for "apple silicon external gpu")
| ▲ | piskov 7 hours ago | parent [-] |
> Compute Workload Test: This will be add soon

Wonder what happened that it never came.

> Willy's got his i3-12100 Gen RTX3090 hosted on Ubuntu with Juice Server

eGPU my ass.