| ▲ | the_pwner224 21 hours ago |
| You don't need CUDA for gaming but software is still just as big of a moat. Gaming GPU drivers are complex and have tons of game-specific patches. With their new Radeon/RDNA architecture it took AMD years to overcome their reputation for having shitty drivers on their consumer GPUs (and that reputation was indeed deserved early on). And I bet if you go read GPU discussion online today you'll still find people who avoid AMD because of drivers. That won't stop them, but it's a big barrier to entry. Oh and that's just to get the drivers to work. Not including company-specific features that need to be integrated by the game devs into their game codebase, like DLSS / FrameGen and FSR. And in the past there was other Nvidia/AMD-specific stuff like PhysX, hair rendering, etc. |
|
| ▲ | overfeed 17 hours ago | parent | next [-] |
| > Gaming GPU drivers are complex and have tons of game-specific patches.
I don't think the Chinese government will be too upset if cheap Chinese GPUs work best with China-made games. It will be quite the cultural coup if, in 20 years' time, the most popular shooter is a Chinese version of Call of Duty or Battlefield. |
| |
| ▲ | dash2 13 hours ago | parent [-] | | They made the most popular RPG last year already - why do you think it'll take 20 years for them to make the most popular shooter? For that matter, the Singapore-headquartered Sea makes Free Fire, which topped Google Play in 2019. | | |
| ▲ | overfeed 11 hours ago | parent | next [-] | | I'm aware of Genshin Impact, and that NetEase is behind Marvel Rivals. FPS tend to have stickier fanbases, but I chose 20 years because that's my guess for how long it may take not only for domestic EUV to launch, but also for yields to get good enough to put a cheap but competitive GPU out the door. | |
| ▲ | krige 8 hours ago | parent | prev [-] | | Gacha mobages are rarely considered the same kind of entertainment as actual RPGs, and even then the Japanese and the Koreans give them stiff competition. When it's not Skinner-box FOMO with titillating skins, the Chinese barely register on the radar. |
|
|
|
| ▲ | dontlaugh 20 hours ago | parent | prev | next [-] |
| Yea, but less than in the past. Modern graphics APIs are much thinner layers. This was even proven in practice with Intel’s Arc. While they had (and to some extent still have) their share of driver problems, at a low enough price that isn’t a barrier. |
|
| ▲ | Yoric 20 hours ago | parent | prev | next [-] |
| On the other hand, all it would take would be one successful Steam Deck/Steam Machine-style console to get all the developers of the world making sure that their games work on that hypothetical GPU. I don't think that it will happen in the next 5 years, but who knows? |
|
| ▲ | solid_fuel 17 hours ago | parent | prev | next [-] |
| I believe the software will follow the hardware. Not immediately, of course, but if I want to learn to do ML and have to pick between a $2500 Nvidia GPU and a $500 Chinese GPU that's 80% as fast, I would absolutely take the cheap one and keep an eye out for patches. When it comes to drivers, IMO all they really need is reasonable functionality on Linux. That alone would probably be enough to get used in a budget Steam Machine or budget PC builds, with Windows 11 being a disaster and both RAM and GPU prices shooting through the roof. The choice may soon be Bazzite Linux with a janky GPU or gaming on your phone. |
|
| ▲ | downrightmike 20 hours ago | parent | prev | next [-] |
| CUDA is 20 years old and it shows. Time for a new language that fixes those 20 years of rough edges. The guy who made LLVM (Chris Lattner) is working on this: https://www.modular.com/mojo Good podcast on him: https://newsletter.pragmaticengineer.com/p/from-swift-to-moj... |
| |
| ▲ | bigyabai 19 hours ago | parent | next [-] | | What I gather from this comment is that you haven't written CUDA code in a while, maybe ever. Mojo looked promising initially. The more details we got though, the more it became apparent that they weren't interested in actually competing with Nvidia. Mojo doesn't replace the majority of what CUDA does, it doesn't have any translation or interoperability with CUDA programs. It uses a proprietary compiler with a single implementation. They're not working in conjunction with any serious standardization orgs, they're reliant on C/C++ FFI for huge amounts of code and as far as I'm aware there's no SemVer of compute capability like CUDA offers. The more popular Mojo gets, the more entrenched Nvidia (and likely CUDA) will become. We need something more like OpenGL with mutual commitment from OEMs. Lattner is an awesome dude, but Mojo is such a trend-chasing clusterfuck that I don't know what anyone sees in it. I'm worried that Apple's "fuck the dev experience" attitude rubbed off on Chris in the long run, and made him callous towards appeals to openness and industry-wide consortiums. | | |
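(For readers who haven't touched CUDA: "compute capability" is the versioned major.minor contract each GPU reports through the stock runtime API. A minimal sketch of querying it - my own illustration, not tied to Mojo or any project above:)

    // Sketch: querying CUDA's versioned "compute capability" per device.
    // Build with: nvcc cc_query.cu -o cc_query   (filename is just an example)
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int d = 0; d < count; ++d) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, d);
            // prop.major / prop.minor is the compute capability (e.g. 8.6 on Ampere).
            // PTX built for an older capability can generally be JIT-compiled for
            // newer hardware, which is what gives CUDA its forward-compat story.
            printf("device %d: %s, compute capability %d.%d\n",
                   d, prop.name, prop.major, prop.minor);
        }
        return 0;
    }

Any challenger would need a similarly stable, versioned hardware contract for tools and libraries to target.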
| ▲ | CalmDream 16 hours ago | parent [-] | | Most of the stuff you pointed out is addressed in a series of blog posts by Lattner : https://www.modular.com/democratizing-ai-compute | | |
| ▲ | bigyabai 15 hours ago | parent [-] | | Many of those posts are opinionated and even provably wrong. The very first one about DeepSeek's "recent breakthrough" was never proven or replicated in practice. He's drawing premature conclusions, ones that especially look silly now that we know DeepSeek evaded US sanctions to import Nvidia Blackwell chips. I can't claim to know more about GPU compilers than Lattner - but in this specific instance, I think Mojo fucked itself and is at the mercy of hardware vendors that don't care about it. CUDA, by comparison, is having zero expense spared in its development at every layer of the stack. There is no comparison with Mojo; the project is doomed if they intend any real comparison with CUDA. | |
|
| |
| ▲ | htrp 19 hours ago | parent | prev [-] | | Mojo has been in the works for 3+ years now... not sure the language survives beyond the VC funding Modular has. |
|
|
| ▲ | checker659 12 hours ago | parent | prev | next [-] |
| There is nothing magical about CUDA |
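For anyone who hasn't looked: the core programming model really is small. A minimal sketch of a kernel and its launch (stock CUDA runtime, nothing project-specific):

    // Minimal CUDA example: scale-and-add over a vector, one element per thread.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void saxpy(int n, float a, const float* x, float* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));       // unified memory, for brevity
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y); // grid of 256-thread blocks
        cudaDeviceSynchronize();                        // wait before reading on host

        printf("y[0] = %f\n", y[0]);                    // expect 4.0
        cudaFree(x); cudaFree(y);
        return 0;
    }

The magic isn't the language; it's the years of libraries, profilers, and driver maturity built around it, which is exactly the moat the rest of the thread is describing.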
|
| ▲ | citizenpaul 20 hours ago | parent | prev [-] |
| It's not really just that AMD drivers are not that great (they are not, though they have been stable for a long time). It's that Nvidia relentlessly works with game developers to make sure their graphics tricks work with Nvidia drivers. It's so obvious you miss it. Look at the Nvidia driver update notes: they always list games that have fixes, performance improvements, etc. AMD never used to do this (still doesn't?); they just gave you the drivers and expected developers to make their games work with them. The same strategy MS used for their OS back in the '90s. That's at least how things got to where they are now. |
| |
| ▲ | Account_Removed 19 hours ago | parent | next [-] | | AMD provides this. Example:
"Fixed Issues and Improvements Intermittent driver timeout or crash may be observed while playing Warhammer 40,000: Space Marine 2 on some AMD Graphics Products, such as the AMD Ryzen™ AI 9 HX 370.) Lower than expected performance may be observed in Delta Force on Radeon™ RX 7000 series graphics products. Intermittent stutter may be observed while playing Marvel Rivals when AMD FidelityFX™ Super Resolution 3 frame generation is enabled. " https://www.amd.com/en/resources/support-articles/release-no... | |
| ▲ | Waterluvian 19 hours ago | parent | prev [-] | | The whole “improve a game’s performance on the driver side” thing: does AMD simply not do that at all? Or just far less? | | |
| ▲ | wincy 19 hours ago | parent | next [-] | | They definitely do some of it - Starfield came out with FSR out of the box, for example, but didn't add DLSS for several months. I got Starfield for free when I bought my 7800X3D, which was a nice bonus. Definitely to a lesser degree than Nvidia though. | |
| ▲ | m4rtink 19 hours ago | parent | prev [-] | | Frankly, this always seemed like a dirty hack - either the game or the drivers don't actually comply with the graphics API, and then the drivers need to hack around that. :P |
|
|