shmerl 5 days ago

Why are they using different compilers?

account42 5 days ago | parent | next [-]

Either licensing issues (maybe they don't own all parts of the closed-source shader compiler), or fears that Nvidia/Intel could find out things about the hardware that AMD wants to keep secret (the fears being unfounded doesn't make them any less likely to be the reason). Or they may have decided it simply wasn't worth releasing (legal review isn't free) because the LLVM back-end was supposed to replace it anyway.

AnthonyMouse 5 days ago | parent | next [-]

> or fears that Nvidia/Intel could find out things about the hardware that AMD wants to keep secret (the fears being unfounded doesn't make them any less likely to be the reason)

When the fears are unfounded, the reason isn't "Nvidia/Intel could find out things about the hardware", it's "incompetence rooted in believing something that isn't true". Which is an entirely different thing, because in one case they would have a genuine dilemma and in the other they would only need to extricate their cranium from their rectum.

mschuster91 5 days ago | parent [-]

> When the fears are unfounded the reason isn't "Nvidia/Intel could find out things about the hardware"

Good luck trying to explain that to Legal. The core problem with everything FOSS is the patent and patent-licensing minefield. Hardware patents are already risky enough, with the chance of getting torched by some "submarine patent" troll, and the US adds software patents to the mix. And even if you think you've got all the licenses you need, the licensing terms might ban you from developing FOSS drivers/software implementing the patent, or you end up with something like the HDMI2/HDCP situation where the DRM <insert derogatory term here> insist on keeping their shit secret, or you've got regulatory requirements on RF emissions.

And unless you have backing from someone very high up the chain, Corporate Legal will default to denying your request for FOSS work if there is even a slight chance it might pose a legal risk for the company.

AnthonyMouse 5 days ago | parent | next [-]

> Hardware patents are already risky enough to get torched by some "submarine patent" troll, the US adds software patents to that mix.

Software patents are indeed a scourge, but not publishing source code doesn't get you out of them. Patent trolls file overly broad or submarine patents on things they get included into standards, so that everyone is infringing: the patent covers the abstract shape of every solution to the problem rather than any specific one, or covers the specific one the standard requires. They can still prove infringement against binary software, because your device is still observably doing the thing the patent covers.

Meanwhile arguing that this makes it harder for them to figure out that you're infringing their patent actually cuts the other way, because if plaintiffs are clever they're going to use that exact reasoning to argue for willful infringement -- that concealing the source code is evidence that you know you're infringing and trying to hide it.

> And even if you think you've got all the licenses you need, the licensing terms might ban you from developing FOSS drivers/software implementing the patent, or you end up with something like the HDMI2/HDCP situation where the DRM <insert derogatory term here> insist on keeping their shit secret, or you've got regulatory requirements on RF emissions.

To my knowledge there is no actual requirement that you not publish the source code for radio devices, only some language about not giving the user the option to exceed regulatory limits. But if that can be done through software then it could also be done by patching the binary or using a binary meant for another region, so it's not clear how publishing the code changes that one way or the other. More to the point, it's pretty uncommon for a GPU to have a radio transceiver in it anyway, isn't it? And even then, this would only be relevant to firmware, not drivers.

And the recommended way of implementing DRM is to not, but supposing that you're going to do it anyway, that would only apply to the DRM code and not all the rest of it. A GPU is basically a separate processor running its own OS which is separated into various libraries and programs. The DRM code is code that shouldn't even be running unless you're currently decoding DRM'd media and could be its own optional tiny little blob even if the other 98% of the code is published.

garaetjjte 4 days ago | parent | prev [-]

>Good luck trying to explain that to Legal

Don't let Legal run the company. It's there to support the company, not the other way around. (unless it's Oracle, I guess)

shmerl 5 days ago | parent | prev [-]

> the LLVM back-end was supposed to replace it anyway.

Is this still the case? I.e. why shut down the open amdvlk project then? They could just make it focused on Windows only.

kimixa 5 days ago | parent [-]

The open source release of amdvlk has never been buildable for Windows, as all the required Microsoft integration code has to be stripped out before release.

So at best it's of limited utility as a reference. I can see why they might decide that's just not worth the engineering time of maintaining and verifying their cleaning-for-open-source-release process (the MS code wasn't the only thing "stripped" from the internal source, either).

I assume the LLVM work will continue to be open, as it's used in other open stacks like ROCm and Mesa.

shmerl 5 days ago | parent [-]

I see, but I still don't get why any of that had to be stripped. Aren't they using public APIs? Nothing there has to be secret in any particular way.

kimixa 4 days ago | parent [-]

From what I remember, a lot of the code provided by Microsoft was not publicly available or permissively licensed.

Though I think a lot of it might be considered "legacy", it still existed.

shmerl 4 days ago | parent [-]

Huh, interesting. A very MS style of problem. Why can't they develop public OS interfaces for that? You don't need the full code, just interface libraries. Not having public interfaces in this day and age is simply dumb.

jacquesm 5 days ago | parent | prev [-]

Bluntly: because they don't get software and never did. The hardware is actually pretty good but the software has always been terrible and it is a serious problem because NV sure could use some real competition.

AnthonyMouse 5 days ago | parent [-]

I wish hardware vendors would just stop trying to write software. The vast majority of them are terrible at it and even within the tiny minority that can ship something that doesn't non-deterministically implode during normal operation, the vast majority of those are a hostile lock-in play.

Hardware vendors: Stop writing software. Instead write and publish hardware documentation sufficient for others to write the code. If you want to publish a reference implementation that's fine, but your assumption should be that its primary purpose is as a form of documentation for the people who are going to make a better one. Focus on making good hardware with good documentation.

Intel had great success for many years by doing that well, and has recently stumbled not because the strategy doesn't work but because it stopped fulfilling the "make good hardware" part of it relative to TSMC.

exDM69 5 days ago | parent | next [-]

> I wish hardware vendors would just stop trying to write software.

How would/should this work? Release hardware that doesn't have drivers on day one and then wait until someone volunteers to do it?

> Intel had great success for many years by doing that well

Not sure what you're referring to but Intel's open source GPU drivers are mostly written by Intel employees.

adrian_b 5 days ago | parent [-]

The documentation can be published in advance of the product launch.

Intel and AMD did this in the past for their CPUs and accompanying chipsets: instruction set extensions and I/O chipset specifications were published years in advance, giving software developers time to update their programs.

Intel still somewhat does it for CPUs, but for GPUs their documentation is delayed a lot in comparison with the product launch.

AMD now has significant delays in publishing the features actually supported by their new CPUs, even longer than for their new GPUs.

In order to have hardware that works on day one, most companies still have to provide specifications to the various partners who design the hardware or software parts required for a complete working system.

The difference between now and a few decades ago is that back then the advance specifications were public, which was excellent for competition, even if it frequently meant delays between the launch of a product and the existence of complete systems that worked with it.

Now, these advance specifications are given under NDA to a select group of very big companies, which design companion products. This makes it extremely difficult for any new company to compete with the incumbents, because newcomers would never obtain access to product documentation before the official product launch, and frequently not even after it.

mschuster91 5 days ago | parent | prev [-]

The problem is, making hardware is hard. Screw something up and, in the best case, you can fix it in ucode; if you're less lucky, you can get away with a new stepping; but in the worst case you have to do a recall and deal not just with your own wasted effort but also with the wasted downstream efforts and rework costs.

So a lot of the complexity of what the hardware does gets relegated to firmware, as that is easier to patch and, especially relevant for WiFi hardware shipped before the specs are finalized, to extend or adapt later on.

The problem with that, in turn, is patents and trade secrets. What used to be hideable in the ASIC masks is now computer code that's more or less trivial to disassemble or reverse engineer (see e.g. nouveau for older Nvidia cards, or Alyssa's work on Apple), and if you want true FOSS support, you sometimes can't fulfill other requirements at the same time (see the drama surrounding HDMI2/HDCP support for AMD on Linux).

And for anything RF, you get the FCC throwing rocks on top of that. For some years now, the specific combination of RF devices (WiFi, BT, 4G/5G), antenna and OS-side driver has had to be certified. That's why you get Lenovo devices refusing to boot when a non-Lenovo USB network adapter is attached at boot time, or when you swap the Sierra Wireless modem for an identical modem from a Dell (that differs only in VID/PID), and why you need old, long-outdated Lenovo/Dell/HP/... drivers for RF devices while the "official" manufacturer ones will not work without patching.

I would love a world in which everyone in the ecosystem were forced to provide interface documentation, datasheets, errata and ucode/firmware blobs with source for all their devices, but unfortunately, DRM, anti-cheat, anti-fraud and overeager RF regulatory authorities have a lot of influence over lawmakers, way more than FOSS advocates.

yencabulator 5 days ago | parent [-]

Linux already maintains quirks code paths for all kinds of devices where the manufacturer "could have" updated the firmware to fix the bug but never did.

It also contains quirks for Intel x86 core platform features: https://github.com/torvalds/linux/blob/master/arch/x86/kerne...

For the now-fashionable LLMs-on-GPUs world, it's pretty much just matrix multiplications. How many patents can reside in that? I don't expect Google to sell TPUs because that's not the business they're in, but AMD could put them in their SoCs without writing drivers: https://cloud.google.com/tpu/docs/system-architecture-tpu-vm...