Aurornis 3 days ago

Using vendor kernels is standard in embedded development. Upstreaming takes a long time, so even among well-supported boards you either have to wait years for everything to land upstream, or find a board whose upstream kernel already supports enough peripherals that you're not missing anything you need.

I think it's a good thing that people are realizing that these SBCs are better used as development tools for people who understand embedded dev rather than as general-purpose PCs. For years now you've been able to find comments under every Raspberry Pi or other SBC thread informing everyone that a mini PC is a better idea for general-purpose compute unless you really need something an SBC offers, like specific interfaces or low power.

mort96 3 days ago | parent | next [-]

Somehow, this isn't a problem in the desktop space, even though new hardware regularly gets introduced there too which requires new drivers.

doubled112 3 days ago | parent | next [-]

x86 hardware has a standard way to boot and bring up the hardware, usually to at least a minimum level of functionality.

ARM devices aren't even really similar to one another. As a weird example, the Raspberry Pi boots from the GPU, which brings up the rest of the hardware.

mort96 3 days ago | parent [-]

It's not just about booting though. We solve this with hardware-specific devicetrees, which is less nice in a way than runtime discovery through PCI/ACPI/UEFI/etc, but it works. But we're not just talking about needing a hardware-specific devicetree; we're talking about needing hardware-specific vendor kernels. That's not due to the lack of boot standardization and runtime discovery.

gspr 2 days ago | parent [-]

Please forgive this naive question from someone with zero knowledge in the area: What's stopping ARM/RISCV-based stuff from using ACPI/UEFI?

mort96 2 days ago | parent [-]

Nothing, and there has been a push for more standardization including adopting UEFI in the ARM server space. It's just not popular in the embedded space. You'd have to ask Qualcomm or Rockchip about why.

gspr 2 days ago | parent [-]

So we can hope for a future where cheap ARM/RISC-V SBCs are as pleasant to use as any bog standard x86?

mort96 2 days ago | parent [-]

You can hope but I don't think it'll happen any time soon.

The lack of standardized boot and runtime discovery isn't such a big issue: u-boot deals with the former and devicetrees deal with the latter. We could already have an ecosystem where you download a bog-standard Ubuntu ARM image plus a bootloader and devicetree for your SBC and install them. It wouldn't be quite as elegant as on x86, but it wouldn't be that far off; you wouldn't have to use SBC-specific distros, and you could get your packages and kernels straight from Canonical (or Debian or whatever).
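
Part of why that split would work is that the board-specific devicetree the bootloader hands to the kernel is just one self-contained binary blob (a .dtb in flattened devicetree format). As a rough illustration, here's a sketch in Python that reads the fixed header every .dtb starts with; the field layout follows the devicetree specification, but the parser itself is just illustrative:

```python
import struct

FDT_MAGIC = 0xD00DFEED  # big-endian magic number at the start of every .dtb

def parse_fdt_header(blob: bytes) -> dict:
    """Parse the fixed 40-byte header of a flattened devicetree blob."""
    if len(blob) < 40:
        raise ValueError("too short to be a devicetree blob")
    # Ten big-endian 32-bit fields, in the order the spec defines them.
    fields = struct.unpack(">10I", blob[:40])
    names = (
        "magic", "totalsize", "off_dt_struct", "off_dt_strings",
        "off_mem_rsvmap", "version", "last_comp_version",
        "boot_cpuid_phys", "size_dt_strings", "size_dt_struct",
    )
    header = dict(zip(names, fields))
    if header["magic"] != FDT_MAGIC:
        raise ValueError("bad magic: not a devicetree blob")
    return header

# A minimal synthetic header, just to exercise the parser
# (a real .dtb would have the struct and strings blocks after it):
fake = struct.pack(">10I", FDT_MAGIC, 40, 40, 40, 40, 17, 16, 0, 0, 0)
hdr = parse_fdt_header(fake)
print(hdr["version"])  # prints 17, the current FDT format version
```

The point is that nothing in this blob is board-vendor code: it's pure data describing the hardware, which is why a single generic kernel plus the right .dtb could in principle boot any board.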

The reason we don't have that today is that drivers for important hardware just aren't upstream. They remain locked away in Qualcomm's and Rockchip's kernel forks for years. Last I checked, you still couldn't get HDMI working with upstream Linux on the popular RK3588 SoC, for example, because the HDMI PHY driver was missing, even though the RK3588 had been out for many years and the PHY driver had been available under the GPL for years in Rockchip's fork of Linux.

Even if we added UEFI and ACPI today, Canonical couldn't ship a kernel with support for all SBCs. They'd have to ship SBC-specific kernels to get the right drivers.

sunshine-o 2 days ago | parent | next [-]

I thought the Radxa Orion O6 [0], with its "SystemReady SR-certified BIOS", UEFI (EDK2) support and "Boot-up in Line with X86 Conventions", was an answer to this problem.

- [0] https://radxa.com/products/orion/o6/

mort96 2 days ago | parent [-]

It's not. It's nice that it supports UEFI and I hope more SBCs follow suit, but it categorically does not do anything to solve the vendor kernel problem. It just means you don't need a hardware-specific bootloader and a devicetree.

Now maybe the O6 also happens to only use hardware which works with upstream kernels, I don't know. I haven't been able to find anything definitive about that (though the fact that they link to special "Orion O6" versions of Fedora and Debian rather than their standard ARM images doesn't inspire confidence). But that's independent of UEFI.

sunshine-o a day ago | parent [-]

> Now maybe the O6 also happens to only use hardware which works with upstream kernels, I don't know.

I think so, because I looked it up when it was released and people were able to boot standard images ("All UEFI based ARM images with Mainline Kernel 6.6 and above" [0]). Their specific Fedora and Debian images reflect progress in support for the CPU, GPU, etc.

Looking back, I should have bought many of those just for the 64 GB of RAM...

- [0] https://sbcwiki.com/docs/soc-manufacturers/cix/cd8180-p1/boa...

gspr 2 days ago | parent | prev [-]

Right. Thanks!

ThrowawayB7 3 days ago | parent | prev [-]

The "somehow" is Microsoft, who defines the hardware architecture of an x86-64 desktop/laptop/server and builds the compatibility test suite (Windows HLK) to verify conformance. Open source operating systems rely on Microsoft's standardization.

mort96 3 days ago | parent | next [-]

Microsoft's standardization got AMD and Intel to write upstream Linux GPU drivers? Microsoft got Intel to maintain upstream xHCI Linux drivers? Microsoft got people to maintain upstream Linux drivers for touchpads, display controllers, keyboards, etc?

I doubt this. Microsoft played a role in standardizing UEFI/ACPI/PCI which allows for a standardized boot process and runtime discovery, letting you have one system image which can discover everything it needs during and after boot. In the non-server ARM world, we need devicetree and u-boot boot scripts in lieu of those standards. But this does not explain why we need vendor kernels.

jiggunjer 3 days ago | parent [-]

I think they're related. You can't have a custom kernel if you can't rebuild the device tree. You can't rebuild blobs.

mort96 3 days ago | parent [-]

> You can't have a custom kernel if you can't rebuild the device tree.

What is this supposed to mean? There is no device tree to rebuild on x86 platforms, yet you can have a custom kernel on x86 platforms. You sometimes need to use kernel forks there too, to work with really weird hardware without upstream drivers; there's nothing different about Linux's driver model on x86. It's just that in the x86 world, for the vast, vast majority of situations, pre-built distro kernels built from upstream kernel releases have all the necessary drivers.
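
The machinery behind "the distro kernel has all the drivers" is the module alias table: each driver declares wildcard patterns for the device IDs it handles, the build collects them into modules.alias, and at boot each device's modalias string is matched against those patterns to pick a module to load. A rough sketch of that matching, using fnmatch as a stand-in for the real kmod matcher and a made-up two-entry alias table (the specific vendor/device IDs here are illustrative, not taken from a real modules.alias):

```python
from fnmatch import fnmatchcase
from typing import Optional

# Entries in the style of /lib/modules/<version>/modules.alias.
# bc0Csc03i30 is the PCI class/subclass/prog-if for an xHCI USB controller.
MODULES_ALIAS = [
    ("pci:v00008086d*sv*sd*bc0Csc03i30*", "xhci_pci"),
    ("usb:v046DpC52B*", "hid_generic"),
]

def find_driver(modalias: str) -> Optional[str]:
    """Return the first module whose alias pattern matches the device."""
    for pattern, module in MODULES_ALIAS:
        if fnmatchcase(modalias, pattern):
            return module
    return None

# A device announces itself via its modalias string; matching it against
# the table tells us which module to load.
print(find_driver("pci:v00008086d00001E31sv00001028sd0000058Fbc0Csc03i30"))
# -> xhci_pci
```

This is why one generic kernel works for almost any x86 box: the drivers are all shipped as modules, and the alias table maps whatever hardware shows up to the right one.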

mayama 2 days ago | parent | prev [-]

It's a legacy of the IBM PC compatible standard, which had multiple vendors building computers and peripherals that work with each other. Microsoft tried their EEE approach with ACPI, which made suspend flaky in Linux in the early years.

mort96 2 days ago | parent [-]

This does not explain why the drivers for all the hardware are upstreamed almost immediately in the x86 world but remain locked away in vendor trees for years or forever in the ARM world. Vendor kernels don't exist due to the lack of standardized boot and runtime discovery.

apatheticonion 3 days ago | parent | prev | next [-]

I have always found it perplexing. Why is that required?

Is it the lack of drivers in upstream? Is it something to do with how ARM devices seemingly can't install Linux the same way x86 machines can (something something device tree)?

girvo 3 days ago | parent [-]

Yeah, lack of peripheral drivers upstream for all the little things on the board, plus (AIUI) ARM doesn't have the same self-describing hardware discovery mechanisms that x86 computers have. Basically, standardisation. They're closer to MCUs in that way, is how I found it (though my knowledge is way out of date now; it's been years since I was doing embedded).
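
You can actually see which hardware-description mechanism a Linux machine booted with from userspace: ACPI systems expose their tables under /sys/firmware/acpi, devicetree systems expose the tree under /sys/firmware/devicetree. A small sketch (the sysfs paths are the real locations; the helper function itself is hypothetical, and takes a root parameter so it can be exercised off-target):

```python
import os

def firmware_interface(sysfs_root: str = "/sys/firmware") -> str:
    """Guess which hardware-description mechanism the firmware provided.

    ACPI systems expose tables under <root>/acpi/tables; devicetree
    systems expose the tree under <root>/devicetree/base. Some ARM
    servers expose both.
    """
    has_acpi = os.path.isdir(os.path.join(sysfs_root, "acpi", "tables"))
    has_dt = os.path.isdir(os.path.join(sysfs_root, "devicetree", "base"))
    if has_acpi and has_dt:
        return "both"
    if has_acpi:
        return "acpi"
    if has_dt:
        return "devicetree"
    return "unknown"
```

On a typical x86 desktop this reports "acpi"; on a Raspberry Pi running a devicetree-based kernel it reports "devicetree".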

apatheticonion 3 days ago | parent [-]

I've just been doing some reading. The driver situation in Linux is a bit dire.

On the one hand, there is no stable driver ABI, because that would restrict Linux's ability to optimize its internal interfaces.

On the other hand, vendors (like Orange Pi, Samsung, Qualcomm, etc.) end up maintaining long-running and often outdated custom forks of Linux in an effort to hide their driver sources.

Seems... broken

pylotlight 2 days ago | parent | prev | next [-]

What's the feasibility these days of using AI-assisted software maintenance for drivers? Could that bridge the unsupported gap by letting you do it yourself, or is it not really a valid approach?

leoedin 2 days ago | parent | next [-]

I've found AI tools to be pretty awful for low-level work. So much of it requires making small changes to poorly documented registers. AI is very good at confidently hallucinating what register value you should use, and is often wrong. There's often such a big develop -> test cycle in embedded, and AI really only solves a very small part of it.

KeplerBoy 2 days ago | parent | prev [-]

That's just the new normal. Everyone is doing AI assisted work, but that doesn't mean the work goes away.

Someone still has to put in meaningful effort to get the AI to do it and ship it.

megous 3 days ago | parent | prev [-]

Or you can just upstream what you need yourself.