| ▲ | krastanov 6 hours ago |
| Wine's APIs are more stable than Linux's APIs, so it seems more plausible to me that Wine will become the first class target itself. |
|
| ▲ | TehCorwiz 6 hours ago | parent | next [-] |
| I wouldn't be surprised if Wine eventually becomes more stable than Windows. |
| |
| ▲ | alexrp 6 hours ago | parent | next [-] | | I've heard of multiple instances (I don't use Windows myself) where a Windows Update completely broke a game on Windows for everyone, but Wine/Proton kept running it just fine. So we're already there in some sense. | |
| ▲ | Aerroon 6 hours ago | parent | prev | next [-] | | Windows 14 will just be a Linux distro with Wine acting as the backwards-compatibility layer. | | | |
| ▲ | carlos_rpn 6 hours ago | parent | prev | next [-] | | It feels like it won't be long before Microsoft starts helping with that (by making Windows less stable, not improving Wine). | | |
| ▲ | keyringlight 6 hours ago | parent [-] | | What I wonder about is: if MS wants to keep people on Windows, what methods can they use to do that? For simple desktop stuff I don't think they have many options to lock other developers (and their audiences) into Windows unless those developers want to do so themselves (putting aside web-based software and non-desktop PCs). Bleeding-edge gaming and multiplayer anti-cheat is one area where I think having a big company own the OS probably helps them stay ahead, as that structure lets them work with hardware designers to get new capabilities in use (i.e. in new versions of DirectX) and available to software developers first. There's generally a lag before new features are adopted in Vulkan, then used downstream in Wine/Proton to reach compatibility parity with Windows, then the games themselves running with feature/performance parity. It'd be interesting to see what cooperation would be needed for the Linux gaming stack to be equal at the point new features are released, with the least amount of manual hacks or command-line tweaking required from users. As discussed a few weeks back, tough anti-cheat for Linux seems like a paradox with the current methods. | | |
| ▲ | mschuster91 5 hours ago | parent [-] | | > What I wonder about is if MS wants to keep people on windows, what methods they can use to do that Microsoft doesn't give a fuck about private customers any more; they don't have money. What does have money is enterprise/government sales, and MS has these customers tightly locked in. Compliance audits and tooling for insurers or legal requirements (SOX, GDPR, ...) are built against a full Microsoft stack of MS Server, Active Directory, Azure, Teams, Office 365 and Windows desktops. You might be able to get away with replacing AD and GPO with Samba servers, but even that is already a pain when the auditors come knocking. Everything else? There is no single FOSS-based "standard offering" (i.e. a combination of everything needed to run an on-prem enterprise site: an Office replacement, remote collaboration tooling), so every audit of such a setup must be custom-made and involves a lot of extra work. A second leg is industrial control machines, medical devices and the like. That's all stuff built by third-party vendors and integrators. They need to continue on Windows because switching to an alternative OS would require redoing everything from scratch on the software and certification side. These customers buy the LTSC IoT stuff. And that is why you see Microsoft pushing enshittification so hard on private customers... extract the last few cents you can from them. But the real money comes from the large customers. |
|
| |
| ▲ | porphyra 6 hours ago | parent | prev [-] | | Wine actually does run some ancient Windows games better than Windows 11 itself. | | |
| ▲ | duskwuff 6 hours ago | parent | next [-] | | It certainly runs 16-bit Windows games better than Windows 11, which can't run them at all. Not that there are a ton of those, but it's still pretty neat that they work. | | |
| ▲ | senfiaj 41 minutes ago | parent [-] | | 16-bit software won't run natively in 64-bit mode. It requires a software emulator, like DOSBox. Or am I missing something? |
| |
| ▲ | anthk 5 hours ago | parent | prev [-] | | Anything DirectDraw-related gets mapped onto OpenGL under Unix, giving you decent speeds. On Windows it will be a crawling slideshow, because from Windows 8 onward it uses a really slow software mode with no acceleration at all, worse than plain VESA. Yes, you can reuse the WineD3D DLLs on Windows and run these games fast, but not by default; they're a Win32 port of some Wine libraries. | | |
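For reference, the backend WineD3D uses is selectable through a Wine registry key; a minimal configuration sketch, assuming a reasonably recent Wine where the Direct3D "renderer" value is supported (exact accepted values vary by Wine version):

```shell
# Configuration sketch (not a default): choose WineD3D's rendering backend.
# "gl" is the OpenGL path described above; recent Wine versions also
# accept "vulkan" and "no3d".
wine reg add 'HKCU\Software\Wine\Direct3D' /v renderer /t REG_SZ /d gl
```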
| ▲ | rescbr 21 minutes ago | parent [-] | | Once I had to use a Mesa3D build for Windows and use the zink driver to render OpenGL to Vulkan, otherwise it would use Windows' software renderer. |
|
|
|
|
| ▲ | _flux 6 hours ago | parent | prev | next [-] |
What I'd like to see is some useful extra APIs in Wine that would let it perform even better in some situations, and that game developers would then embrace. Finally, some embrace, extend, and extinguish love right back at Microsoft!
|
| ▲ | HerbManic 5 hours ago | parent | prev | next [-] |
Ever since Proton came along, there has been a quiet agreement that the Win32 API is the best target for Linux support.
|
| ▲ | akdev1l 6 hours ago | parent | prev | next [-] |
People always say this to shit on glibc, meanwhile those guys bend over backwards to provide strong API compatibility. It rubs me the wrong way. What glibc does not provide is forward compatibility. An application built with glibc 2.12 will not necessarily work with any older version. Such an application could be rebuilt to work with an older glibc, as the API is stable. The ABI is not, which is why the application would need to be rebuilt. glibc does not provide ABI compatibility because from their perspective the software should be rebuilt for newer/older versions as needed. Maintaining a stable ABI mostly helps proprietary software where the source is not available for recompilation. Naturally the GNU guys building glibc don't care about that use case much. I guess you didn't mention glibc in your comment but I already typed this out
| |
| ▲ | kelnos 5 hours ago | parent | next [-] | | > What glibc does not provide is forward compatibility. An application built with glibc 2.12 will not necessarily work with any older version. Is this correct? I think you perhaps have it backward? If I compile something against the glibc on my system (Debian testing), it may fail to run on older Debian releases that have older glibc versions. But I don't see why an app built against glibc 2.12 wouldn't run on Debian testing. glibc actually does a good job of using symbol versioning, and IIRC they haven't removed any public functions, so I don't see why this wouldn't work. More at issue would be the availability of other dependencies. If that old binary compiled against glibc 2.12 was also linked with, say, OpenSSL 0.9.7, I'd have to go out and build a copy of that myself, as Debian no longer provides it, and OpenSSL 3.x is not ABI-compatible. > glibc does not provide ABI compatibility because from their perspective the software should be rebuilt for newer/older versions as needed. If true (I don't think it is), that is a hard showstopper for most companies that want to develop for Linux. And I wouldn't blame them. | | |
| ▲ | jdpage an hour ago | parent | next [-] | | I don't know what the official policy is, but glibc uses versioned symbols and certainly provides enough ABI backward-compatibility that the Python package ecosystem is able to define a "manylinux" target for prebuilt binaries (against an older version of glibc, natch) that continues to work even as glibc is updated. | |
| ▲ | akdev1l 3 hours ago | parent | prev [-] | | Sorry, I am not sure if 2.12 is a recent release or an older one; I made the number up. If the application is built against 2.12, it may link against symbols which are versioned 2.12 and may not work against 2.11 - the opposite (building against 2.11 and running on 2.12) will work. >If true (I don't think it is), that is a hard showstopper for most companies that want to develop for Linux. Not really a showstopper: vendors just do what vendors do and bundle all their dependencies in, similar to Windows when you use anything outside of the Win32 API. The only problem with this approach is that glibc cannot have multiple versions running at once. We have “fixed” this with process namespaces, and hence containers/Flatpak, where you can bundle everything including your own glibc. Naturally the downside is that each app bundles its own libraries. | |
| ▲ | em-bee 3 hours ago | parent [-] | | > The only problem with this approach is that glibc cannot have multiple versions running at once. that's not correct. libraries have versions for a reason. the only thing preventing the installation of multiple glibc versions is the package manager or the package versioning. this makes building against an older version of glibc non-trivial, because there isn't a ready-made package that you can just install. the workarounds take effort: https://stackoverflow.com/questions/2856438/how-can-i-link-t... the problem for companies developing on linux is that it is not trivial | | |
| ▲ | akdev1l 2 hours ago | parent | next [-] | | glibc must match the dynamic loader, so you would need a separate loader, and binaries usually have a hardcoded path to the system loader (you need to binary-patch them - https://stackoverflow.com/questions/847179/multiple-glibc-li...). So in practice you can only have 1 loader and 1 glibc (unless you use a chroot or containers, and at that point just build your stuff in Ubuntu 12.04 or whatever environment). | |
| ▲ | seba_dos1 3 hours ago | parent | prev [-] | | You compile in a container/chroot with the userspace you target. Done. In the context of games, that will likely be Steam Runtime. | | |
| ▲ | em-bee 2 hours ago | parent [-] | | it's not that simple. you want to be able to use a modern toolchain (compilers that support the latest standards) but build a binary that runs on older systems. the only way to achieve that is to get the older libraries installed on a newer system, or you could try backporting the new toolchain to the older system. but that's a lot harder. | | |
| ▲ | seba_dos1 27 minutes ago | parent [-] | | It may be hard-ish, sometimes. Sometimes it's a breeze. And sometimes you can just use host's toolchain with container's sysroot and proceed as if you were cross-compiling. Most of the time it's not a big deal. |
|
|
|
|
| |
| ▲ | Levitating an hour ago | parent | prev | next [-] | | I personally believe we should just compile games statically. Problem solved, right? | |
| ▲ | krastanov 5 hours ago | parent | prev | next [-] | | I am sorry, I did not mean to imply anyone else is doing something poorly. I believe glibc's (and the rest of the ecosystem of libraries that are probably more limiting) policies and principled stance are quite correct and overall "good for humanity". But as you mentioned, they are inconvenient for a gamer that just wants to run an executable from 10 years ago (for which the source was lost when the game studio was bought). | | |
| ▲ | em-bee 3 hours ago | parent [-] | | that 10-year-old binary should run, unless it links against a library that no longer exists. for example, here is a 20-year-old binary of the game mirrormagic that runs just fine on my modern fedora machine: ~/Downloads/mirrormagic-2.0.2> ldd mirrormagic
linux-gate.so.1 (0xf7f38000)
libX11.so.6 => /lib/libX11.so.6 (0xf7db5000)
libm.so.6 => /lib/libm.so.6 (0xf7cd0000)
libc.so.6 => /lib/libc.so.6 (0xf7ad5000)
libxcb.so.1 => /lib/libxcb.so.1 (0xf7aa9000)
/lib/ld-linux.so.2 (0xf7f3b000)
libXau.so.6 => /lib/libXau.so.6 (0xf7aa4000)
~/Downloads/mirrormagic-2.0.2> ls -la mirrormagic
-rwxr-xr-x. 1 em-bee em-bee 203633 Jun 7 2003 mirrormagic
ok, there are some issues: the sound is not working, and the resolution does not scale. but there are no issues with linked libraries. |
| |
| ▲ | charcircuit 6 hours ago | parent | prev [-] | | No other operating system works like this. Supporting older versions of an OS or runtime with a current compiler toolchain is a standard expectation of developers. | |
| ▲ | akdev1l 5 hours ago | parent | next [-] | | Plenty of operating systems work like this, just not highly commercial ones, because proprietary software is the norm on those. From a bit of research, it looks like FreeBSD, for example, only guarantees a stable ABI within a major version, and I imagine if you build something for FreeBSD 14 it won't work on 13. A stable ABI literally only benefits software where the user doesn't have the source; any operating system which assumes you have the source will not prioritize it. (Edit: actually, thinking about it harder, macOS/iOS is much worse on binary compatibility; for example, Intel binaries will eventually stop working entirely due to the Apple Silicon transition - Apple just hits developers with a stick to rebuild their apps) | |
| ▲ | kelnos 5 hours ago | parent | next [-] | | Yes, and this is a great reason why FreeBSD isn't a popular gaming platform, or for proprietary software in general. I'm not saying this is a bad thing, but... that's why. > Stable ABI literally only benefits software where the user doesn’t have the source. It also benefits people who don't want to have to do busywork every time the OS updates. | | |
| ▲ | toast0 4 hours ago | parent [-] | | FreeBSD isn't too bad, you can build/install compat packages back to FreeBSD 4.x, and I'd expect things to largely work. At previous jobs we would mostly build our software for the oldest FreeBSD version we ran and distribute it to hosts running newer FreeBSD releases and outside some exceptional cases, it would work. But you'd have to either only use base libraries, or be careful about distribution of the libraries you depend on. You can't really use anything from ports, unless you do the same build on oldest and distribute plan. At Yahoo, we'd build on 4.3-4.8, and run on 4.x - 8.x. At WhatsApp, I think I remember mostly building on 8.x and 9.x, for 8.x - 11.x. The only thing that I remember causing major problems was extending the bitmask for CPU pinning; there were a couple updates where old software + old kernel CPU pinning would work, and old software + new kernel CPU pinning failed; eventually upstream made that better as long as you don't run old software on a system with more cores than fit in the bitmask. I'm sure there were a few other issues, but I don't remember them ... |
| |
| ▲ | charcircuit 4 hours ago | parent | prev [-] | | You can still run x86 binaries on new macbooks. They don't stop working entirely. Using wine I can even run x86 windows binaries. | | |
| ▲ | akdev1l 3 hours ago | parent [-] | | They announced Rosetta 2 will be deprecated and eventually removed (macOS 28?). By that point they will already have pushed developers hard enough to port to aarch64 (arguably this could be a special case, since it is due to an architectural transition). |
|
| |
| ▲ | thescriptkiddie 5 hours ago | parent | prev [-] | | what about mac os? | | |
| ▲ | kelnos 5 hours ago | parent | next [-] | | macOS doesn't require developers to rebuild apps with each major OS release, as long as they link with system libraries and don't try to (for example) directly make syscalls. Apple may require rebuilds at some point for their Mac Store (or whatever they call it), but it's not required from a technical perspective. The one exception here is CPU architecture changes, and even then, Apple has provided seamless emulation/translation layers that they keep around for quite a few years before dropping support. | |
| ▲ | charcircuit 4 hours ago | parent | prev [-] | | The latest Xcode supports targeting back to macOS 11. This covers >99% of macs which is acceptable for most developers. https://developer.apple.com/support/xcode/ |
|
|
|
|
| ▲ | zerocrates 6 hours ago | parent | prev [-] |
| Building against the Steam runtime containers seems like the other route, which also gets you more stability. |