notepad0x90 5 hours ago

The NT paths are how the object manager refers to things. For example the registry hive HKEY_LOCAL_MACHINE is an alias for \Registry\Machine

https://learn.microsoft.com/en-us/windows-hardware/drivers/k...

In this way, NT is similar to Unix: many things are just names in one global, VFS-like layout (the object manager namespace).
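
To make the Unix side of the analogy concrete, a minimal sketch (the paths below are standard Linux locations): devices, kernel state and on-disk files all resolve through the same tree, much like NT's object manager namespace.

```shell
# Devices, kernel state and ordinary on-disk files all hang off one tree:
ls -ld /dev/null      # a character device node
ls -ld /proc/uptime   # kernel state exposed as a synthetic file
ls -ld /etc           # a regular on-disk directory
```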

Paths that start with drive letters are called "DOS paths" because they exist only for DOS compatibility. But unfortunately, even in kernel mode, different subsystems might still refer to a DOS path.

PowerShell also exposes various things as "drives"; pretty sure you could create your own custom drive for your own app as well. For example, by default there is the 'HKLM:\' drive:

https://learn.microsoft.com/en-us/powershell/scripting/sampl...

Get-PSDrive/New-PSDrive

You can't access certificates in linux/bash as a file path for example, but you can in powershell/windows.

I highly recommend getting the NtObjectManager PowerShell module and exploring around:

https://github.com/googleprojectzero/sandbox-attacksurface-a...

ls NtObject:\

eloisant 3 hours ago | parent | next [-]

It's baffling that after 30 years, Windows is still stuck with a weird directory naming structure inherited from the 80's that no longer makes sense now that nobody has floppy drives.

notepad0x90 2 hours ago | parent | next [-]

I like being able to run games from the early 2000s. Being able to write software that will still run long after you're gone used to be a thing. But here we are, with Linux abandoning things like 'a.out'. Microsoft doesn't have the luxury of presuming that its users can recompile software, fork it, patch it, etc. When your software doesn't work on the latest Windows, most people blame Microsoft, not the software author.

Gud 30 minutes ago | parent [-]

Ok, I prefer to use software which is future-compatible, like ZFS, which is 128-bit.

“The file system itself is 128 bit, allowing for 256 quadrillion zettabytes of storage. All metadata is allocated dynamically, so no need exists to preallocate inodes or otherwise limit the scalability of the file system when it is first created. All the algorithms have been written with scalability in mind. Directories can have up to 2^48 (256 trillion) entries, and no limit exists on the number of file systems or the number of files that can be contained within a file system.”

https://docs.oracle.com/cd/E19253-01/819-5461/6n7ht6qth/inde...

Don’t want to hit the quadrillion zettabyte limit..

BobbyTables2 an hour ago | parent | prev | next [-]

Yeah, try explaining “drive C:” to a kid these days, and why it isn’t A: or B: …

Of course software developers are still stuck with 80 column conventions even though we have 16x9 4K displays now… Didn’t that come from punchcards ???

strogonoff 15 minutes ago | parent | next [-]

Come for punchcards, stay for legibility.

80 characters per line is an odd convention in the sense that it originated from a technical limitation, but is in fact a rule of thumb perfectly familiar to any typesetting professional from long before personal computing became widespread.

Remember newspapers? Laying the text out in columns[0] is not a random quirk or result of yet another technology limitation. It is the same reason a good blog layout sets a conservative maximum width for when it is read on a landscape oriented screen.

The reason is that when each line is shorter, the entire thing becomes easier to read. Indeed, even accounting for the legibility hit caused by hyphenation.

Up to a point, of course. That point may differ depending on the medium and the nature of the material: newspapers, given they deal with solid plain text, limit a line to 40–60 characters; for programming it may be wider due to often longer “words” and other factors and conventions like syntax highlighting or indentation, and when dealing with particularly long identifiers (I’m looking at you, CNLabelContactRelationYoungerCousinMothersSiblingsDaughterOrFathersSistersDaughter) wider still.

[0] Relatedly, codebases roughly following the 80 character line length limitation unlock more interesting columnar layouts in editors and multiplexers.

Sharlin 37 minutes ago | parent | prev | next [-]

It did, but 80 columns also pretty closely matches the 50ish em/70ish character paragraph width that’s usually recommended for readability. I myself wouldn’t go much higher than 100 columns with code.

ahoef an hour ago | parent | prev [-]

While 80 characters is obviously quite short, my experience is that longer line lengths result in much less readable code. You have to try to be concise on shorter lines, with better phrasing.

leptons 2 hours ago | parent | prev [-]

Windows can still run software from the 80's; backwards compatibility has always been a selling point for Windows, so I'd call that a win.

anonymous_sorry 22 minutes ago | parent | next [-]

It's very impressive indeed.

Linux's goal is only source-code compatibility - which makes complete sense given its libre/open source origins. If the culture is one where you expect to have access to the source code for the software you depend on, why should the OS developers make the compromises needed to ensure you can still run a binary compiled decades ago?

chasing0entropy 2 hours ago | parent | prev | next [-]

My original VB6 apps (mostly) still run on win11

mananaysiempre 2 hours ago | parent [-]

Hmm. IME VB6 is actually a particular pain point, because MDAC (a hodgepodge of Microsoft database-access thingies) does not install even on Windows 10, and a line-of-business VB6 app is very likely to need that. And of course you can’t run apps from the 1980s on Windows 11 natively, because it can no longer run 16-bit apps, whether DOS or Windows ones. (All 32-bit Windows apps are definitionally not from the 1980s, seeing as Tom Miller’s sailboat trip that gave us Win32 only happened in 1990. And it’s not the absence of V86 mode that’s the problem—Windows NT for Alpha could run DOS apps, using a fatter NTVDM with an included emulator. It’s purely Microsoft’s lack of desire to continue supporting that use case.)

drxzcl 2 hours ago | parent [-]

Wait, what's the story of the sailboat trip? My searches are coming up empty, but it sounds like a great story.

AndrewDavis 2 hours ago | parent | prev [-]

Didn't Microsoft drop 16 bit application support in Windows 10? I remember being saddened by my exe of Jezzball I've carried from machine to machine no longer working.

mkup an hour ago | parent | next [-]

Microsoft dropped 16-bit application support via the built-in emulator (NTVDM) from 64-bit builds of Windows, so whether it went away with Windows 10 or an earlier version depends on when the user switched to a 64-bit build (in my case, it was Windows Vista). However, you can still run 16-bit apps on 64-bit builds of Windows via third-party emulators such as DOSBox and NTVDMx64.

notepad0x90 28 minutes ago | parent | prev [-]

and Linux stopped supporting 32-bit x86 I think around the same time? (just i386?)

p_ing 3 hours ago | parent | prev | next [-]

PnP PowerShell also includes a PSDrive provider [0] so you can browse SharePoint Online as a drive. These aren't limited to local sources.

[0] https://pnp.github.io/powershell/cmdlets/Connect-PnPOnline.h...

anthk an hour ago | parent | prev | next [-]

ReactOS has a graphical NT OBJ browser (maybe as a CLSID) where you can just open an Explorer window and look up the whole registry hierarchy and a lot more.

It works under Windows too.

Proof:

https://winclassic.net/thread/1852/reactos-registry-ntobject...

delusional 4 hours ago | parent | prev [-]

> You can't access certificates in linux/bash as a file path for example, but you can in powershell/windows.

I don't understand what you mean by this. I can access them "as a file" because they are in fact just files:

    $ ls /etc/ca-certificates/extracted/cadir | tail -n 5
    UCA_Global_G2_Root.pem
    USERTrust_ECC_Certification_Authority.pem
    USERTrust_RSA_Certification_Authority.pem
    vTrus_ECC_Root_CA.pem
    vTrus_Root_CA.pem
notepad0x90 3 hours ago | parent | next [-]

You can access files that contain certificate information (on any OS), but you can't access individual certificates as their own object. In your output, you're listing files that may or may not contain valid certificate information.

The difference is similar to 'ls /usr/bin/ls' vs 'ls /proc/12345/...': the first is a literal file listing, the second is a way to access/manipulate the ls process (supposedly pid 12345). In Windows, certificates are not just files but parsed/processed/validated usage-specific objects. The same applies on Linux, but it is up to openssl, gnutls, etc. to make sense of that information. If openssl/gnutls had a VFS mount for their view of the certificates on the system (and GPG!!) that would be similar to cert:\ in powershell.
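
That distinction can be sketched on any Linux box (using the shell's own pid rather than a made-up one):

```shell
# A literal file listing: the ls binary is just bytes on disk.
ls -l "$(command -v ls)"

# An object view: nothing under /proc/<pid> exists on disk; the kernel
# synthesizes these entries per process, the way cert:\ synthesizes
# certificate objects in PowerShell. $$ is the shell's own pid.
ls /proc/$$/
cat /proc/$$/comm       # the command name of this shell
readlink /proc/$$/exe   # the binary backing this process
```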

jeroenhd 2 hours ago | parent | prev | next [-]

Linux lacks a lot of APIs other operating systems have and certificate management is one of them.

A Linux equivalent of listing certificates through the Windows virtual file system would be something like listing /proc/self/tls/certificates (which doesn't actually exist, of course, because Linux has decided that stuff like that is the user's problem to set up and not an OS API).

kadoban 3 hours ago | parent | prev [-]

I _suspect_ they mean that certs imported into MMC on Windows can be accessed at magic paths, but... yeah, Linux can do that because it skips the step of making a magical holding area for certs.

notepad0x90 3 hours ago | parent [-]

there are magical holding areas in Linux as well, but that detail is up to TLS libraries like openssl at run-time, and hidden away from their clients. There are a myriad of ways to manage just CA certs: gnutls may not use openssl's paths, and each distro has its own idea of where the certs go. The ideal unix-y way (that windows/powershell gets) would be to mount a virtual volume for certificates where users and client apps alike can view/manipulate certificate information. If you've tried to get internal certs working across different Linux distros/deployments you might be familiar with the headache (a minor one, I'll admit).
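
To illustrate the fragmentation, a small probe of the conventional bundle paths (these three are the usual Debian-, RedHat- and SUSE-family locations; a given system may have none or several of them):

```shell
# Each distro family puts the trusted CA bundle somewhere different:
for f in /etc/ssl/certs/ca-certificates.crt \
         /etc/pki/tls/certs/ca-bundle.crt \
         /etc/ssl/ca-bundle.pem; do
    if [ -e "$f" ]; then
        echo "found: $f"
    fi
done
```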

Not for certs specifically (that I know of), but Plan 9 and its derivatives go very hard on making everything VFS-abstracted. Of course /proc, /sys and others are awesome, but there are still things that need their own FS view yet are relegated to just 'files', like ~/.cache, ~/.config and all the xdg standards. I get it, it's a standardized path and all, but what's being abstracted here is not "data in a file" but "cache" and "configuration" (more specific); it should still be in a VFS path, but what's exposed shouldn't be a file - it should be an abstraction of "configuration settings" or "cache entries" backed by whatever thing you want (e.g. redis, sqlite, s3, etc.). The Windows registry (configuration manager is the real name, btw) does a good job of abstracting configuration, but obviously you can't pick and choose the back-end implementation like you potentially could on Linux.

jeroenhd 2 hours ago | parent [-]

> The windows registry (configuration manager is the real name btw) does a good job of abstracting configurations, but obviously you can't pick and choose the back-end implementation like you potentially could in Linux.

In theory, this is what dbus is doing, but through APIs rather than arbitrary path-key-value triplets. You can run your secret manager of choice and as long as it responds to the DBUS API calls correctly, the calling application doesn't know who's managing the secrets for you. Same goes for sound, display config, and the Bluetooth API, although some are "branded" so they're not quite interchangeable as they might change on a whim.

Gnome's dconf system looks a lot like the Windows registry and thanks to the capability to add documentation directly to keys, it's also a lot easier to actually use if you're trying to configure a system.