nine_k 12 hours ago
This is a valid take. I do not agree with it in general: if we look beyond consumer devices, FOSS software is everywhere and powers almost everything consequential. But mobile phones specifically have turned from phones into trusted terminals, which institutions like banks and governments use to let users control large amounts of money and responsibility. And the first rule of a secure device is to be limited. In particular, the device should limit the ability of its owner to fake its identity, or to do unauthorized things with networking, the camera, etc.

This junction of a general portable computer and a secure terminal is very unfortunate, because it exerts very real pressure on the general-computing part. Malicious users exist, hence more and more locking, attestation, etc., so that the other side can trust the mobile phone as a secure terminal.

It would be great to have a mobile computer where you can run whatever you please, because it's nobody's business. And additionally there'd be a security attachment that runs software which is limited, vetted, signed, completely locked up, and tamper-proof at the hardware level (also open source), which the other sides of the communication would trust. Think of a YubiKey or a TPM, but larger and more capable. The cellular modem and the SIM card are other examples, even though they may not be as severely hardened. They are still quite severely limited, and this is good.

If I were to offer an open-source phone (and, frankly, any mobile phone), I would consider following this principle. Much like the cellular modem, it would carry a locked-up and certified security block, which would not be user-alterable. It would also be quite limited, unable to snoop into the rest of the phone. The rest of the phone would be a general-purpose computer with few limitations.

Anything that wanted to run on it securely would connect to the unforgeable interface of the security module, and do the encryption / decryption / signing / secure storage that other parties, local and remote, would be able to verify and thus trust. One can dream.
ozgrakkurt 5 hours ago | parent | next
If they want to manage their hedge fund from their phone, then maybe they should consider using a special device for that. It doesn't really matter for the rest of the people, as the status quo shows.
necovek 10 hours ago | parent | prev
Locked devices are created supposedly to ensure the security of the device user, not because malicious users exist.

The SIM card is a good example. Technically, what it provides is trivially solvable with PKI infrastructure (just as a malicious user can't trivially and successfully misrepresent themselves as google.com): the operator runs their own CA, and by signing your certificate they attest that you are the owner of a particular phone number. No malicious user can mess with that (other than by attacking the CA).

What they can do is attack end-user devices through different, cheaper means (social engineering, malicious apps, exploits...) and extract individuals' private keys, allowing them to misrepresent themselves as those individuals. A SIM card protects against this by not making the private key accessible in the first place.

This is exactly what locked devices do: they protect customers from not knowing how to properly (including securely) use their devices. This is what we need to focus on as technologists: if we know how to securely use our devices, how do we opt out of others "protecting" us, and take full responsibility and liability for security lapses?