ajross, 6 hours ago:
Yes, where practical. Though recognize that by their very nature web apps aren't part of the trust network. The browser and security stack can make a key for them to use, but it's not possible to be sure that the user of that key isn't subject to attack at the backend (or even the front end; the best you can do there is XSS protection, which is hardly at the standard of "cryptographically secure"). Likewise, you as the app vendor can know the key was generated and that it works, but you can't[1] know that it's actually locked to a device or that it's non-exportable. You could be running in a virtualized environment that logged everything.

Basically it's not really that useful. Which is sort of true for security hardware in general: it's great for the stuff the device vendors have wired up (which amounts to "secured boot", "identifying specific known devices", and "validating human user biometrics on a secured device"), but not really extensible in the way you'd want it to be.

[1] Within the bounds of this particular API, anyway. There may be some form of vendor signing you can use to e.g. verify that it was done on iOS or ChromeOS or some other fully-secured platform. I honestly don't know.
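To make the non-exportability point concrete, here is a minimal browser-side sketch (TypeScript; the function names are illustrative, not from any real app). WebCrypto lets a page ask for a key with extractable set to false, but that flag is a promise the local runtime makes to the page; a server that later verifies signatures from the key learns nothing about whether the key is hardware-bound or whether the whole environment is virtualized.

    // Sketch: a "non-extractable" ECDSA signing key via WebCrypto.
    // extractable=false is enforced by this browser runtime only;
    // nothing in the resulting signatures proves it to a remote party.
    async function makeSigningKey(): Promise<CryptoKeyPair> {
      return crypto.subtle.generateKey(
        { name: "ECDSA", namedCurve: "P-256" },
        false, // extractable: the browser won't export the private key
        ["sign", "verify"],
      );
    }

    async function signChallenge(
      keys: CryptoKeyPair,
      challenge: ArrayBuffer,
    ): Promise<ArrayBuffer> {
      // The server can verify this against the public key it saw at
      // enrollment, proving only that *some* holder of the key signed --
      // not that the key lives in secure hardware on a specific device.
      return crypto.subtle.sign(
        { name: "ECDSA", hash: "SHA-256" },
        keys.privateKey,
        challenge,
      );
    }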
machinationu, 5 hours ago:
It's possible with CPU secure attestation, but it's not something you'll encounter on regular personal computers. The capability is there, but it would be massively inconvenient, since it requires a lot of lockdown.

It might be the next generation of anti-cheat, though.
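One consumer-visible slice of that attestation machinery already ships in browsers via WebAuthn (and is roughly the "vendor signing" the footnote above speculates about): requesting attestation "direct" asks the authenticator for a statement signed under the vendor's certificate chain, which a server can check against known roots. A sketch, with placeholder relying-party and user values, to be run inside an async context:

    // Sketch: requesting a vendor-signed attestation via WebAuthn.
    // The rp/user values below are placeholders for illustration.
    const cred = (await navigator.credentials.create({
      publicKey: {
        challenge: crypto.getRandomValues(new Uint8Array(32)),
        rp: { name: "Example RP", id: "example.com" },       // placeholder
        user: {
          id: crypto.getRandomValues(new Uint8Array(16)),    // placeholder
          name: "user@example.com",
          displayName: "Example User",
        },
        pubKeyCredParams: [{ type: "public-key", alg: -7 }], // -7 = ES256
        attestation: "direct", // ask for a vendor-signed attestation statement
        authenticatorSelection: { authenticatorAttachment: "platform" },
      },
    })) as PublicKeyCredential;

    // CBOR blob containing the attestation statement; the server verifies
    // its certificate chain against vendor roots (e.g. Apple, Google).
    const attestationObject =
      (cred.response as AuthenticatorAttestationResponse).attestationObject;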