| ▲ | mwwaters 5 hours ago |
| There is some world in which somebody scammed through sideloading loses their life savings, and every country is politically fine with the customer, not the bank, eating the loss. But that is not really the world regular people want. If the bank app wrongly shows they’re paying a legitimate payee, such as the bank, themselves, or the tax authority, people politically want the bank to reimburse. Then the question becomes not whether the user trusts the phone’s software, but whether the bank trusts the software on the user’s phone. If the bank cannot trust the environment that can approve transfers, the bank would be within its rights to stop offering such transfers. |
|
| ▲ | Hizonner 4 hours ago | parent | next [-] |
| If the actual bank app does that, or is even easy to fool into doing that, then the bank should be responsible. That's the world "regular people" want, and it's the world as it should be. If random malware the user chose to install does it, that is not the bank's fault; the bank is no more involved than anybody else. And no, I don't think "regular people" want to make that the bank's fault. |
| |
| ▲ | mwwaters 2 hours ago | parent [-] |
| The legal infrastructure for banking and securities ownership has long had defaults for liability assignment. For securities, if I own stock outright, the company has to indemnify me if it executes a transfer on behalf of somebody else or if I lacked legal capacity. So transfer agents require Medallion Signature Guarantees (MSGs) from a bank or broker, and obtaining an MSG requires a lengthy banking relationship and probably showing up in person. For broker-to-broker transfers, there is ACATS, under which the receiving broker is liable in a strict, no-fault way. As far as I know, these liabilities are never waived.
|
| Basically for the sizable transfers, there is relatively little faith in the user’s computers (including phones). To the extent there is faith, it has total liability on some capitalized party for fraud. These defaults are probably unknown to most people, even those with large holdings of securities; the system is simply expected to work, because it has been set up this way.
|
| Clearly a large number of programmers are bent on going in the complete opposite direction from MSGs, where everything is private keys and caveat emptor no matter the technical sophistication of the customer. I, well, disagree with that sentiment. A regime in which it is possible for no capitalized entity to be liable for wrongful transfers (defined as transfers where the customer believes they are paying a different human-readable payee than the one actually receiving the funds) should not be the default. | | |
| ▲ | TeMPOraL 30 minutes ago | parent [-] |
| > Basically for the sizable transfers, there is relatively little faith in the user’s computers (including phones). To the extent there is faith, it has total liability on some capitalized party for fraud.
|
| But that is expensive, so my impression is that for non-sizable transfers, and beyond banking, for basically anything dealing with lots of regular people doing regular-people-sized operations, the default in the industry is to outsource as much liability as possible onto end users. So instead of treating users' computers as untrusted and making the system secure on the back end, the trend is to treat them as trusted, and then deal with the increased risk by a) legal means that make end users liable in practice (keeping users uninformed about their rights helps), and b) technical means that make end-user devices less untrusted. b) is how we end up with developer registries and remote attestation.
|
| And the sad thing is, it scales well: if device and OS vendors cooperate (as they do today), they can enable "endpoint security" for everyone who seeks to externalize liability. |
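To make the "b)" path concrete, here is a minimal sketch of how a bank's back end might gate transfer limits on a remote-attestation verdict. The verdict fields (`device_integrity`, `app_recognized`) and the function are hypothetical, loosely modeled on schemes like Play Integrity, not any real API; the point is only that the trust decision moves to whether the OS vendor vouches for the device and the app:

```python
# Hypothetical illustration: a back end that trusts the device/OS vendor's
# attestation verdict instead of the user. Field names are invented for
# this sketch, not taken from a real attestation API.

def transfer_allowed(verdict: dict, amount: int, limit_unattested: int = 100) -> bool:
    """Permit full transfers only from devices the vendor vouches for."""
    device_ok = verdict.get("device_integrity") == "MEETS_STRONG"
    # "Recognized" here stands in for: binary signed by a registered developer.
    app_ok = verdict.get("app_recognized", False)
    if device_ok and app_ok:
        return True
    # Everything else (sideloaded app, unlocked bootloader, emulator) is
    # treated as untrusted and capped at a small transfer limit.
    return amount <= limit_unattested

# A sideloaded app on an unlocked device fails attestation, so only
# small transfers go through; an attested device has no such cap.
attested = {"device_integrity": "MEETS_STRONG", "app_recognized": True}
sideloaded = {"device_integrity": "NONE", "app_recognized": False}
```

Note the asymmetry this creates: the cap is enforced not by anything the user did, but by whether the vendor's registry recognizes the software, which is exactly how liability gets pushed onto anyone running an unblessed configuration.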
|
|
|
| ▲ | jrm4 3 hours ago | parent | prev | next [-] |
| Keeeep going. Are banks POWERFUL? Do they have lots of money and/or connections to those who do? Do they have a vested interest in getting transactions right? Absolutely! Now, with all that money and power, they -- whoever THEY are -- need to come up with smart ways to verify transactions that don't involve me giving them all the keys to all my devices. We have protections like this elsewhere, even when they have some "ownership": the bank kinda owns my house, but they still can't come in whenever they want. |
|
| ▲ | jasonjayr 3 hours ago | parent | prev | next [-] |
| Why do banks go through the whole know-your-customer (KYC) process if not to identify the beneficial owner of every account? If a bank receives a transfer via fraud, then the money either gets clawed back, the bank has to pay it back, and/or the account holder gets identified to law enforcement. If the last bank in the chain doesn't want to play by those rules, then other banks shouldn't transfer into it, or that bank itself should be held liable. This is more or less how people expect things to work today... |
| |
| ▲ | mwwaters 3 hours ago | parent [-] |
| In the case of a knowing, or willfully blind, money mule in the chain or at the end of it, the intermediary or final bank may not be at fault: the bank could have followed KYC procedures, in that somebody with that name really existed and controlled the account. The money mule themselves almost certainly lacks the assets to pay the damages. The mule can also convert the funds (into a different fiat currency or into crypto), putting the ultimate link completely out of reach of the originating country.
|
| If intermediary banks are deputized and become liable in a no-fault sense, then legitimate outbound transfers become very difficult. How does a bank prove a negative about where funds came from? De-banking has already been a problem under a process-based AML regime. | | |
|
|
| ▲ | jibal 4 hours ago | parent | prev [-] |
| I'm a "regular" person, as are all the signatories, and you don't speak for us. |