electric_muse 2 days ago

This whole “real phone number is your access code to every service” trend is really frustrating.

I had the same experience recently with:
- Ticketmaster
- Docusign
- Vercel

Probably a handful more I forgot.

I believe the main reason is that it prevents fraud.

But I see a deeper motive: phone numbers are high-friction to change, so our “real” numbers become hard-to-change identity codes that can easily be used to pull tons of info about you.

You give them that number and they can immediately look up your name, addresses, age, and tons of other mined info connected to you. Probably credit score, household income, etc.

Phone numbers carry tons of “metadata” you provide without really knowing it, much like the Exif data in a photo may reveal a lot about your location and device.

derekdahmer 2 days ago | parent | next [-]

As someone who implemented phone verification at a company I worked for, it’s 100% for preventing spam signups intended to abuse free tiers. API companies can get huge volumes of fake signups from “multiplexers” who get around free-tier limits by spreading their requests across multiple accounts.
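
A minimal sketch of that dynamic, assuming a per-account daily quota (the names and limits here are my illustration, not the poster's actual system):

    from collections import defaultdict

    FREE_TIER_DAILY_REQUESTS = 1000  # illustrative limit
    usage = defaultdict(int)

    def allow_request(account_id: str, verified_phone: str | None) -> bool:
        # Keying the quota on account_id alone lets a "multiplexer" reset it
        # by creating another free account; keying it on a verified phone
        # number means each extra quota costs a working, non-blocklisted number.
        quota_key = verified_phone or account_id
        if usage[quota_key] >= FREE_TIER_DAILY_REQUESTS:
            return False
        usage[quota_key] += 1
        return True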

jiveturkey 2 days ago | parent | next [-]

I would caution any reader against generalizing your statement. Just because you used it at your company to limit abuse (and yes, that is a lazy approach and 100% what's going on with Anthropic and most API companies) doesn't mean that every company uses phone number gating for this purpose.

The (probably) most famous example being https://www.eff.org/deeplinks/2019/07/fixed-ftc-orders-faceb...

And it's not enough to say "well, we don't use it for that". One, you can't prove it. Two, and far more important: by taking and saving the phone number (necessarily, since otherwise there's no account-gating feature, just fake friction), you expose the user to the risk of another dot being connected in an information leak. I would never give my phone number to some rinky-dink company.

Now that said, I don't use lazy pejoratively. Products must launch.

anonym29 2 days ago | parent | prev | next [-]

Because SMS verification is so cheap to defeat (under a dollar per one-time validation, under $10/mo for ongoing validation), this approach really only makes sense for ultra-low-value services, where e.g. $0.50 per account costs more than the service itself is worth.

Because of this low-value dynamic, there are many techniques that can add "cost" to abusive users while being far less invasive of user privacy: rate limiting, behavioral analysis, proof-of-work systems, IP restrictions, etc.
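
As a rough illustration, a token-bucket rate limiter keyed on client IP is one of these cheap controls; this is a minimal sketch with assumed parameters, not any particular company's implementation:

    import time
    from collections import defaultdict

    class TokenBucket:
        def __init__(self, capacity=5, refill_per_sec=0.1):
            self.capacity = capacity              # max signups allowed in a burst
            self.refill_per_sec = refill_per_sec  # tokens restored per second
            self.buckets = defaultdict(lambda: (capacity, time.monotonic()))

        def allow(self, ip: str) -> bool:
            tokens, last = self.buckets[ip]
            now = time.monotonic()
            tokens = min(self.capacity, tokens + (now - last) * self.refill_per_sec)
            if tokens < 1:
                self.buckets[ip] = (tokens, now)
                return False                      # over the limit: reject or challenge
            self.buckets[ip] = (tokens - 1, now)
            return True

    limiter = TokenBucket()
    if not limiter.allow("203.0.113.7"):
        print("too many signups from this IP")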

Using privacy-invasive methods to solve problems that could be easily addressed through simple privacy-respecting technical controls suggests unstated ulterior motives around data collection.

If your service is worth less than $0.50 per account, why are you collecting such invasive data for something so trivial?

If your service is worth more than $0.50 per account, SMS verification won't stop motivated abusers, so you're using the wrong tool.

If Reddit, Wikipedia, and early Twitter could handle abuse without phone numbers, why can't you?

derekdahmer 2 days ago | parent [-]

Firstly, I can tell you phone number verification made a very meaningful impact. The cost of abuse can be quite high for services with high marginal costs like AI.

Second, all those alternatives you described are also not great for user privacy either. One way or another you have to try to associate requests with an individual entity. Each has its own limitations and downsides, so typically multiple methods are used for different scenarios, in the hope that all together it's enough of a deterrent.

Having to do abuse prevention is not great for UX and hurts legitimate conversion. I promise you, most companies only do it when abuse has become a real problem, and sometimes well after.

anonym29 2 days ago | parent [-]

>Firstly, I can tell you phone number verification made a very meaningful impact. The cost of abuse can be quite high for services with high marginal costs like AI.

Nobody has made the argument that it's not a deterrent at all. The core argument is that it's privacy-infringing when it doesn't need to be, and that the cost it imposes on attackers is extremely low. If your business is offering a service at a price below your business's own costs, the business itself is choosing to inflict cost asymmetry upon itself.

>Second, all those alternatives you described are also not great for user privacy either.

This is plainly and obviously false at face value. How would blocklisting datacenter IPs, or doing IP-based rate limiting, or a PoW challenge like Anubis be "also not great" for user privacy, particularly when compared to divulging a phone number? Phone numbers are linked to far more commercially available PII than an IP address by itself is, and PoW challenges don't even require you to log IP addresses. Behavioral analysis like blocking more than N sign-ups per minute from IP address X, blocking headless UAs like curl, or even blocking registrations using email addresses from known temp-mail providers is nowhere remotely close to being as privacy-infringing as requiring phone numbers.
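
A back-of-the-envelope sketch of the kind of checks being described, using only per-request signals and no phone number (the UA markers, domains, and threshold are illustrative assumptions):

    HEADLESS_UA_MARKERS = ("curl/", "python-requests", "wget/")
    TEMP_MAIL_DOMAINS = {"mailinator.com", "guerrillamail.com"}  # example entries

    def signup_allowed(user_agent: str, email: str, signups_from_ip_last_minute: int) -> bool:
        ua = user_agent.lower()
        if any(marker in ua for marker in HEADLESS_UA_MARKERS):
            return False                      # headless client like curl
        domain = email.rsplit("@", 1)[-1].lower()
        if domain in TEMP_MAIL_DOMAINS:
            return False                      # known disposable-mail provider
        if signups_from_ip_last_minute > 5:   # "more than N sign-ups per minute"
            return False
        return True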

The privacy difference between your stated practice and my proposed alternatives isn't a difference of degree; it's a fundamental difference of kind.

Being generous, this is lazy, corner-cutting engineering: rather than implementing a better control, it piggybacks off an existing channel that only good-faith users won't forge (the phone number), imposing an unknown amount of privacy risk on end users at the possible expense of good-faith users' privacy.

Of course, there's no reason to be generous to for-profit corporations - the much more plausible explanation is that your business is data mining your own customers via this PII-linked registration requirement through a coercive ToS that refuses service unless customers provide this information, which is both entirely unnecessary for legitimate users and entirely insufficient to block even a slightly motivated abusive user.

...not that you'd ever admit to that practice if you were aware of it happening, or would even necessarily be aware of it happening if you were not a director or officer of the business.

AlexandrB 2 days ago | parent | prev [-]

This makes sense for free tiers of products, but if you provide CC info for a paid tier, you shouldn't also have to provide a phone number. One or the other.

moduspol 2 days ago | parent | next [-]

I think people can fairly easily use stolen / one-time-use / prepaid / limited-purchase-size credit cards, too. And you might not find out until after they've racked up non-trivial costs.

xur17 2 days ago | parent [-]

Then accept stablecoins.

whatevaa 10 hours ago | parent [-]

Then you're back to the fraudulent free-tier account problem.

xur17 9 hours ago | parent [-]

Require a phone number for the free tier, and make stablecoins a path for paid-only access.

derekdahmer 2 days ago | parent | prev [-]

Theoretically yes, but there are a few issues:

- Account creation usually happens before plan selection & payment. Most users start at free, then add a CC later either during on-boarding or after finishing their trial.

- Virtual credit cards are very easy to create. You can sign up with a credit card that has a very low limit and just use the free-tier tokens.

anonym29 2 days ago | parent | prev [-]

Mandatory phone number registration does not and never has prevented fraud.

Plenty of free VOIP services exist, including SMS reception.

Even when the free service providers are manually blocklisted, one-time validations can be defeated with private numbers on real networks / providers for under a dollar per validation, and repeated ongoing validations can be performed with rented private numbers on real networks / providers for under ten dollars per month.

The rent-an-SMS services that enable this are accessible through a web interface that allows connections from Tor, VPNs, etc. There is no guarantee that the telecom provider's location records for the IMEI tied to that phone number are anywhere close to the end user's real geographic location, so this isn't even helpful for law enforcement purposes where they can subpoena telecom provider records.

This "phone number required" practice exists for one primary reason: for businesses to track non-fraudulent users, data mine their non-fraudulent users, and monetize the correlated personal information of non-fraudulent users without true informed consent (almost nobody reads ToS's, but many would object to these invasive practices if given a dialogue box that let them accept or decline the privacy infringements but still allowed the user to use the business' service either way).

Sometimes, they are also used for a secondary reason: to let the business cheap out on developer costs by cutting corners on proper, secure MFA validation. No need to implement modern, secure passkeys or RFC-compliant TOTP MFA, FIDO2, or U2F when you can just put your users in harm's way by pretending that SMS is a secure channel, when in fact it's easily compromised by even common criminals via SS7 attacks, which are no longer relegated to nation-state actors like they once were.
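
For comparison, RFC-compliant TOTP takes only a few lines of standard-library code; here is a rough sketch assuming a base32-encoded shared secret (my illustration, not any particular vendor's implementation):

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, at: int | None = None, step: int = 30, digits: int = 6) -> str:
        # RFC 6238: HMAC-SHA1 over the current 30-second counter, dynamically
        # truncated to a short numeric code.
        key = base64.b32decode(secret_b32, casefold=True)
        counter = (at if at is not None else int(time.time())) // step
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    def verify(secret_b32: str, submitted: str, window: int = 1) -> bool:
        # Accept codes from adjacent time steps to tolerate clock drift.
        now = int(time.time())
        return any(hmac.compare_digest(totp(secret_b32, now + i * 30), submitted)
                   for i in range(-window, window + 1))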

slipnslider 2 days ago | parent | next [-]

>never has prevented fraud.

Interesting, I've heard otherwise, but only anecdotally. Do you have any data on that?

> to track non-fraudulent users

You listed a large number of ways to fake the phone number, which is why you believe it doesn't prevent fraud. What is to stop a non-fraudulent user from doing the same thing to avoid tracking by the company?

anonym29 2 days ago | parent [-]

>Do you have any data on that?

The original stated intention of the practice was that "it" [mandatory phone number registration] "prevents fraud" (though this stance was being critiqued by the person who raised it, not defended).

I'll concede that it probably has stymied some of the most trivial, incompetent fraud attempts made, and possibly reduced a negligible amount of actual fraud, but the idea that it can "prevent" fraud (implying true deterministic blocking, rather than delaying or frustrating) is refutable by the very reasonable assumption that there is almost certainly no company that implements mandatory phone number registration that has or will experience ZERO losses to fraud.

That said, in fairness, this is an unfalsifiable and unverifiable claim: to my knowledge there is nothing resembling a public directory of fraud losses experienced by businesses, and there is no incentive for businesses to admit to fraud losses publicly (they may have tax incentives to report them to the IRS, legal incentives to report them to law enforcement, and publicly traded companies may have regulatory incentives to at least indirectly acknowledge operating losses incurred due to fraud in financial reporting). But that doesn't make the claim itself unreasonable or improbable.

>What is to stop a non-fraudulent user from doing the same thing to prevent the tracking by the company?

The argument isn't that mandatory phone registration unavoidably forces privacy infringement upon all users, just that it does infringe upon the privacy of some (I'd suggest a vast majority) of users in practice.

whatevaa 10 hours ago | parent | prev [-]

Virtual phone numbers are usually blocked for this reason.