anonym29 | 2 days ago
Because SMS verification is so cheap to defeat (under a dollar per one-time validation, under $10/mo for ongoing validation), this approach really only makes sense for ultra-low-value services, where e.g. $0.50 per account costs more than the service itself is worth. Because of this low-value dynamic, there are many techniques that can add "cost" for abusive users while being far less invasive of user privacy: rate limiting, behavioral analysis, proof-of-work systems, IP restrictions, etc.

Using privacy-invasive methods to solve problems that could be addressed through simple privacy-respecting technical controls suggests unstated ulterior motives around data collection. If your service is worth less than $0.50 per account, why are you collecting such invasive data for something so trivial? If your service is worth more than $0.50 per account, SMS verification won't stop motivated abusers, so you're using the wrong tool. If Reddit, Wikipedia, and early Twitter could handle abuse without phone numbers, why can't you?
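Of the techniques listed above, proof-of-work is perhaps the least familiar. A minimal sketch of how it imposes asymmetric cost, assuming a hashcash-style scheme (the difficulty level and function names here are illustrative, not from any particular service):

```python
import hashlib
import os

DIFFICULTY_BITS = 16  # client must find a hash with this many leading zero bits

def make_challenge() -> str:
    """Server issues a random challenge string per signup attempt."""
    return os.urandom(16).hex()

def verify(challenge: str, nonce: int) -> bool:
    """Server-side check: a single hash, so verification stays cheap."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    # Interpret the leading bytes as an integer and require DIFFICULTY_BITS
    # leading zero bits.
    value = int.from_bytes(digest[:4], "big")
    return value < (1 << (32 - DIFFICULTY_BITS))

def solve(challenge: str) -> int:
    """Client-side work: brute-force a nonce, ~2**DIFFICULTY_BITS hashes
    on average, which is what makes bulk account creation expensive."""
    nonce = 0
    while not verify(challenge, nonce):
        nonce += 1
    return nonce

challenge = make_challenge()
nonce = solve(challenge)
assert verify(challenge, nonce)
```

The asymmetry is the point: verification is one hash for the server, but finding a valid nonce costs the client tens of thousands of hashes, and the difficulty can be raised per-IP or per-session when abuse is suspected, all without collecting any identifying data.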
derekdahmer | 2 days ago | parent
First, I can tell you phone number verification made a very meaningful impact. The cost of abuse can be quite high for services with high marginal costs, like AI. Second, the alternatives you describe aren't great for user privacy either: one way or another, you have to associate requests with an individual entity. Each method has its own limitations and downsides, so typically several are combined for different scenarios in the hope that together they're enough of a deterrent. Abuse prevention is bad for UX and hurts legitimate conversion; I promise you most companies only do it once abuse has become a real problem, and sometimes well after.