Subscription bombing and how to mitigate it(bytemash.net)
186 points by homelessdino 8 hours ago | 119 comments
pqdbr 6 hours ago | parent | next [-]

Recently we suffered a different kind of subscription bombing: a hacker used our 'change credit card' form to 'clean' a list of thousands of credit cards, checking which ones would go through and approve transactions.

He ran the attack from midnight to 7AM, so there were no humans watching.

IPs were rotated on every single request, so no rate limiter caught it.

We had Cloudflare Turnstile installed on both the sign-up form and all credit card forms. All requests were validated by Turnstile.

We were running with the 'invisible' setting and switched to the 'recommended' setting after the incident, so I don't know if the less strict setting was to blame.

Just like OP, our website - to spare users the extra hassle - did not require e-mail validation, especially because we send very few e-mails.

We never thought this could bite us this way.

Every CC he tried was charged $1 to confirm it was valid and then immediately refunded; the request errored out if the CC did not approve the $1 transaction, and that's the signal he used. About 10% of the ~2k requests went through.

Simply adding a confirmation e-mail won't cut it: the hacker used disposable e-mail address services, even though he did not need to.

This is a big deal. Payment processors can ban you for allowing this to happen.

shaky-carrousel 5 hours ago | parent | next [-]

Well, what you can do is notify the card issuers about the cards that went through, so they can mark them as stolen. That will surely make the hacker really happy and discourage them from doing it again :)

gregoriol an hour ago | parent [-]

So you mean you are keeping full card numbers somewhere in your logs to... fix some potential security issue...?

butvacuum 41 minutes ago | parent [-]

>Hey mr processor, the cards for transaction numbers x...y are stolen.

AndroTux 4 hours ago | parent | prev | next [-]

We solved this by introducing a silent block. If the system notices unusual behavior (too many payment attempts per user, for example), it no longer sends the payment attempt to the provider. Instead, it idles for a second or two and then just fails with a generic “payment declined.” Most attackers don’t notice they’re being blocked and just assume all credit cards are bad.
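A minimal sketch of that pattern, assuming an in-memory per-account attempt log (the names, window, and thresholds here are illustrative, not AndroTux's actual system):

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600   # look at the last hour of attempts
MAX_ATTEMPTS = 5        # hypothetical threshold before the silent block kicks in

_recent = defaultdict(deque)  # account_id -> timestamps of recent payment attempts

def attempt_charge(account_id, charge_fn, now=None, delay=1.0):
    """Run charge_fn() normally, or return a fake generic decline once the
    account has made too many attempts in the window, without ever calling
    the payment provider."""
    now = time.time() if now is None else now
    window = _recent[account_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    window.append(now)
    if len(window) > MAX_ATTEMPTS:
        time.sleep(delay)  # idle so the timing resembles a real processor round-trip
        return {"status": "declined", "reason": "payment_declined"}  # deliberately generic
    return charge_fn()
```

The generic error and the artificial delay are the point: the attacker gets no signal that they have been detected.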

quietbritishjim 3 hours ago | parent [-]

Sounds like any per-user detection wouldn't have worked in this case.

federicosimoni 4 hours ago | parent | prev | next [-]

The $1 auth charge pattern is really common for card testing attacks.

One thing that helps beyond Turnstile: Stripe Radar rules. You can block charges under $2 from IPs that haven't had a successful payment before, or flag accounts with multiple card attempts in short windows.

Not foolproof but adds a layer before the human review kicks in.

Steve16384 3 hours ago | parent | prev | next [-]

Did they use the same username/login every time?

imrozim 4 hours ago | parent | prev | next [-]

The $1 auth charge pattern is what makes this brutal: payment processors see you as enabling card testing even if you're the victim. Stripe has actually terminated accounts for this. Turnstile's invisible mode is basically just logging at that point; it rarely challenges anything. Lesson learned the hard way, I guess.

gib444 5 hours ago | parent | prev | next [-]

Ouch. Just one credit card change per account?

This is one of those levels of monitoring that only gets put in place after such an event, e.g. whole-subsystem analysis: the change-card feature being used thousands of times (well, proportional to scale) in 7 hours is a massive red flag.

eru 5 hours ago | parent [-]

> This is one of those levels of monitoring that only gets put in place after such an event.

For a website, yes. But honestly the credit card people and their infrastructure should probably _also_ watch out for this. They'd be in a much better place to detect these.

Fokamul 4 hours ago | parent | prev [-]

Cloudflare and any other anti-bot service is only good against people without willpower and knowledge to bypass them.

JS can be reversed, so you can see exactly what data points they use for detection. Anything can be spoofed to look like human behavior.

And if everything fails, you outsource it to AI - Always Indian :D

m132 7 hours ago | parent | prev | next [-]

It's a problem, but I really dislike the solution. Putting a website with known security issues behind Cloudflare's Turnstile is comparable to enforcing code signing—works until it doesn't, and in the meantime it helps centralize power around a single legal entity while pissing legitimate users off.

The Internet was carefully designed to withstand a nuclear war, and this approach, being adopted en masse, is slowly turning it into a shadow of its former self. And despite the us-east-1 and multiple Cloudflare outages of last year, we continue to stay blind to this or even rationalize it as a good thing, because that way if we're down, then so are our competitors...

pverheggen 6 hours ago | parent | next [-]

I wouldn't call this "known security issues", it's an inherent problem with any signup or forgot password page.

Also, I doubt this is going to be pissing users off since they added Turnstile in invisible mode, and selectively to certain pages in the auth flow. Already signed in users will not be affected, even if the service is down. This is way different from sites like Reddit who use their site-wide bot protection, which creates those interstitial captcha pages.

jijijijij 25 minutes ago | parent [-]

> I wouldn't call this "known security issues", it's an inherent problem with any signup or forgot password page.

It's not inherent, though! Easy, definite fix: Reverse the communication relation. If the user has to open their mail app anyway, you could simply require them to send an email to you, instead of vice versa. This would solve the problem completely. (If spoofing the sender could be done reliably, the service wouldn't be involved in the first place.)

Now, it would slightly increase friction and lower convenience. That's why it's not done. It's inherently incompatible with dark patterns, data collection and questionable new-user acquisition, but this too could be solved through standards and integration - without making Cloudflare a de facto infrastructure necessity!

Possible convenient, better solutions: have the browser send this mail, either by passing a template to the mail app, by integrating SMTP into the browser or an addon, or by instituting a new authentication protocol, which could in fact remove the human interaction completely.

As if 2FA security was the main motivation for asking for email, and/or phone anyway. Companies want user IDs, if possible UIDs, as soon as possible to increase user data value and gain marketing opportunities. I once had a "welcome mail" after typing in the address, before sending the form. Yeah...

stingraycharles 7 hours ago | parent | prev | next [-]

So your solution would be to do nothing?

Cloudflare is an excellent solution for many things. The internet was designed to withstand a nuclear war, but it also wasn’t designed for the level of hostility that goes on on the internet these days.

sdevonoes 5 hours ago | parent [-]

Cloudflare is not the solution

stingraycharles 4 hours ago | parent [-]

What is a better solution?

sarchertech 2 hours ago | parent [-]

You have to think hard about the problem and apply individual solutions. Cloudflare didn't work for the author anyway. Even if they had more intrusive settings enabled, it would just have added captchas, which likely wouldn't have stopped this particular attacker (and which you can add on your own easily anyway).

In this case I assume the reason the attacker used the change credit card form was because the only other way to add a credit card is when signing up, which charges your card the subscription fee (a much larger amount than $1).

So the solution is don’t show the change card option to customers who don’t already have an active (valid) card on file.

A more generic solution is site wide rate limiting for anything that allows someone to charge very small amounts to a credit card.
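A sketch of what "site-wide" could mean here, with illustrative names and thresholds: instead of keying the limit on user or IP - which rotating IPs defeat - cap the total rate of test-sized authorizations across the whole site.

```python
from collections import deque

class GlobalChargeLimiter:
    """Cap small-amount card authorizations across the entire site, not per user/IP."""

    def __init__(self, small_amount_cents=200, max_per_hour=50):
        self.small_amount_cents = small_amount_cents  # what counts as a "test-sized" charge
        self.max_per_hour = max_per_hour              # illustrative site-wide budget
        self.events = deque()                         # timestamps of recent small charges

    def allow(self, amount_cents, now):
        if amount_cents >= self.small_amount_cents:
            return True  # normal-sized charges are not limited here
        while self.events and now - self.events[0] > 3600:
            self.events.popleft()
        if len(self.events) >= self.max_per_hour:
            return False  # over budget: fail closed or queue for human review
        self.events.append(now)
        return True
```

A legitimate site rarely needs more than a handful of sub-$2 authorizations per hour, so the budget can be tight without hurting real users.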

Or better yet, don't have any way to charge very small amounts to cards. Do a $150 hold instead of a $1 charge when checking a new card.

As far as cloudflare centralization goes though, you’re not going to solve this problem by appealing to individual developers to be smarter and do more work. It’s going to take regulation. It’s a resiliency and national security issue, we don’t want a single company to function as the internet gatekeeper. But I’ve said the same about Google for years.

HumanOstrich an hour ago | parent [-]

None of your solutions seem useful in this case, especially a $150 hold. Site-wide rate limiting for payment processing? Too complicated, high-maintenance, and easy to mess up.

You can't block 100% of these attempts, but you can block a large class of them by checking basic info on the attempted card changes, like whether they all have different names and zip codes. Combine that with other (useful) mitigations. Maybe an alert when, over the past few hours or even days, 90% of card change attempts have failed for a cluster of users.

siruwastaken 3 hours ago | parent | prev | next [-]

I fully agree with your comment. Wouldn't it be possible to just put off sending welcome emails until the user actually engages with the product in some way? And if an account with no engagement persists for more than, say, three months, just delete it again under the premise of 'erroneously created'?

AndroTux 4 hours ago | parent | prev | next [-]

I had a similar issue and evaluated alternatives. Sadly, there were none that did the job well enough.

How do you suggest implementing bot prevention that works reliably? Because at this point in time, LLMs are better at solving CAPTCHAs than humans are.

recursivecaveat 5 hours ago | parent | prev | next [-]

Since they updated the flow to only ever push 1 email to unverified users, I would say that's as patched as it can realistically be before you bring in the captchas.

colesantiago 7 hours ago | parent | prev | next [-]

And your solution is assume everyone on the internet is a good actor?

How would you solve this at scale?

RobotToaster 5 hours ago | parent | next [-]

OP basically said that the firewall rules and email confirmation alone would've mostly mitigated this.

But also Anubis is a good alternative to slow bots.

cuu508 6 hours ago | parent | prev [-]

How about a signup flow where the user sends the first email? They send an email to signups@example.com (or to a generated unique address), and receive a one-time sign-in link in the reply. The service would have to be careful not to process spoofed emails though.

Another approach is to not ask for an email address at all, like here on HN.

whatevaa 6 hours ago | parent | next [-]

"The user just needs to be careful not to step on a landmine. Exact steps left as an exercise to the reader".

Anybody can send email with all of the dmarc stuff, how do you "be careful" with spoofed email?

__david__ 4 hours ago | parent [-]

> how do you "be careful" with spoofed email?

You actually verify DKIM and SPF—you know, that “dmarc stuff”. That’s enough to tell you the mail is not spoofed.
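In practice many services don't do the cryptography themselves; they read the Authentication-Results header (RFC 8601) that their own inbound MTA adds after checking DKIM and SPF. A simplified sketch, which assumes the MTA strips any sender-supplied copies of that header (the authserv-id is a made-up example):

```python
from email import message_from_string

def passes_auth(raw_message, trusted_authserv_id="mx.example.com"):
    """Accept the mail only if our own MTA recorded dkim=pass and spf=pass."""
    msg = message_from_string(raw_message)
    for header in msg.get_all("Authentication-Results", []):
        # Only trust the header written by our own server, identified by its authserv-id.
        if not header.strip().startswith(trusted_authserv_id):
            continue
        results = header.lower()
        return "dkim=pass" in results and "spf=pass" in results
    return False  # no trusted results header: treat as unauthenticated
```

Real parsing is fussier than substring matching (RFC 8601 has a grammar), but this is the shape of the check.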

red_admiral 4 hours ago | parent | prev | next [-]

That is how you get your conversion rate to drop to the floor, sadly.

Every extra field in the sign-up form already lowers the conversion rate.

xwowsersx 6 hours ago | parent | prev | next [-]

It sounds appealing at first because it flips the trust model: instead of the service initiating contact, the user proves control of their email up front. That feels cleaner and arguably more robust against certain classes of abuse.

But from a UX standpoint it's a nonstarter.

You're asking users to:

- leave the site/app

- open their email client

- compose a message or at least hit send

- wait for a reply

- then come back and continue

That's a lot of steps compared to enter email -> click link. Each additional step is a dropoff point, especially on mobile or for less technical users. Many people don't even have a traditional mail client set up anymore; they rely on webmail or app switching, which adds even more friction.

It also introduces ambiguity:

- What exactly am I supposed to send?

- Did it work?

- What if I don't get a reply?

From the service side, you're trading a simple, well-understood flow for a much more complex inbound email processing system with all the usual headaches (spoofing, parsing, delivery delays, spam filtering).

In practice most systems optimize for minimizing user effort, even if that means accepting some level of abuse and mitigating it elsewhere. A solution that significantly increases friction... no matter how principled... just won't get adopted widely.

So while the idea is interesting from a protocol design perspective, it's hard to see it surviving contact with real users.

cuu508 3 hours ago | parent | next [-]

I think the main UX obstacle is that it is unfamiliar – no-one does signups like that currently. But the flow does not need to be quite as bad, if you use "mailto:" links. In the happy case:

- user clicks on the link

- their email client opens, with the To:, Subject: and Body: fields pre-filled

- user clicks "Send"

- a few seconds later a sign-in link arrives in their inbox
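A "mailto:" link with pre-filled fields is plain URL encoding; a sketch, where the signup address and token scheme are invented for illustration:

```python
from urllib.parse import quote

def signup_mailto(token):
    """Build a mailto: link whose pre-filled body carries a one-time signup token.

    The service matches the token in the inbound mail to the pending signup;
    the address the mail arrives from becomes the verified address."""
    to = "signup@example.com"          # hypothetical inbound signup address
    subject = quote("Confirm my signup")
    body = quote(f"token={token}\n(just press Send)")
    return f"mailto:{to}?subject={subject}&body={body}"
```

The link can be rendered as an ordinary "Continue" button, so for the user it is click, then Send.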

__david__ 3 hours ago | parent | prev [-]

> But from a UX standpoint its a nonstarter

Disagree. The UX would be pretty similar. Click a mailto link which opens the email client with to, subject and body precomposed. Click send. Server receives mail and the web page continues/finishes the sign up process. No need for an email reply. It’s different, but it’s not crazy.

grufkork 5 hours ago | parent | prev [-]

Amidst all the age verification and bot spam going on, anonymous private/public key proof of identity could work: the newly signed up service must pass a challenge from the mail server to prove the user actually intended to sign up. Though I guess that would be basically the same thing as the users server initiating the communication. Really, just an aggressive whitelist/spam filter that only shows known senders solves it too, but as I understand part of the attack is having already compromised the mail service of the target. Having a third decoupled identity provider would resolve that, but then that becomes a single point of failure…

AussieWog93 7 hours ago | parent | prev [-]

Honestly I really like CloudFlare as a business. There's no vendor lock-in, just a genuine good product.

If they turn around later and do something evil, literally all I need to do is change the nameserver to a competitor and the users of my website won't even notice.

AndroTux 4 hours ago | parent [-]

Then you're not using any of their services besides DNS, at which point you don't need to use Cloudflare at all.

As soon as you turn on any other service they offer, you need to actively migrate away. It's an inherent issue of services that actually provide a benefit. If you're saying "I can just migrate to any other nameserver" then you're telling me you have no use for Cloudflare in the first place. Because if you did, you couldn't just not use it anymore.

Let's say you're using their WAF. Sure, you can just change your domain's nameserver and you've migrated away. But now you no longer have a WAF. Same for their CDN. Or their load balancer. Or their object storage. Or their CAPTCHAs.

thisisnow 3 hours ago | parent [-]

I think they also lock you into their DNS when you buy a domain from them, unlike other registrars, which let you change your NS freely. Sure, you can transfer the domain elsewhere for a small price, but the point is they go the extra mile to force their NS, which I haven't seen with other registrars.

HexDecOctBin 6 hours ago | parent | prev | next [-]

I was attacked in this way a couple of months back. I use a different email address for each account (of the pattern product@example.com), and use a separate address for Git commits (like git@example.com). It was this second one that was attacked and I ended up with some 500 emails within 12 hours. Fortunately, since I don't expect anyone to actually email me on the Git address, I just put up a filter to send them all to a separate folder to go over at my leisure.

After 12 hours, the pace of emails came to a halt, and then I started receiving emails, to made-up addresses on the same domain (I have wildcard aliases enabled), of an American political nature, suggesting that someone was perhaps trying to vent some frustration. This only lasted about half an hour before the attacker seems to have given up and stopped.

Strangely, I didn't receive any email during the attack that the attacker might have been trying to hide, which has left me confused as to the purpose of the attack in the first place.

chicagojoe 6 hours ago | parent [-]

I had this happen recently too, also not covering up any email activity (I combed through 3000+ spam emails).

Double check that there are no forwarding rules added to your inbox and add some protection against a SIM swap.

In my case, they didn't compromise any of my accounts but did attempt to open a new credit card so it would be worth double checking your credit reports.

jb1991 5 hours ago | parent | prev | next [-]

One thing I have never understood in this current age is how so many companies, including ones that handle confidential data like banks, don't require a user to verify their email address after it's entered. I have an unfortunately very generic email address that's easy to mistype, and almost every day I receive order receipts for expensive vacation hotels, bank or wire transfer confirmations, and a very long list of other things I should not be receiving, simply because the companies sending those emails never had the user verify that they entered the right address. They are legitimate emails, often addressed to someone with the same first name as me but a different last name; that person simply typed the wrong email address accidentally.

It's bonkers to me that there are developers out there working for these companies who never thought to implement simple email verification.

xmcqdpt2 24 minutes ago | parent | next [-]

As is often repeated, the optimal amount of fraud is not zero:

https://www.bitsaboutmoney.com/archive/optimal-amount-of-fra...

They are optimizing towards making it easy to purchase things on a whim.

Gigachad 5 hours ago | parent | prev | next [-]

Because confirming the email introduces friction. And everyone is optimising for low friction even if it risks private data leaks, which you can always blame on the user for typing their email wrong.

letier 5 hours ago | parent | prev | next [-]

I have a very early gmail address. A very common first name plus two letters. It is almost unusable by now. Invoices, subscriptions, important documents about some persons real estate dealings. They all end up in my inbox.

I have around 20 or 30 Google accounts where I am listed as the backup email address. Those people forget their passwords or stop using their accounts, and I get email notifications about it. No confirmation from my side was necessary.

I set up a new address that is less likely to end up with this problem. But migrating away from the old one is not easy…

pixelesque 5 hours ago | parent [-]

Exactly the same situation with me in terms of gmail address (although my names are less common).

I get so many other $MY_NAME emails, including bills (including multiple credit cards and things like Afterpay), deliveries, medical details/reports, family communications, etc, etc.

And it's very clear that quite a few online services blatantly don't verify email addresses, they just assume the email is valid and allow the person to start using it.

wodenokoto 5 hours ago | parent | prev | next [-]

I know e-mail has a faster round-trip, but they also don't ask you to confirm snail mail.

I think it would be quite annoying to have to verify my purchase everywhere, just like how I don't wanna sign up to every single merchant online. Let me purchase as guest without having to enter OTPs.

pprotas 5 hours ago | parent | prev | next [-]

This is intentional. Email verification is friction, so it gives users a chance to reconsider whether their purchase is really necessary. This is bad for business, because they’d prefer if you were impulsive.

Also, people usually type their emails correctly, especially these days with auto-fill. So not sending confirmation emails is optimizing for the happy path.

jb1991 5 hours ago | parent [-]

Not just talking about purchases. I receive transaction details with bank numbers for wire transfers around the world. It’s ridiculous.

I was once even sent all of the legal proceedings for a court case by a lawyer who had the wrong address.

plagiarist 5 hours ago | parent | prev | next [-]

I am dismayed that it is legal to create an account attached to an email without validation of that email. It should be straight-up massive fine illegal to send any email other than account confirmation until validated. Validation emails should have a "do not contact me again" that works with a single click and a massive fine if it does not.

nobodywillobsrv 5 hours ago | parent | prev [-]

Yes, it is insane. I am in the same boat and have received mortgage applications, police details, applications for police jobs, massage receipts, you name it. Many would be considered serious leaks of customer data.

I have even had founder level emails that presumably are confidential sent to me because I share the name of someone operating in tech.

I respond or report when it's obviously some real person running a small operation, but for large monoliths there is very little to do except fire off a quick reply to the corporate email.

I really wish there was some kind of high-level discussion about building something for this specific problem of non-malicious, wrong-person-same-name errors.

Google could do it; it's just not something that is monetizable at a scale they care about, IMO, and I have not been able to think of a way to make this work from outside the email monoliths.

Would love to hear if anyone has ideas.

jb1991 5 hours ago | parent | next [-]

I commend your effort to actually contact the companies to let them know about the error. I stopped doing that a long time ago, when I stopped getting responses or any kind of meaningful reaction to my attempts to do something good by reporting it.

fragmede 5 hours ago | parent | prev [-]

What Google has done, is add profile pictures for users, so if I'm emailing girlfriend@gmail.com I get her picture, but if I email giirlfriend@gmail.com, I see someone else's pfp which is enough to get me to realize I've spelled it wrong. I'm sure there's more they could be doing, but they're aware of the problem at least.

jb1991 5 hours ago | parent [-]

But that only works if you’re emailing from another Gmail account yes?

mfi 5 hours ago | parent | prev | next [-]

I work at the email security company xorlab[0], where my colleagues and I did a thorough analysis of real subscription/email bombing waves that we saw at our customers[1].

Here is some interesting additional information from the attacks we analyzed:

* Email bombing as a service is a thing, where you can buy 10,000 credits for $10 and easily bomb target inboxes with over 2000 emails per hour.

* Almost all email bombing attacks start in the morning, between 8 and 10.

* The most common day of attack is Friday.

[0] https://www.xorlab.com/en/

[1] https://www.xorlab.com/en/blog/from-chaos-to-control-insight...

znnajdla 7 hours ago | parent | prev | next [-]

I absolutely refuse to use BigTech gatekeepers or useless CAPTCHAS (any sufficiently advanced bot can get around any CAPTCHA anyway). We solved this at our startup by running names through a simple LLM filter - if the name is gibberish like Px2846skxojw just block the signup. Worked surprisingly well. Of course this is easy to get around if the bot knows what you’re doing. But bots look for easy targets, as long as there are enough vibe coded crap targets on the internet they’re not going to bother with circumventing a carefully designed app.

avian 4 hours ago | parent | next [-]

> We solved this at our startup by running names through a simple LLM filter - if the name is gibberish like Px2846skxojw just block the signup.

I hope "LLM thinks your name is gibberish" won't become the new "your name can't include invalid characters".

snowe2010 6 hours ago | parent | prev | next [-]

Then you’re also blocking legitimate users that don’t want to be tracked and use services like iCloud Hide my Emails

RobotToaster 5 hours ago | parent | next [-]

> that don’t want to be tracked

>iCloud

Except by apple I guess...

rs_rs_rs_rs_rs 5 hours ago | parent | prev [-]

Those users can take their business somewhere else.

steezeburger 6 hours ago | parent | prev | next [-]

This doesn't seem like a very good solution to be honest. And why use an LLM for this? What if I want a legit random ass string as my username?

rs_rs_rs_rs_rs 5 hours ago | parent [-]

You're not owed anything, you can take your legit random ass string username to another company that allows it.

I suspect any company would take this trade off, losing some customers but significantly lowering fraud.

tholm 6 hours ago | parent | prev | next [-]

Using an LLM for this seems excessive when there are well-established algorithms for detecting high-entropy strings.
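One such established approach is a plain Shannon-entropy check over the name's characters. A sketch - the length cutoff and the 3.2-bit threshold are guesses you would tune on real data, and false positives (short names, transliterations) are the hard part:

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Bits per character of the string's empirical character distribution."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_gibberish(name, min_len=8, threshold=3.2):
    # Short names can't accumulate enough distinct characters to look random,
    # so only long, high-entropy strings get flagged.
    return len(name) >= min_len and shannon_entropy(name) > threshold
```

Random alphanumeric strings approach log2 of the alphabet size per character, while real names repeat letters and sit well below that.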

znnajdla 4 hours ago | parent [-]

The high entropy string option led to lots of false positives. The LLM check seems to work fine with no complaints from real users.

imiric 6 hours ago | parent | prev | next [-]

So your solution is to deploy a black box that can be worked around with a basic lookup table for a single field?

CAPTCHAs were never meant to work 100% of the time in all situations, or be the only security solution. They're meant to block lazy spammers and low-level attacks, but anyone with enough interest and resources can work around any CAPTCHA. This is certainly becoming cheaper and more accessible with the proliferation of "AI", but it doesn't mean that CAPTCHAs are inherently useless. They're part of a perpetual cat and mouse game.

Like LLMs, they rely on probabilities that certain signals may indicate suspicious behavior. Sophisticated ones like Turnstile analyze a lot of data, likely using LLMs to detect pseudorandom keyboard input as well, so they would be far more effective than your bespoke solution. They're not perfect, and can have false positives, but this is unfortunately the price everyone has to pay for services to be available to legitimate users on the modern internet.

I do share a concern that these services are given a lot of sensitive data which could potentially be abused for tracking users, advertising, etc., but there are OSS alternatives you can self-host that mitigate this.

mads_quist 7 hours ago | parent | prev | next [-]

Nice.

latexr 5 hours ago | parent | prev [-]

> useless CAPTCHAS (any sufficiently advanced bot can get around any CAPTCHA anyway). We solved this at our startup by (…). Of course this is easy to get around if the bot knows what you’re doing

So, by your own admission, your solution doesn’t get around the “sufficiently advanced bot” problem.

stanac 4 hours ago | parent | next [-]

I added a custom captcha (simple math problems as slightly distorted pictures, with an audio alternative) to one of my forms; it prevents ~80% of spam submissions. Less than 1% of spam passes; the other ~20% is blocked on keywords (like "sex", "passion", etc...).
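For what it's worth, the server side of such a captcha is tiny; only rendering the question as a distorted image or audio needs a library. A stateless sketch (the secret is a placeholder, and a real version would add a nonce and expiry so known answer/signature pairs can't be replayed):

```python
import hashlib
import hmac
import random

SECRET = b"change-me"  # hypothetical server-side secret

def make_challenge(rng=random):
    """Return (question_text, signature). The question text is what gets
    rendered as a distorted image, or read aloud for the audio variant."""
    a, b = rng.randint(1, 9), rng.randint(1, 9)
    # Sign the expected answer so the server doesn't need to store it.
    sig = hmac.new(SECRET, str(a + b).encode(), hashlib.sha256).hexdigest()
    return f"{a} + {b}", sig

def check_answer(user_answer, sig):
    """Stateless check: recompute the MAC over the submitted answer."""
    expected = hmac.new(SECRET, str(user_answer).encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```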

latexr an hour ago | parent [-]

Not sure why you’re telling me that. I’m not criticising CAPTCHA, my parent comment was.

znnajdla 4 hours ago | parent | prev [-]

Yes, and I don't claim to solve the problem completely. It's an impossible-to-solve problem, which BigTech wants you to pay mafia protection money to "solve".

latexr an hour ago | parent [-]

The point I was making is that you’re criticising something while suggesting something else even easier to bypass.

paaradise 5 hours ago | parent | prev | next [-]

> that meant each victim received three emails from us in under a minute:

> Verify your email address

> Welcome to Suga

> Reset your password

> Three emails they never asked for, from a product they may never have heard of. We were just one of potentially hundreds of sites being hit at the same time.

@homelessdino

Why would you send welcome and password-reset emails to a victim that DID NOT verify?

simonkagedal 5 hours ago | parent [-]

This is addressed in the article. The service no longer does this, and he apologizes.

dgellow an hour ago | parent | prev | next [-]

Oh, that's interesting. I think that matches perfectly with an experience I had with a micro-SaaS I run. I first thought people had discovered it organically, because it started with just a few signups over the weekend, then eventually escalated to multiple per hour. None of the accounts were active, but the email addresses didn't look too odd and they weren't bouncing. I eventually added a captcha, which seems to have been effective, but it was a surprising experience, because as far as I'm aware there is nothing nefarious you could use that SaaS for.

chw9e 5 hours ago | parent | prev | next [-]

This happened to me several years ago. I got signed up to probably 700 newsletters overnight. In the middle of all the sign-ups there was activity on my Airbnb account, where my notification settings were changed. When I checked my Airbnb, I noticed that someone had created a fake listing under my account and disabled booking notifications for it. A real multi-layer scam, where the hacker would make money off a fake listing on someone else's account, who would probably never even realize it.

duckmysick 2 hours ago | parent | next [-]

How did they access your Airbnb account?

Gigachad 5 hours ago | parent | prev [-]

I'd probably be safe against this, because I have an email filter set up so that anything with an unsubscribe link gets moved to spam.

Account notification emails don’t have unsubscribes while pretty much all junk does.
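That rule is easy to script yourself: bulk mail is supposed to carry a List-Unsubscribe header (RFC 2369), which is more reliable to match than links in the body. A sketch of the routing decision (folder names are arbitrary):

```python
from email import message_from_string

def folder_for(raw_message):
    """Route bulk mail - anything advertising an unsubscribe mechanism -
    out of the inbox, so transactional mail stays visible during a bombing run."""
    msg = message_from_string(raw_message)
    if msg.get("List-Unsubscribe") is not None:
        return "Spam"
    payload = msg.get_payload()
    if isinstance(payload, str) and "unsubscribe" in payload.lower():
        return "Spam"  # crude body check for senders that omit the header
    return "Inbox"
```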

noname120 4 hours ago | parent [-]

I receive a bunch of transactional emails with an unsubscribe button at the bottom. Unsubscribing of course does nothing to stop the transactional emails, but the last thing I want is for them to go to spam.

croemer an hour ago | parent | prev | next [-]

Incidentally, email bombing was likely used in the axios attack to bury some important security notifications: https://github.com/axios/axios/discussions/10612

linolevan 7 hours ago | parent | prev | next [-]

Well written piece on an attack vector I'd never thought too hard about before. Thanks for elaborating on why sending an email or two to a random person helps an attacker achieve their goal. A lot of similar articles skip over details like that.

tariky 7 hours ago | parent | prev | next [-]

I had a similar situation on a WooCommerce shop, but with many more signups per hour. Putting Turnstile in front fixed the problem.

My conclusion is to move off WordPress as fast as possible; every WordPress site I manage gets bombarded by bots.

somat 6 hours ago | parent [-]

Hell, every non-WordPress site I manage also gets bombarded by WordPress bots. (Not really - I am stretching the term to refer to WordPress attack attempts for dramatic purposes. But those still end up being about 99% of my personal site traffic.)

mads_quist 7 hours ago | parent | prev | next [-]

A good old Honey Pot helped us at All Quiet "a lot" with those attacks. Basically all attacks are remediated by this. No need for Cloudflare etc.

grey-area 7 hours ago | parent | next [-]

Can you expand on that? A separate honey pot sign up page invisible to real users, or something else?

mads_quist 7 hours ago | parent [-]

You add "hidden" inputs to your HTML form with names like "First Name" or "Family Name". Bots will fill them out. You either expect them to stay empty, or you fill them via JavaScript with something you expect. It's of course reverse-engineerable, but it does the trick.
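A minimal server-side sketch of both variants, with hypothetical field names (`first_name_hp` for the must-stay-empty decoy, `js_check` for the JavaScript-filled one):

```python
# Hypothetical field names: "first_name_hp" is a CSS-hidden decoy a
# real browser leaves empty; "js_check" is a field your page's
# JavaScript fills with a known token before submitting.
EXPECTED_TOKEN = "ok"

def is_probable_bot(form: dict) -> bool:
    # Variant 1: the hidden decoy must stay empty; bots auto-fill it.
    if form.get("first_name_hp", ""):
        return True
    # Variant 2: the JS-filled field must hold the expected token;
    # bots that don't execute JavaScript leave it blank.
    if form.get("js_check", "") != EXPECTED_TOKEN:
        return True
    return False

print(is_probable_bot({"email": "a@b.example", "first_name_hp": "", "js_check": "ok"}))    # False
print(is_probable_bot({"email": "a@b.example", "first_name_hp": "John", "js_check": "ok"}))  # True
```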

alexjurkiewicz 7 hours ago | parent | next [-]

Doesn't that break password manager autofill?

grey-area 7 hours ago | parent | prev | next [-]

Thanks, I’ve seen scripted attacks bypass this sort of hidden input unfortunately (perhaps human assisted or perhaps just ignoring hidden fields).

jaggederest 5 hours ago | parent | next [-]

They often do actually ignore truly hidden fields (input type=hidden), but if you put them "behind" an element with CSS, or make them extremely small but still rendered, many get caught. It's similar to the cheeky prompt injection attacks people did/do against LLMs.

grey-area 4 hours ago | parent [-]

Thanks.

mads_quist 7 hours ago | parent | prev [-]

Sure, it's really basic of course.

bevr1337 6 hours ago | parent | prev | next [-]

Do you test this against password managers? Seems like this approach could generate false positives

imhoguy 4 hours ago | parent | prev [-]

Watch out, it may break the accessibility of your service. If somebody fills in these fields I would add extra verification, e.g. an accessible CAPTCHA.

hrmtst93837 3 hours ago | parent | prev [-]

Honeypots work until the bot starts posting to every field. Dropping traffic scrubbing also means you lose the abuse reporting and IP reputation feed that a service like Cloudflare gives you, so a trick that filters one class of signup spam turns into you handling the rest of the mess yourself.

avian 4 hours ago | parent | prev | next [-]

> The goal [...] to flood the victim’s inbox with so much noise that they can’t find the emails that actually matter.

> While the victim is drowning [...] the attacker is doing something else.

In the past months, some personal mail accounts on a mail server I administer were victims of something that looked similar to what's described here.

Hundreds of mails apparently originating from various (legit-looking) random public web services, support requests, issue trackers, web contact forms etc. For example, a good part of them were from the Virginia Department of Motor Vehicles (as in something like "thank you for filing a document #123 with us").

To make things even weirder, they were not sent directly to the address, but according to message headers were bounced through Google Groups (each time I checked the relevant group was already deleted). So as far as I can tell it was not the mail address hosted on my server that was being entered into those websites.

No phishing links, no attached malware, no short advertisements snuck into a text field etc. Just a huge amount of automated replies from "noreply@" legit entities.

I've seen several of these attacks and spent some time investigating them. To my knowledge these were not associated with any other malicious activity, like the author of the article mentions. If anything they were just a denial-of-service attack on a mail box (as in, making the human user trawl through garbage, the mail volume was far from saturating the server itself). What exactly would be a motivation for that I can't tell, except making the life of a small mail server admin even harder than it already is.

jiehong 2 hours ago | parent | prev | next [-]

I think it’s time to stop using emails in general for all of that.

What’s the alternative, though?

What if each service generated a link to a UUID URL where all new messages would be displayed? The user can subscribe to that via RSS.

So the user doesn’t receive anything.

motbus3 4 hours ago | parent | prev | next [-]

I know this ain't new, but I am tired of people turning everything into weapons. When I started working I wanted to see things being built and evolve.

Now, every mofo just wants a grant to ---- innocent kids in school.

thisisnow 3 hours ago | parent | prev | next [-]

Interesting, until it turned into an ad for Cloudflare. Cloudflare spreads like a plague on the internet, slowing everything down, forcing JS and trying to pull every single datapoint from your browser. Is this really the _only_ solution?

z3t4 3 hours ago | parent | prev | next [-]

You could ask new users to send an email to a generated temp address before sending the confirmation e-mail. I do think e-mail should be not only opt-in, but also optional!
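A sketch of what generating such a one-time challenge address could look like (the domain and prefix are made-up examples):

```python
import uuid

def make_challenge_address(domain: str = "verify.example.com") -> str:
    """Generate a one-time address the new user must send a mail TO;
    the service only starts emailing them once something arrives at
    this address, reversing the usual confirmation-email flow."""
    return f"confirm-{uuid.uuid4().hex}@{domain}"

addr = make_challenge_address()
print(addr)  # e.g. confirm-9f1c...@verify.example.com
```

The service would then watch its inbound mail for that address and activate the account only on a match, so it never originates mail to an unverified stranger.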

Jean-Philipe 6 hours ago | parent | prev | next [-]

Thanks for explaining this! I saw this happen to some of my web sites and I couldn't wrap my head around why someone would do this...

shreyssh 4 hours ago | parent | prev | next [-]

This is the same class of problem we see with AI agents and databases. The 'confused deputy': a legitimate system being weaponized to do something unintended. Rate limiting and intent verification at the proxy layer is the pattern.
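To illustrate the rate-limiting half of that pattern, here is a minimal token-bucket limiter sketch keyed per account rather than per IP (relevant because, as pqdbr describes above, attackers rotate IPs on every request); the rate, burst, and key are made-up parameters:

```python
import time
from collections import defaultdict

class TokenBucket:
    """Minimal per-key token-bucket rate limiter sketch.

    Keying on the account or card fingerprint (not the client IP)
    matters here, since attackers rotate IPs freely."""

    def __init__(self, rate: float, burst: int):
        self.rate = rate    # tokens refilled per second
        self.burst = burst  # maximum bucket size
        self.state = defaultdict(lambda: (float(burst), time.monotonic()))

    def allow(self, key: str) -> bool:
        tokens, last = self.state[key]
        now = time.monotonic()
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens < 1.0:
            self.state[key] = (tokens, now)
            return False
        self.state[key] = (tokens - 1.0, now)
        return True

limiter = TokenBucket(rate=0.1, burst=3)  # ~1 request per 10s after a burst of 3
print([limiter.allow("acct:42") for _ in range(5)])  # [True, True, True, False, False]
```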

aservus 4 hours ago | parent | prev | next [-]

The irony is that the services most vulnerable to this are the ones that collect the most email/data in the first place. Minimal data collection is the best mitigation.

msephton 6 hours ago | parent | prev | next [-]

How can an affected user recover from such an attack?

jeroenhd 5 hours ago | parent [-]

Report all received emails as spam, add a filter to get rid of the @domain.com emails, and probably add the entire service to the blacklist so it doesn't happen again. Probably get rid of any paid account on that service one might have lying around.

If the attack is targeted at a domain using mail servers from any major email provider (Gmail, Outlook), the user will probably find most emails in their spam folder anyway, and the sending domain will get added to the spam list automatically. Especially if the attack hits multiple users on the same email service.

cuu508 7 hours ago | parent | prev | next [-]

> If a bot creates an account with someone else’s email, the victim gets one email, if they ignore it that’s the end of it. The welcome email and everything after it only fires once the user verifies.

As a user, I would prefer no welcome email at all.

SebastianKra 5 hours ago | parent | next [-]

I suspect everyone feels that way except SaaS providers. They could just give you a checkbox to turn the newsletter off, but they don't.

tomjen3 6 hours ago | parent | prev | next [-]

Yeah, that's part of why I hate "login with SERVICE". The big benefit would be not spamming me, but they always insist on getting my email.

There was a time when you had to select "sign me up for your newsletter", then a time when you had to uncheck it. Then you had to check a box to not get an email, and now you don't even get that choice.

And lately? You have to go dig through your email because you can't set a password (looking at you Claude), so you can't filter email.

devmor 7 hours ago | parent | prev [-]

Then there's no verification step, which defeats the entire mechanism that keeps you from getting spammed.

JoshTriplett 7 hours ago | parent [-]

It sounds like cuu508 didn't want the post-verification welcome, as opposed to the one-time verification message.

cuu508 7 hours ago | parent [-]

Correct.

sodapopcan 6 hours ago | parent [-]

Yes, correct. When I clicked the link I was already welcomed by the welcome page (which is, for the most part, welcome). But then why send me another email further welcoming me? I already feel welcomed! And don't give me any of that "because it works" BS (even though that is what you are going to say).

(cuu508, "you" in this instance does not mean you)

queenkjuul 7 hours ago | parent | prev | next [-]

I had my email stolen in such an attack; I still get random "you abandoned your cart!" emails now and then, but luckily (?) they got my credit card at the same time and I cancelled it within minutes. So it's a little annoyance, but it doesn't really make sense to me that the flood works. At least not with American credit cards, which routinely flag my own trips to Micro Center lol

Editing to add: almost 100% of these emails came from the same e-commerce product, I'll have to look up which. But every site i got an email from was running the same off the shelf template.

CrzyLngPwd 5 hours ago | parent | prev | next [-]

We experienced a similar thing: thousands of new accounts were being created over a short period, but it was Amazon SES sending us a warning about complaint numbers that woke us up to it.

We added a captcha and used a disposable email checking service to get rid of it.

nubg 7 hours ago | parent | prev [-]

This post was written by AI, there are multiple clues.

Author, why can you not use your own words?

I am not sure what you meant to say, vs what is LLM garbage I could have prompted myself.

wdutch 7 hours ago | parent | next [-]

I can't comment on whether it was written by AI or not, but I found the OP informative and quite dense with useful information. Nothing stood out to me as garbage.

radku 5 hours ago | parent | next [-]

Agreed. Personally I think of massaging a text with an LLM as like applying filters to your pictures.

The text has probably been based entirely on internal notes and investigations and is very informative. Would it be better if the OP had written it entirely by themselves? Not necessarily.

nubg 6 hours ago | parent | prev [-]

I agree the topic and most of the content is legit!

Which makes it even more annoying. Because you don't know which are the good bits where somebody is sharing his unique insight, and which is just taken from the LLMs world knowledge.

chii 6 hours ago | parent [-]

So you are merely prejudiced against LLM-generated content, even if it's good?

Why not accept that it is good, and forget about it being LLM?

tholm 4 hours ago | parent | next [-]

Because sounding skeptical and "clever" is more important to some people than providing meaningful and relevant insight into the topic at hand.

nubg 3 hours ago | parent | prev [-]

?? I literally just wrote my main complaint:

> Because you don't know which are the good bits where somebody is sharing his unique insight, and which is just taken from the LLMs world knowledge.

denismi 6 hours ago | parent | prev [-]

I am quite confident that the following was NOT LLM:

> New users were signing up but not doing anything, they weren’t creating an org, a project, or a deployment, they just left an account sitting there.

Surely the LLM version is:

> New users were signing up but not doing anything; they weren't creating an org, a project, or a deployment—they just left an account sitting there.

nubg 6 hours ago | parent [-]

It really depends on the LLM and the wrapper prompt. There are many other giveaways, though, which I am not going to name so as not to burn them.

tpoacher 4 hours ago | parent [-]

You really should stop using LLMs to write messages complaining about LLM use though. (the "it depends" and the hyphen-as-emdash were dead giveaways).

/s