kstrauser a day ago

I'll side with you here. This gives attackers a huge window of time in which to compromise your service and configure it the way they want it configured.

In my recent experience, you have about 3 seconds to lock down and secure a new web service: https://honeypot.net/2024/05/16/i-am-not.html

lucb1e a day ago

Wut? That can't have been a chance visit from a crawler, unless you linked to it within those 3 seconds of creating the subdomain and the crawler hit the linking page in that same instant, or someone linked to it (in preparation) before it existed and bots were already hammering it constantly.

Where did you "create" this subdomain? Do you mean the vhost in the webserver configuration, or an A record in the DNS configuration at e.g. your registrar? Because it seems to me that either:

- Your computer's DNS queries are being logged and any unknown domains immediately get crawled, be it with malicious or white-hat intent, or

- Whatever method you used to create that subdomain is being logged (by whoever runs that service, or e.g. because they accidentally left AXFR enabled) and the new name immediately got crawled, with whichever intent
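A quick way to check the AXFR case, for what it's worth: ask the authoritative nameserver for a zone transfer directly (ns1.example.com here is a placeholder for your real nameserver):

```
# If the nameserver mistakenly allows zone transfers, this dumps
# every record in the zone, subdomains included; a properly
# configured server refuses with "Transfer failed."
dig AXFR example.com @ns1.example.com
```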

I can re-do the test on my side if you want to figure out which part of your process is leaky, assuming you can reproduce it in the first place (to within a few standard deviations of those three seconds, at least: if the next time it's 40 seconds I'll call it 'same', but if it's 4 days then the 3 seconds were a lottery ticket -- not that I'd bet on those odds when deploying important software, but generally speaking, as a measure of how aggressive-or-not the web is nowadays)

kstrauser a day ago

The consensus from friends after I posted that is that attackers monitor the Certificate Transparency (CT) logs, which Let's Encrypt publishes every new certificate to, and pounce on new entries the moment they appear. Here I was using Caddy, which by default uses LE to create a cert for any host you define.
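To make that concrete, a minimal Caddyfile along these lines is all it takes (hostname made up, not my actual config):

```
# Caddy's automatic HTTPS requests a Let's Encrypt certificate for
# every site address it serves, so this name lands in the public
# CT logs as soon as the server starts.
some-random-subdomain.example.com {
	respond "hello"
}
```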

I can definitely reproduce this. It shocked me so much that I tried a few times:

1. Create a new random hostname in DNS.

2. `tail -f` the webserver logs.

3. Define an entry for that hostname and reload the server (or do whatever your webserver requires to generate a Let's Encrypt certificate).

4. Start your stopwatch.
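Roughly, as shell commands, assuming Caddy; the DNS step is provider-specific, so the names here are hypothetical:

```
# 1. Make up a random hostname and create its DNS record
#    (provider-specific; do this via your registrar's API or UI).
HOST="$(openssl rand -hex 8).example.com"

# 2. Watch the access log in another terminal.
tail -f /var/log/caddy/access.log

# 3. Define the host and reload; Caddy fetches a Let's Encrypt
#    cert, which is immediately published in the CT logs.
printf '%s {\n\trespond "hi"\n}\n' "$HOST" >> /etc/caddy/Caddyfile
caddy reload --config /etc/caddy/Caddyfile

# 4. Start your stopwatch.
date +%s
```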

lucb1e a day ago

Thanks! CT logs do explain it. So it's not actually the DNS entry or the vhost, but the sharing of the new domain name in a well-known public place. That's making a lot more sense to me! I can see how that happens unwittingly, though.

We also use CT logs at work to discover subdomains that customers forgot about and that may host vulnerable software (if such broad checks are within the scope the customer contracted us for).
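For a rough idea of how easy that lookup is: crt.sh exposes the CT logs over a simple JSON endpoint (example.com as a stand-in):

```
# List every hostname that has ever appeared in a certificate for
# the domain; %25 is a URL-encoded '%' wildcard.
curl -s 'https://crt.sh/?q=%25.example.com&output=json' \
  | jq -r '.[].name_value' | sort -u
```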

kstrauser a day ago

Yep, that’s right. And I guarantee, like I would bet my retirement savings on it, that someone today has counted on security through obscurity and not realized their new website was compromised a few seconds after they launched it for the first time ever. “I just registered example.com. No one’s ever even heard of it! I’ll just have to clean it up before announcing it”, not realizing they announced it the moment they turned the server on.

3 seconds.

snickerdoodle12 a day ago

I had a similar fun experience when I was generating UUID subdomains and was shocked to see traffic in the logs before ever sharing the URL. I've since switched to a wildcard certificate, but regardless, you can't really trust the hostname to stay secret because of SNI and all that.
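(Concretely: even with a wildcard cert, the hostname travels in cleartext in the TLS ClientHello's SNI field, so anyone on the path can read it unless Encrypted Client Hello is in use. Easy to see for yourself, with made-up names:)

```
# Terminal 1: print the cleartext SNI field from TLS handshakes.
tshark -i any -Y tls.handshake.extensions_server_name \
  -T fields -e tls.handshake.extensions_server_name

# Terminal 2: connect; the "secret" hostname shows up in the
# capture above even though everything else is encrypted.
openssl s_client -connect example.com:443 \
  -servername some-uuid-subdomain.example.com </dev/null
```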

kstrauser 8 hours ago

That would’ve been quite the surprise! I was initially shocked enough when @ and www were getting hammered. A fully random hostname would’ve dazzled me for a bit.