When internal hostnames are leaked to the clown (rachelbythebay.com)
279 points by zdw 9 hours ago | 132 comments
notsylver 8 hours ago | parent | next [-]

I think people are misunderstanding. This isn't CT logs; it's a wildcard certificate, so it wouldn't leak the "nas" part. It's Sentry catching client-side traces and calling home with them, then picking the hostname out of the request that sent them (i.e., "nas.nothing-special.whatever.example.com") and trying to poll it for whatever reason. That poll goes to a separate server that catches the wildcard domain and rejects it.

spondyl 8 hours ago | parent | next [-]

My first thought was perhaps they're trying to fetch a favicon for rendering against the traces in the UI?

n0w 7 hours ago | parent [-]

They're likely trying to retrieve source maps
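(For context: when a browser error event includes stack frames pointing at your host, the ingestion backend typically fetches the referenced script and follows its sourceMappingURL comment. A hedged Python sketch of that idea, placeholders throughout; not Sentry's actual code:)

    # Hypothetical sketch of how a trace-processing backend ends up
    # knocking on the hostname found in a browser stack trace.
    from urllib.parse import urljoin
    import urllib.request

    def find_source_map(frame_url: str) -> str | None:
        """Fetch the script a stack frame points at, then look for the
        sourceMappingURL comment naming the source map to retrieve."""
        with urllib.request.urlopen(frame_url, timeout=10) as resp:
            body = resp.read().decode("utf-8", errors="replace")
        for line in reversed(body.splitlines()):
            if line.startswith("//# sourceMappingURL="):
                return urljoin(frame_url, line.split("=", 1)[1].strip())
        return None

    # A trace from the NAS web UI carries frame URLs on the "private" name,
    # so the backend tries to reach it:
    # find_source_map("https://nas.nothing-special.whatever.example.com/app.min.js")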

hsbauauvhabzb 7 hours ago | parent | prev [-]

Sounds like a great way to get sentry to fire off arbitrary requests to IPs you don’t own.

Sure hope nobody does that targeting IPs (like the blacklist in masscan) that will auto-report you to your ISP/ASN/whatever for your abusive traffic. Repeatedly.

leoc 7 hours ago | parent [-]

Obligatory Bruce Schneier: https://www.schneier.com/blog/archives/2008/03/the_security_...

yabones 18 minutes ago | parent | prev | next [-]

Stuff like this is why I consider uBlock Origin to be the bare minimum security software for going on the web. The amount of third-party scripts running on most pages, constantly leaking data to everybody listening, is just mind-boggling.

It's treating a symptom rather than a disease, but what else can we do?

b1temy 8 hours ago | parent | prev | next [-]

Is "clown GCP Host" a technical term I am unaware of, or is the author just voicing their discontent?

Seems to me that the problem is the NAS's web interface using Sentry for logging/monitoring, and that part of what was logged were internal hostnames, which might be named in a way that carries sensitive info, e.g., the corp-and-other-corp-merger example she gave. So it wouldn't matter that the host is inaccessible on a private network; the name itself is sensitive information.

In that case, I would personally replace the operating system of the NAS with one that is free/open source, that I trust, and that does not phone home. I suppose some form of ad-blocking à la Pi-hole, or some other DNS configuration that blocks the Sentry calls, would work too, but I would just go with an operating system I trust.

jraph 8 hours ago | parent | next [-]

> Is "clown GCP Host" a technical term I am unaware of, or is the author just voicing their discontent?

Clown is Rachel's word for (Big Tech's) cloud.

dehrmann 7 hours ago | parent | next [-]

She was (or is) at Facebook, and "clowntown" and "clowny" are words you see there.

jraph 7 hours ago | parent | next [-]

> She was (or is) at Facebook

was (and she worked at Google too)

> "clowntown" and "clowny" are words you see there.

Didn't know this, interesting!

mintplant 7 hours ago | parent | prev | next [-]

"Clownshoes" is common as an adjective at Mozilla.

iwontberude 6 hours ago | parent | prev | next [-]

I'm interested in the provenance; is it because their pasty white, red-headed CEO resembles and behaves like a clown?

Anon1096 34 minutes ago | parent [-]

No it's because lots of stuff is duct taped together and then you have tons of scripts or tooling that was someone's weekend project (to make their oncall burden easier) that they shared around. Usually there'll be a flag like --clowntown or --clowny-xyz when it's obvious to all parties involved that it's destined to destroy everything one day but YOLO (also a common one).

zombot 5 hours ago | parent | prev [-]

Good to know, I thought at first she meant the current occupant of the President's chair.

baxtr 6 hours ago | parent | prev | next [-]

Anyone know how she came up with the word, or why she chose it?

rwmj 4 hours ago | parent | next [-]

Maybe from JWZ? https://cdn.jwz.org/images/2016/clown-computing.png

kadoban 6 hours ago | parent | prev | next [-]

Probably just because it looks/sounds a little like cloud and has the connotations she wants.

It feels pretty hacker jargon-ish, it has some "hysterical raisins" type wordplay vibes.

oniony 5 hours ago | parent | prev [-]

Maybe she's a juggalo.

senectus1 8 hours ago | parent | prev [-]

Amusingly, it's a term used by my co-workers to describe anyone that's not them.

jraph 7 hours ago | parent | next [-]

Oh well... I suppose humility is your coworkers' defining quality? :-)

senectus1 6 hours ago | parent [-]

oh the answer to this is definitive. :-P

jrflowers 6 hours ago | parent | prev [-]

Your coworkers call you a clown?

senectus1 6 hours ago | parent [-]

I didn't call them workmates.

jrflowers 6 hours ago | parent [-]

Hire somebody to make balloon animals in the office for a couple hours, pay in cash, tell the balloonist that your name is [coworker’s name]

user_of_the_wek 10 minutes ago | parent | prev | next [-]

The circus left town, but the clowns are still here.

rausr 4 hours ago | parent | prev | next [-]

> Is "clown GCP Host" a technical term I am unaware of, or is the author just voicing their discontent?

The term has been in use for quite some time; it voices sarcastic discontent with the hyperscaler platforms _and_ their users (the idea being that the platform is "someone else's computer" or, more up to date, "a landlord for your data"). I'm not sure if she coined it, but if she did, then good on her!

Not everyone believes using "the cloud" is a good idea, and for those of us who have run our own infrastructure "on-premises" or co-located, the clown is considered suitably patronising. Just saying ;)

b1temy 4 hours ago | parent [-]

> the idea being that the platform is "someone else's computer"

I have a vague memory of once having a userscript or browser extension that replaced every instance of the word "cloud" with "other peoples' computers". (iirc while funny, it was not practical, and I removed it).

fwiw, I agree; I do not believe using "the cloud" for everything is a good idea either. I've just never heard the word "clown" being used in this way before now.

masto 2 hours ago | parent [-]

“Cloud to butt” was popular in the early cloud days. It went around Google internally, and caused some… interesting issues.

seethishat 2 hours ago | parent | prev [-]

Also, sometimes, we use the term 'weenie' rather than 'clown'. They are interchangeable.

atmosx 7 hours ago | parent | prev | next [-]

I bought a Synology NAS and have regretted it 3-4 times already. Apart from the software made available by the community, there is very little one can do with this thing.

Using LE to apply SSL to services? Complicated. Non-standard paths, custom distro, everything hidden (you can't figure out where to place the SSL cert or how to restart the service, etc.). Of course you will figure it out if you spend 50 hours… but why?

Don't get me started on the old rsync version, or the lack of midnight commander and/or other utils.

I should have gone with something that runs proper Linux or BSD.

joshstrange 3 hours ago | parent | next [-]

Unless you know what you are walking into ahead of time, I would not recommend Synology to someone who wants to host a bunch of stuff and also wants a NAS. I don't touch any of the container/apps stuff on my Synology(s); they are simply file servers for my application server. For this purpose, I find Synology rock solid and I've been very happy with them.

That said, I'll probably try out the UniFi NAS offerings in the near future. I believe Synology has semi-walked-back its draconian hard drive policy, but I don't trust them not to try that again later. And because I only use my Synology as a NAS, I can switch to something else relatively easily; as long as I can mount it on my app server, I'm golden.

alexalx666 an hour ago | parent | prev | next [-]

I bought a Synology RS217 for $100 last year and it's the best tech purchase I've made in years. The software it comes with is the best web interface I've experienced in years. The simplicity, stability and attention to detail remind me of old Macs. I have a Mac mini as an application server and did not expect to use the Synology for anything but file storage / replication. However, it comes with a great torrent client that I use all the time now. We also use Synology Office instead of Google Docs now. It exceeded all my expectations, and when it dies, I will immediately buy one of the new rack stations they offer.

PunchyHamster 6 hours ago | parent | prev | next [-]

You wanted a server, and you're complaining that a NAS is just a NAS, not a server.

Gud 3 hours ago | parent | next [-]

More like, user wanted an open operating system but chose a proprietary one.

atmosx an hour ago | parent | prev [-]

NAS is the primary function. But yes, I want a full Linux server where I can decide what to install and which protocol to use to upload and/or download files.

tetris11 6 hours ago | parent | prev | next [-]

(Copied from an earlier comment of mine)

There are guides on how to mainline Synology NASes so they run up-to-date Debian: https://forum.doozan.com/list.php

tgpc 5 hours ago | parent | prev | next [-]

please don't do this to your synology

leave it to serve files and iSCSI. it's very good at it

if you leave it alone, no extra software, it will basically be completely stable. it's really impressive

aetherspawn 2 hours ago | parent [-]

Second this: just use it for files, it's great for that. 10+ years of uptime if you leave it alone.

paffdragon 5 hours ago | parent | prev | next [-]

You can run a container on Synology and install your custom services and tools there. At least that is what I do. For custom kernel modules you still need a Synology package, for something like WireGuard.

If you have OPNsense, it has an ACME plugin with a Synology action. I use that to automatically renew and push a cert to the NAS.

That said, since I like to tinker, Synology feels a bit restricted, indeed. Although there is some value in a stable core system (like these immutable distros from Fedora Atomic).

Arrowmaster an hour ago | parent [-]

The extremely old kernel on Synology makes it hard or impossible to run some containers.

reddalo 6 hours ago | parent | prev [-]

I'm so happy I didn't buy a NAS, Synology or not. I think a proper computer running Linux gives me so much more flexibility.

butvacuum 5 hours ago | parent [-]

that's still a NAS.

mike-cardwell an hour ago | parent | prev | next [-]

The only way I can think of to protect against this is to put a reverse proxy in front of it, like Nginx, and inject CSP headers to prevent cross-site requests. It wouldn't stop the NAS server side from making external calls, but it would stop your browser making them on its behalf, as is the case here. It would also block stuff like Google Analytics if they have it. If you set up a proxy, you could also give the NAS a local hostname like nas.local, with a cert signed by your private CA that Nginx knows about, and then point the real hostname at Nginx, which has the wildcard cert.

Bit of a pain to set this all up though. I run a number of services on my home network and I always stick Nginx in front with a restrictive CSP policy, and then open that policy up as needed. For example, I'm running Home Assistant with the Steam plugin, which I assume is responsible for requests from my browser like https://avatars.steamstatic.com/HASH_medium.jpg, which are being blocked by my injected CSP policy.

P.S. I might decide to let that Steam request through so I can see avatars in the UI. I also inject "Referrer-Policy: no-referrer", so if I do decide to do that, at least they won't see my HA hostname in their logs by default.
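For anyone wanting to try this, a minimal sketch of the Nginx side (hostnames, cert paths, and the policy itself are placeholders to adapt):

    server {
        listen 443 ssl;
        server_name nas.example.com;

        ssl_certificate     /etc/nginx/certs/wildcard.example.com.pem;
        ssl_certificate_key /etc/nginx/certs/wildcard.example.com.key;

        # Drop any CSP the NAS sets, then allow same-origin requests only,
        # which stops the browser calling out to sentry.io and friends.
        proxy_hide_header Content-Security-Policy;
        add_header Content-Security-Policy "default-src 'self'" always;
        add_header Referrer-Policy "no-referrer" always;

        location / {
            proxy_pass https://nas.local;
            proxy_set_header Host nas.local;
        }
    }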

mixedbit 5 hours ago | parent | prev | next [-]

I have investigated a similar situation on Heroku. Heroku assigns a random subdomain suffix to each new app, so app URLs are hard to guess and look like this: test-app-28a8490db018.herokuapp.com. I noticed that as soon as a new Heroku app is created, without making any requests to the app that could leak the URL via a DNS lookup, the app is hit by requests from automated vulnerability scanning tools. Heroku confirmed that this is due to the new app URL being published in certificate transparency logs, which are actively monitored by vulnerability scanners.
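You can watch this happen yourself: the scanners mostly just poll public CT log front-ends. A rough sketch of the kind of query they run, using crt.sh's JSON endpoint (the domain is a placeholder):

    # Rough sketch of CT-log monitoring via crt.sh's public JSON endpoint.
    import json
    import urllib.request

    def ct_logged_names(domain: str) -> set[str]:
        # %25 is a URL-encoded "%", i.e. the wildcard query "%.domain".
        url = f"https://crt.sh/?q=%25.{domain}&output=json"
        with urllib.request.urlopen(url, timeout=30) as resp:
            entries = json.load(resp)
        names: set[str] = set()
        for entry in entries:
            names.update(entry["name_value"].splitlines())
        return names

    # Every name printed here became public the moment its cert was issued.
    print(sorted(ct_logged_names("example.com")))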

rini17 an hour ago | parent | prev | next [-]

Fancy web interfaces are the road to hell. Do the simplest thing that works: plain Apache or Nginx with WebDAV and basic auth (proven code, minimal attack surface). Maybe a firewall with hashlimit on new connections; I have it set to 2/minute, and for a browser that's actually fine, while moronic bots make a new connection for every request. When they improve, there's always fail2ban.
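(Roughly like this with iptables; ports and rates are placeholders:)

    # Allow ~2 new HTTPS connections per source IP per minute (small burst),
    # drop the rest.
    iptables -A INPUT -p tcp --dport 443 -m conntrack --ctstate NEW \
      -m hashlimit --hashlimit-name https-new --hashlimit-mode srcip \
      --hashlimit-upto 2/minute --hashlimit-burst 5 -j ACCEPT
    iptables -A INPUT -p tcp --dport 443 -m conntrack --ctstate NEW -j DROP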

That the NAS server, incl. hostname, is public does not bother me then.

ggm 7 hours ago | parent | prev | next [-]

Reverse address lookup servers routinely see escaped attempts to resolve ULA and RFC 1918 space. If you can tie the resolver to other valid data, you know inside state.

Public services see one-way traffic (no TCP return flow possible) from almost any source IP. If you can tie that to other corroborated data, the same applies: you see packets from "inside" all the time.

Darknet collection during final /8 run-down captured audio in UDP.

Firewalls? ACLs? Pah. Humbug.

_gmax1 6 hours ago | parent [-]

"Darknet collection during final /8 run-down captured audio in UDP."

Mind elaborating on this? SIP traffic from which year?

ggm 5 hours ago | parent | next [-]

2010/2011 time frame. Google and others helped sink the traffic; it's all written up at APNIC Labs. It's how 1.1.1.0/24 got held back from general release.

LtdJorge 6 hours ago | parent | prev [-]

RTP I’d say

notpushkin 2 hours ago | parent | prev | next [-]

https://archive.ph/siEdE

ashu1461 6 hours ago | parent | prev | next [-]

Isn't the article over-emphasising the leakage of internal URLs a little bit?

Internal hostnames leaking is real, but in practice it's just one tiny slice of a much larger problem: names and metadata leak everywhere (logs, traces, code, monitoring tools, etc.).

reddalo 6 hours ago | parent [-]

In other words: never put sensitive information in names and metadata.

dmichulke 5 hours ago | parent [-]

Or name them after little bobby tables.

Is there some sort of injection that's a legal host name?

cwillu an hour ago | parent | prev | next [-]

Just getting 404 not found

zaptheimpaler 7 hours ago | parent | prev | next [-]

Oh god, this sucks. I've been setting up lots of services on my NAS pointing to my own domains recently. Can't even name the domains on my own damn server with an expectation of privacy now.

jraph 7 hours ago | parent | next [-]

> Can't even name the domains on my own damn server with an expectation of privacy now.

You never could. A host name or a domain is bound to leave your box; it's meant to. Sending an email with a local email client is all it takes.

(Not that the NAS leak doesn't still suck)

ahoka 2 hours ago | parent | next [-]

I have internal zones in my home network, and requests to resolve them never leave the private network. So no, it's not meant to.
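For reference, with Unbound the split looks something like this (zone and address are placeholders); queries for the zone are answered locally and never forwarded upstream:

    server:
        local-zone: "home.example.com." static
        local-data: "nas.home.example.com. IN A 192.168.10.5"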

jraph an hour ago | parent [-]

"Meant to" may indeed not be really accurate.

However, domains and host names were not designed to be particularly private and should not be considered secret. Many things don't treat them as private, so you should not put anything sensitive in a host name, even on a network that's supposedly private. Unless your private network is completely air-gapped.

Now, I wouldn't be surprised if hostnames were in fact originally expected to be explicitly public.

zaptheimpaler 6 hours ago | parent | prev [-]

I don't know much about email, but how would some random service send an email from my domain if I've never given it any auth tokens?

TheDong 4 hours ago | parent | next [-]

You don't need any auth to send an email from your domain, or in fact from any domain. Just set whatever `From` you want.

I've received many emails from `root@localhost` over the years.

Admittedly, most residential ISPs block all SMTP traffic, and other email servers are likely to drop it or mark it as spam, but there's no strict requirement for auth.
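To make it concrete, a sketch using Python's stdlib; every name here is a placeholder, and as noted, most residential networks will refuse this on port 25 anyway:

    # SMTP itself never verifies the From header.
    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "root@nas.internal.example.com"  # arbitrary, no auth behind it
    msg["To"] = "someone@example.org"
    msg["Subject"] = "hello from a name I don't own"
    msg.set_content("The receiving server can only judge this via SPF/DKIM/DMARC.")

    # Deliver straight to the recipient's MX; no credentials involved.
    with smtplib.SMTP("mx.example.org", 25) as smtp:
        smtp.send_message(msg)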

prmoustache 3 hours ago | parent | next [-]

> Admittedly, most residential ISPs block all SMTP traffic, and other email servers are likely to drop it or mark it as spam, but there's no strict requirement for auth.

Source? I've never seen that. Nobody could use their email provider of choice if that was the case.

namibj 2 hours ago | parent | next [-]

They don't do DPI, they just look at the destination port. That's why there's a separate submission port for handing mail to your own mail agent, where such auth is expected, so typically only outbound mail is even submitted there. Technically local-delivery mail too, e.g. where the From and To headers are valid and have the same domain.

TheDong 2 hours ago | parent | prev [-]

The 3 most common ISPs in the US are Comcast, Spectrum, and AT&T

Comcast blocks port 25: https://www.xfinity.com/support/articles/email-port-25-no-lo...

AT&T says "port 25 may be blocked from customers with dynamically-assigned Internet Protocol addresses", which is the majority of customers https://about.att.com/sites/broadband/network

What ISP are you using that isn't blocking port 25, and have you never had the misfortune of being stuck with Comcast or AT&T as your only option?

flexagoon 2 hours ago | parent | prev [-]

You can, but most email providers will immediately reject your email or put it into spam because of missing DKIM/DMARC/SPF

jraph 6 hours ago | parent | prev [-]

It shouldn't, but it's usual to configure random services to send mail to users, for instance for password resets, or for random notifications.

Another thing that usually sends mail is cron, but that should only go to the admin(s).

Some services might also display the host name somewhere in their UI.

jeroenhd 6 hours ago | parent | prev [-]

The (somewhat affordable) productized NASes all suffer from big tech diseases.

I think a lot of people underestimate how easy a "NAS" can be made if you take a standard PC, install some form of desktop Linux, and hit "share" on a folder. Something like TrueNAS or one of its forks may also be an option if you're into that kind of stuff.

If you want the fancy docker management web UI stuff with as little maintenance as possible, you may still be in the NAS market, but for a lot of people NAS just means "a big hard drive all of my devices can access". From what I can tell, the best middle point between "what the box from the store offers" and "how to build one yourself" is a (paid-for) NAS OS like HexOS, where analytics, tracking, and data sales are not used to cover for race-to-the-bottom pricing.

prmoustache 3 hours ago | parent | next [-]

I don't even understand what kind of webui one would want.

All you really need is a bunch of disks and an operating system with an SSH server. Even the likes of Samba and NFS aren't really necessary anymore.

jeroenhd 2 hours ago | parent [-]

A bunch of out-of-the-box NAS manufacturers provide a web-based OS-like shell with file managers, document editors, as well as an "app store" for containers and services.

I see the traditional "RAID with a SMB share" NAS devices less and less in stores.

If only storage target mode[1] had some form of authentication, it'd make setting up a barebones NAS an absolute breeze.

[1]: https://www.freedesktop.org/software/systemd/man/257/systemd...

zaptheimpaler 5 hours ago | parent | prev | next [-]

Actually, I host everything on a Linux PC/server, but a different box runs pfSense and a local DNS resolver, so I was talking about setting up split-brain DNS there, to avoid manually editing the hosts file on every machine and keeping it up to date with IP changes. Personally I really like docker compose; it's made running the little home server very easy.

jeroenhd 5 hours ago | parent [-]

Personally, I've started just using mDNS/Bonjour for local devices. Comes preinstalled on most devices (may need a manual package on BSD/Linux servers) and doesn't require any configuration. Just type in devicename.local and let the network do the rest. You can even broadcast additional device names for different services, so you don't need to do plex.nas.local, but can just announce plex.local and nas.local from the same machine.
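(For the extra names, something like avahi-publish does the announcing; one process per name, kept running. Name and address are placeholders:)

    # Announce an extra mDNS name pointing at the same box.
    avahi-publish --address plex.local 192.168.1.20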

There's a theoretical risk of MitM attacks for devices reachable over self-signed certificates, but if someone breaks into my (W)LAN, I'm going to assume I'm screwed anyway.

I've used split-horizon DNS for a couple of years, but it kept breaking in annoying ways. My current setup (involving the Pi-hole web UI, because I was sick of maintaining BIND files) still breaks DNSSEC for my domain, and I try to avoid it when I can.

AndyMcConachie 4 hours ago | parent | prev [-]

The real trick, and the reason I don't build my own NAS, is standby power usage. How much wattage will a self-built Linux box draw when it's not being used? It's not easy to figure out, and it's not easy to build a NAS optimized for this.

Whereas Synology or other NAS manufacturers can tell me these numbers exactly and people have reviewed the hardware and tested it.

teekert 7 hours ago | parent | prev | next [-]

Is this a Chrome/Edge thing? Or do privacy-respecting browsers also do this? If so, it's unexpected.

If Firefox also leaks this, I wonder if this is something mass-surveillance related.

(Judging from the downvotes, I misunderstood something)

nomercy400 6 hours ago | parent [-]

From what I understand, sentry.io is like a tracing and logging service, used by many organizations.

This helps you (=NAS developer) to centralize logs and trace a request through all your application layers (client->server->db and back), so you can identify performance bottlenecks and measure usage patterns.

This is what sits behind the 'anonymized diagnostics' and 'telemetry' settings you are asked to enable or consent to.

For a web UI it is implemented via JavaScript, which runs on the client's machine and hooks into the clicks, API calls, and page content. It then sends statistics and logs back to, in this case, sentry.io. Your browser just sees JavaScript, so don't blame it. Privacy Badger might block it.

It is as nefarious as the developer of the application wants it to be. Normally you would use it to centralize logging, find performance issues, and get a basic idea of which features users actually use, so you can debug more easily. But you can also use it to track users. And don't forget, sentry.io is a cloud solution: if you deploy it on machines outside your control, expect the data to be public. Sentry has a self-hosted solution, btw.
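For a sense of how little it takes on the developer's side, the browser SDK setup boils down to something like this (illustrative only, placeholder DSN):

    // From here on, errors, traces, and the page's hostname flow to
    // sentry.io by default.
    import * as Sentry from "@sentry/browser";

    Sentry.init({
      dsn: "https://examplePublicKey@o0.ingest.sentry.io/0",
      tracesSampleRate: 1.0, // send performance traces, not just errors
    });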

jeroenhd 6 hours ago | parent [-]

My employer uses Sentry for (backend) metrics collection, so I had to unblock it to do my job. I wish Sentry had separate infra for "operating on data collected by Sentry" and "submitting every mouse click to Sentry", so I could block their mass surveillance and still do my job, but I suppose that would cut into their profit margins.

My current solution is a massive hack that breaks down every now and then.

wbobeirne an hour ago | parent [-]

Most organizations I've set Sentry up for tunnel the traffic through their own domain, since many blocking extensions block Sentry requests by default. Their own docs recommend it as well. All that to say: it's not trivial to fully block it, and you were probably sending telemetry anyway, even with the domain blocked.
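The tunnel setup is roughly a one-line SDK option plus a forwarding endpoint on your own domain (path and DSN here are placeholders):

    // The browser posts events to a same-origin path; a small server-side
    // handler then forwards them to sentry.io, so blockers never see a
    // sentry.io hostname.
    Sentry.init({
      dsn: "https://examplePublicKey@o0.ingest.sentry.io/0",
      tunnel: "/monitoring-tunnel",
    });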

jeroenhd an hour ago | parent [-]

With the right tricks (CNAME detection, URL matching) a bunch of ad blocking tools still pick up the first-party proxies, but that only works when directly communicating with the Sentry servers.

Quite a pain that companies refuse to take no for an answer :/

stingraycharles 8 hours ago | parent | prev | next [-]

I don’t understand. How could a GCP server access the private NAS?

I agree the web UI should never be monitored using Sentry. I can see why they would want it, but at the very least it should be opt-in.

minitech 8 hours ago | parent | next [-]

It couldn’t, but it tried.

copperx 7 hours ago | parent [-]

A for effort, F for firewall.

throwaway290 8 hours ago | parent | prev [-]

It said knocking, not accessing

also

> you notice that you've started getting requests coming to your server on the "outside world" with that same hostname.

NitpickLawyer 8 hours ago | parent | prev | next [-]

Not sure why they made the connection to sentry.io and not to CT logs. My first thought was that "*.some-subdomain." got added to the CT logs and someone is scanning *. for well-known hosts, of which "nas" would be one. Curious if they have more insight into sentry.io leaking, and where it leaks to...

jraph 8 hours ago | parent | next [-]

That hypothesis seems less likely and more complicated than the sentry one.

Scanning wildcards for well-known subdomains seems both quite specific and rather costly for unclear benefits.

flexagoon 2 hours ago | parent [-]

Bots regularly try to brute-force domain paths to find things like /wp-admin; brute-forcing subdomains isn't any more complicated.

jraph 2 hours ago | parent [-]

> Bots regularly try to brute-force domain paths to find things like /wp-admin

Sure: when WordPress powers 45% of all websites, your odds of reaching something by hitting /wp-admin are high.

The space of all possible unknown subdomains is way bigger than a handful of well-known paths you can attack.

rawling 7 hours ago | parent | prev | next [-]

I feel like the author would have noticed and said so if she was getting logs for more than just the one host.

A1kmm 7 hours ago | parent | prev | next [-]

But she mentioned: 1) it isn't in DNS, only /etc/hosts, and 2) they are making a connection to it. So they'd need to get the IP address to connect to from somewhere as well.

jeroenhd 6 hours ago | parent | next [-]

From the article:

> You're able to see this because you set up a wildcard DNS entry for the whole ".nothing-special.whatever.example.com" space pointing at a machine you control just in case something leaks. And, well, something *did* leak.

They don't need the IP address itself; it sounds like they're not even connecting to the same host.

bardsore 6 hours ago | parent | prev [-]

Unless she hosts her own cert authority or is using a self-signed cert, the wildcard cert she mentions is visible to the public on sites such as https://crt.sh/.

heipei 4 hours ago | parent [-]

Yes, the wildcard cert, but not the actual hostname under that wildcard.

imtringued 5 hours ago | parent | prev [-]

Because sentry.io is a commercial application monitoring tool which has zero incentive to do any kind of application monitoring for non-paying customers. That's just cost without benefit.

You would have to argue that a random third party is using, and therefore paying, sentry.io to monitor random subdomains, for the dubious benefit of knowing that the domain exists, even though they would be paying for something that is way more expensive than a simple scanner.

It's far more likely that the NAS vendor integrated sentry.io into the web interface and sentry.io is simply trying to communicate with monitoring endpoints that are part of said integration.

From the perspective of the NAS vendor, the benefits of analytics are obvious. Since there is no central NAS server where all the logs are gathered, they would have to ask users to send error logs manually, which is unreliable. Instead of waiting for users to report errors, the NAS vendor decided to be proactive and send error logs to a central service.

TZubiri 8 hours ago | parent | prev | next [-]

>Hope you didn't name it anything sensitive, like "mycorp-and-othercorp-planned-merger-storage", or something.

So, no one competent is going to do this. Domains are not encrypted by HTTPS (the name travels in the clear via DNS and SNI), so any sensitive info goes in the URL path, which is encrypted.

I think being controlling of domain names is a sign of a good sysadmin, it's also a bit schizophrenic, but you gotta be a little schizophrenic to be the type of sysadmin that never gets hacked.

That said, domains not leaking is one of those "clean sheet" features that you pursue for no practical reason at all; it feels nice, but if you don't get it, it's not consequential. It's like driving at exactly 50mph, or like having a green streak on GitHub. You are never going to rely on that secrecy, if only because some ISP might see it, but it's 100% achievable that no one will start pinging your internal host and polluting your hosts (if you do domain name filtering).

So what I'm saying is, I appreciate this type of effort, but it's a bit dramatic. Definitely uninstall whatever junk leaked your domain though, but it's really nothing.

wasmitnetzen an hour ago | parent | next [-]

I've blown fairly competent colleagues' minds multiple times by showing them the existence of certificate transparency logs. They were very much under the impression that hostnames can be kept secret as a protection against external infrastructure mapping.

jraph 7 hours ago | parent | prev | next [-]

> any sensitive info is pushed to the URL Path

This too is not ideal. It gets saved in the browser history, and if the url is sent by message (email or IM), the provider may visit it.

> Definitely uninstall whatever junk leaked your domain though, but it's really nothing.

We are used to tracking being everywhere, but it is scandalous and should be considered as such. Not the subdomain leak part; that's just how Rachel noticed. The unadvertised tracking from an appliance chosen to be connected privately.

TZubiri 7 hours ago | parent [-]

>This too is not ideal. It gets saved in the browser history, and if the url is sent by message (email or IM), the provider may visit it.

Sure. POST for extra security.

> Not the subdomain leak part, that's just how Rachel noticed, but the non advertised tracking from an appliance chosen to be connected privately.

If this were a completely local product, like, say, a USB stick? Sure. But this is a Network Attached Storage product, and the user explicitly chose to use network functions (domains, HTTP); it's not the same category of issue.

Jolter 7 hours ago | parent | prev | next [-]

Obl. nitpick: you mean paranoia, presumably. Schizophrenia is a dissociative/psychotic disorder, paranoia is the irrational belief that you’re being persecuted/watched/etc.

Btw, in this case it can’t be paranoia since the belief was not irrational - the author was being watched.

TZubiri 7 hours ago | parent [-]

You are right, I meant paranoid.

>Btw, in this case it can’t be paranoia since the belief was not irrational - the author was being watched.

Yes, but I mean being overly cautious in the threat model. For example, birds may be watching through my window; it's true, and I might catch a bird watching my house, but it's paranoid in the sense that it's too tight a threat model.

nottorp 2 hours ago | parent | next [-]

One never knows, that owl might be electric.

jraph 7 hours ago | parent | prev [-]

I know analogies are not meant to be perfect, but birds don't mass-watch, and they don't systematically watch your every move either.

nirse 7 hours ago | parent [-]

That's what you think...

jraph 7 hours ago | parent [-]

:-)

OptionOfT 7 hours ago | parent | prev | next [-]

TLS 1.3 has Encrypted Client Hello (ECH), which encrypts the domain name during an HTTPS connection.

voidUpdate 5 hours ago | parent | prev [-]

> "So, no one competent is going to do this"

What about all the people who are incompetent?

that_guy_iain 7 hours ago | parent | prev | next [-]

This is actually a really interesting way to attack a sensitive network: it lets you map the internal layout of that network. Getting access is obviously the main challenge, but once you're in, you need to know where to go and what to look for. If you've already got that knowledge when planning the attack to gain entry, you've got the upper hand. So while it kinda seems like "OK, so they have a hostname they can't access, why do I care?", if you're doing high-end security at the sysadmin level, this is the sort of small nitpicking it takes to be the best.

dcrazy 8 hours ago | parent | prev | next [-]

Slightly surprised that this blog seems to have succumbed to inbound traffic.

unsnap_biceps 7 hours ago | parent | next [-]

If you're on an Apple device, disable Private Relay. It appears the blog has tar-pitted Private Relay traffic.

bhaney 7 hours ago | parent [-]

It's tar-pitting my normal unproxied residential traffic too

computerfriend 6 hours ago | parent [-]

Same, plus my VPN connection.

daveoc64 3 hours ago | parent | prev | next [-]

Rachel has blogged quite a bit about blocking badly behaved RSS clients in recent years.

I'd link you to one of the articles if I wasn't blocked too, and my VPN wasn't also blocked!

lapcat 2 hours ago | parent [-]

> Rachel has blogged quite a bit about blocking badly behaved RSS clients in recent years.

Unfortunately that blocking is buggy and overzealous.

I just gave up eventually and unsubscribed from the RSS feed.

that_lurker 8 hours ago | parent | prev [-]

Opens fine for me

urbandw311er 5 hours ago | parent [-]

“Works on my machine”

renewiltord 6 hours ago | parent | prev | next [-]

Haha, this obtuse way of speaking is such a classic FAANG move. I wonder if it's because of internal corporate-style comms. Patio11 also talks like this. Maybe because Stripe is pretty much a private FAANG.

ranger_danger 8 hours ago | parent | prev | next [-]

Pennywise found my hostname? We're cooked.

defrost 8 hours ago | parent | next [-]

You're IT, I'm IT, We're all IT.

bonesss 7 hours ago | parent [-]

We all use floats down here.

ahoka 2 hours ago | parent [-]

For representing monetary values.

TeapotNotKettle 8 hours ago | parent | prev [-]

Misconfigured clown - bad news indeed.

fragmede 8 hours ago | parent | prev [-]

This highlights a huge problem with Let's Encrypt and CT logs, which is that the Internet is a bad place, with bad people looking to take advantage of you. If you use Let's Encrypt for SSL certs (which you should), that hostname gets published to the world, and the server immediately gets pummeled by requests for all sorts of fresh-install pages, like wp-admin or phpmyadmin, from attackers.

krautsauer 8 hours ago | parent | next [-]

That may be related, but it's not what happened here. Wildcard cert and all.

prmoustache 3 hours ago | parent | prev | next [-]

Why would you care that your hostname on a local-only domain is published to the world if it is not reachable from outside? Publicly available hosts are already published to the world anyway through DNS.

LetsEncrypt doesn't make a difference at all.

ale42 6 hours ago | parent | prev | next [-]

It's not just Let's Encrypt, right? CT is a requirement for all certificate authorities nowadays. You can just look at the certificate of www.google.com and see that it has been published to two CT logs (Google's and Sectigo's).

nottorp 2 hours ago | parent | next [-]

Now I get why they want to reduce certificate validity to 20 minutes. The logs will become so spammy then that the bad guys won't be able to scan all hosts in them any more...

tialaramex 5 hours ago | parent | prev [-]

Technically logging certificates is not a Requirement of the trust stores, but most web browsers won't accept a certificate which isn't presented with a proof of logging, typically (but not always) baked inside the certificates.

The reason for this distinction is that failing to meet a Requirement for issued certificates would mean the trust stores might remove your CA, but several CAs today do issue unlogged certificates - and if you wanted to use those on a web server you would need to go log them and staple the proofs to your certs in the server configuration.

Most of the rules (the "Baseline Requirements" or BRs) are requirements and must be followed for all issued certificates, but the rule about logging deliberately doesn't work that way. The BRs do require that a CA can show us - if asked - everything about the certificates they issued, and these days for most CAs that's easiest accomplished by just providing links to the logs e.g. via crt.sh -- but that requirement could also be fulfilled by handing over a PDF or an Excel sheet or something.

thakoppno 8 hours ago | parent | prev | next [-]

> the Internet is a bad place

FWIW - it’s made of people

TZubiri 8 hours ago | parent [-]

No, it's made of systems made by people, systems which may have grown and mutated so many times that the original purpose and ethics would be unrecognizable to the system designers. This can take decades in the case of tech like SMTP, HTTP, and JS, but now it can be days, in the era of Moltbots and vibecoding.

Spivak 8 hours ago | parent | prev | next [-]

I like only getting *.domain for this reason. There's no expectation of hiding the domain itself, but if they want to figure out where other things are hosted, they'll have to guess.

ttoinou 8 hours ago | parent | next [-]

So how do you get this ?

rossy 8 hours ago | parent [-]

Let's Encrypt can issue wildcard certs too
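Wildcard issuance requires the DNS-01 challenge, e.g. (domain is a placeholder):

    # Prove control of the domain via a DNS TXT record, interactively:
    certbot certonly --manual --preferred-challenges dns -d '*.example.com'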

hsbauauvhabzb 7 hours ago | parent | prev [-]

That's really not a great fix. If those hostnames leak, they leak forever. I'd be surprised if AV solutions and/or Windows aren't logging these things.

jesterson 8 hours ago | parent | prev [-]

> If you use LetsEncrypt for ssl certs (which you should)

You meant you shouldn't, right? Partially for exactly the reasons you stated later in the same sentence.

josh3736 7 hours ago | parent [-]

Let's Encrypt has nothing to do with this problem (of Certificate Transparency logs leaking domain names).

CA/B Forum policy requires every CA to publish every issued certificate in the CT logs.

So if you want a TLS certificate that's trusted by browsers, the domain name has to be published to the world. It doesn't matter where you got your certificate; you are going to start getting requests from automated vulnerability scanners looking to exploit poorly configured or un-updated software.

Wildcards are used to work around this, since what gets published is *.example.com instead of nas.example.com, super-secret-docs.example.com, etc — but as this article shows, there are other ways that your domain name can leak.

So yes, you should use Let's Encrypt, since paying for a cert from some other CA does nothing useful.

jesterson 7 hours ago | parent [-]

Statistically, the amount of parasite scanning on LE-"secured" domains is way higher compared to purchased certificates. And yes, this is without voluntary publishing on LE's side.

I am not entirely sure what LE does differently, but we made very clear observations about it in the past.