| ▲ | sebstefan 4 days ago |
| Dodged a bullet indeed. I find it insane that someone would get access to a package like this and then just push a shitty crypto stealer. You're a criminal with a one-in-a-million opportunity. Wouldn't you invest an extra week pushing a more fleshed-out exploit? You could exfiltrate API keys, add your SSH public key to the server and then exfiltrate the server's IP address so you can snoop around in there manually; if you're on a dev's machine, maybe grab the browser's profiles and the session tokens for common shopping websites? My personal desktop has all my cards saved on Amazon. On my work laptop, depending on the period of my life, you could have had access to stuff you wouldn't believe either. You don't even need to do anything with those yourself; there are forums to sell that stuff. Surely there's an explanation, or is it that all the good cybercriminals have stable high-paying jobs in tech, and this is what's left for us? |
|
| ▲ | com2kid 4 days ago | parent | next [-] |
| > You're a criminal with a one-in-a-million opportunity. Wouldn't you invest an extra week pushing a more fleshed-out exploit? Because of the way this was pulled off, it was going to be found out right away. It wasn't a subtle insertion; it was a complete account takeover. The attacker had only hours before discovery, so the logical thing to do is a hit and run. They asked what the most money is that can be extracted in just a few hours in an automated fashion (no time to investigate targets manually one at a time), and crypto is the obvious answer. Unless the backdoors were so good they weren't going to be discovered even though half the world would be dissecting the attack code, there was no point in even trying. |
| |
| ▲ | pluto_modadic 4 days ago | parent | next [-] | | "found out right away"... by people with time to review security bulletins. There's loads of places I could see this slipping through the cracks for months. | | |
| ▲ | andrewstuart2 4 days ago | parent | next [-] | | I'm assuming they meant the account takeover was likely to be found out right away. You change your password on a major site like that and you're going to get an email about it. Login from a new location also triggers these emails, though I admit I haven't logged onto NPM in quite a long time so I don't know that they do this. It might get missed, but I sure notice any time account emails come through even if it's not saying "your password was reset." | |
| ▲ | benoau 4 days ago | parent | prev | next [-] | | There's probably already hundreds of thousands of Jira tickets to fix it with no sprint assigned.... | | |
| ▲ | brazzy 3 days ago | parent | next [-] | | I feel attacked. And very, very happy that we're proxying all access to npm through Artifactory, which allowed us to block the affected versions and verify that they were in fact never pulled by any of our builds. | | |
| ▲ | Aeolun 3 days ago | parent | next [-] | | The only problem is the Artifactory instance is on the other side of the world instead of behind the convenient npmjs CDN, so installing packages takes 5x longer... | |
| ▲ | pixl97 3 days ago | parent | prev [-] | | I was about to say: if you're in a company of any size and you're not doing it this way, you're doing it wrong. |
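For readers unfamiliar with the proxy setup described above: pointing npm at an internal Artifactory remote repository is a one-line client change, and blocking bad versions then happens centrally on the proxy. A minimal sketch of the client side, with a hypothetical host and repository name:

```ini
# .npmrc — route all npm traffic through an internal Artifactory proxy.
# Host and repository names here are hypothetical; match your instance.
registry=https://artifactory.example.com/artifactory/api/npm/npm-remote/
always-auth=true
```

Because every install then flows through the proxy's cache, an administrator can exclude affected versions in one place and check the cache logs to confirm whether any build ever pulled them.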
| |
| ▲ | hylaride 3 days ago | parent | prev [-] | | Ugh, have some respect. Some of us have PTSD from dealing with security issues where the powers that be prevented us from dealing with them by deprioritizing them during backlog grooming. My last company literally refused to do any security work except CVE turndowns, and only because those were contractually promised via a customer contract. |
| |
| ▲ | zahlman 4 days ago | parent | prev | next [-] | | Yes, but this is an ecosystem large enough to include people who have that time (and inclination and ability); and once they have reported a problem, everyone is on high alert. | | |
| ▲ | wongarsu 4 days ago | parent [-] | | If you steal the cookies from dev machines, or steal SSH keys along with a list of recent SSH connections, or do any other credential theft, there are going to be lots of people left impacted. Yes, lots of people reading tech news or security bulletins are going to check if they were compromised and preemptively revoke those credentials. But that's work, meaning even among those informed there will be many who just assume they weren't impacted. Lots of people/organisations are going to be complacent and leave you with valid credentials. | |
| ▲ | ameliaquining 4 days ago | parent | next [-] | | If a dev doesn't happen to run npm install during the period between when the compromised package gets published and when npm yanks it (which for something this high-profile is generally measured in hours, not days), then they aren't going to be impacted. So an attacker's patience won't be rewarded with many valid credentials. | | |
| ▲ | giveita 3 days ago | parent [-] | | Dev, or their IDE, agent, etc. | | |
| ▲ | komali2 3 days ago | parent [-] | | Their build chain, CI environment, server... | | |
| ▲ | ameliaquining 3 days ago | parent [-] | | npm ci wouldn't trigger this; it doesn't pick up newly published package versions. I suppose if you got a PR from Dependabot updating you to the compromised package, and happened to merge it within the window of vulnerability, then you'd get hit, but that will likewise not affect all that many developers. Or if you'd configured Dependabot to automatically merge all updates without review; I'm not sure how common that is. |
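The lockfile point above is also why post-incident checks were mostly mechanical: every installed version is pinned in `package-lock.json`, so you can scan it against an advisory list. A rough sketch in Node — the `package@version` pairs here are placeholders, not the real advisory list from this incident:

```javascript
// A rough sketch: scan a package-lock.json for known-bad versions before a
// deploy. The package@version pairs below are placeholders, NOT the real
// advisory list from this incident.
const BAD = new Set(['chalk@0.0.0-bad', 'debug@0.0.0-bad']);

function findCompromised(lock) {
  const hits = [];
  // npm v7+ lockfiles list every installed package under "packages", keyed
  // by its node_modules path; "" is the root project itself.
  for (const [path, meta] of Object.entries(lock.packages ?? {})) {
    if (path === '') continue;
    // The package name is whatever follows the last "node_modules/" segment
    // (handles nested paths like "node_modules/a/node_modules/chalk").
    const name = path.split('node_modules/').pop();
    if (BAD.has(`${name}@${meta.version}`)) hits.push(path);
  }
  return hits;
}

// Inlined lockfile fragment for illustration; normally you'd
// JSON.parse(fs.readFileSync('package-lock.json', 'utf8')).
const lock = {
  lockfileVersion: 3,
  packages: {
    '': { name: 'my-app', version: '1.0.0' },
    'node_modules/chalk': { version: '0.0.0-bad' },
    'node_modules/debug': { version: '4.4.1' },
  },
};

console.log(findCompromised(lock)); // → [ 'node_modules/chalk' ]
```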
|
|
| |
| ▲ | com2kid 4 days ago | parent | prev [-] | | But that is dumb luck. Release an exploit and hope you can then gain further entry into a system at a company that is both high value and doesn't have any basic security practices in place. That could have netted the attacker something much more valuable, but it is pure hit or miss, and it requires more skill and patience for a payoff. Versus: blast out some crypto-stealing code and grab as many funds as possible before being found out. > Lots of people/organisations are going to be complacent and leave you with valid credentials You'd get non-root credentials on lots of dev machines, likely some non-root credentials on prod machines, and possibly root access to some poorly configured machines. Two-factor is still in place; you only have whatever creds that npm install was run with. Plenty of the really high-value prod targets may very well be on machines that don't even have publicly routable IPs. With a large enough blast radius, this may have worked, but it wouldn't be guaranteed. |
|
| |
| ▲ | joshuat 4 days ago | parent | prev [-] | | The window of installation time would be pretty minimal, and the operating window would only be as long as those who deployed while the malicious package was up waited to do another deploy. |
| |
| ▲ | bobbylarrybobby 4 days ago | parent | prev | next [-] | | If they'd waited a week before using their ill-gotten credentials to update the packages, would they have been detected in that week? | | | |
| ▲ | nialv7 3 days ago | parent | prev | next [-] | | > it was a complete account takeover Is that so? From the email it looks like they MITM'd the 2FA setup process, so they would have qix's 2FA secret. They didn't have to immediately take over qix's account and lock him out; they should have had all the time they needed to come up with a more sophisticated payload. |
| ▲ | nurettin 3 days ago | parent | prev | next [-] | | > They asked what is the most money that can be extracted in just a few hours in an automated fashion (no time to investigate targets manually one at a time) and crypto is the obvious answer. A decade ago my root/123456 SSH password got pwned in 3-4 days. (I was gonna change to certificates!) Hetzner alerted me saying that I had filled my entire 1TB/mo download quota. Apparently, the attacker (automation?) took over and used the box to scrape Alibaba, or did something with their cloud on port 443. It took a few hours to eat up every last byte. It felt like this was part of a huge operation. They also left a non-functional crypto miner in there that I simply couldn't remove. So while they could have cryptolocked, they just used it for something insidious and left it alone. |
| ▲ | jowea 3 days ago | parent | prev [-] | | To be fair, this wasn't a super demanding 0-day attack, it was a slightly targeted email phish. Maybe the attacker isn't that sophisticated and just went with what is familiar? |
|
|
| ▲ | root_axis 4 days ago | parent | prev | next [-] |
| Stolen cryptocurrency is a sure thing because fraudulent transactions can't be halted, reversed, or otherwise recovered. Things like a random dev's API and SSH keys are close to worthless unless you get extremely lucky, and even then you have to find some way to sell or otherwise make money from those credentials, the proceeds of which will certainly be denominated in cryptocurrency anyway. |
| |
| ▲ | buu700 4 days ago | parent | next [-] | | Agreed. I think we're all relieved at the harm that wasn't caused by this, but the attacker was almost certainly more motivated by profit than harm. Having a bunch of credentials stolen en masse would be a pain in the butt for the rest of us, but from the attacker's perspective your SSH key is just more work and opsec risk compared to a clean crypto theft. Putting it another way: if I'm a random small-time burglar who happens to find himself in Walter White's vault, I'm stuffing as much cash as I can fit into my bag and ignoring the barrel of methylamine. | |
| ▲ | jimbo808 4 days ago | parent | prev | next [-] | | And it's probably the lowest risk way to profit from this attack | |
| ▲ | babypuncher 4 days ago | parent | prev [-] | | Ultimately, stolen cryptocurrency doesn't cause real world damage for real people, it just causes a bad day for people who gamble on questionable speculative investments. The damage from this hack could have been far worse if it was stealing real money people rely on to feed their kids. | | |
| ▲ | aspenmayer 3 days ago | parent [-] | | You have the context sort of wrong. To do a comparable “real money” heist en masse, you would be stealing from banks or from the customers of one, or via debit or credit cards. It’s real enough money, but those fraudulent transactions would be covered by existing protections, like FDIC insurance or chargebacks. I don’t think anyone could steal much cash in a single heist from a bank or other hard target, so your analogy is confusing. There is no analogous situation in which “real money” could be stolen from customers, financial institutions, or the interchange system in a way that lands on end users; that’s the whole reason people use them. Even in friendly-fraud situations, the money isn’t gone, it’s just frozen, so you might have to wait a month or so to get it unfrozen after the FBI et al. clear the source of funds. Sure, if someone takes my grocery money, that’s a real loss, and that’s why I don’t carry large sums of cash. But that isn’t what happened here. Can you explain what you meant so I can understand? I think you had a point; I just don’t think the risk of the kind of attack in TFA is comparable to someone getting their grocery money stolen, because that kind of individual in-person theft can’t really occur on the same scale as the attack in TFA, and even if it could, that’s kind of on the end user for carrying more cash than they can defend. | |
| ▲ | efreak a day ago | parent | next [-] | | Unless they've changed something, I know at least at the very beginning Zelle had no fraud protection. https://techcrunch.com/2018/02/16/zelle-users-are-finding-ou... It appears they still have issues with (more advanced forms of) fraud: https://thecyberexpress.com/zelle-lawsuit-2025-scam-hit-us-f... (this page won't stop reloading, but I think it's my adblock configuration)
https://www.morningstar.com/news/marketwatch/20241221198/mor... | |
| ▲ | lmm 3 days ago | parent | prev [-] | | > It’s real enough money, but those fraudulent transactions would be covered by existing protections, like FDIC insurance or chargebacks. Not always. Many banks will claim e.g. they don't have to cover losses from someone who opened a phishing email, never mind that the banks themselves send out equally suspicious "real" emails on the regular. Also, even if it's covered, that money comes from somewhere: ultimately out of the pockets of regular folks who were just using their bank accounts, even if the insurance mechanisms mean it's spread out more widely. | |
| ▲ | aspenmayer 3 days ago | parent [-] | | Good points all around. I don’t mean to blame the victim, as they usually don’t know what they don’t know and aren’t party to the fraud, so they couldn’t begin to know, but informed users ought to know the failure modes. Insurance rates are surely a factor in the industry push for KYC, which is mandated federally for good reasons, but in edge cases like loss of funds, the little people are often blamed for being victims by faceless corporations because they aren’t able to say what caused the issue, due to federal regulations against fraud. It’s a conundrum. |
|
|
|
|
|
| ▲ | jeroenhd 4 days ago | parent | prev | next [-] |
| Get in, steal a couple hundred grand, get out, do the exact same thing a few months later. Repeat a few times and you can live worry-free until retirement if you know how to evade the cops. Even if you steal other stuff, you're going to need to turn it all into cryptocurrency anyway, and how much is an AWS key really going to bring in? There are criminals that focus on extracting passwords and password manager databases as well, though they often also end up going after cryptocurrency websites. There are probably criminals out there biding their time, waiting for the perfect moment to strike, silently infiltrating companies through carefully picked dependencies, but those don't get caught as easily as the ones draining cryptocurrency wallets. |
| |
|
| ▲ | WhyNotHugo 4 days ago | parent | prev | next [-] |
| The pushed payload didn't generate any new traffic. It merely replaced the recipient of a crypto transaction with a different account. It would have been really hard to detect. Exfiltrating API keys would have been picked up a lot faster. OTOH, this modus operandi is completely inconsistent with the way they published the injected code: by taking over a developer's account. That was going to be noticed quickly. If the payload had been injected in a more subtle way, it might have taken a long time to figure out, especially with all the Levenshtein logic that might convince a victim they'd somehow screwed up. |
| |
| ▲ | SchemaLoad 3 days ago | parent [-] | | Not only that, but it picked an address from a list which had similar starting/ending characters, so if you only checked part of the wallet address, you'd still get exploited. |
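To make the trick concrete: the payload reportedly carried a pool of attacker addresses and substituted the one closest to the legitimate recipient by edit distance. This is a simplified sketch of that selection logic, not the actual malware code, and the addresses used below are made up:

```javascript
// Simplified sketch of the reported address-swap selection: from a pool of
// attacker addresses, pick the one with the smallest edit distance to the
// legitimate recipient, so a user who only eyeballs the first and last few
// characters sees nothing wrong. Addresses below are made up.
function levenshtein(a, b) {
  // Classic dynamic-programming edit distance.
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i]);
  for (let j = 1; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                    // deletion
        dp[i][j - 1] + 1,                                    // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1),  // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

function closestLookalike(legit, pool) {
  return pool.reduce((best, addr) =>
    levenshtein(addr, legit) < levenshtein(best, legit) ? addr : best);
}

// The pool entry sharing the victim address's prefix wins over a random one.
const legit = '0xAb5801a7D398351b8bE11C439e05C5B3259aeC9B';
const pool = [
  '0xAb5801a7D398351b8bE11C439e05C5B3259aec1a', // near-identical
  '0x1111111111111111111111111111111111111111', // obviously different
];
console.log(closestLookalike(legit, pool)); // → the near-identical address
```

The defensive takeaway is the same one hardware wallets push: verify the full address on an independent display, not just its ends.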
|
|
| ▲ | boznz 4 days ago | parent | prev | next [-] |
| It is not a one-in-a-million opportunity though. I hate to take this to the next level, but as criminal elements wake up to the fact that a few "geeks" can get them access to millions of dollars, expect much worse to come. As a maintainer of any code that could give bad guys access, I would be seriously considering how well my physical identity is hidden online. |
| |
| ▲ | SchemaLoad 3 days ago | parent | next [-] | | This is why banks make you approve transactions on your phone now. The fact that a random NPM package can redirect your money is a massive issue | | |
| ▲ | cxr 11 hours ago | parent [-] | | This attack was enabled by the normalization of orgs' aggressive 2FA postures. |
| |
| ▲ | jongjong 3 days ago | parent | prev | next [-] | | I just made a very similar comment. Spot on. It's laughable to think that this trivial opportunity, which literally any developer could seize with a couple of thousand dollars, is one-in-a-million. North Korea probably has enough money to buy up a significant percentage of all popular npm dependencies, and most people would sell willingly and unwittingly. In the case of North Korea, it's really crazy because hackers over there can do this legally in their own country, with the support of their government! And most popular npm developers are broke. | |
| ▲ | tonyhart7 3 days ago | parent [-] | | Actually, unless you are a billionaire or a high-profile individual, you wouldn't get targeted. Not because they can't, but because it's not worth it: many state-sponsored attacks are well documented in books anyone can read, and they don't want to add to that record because it creates buzz. |
| |
| ▲ | pixl97 3 days ago | parent | prev [-] | | As foretold by the prophet https://xkcd.com/538/ |
|
|
| ▲ | hombre_fatal 4 days ago | parent | prev | next [-] |
| You give an example of an incredibly targeted attack: snooping around manually on someone's machine so you can exfiltrate yet more sensitive information like credit card numbers (how, and then what?). But (1) how do you do that with hundreds or thousands of SSH/API keys, and (2) how do you actually make money from it? Say you get a list of SSH or API keys and write a crawler that can hopefully gather more secrets from them, like credit card details (how would that work, btw?). And then what? You google "how to sell credentials" and register on some forum to broker a deal like they do in movies? It sure sounds a hell of a lot more complicated and precarious than swapping out crypto addresses in flight. |
|
| ▲ | balls187 4 days ago | parent | prev | next [-] |
| > You're a criminal with a one-in-a-million opportunity. Wouldn't you invest an extra week pushing a more fleshed-out exploit? The plot of Office Space might offer clues. Also, isn't it crime 101 that greedy criminals are the ones most likely to get caught? |
|
| ▲ | alexvitkov 4 days ago | parent | prev | next [-] |
| API/SSH keys can easily be rotated; stealing them is more hassle than it's worth. Be glad they didn't choose to spread the payload of one of the 100 ransomware groups with affiliate programs. |
|
| ▲ | thewebguyd 4 days ago | parent | prev | next [-] |
| > My work laptop, depending on the period of my life, you could have had access to stuff you wouldn't believe either. What gets me is that everyone acknowledges this, yet HN is full of comments ripping on IT teams for the restrictions and EDR put in place on dev laptops. We on the ops side have known these risks for years, and that knowledge is what drives organizational security policies and endpoint configuration. Security is hard, and it is very inconvenient, but it's increasingly necessary. |
| |
| ▲ | dghlsakjg 4 days ago | parent | next [-] | | I think people rip on EDR and security when 1. They haven’t had it explained why it does what it does or 2. It is process for process sake. To wit: I have an open ticket right now from an automated code review tool that flagged a potential vulnerability. I and two other seniors have confirmed that it is a false alarm so I asked for permission to ignore it by clicking the ignore button in a separate security ticket. They asked for more details to be added to the ticket, except I don’t have permissions to view the ticket. I need to submit another ticket to get permission to view the original ticket to confirm that no less than three senior developers have validated this as a false alarm, which is information that is already on another ticket. This non-issue has been going on for months at this point. The ops person who has asked me to provide more info won’t accept a written explanation via Teams, it has to be added to the ticket. Stakeholders will quickly treat your entire security system like a waste of time and resources when they can plainly see that many parts of it are a waste of time and resources. The objection isn’t against security. It is against security theater. | | |
| ▲ | MichaelZuo 4 days ago | parent [-] | | This sounds sensible for the “ops person”? It might not be sensible for the organization as a whole, but there’s no way to determine that conclusively, without going over thousands of different possibilities, edge cases, etc. | | |
| ▲ | dghlsakjg 4 days ago | parent [-] | | What about this sounds sensible? I have already documented, in writing, in multiple places, that the automated software has raised a false alarm, as well as providing a piece of code demonstrating that the alert was wrong. They are asking me to document it in an additional place that I don't have access to, presumably for perceived security reasons? We already accept that my reasoning around the false alarm is valid, they just have buried a simple resolution beneath completely stupid process. You are going to get false alarms, if it takes months to deal with a single one, the alarm system is going to get ignored, or bypassed. I have a variety of conflicting demands on my attention. At the same time, when we came under a coordinated DDOS attack from what was likely a political actor, security didn't notice the millions of requests coming from a country that we have never had a single customer in. Our dev team brought it to their attention where they, again, slowed everything down by insisting on taking part in the mitigation, even though they couldn't figure out how to give themselves permission to access basic things like our logging system. We had to devote one of our on calls to walking them through submitting access tickets, a process presumably put in place by a security team. I know what good security looks like, and I respect it. Many people have to deal with bad security on a regular basis, and they should not be shamed for correctly pointing out that it is terrible. | | |
| ▲ | MichaelZuo 4 days ago | parent [-] | | If you're sufficiently confident there can be no negative consequences whatsoever… then just email that person's superiors and cc your superiors to guarantee in writing that you'll take responsibility? The ops person obviously can't do that on your behalf, at least not in any kind of organizational setup I've heard of. | |
| ▲ | dghlsakjg 3 days ago | parent [-] | | As the developer in charge of looking at security alerts for this code base, I already am responsible, which is why I submitted the exemption request in the first place. As it is, this alert has been active for months and no one from security has asked about the alert, just my exemption request, so clearly the actual fix (disregarding or code changes) are less important than the process and alert itself. So the solution to an illogical, kafkaesque security process is to bypass the process entirely via authority? You are making my argument for me. This is exactly why people don’t take security processes seriously, and fight efforts to add more security processes. | | |
| ▲ | MichaelZuo 3 days ago | parent [-] | | So you agree with me the ops person is behaving sensibly given real life constraints? Edit: I didn’t comment on all those other points, so it seems irrelevant to the one question I asked. | | |
| ▲ | dghlsakjg 3 days ago | parent [-] | | Absolutely not. Ops are the ones who imposed those constraints. You can't impose absurd constraints and then say you are acting reasonable by abiding by your own absurd constraints. | | |
| ▲ | MichaelZuo 3 days ago | parent [-] | | How do you even know it was a single individual’s decision, let alone who exactly imposed the constraints? | | |
| ▲ | dghlsakjg 3 days ago | parent [-] | | I don't, and I never said that. I'm not dumping on the ops person, but the ops and security team's processes. If you as a developer showed up to a new workplace and the process was that for every code change you had to print out a diff and mail a hard copy to the committee for code reviews, you would be totally justified in calling out the process as needlessly elaborate. Anyone could rightly say that your processes are increasing friction while not actually serving the purpose of having code reviewed by peers. You as a developer have a responsibility to point out that the current process serves no one and should be changed. That's what good security and ops people do too. In the real world case I am talking about, we can easily foresee that the end result is that the exemption will be allowed, and there will be no security impact. In no way does the process at all contribute to that, and every person involved knows it. My original post was about how people dislike security when it is actually security theater. That is what is going on here. We already know how this issue ends and how that can be accomplished (document the false alarm, and click the ignore button), and have already done the important part of documenting the issue for posterity. The process could be: you are a highly paid developer who takes security training and has access to highly sensitive systems so we trust your judgment, when you and your peers agree that this isn't an issue, write that down in the correct place, click the ignore button and move on with your work. All of the faff of contacting different fiefdoms and submitting tickets does nothing to contribute to the core issue or resolution, and certainly doesn't enhance security. If anything, security theater like this leads to worse security since people will try to find shortcuts or ways of just not handling issues. |
|
|
|
|
|
|
|
| |
| ▲ | the8472 4 days ago | parent | prev | next [-] | | At least at $employer, a good portion of those systems are intended to stop attacks on management and the average office worker. The process is not geared towards securing dev (arbitrary code execution) or ops (infra creds). They're not even handing out hardware security keys for admin accounts; I use my own, and some other devs just use TOTP authenticator apps on their private phones. All their EDR crud runs on Windows, but as a dev I'm allowed to run WSL, and the tools do not reach inside WSL, so if that gets compromised they would be none the wiser. There is some instrumentation for Linux servers and cloud machines, but that too is full of blind spots. And as a sibling comment says, a lot of the policies are enforced without anyone being able to explain their purpose, grant "functionally equivalent security" exceptions, or acknowledge that they don't make sense in certain contexts. It feels like dealing with mindless automatons, even though humans are involved. For example, a thing that happened a while ago: we were using scrypt as a KDF, but their scanning flagged it as unknown password encryption and insisted that we should use SHA2 as a modern, secure hashing function. Weeks of long email threads, escalation, and several managers suggesting "just change it to satisfy them" followed. That's a clear example of mindless rule-following making a system less secure. Blocking remote desktop forwarding of security keys is also a fun one. |
| ▲ | balls187 4 days ago | parent | prev [-] | | Funny, I read that quote, and assumed it meant something unsavory, and not say, root access to an AWS account. |
|
|
| ▲ | 3 days ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | paradite 4 days ago | parent | prev | next [-] |
| Because it's North Korea and crypto currency is the best assets they can get for pragmatic reasons. For anything else you need a fiat market, which is hard to deal with remotely. |
|
| ▲ | jongjong 3 days ago | parent | prev | next [-] |
| Maybe their goal was just surviving, not getting rich. Also, you underestimate how trivial this 'one-in-a-million opportunity' is; it's definitely not one-in-a-million! Almost anybody with basic coding ability and a few thousand dollars could pull off this hack. There are thousands of essentially worthless libraries with millions of downloads whose maintainers are basically broke and barely use their npm accounts anymore. Anybody could just buy those npm accounts under false pretenses for a couple of thousand and then do whatever they want with tens of thousands (or even hundreds of thousands) of compromised servers. The library author is legally within their rights to sell their digital assets, and it's not their business what the acquirer does with them. |
|
| ▲ | ignoramous 4 days ago | parent | prev | next [-] |
| > find it insane that someone would get access to a package like this, then just push a shitty crypto stealer Consumer financial fraud is quite big and relatively harmless. Industrial espionage, OTOH, can potentially put you in the crosshairs of powerful and/or rogue elements, and so only the big actors get involved, but in a targeted way, preferring not to leave much if any trace of compromise. |
|
| ▲ | pianopatrick 4 days ago | parent | prev | next [-] |
| Seems possible to me that someone has done an attack exactly like you describe and just was never caught. |
|
| ▲ | doubleorseven 4 days ago | parent | prev | next [-] |
| I fell for this malware once. I had the malware on my laptop even with MB running in the background. I copy-pasted an address and didn't even check it. My bad indeed. Those guys make a lot of money from these "one shot" moments. |
|
| ▲ | deepanwadhwa 3 days ago | parent | prev | next [-] |
| What makes you so sure that the exploit is over? Maybe they wanted their secondary exploit to get caught to give everyone a false sense of security. Their primary exploit might still be lurking somewhere in the code. |
| |
| ▲ | pixl97 3 days ago | parent [-] | | Well, because it is really easy to diff an npm package. The attacker had access to the user's npm repository only. |
|
|
| ▲ | jmull 4 days ago | parent | prev | next [-] |
| There's nothing wrong with staying focused (on grabbing the money). Your ideas are potentially lucrative over time, but first they create more work and risk for the attacker. |
|
| ▲ | BoredPositron 4 days ago | parent | prev | next [-] |
| As long as we get lucky nothing is going to change. |
|
| ▲ | yieldcrv 4 days ago | parent | prev | next [-] |
| yeah, a shitty crypto stealer is more lucrative, more quickly monetized, has fewer OPSEC issues for the thief if done right, and is easier to launder. nobody cares about your trade secrets, or some nation's nuclear program, just take the crypto |
|
| ▲ | sim7c00 4 days ago | parent | prev [-] |
| one in a million opportunity? the guy registered a domain and sent some emails, dude. it's cheap as hell |
| |
| ▲ | heywoods 4 days ago | parent | next [-] | | Maybe one in a million is hyperbolic but that’s sorta the game with these attacks isn’t it? Registering thousands upon thousands of domains + tens of thousands of emails until you catch something from the proverbial pond. | |
| ▲ | k4rnaj1k 4 days ago | parent | prev [-] | | [dead] |
|