olavgg 3 hours ago

We aren't just dealing with a shortage; we're dealing with a monopsony. The Big tech companies have moved from being "customers" of the hardware industry to being the "owners" of the supply chain. The shortage isn't just "high demand", but "contractual lock-out."

It is time to talk seriously about breaking up the hyperscalers. If we don't address the structural dominance of hyperscalers over the physical supply chain, "Personal Computing" is going to become a luxury of the past, and we’ll all be terminal-renters in someone else's data center.

bilekas 3 hours ago | parent | next [-]

> "Personal Computing" is going to become a luxury of the past, and we’ll all be terminal-renters in someone else's data center.

This is the game plan, of course: why have customers pay once for hardware when you can have them constantly feed you money over the long term? Shareholders want this model.

It started with planned obsolescence; this new model is the natural progression. Obsolescence isn't even part of the discussion when your only option is to rent a service that the provider has no incentive to make competitive.

I really feel this will be China's moment to flood the market with hardware and improve their quality over time.

actionfromafar 3 hours ago | parent [-]

"I think there is a world market for maybe five c̶o̶m̶p̶u̶t̶e̶r̶s̶" compute centers.

ahsillyme 3 hours ago | parent | prev | next [-]

> "Personal Computing" is going to become a luxury of the past, and we’ll all be terminal-renters in someone else's data center.

Yep. My take is that, ironically, it's going to happen because of government funding of the circular tech economy, pushing consumers out of the tech space.

squeefers 2 hours ago | parent [-]

> pushing consumers out of the tech space.

post consumer capitalism

shit_game 3 hours ago | parent | prev | next [-]

This is the result of the long-planned desire for consumer computing to be subscription computing. Ultimately, there is only so much that can be done in software to "encourage" (read: coerce) vendor-locked, always-online, account-based computer usage; people can still escape these ecosystems via the ever-growing plethora of web-based productivity software and Linux distributions that are genuinely good, user-friendly enough, and 100% daily-drivable. But those software options require hardware.

It's no coincidence that Microsoft decided to take such a massive stake in OpenAI. Grabbing a new front for vendor lock-in by inserting it into everything they provide, force-multiplying their own market share, is an obvious choice, but leveraging the insane amount of capital being thrown into the cesspit that is AI to make consumer hardware unaffordable (and eventually unusable due to remote attestation schemes) further entrenches their position. OEM computers that meet the hardware requirements of their locked OS and software suite being the only computers that are a) affordable and b) "trusted" is the end goal.

I don't want to throw around buzzwords or be doomeristic, but this is digital corporatism in its endgame. Playing markets to price out every consumer globally for essential hardware is evil and something that a just world would punish relentlessly and swiftly, yet there aren't even crickets. This is happening unopposed.

kuerbel 2 hours ago | parent [-]

What can we do? Serious question.

It's so hard to grasp as a problem for the lay person until it's too late.

shit_game an hour ago | parent [-]

Honestly? I don't know. I don't think there really is a viable solution that preserves consumer computation. Most of the young people I know don't really know or care about computers. Actually, most people at large that I know don't know or care about computers. They're devices that play videos, access web storefronts, run browsers, do email, save pictures, and play games for them. Mobile phones are an even worse wasteland of "I don't know and I don't care". The average person doesn't give a shit about this being a problem. Coupled with the capital interest in making computing a subscription-only activity (leading to market activity that prices out consumers and lobbying that outlaws the alternatives), this spells out a very dire, terrible future for the world where computers require government and corporate permission to operate on the internet, and potentially in one's home.

Things are bad and I don't know what can be done about it because the balance of power and influence is so lopsided in favor of parties who want to do bad.

3 hours ago | parent | prev | next [-]
[deleted]
squeefers 2 hours ago | parent | prev | next [-]

it goes mainframe (remote) > PC (local) > cloud (remote) > ??? (local)

GCUMstlyHarmls an hour ago | parent [-]

mainframe (remote) > PC (local) > cloud (remote) > learning at a geometric rate ∴ > (local)

iso1631 2 hours ago | parent | prev | next [-]

> "Personal Computing" is going to become a luxury of the past, and we’ll all be terminal-renters in someone else's data center.

These things are cyclical.

AmazingTurtle 3 hours ago | parent | prev | next [-]

"You'll own nothing. And you'll be happy"

nubg 3 hours ago | parent | prev [-]

Sorry, do people not immediately see that this is an AI bot comment?

Why is this allowed on HN?

Maxion 3 hours ago | parent | next [-]

> Why is this allowed on HN?

1) The comment you replied to was only a minute old when you replied; that is fast for any system to detect weird comments

2) There's no easy and sure-fire way to detect LLM content. Here's Wikipedia's list of tells: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
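
For illustration, here is a toy Python sketch that scores a comment against a couple of the commonly cited tells. The pattern list and scoring are made up and nowhere near a reliable detector, which is rather the point:

    import re

    # Hypothetical, incomplete list of "tells": the "isn't just X, but Y"
    # construction, the "not just X" variant, and em-dashes.
    TELL_PATTERNS = [
        r"\bisn'?t just\b.*\bbut\b",
        r"\bnot just\b.*\b(it'?s|but)\b",
        "\u2014",  # em-dash character
    ]

    def tell_score(text: str) -> int:
        """Count how many of the crude patterns appear in the text."""
        return sum(bool(re.search(p, text, re.IGNORECASE | re.DOTALL))
                   for p in TELL_PATTERNS)

    # Plenty of human-written prose trips these patterns too, which is
    # exactly why a score like this can't be treated as proof.
    print(tell_score("The shortage isn't just high demand, but contractual lock-out."))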

bilekas 3 hours ago | parent | prev | next [-]

> Sorry, do people not immediately see that this is an AI bit comment?

How do you know that? Genuine question.

Maxion 3 hours ago | parent | next [-]

To be fair, it is blindingly obvious from the tells. OP also confirms it here: https://news.ycombinator.com/item?id=47045459#47045699

f311a 3 hours ago | parent | prev | next [-]

> isn't just "high demand", but "contractual lock-out."

The "isn't just .., but .." construction is so overused by LLMs.

lsp 3 hours ago | parent | prev | next [-]

The phrasing: "It's not just X, it's Y", and the overuse of "quotes".

dspillett 2 hours ago | parent [-]

The problem with any of these tells is that an individual instance is often taken as proof on its own rather than as an indicator. People do often use “it isn't X, it is Y” constructions¹, and many, myself included sometimes, overuse “quotes”², or use em-dashes³, or are overly concerned about avoiding repeating words⁶, and so forth.

LLMs do these things because they are in the training data, which means that people do these things too.

It is sometimes difficult to not sound like an LLM-written or LLM-reworded comment… I've been called a bot a few times despite never using LLMs for writing English⁴.

--------

[1] particularly vapid space-filler articles/comments or those using whataboutism-style redirection, which might make up a significant chunk of model training data because of how many of them are out there.

[2] I overuse footnotes as well, which is apparently a smell in the output of some generative tools.

[3] A lot of pre-LLM style-checking tools would recommend this in place of hyphens, and some automated reformatters would make the change without asking, so there are going to be many examples in the training data.

[4] I think there is one at work in VS, which I use in DayJob: it suggests code-completion options to save typing (literally Glorified Predictive Text) and I sometimes accept its suggestions, and some of the tools I use to check my Spanish⁵ may be LLM-based, so I can't claim that I don't use them at all.

[5] I'm just learning, so automatic translators are useful to check that what I've written isn't gibberish. For anyone else doing the same: make sure you research any suggested changes, preferably using pre-2023 sources, because the output of these tools can be quite wrong, as you can see when translating into a language you are fluent in.

[6] Another common “LLM tell”, because they often have weighting functions specifically designed to avoid token repetition (sketched below), largely to avoid getting stuck in loops; but many pre-LLM grammar-checking tools will pick people up on repeated word use too, and people tend to fix the direct symptom with a thesaurus rather than improving the sentence structure overall.
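
As a rough sketch of what that repetition-avoidance weighting looks like (illustrative only, not any particular model's implementation), a penalty applied to next-token logits before sampling:

    import numpy as np

    def apply_repetition_penalty(logits, generated_ids, penalty=1.2):
        """Down-weight tokens that have already been generated.

        Positive logits are divided by the penalty and negative logits are
        multiplied by it, so repeated tokens become less likely either way.
        Exact details vary between implementations.
        """
        logits = logits.copy()
        for tok in set(generated_ids):
            if logits[tok] > 0:
                logits[tok] /= penalty
            else:
                logits[tok] *= penalty
        return logits

    # Toy example: token 2 was already emitted, so its logit drops from 3.0 to 2.5.
    logits = np.array([1.0, -0.5, 3.0, 0.2])
    print(apply_repetition_penalty(logits, generated_ids=[2]))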

hakanderyal 3 hours ago | parent | prev [-]

It has Claude all over it. When you spend enough time with them it becomes obvious.

In this case, the “it's not x, it's y” pattern and its placement are a dead giveaway.

bayindirh 3 hours ago | parent | next [-]

Isn't it ironic to use AI to formulate a comment against AI vendors and hyperscalers?

It's not ironic, but bitterly funny, if you ask me.

Note: I'm not an AI, I'm an actual human without a Claude account.

phatfish 2 hours ago | parent | next [-]

I wonder what the ratio is of "constructive" use of AI versus people writing pointless internet comments.

It seems personal computing is being screwed so people can create memes, ask questions that take 30 seconds to find the answer to with Google or Wikipedia, and sound clever on social media?

bayindirh 2 hours ago | parent [-]

If you think of AI as the whole discipline, there are very useful applications indeed, generally in the pattern-recognition and regulation space. I'm aware of a lot of small projects which rely on AI to monitor ecosystems or systems, or which use it as a nice regulatory mechanism. The same systems can also be used for genuine security applications (civilian, non-lethal, legal and ethical).

If we are talking about generative AI, again from my experience, things get a bit blurry. You can use smaller models to dig through data you own.

I have personally used LLMs twice up to this day. In each case it was after very long research sessions without any answers. In one, it gave me exactly one reference, and I followed that reference and learnt what I was looking for. In the second case, it gave me a couple of pointers, which I'm going to follow up on myself.

So, generative AI is not that useful for me, it uses far too many resources, and the industry-leading models are, well, unethical to begin with.

nubg 2 hours ago | parent | prev [-]

Yes I found this ironic as well lmao.

I do agree with the sentiment of the AI comment, and was even weighing just letting it slide because I do fear the future that comment was warning against.

A_D_E_P_T 3 hours ago | parent | prev [-]

> “it’s not x, it’s y”

ChatGPT does this just as much, maybe even more, across every model they've ever released to the public.

How did both Claude and GPT end up with such a similar stylistic quirk?

I'd add that Kimi does it sometimes, but much less frequently. (Kimi, in general, is a better writer with a more neutral voice.) I don't have enough experience with Gemini or Deepseek to say.

oytis 3 hours ago | parent | prev | next [-]

LLMs learned rhetorical negation from humans. Some humans continue to use it, because it genuinely makes sense at times.

bombela 3 hours ago | parent | prev | next [-]

It reads almost too AI, to the point of maybe being satire?

olavgg 3 hours ago | parent | prev [-]

It is my text, enhanced by AI. Without AI, I would never have used the word "monopsony". So I learned something new writing this comment.

lpcvoid 3 hours ago | parent | next [-]

This behavior is part of the problem that got us here, using LLMs for everything.

f311a 3 hours ago | parent | prev | next [-]

You are losing your personality by modifying your text with LLMs. It saves you how much, 1 minute of writing?

3 hours ago | parent [-]
[deleted]
bilekas 3 hours ago | parent | prev | next [-]

The irony is lost on you ...

badpenny 3 hours ago | parent | prev | next [-]

You didn't write it. Here's another new word for you: hypocrite.

smcl 3 hours ago | parent | prev [-]

Come on man, you're a "founder" and you can't even write your own comments on a forum?

3 hours ago | parent [-]
[deleted]