eviks 9 hours ago

> The fact that an idle Mac has over 2,000 threads running in over 600 processes is good news, and the more of those that are run on the E cores, the faster our apps will be

This doesn't make sense in a rather fundamental way - there is no way to design a real computer where doing some useless work is better than doing no work. Just think about energy consumption and battery life, since these are laptops. Or that's just resources your current app can't use.

Besides, they aren't that well engineered; bugs exist, persist, and come back, etc. So even when on average the impact isn't big, you can get a few photo analysis indexing jobs going haywire for a while and getting stuck.

ahepp 6 hours ago | parent | next [-]

I think in the example the OP is making, the work is not useless. They're saying that if you had a system doing the same work with, say, 60 processes, you'd be better off splitting it into 600 processes and a couple thousand threads, since that allows granular classification of tasks by their latency sensitivity.
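
For a concrete sense of what that classification looks like in practice, here is a minimal Swift sketch using Grand Central Dispatch QoS classes (the queue labels and the work are made up for illustration). The QoS class is the hint the scheduler uses: background/utility work tends to land on the E cores, user-interactive work on the P cores.

    import Dispatch

    // Hypothetical queue labels; the QoS class is what tells the scheduler
    // whether the work belongs on the efficiency or performance cores.
    let indexingQueue = DispatchQueue(label: "com.example.photo-indexing", qos: .background)
    let uiQueue = DispatchQueue(label: "com.example.ui-work", qos: .userInteractive)
    let group = DispatchGroup()

    indexingQueue.async(group: group) {
        // Latency-tolerant work: eligible for the E cores.
        print("indexing photos in the background")
    }

    uiQueue.async(group: group) {
        // Latency-sensitive work: preferentially scheduled on the P cores.
        print("rendering something the user is waiting on")
    }

    group.wait() // keep the script alive until both blocks finish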

eviks 5 hours ago | parent [-]

But it is. He's talking about real systems with real processes in a generic way, not a singular hypothetical where suddenly all that work must be done, so you can also apply your general knowledge that some of those background processes aren't useful (but can't even be disabled due to system lockdown).

ua709 5 hours ago | parent | next [-]

I think you're right that the article didn't provide criteria for when this type of system is better or worse than another. For example, the cost of splitting work into threads and switching between them needs to be factored in. If that cost is very high, then the multi-threaded system could very well be worse. And there are other factors too.

However, given the trend in modern software engineering of breaking work into units, and the fact that thread switches on modern hardware are very fast, being able to distribute that work across different compute clusters that make different optimization choices is a good thing and lets schedulers get closer to optimal results.

So really it boils down to this: if the gains from doing the work on different compute clusters outweigh the cost of splitting and distributing the work, then it's a win. And for most modern software on most modern hardware, the win is very significant.
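
As a rough illustration of that trade-off (the item count, the per-item work, and the queue choice are all arbitrary), you can measure how long it takes to dispatch a batch of small work units; if each unit is tiny the split/switch overhead dominates, and if it's substantial it doesn't:

    import Dispatch

    // Dispatch many small work items and time the whole batch.
    let items = 10_000
    let queue = DispatchQueue.global(qos: .utility)
    let group = DispatchGroup()

    let start = DispatchTime.now()
    for _ in 0..<items {
        queue.async(group: group) {
            // Stand-in for one unit of real work.
            _ = (0..<1_000).reduce(0, +)
        }
    }
    group.wait()
    let seconds = Double(DispatchTime.now().uptimeNanoseconds - start.uptimeNanoseconds) / 1e9
    print("dispatched \(items) items in \(seconds) s")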

As always, YMMV

locknitpicker 4 hours ago | parent | prev [-]

> (...) a singular hypothetical where suddenly all that work must be done (...)

This is far from being a hypothetical. It is an accurate description of your average workstation. I recommend you casually check the list of processes running at any given moment on any random desktop or laptop within a 5 meter radius.
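
If you want to do that check programmatically rather than eyeballing Activity Monitor, here's a quick sketch that shells out to ps from Swift (standard macOS path and flags, only a rough count):

    import Foundation

    // Run `ps -axo pid` and count the lines to get a rough process count.
    let ps = Process()
    ps.executableURL = URL(fileURLWithPath: "/bin/ps")
    ps.arguments = ["-axo", "pid"]
    let pipe = Pipe()
    ps.standardOutput = pipe

    do {
        try ps.run()
        ps.waitUntilExit()
        let data = pipe.fileHandleForReading.readDataToEndOfFile()
        let lines = String(decoding: data, as: UTF8.self).split(separator: "\n")
        print("\(lines.count - 1) processes currently running") // minus the header row
    } catch {
        print("failed to run ps: \(error)")
    }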

locknitpicker 5 hours ago | parent | prev | next [-]

> (...) where doing some useless work is better than doing no work (...)

This take expresses a fundamental misunderstanding of the whole problem domain. There is a workload composed of hundreds of processes, some of which are multithreaded, that needs to be processed. That does not change or go away. You have absolutely no indication that any of those hundreds of processes is "useless". What you will certainly have are processes waiting on IO, but waiting for a request to return a response is not useless.

ua709 4 hours ago | parent [-]

In this case the “useless” work is the cost of moving and distributing the threads between different compute clusters. That cost is nonzero and does need to be factored in, but it's also more than outweighed by the benefits gained from making the move.

m463 5 hours ago | parent | prev | next [-]

I would say a good number of those processes are something you don't want running. And you can't turn them off unless you can modify the boot partition to disable the launch configs.

sigh.

GeorgeOldfield 2 hours ago | parent | next [-]

Most of them are needed at one time or another.

Also, the mandatory: and yet MacBooks are faster and more battery-efficient than any PC laptop running Linux/Windows.

dgxyz 4 hours ago | parent | prev [-]

Most of them don’t do anything at all until needed.

mathisfun123 5 hours ago | parent | prev [-]

> This doesn't make sense in a rather fundamental way - there is no way to design a real computer

Hmm, I guess the Apple silicon laptops don't exist? Did I dream that I bought one this year? Maybe I did - it has been a confusing year.

eviks 5 hours ago | parent [-]

You did dream, though not that you bought a laptop, but rather that you understood the comment.

mathisfun123 5 hours ago | parent [-]

You literally say in a sibling comment

> he's talking about real systems with real processes in a generic way

So which real but impossible-to-design systems are we discussing, then, if not the Apple silicon systems?