maccard | 8 hours ago
I mean, theoretically it's possible. A super basic example: if the data is known at compile time, it could be auto-parallelized, e.g.
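Something like this, say, where buf_size is a compile-time constant (make_large_array and do_expensive_thing are the same placeholders as in the runtime version below):

    int buf_size = 10000000;  // known at compile time, so the trip count is too
    auto vec = make_large_array(buf_size);
    for (const auto& val : vec) {
        do_expensive_thing(val);
    }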
this could clearly be parallelised at compile time. In the C++ world, that doesn't exist, but we can see that it's valid. If I replace it with

    int buf_size = 10000000;
    cin >> buf_size;  // size now only known at runtime
    auto vec = make_large_array(buf_size);
    for (const auto& val : vec) {
        do_expensive_thing(val);
    }

the compiler could generate some code that looks like:

    if (buf_size >= SOME_LARGE_THRESHOLD) {
        DO_IN_PARALLEL
    } else {
        DO_SERIAL
    }

with some background logic for managing threads, etc. In a C++-style world where "control" is important it likely wouldn't fly, but if this was Python...
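For illustration, the DO_IN_PARALLEL branch could be something as simple as a chunked split across std::thread. This is a sketch, not what any real compiler emits; do_in_parallel is just a made-up name, and it assumes do_expensive_thing is safe to call concurrently on distinct elements:

    #include <algorithm>
    #include <cstddef>
    #include <thread>
    #include <vector>

    // Naive "parallel branch": one chunk per hardware thread, then join.
    // A real version also needs a serial fallback and a cost model for
    // picking SOME_LARGE_THRESHOLD.
    template <typename Vec, typename Fn>
    void do_in_parallel(const Vec& vec, Fn do_expensive_thing) {
        const std::size_t n = std::max<std::size_t>(1, std::thread::hardware_concurrency());
        const std::size_t chunk = (vec.size() + n - 1) / n;
        std::vector<std::thread> workers;
        for (std::size_t t = 0; t < n; ++t) {
            const std::size_t begin = t * chunk;
            const std::size_t end = std::min(vec.size(), begin + chunk);
            if (begin >= end) break;
            workers.emplace_back([&vec, &do_expensive_thing, begin, end] {
                for (std::size_t i = begin; i < end; ++i)
                    do_expensive_thing(vec[i]);
            });
        }
        for (auto& w : workers) w.join();
    }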
lazide | 8 hours ago
Which no one really does (data is generally provided at runtime). Which is why ‘super smart’ compilers kinda went nowhere, eh?