monkeyelite 5 days ago
This IO-bound myth is commonly repeated, yet most software spends many multiples of its IO time on execution. Execution time adds up, and using a language like C lets you better control your data and make better use of IO resources.
gf000 5 days ago
Well, software is not like a traditional Turing machine that takes an input, buzzes a bit, and returns a response. It most commonly runs continuously, reacting to different events.

You can't do IO work that depends on CPU work ahead of time, and neither can you do CPU work that depends on IO. You have a bunch of complicated interdependencies between the two, and the execution time is heavily constrained by this directed graph. No matter how efficient your data manipulation algorithm is, it doesn't help if you still have to wait for the data to load from the network or a file.

Just draw a Gantt chart and, sure, sum the execution time. My point is that due to the interdependencies you will have a longest lane, and no matter what you do with the small CPU parts, you can only marginally affect the whole. It gets even funnier with parallelism (this was just concurrency so far), where a similar concept is named Amdahl's law.

And I would go as far as to claim that what you may win with C you often lose several-fold by going with a simpler parallelism model for fear of segfaults, which you could do fearlessly in a higher-level language.
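A rough numeric sketch of the "longest lane" point, with made-up timings (none of these numbers come from the thread): one request does an IO read, a CPU parse that depends on it, and an IO write that depends on the parse, so the three steps form a single lane. Making the CPU step 10x faster barely moves the total, and plugging the same fraction into Amdahl's law gives the same bound.

    #include <stdio.h>

    /* Hypothetical timings (ms) for one request; each step depends on
     * the previous one, so they form one lane in the Gantt chart. */
    static double lane_ms(double cpu_ms) {
        double net_read = 40.0;   /* IO: wait for the request body  */
        double parse    = cpu_ms; /* CPU: depends on the read       */
        double db_write = 60.0;   /* IO: depends on the parsed data */
        return net_read + parse + db_write;
    }

    /* Amdahl's law: overall speedup when a fraction p of the time
     * is accelerated by a factor s. */
    static double amdahl(double p, double s) {
        return 1.0 / ((1.0 - p) + p / s);
    }

    int main(void) {
        double before = lane_ms(5.0); /* 105 ms                     */
        double after  = lane_ms(0.5); /* 100.5 ms: parser 10x faster */
        printf("lane: %.1f ms -> %.1f ms (%.1f%% shorter)\n",
               before, after, 100.0 * (before - after) / before);

        /* Same idea via Amdahl: the CPU step is only ~5% of the lane,
         * so a 10x speedup of it bounds the whole at ~1.045x. */
        printf("Amdahl bound for p=%.3f, s=10: %.3fx\n",
               5.0 / before, amdahl(5.0 / before, 10.0));
        return 0;
    }

With these assumed numbers the lane shrinks from 105 ms to 100.5 ms, about 4%, no matter how much faster the CPU part gets.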