| ▲ | danhon 5 hours ago |
| You mean like this? "With limited funds, Google founders Larry Page and Sergey Brin initially deployed this system of inexpensive, interconnected PCs to process many thousands of search requests per second from Google users. This hardware system reflected the Google search algorithm itself, which is based on tolerating
multiple computer failures and optimizing around them. This production server was one of about thirty such racks in the first Google data center. Even though many of the installed PCs never worked and were difficult to repair, these racks provided Google with its first large-scale computing system and allowed the company to grow quickly and at minimal cost." https://blog.codinghorror.com/building-a-computer-the-google... |
|
| ▲ | 1970-01-01 4 hours ago | parent | next [-] |
| Google later completely regretted not doing this with ECC RAM: https://news.ycombinator.com/item?id=14206811 |
| |
| ▲ | newmana 3 hours ago | parent | next [-] | | A great telling of this story, and of how the ex-DEC engineers saved Google despite its choice of non-ECC RAM, by inventing MapReduce and BigTable: https://www.youtube.com/watch?v=IK0I4f8Rbis | |
| ▲ | ramraj07 4 hours ago | parent | prev [-] | | It got them to where they needed to be before worrying about ECC. This is like the dudes who deploy their blog on Kubernetes just in case it hits the front page of The New York Times or something. |
|
|
| ▲ | ramraj07 4 hours ago | parent | prev [-] |
| The problem they solved isn't easy. But it's not some insane technical breakthrough either. Literally add redundancy; that's the ask. They didn't invent quantum computing to solve the issue, did they? Why dunk on sprints? |
| |
| ▲ | vlovich123 4 hours ago | parent [-] | | Wow. What a way to hand-wave away the intrinsic challenge of writing fault-tolerant distributed systems. It only seems easy because of the decades of research and tools built since Google did it; it was by no means something you could trivially add to a project the way you can today. | | |
| ▲ | tempest_ 3 hours ago | parent [-] | | > fault tolerant distributed systems I mean, there were mainframes that could be described as that. IBM just fixed it in hardware instead of software, so it's not like it was an unknown field. | | |
| ▲ | vlovich123 40 minutes ago | parent [-] | | Even if that were actually true (it's not, in important ways), Google showed you could do this cheaply in software instead of expensively in hardware. You're still hand-waving away things like inventing a way to make map/reduce fault tolerant, with automatic partitioning of data and automatic scheduling, which didn't exist before and made map/reduce accessible; mainframes weren't doing this. They pioneered durably storing data on a bunch of commodity hardware through GFS; others were not doing this. And they showed how to do distributed systems at a scale not seen before, because the field had bottlenecked on however big you could make a single mainframe. |
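[The fault-tolerance idea vlovich123 describes can be illustrated with a toy sketch: a master that partitions input into map tasks and reschedules a task on another worker when a machine fails. This is not Google's actual MapReduce implementation; `run_map_reduce` and `flaky_worker` are hypothetical names, and real systems also handle stragglers, reducer failures, and persistent intermediate files via GFS.]

```python
import random

def run_map_reduce(inputs, map_fn, reduce_fn, workers, max_attempts=3):
    """Toy MapReduce master: retries a failed map task on another worker."""
    intermediate = {}
    for task_id, chunk in enumerate(inputs):
        for attempt in range(max_attempts):
            # Reschedule each retry on a different worker machine.
            worker = workers[(task_id + attempt) % len(workers)]
            try:
                for key, value in worker(map_fn, chunk):
                    intermediate.setdefault(key, []).append(value)
                break  # task succeeded; move on to the next chunk
            except RuntimeError:
                continue  # simulated machine crash: try another worker
        else:
            raise RuntimeError(f"map task {task_id} failed on all workers")
    # Reduce phase: one reduce call per intermediate key.
    return {key: reduce_fn(key, values) for key, values in intermediate.items()}

def flaky_worker(fail_rate, rng):
    """A worker that randomly 'crashes' before doing any work."""
    def run(map_fn, chunk):
        if rng.random() < fail_rate:
            raise RuntimeError("simulated machine crash")
        return map_fn(chunk)
    return run

# Word count, the canonical map/reduce example, over unreliable workers.
rng = random.Random(0)
workers = [flaky_worker(0.3, rng) for _ in range(4)]
counts = run_map_reduce(
    ["a b a", "b c"],
    map_fn=lambda chunk: [(word, 1) for word in chunk.split()],
    reduce_fn=lambda key, values: sum(values),
    workers=workers,
)
```

[The point of the sketch is that the user-supplied `map_fn`/`reduce_fn` contain no failure-handling at all; the scheduler absorbs crashes, which is what made the model accessible.]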
|
|
|