mmooss (a day ago):
Those effective strategies were developed through the same process of research and development as OLPC. At one point we didn't know about those benefits; should we not have experimented with those strategies? The nature of research is that some efforts succeed or fail to different degrees than others, and some that haven't yet succeeded will succeed in the future, or will inform other successes. If we already knew the answers, it wouldn't be research.
alephnerd (a day ago, in reply):
The issue is that no robust quantitative research was done before OLPC was launched. The programs I gave as examples above were all tested against control groups in RCTs before being rolled out en masse, and those initiatives were run in coordination with local stakeholders. This is why JPAL@MIT [0] (Banerjee, Duflo) and REAP@Stanford [1] (Liu, Wang, Rozelle) have had significant success in raising HDIs in the Indian and Chinese states they worked with.

On top of that, OLPC (and similar initiatives) took a significant amount of oxygen out of the philanthropy ecosystem: programs and initiatives with a better strike rate were passed over simply because "it's Negroponte". Even Negroponte's MIT Media Lab largely failed from an outcomes perspective, and was buoyed by donor relations.