leoc 3 hours ago
I'm not an expert, but a quick look at https://en.wikipedia.org/wiki/Buffer_overflow#History suggests that some people, at least, had figured it out by 1972 https://apps.dtic.mil/sti/citations/AD0772806 :

> By supplying addresses outside of the space allocated to the users program, it is often possible to get the monitor to obtain unauthorized data for that user, or at the very least, generate a set of conditions in the monitor that causes a system crash.

> In one contemporary operating system, one of the functions provided is to move limited amounts of information between system and user space. The code performing this function does not check the source and destination addresses properly, permitting portions of the monitor to be overlaid by the user. This can be used to inject code into the monitor that will permit the user to seize control of the machine.

(Volume 1 is at https://apps.dtic.mil/sti/citations/AD0758206 .)

However, general awareness of the security implications seems to have been very limited before the Morris worm, and remained limited even for several years after that. As late as 1996, an article which in its own words "attempt[ed] to explain what buffer overflows are, and how their exploits work" could still be published in Phrack magazine, and in fact still be quite a milestone https://en.wikipedia.org/wiki/Buffer_overflow#History . Some people had definitely been thinking about hardware bounds checking for a long time by then https://homes.cs.washington.edu/~levy/capabook/ , but I don't know how much they'd specifically considered just this kind of security threat.