vidarh | 4 days ago
Indeed. Work on any project that requires humans to carry out largely repetitive steps, and a large part of the problem becomes how to build processes around people that compensate for humans "shutting off" reasoning and going fully automatic.

E.g. I do contract work on an LLM-related project where one of the systemic changes introduced - in addition to multiple levels of quality checks - is to require people to type out a given sentence word for word, followed by one word from a set of five or so. Only a minority of submissions get that sentence correct including the final word, despite the system refusing to let you submit unless the initial sentence is correct. Seeing the data has been an absolutely shocking indictment of human reasoning, and these are submissions from a pool of people who have passed reasoning tests.

When I've tested the process myself, it takes only a handful of steps before the tendency is to "drift off", start replacing a word here and there, and fail to get even the initial sentence right without a correction. I shudder to think how bad the results would be without that "jolt" to pull people back to paying attention.

Keeping humans consistently carrying out a learned process is incredibly hard.
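For illustration, a minimal sketch of that kind of gate (the sentence, word set, and function names below are invented, not the actual project's system): the fixed sentence is enforced before submission, while the trailing word is only recorded - which is exactly where the drift still shows up in the data.

    # Hypothetical attention-check gate: the fixed sentence must be reproduced
    # word for word before submission is allowed; the final word, picked from a
    # small set, is captured but not validated.

    EXPECTED_SENTENCE = "I confirm I have checked every step of this task"  # made-up sentence
    FINAL_WORD_SET = {"fully", "partially", "twice", "once", "not"}         # made-up ~5-word set

    def check_submission(typed: str):
        """Return (may_submit, final_word)."""
        words = typed.strip().split()
        expected = EXPECTED_SENTENCE.split()
        if words[:len(expected)] != expected:
            return False, None  # the "jolt": retype until the sentence matches
        final_word = words[len(expected)] if len(words) > len(expected) else None
        # Whether final_word is the *right* member of FINAL_WORD_SET is not
        # enforced here, so attention drift can still surface in that word.
        return True, final_word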