xyzal 4 hours ago
I use AIs for coding with moderate success, but the more I work with them, the more I am convinced that "intelligence on tap" is a pipe dream, especially in domains where logical thinking in novel (i.e. not-in-dataset) contexts is required. Recently, I tasked one with studying a new Czech building permit law in conjunction with some waste disposal regulations, and the result was just tragic. The model (opus 4.6) could not stop drawing conclusions from obsolete regulations in its training data, even when given the full text of the new law. The usual "you are totally right" also applied, and its conclusions were, most of the time, obviously wrong even to a human with only cursory knowledge of the subject. I ended up studying the relevant regulations myself over the weekend.
ithkuil 3 hours ago
I wonder what percentage of the job space truly depends on the current edge we have over machines. I think it's reasonable to worry that, well before machines become more reliable than the average human (let alone a highly trained one), they could significantly disrupt the job market and send shockwaves through society.
lukan 4 hours ago
"The model (opus 4.6) just could not stop drawing conclusions from obsolete regulations in its training dataset" To be fair, humans are also often like this. If some rule/law/model was deeply ingrained into them, they often cannot stop thinking in terms of that rule, even if they are clearly in a new context (like a new country). | |||||||||||||||||||||||||||||||||||