xyzal 4 hours ago

I use AIs for coding with moderate success, but the more I work with them, the more convinced I am that "intelligence on tap" is a pipe dream, especially in domains that require logical thinking in novel (i.e. not-in-dataset) contexts.

Recently, I tasked one with studying a new Czech building permit law in conjunction with some waste disposal regulations, and the result was just tragic. The model (opus 4.6) could not stop drawing conclusions from obsolete regulations in its training dataset, even when given the full text of the new law. The usual "you are totally right" also applied, and its conclusions were, most of the time, obviously wrong even to a human with cursory knowledge of the subject.

I ended up studying the relevant regulations myself over the weekend.

ithkuil 3 hours ago | parent | next [-]

I wonder what percentage of the job space truly depends on the current edge we have over machines.

I think it's reasonable to worry that, well before machines are more reliable than the average human (let alone more reliable than a highly trained human), they can pose a significant disruption to the job market, which will send shockwaves throughout society.

xyzal 3 hours ago | parent [-]

That is why we need functioning states -- free markets won't save you in such a case. Though I've found this hard to explain, especially to U.S. people, who put "regulation" on par with the f-word :)

lukan 4 hours ago | parent | prev [-]

"The model (opus 4.6) just could not stop drawing conclusions from obsolete regulations in its training dataset"

To be fair, humans are also often like this. If some rule/law/model is deeply ingrained in them, they often cannot stop thinking in terms of it, even when they are clearly in a new context (like a new country).

xyzal 3 hours ago | parent [-]

When the mandatory speed limit in my country was reduced from 60km/h to 50km/h in cities, 95 percent of people instantly adapted.

lukan 3 hours ago | parent [-]

But that is pretty much the same rule, just with the number slightly adjusted. What do you think would happen if they switched traffic from driving on the right to driving on the left?

xyzal 3 hours ago | parent [-]

Heh, that would surely be funny :) But most people at least know there is a new permit law, and if they are not sure, they know to seek expert guidance. The model, even with explicit notification, is unable to reflect on this fact. How is it supposed to be useful, then?

lukan 3 hours ago | parent [-]

Oh, most people would know in theory, for sure, but once they start driving, habit kicks in and they end up in the wrong lane pretty quickly.

At least that is what happened to me in Australia. I had only a year of driving practice back then, but driving on the right side was already deeply ingrained, and I had to be really aware of what I was doing.

But to be clear, I am not arguing that models have real understanding of anything -- I know they don't. My point was that humans can be similar in pretending to have understood something: if their core model is different, they quickly fall back into old patterns.