LeftHandPath a day ago

I recall having to implement A* to search an n×n character grid in my AI course a few years ago. It took me close to a full day to wrap my head around the concepts, get used to Python (we usually worked in C++), and actually implement the algorithm. Nowadays, an LLM can spit out a working implementation in seconds.
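
For reference, here's a minimal sketch of the kind of thing that assignment probably asked for. The specifics are my assumptions, not the actual assignment: 4-directional movement, '#' cells as walls, uniform move cost, and a Manhattan-distance heuristic (admissible under those rules).

    import heapq

    def astar(grid, start, goal):
        """A* over an n x n character grid; '#' cells are walls.
        start/goal are (row, col) tuples. Returns the path as a
        list of cells, or None if the goal is unreachable."""
        def h(cell):
            # Manhattan distance: admissible for 4-directional moves
            return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

        n = len(grid)
        open_heap = [(h(start), 0, start)]  # entries are (f, g, cell)
        came_from = {}
        g_score = {start: 0}

        while open_heap:
            _, g, cell = heapq.heappop(open_heap)
            if cell == goal:
                # Walk parent pointers back to start to rebuild the path
                path = [cell]
                while cell in came_from:
                    cell = came_from[cell]
                    path.append(cell)
                return path[::-1]
            if g > g_score.get(cell, float("inf")):
                continue  # stale heap entry; a cheaper route was found
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < n and 0 <= nc < n and grid[nr][nc] != '#':
                    ng = g + 1
                    if ng < g_score.get((nr, nc), float("inf")):
                        g_score[(nr, nc)] = ng
                        came_from[(nr, nc)] = cell
                        heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
        return None

    grid = ["S..#",
            ".#.#",
            ".#..",
            "...G"]
    print(astar(grid, (0, 0), (3, 3)))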

I think that's a big part of the issue with tests like HackerRank: LLMs have been trained on most of the DSA problems those questions draw from. Ask an LLM for a truly novel solution, though, and it's much more likely to spit out a garbled mess. For example, earlier today, Google's search AI gave me this nonsense example of how to fix a dangling participle:

> To correct a dangling participle, you can revise the sentence to give the dangling modifier a noun to modify. For example, you can change the sentence "Walking through the kitchen, the smoke alarm was going off" to "Speeding down the hallway, he saw the door come into view".

(The suggested "correction" is an unrelated sentence; an actual fix would give the participle a subject, e.g. "Walking through the kitchen, I heard the smoke alarm going off.")

LLMs have effectively made it impossible to remotely test candidates for crystallized intelligence (e.g. recalling how to write specific algorithms quickly). Maybe the best solution is to measure fluid intelligence instead, or to front-load personality/culture assessments and only rigorously assess coding ability in person, towards the end of the interview cycle.

badgersnake a day ago

So if I asked you how A* works in an interview, you'd be able to explain it. Johnny ChatGPT would not.

deprecative a day ago

In most cases it really doesn't matter. You wanted A* and you got it. Understanding isn't important if the product works.