vermilingua 7 hours ago

Think that means you failed :(

nice_byte 7 hours ago | parent [-]

+1

being cryptic and poorly specified is part of the assignment

just like real code

in fact, it's _still_ better documented and self-contained than most of the problems you'd usually encounter in the wild. pulling on a thread until you end up with a clear picture of what needs to be accomplished is like 90% of the job, very often.

throwaway81523 6 hours ago | parent | next [-]

I didn't see much that was cryptic, except having to click on "perf_takehome.py" without being told to. But 2 hours didn't seem like much to bring the sample code into some kind of test environment, debug it enough to work out the details of its behaviour, read through the reference kernel to get some idea of what the algorithm is doing, read through the simulator to understand the VM instruction set, understand the test harness enough to see how the parallelism works, re-code the algorithm in the VM's machine language while iterating performance tweaks and running simulations, etc.

Basically it's a long enough problem that I'd be annoyed at being asked to do it at home for free, if what I wanted from that was a shot at an interview. If I had time on my hands though, it's something I could see trying for fun.

ithkuil 4 hours ago | parent | next [-]

My first instinct on reading about the problem was to open the "problem.py" file, which states "Read the top of perf_takehome.py for more introduction".

So yeah. They _could_ have written it much more clearly in the readme.

tayo42 5 hours ago | parent | prev | next [-]

2 hours does seem short. It took me half an hour to get through everything you listed and figure out how to get the valu instruction working.

I suspect it would take me another hour to get it implemented, leaving 30 minutes to figure out something clever?

Idk maybe I'm slow or really not qualified.

nice_byte 6 hours ago | parent | prev [-]

it's "cryptic" for an interview problem. e.g. the fact that you have to actually look at the vm implementation instead of having the full documentation of the instruction set from the get go.

throwaway81523 5 hours ago | parent [-]

That seems normal for an interview problem. They put you in front of some already-written code and you have to fix a bug or implement a feature. I've done tons of those in live interviews. So that part didn't bother me. It's mostly the rather large effort cost in the case where the person is a job applicant, vs an unknown and maybe quite low chance of getting hired.

With a live interview, you get past a phone screen, and then the company is investing significant resources in the day or so of engineering time it takes to have people interview you. They won't do that unless they have a serious level of interest in you. The take-home means no investment for the company, so there's a huge imbalance.

There's another thread about this article, which describes an analogous situation: being asked to read AI slop: https://zanlib.dev/blog/reliable-signals-of-honest-intent/

avaer 6 hours ago | parent | prev [-]

It's definitely cleaner than what you'll see in the real world. Research-quality repositories written in partial Chinese with key dependencies missing are common.

IMO the assignment (or rather its purpose) could be improved by making the code significantly worse. Then you're testing the important stuff (dealing with ambiguity) that the AI can't do so well. Probably the reason they didn't do that is that it would make evaluation harder and more costly.