whattheheckheck 4 days ago

Why is it better

forgotpwd16 4 days ago | parent | next [-]

Cleaner, more straightforward, more compact code, and considered complete in its scope (i.e. implement backpropagation with a PyTorch-y API and train a neural network with it). MyTorch appears to be an author's self-experiment without a concrete vision/plan. That's better for the author but worse for outsiders/readers.

P.S. The course goes far beyond micrograd, to makemore (transformers), minbpe (tokenization), and nanoGPT (LLM training/loading).
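To make the "backpropagation with a PyTorch-y API" point concrete, here is a minimal sketch in the spirit of a micrograd-style scalar autograd engine (an illustrative reconstruction, not the actual micrograd source): each `Value` records its inputs and a local backward rule, and `backward()` replays those rules in reverse topological order.

```python
class Value:
    """A scalar that tracks its computation graph for reverse-mode autodiff."""

    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None   # local gradient rule, set by the op
        self._prev = set(_children)     # parents in the computation graph

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            # d(a+b)/da = d(a+b)/db = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            # d(a*b)/da = b, d(a*b)/db = a
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply chain rule in reverse.
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

x = Value(3.0)
y = x * x + x   # y = x^2 + x, so dy/dx = 2x + 1 = 7 at x = 3
y.backward()
print(x.grad)   # 7.0
```

The whole point of the exercise is that ~50 lines like these are enough to train a small MLP by gradient descent, which is why the course version is judged on clarity rather than performance.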

tfsh 4 days ago | parent | prev [-]

Because it's an acclaimed, often-cited course by a preeminent AI researcher (and founding member of OAI) rather than four undocumented Python files.

gregjw 4 days ago | parent | next [-]

It being acclaimed is a poor measure of success; there's always room for improvement. How about some objective comparisons?

nurettin 4 days ago | parent | prev | next [-]

Objective measures like branch depth, execution speed, memory use and correctness of the results be damned.

CamperBob2 4 days ago | parent [-]

Karpathy's implementation is explicitly for teaching purposes. It's meant to be taken in alongside his videos, which are pretty awesome.

geremiiah 4 days ago | parent | prev [-]

Ironically, the reason Karpathy's is better is that he livecoded it, so I can be sure it's not some LLM vomit. Unfortunately, we are now inundated with newbies posting their projects/tutorials/guides in the hopes that doing so will catch the eye of a recruiter and land them a high-paying AI job. That's not so bad in itself, except that most of these people are completely clueless and posting AI slop.

iguana2000 4 days ago | parent [-]

Haha, couldn't agree with you more. This, however, isn't AI slop. You can see in the commit history that this is from 3 years ago.