simedw 3 days ago

Thanks for sharing; you clearly spent a lot of time making this easy to digest. I especially like the tokens-to-embedding visualisation.

I recently had some trouble converting a HF transformer I trained with PyTorch to Core ML. I just couldn’t get the KV cache to work, which made it unusably slow after 50 tokens…

samwho 2 days ago | parent [-]

Thank you so much <3

Yes, I recently wrote https://github.com/samwho/llmwalk and had a similar experience with cache vs no cache. It’s so impactful.
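To see why the cache matters so much, here's a toy decode loop (pure NumPy; all names and shapes are illustrative, not from any real library). Without a cache, every step re-projects keys and values for the whole prefix, so the projection count grows quadratically; with a cache, each token is projected once:

```python
import numpy as np

d = 8  # head dimension for a toy single-head attention
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

def attend(q, K, V):
    # Scaled dot-product attention for a single query vector.
    scores = K @ q / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

def decode_no_cache(xs):
    # Recomputes K and V for the entire prefix at every step: O(n^2) projections.
    projections = 0
    for t in range(1, len(xs) + 1):
        prefix = np.stack(xs[:t])
        K, V = prefix @ Wk, prefix @ Wv  # t key + t value projections, redone each step
        projections += 2 * t
        attend(xs[t - 1] @ Wq, K, V)
    return projections

def decode_with_cache(xs):
    # Projects each token once and appends to a growing cache: O(n) projections.
    K_rows, V_rows, projections = [], [], 0
    for x in xs:
        K_rows.append(x @ Wk)
        V_rows.append(x @ Wv)
        projections += 2
        attend(x @ Wq, np.stack(K_rows), np.stack(V_rows))
    return projections

xs = [rng.standard_normal(d) for _ in range(50)]
print(decode_no_cache(xs), decode_with_cache(xs))  # 2550 vs 100
```

At 50 tokens the uncached loop has already done 25x the projection work, which matches the "unusably slow after 50 tokens" experience above.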

mrgaro 11 hours ago | parent [-]

Hopefully you can write the next article you teased, about how the feedforward and output layers work. The article was super helpful for getting a better understanding of how GPT-style LLMs work!

samwho 8 hours ago | parent [-]

Yeah! It’s planned for sure. It won’t be the direct next one, though. I’m taking a detour into another aspect of LLMs first.

I’m really glad you liked it, and seriously, the resources I link at the end are fantastic.