badsectoracula 4 hours ago
Pretty much all of my LLM usage has been with Mistral's open source models running on my PC. I do not do full agentic coding, as when I tried it with Devstral Small 2 it was a bit too slow (though if I could get 2-3 times the speed of my PC from a second computer it would be a different story, and AFAIK that is doable if I were willing to spend $2-3k on it). However, I've used Mistral's models for spelling and grammar checks[0], translations[1][2], summaries[3], and trying to figure out whether common email SPAM-avoidance tricks are pointless in the LLM age :-P [4]. FWIW, the tool you can see in the shots is a Tcl/Tk script calling a llama.cpp-based command-line utility I threw together some time ago when experimenting with llama.cpp.

I've also used Devstral Small to make a simple raytracer[5][6] (it was made via the "classic" chat by copy/pasting code, not any agentic approach, and I did fix bits of it in the process) and a quick-and-dirty "games database" in Python+Flask+Sqlite for my own use (mainly a game backlog DB :-P). I also use it to make various small snippets and generate boilerplate (e.g. I have an enum in C and want a function that prints a name for each enum value, or that matches a string read from a JSON file to the appropriate enum value), to "translate" between languages (I recently had it convert some matrix code I had written in Pascal into C), etc.

[0] https://i.imgur.com/f4OrNI5.png
[1] https://i.imgur.com/Zac3P4t.png
[2] https://i.imgur.com/jPYYKCd.png
[3] https://i.imgur.com/WZGfCdq.png
[4] https://i.imgur.com/ytYkyQW.png
[5] https://i.imgur.com/FevOm0o.png (screenshot)
[6] https://app.filen.io/#/d/e05ae468-6741-453c-a18d-e83dcc3de92... (C code)