| ▲ | simonw 6 days ago |
| This model is a LOT of fun. It's absolutely tiny - just a 241MB download - and screamingly fast, and hallucinates wildly about almost everything. Here's one of dozens of results I got for "Generate an SVG of a pelican riding a bicycle". For this one it decided to write a poem:
+-----------------------+
| Pelican Riding Bike |
+-----------------------+
| This is the cat! |
| He's got big wings and a happy tail. |
| He loves to ride his bike! |
+-----------------------+
| Bike lights are shining bright. |
| He's got a shiny top, too! |
| He's ready for adventure! |
+-----------------------+
There are a bunch more attempts in this Gist, some of which do at least include an SVG tag, albeit one that doesn't render anything: https://gist.github.com/simonw/25e7b7afd6a63a2f15db48b3a51ec...
I'm looking forward to seeing people fine-tune this in a way that produces useful output for selected tasks, which should absolutely be feasible. |
|
| ▲ | roughly 6 days ago | parent | next [-] |
| I audibly laughed at this one: https://gist.github.com/simonw/25e7b7afd6a63a2f15db48b3a51ec... where it generates a… poem? Song? And then proceeds to explain how each line contributes to the SVG, concluding with:
> This SVG code provides a clear and visually appealing representation of a pelican riding a bicycle in a scenic landscape. |
| |
| ▲ | icoder 6 days ago | parent [-] | | This reminds me of my interactions lately with ChatGPT, where I gave in to its repeated offer to draw me an electronics diagram. The result was absolute garbage. During the subsequent conversation it kept offering to include any new insights in the diagram, entirely oblivious to its own incompetence. |
|
|
| ▲ | 0x00cl 6 days ago | parent | prev | next [-] |
| I see you are using Ollama's GGUFs. By default it will download the Q4_0 quantization. Try `gemma3:270m-it-bf16` instead, or you can also use the Unsloth GGUFs: `hf.co/unsloth/gemma-3-270m-it-GGUF:16`. You'll get better results. |
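For reference, a minimal sketch of doing that from Python via the `ollama` client package (assumes a local Ollama install; the CLI equivalent is `ollama pull gemma3:270m-it-bf16`):

  import ollama

  # Fetch the bf16 weights instead of the default Q4_0 quantization.
  ollama.pull("gemma3:270m-it-bf16")

  response = ollama.chat(
      model="gemma3:270m-it-bf16",
      messages=[{"role": "user",
                 "content": "Generate an SVG of a pelican riding a bicycle"}],
  )
  print(response["message"]["content"])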
| |
| ▲ | simonw 6 days ago | parent | next [-] | | Good call, I'm trying that one just now in LM Studio (by clicking "Use this model -> LM Studio" on https://huggingface.co/unsloth/gemma-3-270m-it-GGUF and selecting the F16 one). (It did not do noticeably better at my pelican test.) Actually it's worse than that: several of my attempts resulted in infinite loops, spitting out the same text. Maybe that GGUF is a bit broken? | | |
| ▲ | danielhanchen 6 days ago | parent | next [-] | | Oh :( Maybe the settings? Could you try temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0 | | |
| ▲ | canyon289 6 days ago | parent | next [-] | | Daniel, thanks for being here providing technical support as well. Cannot express enough how much we appreciate your all work and partnership. | | | |
| ▲ | simonw 6 days ago | parent | prev [-] | | My tooling only lets me set temperature and top_p, but setting them to those values did seem to avoid the infinite loops, thanks. | | |
| ▲ | danielhanchen 6 days ago | parent [-] | | Oh fantastic, it worked! I was actually trying to see if we can auto-set these within LM Studio (Ollama, for example, has params and template) - not sure if you know how that can be done? :) |
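On the Ollama side, at least, those sampler settings can ship with the model itself via a Modelfile - a sketch (the `:F16` tag is an assumption; use whichever quant tag you actually pulled):

  # Modelfile
  FROM hf.co/unsloth/gemma-3-270m-it-GGUF:F16
  PARAMETER temperature 1.0
  PARAMETER top_k 64
  PARAMETER top_p 0.95
  PARAMETER min_p 0.0

Registering it with `ollama create gemma3-270m-tuned -f Modelfile` then makes those values the defaults for every request.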
|
| |
| ▲ | JLCarveth 6 days ago | parent | prev [-] | | I ran into the same looping issue with that model. | | |
| ▲ | danielhanchen 6 days ago | parent [-] | | Definitely give temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0 a try, and maybe repeat_penalty = 1.1 |
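Applied per request through the `ollama` Python client, those suggestions would look roughly like this (option names follow Ollama's request options; the model tag is an assumption):

  import ollama

  response = ollama.generate(
      model="gemma3:270m-it-bf16",  # whichever tag you pulled
      prompt="Write a two-line poem about pelicans.",
      options={
          "temperature": 1.0,
          "top_k": 64,
          "top_p": 0.95,
          "min_p": 0.0,
          "repeat_penalty": 1.1,  # the suggested nudge against looping output
      },
  )
  print(response["response"])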
|
| |
| ▲ | Patrick_Devine 5 days ago | parent | prev [-] | | We uploaded gemma3:270m-it-q8_0 and gemma3:270m-it-fp16 late last night which have better results. The q4_0 is the QAT model, but we're still looking at it as there are some issues. |
|
|
| ▲ | ertgbnm 6 days ago | parent | prev | next [-] |
| He may generate useless tokens, but boy can he generate a LOT of tokens. |
| |
|
| ▲ | layer8 6 days ago | parent | prev | next [-] |
| > It's absolutely tiny - just a 241MB download
That still requires more than 170 floppy disks for installation. |
| |
| ▲ | freedomben 6 days ago | parent [-] | | Indeed. Requires over 3,000,000 punch cards to store. Not very tiny! | | |
| ▲ | stikypad 6 days ago | parent [-] | | On the plus side, you can decompose your matrices for free using termites. |
|
|
|
| ▲ | mdp2021 6 days ago | parent | prev | next [-] |
| > For this one it decided to write a poem
My first try:
user: "When was Julius Caesar born"
response: "Julius Caesar was born in **Rome**"
Beautiful :D (I do not mean to detract from it - but it's just beautiful. It will require more effort to tame it.) |
| |
|
| ▲ | marinhero 6 days ago | parent | prev | next [-] |
| Serious question: if it hallucinates about almost everything, what's the use case for it? |
| |
| ▲ | simonw 6 days ago | parent | next [-] | | Fine-tuning for specific tasks. I'm hoping to see some good examples of that soon - the blog entry mentions things like structured text extraction, so maybe something like "turn this text about an event into an iCal document" might work? | | |
| ▲ | turnsout 6 days ago | parent | next [-] | | Google helpfully made some docs on how to fine-tune this model [0]. I'm looking forward to giving it a try! [0]: https://ai.google.dev/gemma/docs/core/huggingface_text_full_finetune
| |
| ▲ | CuriouslyC 6 days ago | parent | prev | next [-] | | Fine tuning messes with instruction following and RL'd behavior. I think this is mostly going to be useful for high volume pipelines doing some sort of mundane extraction or transformation. | |
| ▲ | iib 6 days ago | parent | prev [-] | | This is exactly the fine-tuning I am hoping for, or would do myself if I had the skills. I tried it with Gemma 3 270M and, vanilla, it fails spectacularly. Basically it would be the quickAdd[1] endpoint from Google Calendar, but calendar-agnostic. [1] https://developers.google.com/workspace/calendar/api/v3/refe... |
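For the curious, a rough sketch of such a fine-tune with Hugging Face TRL - everything here is an assumption (the `google/gemma-3-270m-it` hub id, the toy dataset, the hyperparameters), a real run would need hundreds of text-to-VEVENT pairs, and the TRL API shifts between versions:

  from datasets import Dataset
  from trl import SFTConfig, SFTTrainer

  # Toy training pairs: free text in, an iCal VEVENT block out.
  examples = [
      {"text": "Event: Lunch with Sam on Friday at noon.\n"
               "BEGIN:VEVENT\nSUMMARY:Lunch with Sam\n"
               "DTSTART:20250822T120000\nEND:VEVENT"},
  ]

  trainer = SFTTrainer(
      model="google/gemma-3-270m-it",  # assumed Hugging Face id
      train_dataset=Dataset.from_list(examples),
      args=SFTConfig(output_dir="gemma-quickadd",
                     max_seq_length=512,
                     num_train_epochs=3),
  )
  trainer.train()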
| |
| ▲ | striking 6 days ago | parent | prev | next [-] | | It's intended for finetuning on your actual usecase, as the article shows. | |
| ▲ | zamadatix 6 days ago | parent | prev | next [-] | | I feel like the blog post, and the GP comment, do a good job of explaining how it's built to be a small model easily fine-tuned for narrow tasks, rather than used for general tasks out of the box. The latter is guaranteed to hallucinate heavily at this size; that doesn't mean every specific task it's fine-tuned for would be. Some examples given were fine-tuning it to efficiently and quickly route a query to the right place to actually be handled, or tuning it to do sentiment analysis of content. An easily fine-tunable tiny model might actually be one of the better uses of local LLMs I've seen yet. Rather than trying to be a small model that's great at everything, it's a tiny model you can quickly tune to do one specific thing decently, extremely fast, and locally on pretty much anything. | |
| ▲ | yifanl 6 days ago | parent | prev | next [-] | | It's funny. Humor is subjective, but if it fits for you, it's arguably more useful than Claude. | |
| ▲ | luckydata 6 days ago | parent | prev | next [-] | | Because that's not the job it was designed to do, and you would know by reading the article. | |
| ▲ | mirekrusin 6 days ago | parent | prev | next [-] | | The same as having a goldfish. You can train it to do a trick I guess. | |
| ▲ | deadbabe 6 days ago | parent | prev | next [-] | | Games where you need NPCs to talk random gibberish. | |
| ▲ | iLoveOncall 6 days ago | parent | prev | next [-] | | Nothing, just like pretty much all models you can run on consumer hardware. | | |
| ▲ | cyanydeez 6 days ago | parent [-] | | This message brought to you by OpenAI: we're useless, but at least there's a pay gate indicating quality! |
| |
| ▲ | numpad0 6 days ago | parent | prev | next [-] | | robotic parrots? | |
| ▲ | rotexo 6 days ago | parent | prev [-] | | An army of troll bots to shift the Overton Window? | | |
|
|
| ▲ | nico 6 days ago | parent | prev | next [-] |
| Could be interesting to use in a RAG setup, and also to fine-tune it. For sure it won't generate great SVGs, but it might be a really good conversational model |
| |
| ▲ | luckydata 6 days ago | parent [-] | | The article says it's not a good conversational model but can be used for data extraction and classification as two examples. |
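A minimal sketch of the RAG idea from the parent comment, leaning on the article's framing (extraction over conversation). Assumptions: a local Ollama install, an embedding model such as `nomic-embed-text` pulled alongside a 270m tag, and the `ollama` Python client:

  import numpy as np
  import ollama

  docs = [
      "Support hours are 9am to 5pm, Monday through Friday.",
      "The office is closed on public holidays.",
  ]

  def embed(text):
      # Older-style embeddings endpoint; newer clients also offer ollama.embed().
      return np.array(ollama.embeddings(model="nomic-embed-text",
                                        prompt=text)["embedding"])

  doc_vecs = [embed(d) for d in docs]

  def retrieve(query):
      # Cosine similarity against every stored document vector.
      q = embed(query)
      sims = [q @ v / (np.linalg.norm(q) * np.linalg.norm(v)) for v in doc_vecs]
      return docs[int(np.argmax(sims))]

  question = "When can I reach support?"
  reply = ollama.chat(
      model="gemma3:270m-it-bf16",  # assumed tag
      messages=[
          {"role": "system", "content": "Answer using only the provided context."},
          {"role": "user",
           "content": f"Context: {retrieve(question)}\n\nQuestion: {question}"},
      ],
  )
  print(reply["message"]["content"])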
|
|
| ▲ | mdp2021 6 days ago | parent | prev | next [-] |
| > For this one it decided to write a poem
Could it be tamed with good role/system prompt crafting? (Besides fine-tuning.) |
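One way to test that, as a sketch via the `ollama` Python client (the model tag is an assumption, and whether a 270M model reliably obeys the instruction is exactly the open question here):

  import ollama

  response = ollama.chat(
      model="gemma3:270m-it-bf16",  # assumed tag
      messages=[
          {"role": "system",
           "content": "You are a terse factual assistant. Answer in one short "
                      "sentence. If unsure, say you don't know."},
          {"role": "user", "content": "When was Julius Caesar born?"},
      ],
      options={"temperature": 1.0, "top_k": 64, "top_p": 0.95},
  )
  print(response["message"]["content"])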
|
| ▲ | campbel 6 days ago | parent | prev | next [-] |
| Do you take requests? We need to see how well this model works with some fine-tuning :D |
|
| ▲ | bobson381 6 days ago | parent | prev | next [-] |
| It's gonna be a customer service agent for Sirius Cybernetics. Share and enjoy! |
|
| ▲ | Balinares 6 days ago | parent | prev | next [-] |
| This is like a kobold to the other models' dragons and I don't hate it. :) |
|
| ▲ | aorloff 6 days ago | parent | prev | next [-] |
| Finally we have a model that's just a tad bit sassy |
|
| ▲ | cyanydeez 6 days ago | parent | prev | next [-] |
| The question is whether you can make a fine-tuned version and spam any given forum within an hour with the most attuned but garbage content. |
|
| ▲ | volkk 6 days ago | parent | prev [-] |
| I was looking at the demo and reading the bedtime story it generated, and even there, there was confusion between the sprite and the cat - it switched subjects instantly, making for a confusing paragraph. What's the point of this model? |