▲ aggregator-ios | a day ago
I tested the E2B and E4B models, and they produce close but inaccurate (non-working) results when generating jq queries from natural language. This matters to me because I work on https://jsonquery.app and would prefer a model that works well with browser inference. gemma-4-26b-a4b-it and gemma-4-31b-it produced accurate results in a few of my tests, but those are 50-60GB in size. Chrome has a developer preview that bundles Gemini Nano (under 2GB); it used to work really well, but it requires a few flags to be enabled manually, and its quality has recently gotten worse in my jq-generation tests.
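For context, a toy example of the kind of natural-language-to-jq task I test with (the prompt and JSON here are made up for illustration): the prompt "names of all users older than 30" should yield a filter like the one below, and the small models often get close but produce a variant that errors out or returns the wrong shape.

```shell
# Hypothetical test case: prompt "names of all users older than 30"
# should produce a jq filter equivalent to this one.
echo '[{"name":"Ada","age":36},{"name":"Bob","age":25}]' \
  | jq -c '[.[] | select(.age > 30) | .name]'
# → ["Ada"]
```

A model output that is off by one token (say, `select(.age) > 30` instead of `select(.age > 30)`) is exactly the "close but non-working" failure mode I keep seeing.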
▲ curioussquirrel | 2 hours ago | parent
Same here. I quickly tested it for code generation, and it produced mostly good code for simple problems, but it sometimes hallucinated words in non-English scripts inside the code.