tobylane 5 hours ago
Those two (and more) exist in chrome://flags in Chrome 147. I'm disabling them now, with the expectation that this will prevent the new default. One option I'm leaving at its default is "Use LiteRT-LM runtime for on-device model service inference." Any comment on that?
RaiausderDose 2 hours ago
I'm on Chrome 147 too and disabled:

- "optimization-guide-on-device-model" (Enables optimization guide on device)
- "prompt-api-for-gemini-nano" (Prompt API for Gemini Nano)
- Prompt API for Gemini Nano with Multimodal Input

and deleted weights.bin and the 2025.x folder in "OptGuideOnDeviceModel". Will report if Chrome 148 downloads the model again.
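For anyone looking for that folder: its location depends on the OS, and the paths below are assumptions based on Chrome's usual user-data directories, not confirmed by the comment above. A minimal sketch that only locates the folder and leaves the actual deletion commented out:

```shell
#!/bin/sh
# Probe the usual Chrome user-data locations for the on-device model
# folder ("OptGuideOnDeviceModel"). Paths are assumptions for a default
# install; adjust for Chromium, Beta/Dev channels, or custom profiles.
for dir in \
  "$HOME/.config/google-chrome/OptGuideOnDeviceModel" \
  "$HOME/Library/Application Support/Google/Chrome/OptGuideOnDeviceModel" \
  "$LOCALAPPDATA/Google/Chrome/User Data/OptGuideOnDeviceModel"
do
  if [ -d "$dir" ]; then
    echo "found: $dir"
    # Uncomment to actually remove the downloaded model weights:
    # rm -rf "$dir"
  fi
done
echo "scan complete"
```

Quit Chrome before deleting, or it may hold the files open and simply re-create them on the next update check.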
scriptsmith 5 hours ago
Those flags already exist, but they will default to enabled in 148. That other flag switches to a different, open-source inference engine in place of the one used by default, which (from what I can tell) is closed-source.