| ▲ | OutOfHere 3 hours ago |
| Gemma 4 is a strongly censored model, so much so that it refused to answer medical and health related questions, even basic ones. No one should be using it, and if this is the best that Google can do, it should stop now. Other models do not have such ridiculous self-imposed problems. |
|
| ▲ | vorticalbox 12 minutes ago | parent | next [-] |
| You can get abliterated versions that have no (or very limited) refusals. I tend to use Huihuiai versions. |
|
| ▲ | mft_ 2 hours ago | parent | prev | next [-] |
| I suspect a possible future of local models is extreme specialisation - you load a Python-expert model for Python coding, do your shopping with a model focused just on this task, have a model specialised in speech-to-text plus automation to run your smart home, and so on. This makes sense: running a huge model for a task that only uses a small fraction of its ability is wasteful, and home hardware especially isn't suited to this wastefulness. I'd rather have multiple models with a deep narrow ability in particular areas, than a general wide shallow uncertain ability. Anyway, is it possible that this may be what lies behind Gemma 4's "censoring"? As in, Google took a deliberate choice to focus its training on certain domains, and incorporated the censor to prevent it answering about topics it hasn't been trained on? Or maybe they're just being sensibly cautious: asking even the top models for critical health advice is risky; asking a 32B model probably orders of magnitude moreso. |
| |
| ▲ | OutOfHere 2 hours ago | parent [-] | | > is it possible that this may be what lies behind Gemma 4's "censoring" Your explanation would make sense if various other rare domains were also censored, but they aren't, so it doesn't. > asking even the top models for critical health advice is risky Not asking, and living in ignorance, is riskier. For high-stakes questions, of course I'd want references that only an online model like ChatGPT or Gemini, etc. would be able to find. If I am asking a local model for health advice, odds are that it is because I am traveling and am temporarily offline, or am preparing off-grid infrastructure. In both cases I definitely require a best-effort answer. I also require the model to be able to tell when it doesn't know the answer. If you would, ignore health advice for a moment, and switch to electrical advice. Imagine I am putting together electrical infrastructure, and the model gives me bad advice, risking electrocution and/or a serious fire. Why is electrical advice not censored, and what makes it not be high-stakes!? The logic is the same. For the record, various open-source Asian models do not have any such problem, so I would rather use them. | | |
| ▲ | mft_ 16 minutes ago | parent | next [-] | | > Not asking, and living in ignorance, is riskier. For high-stakes questions, of course I'd want references that only an online model like ChatGPT or Gemini, etc. would be able to find. If I am asking a local model for health advice, odds are that it is because I am traveling and am temporarily offline, or am preparing off-grid infrastructure. In both cases I definitely require a best-effort answer. I also require the model to be able to tell when it doesn't know the answer. If I was prepping, I’d want e.g. Wikipedia available offline and default to human-assisted decision-making, and definitely not rely on a 31B parameter model. To be reductive, the ‘brain’ of any of these models is essentially a compression blob in an incomprehensible format. The bigger the delta between the input and the output model size, the lossier the compression must be. It therefore follows (for me at least) that there’s a correlation between the risk of the question and the size of model I’d trust to answer it. And health questions are arguably some of the most sensitive - lots of input data required for a full understanding, vs. big downsides of inaccurate advice. > If you would, ignore health advice for a moment, and switch to electrical advice. Imagine I am putting together electrical infrastructure, and the model gives me bad advice, risking electrocution and/or a serious fire. Why is electrical advice not censored, and what makes it not be high-stakes!? The logic is the same. You’re correct that it’s possible to find other risky areas that might not be currently censored. Maybe this is deliberate (maybe the input data needed for expertise in electrical engineering is smaller?) or maybe this is just an evolving area and human health questions are an obvious first area to address? Either way, I’m not trusting a small model with detailed health questions, detailed electrical questions, or the best way to fold a parachute for base jumping. :) (Although, if in the future there’s a Gemma-5-Health 32B and a Gemma-5-Electricity 32B, and so on, then maybe this will change.) | |
| ▲ | dist-epoch an hour ago | parent | prev [-] | | > Imagine I am putting together electrical infrastructure, and the model gives me bad advice, risking electrocution and/or a serious fire That's a weird demand from models. What next, "Imagine I'm doing brain surgery and the model gives me bad advice", "Imagine I'm a judge delivering a sentencing and the model gives me bad advice", ... | | |
| ▲ | OutOfHere 29 minutes ago | parent [-] | | Requesting electrical advice is not a weird ask at all. If writing sophisticated code requires skill, then so does electrical work, and one doesn't require more or less skill than the other. I would expect that the top-ranked thinking models are wholly capable of offering correct advice on the topic. The issues arise more from the user's inability to input all applicable context which can affect the decision and output. All else being equal, bad electrical work is 10x more likely to be a result of not adequately consulting AI than from consulting AI. Secondly, the primary point was about censorship, not accuracy, so let's not get distracted. | | |
| ▲ | doubled112 24 minutes ago | parent [-] | | Bad electrical work is more likely to burn your house down than some bad code. Bad medical advice is different again. I assumed it was more about risk management/liability than censorship. |
|
|
|
|
|
| ▲ | fortyseven 9 minutes ago | parent | prev | next [-] |
| Word. A great number of my medical or legal queries are actually answered, but come with a disclaimer, often at the end of the inference. (I'd offer up some examples, but I'm not at the desk.) I also find that you can coerce a wide spectrum of otherwise declined queries by editing its initial rejection into the start of an answer. For example changing the "I'm sorry I can't answer that..." response to "Here's how..." And then resubmitting the inference, allowing it to continue from there. It's not perfect, sometimes it takes multiple attempts, but it does work. At least in my experience. (This isn't Gemma-specific tip, either. Nearly every model I've tried this with tends to bend quite a bit doing this.) |
|
| ▲ | tgv 3 hours ago | parent | prev [-] |
| I don't quite get why you feel so strongly about it that this should be a deal breaker for everyone. It's really much better than a wrong answer, for everyone. |
| |
| ▲ | OutOfHere 2 hours ago | parent [-] | | > It's really much better than a wrong answer That is a bad premise and a false dichotomy, because most medical questions are simple, with well-known standard answers. ChatGPT and Gemini answer such questions correctly, also finding glaring omissions by doctors, even without having to look up information. As for the medical questions that are not simple, the ones that require looking up information, the model should in principle be able to respond that it does not know the answer when this is truthfully the case, implying that the answer, or a simple extrapoloation thereof, was not in its training data. |
|