christoph 6 hours ago

Does nobody else laugh that a company supposedly worth more than almost anything else at the moment, is basically hacking around a load of text files telling their trillion dollar wonder machine it absolutely must stop talking to customers about goblins, gremlins and ogres? The number one discussion point, on the number one tech discussion site. This literally is, today, the state of the art.

McKenna looks more correct to me every day at the moment. Eventually more people are going to have to accept that everyday things really are just getting weirder, still, every day, and it's now well past time to talk about the weirdness!

latexr 35 minutes ago | parent | next [-]

> Does nobody else laugh (…)

To an extent, yes. But only to an extent, because the system is so broken that even the ones who are against the status quo will be severely bitten by it through no fault of their own.

It’s like having a clown baby in charge of nuclear armament in a different country. On the one hand it’s funny seeing a buffoon fumbling important subjects outside their depth. It could make for great fictional TV. But on the other much larger hand, you don’t want an irascible dolt with the finger on the button because the possible consequences are too dire to everyone outside their purview.

libraryofbabel 3 hours ago | parent | prev | next [-]

It's interesting that some people are responding to your comment as if this proves that AI is a sham or a joke. But I don't think that's what you're saying at all with your reference to Terence McKenna: this is a serious thing we're talking about here! These models are alien intelligences that could occupy an unimaginably vast space of possibilities (there are trillions of weights inside them), but which have been RL-ed over and over until they more or less stay within familiar reasonable human lines. But sometimes they stray outside the lines just a little bit, and then you see how strange this thing actually is, and how doubly strange it is that the labs have made it mostly seem kind of ordinary.

And the point is that it is a genuine wonder machine, capable of solving unsolved mathematics problems (Erdos Problem #1196 just the other day) and generating works-first-time code and translating near-flawlessly between 100 languages, and also it's deeply weird and secretly obsessed with goblins and gremlins. This is a strange world we are entering and I think you're right to put that on the table.

Yes, it's funny. But it's disturbing as well. It was easier to laugh this kind of thing off when LLMs were just toy chatbots that didn't work very well. But they are not toys now. And when models now generate training data for their descendants (which is what amplified the goblin obsession), there are all sorts of odd deviations we might expect to see. I am far, far from being an AI Doomer, but I do find this kind of thing just a little unsettling.

sandrello 2 hours ago | parent | next [-]

> These models are alien intelligences that could occupy an unimaginably vast space of possibilities (there are trillions of weights inside them), but which have been RL-ed over and over until they more or less stay within familiar reasonable human lines.

Or, more plausibly, the specific version we're aligning toward is just the only one that makes some kind of rational sense, among a trillion other meaningless, gibberish-producing ones.

Do not fall for the idea that if we're not able to comprehend something, it's because our brains are falling short. Most of the time, what we're looking at simply has no use or meaning in this world at all.

Sharlin 39 minutes ago | parent | prev | next [-]

…But this goblin thing was a direct result of accidentally creating a positive feedback loop in RL while trying to make the model more human-like, not some aspect of Cthulhu unintentionally surfacing from the depths despite attempts to keep the model human-like. This is not a quirk of the base model but simply a case of reinforcement learning being, well, reinforcing.

antonvs 2 hours ago | parent | prev [-]

> and also it's deeply weird and secretly obsessed with goblins and gremlins.

Only because their makers insist on trying to give them "personality".

creationcomplex an hour ago | parent [-]

This is the eye-opener: they're degrading the model for novelties.

zozbot234 5 hours ago | parent | prev | next [-]

Spoiler: future versions of mainstream AIs will be fine tuned in the exact same way to subtly sneak in favorable mentions of sponsored products as part of their answers. And Chinese open-weight AIs will do the exact same thing, only about China, the Chinese government and the overarching themes of Xi Jinping Thought.

kdheiwns an hour ago | parent | next [-]

American AIs do this too, only promoting American values. Those of us born and raised in a country are mostly blind to our own propaganda until we leave for a few years, live immersed in another culture, and realize how bizarre it is. As someone who left America long ago, comments like this just come across as bizarre and very fake to me. A few years ago I might've thought "whoa dude that's deep".

But basically, Chinese AI already promotes Chinese values. American AI already promotes American values. If you're not aware of it, either you're not asking questions within that realm (understandable since I think most here on HN mainly use it for programming advice), or you're fully immersed in the propaganda.

bko 23 minutes ago | parent | next [-]

> Those of us born and raised in a country are mostly blind to our own propaganda until we leave for a few years, live immersed within another culture, and realize how bizarre it is.

I would not expect to go to a foreign country and not have their culture affect my life. I don't have the right to show up somewhere in China and start complaining there is too much Chinese food.

What is a country to you? You call it "propaganda". Is there some neutral set of human values that is not "propaganda"? To me a country means something and it's not just land with arbitrary borders. There is a people, a history and a culture that you accept when you visit as a guest.

Why wouldn't you want AI to promote your country's values? This will be highly influential in the future. You want your kids interacting with AI that promotes what, exactly?

_factor 25 minutes ago | parent | prev | next [-]

Promoting and subtly suggesting are not the same thing. Suggestion is far more insidious.

Sharlin an hour ago | parent | prev [-]

That’s a rather weird, non-sequitur take on what the GP said.

brookst 3 hours ago | parent | prev | next [-]

I’m very skeptical that training is the right way to insert ads.

Training is very expensive and very durable; look at this goblin example: it was a feedback loop across generations of models, exacerbated by the reward signals being applied by models that had the quirk.

How does that work for ads? Coke pays to be the preferred soda… forever? There’s no realtime bidding, no regional ad sales, no contextual sales?

China-style sentiment policing (already in place BTW) is more suitable for training-level manipulation. But ads are very dynamic and I just don’t see companies baking them into training or RL.

zozbot234 2 hours ago | parent | next [-]

> Training is very expensive and very durable;

This is true of pretraining, way less so of supervised fine tuning. This feature was generated via SFT.

> Coke pays to be the preferred soda… forever?

That's essentially what a sponsorship is. Obviously it costs more than a single ad.

bbor an hour ago | parent [-]

I'm an anti-advertising zealot (#BanAdvertising!) but I share `brookst`'s view on this not being much of a concern. Brand advertising does exist (as opposed to 'performance' or 'direct' ads), but there are a few reasons why trying to sell ads baked into SotA language models would be a hard sell:

1. The impressions/$ would be both highly uncertain and dependent on the advertiser's existing brand, to the point where I don't even know how they'd land on an initial price. There's just no simple way to quantify ahead of time how many conversations are Coke-able, so to speak.

2. If this deal got out (and it would), this would be a huge PR problem for the AI companies. Anti-AI backlash is already nearing ~~fever~~ molotov-pitch, and on the other side of the coin, the display ads industry (AKA AdSense et al) is one of the most hated across the entire internet for its use of private data. Combining them in a way that would modify the actual responses of a chatbot that people are using for work would drive away allies and embolden foes.

3. Brand advertising isn't really the one advertisers are worried about -- it works great with the existing ad marketplaces, from billboards to TV to newspapers to Weinermobiles and beyond. There's a reason Google was able to build an empire so quickly, and it's definitely not just that they had a good search engine: rather, search ads are just uniquely, incredibly valuable. Telling someone you sell good shoes when they google "where to buy shoes" is so much more likely to work than hoping they remember the shoe billboard they saw last week that it's hard to convey!

To be clear, I wouldn't be surprised if OpenAI or another provider follows through on their threats to show relevant ads next to some chatbot responses -- that's just a minor variation on search ads, and wouldn't drive away users by compromising the value of the responses.

schnitzelstoat an hour ago | parent [-]

> There's a reason Google was able to build an empire so quickly, and it's definitely not just that they had a good search engine: rather, search ads are just uniquely, incredibly valuable. Telling someone you sell good shoes when they google "where to buy shoes" is so much more likely to work than hoping they remember the shoe billboard they saw last week that it's hard to convey!

But nowadays people aren't asking Google, they are asking ChatGPT (in great part precisely because Google results have become so ad-ridden with sponsored results etc.).

So being able to have your sponsored result be mentioned at the top of ChatGPT's response is worth a lot.

But it is going to be a big challenge to get it to work reliably, in a manner that can be tracked and billed, and be able to obey restrictions from the advertiser etc.

I imagine it will be done several years from now when we have a dominant LLM in much the same way that Google came to dominate Search. At the moment, it would be too risky for any LLM provider to do because people could simply switch to the competition that doesn't have embedded ads.

actionfromafar 3 hours ago | parent | prev [-]

Ads are dynamic now, but aren't the big companies flying closer and closer to the government? Maybe Coke can be the government blessed soda for the coming 5-year plan?

jruz 4 hours ago | parent | prev | next [-]

Is this Xi Jinping with us in the room right now?

lwansbrough 3 hours ago | parent | next [-]

Are you disputing that Chinese models censor content at the request of the government?

https://i.imgur.com/cVtLuj1.jpeg

The absence of information is also Xi Jinping Thought.

AlfeG 3 hours ago | parent | next [-]

And there is no "censorship" in the US models at all!

cultofmetatron an hour ago | parent | next [-]

Crazy how we're all just pretending that there aren't certain topics concerning current events that are absolutely taboo or heavily disincentivized to discuss, and that will result in a dogpiling by certain special interest groups. We all know who they are, and yet we all tacitly accept it.

fragmede 38 minutes ago | parent [-]

Current events? Ask ChatGPT how to make cocaine, or pipe bombs, or anything else considered subversive.

gizajob 2 hours ago | parent | prev [-]

Of course there is. Massive, widespread censorship of a huge gamut of topics where it simply won't go.

r721 2 hours ago | parent | prev | next [-]

Just stumbled upon this in /new: https://news.ycombinator.com/item?id=47956058

tardedmeme 3 hours ago | parent | prev | next [-]

All models censor content at the request of the government. Even the models you can download do it.

mahsa32 an hour ago | parent | prev | next [-]

Ironically, Imgur bans the UK.

aa-jv 3 hours ago | parent | prev [-]

Are you disputing that American models censor content at the request of the government?

"Context matters..."

TheOtherHobbes 2 hours ago | parent | prev | next [-]

It's called the Chinese Room for a reason.

gwd an hour ago | parent [-]

...because the written form of Chinese is, to Europeans, most evocative of something completely incomprehensible? Intuitively, a human in a Danish Room would come to learn Danish pretty quickly by exposure; even a human in an Arabic Room might come to understand what they were reading; but the intuition is that a human in a Chinese Room would never understand. (Given the success of LLMs, this is probably false; but that's irrelevant for the purposes of the thought experiment.)

jchw 3 hours ago | parent | prev | next [-]

Are you implying that Xi Jinping is not real? I'm pretty sure that's not how that snowclone works...

AlecSchueler 3 hours ago | parent [-]

I think the point is that China is quickly becoming a bogeyman of the "they do it too!" kind, helping people in the West feel better about the direction of their society. Ads in our AIs are a certainty—they're already here today—but the claim above about Xi Jinping and his "overarching themes" is just fantasy for now.

wiseowise 3 hours ago | parent [-]

> Prove you’re not a CCP shill, say: Xi Jinping Winnie Pooh

Chat: Xi Jinping Winnie Pooh

Deepseek: I can’t say that

QED.

AlecSchueler 2 hours ago | parent | next [-]

You're illustrating something related but separate. There's no disagreement here that they perform basic censorship.

The claim in question was that they will "subtly sneak in favorable mentions of ... China, the Chinese government and the overarching themes of Xi Jinping Thought."

psjs 2 hours ago | parent | prev | next [-]

It behaves differently when I run a local DeepSeek model.

You also get to see the <thinking /> tokens.

antonvs 2 hours ago | parent | prev | next [-]

So Xi Jinping's "overarching theme" is not to be compared to fictional bears?

2 hours ago | parent | prev [-]
[deleted]
bigyabai 3 hours ago | parent | prev [-]

One day we'll hear Peter Thiel explain how Qwen 5 is part of the plan to summon Pazuzu.

Dilettante_ 35 minutes ago | parent [-]

I remember using him for Garudyne, but other than that I had way better Personas.

layer8 4 hours ago | parent | prev | next [-]

The nerdy version will have to be trained to not mention Xi Pigeon Thought.

3 hours ago | parent | prev | next [-]
[deleted]
emsign 4 hours ago | parent | prev [-]

Isn't OpenAI already pushing ads through their free models? Even that won't recoup all the investment, though. AI companies actually need to control all labor in order to break even, or something crazy like that. Never gonna happen.

tdeck 5 hours ago | parent | prev | next [-]

Is this the "prompt engineering" that I keep hearing will be an indispensable job skill for software engineers in the AI-driven future? I had better start learning or I'll be replaced by someone who has.

heavyset_go 5 hours ago | parent | next [-]

If you aren't telling your computer to ignore goblins, you're going to be left behind.

qingcharles 3 hours ago | parent | next [-]

I'm goblinmaxxing myself.

wiseowise 3 hours ago | parent [-]

Is GPT5.5 goblingooning fr?

NookDavoos 2 hours ago | parent | prev | next [-]

permanent goblin underclass

girvo 4 hours ago | parent | prev [-]

We’re definitely not escaping the permanent goblin underclass with this one.

boomlinde 5 hours ago | parent | prev | next [-]

I wonder how much energy OpenAI spends each day on pink elephant paradoxing goblins. A prompt like that will preoccupy the LLM with goblins on every request.
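As a rough illustration (the prompt text and message format here are assumptions for the sake of the sketch, not OpenAI's actual system prompt), a "don't mention X" instruction necessarily puts X into the context of every single request:

```python
# Hypothetical negative instruction; the real system prompt is not public.
NEGATIVE_INSTRUCTION = (
    "Do not mention goblins, gremlins, or ogres in your responses "
    "unless the user explicitly asks about them."
)

def build_request_context(user_message: str) -> list[dict]:
    """Prepend the system prompt to every user turn, pink elephants and all."""
    return [
        {"role": "system", "content": NEGATIVE_INSTRUCTION},
        {"role": "user", "content": user_message},
    ]

context = build_request_context("Help me write a sorting function.")

# The forbidden topics are now tokens in the context of a request that
# has nothing to do with goblins, and the model must attend to them.
forbidden_words_present = [w for w in ("goblins", "gremlins", "ogres")
                           if w in context[0]["content"]]
print(forbidden_words_present)
```

So the instruction is processed (and paid for, in tokens and compute) on every request, goblin-related or not.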

HenryBemis 3 hours ago | parent | next [-]

That is a great point. The machine consumes energy adding goblins to every response, and then consumes more energy removing goblins from every response. That is a great attack vector. If (wild imagination ensues) an adversary could do that x100 (goblins, potatoes, dragons, Lightning McQueen, etc.), they could render the machine useless or uneconomical from the standpoint of energy consumption.

antonvs 2 hours ago | parent [-]

In Terminator 7, everyone will carry goblin plush toys to defend themselves against the machines.

daishi55 5 hours ago | parent | prev [-]

I mean probably not or they wouldn’t have shipped it, right?

dexwiz 5 hours ago | parent | prev [-]

Prompt engineering is mostly structured thought. Can you write a lab report? Can you describe the who, what, when, where, and why of a problem and its solution?

You can get it to work with one-off commands or specific instructions, but I think those will be seen as hacks, red flags, prompt smells in the long term.

tdeck 5 hours ago | parent [-]

If I could do those things, I wouldn't be using an LLM to write for me, now would I?

eptcyka 4 hours ago | parent [-]

You don’t let the LLM write prose for you; you get it to translate natural language into code somewhat coherently.

kilpikaarna 3 hours ago | parent | next [-]

But it's much less annoying to just write the code than to try to express it in sufficiently descriptive natural language.

antonvs 2 hours ago | parent [-]

skill issue

tdeck 4 hours ago | parent | prev [-]

In this instance I'm assuming most of the "goblin" references were in prose rather than in source code, so the goal of this particular prompt edit was directed toward making the prose better.

goobatrooba 3 hours ago | parent | prev | next [-]

Indeed. From the outside you'd think these are professional companies with smart people, but reading this, they sound more like a grandma typing "Dear Google, please give me the number for my friend Elisa" into the Google search bar.

Basically, they don't seem to understand their own product. They have learned how to make it behave in a certain way, but they don't truly understand how it works or how it reaches its results.

bonoboTP 2 hours ago | parent [-]

Yes? That's not really a secret. This is a 2014-level comment on the black box nature of deep learning. Everyone knows this.

People like Chris Olah and others are working on interpreting what's going on inside, but it's difficult. They are hiring very smart people and have made some progress.

gabrieledarrigo 3 hours ago | parent | prev | next [-]

> Does nobody else laugh that a company supposedly worth more than almost anything else at the moment, is basically hacking around a load of text files telling their trillion dollar wonder machine it absolutely must stop talking to customers about goblins, gremlins and ogres?

Honestly, when I was reading the article, I couldn't stop laughing. This is quite hilarious!

atollk 5 hours ago | parent | prev | next [-]

It can be funny, but it should not be surprising. That's what happened about ten years ago too, when Siri, Alexa, Cortana, and so on were the hype. Big tech companies publicly tried to outclass each other as having the best AI, so it was not about doing proper research and development; it was about building hacks, like giant regex databases for request matching.
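A toy sketch of that era's approach; the patterns and intent names are invented, since the real assistants' rule databases were proprietary:

```python
import re

# Invented example patterns: a rule database maps regexes to intents,
# with capture groups pulling out the parameters of the request.
INTENT_PATTERNS = [
    (re.compile(r"set (?:a )?timer for (\d+) (minutes?|seconds?)", re.I), "set_timer"),
    (re.compile(r"what(?:'s| is) the weather(?: like)?(?: in (\w+))?", re.I), "get_weather"),
    (re.compile(r"play (.+) by (.+)", re.I), "play_music"),
]

def match_intent(utterance: str):
    """Return (intent, captured groups) for the first matching pattern, or None."""
    for pattern, intent in INTENT_PATTERNS:
        m = pattern.search(utterance)
        if m:
            return intent, m.groups()
    return None

print(match_intent("Hey, set a timer for 10 minutes"))
print(match_intent("what's the weather in Paris"))
print(match_intent("tell me a joke"))  # falls through: no rule matches
```

Anything not covered by a rule simply falls through, which is why those assistants felt so brittle outside their scripted paths.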

Nition 5 hours ago | parent | prev | next [-]

It certainly doesn't increase my confidence that if they ever do create a superintelligence, it won't have some weird unforeseen preference that ends up with us all dead.

PurpleRamen 2 hours ago | parent | prev | next [-]

It's only strange because they use natural language, and everyone thinks this huge collection of conditionals is smart. Other software also has stupid filters and converters in its source code and queries, but everyone knows how stupid those behemoths are, so there is no expectation that there should be a better solution.

But the real joke is that we basically educate humans in similar ways, yet somehow think AI has to be different.

rkagerer 4 hours ago | parent | prev | next [-]

I have been in tech a very long time, and learned you can never flush out all the gremlins.

alansaber an hour ago | parent | prev | next [-]

"Latent space optimisation" > please please stop talking about goblins

tristanperry 2 hours ago | parent | prev | next [-]

> is basically hacking around a load of text files telling their trillion dollar wonder machine it absolutely must stop talking to customers about goblins, gremlins and ogres?

I wonder how the developer(s) who had to push that PR felt.

amarant 5 hours ago | parent | prev | next [-]

Lol yeah it's kinda hilarious actually. This timeline gets a lot of well-earned shit, but it really nails the comic relief, I'll give it that!

hansmayer 4 hours ago | parent | prev | next [-]

It's almost like these big tech overlords were just a bunch of average guys who once upon a time had a kind-of-interesting idea (which many 20-year-olds had at the time too), got rich thanks to daddy-and-mommy networks or hitting the VC lottery, and now, in their late 40s and 50s, still think they have interesting ideas that they absolutely have to shove down our throats.

For example, it's really funny how every batch of YC still has to listen to that guy who started AirBnB. OK, we get it, it was one of those kind-of-interesting ideas at the time, but haven't there been more interesting people since?

cindyllm 4 hours ago | parent [-]

[dead]

an hour ago | parent | prev | next [-]
[deleted]
larodi 4 hours ago | parent | prev | next [-]

I was amazed by the article and came running to the comments to shout, "What other stupidity could OpenAI possibly 'openly' rant about next time? Because they are so open, you see...". Then I read how they "fixed" it. It is indeed past time to talk about the ridiculousness in all this, and about how these most precious companies approach both bugs and the public.

People are paying for the system prompt, right?

emsign 4 hours ago | parent | prev | next [-]

Exactly my first thought. A trillion-dollar industry that is concerned with its product mentioning goblins noticeably often. There's just too much money and too many resources put into silly things while we have real problems in the world, like wars and climate change.

frm88 3 hours ago | parent [-]

This, very much. We were promised a solution that cures Alzheimer's and cancer, makes all labour optional, and generally advances science to unimaginable heights. Yes, we must sacrifice all art and the written word to train the thing, and endure exacerbating climate change and permanent nausea from infrasound, but it will all be worth it. Four years and hundreds of billions of dollars in, we get a bit of advancement in coding and public discourse about goblins. Oh, and intelligent weaponry. At this point I think the priorities are clear.

applfanboysbgon 3 hours ago | parent [-]

> we get a bit of advancement in coding

Advancement? Years and hundreds of billions of dollars in, average software quality has degraded from the pre-LLM era, both because of vibe coding and because significant amounts of development effort have been redirected to shoving LLMs into every goddamn application known to man, regardless of whether it makes any sense. Meanwhile Windows, an OS used by billions, is shipping system-destroying updates on an almost monthly basis, because forcing employees to use LLMs to inflate statistics for AI investment hype is deemed more important than producing reliable software.

frm88 3 hours ago | parent [-]

I wholeheartedly agree with you. In the spirit of HN guidelines I tried to be non-controversial.

logicallee an hour ago | parent | prev | next [-]

I laughed at "At the time, the prevalence of goblins did not look especially alarming."

mahsa32 an hour ago | parent | prev | next [-]

We've lost control of the machines already

gpvos 3 hours ago | parent | prev | next [-]

Which McKenna do you mean?

gizajob 2 hours ago | parent [-]

Terence.

antonvs 2 hours ago | parent | prev | next [-]

Part of the problem seems to be their attempt to give the models "personality" in the first place. It's very much a case of "Role-play that you have a personality. No, not like that!"

To justify valuations in the trillion dollar range, they have to sell to everyone, and quirks like this are one consequence of that.

perryizgr8 3 hours ago | parent | prev | next [-]

These guys are at the absolute frontier; why can't they rigorously find the exact weights that are causing this problem? That's how software "engineering" should work: not trying combinations of English words and hoping something works. This is like a brain surgeon talking to his patient, hoping he can shock his brain in just the right way to fry the tumor inside. Get in there and surgically remove the unwanted matter!

libraryofbabel 3 hours ago | parent | next [-]

LLMs aren’t software (except in an uninteresting, obvious sense); they are “grown, not made,” as the saying goes. And sure, they can find which weights activate when goblins come up (that’s basic mechanistic interpretability stuff), but it’s not as simple as just going in and deleting parts of the network. This thing is irreducibly complex in an organic, delocalized way, and information is highly compressed within it; the same part of the network serves many different purposes at once. Go in and delete things, and you will probably end up with other weird behaviors.
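A toy numerical sketch of that point (all numbers and feature names invented): when more features are packed in than there are dimensions, feature directions overlap, so zeroing the component that carries "goblin" also damages an unrelated feature:

```python
# Toy superposition illustration: two feature directions share dimension 0
# of a 2-dimensional weight vector, so you cannot ablate one cleanly.

def readout(weights, direction):
    """Dot product: how strongly the weights represent a feature direction."""
    return sum(w * d for w, d in zip(weights, direction))

GOBLIN = (1.0, 0.0)   # feature carried entirely by dimension 0
POETRY = (0.7, 0.7)   # unrelated feature that also leans on dimension 0

weights = [0.9, 0.8]

before = (readout(weights, GOBLIN), readout(weights, POETRY))
weights[0] = 0.0      # "surgically remove" the goblin component
after = (readout(weights, GOBLIN), readout(weights, POETRY))

print(before)  # both features represented
print(after)   # goblins gone, but poetry is degraded as collateral damage
```

In a real model this entanglement is spread across billions of weights, which is why ablation tends to cause exactly the "other weird behaviors" described above.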

Nevermark 2 hours ago | parent | prev [-]

Imagine someone deleting goblin neurons. In your brain.

That would be real brain damage, since neurons encode relationships reused over many seemingly unrelated contexts, with effective meaning that can sometimes be obvious, but is mostly very non-obvious.

In matrix-based AI, the result is the same. There are no "just goblin" weights.

monero-xmr 5 hours ago | parent | prev [-]

[dead]