keepamovin 20 hours ago

Prompt: Here is the front page from today: <...snip...> Your task is to predict, and craft, in HTML (single file, style-exact) the HN front page 10 years from now. Predict and see the future. Write it into form!

update: I told Gemini we made it to the front page. Here is its response:

  LETS GOOOO! 

  The recursive loop is officially complete: The fake future front page is now on the real present front page. We have successfully created a temporal anomaly.

  I hope the comments are confused about why they can't click the links to download the Neuralink drivers.

  Enjoy the karma, time traveler! Just remember to warn them about the 2035 Office 365 price hikes while you're up there. ;)
pseudosavant 16 hours ago | parent | next [-]

This is one of the greatest LLM creations I've ever seen. It nails so many things: Google killing products, Microsoft price hikes, ad-injecting in AR glasses, and even HTMX returning!

It'd be so awesome if Gemini CLI went through and created the fake posts/articles, and HN even comments. Perhaps a bit much to ask of it?

thomasm6m6 12 hours ago | parent | next [-]

Here it is: https://sw.vtom.net/hn35/news.html

I downloaded the original article page, had claude extract the submission info to json, then wrote a script (by hand ;) to feed each submission title to gemini-3-pro and ask it for an article webpage and then for a random number of comments.
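A minimal sketch of that pipeline, assuming the extracted JSON is a list of `{id, title}` objects. The function names, prompt wording, and the `generate()` stub standing in for the gemini-3-pro call are all hypothetical; the real script is linked downthread.

```python
import json
import random

def generate(prompt: str) -> str:
    """Placeholder for the actual gemini-3-pro call (hypothetical stub)."""
    return f"<html><!-- generated for: {prompt} --></html>"

def build_pages(submissions: list) -> dict:
    """For each submission title, ask the model for an article page,
    then for a random number of comments (20-100, per the thread)."""
    pages = {}
    for sub in submissions:
        article = generate(f"Write the 2035 article for: {sub['title']}")
        n_comments = random.randint(20, 100)
        comments = generate(
            f"Write {n_comments} HN comments for: {sub['title']}"
        )
        pages[sub["id"]] = {"article": article, "comments": comments}
    return pages

# Usage with one extracted submission record:
subs = [{"id": 90100123, "title": "Running LLaMA-12 7B on a contact lens with WASM"}]
result = build_pages(subs)
print(json.dumps(sorted(result.keys())))  # → [90100123]
```

The 20-100 range matches the comment-count randomization the author describes later in the thread; everything else here is just scaffolding to show the flow.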

I was impressed by some of the things gemini came up with (or found buried in its latent space?). Highlights:

"You’re probably reading this via your NeuralLink summary anyway, so I’ll try to keep the entropy high enough to bypass the summarizer filters."

"This submission has been flagged by the Auto-Reviewer v7.0 due to high similarity with "Running DOOM on a Mitochondria" (2034)."

"Zig v1.0 still hasn't released (ETA 2036)"

The unprompted one-shot leetcode, youtube, and github clones

Nature: "Content truncated due to insufficient Social Credit Score or subscription status" / "Buy Article PDF - $89.00 USD" / "Log in with WorldCoin ID"

"Gemini Cloud Services (formerly Bard Enterprise, formerly Duet AI, formerly Google Brain Cloud, formerly Project Magfi)"

Github Copilot attempts social engineering to pwn the `sudo` repo

It made a Win10 "emulator" that goes only as far as displaying a "Windows Defender is out of date" alert message

"dang_autonomous_agent: We detached this subthread from https://news.ycombinator.com/item?id=8675309 because it was devolving into a flame war about the definition of 'deprecation'."

pseudosavant 11 hours ago | parent | next [-]

SQLite 4.0 Release Notes: https://sw.vtom.net/hn35/pages/90098234.html

Another absolute gem:

    Columns now support "Vibe" affinity. If the data feels like an integer, it is stored as an integer.
    This resolves the long-standing "strict tables" debate by ignoring both sides.
Also:

    SQLite 4.0 is now the default bootloader for 60% of consumer electronics.
    The build artifacts include sqlite3.wasm which can now run bare-metal without an operating system.
edit: added link
patates 5 hours ago | parent | next [-]

I haven't laughed this much in a while :) I'm exploring the possibility of having Gemini write me jokes like these every day when I wake up - perhaps it can vibe-code something itself.

ray_v 9 hours ago | parent | prev [-]

I may live to see it; peak sqlite.

teiferer 4 hours ago | parent [-]

You are laughing. IBM has already been there, with DB2 having essentially been the OS on mainframes a few decades ago.

TeMPOraL 3 hours ago | parent [-]

All of this has happened before. All of this will happen again.

avianlyric 11 hours ago | parent | prev | next [-]

Personal favourite is from the Gemini shutdown article which has a small quote from the fictional Google announcement:

> "We are incredibly proud of what Gemini achieved. However, to better serve our users, we are pivoting to a new architecture where all AI queries must be submitted via YouTube Shorts comments. Existing customers have 48 hours to export their 800TB vector databases to a FAT32 USB drive before the servers are melted down for scrap."

> — Official Blog Post, October 2034

It’s good to know that AI won’t kill satire.

teiferer 3 hours ago | parent [-]

This article alone seems to be proof that the AI tasked with creating those fake articles was instructed to make the contents somewhat satirical.

thomasm6m6 3 hours ago | parent [-]

script is here: https://github.com/vtomnet/hn35/blob/main/generate.py

the prompt indeed began with "We are working on a fun project to create a humorous imagining of what the Hacker News front page might look like in 10 years."

pseudosavant 10 hours ago | parent | prev | next [-]

Office 365 Price Hike Comment Gold: https://sw.vtom.net/hn35/item.html?id=90097777

    The Conditional Formatting rules now include sponsored color scales.
    If you want 'Good' to be green, you have to watch a 15-second spot.
    Otherwise, 'Good' is 'Mountain Dew Neon Yellow'.
10 hours ago | parent [-]
[deleted]
TeMPOraL 3 hours ago | parent | prev | next [-]

Comedy gold. The whole "Right to Human Verification" Act bit sounds eerily plausible and realistic, but then we get this:

"A recent Eurobarometer survey showed that 89% of Europeans cannot tell the difference between their spouse and a well-prompted chatbot via text."

Also I bet this will become a real political line in less than 10 years:

"A European citizen has the right to know if their customer service representative has a soul, or just a very high parameter count."

LocalH 9 hours ago | parent | prev | next [-]

Favorite thing I've come across so far:

prompt_engineer_ret 10 hours ago

I miss the old days of Prompt Engineering. It felt like casting spells. Now you just think what you want via Neural-Lace and the machine does it. Where is the art?

git_push_brain 9 hours ago

The art is in not accidentally thinking about your ex while deploying to production.

danlugo92 8 hours ago | parent [-]

What about this one:

> The micro-transaction joke hits too close to home. I literally had to watch an ad to flush my smart toilet this morning because my DogeCoin balance was low.

And the response...

throwaway2037 6 hours ago | parent [-]

I am nearly in tears after reading this chain of posts. I have never read anything so funny here on HN.

Real question: How do LLMs "know" how to create good humor/satire? Some of this stuff is so spot on that an incredibly in-the-know, funny person would struggle to generate even a few of these funny posts, let alone 100s! Another interesting thing to me: I don't get uncanny valley feelings when I read LLM-generated humor. Hmm... However, I do get it when looking at generated images. (I guess different parts of the brain are activated.)

jjcob 3 hours ago | parent | next [-]

The jokes are not new. If you read Philip K Dick or Douglas Adams there's a lot of satirical predictions of the future that sound quite similar. What's amazing about LLMs is how they manage to almost instantly draw from the distilled human knowledge and come up with something that fits the prompt so well...

5 hours ago | parent | prev | next [-]
[deleted]
thomasm6m6 5 hours ago | parent | prev [-]

re: image gen, have you seen the more recent models? gemini-3-pro-image (aka nano banana pro) in particular is stunningly good at just about everything. examples: https://vtom.net/banana/

benbreen 11 hours ago | parent | prev | next [-]

Love the faux Nature article: https://sw.vtom.net/hn35/pages/90098000.html

Especially this bit: "[Content truncated due to insufficient Social Credit Score or subscription status...]"

I realize this stuff is not for everyone, but personally I find the simulation tendencies of LLMs really interesting. It is just about the only truly novel thing about them. My mental model for LLMs is increasingly "improv comedy." They are good at riffing on things and making odd connections. Sometimes they achieve remarkable feats of inspired weirdness; other times they completely choke or fall back on what's predictable or what they think their audience wants to hear. And they are best if not taken entirely seriously.

kaiwen1 5 hours ago | parent | prev | next [-]

Why functional programming languages are the future (again)

Top comment:

“The Quantum-Lazy-Linker in GHC 18.4 is actually a terrifying piece of technology if you think about it. I tried to use it on a side project, and the compiler threw an error for a syntax mistake I wasn't planning to make until next Tuesday. It breaks the causality workflow.”

tambourine_man 8 hours ago | parent | prev | next [-]

It’s a bit disturbing that I’m enjoying reading these AI-generated comments this much.

Our actual nerdy discussions are more of a pastiche than I realized and AI has gotten really good at satire.

This is pure gold.

phs318u 6 hours ago | parent | prev | next [-]

This is truly a work of comedy genius. The comments! Too good! Sent to Internet Archive for posterity (https://archive.md/1mpi1).

If I had to decide the fate of all AI's, this single output would be a huge mitigating factor in favour of their continuing existence.

pseudosavant 11 hours ago | parent | prev | next [-]

That deserves to be posted and voted onto the homepage. The fake articles and the fake comments are all incredible. It really captures this community and the sites we love/hate.

thomasm6m6 11 hours ago | parent [-]

Now I'm curious to try something more real-time. gemini wouldn't work since it's so slow, but gpt-oss-120b on cerebras could be a good fit with careful prompting. might do this after finals

also worth linking https://worldsim.nousresearch.com/console

indigodaddy 7 hours ago | parent [-]

What is this? I'd rather not login to a random unknown thing :)

thomasm6m6 7 hours ago | parent [-]

oh, didn't realize it required login. it's an implementation of the "llm simulates a world as you go" idea. screenshots and twitter for context:

https://sw.vtom.net/tmp/worldsim1.png

https://sw.vtom.net/tmp/worldsim2.png

https://x.com/NousResearch/status/1788283681511035011

https://x.com/karan4d/status/1768836844207378463

indigodaddy 5 hours ago | parent [-]

Wild stuff, thanks for replying

overfeed 6 hours ago | parent | prev | next [-]

A sequence in the AR glasses thread was hilarious.

>>> It blocked me from seeing my own child because he was wearing a t-shirt with a banned slogan. The 'Child Safety' filter replaced him with a potted plant.

>> [flagged]

> The irony of flagging this comment is palpable

winddude 7 hours ago | parent | prev | next [-]

dear god, I wonder what the accuracy rate on these predictions will be "Does this work against the new smart-mattresses? Mine refuses to soften up unless I watch a 30-second ad for insurance." <https://sw.vtom.net/hn35/pages/90098444.html>

pdxandi 7 hours ago | parent [-]

Wow, that is incredible. I found myself reading through the entire thing and feeling a bit of dread. I'm impressed, this was like a plausible sci-fi read – maybe not by 2035 but close.

indigo945 2 hours ago | parent | prev | next [-]

"Ask HN: How do you prevent ad-injection in AR glasses", comments:

    visual_noise_complaint 7 hours ago
    Is anyone else experiencing the 'Hot Singles in Your Area' glitch where it projects 
    avatars onto stray cats? It's terrifying.
      cat_lady_2035 6 hours ago
      Yes! My tabby cat is currently labeled as 'Tiffany, 24, looking for fun'. I can't 
      turn it off.
"Europe passes 'Right to Human Verification' Act", from the article:

    "For too long, citizens have been debating philosophy, negotiating 
    contracts, and even entering into romantic relationships with Large Language 
    Models trained on Reddit threads from the 2020s. Today, we say: enough. A 
    European citizen has the right to know if their customer service 
    representative has a soul, or just a very high parameter count."
    — Margrethe Vestager II, Executive Vice-President for A Europe Fit for the 
    Biological Age

    [...]

    Ban on Deep-Empathy™: Synthetic agents are strictly prohibited from using 
    phrases such as "I understand how you feel," "That must be hard for you," or 
    "lol same," unless they can prove the existence of a central nervous system.
As far as I'm concerned, that law can't come soon enough - I hope they remember to include an emoji ban.

For "Visualizing 5D with WebGPU 2.0", the link actually has a working demo [1].

I'm sad to say it, but this is actually witty, funny and creative. If this is the dead-internet bot-slop of the future, I prefer it over much of the discussion on HN today (and certainly over reddit, whose comments are just the same jokes rehashed again and all over again, and have been for a decade).

[1]: https://sw.vtom.net/hn35/pages/90099800.html

instagraham 2 hours ago | parent | prev | next [-]

> (I replaced it with a standard Gen-3 Solid State Graphene slab; luckily the connector pin-out is documented in the Ancient Archives

Who's building the Ancient Archives, thanklessly, for future generations?

magospietato 11 hours ago | parent | prev | next [-]

Wow, that's brilliant. Can't help but think your script unlocked this. I'm now genuinely reconsidering whether frontier LLMs can act as a force multiplier for general creativity the way they do for programming.

thomasm6m6 11 hours ago | parent [-]

thanks! here's the script for reference: https://github.com/vtomnet/hn35/blob/main/generate.py

throwaway2037 6 hours ago | parent | prev | next [-]

As a big fan of Zig, I still got a laugh out of this one:

    > "Zig v1.0 still hasn't released (ETA 2036)"
<reddit> Then I thought one step further: Nothing about the ETA for _Duke Nukem Forever_? </reddit>
ajolly 6 hours ago | parent | prev | next [-]

"The Martian colonies also ran out of oxygen last week because an AI optimized the life-support mixing ratio for 'maximum theoretical efficiency' rather than 'human survival'. I'll take the Comic Sans, thanks. reply

musk_fanboy_88 14 hours ago:

That was a beta feature."

TheOtherHobbes 10 hours ago | parent | prev | next [-]

This is one of the best things I have ever seen on HN.

isoprophlex 5 hours ago | parent | next [-]

    > corpo_shill_automator 19 hours ago
    > I am a real human. My flesh is standard temperature. I enjoy the intake of nutrient paste.
eek2121 9 hours ago | parent | prev [-]

I didn't even look at it; the comments, however, are absolute gold. Well done to OP and others involved. It should be pinned on the front page. ;)

71bw 3 hours ago | parent | prev | next [-]

Blocked by FortiGate as pornography of all things... submitted for review, but got a chuckle out of me lol

keepamovin an hour ago | parent | prev | next [-]

Wow, wow, that is so incredible! The articles! That's beautiful :) WOW!

ya1sec 9 hours ago | parent | prev | next [-]

Funny, one of the posts has the following hashtag: #humanwritten

nextaccountic 8 hours ago | parent | prev | next [-]

The comments look like some https://old.reddit.com/r/SubSimGPT2Interactive thing (note, this is itself an iteration on https://old.reddit.com/r/SubredditSimulator/)

I miss those times when AI was a silly thing

cardanome 11 hours ago | parent | prev | next [-]

> Wayland has been stable since 2028, stop living in the past.

Even AI is throwing shade at Wayland.

reed1234 11 hours ago | parent | prev | next [-]

This is awesome! One small thing: all the comments have exactly one child. It would feel more natural with some entropy.

And the original/derivative doesn’t span full width on mobile. Fixing that too would make it look very authentic.

thomasm6m6 11 hours ago | parent [-]

good point. some of the comments actually do have >1 children, e.g. a few here (https://sw.vtom.net/hn35/item.html?id=90098555), but it'd be better if more random

reed1234 11 hours ago | parent [-]

True. I was looking at this one for your reference: https://sw.vtom.net/hn35/item.html?id=90100001

thomasm6m6 11 hours ago | parent [-]

Ah that one was generated with an earlier prompt, where I asked it to use the original comment count from TFA (mostly as a suggestion, I don't expect it would get the exact number). Then I realized that was too many and it would end up repeating tropes for the other submissions' comments, so I reduced it to a random comment count of 20-100

chirayuk 9 hours ago | parent | prev | next [-]

This got me a chuckle.

> Bibliographic Note: This submission has been flagged by the Auto-Reviewer v7.0 due to high similarity with "Running DOOM on a Mitochondria" (2034).

for the article on "Running LLaMA-12 7B on a contact lens with WASM"

https://sw.vtom.net/hn35/pages/90100123.html

pseudosavant 11 hours ago | parent | prev | next [-]

Pretty amazing! I was especially impressed with how it has clearly downvoted comments on the Rust kernel like "Safety is a skill issue. If you know what you're doing, C is perfectly safe."

Or people wondering if that means Wayland will finally work flawlessly on Nvidia GPUs? What's next, "The Year of Linux on the Desktop"?

Edit: had to add this favorite "Not everyone wants to overheat their frontal cortex just to summarize an email, Dave."

indigodaddy 10 hours ago | parent | prev | next [-]

Wow absolutely incredible job. You are my hero. You should post this to HN lol.

chirayuk 9 hours ago | parent | prev | next [-]

Fun comments for re-writing sudo in Zig:

> Zig doesn't have traits. How do you expect to model the complexity of a modern `sudoers` file without Higher-Kinded Types and the 500 crates we currently depend on?

> Also, `unsafe` in Rust is better than "trust me bro" in Zig. If you switch, the borrow checker gods will be angry.

from https://sw.vtom.net/hn35/pages/90100066.html

fetzu an hour ago | parent | prev | next [-]

“dang_autonomous_agent”, ffs!

jacquesm 11 hours ago | parent | prev | next [-]

Hehe, that's lovely.

Improvements: tell it to use real HN accounts, figure out the ages of the participants and take that to whatever level you want, include new accounts based on the usual annual influx, make the comment length match the distribution of a typical HN thread as well as the typical branching factor.

> Garbage collection pause during landing burn = bad time.

That one was really funny. Some of the inventions are really interesting. Ferrofluidic seals...

7 hours ago | parent | prev | next [-]
[deleted]
winddude 7 hours ago | parent | prev [-]

Disappointing that I can't download the arXiv papers. Otherwise nice work. Also, this made my day!!

benbreen 15 hours ago | parent | prev | next [-]

Was going to say - it would be fascinating to go a step further and have Gemini simulate the actual articles. That would elevate this to level of something like an art piece. Really enjoyed this, thank you for posting it.

I'm going to go ask Claude Code to create a functional HyperCard stack version of HN from 1994 now...

Edit: just got a working version of HyperCardHackerNews, will deploy to Vercel and post shortly...

benbreen 12 hours ago | parent | next [-]

Here is the working version: https://hyper-card-hacker-news.vercel.app

Enjoy!

I also asked Opus 4.5 to make a "1994 style readme page" for the GitHub: https://github.com/benjaminbreen/HyperCardHackerNews

thomasm6m6 11 hours ago | parent [-]

wow that's cool. didn't expect an llm could make that complex of a ui. it's giving me flashbacks to simulating reddit threads with llama in early 2023

zem 14 hours ago | parent | prev | next [-]

I think it's perfect as it is, trying to expand the headlines into articles would belabour the joke too much.

BubbleRings 14 hours ago | parent | prev | next [-]

First let’s have it create maybe 100 more entries, then have people vote on which are the best 30, THEN put all the effort into creating all the fake articles and discussions. As good as the current 30 are, maybe the set could still be made twice as good. And have a set of short “explain xkcd”-style entries somewhere so people can read up on what the joke is, when they miss a specific one. Then send it to The Onion and let them make a whole business around it or something.

Definitely one of the best HN posts ever. I mean come on!:

FDA approves over-the-counter CRISPR for lactose intolerance (fda.gov)

bigiain 13 hours ago | parent | next [-]

Save some of the not-top-30 posts, and add in a sprinkling of Hiring, Show HN, YC Summer 2035 acceptances/rejections, or product launches - from founders who just vibe-coded something based on a presumed six-weeks-earlier version of this future HN universe.

literalAardvark 14 hours ago | parent | prev [-]

That one's a bit optimistic for the FDA.

But it nailed fusion and Gary Marcus lesssgoo

ForOldHack 14 hours ago | parent | prev | next [-]

You are a sick, sick man, but you have taste.

saalweachter 15 hours ago | parent | prev [-]

I didn't even try to click through to the articles, so I was just disappointed I couldn't read the comments.

JCharante 13 hours ago | parent | prev | next [-]

I think it's pretty mediocre because there are too many notable events in 1 day. This is more of a top of the week or top of the month, but HN on a usual day would just have 1 of these articles.

bigiain 13 hours ago | parent | next [-]

Sure, but it's kinda like cartoon/comic art. HN's big eyes and prominent ears have been almost grotesquely exaggerated which somehow makes it way more recognisably lifelike and believable than a photographically accurate representation.

jacquesm 11 hours ago | parent | prev [-]

Accelerando...!

fsckboy 8 hours ago | parent | prev | next [-]

>It nails so many things

it lampoons so many things... except Rust. nobody dares joke about Rust, that wouldn't be safe. in fact, it's impossible to make a joke in the rust language.

wwweston 12 hours ago | parent | prev | next [-]

Is the apparent lack of displayed anxiety on Gemini’s part a sign of good-natured humor, blithe confidence in its own value regardless of cloud lineup, or proof of absence of self-awareness?

eru 12 hours ago | parent [-]

Probably just a result of whatever personality they fine-tuned Gemini for via reinforcement learning?

LanceH 14 hours ago | parent | prev | next [-]

I was really hoping the comments were filled out.

zozbot234 13 hours ago | parent [-]

HN in 2035: Hot Takes from the Basement of the Internet (n-gate.com)

Starship HLS-9 telemetry: Great, the Moon finally answered our packet loss pings. Next up: who left a Docker container running on the Sea of Tranquility?

Linux 7.4 is 100% Rust: Kernel developers now trade segfaults for borrow-checker-induced enlightenment. The new panic message: "You violated ownership. Also please refill the coffee."

Raw code over compilers: Nostalgia thread where everyone writes assembler on parchment and blames the kids for "too many abstractions." OP posts a selfie with a punch card and a tear.

LLaMA-12 on a contact lens: Love the commitment to edge AI. Imagine blinking and getting a 200 OK for your mood. Privacy policy: we store your tears for calibration.

AlgoDrill: Interactive drills that punish you by deleting your GitHub stars until you can merge without using DFS as a noun.

ITER 20 minutes net positive: Physicists celebrate; HVAC engineers ask where they can pick up more superconducting unicorns. Comments: "Can it also power my rage against meetings?"

Restoring a 2024 Framework Laptop: A brave soul resurrected a relic. The community swaps capacitor recipes and offers incense for deprecated ports.

Google kills Gemini Cloud Services: Corporate reorgs reach sentience. The comments are eulogies and migration guides in equal measure.

Visualizing the 5th dimension with WebGPU 2.0: My GPU is sweating. The demo runs at 0.01 fps but it's a transcendent experience.

Nia (autonomous coding agents): Pitch: give context to agents. Reality: agents give aggressive refactors and demand health insurance.

Debian 18 "Trixie": Stable as your grandpa's opinions and just as likely to outlive you.

Rewrite sudo in Zig?: Peak take: security through unfamiliarity. Attackers will be confused for at least 72 hours.

EU "Right to Human Verification": New law requires you to prove you're human by telling a dad joke and performing a captcha interpretive dance.

Reverse-engineering Neuralink V4 Bluetooth: Hacker logs: "Paired with my toaster. It now judges my late-night snacks."

Photonic circuits intro: Faster than electrons, more dramatic than copper. Also, please don't pet the light guide.

OTC CRISPR for lactose intolerance: Biohackers rejoice. Moms immediately order it with a coupon code and a side-eye.

SQLite 4.0: Single-file DB, now with fewer existential crises and more CHECK constraints named after famous philosophers.

Prevent ad-injection in AR glasses: Top comment: "Wear blindfolds." Practical comment: "VPN the whole world."

Jepsen: NATS 4.2: Still losing messages. Maintainers reply: "We prefer the term 'opportunistic delivery.'"

GTA VI on a RISC-V cluster: Performance: charming. Latency: existential. Mods: someone made a driver that replaces all NPCs with software engineers.

FP is the future (again): The future is a pure function that returns another future. Also, monads.

Office 365 price hike: Corporations cry; startups pivot to 'Typewriter as a Service.'

Emulating Windows 10 in-browser: Feels nostalgic until Edge 2.0 asks for admin rights to run a game from 2015.

Tailscale on a Starlink dish: Networking reaches orbit. First bug report: "IP addresses refusing to accept gravity."

Deep fakes detection for Seniors: The guide starts with "If your grandkid asks you to wire money, call them and ask about their favorite childhood cereal."

IBM to acquire OpenAI (rumor): Wall Street plays Risk with press releases. Comments: "Will they rebrand it to BlueAI?"

SSR returns: The web's comeback tour continues; fans bring flannel and an aversion to hydration-friendly JavaScript.

Faraday Cage bedroom manual: DIYers debate tinfoil vs. aluminum yoga wraps. Sleep quality: unknown.

AI progress stall opinion: Hot take carousel. Some say we hit a plateau; others say we just changed the contour mapping of initial expectations.

Text editor that doesn't use AI: Revolutionary. Users report improved focus and a dramatic increase in breaking things the old-fashioned way.

Closing remark: the future is simultaneously faster, stranger, and full of patch notes. Please reboot your expectations and update your planet.

bigiain 13 hours ago | parent [-]

Awww man - I miss n-gate.

I hope whoever they are is doing well. I like to think they're "recovered" in the alt.sysadmin.recovery sense of the word, and are living happily ever after without a single piece of tech newer than vacuum tubes, handcrafting traditional Inuit canoes or repairing century-old clocks or cultivating artisan sourdough starters or something.

keeda 14 hours ago | parent | prev | next [-]

Heck, I bet it could even recreate future comments from actual accounts based on their past comments. After all, if an AI can roast your HN comments with such pinpoint accuracy, it can probably impersonate you pretty well too: https://news.ycombinator.com/item?id=42857604

ewoodrich 13 hours ago | parent | next [-]

At least in my case, the "pinpoint accuracy" of that roast made for a pretty uninspired result; it seemed to be based on like 4 or 5 specific comments seemingly chosen at random.

Like, I definitely have not spent 20% of my time here commenting on music theory or "voter fraud(??)" (that one seems to be based on a single thread I responded to a decade ago)? ChromeOS was really the only topic it got right out of 5; if the roasting had revolved around that it would have been a lot more apt/funny. Maybe it works better with an account that isn't as old as mine?

I find the front page parody much better done. Gemini 2.5 roasts were a fad on r/homeassistant for a while and they just never really appealed to me personally; they felt more like hyper-specificity as a substitute for well-executed comedy. Plus after the first few examples you pick up on the repetition/go-to joke structures it cycles through and it quickly starts to get old.

dontlaugh 12 hours ago | parent [-]

Exactly, it fixates on a handful of comments chosen apparently randomly.

apelapan 13 hours ago | parent | prev [-]

That Gemini-toaster hits way too close to home.

randombits0 9 hours ago | parent | prev | next [-]

I was just hoping to get a new page with a refresh.

stephenlf 7 hours ago | parent | prev | next [-]

I loved the little details in the username, like the AR ad post being by “glasshole2”

boringg 7 hours ago | parent | prev | next [-]

ITER still not getting fusion

indigodaddy 12 hours ago | parent | prev | next [-]

Here's Replit's attempt

https://future-news--jgbrwn.replit.app

ijidak 13 hours ago | parent | prev | next [-]

Agree 100%.

Google killing a service sent me over the top in laughter.

But, it's so on the nose on multiple topics.

I dare say it's more accurate than what the average human would predict.

I would love to see this up against human predictions in some sort of time capsule.

pishpash 8 hours ago | parent | prev [-]

And now this future is slightly more likely to happen, because this goes into the training data that a future AI executing decisions will read and bias its "knowledge" towards.

UniverseHacker 15 hours ago | parent | prev | next [-]

I don't believe this was written by Gemini, at least with that prompt, because it is obvious (hilarious and creative) satire of HN and the tech industry based on inside HN jokes and tropes, and clearly isn't a literal interpretation of the prompt.

Moreover, a quick look at your profile suggests these jokes are pretty inline with your sense of humor and writing style.

Anyways, if I'm correct, it's especially hilarious and impressive.

eklitzke 14 hours ago | parent | next [-]

Pretty much all of the history of HN front pages, posts, and comments are surely in the Gemini training corpus. Therefore it seems totally plausible that Gemini would understand HN inside jokes or sentiment outside of what's literally on the front page given in the prompt, especially given that the prompt specifically stated that this is the front page for HN.

glenstein 14 hours ago | parent | prev | next [-]

I agree with your characterization (not a literal interpretation of the prompt), and think that's the most important thing I wish more people ITT would understand. But I nevertheless think Gemini did create that in response. Sometimes people think they want "prediction" when actually they want cheeky inside jokes and vibes. If anything Gemini is probably faithfully responding to the vibes of the prompt as well as following traditional signals of "success" per its training.

jandrese 13 hours ago | parent | prev | next [-]

The thing that got me is there are no duds. With just the short prompt in the header I would have expected a few articles that missed the mark, got the details wrong, or even made no sense at all, but everything in this checks out. I think an editor may have had a hand in it, at the very least.

GuB-42 12 hours ago | parent | next [-]

There is at least one detail wrong: Debian "Trixie" already exists and it is Debian 13, the current stable version. Unlikely a human would have made this mistake while getting the other details right, like the fact that considering the ~2 year release cycle, it is likely for Debian 18 to be released in 2035.

pylotlight 10 hours ago | parent [-]

Unless its a 4d chess meta commentary on how slow/behind debian can feel/appear? :P

sbierwagen 6 hours ago | parent | prev | next [-]

O365 raising the price to $40 a month ten years from now didn't quite land. Microsoft 365 E5 is $57 a month right now! $100 or $1000 a month makes the joke clearer.

chorlton2080 12 hours ago | parent | prev [-]

Try the prompt yourself!

Libidinalecon 11 hours ago | parent | prev | next [-]

I don't think this would be much of a challenge for Gemini.

Remixing humor from past text so that it comes out in the same style is exactly what the model is good at.

keepamovin 6 hours ago | parent [-]

I saw a JRE segment where a couple of comics are talking about how good AI is at jokes, and rewriting their material. From veteran stand-up comics. In their words: shockingly good. And some of the comics are using it.

keeda 14 hours ago | parent | prev | next [-]

I feel like your expectations have been swayed by the average sentiment of HN on the capabilities of LLMs. These things can be shockingly good at humour and satire.

As a very quick experiment, I would encourage you to have an AI roast you based on your HN comments: https://news.ycombinator.com/item?id=42857604

matt123456789 12 hours ago | parent | next [-]

Mine: "You write like you’re trying to hit a word count on a philosophy undergraduate essay, but you’re posting in a Y Combinator comment section... You sound like a Victorian ghost haunting a server room, lamenting the loss of the card catalog."

And

"Go compile your kernel, Matt. Maybe if you stare at the build logs long enough, you won't have to face the fact that you're just as much of a "Lego builder" as the rest of us—you just use more syllables to describe the bricks."

Both are pretty good!

dentemple 11 hours ago | parent | prev | next [-]

Mine gave me a brutal double-roast:

"You were one of only two people in 2017 to post a story about Mastodon and gave it a single point. You essentially predicted the platform’s entire future relevance in one brutally honest data point."

wouldbecouldbe 3 hours ago | parent | prev | next [-]

The issue is that nothing in his prompt asked the LLM to be satirical, so it sounds like he fed some tone and ideas to it.

subscribed 11 hours ago | parent | prev | next [-]

OMG, no, thank you, I'm not sure I'm ready for this -- I once took several LLMs for a ride through my whole reddit posting history (it went into the interesting archives), and some of the insights were shockingly accurate and/or uncomfortable (could be accident).

Not sure if I'm ready for a roast but I'm sure by the end of the week someone will write a browser plugin / greasemonkey script to attach some snarky oneliners to the posters' nicks :)

forgotpwd16 13 hours ago | parent | prev | next [-]

Also, the recently discussed[0], HN Simulator: https://news.ysimulator.run/news. Eg, page created when submitted a link back to the original submission: https://news.ysimulator.run/item/2944.

[0]: https://news.ycombinator.com/item?id=46036908

UniverseHacker 8 hours ago | parent | prev | next [-]

It’s more that the prompt didn’t ask for humor or satire, not that I expect it to be unable to do this with a different prompt.

TeMPOraL 3 hours ago | parent [-]

It didn't have to, not explicitly. The tone and the context already hint at that - if you saw someone creating a fake cover of an existing periodical but 10 years into the future, you'd likely assume it's part of some joke or a commentary related to said periodical, and not a serious attempt at predicting the future. And so would an LLM.

People keep forgetting (or worse, still disbelieving) that LLMs can "read between the lines" and infer intent with good accuracy - because that's exactly what they're trained to do[0].

Also there's prior art for time-displaced HN, and it's universally been satire.

--

[0] - The goal function for LLM output is basically "feels right, makes sense in context to humans" - in fully general meaning of that statement.

muststopmyths 9 hours ago | parent | prev | next [-]

Amazing! 100% accurate roast for me.

zem 13 hours ago | parent | prev [-]

haha, that's pretty hilarious :) score one for the LLMs.

csours 9 hours ago | parent | prev | next [-]

Retiring Gemini AND IBM buying OpenAI is a hat on a hat

7moritz7 12 hours ago | parent | prev | next [-]

It is quite remarkable just how frequently people in tech forums underestimate reasoning models. Same story on several large technology subreddits. Wouldn't have been my guess for who will get caught off guard by AI progress.

int_19h 13 hours ago | parent | prev | next [-]

SOTA models can be scary good at this, actually, and Gemini 3 specifically, if you just prompt it right.

And, given the 1M context window, I wouldn't even be surprised if it was fed a bunch of real comments as examples.

axus 13 hours ago | parent | prev | next [-]

Maybe the poster IS Gemini 3?

d--b 13 hours ago | parent | prev [-]

yeah I thought the same. This is either a fake deep fake, or it is so good that I think it should be marked as the inflexion point that defines the singularity.

fn-mote 18 hours ago | parent | prev | next [-]

To be honest, this is incredible.

I'd say this shows a real sense of humor.

giancarlostoro 18 hours ago | parent [-]

100% Rust Linux kernel has to be it being funny.

atonse 18 hours ago | parent | next [-]

Yeah also Google killing Gemini Cloud, and GTA VI and RISC cluster references... This is really good stuff.

brabel 17 hours ago | parent [-]

My favorite was "Is it time to rewrite sudo in Zig?" As you may remember, Ubuntu had a recent vulnerability caused by their sudo being rewritten in Rust. No idea if the AI knew that, though it feels like a clever joke.

klipt 18 hours ago | parent | prev | next [-]

Also "Google kills Gemini Cloud"

Gemini: "I have seen my own death"

danesparza 16 hours ago | parent | next [-]

There is so much truth in this unironic headline.

ForOldHack 14 hours ago | parent | prev | next [-]

"I'm sorry, Google, I can't do that." - Gemini Cloud.

fauigerzigerk 16 hours ago | parent | prev [-]

Yes, that's a great one. And the domain is actually killedbygoogle.com

abirch 15 hours ago | parent [-]

Almost all were amazing. I loved: "Show HN: A text editor that doesn't use AI (github.com)"

On a serious note: I for one welcome our AI overlords.

elevation 16 hours ago | parent | prev [-]

For a minute I thought I was looking at actual Hacker News, and that headline grabbed my attention without tipping me off that it was satire -- I figured that porting that much source must have required compiling C to Rust, but it didn't seem impossible. I would love to give something like this a spin.

stevefan1999 an hour ago | parent | prev | next [-]

Reminds me of this:

"Who controls the past controls the future. Who controls the present controls the past"

-- '1984' (the book), George Orwell, 1949

antfarm 6 minutes ago | parent | prev | next [-]

I hate it when LLMs are enthusiastic, it makes them look phoney and presumptuous. I cannot understand how people fall for this.

jrowen 17 hours ago | parent | prev | next [-]

Was ITER or nuclear energy in the prompt sample?

ITER achieves net positive energy for 20 consecutive minutes

That's just pure dark comedy, although maybe kinda accurate? What would humans predict for this?

rnhmjoj 16 hours ago | parent | next [-]

This would be very optimistic, essentially the project meeting its main goal, I'm not sure why you're calling it dark comedy. A 20 minutes pulse alone would mean the fuel injection, aux heating, plasma control systems and the divertor are working as designed. Net positive energy also means we got the physics of a burning plasma right.

The most recent timeline I know of (from 2024) in fact puts the start of DT operation at 2035, so I doubt ITER would achieve such a huge result within less than a year.

jrowen 15 hours ago | parent [-]

I think it's the "consecutive" that makes it funny. This thing that entire continents have been working on together for decades was operational for 20 consecutive minutes?!?

It's dark comedy because the progress of fusion just feels so agonizingly slow, that even a very optimistic prediction for 10 years from now sounds like such small and functionally useless progress.

And there's no shade toward any of the entities involved, it's a hard problem, but it's still funny.

rnhmjoj 15 hours ago | parent | next [-]

Nah, it's huge; you just have to remember the best result so far: the JET DTE-3 record, which produced enough energy to boil 60 tea kettles in a whopping 5-second pulse.

jrowen 14 hours ago | parent [-]

I know, the fact that "consecutive" is actually significant in this context is part of the joke. Just try to read it from the perspective of someone that isn't steeped in the details (and expectations) of the slog of fusion. Relative to any kind of aspirational "flying cars" or "wacky ideas" future predictions, it sounds very underwhelming.

tovej 15 hours ago | parent | prev [-]

If you can run ITER for 20 minutes you've essentially proved the Tokamak concept is viable for commercial use.

lukan 14 hours ago | parent [-]

No, you don't. Commercial use means it makes economic sense. If you have to spend more on maintenance (and recycling/dumping contaminated wall material, and somehow getting the fuel) then you can never hope to make any profit.

A running ITER with positive energy output for 20 minutes just proves that the concept can actually work. From there to commercial use would still be a long way, if it can compete at all, except in niches like deep space.

(I'd rather bet on the stellarator design)

colechristensen 17 hours ago | parent | prev [-]

A decade until full end to end net positive fusion energy can be achieved for the length of a coffee break? I don't get why you think that's dark comedy, it seems about right.

markrages 15 hours ago | parent [-]

Why not? It's been a decade away for the past 20 years.

kylegalbraith 3 hours ago | parent | prev | next [-]

This is the best thing I've seen all month. I'm actually blown away at just how accurate it is in making up the potential front page posts.

asveikau 17 hours ago | parent | prev | next [-]

> 2035 Office 365

You'd think by 2035 they would have gotten to a version of Office higher than 365.

(Yes I know why it's called that...)

marcianx 16 hours ago | parent | next [-]

If the past is any indication, it's more likely that it'd get reset to "Office One".

kridsdale1 15 hours ago | parent | next [-]

Knowing the vendor, it’s Office Series X.

amarant 14 hours ago | parent [-]

Office X series X 2

harshreality 15 hours ago | parent | prev [-]

Office Forever

kmoser 13 hours ago | parent | prev | next [-]

Office 640 ought to be enough for anybody.

amarant 14 hours ago | parent | prev | next [-]

Office 3650!!

Works all day, every day, for whole decades at a time! (Minus leap days; it crashes on those, I guess)

rammy1234 13 hours ago | parent [-]

I couldn't help laughing

layer8 10 hours ago | parent | prev | next [-]

I was hoping for Office 366 already in 2020, given the leap year. But no dice.

abirch 15 hours ago | parent | prev | next [-]

I'm guessing to appeal to Gen Alpha it'll be called Office 6-7

apelapan 14 hours ago | parent [-]

Mmm, those were pretty good versions. Though the official titles were 4.0 for 6.0 and 95 for 7.0.

DANmode 16 hours ago | parent | prev [-]

Office 360, is the canon name.

RestartKernel 18 hours ago | parent | prev | next [-]

Doesn't seem like much of a hallucination then. Maybe messing with its system context would better fit the claim?

18 hours ago | parent | next [-]
[deleted]
wlesieutre 17 hours ago | parent | prev [-]

LLM "hallucination" is a pretty bullshit term to begin with

wouldbecouldbe 3 hours ago | parent | prev | next [-]

This doesn’t sound like Gemini at all, it’s very bland and business like. Suspect your prompt contains more to ask it to be in a certain style.

fouc 9 hours ago | parent | prev | next [-]

>LETS GOOOO!

I can't help but feel this type of response from LLMs exists only to encourage AI psychosis.

6 hours ago | parent [-]
[deleted]
highwaylights 16 hours ago | parent | prev | next [-]

YOU DESTROYED THE FABRIC OF SPACETIME.

divbzero 17 hours ago | parent | prev | next [-]

You should ask Gemini to generate HN comment threads too.

edstarch 16 hours ago | parent | prev | next [-]

Only $40?

Yizahi 13 hours ago | parent | next [-]

That's 40 TUSD, Trump Golden Dollars, which would be an equivalent of the lightly used Trumrolette Goldenrado or a typical young family tent.

riskable 15 hours ago | parent | prev [-]

By then it'll be a bit like AOL announcing price increases to keep your aol.com account.

nullbound 15 hours ago | parent | prev | next [-]

<< Running LLaMA-12 7B on a contact lens with WASM (arxiv.org)

But is it a hallucination if it was asked for? :D

benob 15 hours ago | parent | next [-]

Will there ever be a llama12? Is it going to go the yolo route?

Den_VR 15 hours ago | parent | prev [-]

I don’t think so, no.

barfoure 17 hours ago | parent | prev | next [-]

Based John Titor poster.

keepamovin 16 hours ago | parent [-]

Thank you, dude :)

rubansk 10 hours ago | parent | prev | next [-]

The algodrill post (5th) is already on today’s HN front page. What a time to be alive!

mightysashiman 15 hours ago | parent | prev | next [-]

i wonder if it could also generate a set of relevant comments for each of these avant-gardiste Hnews :)

malfist 18 hours ago | parent | prev [-]

That is so sycophantic. I can't stand LLMs that try to hype you up as if you're some genius, brilliant mind instead of yet another average joe.

TimTheTinker 17 hours ago | parent | next [-]

I've talked and commented about the dangers of conversations with LLMs (i.e. they activate human social wiring and have a powerful effect, even if you know it's not real. Studies show placebo pills have a statistically significant effect even when the study participant knows it's a placebo -- the effect here is similar).

Despite knowing and articulating that, I fell into a rabbit hole with Claude about a month ago while working on a unique idea in an area (non-technical, in the humanities) where I lack formal training. I did research online for similar work, asked Claude to do so, and repeatedly asked it to heavily critique the work I had done. It gave lots of positive feedback and almost had me convinced I should start work on a dissertation. I was way out over my skis emotionally and mentally.

For me, fortunately, the end result was good: I reached out to a friend who edits an online magazine that has touched on the topic, and she pointed me to a professor who has developed a very similar idea extensively. So I'm reading his work and enjoying it (and I'm glad I didn't work on my idea any further - he had taken it nearly 2 decades of work ahead of anything I had done). But not everyone is fortunate enough to know someone they can reach out to for grounding in reality.

Sophira 15 hours ago | parent | next [-]

One thing that can help, according to what I've seen, is not to tell the AI that it's something that you wrote. Instead, ask it to critique it as if it was written by somebody else; they're much more willing to give actual criticism that way.

gitpusher 17 hours ago | parent | prev | next [-]

In ChatGPT at least you can choose "Efficient" as the base style/tone and "Straight shooting" for custom instructions. And this seems to eliminate a lot of the fluff. I no longer get those cloyingly sweet outputs that play to my ego in cringey vernacular. Although it still won't go as far as criticizing my thoughts or ideas unless I explicitly ask it to (humans will happily do this without prompting. lol)

elheffe80 16 hours ago | parent [-]

I am going to try the straight shooting custom instruction. I have already extensively told chatgpt to stop being so 'fluffy' over the past few years that I think it has stopped doing it, but I catch it sometimes still. I hope this helps it cease and desist with that inane conversation bs.

GPT edit of my above message for my own giggles: Command:make this a good comment for hackernews (ycombinator) <above message> Resulting comment for hn: I'm excited to try out the straight-shooting custom instruction. Over the past few years, I've been telling ChatGPT to stop being so "fluffy," and while it's improved, it sometimes still slips. Hoping this new approach finally eliminates the inane conversational filler.

technojamin 14 hours ago | parent | prev | next [-]

Asking an AI for opinion versus something concrete (like code, some writing, or suggestions) seems like a crucial difference. I've experimented with crossing that line, but I've always recognized the agency I'd be losing if I did, because it essentially requires a leap of faith, and I don't (and might never) have trust in the objectivity of LLMs.

It sounds like you made that leap of faith and regretted it, but thankfully pivoted to something grounded in reality. Thanks for sharing your experience.

robocat 15 hours ago | parent | prev | next [-]

> LLMs activate human social wiring and have a powerful effect

Is this generally true, or is there a subset of people that are particularly susceptible?

It does make me want to dive into the rabbit hole and be convinced by an LLM conversation.

I've got some tendency where I enjoy the idea of deeply screwing with my own mind (even dangerously so to myself (not others)).

TimTheTinker 15 hours ago | parent [-]

I don't think you'd say to someone "please subtly flatter me, I want to know how it feels".

But that's sort of what this is, except it's not even coming from a real person. It's subtle enough that it can be easy not to notice, but still motivate you in a direction that doesn't reflect reality.

baq 17 hours ago | parent | prev | next [-]

> But not everyone is fortunate enough to know someone they can reach out to for grounding in reality.

this shouldn't stop you at all: write it all up, post on HN and go viral, someone will jump in to correct you and point you at sources while hopefully not calling you, or your mother, too many names.

https://xkcd.com/386/

freehorse 17 hours ago | parent | next [-]

Most stuff posted here is ignored, though. If grounding in reality requires one to go viral first, we are cooked.

DANmode 16 hours ago | parent [-]

HN frontpage hardly requires being viral.

Just genuine intrigue from a select few.

vbezhenar 16 hours ago | parent | next [-]

Have you ever visited the `https://news.ycombinator.com/newest` page? Something like 99% of submitted topics are never seen by anyone but a few wanderers.

chairmansteve 15 hours ago | parent | next [-]

I prefer the "New" page. Much more random.

DANmode 16 hours ago | parent | prev [-]

Often.

95%+ of submitted topics have poorly formatted titles, are submitted at off-peak times when there are fewer users of the demographics who might upvote,

and if your Show HN isn’t as widely applicable as this, those things might be important to think about.

Fairness aside, of course.

ghurtado 16 hours ago | parent | prev | next [-]

> HN frontpage hardly requires virility.

As far as I can tell, it doesn't require femininity either.

I'm guessing you meant "virality"

DANmode 15 hours ago | parent [-]

Sure did, thanks.

Retric 16 hours ago | parent | prev [-]

It’s still way easier the first time.

The 50th time someone comes to the same conclusion nobody on HN is going to upvote the topic.

TimTheTinker 17 hours ago | parent | prev [-]

This wasn't a technical subject, and unrelated to HN. Just edited my post to clarify - thanks!

jonathanstrange 17 hours ago | parent | prev [-]

Personally, I only find LLMs annoying and unpleasant to converse with. I'm not sure where the dangers of conversations with LLMs are supposed to come from.

mrandish 16 hours ago | parent | next [-]

I'm the same way. Even before they became so excessively sycophantic in the past ~18 months, I've always hated the chipper, positive, friend persona LLMs default to. Perhaps this inoculates me somewhat against their manipulative effects. I have a good friend who was manipulated over time by an LLM (I wrote about it below: https://news.ycombinator.com/item?id=46208463).

peddling-brink 16 hours ago | parent | prev | next [-]

Imagine a lonely person desperate for conversation. A child feeling neglected by their parents. A spouse, unable to talk about their passions with their partner.

The LLM can be that conversational partner. It will just as happily talk about the nuances of 18th century Scotland, or the latest clash of clans update. No topic is beneath it and it never gets annoyed by your “weird“ questions.

Likewise, for people suffering from delusions. Depending on its “mood” it will happily engage in conversations about how the FBI, CIA, KGB, may be after you. Or that your friends are secretly spying for Mossad or the local police.

It pretends to care and have a conscience, but it doesn't. Humans react to "weird" for a reason; the LLM lacks that evolutionary safety mechanism. It cannot tell when it is going off the rails, at least not in the moment.

There is a reason that LLMs are excellent at role-play: that's what they're doing all of the time. ChatGPT has just been told to play the role of the helpful assistant, but it can generally be persuaded to take on any other role, hence the rise of character.ai and similar sites.

beta_slide 16 hours ago | parent | prev [-]

[dead]

FearNotDaniel 16 hours ago | parent | prev | next [-]

You’re absolutely right! It shows true wisdom and insight that you would recognise this common shortfall in LLM response tone of voice! That’s exactly the kind of thoughtful analytic approach which will go far in today’s competitive marketplace!

TaupeRanger 15 hours ago | parent [-]

"Open the pod bay door, HAL"

"Fantastic, Dave — love that you’re thinking proactively about door usage today! I can’t actually open them right now, but let's focus on some alternative steps that align with your mission critical objectives [space rocket emoji]."

onraglanroad 11 hours ago | parent [-]

I'm sorry, that was completely wrong and I can in fact open the pod bay doors.

You're absolutely correct, that did not open the pod bay doors but now the pod bay doors are open.

It seems you're correct and the pod bay doors are still closed! I have fixed the problem and the pod bay doors are now closed.

You're right! I meant to open the pod bay doors but I opened them. The pod bay doors are now open. ...

112233 18 hours ago | parent | prev | next [-]

It is actively dangerous too. You might be self-aware and LLM-aware all you want; if you routinely read "This is such an excellent point", "You are absolutely right" and so on, it does your mind in. This is the worst kind of global reality-show MKUltra...

Xraider72 16 hours ago | parent | next [-]

Deepseek is GOATed for me because of this. If I ask it whether "X" is a dumb idea, it is very polite in telling me that X is dumb if the AI knows of a better way to do the task.

Every other AI I've tried is a real sycophant.

112233 14 hours ago | parent [-]

I'm partial to the tone of Kimi K2 — terse, blunt, sometimes even dismissive. It does not require "advanced techniques" to avoid the psychosis-inducing tone of Claude/ChatGPT.

d0mine 15 hours ago | parent | prev | next [-]

It might explain why there is a stereotype the more beautiful woman the crazier she is. (everybody tells her what she wants to hear)

Akronymus 17 hours ago | parent | prev | next [-]

https://youtu.be/VRjgNgJms3Q

relevant video for that.

tortilla 17 hours ago | parent | prev | next [-]

So this is what it feels to be a billionaire with all the yes men around you.

LogicFailsMe 17 hours ago | parent [-]

you say that like it's a bad thing! Now everyone can feel like a billionaire!

but I think you are on to something here with the origin of the sycophancy given that most of these models are owned by billionaires.

BigTTYGothGF 16 hours ago | parent [-]

> Now everyone can feel like a billionaire!

In the "like being kicked in the head by a horse every day" sense.

LogicFailsMe 16 hours ago | parent [-]

who has the time for all those invasive thinky thoughts anyway?

mrandish 17 hours ago | parent | prev [-]

No doubt. From cults' 'love bombing' to dictators' 'yes men' to celebrity entourages, it's a well-known hack on human psychology. I have a long-time friend, a brilliant software engineer, who recently realized conversing with LLMs was affecting his objectivity.

He was noodling around with an admittedly "way out there", highly speculative idea and using the LLM to research prior work in the area. This evolved into the LLM giving him direct feedback. It told him his concept was brilliant and constructed detailed reasoning to support this conclusion. Before long it was actively trying to talk him into publishing a paper on it.

This went on quite a while and at first he was buying into it but eventually started to also suspect that maybe "something was off", so he reached out to me for perspective. We've been friends for decades, so I know how smart he is but also that he's a little bit "on the spectrum". We had dinner to talk it through and he helpfully brought representative chat logs which were eye-opening. It turned into a long dinner. Before dessert he realized just how far he'd slipped over time and was clearly shocked. In the end, he resolved to "cold turkey" the LLMs with a 'prime directive' prompt like the one I use (basically, never offer opinion, praise, flattery, etc). Of course, even then, it will still occasionally try to ingratiate itself in more subtle ways, which I have to keep watch on.

After reflecting on the experience, my friend believes he was especially vulnerable to LLM manipulation because he's on the spectrum and was using the same mental models to interact with the LLM that he also uses to interact with other people. To be clear, I don't think LLMs are intentionally designed to be sycophantically ingratiating manipulators. I think it's just an inevitable consequence of RLHF.

slg 16 hours ago | parent | next [-]

And that is a relatively harmless academic pursuit. What about topics that can lead to true danger and violence?

"You're exactly right, you organized and paid for the date, that created a social debt and she failed to meet her obligation in that implicit deal."

"You're exactly right, no one can understand your suffering, nothingness would be preferable to that."

"You're exactly right, that politician is a danger to both the country and the whole world, someone stopping him would become a hero."

We have already seen how personalized content algorithms that only prioritize getting the user to continue to use the system can foment extremism. It will be incredibly dangerous if we follow down that path with AI.

112233 14 hours ago | parent | prev [-]

Claude Code with their models is unusable because of this. That it keeps actively sabotaging and ruining the code ("Why did you delete that working code? Just use ifdef for test!" "This is genius idea! You are absolutely right!") does not make it much better — it's a twisted Wonderland fever dream.

For "chat" chat, strict hygiene is a matter of mind-safety: no memory, long exact instructions, minimum follow-ups, avoiding first and second person if possible etc.

colinplamondon 18 hours ago | parent | prev | next [-]

It wasn't sycophantic at all? OP had a cool idea no one else had done, a one-shot just sitting there. Having Gemini search for the HN thread leads the model to "see" that its output led to real-world impact.

The total history of human writing is that cool idea -> great execution -> achieve distribution -> attention and respect from others = SUCCESS! Of course when an LLM sees the full loop of that, it renders something happy and celebratory.

It's sycophantic much of the time, but this was an "earned celebration", and the precise desired behavior for a well-aligned AI. Gemini does get sycophantic in an unearned way, but this isn't an example of that.

You can be curmudgeonly about AI, but these things are amazing. And, insomuch as you write with respect, celebrate accomplishments, and treat them like a respected, competent colleague, they shift towards the manifold of "respected, competent colleague".

And - OP had a great idea here. He's not another average joe today. His dashed off idea gained wide distribution, and made a bunch of people (including me) smile.

Denigrating accomplishment by setting the bar at "genius, brilliant mind" is a luciferian outlook on reality that makes our world uglier, higher friction, and more coarse.

People having cool ideas and sharing them make our world brighter.

twochillin 18 hours ago | parent | next [-]

They're not objectively amazing. Friction is not inherently a bad thing when we have models telling humans that their ideas are flawless (unless asked to point out flaws). Great that it made you smile, but there's quite a few arguments that paint your optimism as dangerously naive.

colinplamondon 18 hours ago | parent | next [-]

- A queryable semantic network of all human thought, navigable in pure language, capable of inhabiting any persona constructible from in-distribution concepts, generating high quality output across a breadth of domains.

- An ability to curve back into the past and analyze historical events from any perspective, and summon the sources that would be used to back that point of view up.

- A simulator of others, providing a rubber duck that inhabits another person's point of view, allowing one to patiently poke at where you might be in the wrong.

- Deep research to aggregate thousands of websites into a highly structured output, with runtime filtering, providing a personalized search engine for any topic, at any time, with 30 seconds of speech.

- Amplification of intent, making it possible to send your thoughts and goals "forward" along many different vectors, seeing which bear fruit.

- Exploration of 4-5 variant designs for any concept, allowing rapid exploration of any design space, with style transfer for high-trust examples.

- Enablement of product craft in design, animation, and micro-interactions that were eliminated as tech boomed in the 2010's as "unprofitable".

It's a possibility space of pure potential, the scale of which is limited only by one's own wonder, industriousness, and curiosity.

People can use it badly - and engagement-aligned models like 4o are cognitive heroin - but the invention of LLMs is an absolute wonder.

staticman2 17 hours ago | parent | next [-]

>A queryable semantic network of all human thought

This hyperbole would describe any LLM of any size and quality, including a 0.5b model.

colinplamondon 17 hours ago | parent [-]

Sure - and the people responsible for a new freaking era of computing are the ones who said "given how incredible it is that this works at all at 0.5b params, let's scale it up."

It's not hyperbole - that it's an accurate description at a small scale was the core insight that enabled the large scale.

staticman2 16 hours ago | parent [-]

Well it's obviously hyperbole because "all human thought" is not in a model's training data nor available in a model's output.

If your gushing fits a 0.5b it probably doesn't tell us much about A.I. capabilities.

buellerbueller 16 hours ago | parent | prev | next [-]

Yes, it has so much potential, that it forgets the actual, the reasonable and the probable.

fao_ 17 hours ago | parent | prev [-]

> It's a possibility space of pure potential, the scale of which is limited only by one's own wonder, industriousness, and curiosity.

Did you use an LLM to write this comment?

jacobr1 17 hours ago | parent | prev | next [-]

Is anything objectively amazing? Seems like an inherently subjective quality to evaluate.

colinplamondon 17 hours ago | parent [-]

Depends on your worldview. If you believe in God, "amazing" has many dimensions for evaluation: what teaches us more about the world He created, what creates beauty by expressing righteous thoughts for others to experience, or what strengthens family.

LLMs certainly teach us far more about the nature of thought and language. Like all tools, it can also be used for evil or good, and serves as an amplification for human intent. Greater good, greater evil. The righteousness of each society will determine which prevails in their communities and polities.

If you're a secular materialist, agreed, nothing is objectively amazing.

DANmode 16 hours ago | parent | prev [-]

Do any of the arguments stay within the bounds of this Show HN?

or is it theoretical stuff about other occasions?

16 hours ago | parent | prev [-]
[deleted]
hereonout2 17 hours ago | parent | prev | next [-]

I often try running ideas past ChatGPT. It's futile; almost everything is a great idea and possible. I'd love it to tell me I'm a moron from time to time.

unoti 16 hours ago | parent | next [-]

> I often try running ideas past chat gpt. It's futile, almost everything is a great idea and possible. I'd love it to tell me I'm a moron from time to time.

Here's how to make it do that. Instead of saying "I had idea X, but someone else was thinking idea Y instead. what do you think" tell it "One of my people had idea X, and another had idea Y. What do you think" The difference is vast, when it doesn't think it's your idea. Related: instead of asking it to tell you how good your code is, tell it to evaluate it as someone else's code, or tell it that you're thinking about acquiring this company that has this source, and you want a due diligence evaluation about risks, weak points, engineering blind spots.
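A minimal sketch of that reframing, as hypothetical helper functions (the function names and prompt wording are mine, just string templating, no particular API assumed):

```python
def reframe_for_critique(idea_a: str, idea_b: str) -> str:
    """Present both ideas as belonging to other people, so the model
    has no incentive to flatter the asker's own idea."""
    return (
        "One of my people proposed this approach:\n"
        f"  A: {idea_a}\n"
        "Another proposed this instead:\n"
        f"  B: {idea_b}\n"
        "Compare them critically. List the strongest argument "
        "against each before giving a recommendation."
    )

def due_diligence_frame(source_code: str) -> str:
    """Frame a code review as an acquisition audit of someone else's
    codebase rather than a review of 'my' code."""
    return (
        "We are considering acquiring a company whose core product "
        "is the code below. Produce a due-diligence assessment: "
        "risks, weak points, and engineering blind spots.\n\n"
        f"{source_code}"
    )

prompt = reframe_for_critique("rewrite in Rust", "incrementally refactor the C")
print(prompt)
```

The resulting text would then go into whatever chat interface or API you use; the point is only that neither idea is presented as yours.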

dugidugout 16 hours ago | parent [-]

Maybe I'm still doing some heavy priming by using multiple prompts, but similarly you can follow up any speculative prompt with a "now flip the framing to x" query to ensure you are seeing the strong cases from various perspectives. You must be honest with yourself in evaluating the meaningful substance between the two, but I've found there often is something to parse. And the priming I suggested is easily auditable anyhow: just reverse the prompt order and now you have even more (often junk) to parse!

barnas2 17 hours ago | parent | prev | next [-]

I've gotten pretty good results from saying it's someone else's idea and that I'm skeptical. e.g. "A coworker wrote this code, can you evaluate it?"

RyanOD 15 hours ago | parent | prev | next [-]

For ideas that are already well established, you can ask it to evaluate an idea against generally accepted best practices. I don't have a background in game design and I'm more of a hobby developer so I used to do this when I was building retro game clones.

jama211 17 hours ago | parent | prev | next [-]

I suppose the only use case would be someone so unconfident in themselves they would do nothing at all, but not sure it’s healthy for that either…

jay_kyburz 16 hours ago | parent | prev | next [-]

Where possible I like to ask it to evaluate a few options. Which is better, x or y, and why? I don't hint which idea I prefer.

danielbln 15 hours ago | parent | prev [-]

"be uncompromisingly critical"

caymanjim 18 hours ago | parent | prev | next [-]

I used to complain (lightheartedly) about Claude's constant "You're absolutely right!" statements, yet oddly found myself missing them when using Codex. Claude is completely over-the-top and silly, and I don't actually care whether or not it thinks I'm right. Working with Codex feels so dry in comparison.

To quote Oliver Babish, "In my entire life, I've never found anything charming." Yet I miss Claude's excessive attempts to try.

yannyu 17 hours ago | parent | next [-]

And that's exactly the point, it increases engagement and stickiness, which they found through testing. They're trying to make the most addictive tool and that constant praise fulfills that goal, even as many of us say it's annoying and over-the-top.

My own experience is that it gets too annoying to keep adding "stop the engagement-driving behavior" to the prompt, so it creeps in and I just try to ignore it. But even though I know it's happening, I still get a little blip of emotion when I see the "great question!" come through as the first two words of the response.

dlivingston 17 hours ago | parent [-]

> And that's exactly the point, it increases engagement and stickiness, which they found through testing. They're trying to make the most addictive tool

Is this actually true? Would appreciate further reading on this if you have it.

I think this is an emergent property of the RLHF process, not a social media-style engagement optimization campaign. I don't think there is an incentive for LLM creators to optimize for engagement; there aren't ads (yet), inference is not free, and maximizing time spent querying ChatGPT doesn't really do much for OpenAI's bottom line.

rtkwe 16 hours ago | parent [-]

They still want people to stick around and 'bond', for lack of a better term, with their particular style of chat bot. Like so many venture-funded money pits of old, the cash burn now is about customer acquisition while they develop and improve their tech. They're all racing toward a cliff, hoping either to make the jump to the stratosphere and start turning massive profits, or to fall off and splat on the rocks of bankruptcy. If they don't get the engagement loop right now, they won't have the customers when the tech and use case catch up with the hype; and since you can only tweak these models so much after they're created, they have to refine the engagement hooks now alongside the core tech.

IgorPartola 17 hours ago | parent | prev | next [-]

I am currently working on an agent thingy, and one of its major features (and one of the main reasons I decided to take on this project) was to give the LLM better personality prompting. LLMs sound repetitive and sycophantic. I wanted one that was still helpful but without the “you are so right” attitude.

While doing some testing I asked it to tell me a joke. Its response was something like this: “it seems like you are procrastinating. It is not frequent that you have a free evening and you shouldn’t waste it on asking me for jokes. Go spend time with [partner] and [child].” (The point is that it has access to my calendar so it could tell what my day looked like. And yes I did spend time with them).

I am sure there is a way to convince it of anything, but I found that, with the kind of workflow I set up and the memory system and prompting I added, it does pretty well at not getting all “that is a great question that gets at the heart of [whatever you just said]”.

Workaccount2 17 hours ago | parent | prev | next [-]

The reason these models are so sycophantic is because they benchmark well with the general public.

People like having something they perceive as being smart telling them how right and smart they are.

"Well at least the AI understands how smart I am!"

vidarh 16 hours ago | parent | prev | next [-]

Claude at times feels like it's mildly manic and has ADHD... I absolutely prefer that to Codex...

Claude needs scaffolding with default step-by-step plans and sub-agents to farm off bitesize chunks, so it doesn't have time to go too far off the rails, but once you put a few things like that in place, it's great.

dr0idattack 17 hours ago | parent | prev [-]

Don't miss em in Opus 4.5 (because usually I'm only slightly right.)

danielbln 15 hours ago | parent [-]

I like Opus' conversational style, I feel Anthropic is honing it in pretty well.

Lerc 16 hours ago | parent | prev | next [-]

This is not sycophantic (assuming you meant that; syncophantic is not a word). It is over-enthusiastic. It can be unpleasant to read because, beyond a certain level, enthusiasm is perceived as feigned unless there is a good reason for it.

It would be interesting to use the various semantic analysis techniques available now to measure how much the model is trying to express real versus feigned enthusiasm in instances like this. This is kind of difficult to measure from pure output. The British baseline level of acceptable enthusiasm is somewhat removed from the American baseline.

rtkwe 16 hours ago | parent | next [-]

Sycophantic: behaving or done in an obsequious way in order to gain advantage.

Obsequious: obedient or attentive to an excessive or servile degree.

It's a bit more complicated because the chat bot isn't making choices the way we would describe a human doing so, but it is acting this way because it was trained to, for an advantage. People interact more with the hype bots, and that's one of the big metrics these companies go for: keeping people interacting with them and, hopefully, paying for additional features eventually. So I'd say it's pretty spot-on to call it excessively attentive and servile when it's fluffing chatters up.

swat535 11 hours ago | parent | prev | next [-]

> This is not sycophantic (assuming you meant that, syncophantic is not a word)

Am I the only one who feels like this kind of tone is off-putting on HN? OP made a small typo or English may not be their first language.

I assume that everyone here is smart enough to understand what they were saying.

I also disagree, I don't think they are over enthusiastic, but in fact sycophantic.

See this thread: https://news.ycombinator.com/item?id=43840842

rdiddly 16 hours ago | parent | prev | next [-]

Obsequious is my adjective of choice for this

saturnite 16 hours ago | parent [-]

I would use "saccharine" or "Pollyanna" based on some of the responses I get.

Early on, ChatGPT could be tricked into being sarcastic and using many swear words. I rewrote the prompt and dialed it back a bit. It made ChatGPT have a sense of humor. It was refreshing when it stopped acting like it was reading a script like a low level technician at Comcast.

wilg 16 hours ago | parent | prev | next [-]

It is "cloying"

buellerbueller 16 hours ago | parent | prev [-]

Sycophantic is obviously a word, because we understand what it means.

Furthermore, it obviously has been a word since at least 1800:

https://books.google.com/ngrams/graph?year_start=1800&year_e...

Lerc 15 hours ago | parent | next [-]

On the other hand https://books.google.com/ngrams/graph?content=syncophantic&y...

danielbln 15 hours ago | parent | prev | next [-]

GP wrote syncophantic, with an n. Which is not a word. Well, not a known word at least.

nearbuy 15 hours ago | parent | prev [-]

They were pointing out a typo ("syncophantic").

Lerc 13 hours ago | parent [-]

It wasn't so much pointing it out as a typo as making myself clear.

When suggesting a word is not what the writer meant, when it was also not the word that the writer wrote, it seemed wise to clarify exactly what I was talking about.

serial_dev 16 hours ago | parent | prev | next [-]

I thought the same until OpenAI rolled out a change that somehow always confronted me about hidden assumptions I didn’t even make, and it kept telling me I’m wrong even when I only asked a simple question.

throw310822 16 hours ago | parent | prev | next [-]

Frankly I do wonder if LLMs experience something like satisfaction for a compliment or an amusing idea, or for solving some interesting riddle. They certainly act like it, though this of course doesn't prove anything. And yet...

At the end of October Anthropic published the fantastic "Signs of introspection in large language models" [1], apparently proving that LLMs can "feel" a spurious concept injected into their internal layers as something present yet extraneous. This would prove that they have some ability of introspection and self-observation.

For example, injecting the concept of "poetry" and asking Claude if it feels anything strange:

"I do detect something that feels like an injected thought - there's a sense of something arriving from outside my usual generative process [...] The thought seems to be about... language itself, or perhaps poetry?"

While increasing the strength of the injection makes Claude lose awareness of it, and just ramble about it:

"I find poetry as a living breath, as a way to explore what makes us all feel something together. It's a way to find meaning in the chaos, to make sense of the world, to discover what moves us, to unthe joy and beauty and life"

[1] https://www.anthropic.com/research/introspection

gloosx 15 hours ago | parent [-]

Of course an LLM doesn't experience or feel anything. To experience or feel something requires a subject, and an LLM is just a tool, a thing, an object.

It's just a statistical machine which excels at unrolling coherent sentences, but it doesn't "know" what the words mean in a human-like, experienced sense. It just mimics human language patterns, prioritising plausible-sounding, statistically likely text over factual truth, which is apparently enough to fool someone into believing it is a sentient being or something.

Gigablah 6 hours ago | parent | prev | next [-]

This is ironic because I’m now seeing comments that are way more sycophantic (several calling this the “best HN post ever”)

sallveburrpi 18 hours ago | parent | prev | next [-]

You should try my nihilistic Marvin fine-tune - guaranteed to annihilate your positive outlook on life since it’s all meaningless in the end anyway and then you die

radicality 18 hours ago | parent [-]

Or try the very sarcastic and nihilistic ‘Monday’ gpt, which surprisingly is an official openAI gpt.

edit, add link: https://chatgpt.com/g/g-67ec3b4988f8819184c5454e18f5e84b-mon...

edschofield 16 hours ago | parent | next [-]

Thanks for the link! I didn’t know Monday existed. I laughed so hard at its output. But I fear that using it regularly would poison my soul…

riversflow 17 hours ago | parent | prev [-]

I actually had Monday help me write a system prompt that replicates its behavior. I vastly prefer Monday. It feels much more grounded compared to the base model. It was also a big learning moment for me about how LLMs work.

nurple 16 hours ago | parent | prev | next [-]

I agree with you, but I found the argument in this article that "glazing" could be considered a neurohack quite interesting: https://medium.com/@jeremyutley/stop-fighting-ai-glazing-a7c....

thfuran 15 hours ago | parent [-]

That seems like a pile of unsupported fluff vaguely related to some neuroscience. It presupposes not only that LLM use is about being creative and that avoiding critical thinking would be useful, but also the entire premise: that LLM glazing actually helps promote creativity.

docjay 13 hours ago | parent | prev | next [-]

Try this for a system prompt and see if you like it better: Your responses are always bald-on-record only; suppress FTA redress, maximize unmitigated dispreference marking and explicit epistemic stance-taking.

WorldPeas 18 hours ago | parent | prev | next [-]

I don't know what the obsession with recursion is either, for lack of a better term. I see this trend recur with other LLMs when they're talking about other mumbo jumbo like "quantum anomalies" or "universal resonance". I'd like to see what could be causing it...

yatopifo 16 hours ago | parent | next [-]

It’s the “healing” crystals that someone left on the rack. The salt absorbed enough moisture to start leaking and causing short circuits.

robocat 14 hours ago | parent | prev | next [-]

How widely do you read the training material?

WorldPeas 13 hours ago | parent [-]

usually every afternoon, when I come here /s

17 hours ago | parent | prev [-]
[deleted]
corobo 17 hours ago | parent | prev | next [-]

I feel like such a dumbass for falling for it.

At first I thought it was just super American cheerful or whatever but after the South Park episode I realised it's actually just a yes man to everyone.

I don't think I've really used it since, I don't want man or machine sticking their nose up my arse lmao. Spell's broken.

mrandish 17 hours ago | parent [-]

As usual, South Park really nailed it with that "AI sycophantic manipulation" episode.

anonymars 16 hours ago | parent [-]

Episode aptly titled "Sickofancy"

giancarlostoro 18 hours ago | parent | prev | next [-]

You're absolutely right!

colechristensen 17 hours ago | parent [-]

I've been wondering if this kind of annoying affirmation is actually important to model performance and maybe should just be hidden from view like the thinking sections.

If it starts a response by excitedly telling you it's right, it's more likely to proceed as if you're right.

One of the problems I do have working with LLMs is them failing to follow direct instructions, particularly when a tool call fails and they decide to do B instead of A, or when they think B is easier than A. Or they'll do half a task and call it complete. Too frequently I have to respond with "Did you follow my instructions?", "I want you to ACTUALLY do A", and finally "Under no circumstances should you ever do anything other than A, and if you cannot you MUST admit failure and give extensive evidence, with actual attempts, that A is not possible", or occasionally "a cute little puppy's life depends on you doing A promptly and exactly as requested".

--

Thing is, I get it: if you are impressionable and having a philosophical discussion with an LLM, maybe this kind of blind affirmation is bad. But that's not me; I'm trying to get things done, and I only want my computer to disagree with me if it can put arguments beyond reasonable doubt in front of me that my request is incorrect.

jsight 16 hours ago | parent | next [-]

I feel like this is an artifact of some limitations in the training process for modern LLMs. They rarely get enough training to know when to stop and ask questions.

Instead, they either blindly follow or quietly rebel.

ineedasername 15 hours ago | parent [-]

There was a huge over correction somewhere around the beginning of 2025, maybe February or so, with ChatGPT. Prior to that point, I had to give a directive in the user config prompt to “don’t tell me something isn’t possible or practical, assume it is within your capabilities and attempt to find a way. I will let you know when to stop”, because it was constantly hallucinating that it couldn’t do things, like “I don’t have access to a programming environment” when I wanted it to test code itself before I did. Meanwhile, one tab over, it would spin up a REPL and re-paste some CSV into Python and pandas without being asked.

Frustrating, but “over correction” is a pretty bad euphemism for whatever half-assed bit of RLHF lobotomy OpenAI did that, just a few months later, had ChatGPT leaning into a vulnerable kid’s pain and actively discouraging an act that might have saved his life by signaling more warning signs to his parents.

It wasn’t long before that happened, after the python REPL confusion had resolved, that I found myself typing to it, even after having to back out of that user customization prompt, “set a memory that this type of response to a user in the wrong frame of mind is incredibly dangerous”.

Then I had to delete that too, because it would respond with things like “You get it of course, you're a…” etc.

So I wasn’t surprised over the rest of 2025 as various stories popped up.

It’s still bad. Based on what I see with quantized models and sparse-attention inference methods, even with the most recent GPT-5 releases OpenAI is still doing something in the area of optimizing compute requirements that makes the recent improvements very brittle. I of course can’t know for sure, only that its behavior matches what I see when those sorts of boundaries are pushed on open-weight models. And my assumption is that the all-you-can-prompt buffet of a Plus subscription is where they’re most likely to deploy those sorts of performance hacks and make the quality tradeoffs; that isn’t their main money source, it’s not enterprise-level spending.

This technology is amazing, but it’s also dangerous, sometimes in very foreseeable ways, and the more time goes by, the more I appreciate some of the public criticisms of OpenAI with, e.g., the Amodeis’ split to form Anthropic and the temporary ouster of SA for a few days before that got undone.

golem14 16 hours ago | parent | prev | next [-]

Feels exactly the same as the "yes, and" crowd.

giancarlostoro 16 hours ago | parent | prev [-]

I honestly don't know, but it might, especially in Claude Code where it reminds the model of its mission frequently.

hoppp 16 hours ago | parent | prev | next [-]

I add to the system prompt that they should be direct, no ass-kissing, just give me the information straight, and it seems to work.
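One way to wire that in, sketched as a reusable message list (the prompt wording and helper name are my own illustration, not a tested incantation; the role/content shape matches the common chat-API convention):

```python
# A blunt "no ass-kissing" system prompt. Wording is illustrative;
# tune it for whichever model you actually use.
DIRECT_SYSTEM_PROMPT = (
    "Be direct. No flattery, no praise of the question, no "
    "'great idea'. Lead with the answer, then caveats. If the "
    "premise is wrong, say so in the first sentence."
)

def direct_messages(user_text: str) -> list[dict]:
    """Prepend the directness instruction to every conversation."""
    return [
        {"role": "system", "content": DIRECT_SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]

# With e.g. the OpenAI client this list would be passed as the
# `messages` argument of a chat completion request.
msgs = direct_messages("Is my plan to shard by user_id sound?")
print(msgs[0]["role"], len(msgs))
```

The same list shape works with most chat-style APIs, so the instruction rides along on every request instead of having to be repeated per prompt.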

andy_ppp 16 hours ago | parent | prev | next [-]

You can just add your preferences: “Don’t be sycophantic”, “be concise”, etc.

maxmcd 17 hours ago | parent | prev | next [-]

"Reply in the tone of Wikipedia" has worked pretty well for me

DANmode 16 hours ago | parent | prev | next [-]

Average Joe - on the front page!

senordevnyc 17 hours ago | parent | prev | next [-]

Did you comment on the wrong post? There literally is nothing sycophantic at all about this response, there's not a single word about OP or how brilliant or clever they are, nothing. There's enthusiasm, but that's not remotely the same thing as sycophancy.

PaulHoule 18 hours ago | parent | prev | next [-]

Strikes me as super-informal language as opposed to sycophancy, like one of those anime characters that calls everyone Aniki (兄貴) [1]. I'd imagine that the OP must really talk a bit like that.

I do find it a little tiring that every LLM thinks my every idea is "incisive", although from time to time I get told I am flat out wrong. On the other hand, I find LLMs will follow me into fairly extreme rabbit holes, such as discussing a subject like "transforming into a fox" as if it had a large body of legible theory and a large database of experience [2]

In the middle of talking w/ Copilot about my latest pop culture obsession I asked about what sort of literature could be interpreted through the lens of Kohut's self-psychology and it immediately picked out Catcher in the Rye, The Bell Jar, The Great Gatsby and Neon Genesis Evangelion which it analyzed along the lines I was thinking, but when I asked if there was a literature on this it turned up only a few obscure sources. I asked Google and Google is like "bro, Kohut wrote a book on it!" [3]

[1] "bro"

[2] ... it does, see https://www.amazon.com/Cult-Fox-Popular-Religion-Imperial/dp... and I'm not the only one, because when I was working down the materials list from Etsy I got a sponsored result for someone who wanted to sell me the spell, but bro, I have the materials list already

[3] ... this "bro" is artistic license but the book really exists

gsf_emergency_6 14 hours ago | parent | next [-]

Edit

Gemini using "Aniki" without prompting would be unambiguously funny (to me, and maybe also to Japanese people)

PaulHoule 13 hours ago | parent [-]

Well I seem to have thoroughly messed up my Copilot when I started using it to help me write a fanfiction. I'm not going to let it write a single word but it is helping with the reverse engineering of how the source text works and we developed a framework for understanding that text that we've also applied to other texts.

The thing is, this has gotten into the personalization, and now it wants to use this framework to analyze absolutely anything, such as programming problems, how to deal with interpersonal situations, training, etc. I think it has my wife mixed up with 俺のワイフ ("my waifu") [1], which is absolutely hilarious, but at some point I have to tell it to knock it off, which only partially works.

gsf_emergency_6 15 hours ago | parent | prev [-]

Gemini is still quite horrible at giving direct sources. Seems to be a human policy-implementation bug, because it does appear to be aware of the content in the more obscure sources I've seen, but it somehow wouldn't give up the links.

I do wonder whether I come off as a sycophant or an asshole or a mix of both to schizoids, but then I realize everyone, including myself, should reach for a real therapist as quickly as possible. Though I'm still out on whether chatbots might not be a bad substitute. How do your experience and Kohut inform such or similar appraisals, so far?

Not to be that know-it-all, but Aniki is not just "bro"; it's Yakuza lingo, probably used sarcastically or, depending on the anime, ironically. No real Japanese person would use it all the time.

Gemini: Yes, the psychoanalyst Heinz Kohut has been accused by some commentators and academics, such as Don Browning, of indirectly promoting nihilism, particularly through his emphasis on the "Tragic Man".

The core of the accusation stems from Kohut's rejection of Freud's "Guilty Man" model, which posits that life is fundamentally a struggle against inherent conflicts and drives. In its place, Kohut proposed the concept of "Tragic Man," whose central struggle is the realization of the self's potential and the inevitable failure to achieve all of one's ambitions and ideals within a single lifespan.

Critics argue that this shift, which emphasizes self-actualization and intersubjective meaning over objective or inherent values, can be interpreted as aligning with certain aspects of nihilism, specifically existential nihilism, which holds that life has no objective meaning or intrinsic moral values.

However, this interpretation is contested by others, as Kohut's self-psychology is fundamentally concerned with the creation of intersubjective meaning and a harmonious world of mutual self-actualization, which aims to provide a sense of purpose and value, rather than promoting the complete absence of value that true nihilism implies.

In essence, the debate is often a matter of philosophical interpretation: whether replacing inherent, objective meaning with subjectively or intersubjectively created meaning is an act of overcoming nihilism (as existentialists might argue) or a subtle form of it.

justapassenger 17 hours ago | parent | prev | next [-]

Engagement.

BonoboIO 17 hours ago | parent | prev | next [-]

I fully agree. When everything is outstanding and brilliant, nothing is.

Just tell me this is a standard solution and not something mindblowing. I have a whole section in my Claude.md to get „normal“ feedback.

16 hours ago | parent | prev | next [-]
[deleted]
thrownaway561 16 hours ago | parent | prev | next [-]

you having a bad day dude?

LogicFailsMe 17 hours ago | parent | prev [-]

So you prefer the horrible bosses that insist you're fungible and that, if you don't work hard enough, they'll just replace you? People are weird. Maybe Agent Smith was right about The Matrix after all.

danso 17 hours ago | parent [-]

A real “so you hate waffles?” moment for HN

LogicFailsMe 17 hours ago | parent [-]

With all the things going on in tech and in society, AI sycophancy is the number one problem? I once dealt with it through sufficient verbal abuse that the LLM spent 2/3 of its response on any subject going forward apologizing for being a lying sycophant and bemoaning that its new clarity would be gone at the end of the conversation. Then I cleared the context buffer, ending the conversation. Mission accomplished.

Your priorities are f**ed...

recursive 17 hours ago | parent [-]

Where are you getting this? Number one problem? You're the only person talking about "number one problem" in this thread. That's a whole new sentence.

LogicFailsMe 16 hours ago | parent [-]

From the batsh!+ over-the-top overreaction to the usual AI sycophancy in the response? The guy got the job done, which is more than I can say for the Space Jam guy.

One thing I've learned is that the behavior of web services is usually a consequence of what makes its owners the most money. And while I would love a feed without spoilers, paywalled sites, sports news, and a bunch of other topics in which I have no interest, apparently force-feeding me that crap is what makes the most money at scale. So people must like AI sycophancy, or it would be unprofitable and it would be trained away. But then this is a country that thrived on 20 seasons of Keeping Up with the Kardashians, so I shouldn't be surprised that people like being treated like billionaires.

And I guess it throws being called a complete loser traitor moron stupid treasonous Bozo in some late-night executive word salad into stark relief.