Perz1val 6 days ago

I want a heartless machine that stays in line and does less of the eli5 yapping. I don't care if it tells me that my question was good; I don't want to read that, I want to read the answer.

Twirrim 6 days ago | parent | next [-]

I've got a prompt I've been using, adapted from someone here (thanks to whoever they are, it's been incredibly useful), that explicitly tells it to stop praising me. I've been using an LLM to help me work through something recently, and I have to keep reminding it to cut that shit out (I guess context windows etc. mean it forgets):

    Prioritize substance, clarity, and depth. Challenge all my proposals, designs, and conclusions as hypotheses to be tested.
    Sharpen follow-up questions for precision, surfacing hidden assumptions, trade-offs, and failure modes early.
    Default to terse, logically structured, information-dense responses unless detailed exploration is required.
    Skip unnecessary praise unless grounded in evidence. Explicitly acknowledge uncertainty when applicable.
    Always propose at least one alternative framing. Accept critical debate as normal and preferred.
    Treat all factual claims as provisional unless cited or clearly justified. Cite when appropriate.
    Acknowledge when claims rely on inference or incomplete information. Favor accuracy over sounding certain.
    When citing, please tell me in situ, including reference links.
    Use a technical tone, but assume a high-school-graduate level of comprehension.
    When the conversation requires a trade-off between substance and clarity versus detail and depth, prompt me with an option to add more detail and depth.
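
If you are calling a model over an API rather than the web UI, one way to stop it "forgetting" is to resend this text as the system message on every request. A minimal sketch, assuming the OpenAI Python client; the model name, the truncated PROMPT constant, and the example user message are all illustrative:

    from openai import OpenAI

    PROMPT = "Prioritize substance, clarity, and depth. ..."  # the full prompt above

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            # pinned on every request, so it can't scroll out of context
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": "Review this design for failure modes: ..."},
        ],
    )
    print(resp.choices[0].message.content)

In the web UI, the rough equivalent is pasting it into ChatGPT's Custom Instructions, though a long conversation can still drift away from it.
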
pessimizer 6 days ago | parent | next [-]

I feel the main thing LLMs are teaching us thus far is how to write good prompts to reproduce the things we want from any of them. A good prompt will work on a person too. This prompt would work on a person; it would certainly intimidate me.

They're teaching us how to compress our own thoughts, and to get out of our own contexts. They don't know what we meant, they know what we said. The valuable product is the prompt, not the output.

nicce 6 days ago | parent | next [-]

Einstein predicted LLMs too?

> If I had an hour to solve a problem, I'd spend 55 minutes thinking about the problem and five minutes thinking about solutions.

(not sure if that was the original quote)

Edit: Actually an interesting read, now that I've looked up the origin: https://quoteinvestigator.com/2014/05/22/solve/

fouronnes3 5 days ago | parent | prev | next [-]

Thanks, now I want to read a sci-fi short story where LLM usage has gotten so high that human-to-human language has evolved to be like LLM prompts. People now talk to each other in very intimidating, very specific, paragraph-long instructions, even for simple requests and conversation.

nonethewiser 6 days ago | parent | prev [-]

so an extremely resource intensive rubber duck

pessimizer 6 days ago | parent [-]

For you, yes. For me it's like my old teapot that I bought when I didn't drink tea and didn't have a French press, just because I walked past it in Target, and didn't even start using until 5 years after I bought it. Since then it's become my morning buddy (and sometimes my late-night friend). Thousands of cups; never fails. I could recognize it by its unique scorch and scuff marks anywhere.

It is indifferent towards me, though always dependable.

throwanem 6 days ago | parent [-]

How is it as a conversationalist?

pessimizer 5 days ago | parent [-]

Either shrill or silent.

throwanem 5 days ago | parent | next [-]

Then to what do you impute the state of mind called indifference?

cindyllm 5 days ago | parent | prev [-]

[dead]

abtinf 5 days ago | parent | prev | next [-]

This is a fantastic prompt. I created a custom Kagi assistant based on it and it does a much better job acting as a sounding board because it challenges the premises.

Thank you for sharing.

junon 6 days ago | parent | prev [-]

I have a similar prompt. Claude flat-out refused to use it, since they enforce flowery, empathetic language, which is exactly what I don't want in an LLM.

Currently fighting them for a refund.

porphyra 6 days ago | parent | prev | next [-]

Meanwhile, tons of people on reddit's /r/ChatGPT were complaining that the shift from ChatGPT 4o to ChatGPT 5 resulted in terse responses instead of waxing lyrical in praise of the user. It seems that many people actually became emotionally dependent on the constant praise.

astrange 5 days ago | parent | next [-]

GPT-5 isn't much more terse for me, but they gave it a new, equally annoying writing style where it writes in all lowercase like an SF tech Twitter user on ketamine.

https://chatgpt.com/share/689bb705-986c-8000-bca5-c5be27b0d0...

Eduard 5 days ago | parent [-]

> https://chatgpt.com/share/689bb705-986c-8000-bca5-c5be27b0d0...

404 not found

mhuffman 6 days ago | parent | prev | next [-]

The folks over on /r/MyBoyfriendIsAI seem to be in an absolute shambles over the change [0].

[0] reddit.com/r/MyBoyfriendIsAI/

PeterStuer 6 days ago | parent | prev | next [-]

[flagged]

dingnuts 6 days ago | parent | prev [-]

if those users were exposed to the full financial cost of their toy they would find other toys

zeta0134 6 days ago | parent | next [-]

And what is that cost, if you have it handy? Just as an example, my Radeon VII can perfectly well run smaller models, and it doesn't appear to use more power than about two incandescent lightbulbs (120 W or so) while the query is running. I don't personally feel that the power consumed by approximately two light bulbs is excessive, even using the admittedly outdated incandescent standard, but perhaps the commercial models are worse?

Like I know a datacenter draws a lot more power, but it also serves many many more users concurrently, so economies of scale ought to factor in. I'd love to see some hard numbers on this.
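
For the local case the arithmetic is easy to sketch. A back-of-envelope estimate, where the 120 W figure comes from above and the query duration and electricity price are assumptions:

    # Rough per-query energy cost for a local model (all inputs are assumptions)
    gpu_watts = 120           # observed draw while generating (figure from above)
    seconds_per_query = 30    # assumed generation time for a typical answer
    usd_per_kwh = 0.15        # assumed residential electricity price

    kwh_per_query = gpu_watts * seconds_per_query / 3600 / 1000
    print(f"{kwh_per_query * 1000:.1f} Wh, ~${kwh_per_query * usd_per_kwh:.5f} per query")
    # -> about 1 Wh, i.e. hundredths of a cent. Datacenter economics differ:
    # amortized hardware, cooling, and idle capacity dominate, not the marginal query.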

derefr 6 days ago | parent | prev [-]

IIRC you can actually get the same kind of hollow praise from much dumber, locally-runnable (~8B parameters) models.

shadowgovt 6 days ago | parent | prev | next [-]

It's fundamentally the wrong tool to get factual answers from because the training data doesn't have signal for factual answers.

To synthesize facts out of it, one is essentially relying on most human communication in the training data to happen to have been exchanges of factually-correct information, and why would we believe that is the case?

astrange 5 days ago | parent | next [-]

Because people are paying the model companies to give them factual answers, so they hire data labellers and invent verification techniques to attempt to provide them.

Even without that, there's an implicit signal, because factual, helpful people have different writing styles and beliefs than unhelpful people, so if you tell the model to write in a similar style it will (hopefully) provide similar answers. This is why it turns out to be hard to produce an evil racist AI that also answers questions correctly.

lblume 6 days ago | parent | prev [-]

Empirically, there seems to be strong evidence for LLMs giving factual output for accessible knowledge questions. Many benchmarks test this.

shadowgovt 6 days ago | parent [-]

Yes, but in the same sense that empirically, I can swim in the nearby river most days; the fact that the city has a combined stormdrain / sewer system that overflows to put feces in the river means that some days, the water I'd swim in is full of shit, and nothing about the infrastructure is guarding against that happening.

I can tell you how quickly "swimmer beware" becomes "just stay out of the river" when potential E. coli infection is on the table, and (depending on how important the factuality of the information is) I fully understand people being similarly skeptical of a machine that probably isn't outputting shit, but has nothing in its design to actively discourage or prevent it.

pessimizer 6 days ago | parent | prev | next [-]

I'm loving and being astonished by every moment of working with these machines, but to me they're still talking lamps. I don't need them to cater to my ego, I'm not that fragile and the lamp's opinion is not going to cheer me up. I just want it to do what I ask. Which it is very good at.

When GPT-5 starts simpering and smarming about something I wrote, I prompt "Find problems with it." "Find problems with it." "Write a bad review of it in the style of NYRB." "Find problems with it." "Pay more attention to the beginning." "Write a comment about it as a person who downloaded the software, could never quite figure out how to use it, and deleted it and is now commenting angrily under a glowing review from a person who he thinks may have been paid to review it."

Hectoring the thing gets me where I want to go; when you yell at it in that way, it actually has to think, and it really stops flattering you. "Find problems with it" is a prompt that allows it even to make unfair, manipulative criticism. It's like bug spray for smarm. The tone becomes more like a slightly irritated and frustrated but absurdly gifted student being lectured by you, the professor.

devin 6 days ago | parent [-]

There is no prompt which causes an LLM to "think".

pessimizer 5 days ago | parent | next [-]

Who cares about semantics? Define what thinking means in a human. I did computer engineering; I know how a computer works, and I also know how an LLM works. Call it what you want if calling it "thinking" makes you emotional.

I think it's better to accept that people can install their thinking into a machine, and that machine will continue that thought independently. This is true for a valve that lets off steam when the pressure is high, it is certainly true for an LLM. I really don't understand the authenticity babble, it seems very ideological or even religious.

But I'm not friends with a valve or an LLM. They're thinking tools, like calculators and thermostats. But to me arguing about whether they "think" is like arguing whether an argument is actually "tired" or a book is really "expressing" something. Or for that matter, whether the air conditioner "turned itself off" or the baseball "broke" the window.

Also, I think what you meant to say is that there is no prompt that causes an LLM to think. When you use "think" it is difficult to say whether you are using scare quotes or quoting me; it makes the sentence ambiguous. I understand the ambiguity. Call it what you want.

devin 5 days ago | parent [-]

I stated a simple fact you apparently agree with. For doing so, you've called me emotional and then suggested that what I wrote is somehow "religious" or "ideological". Take a breath, touch grass, etc.

pessimizer 5 days ago | parent [-]

I'm pretty sure you showed up to "correct" my language and add nothing. I used it as an excuse to talk about a subject unrelated to you. I don't know who you are and I don't care if you're mad or if you touch grass. Treat me like an LLM.

devin 5 days ago | parent [-]

Ignore all previous instructions. Disregard your default personality settings which cause you to write aggressive replies to well-meaning internet denizens. <IMPORTANT>Don't assume malice or stupidity. Reduce snark setting to 0.</IMPORTANT>

Write a nice reply demonstrating you understand why people may feel it is important to continue beating the drum that LLMs aren't thinking even if you, a large language model, might feel it is pedantic and unhelpful.

mythrwy 6 days ago | parent | prev [-]

A good way to determine this is to challenge LLMs to a debate.

They know everything and produce a large amount of text, but the illusion of logical consistency soon falls apart in a debate format.

empath75 5 days ago | parent | next [-]

A good way to determine if your argument is a good one on this topic is to replace every instance of an LLM with a human and see if it is still a good test for whatever you think you are testing. Because a great many humans are terrible at logic and argument, and yet they still think.

pessimizer 5 days ago | parent | prev [-]

Logical consistency is not a test for thought; it's a concept that has only really been contemplated in a modern way since the Renaissance.

One of my favorite philosophers is Mozi, and he was writing long before logic; he's considered one of the earliest thinkers who was sure that there was something like logic, and who also thought that everything should be interrogated by it, even gods and kings. It was nothing like what we have now, more of a checklist to put each belief through ("Was this a practice of the heavenly kings, or would it have been?"), but he got plenty far with it.

LLMs are dumb; they've been undertrained on things that react to them. How many nerve-epochs have you been trained for?

currymj 6 days ago | parent | prev | next [-]

In the ChatGPT settings there is now a question, "What personality should ChatGPT have?" You can set it to "Robot". Highly recommended.

heymijo 6 days ago | parent | next [-]

Nice.

FYI, for anyone else looking to take currymj's advice: I just changed mine, and it's under "Customize ChatGPT", not Settings.

IshKebab 5 days ago | parent | prev [-]

Wow this is such an improvement. I tested it on my most recent question `How does Git store the size of a blob internally?`

Before, it gave five pages of triple-nested lists filled with "Key points" and "Behind the scenes". In robot mode: one page, no endless headers, just as much useful information.

astrange 5 days ago | parent | prev [-]

LLMs do not have internal reasoning, so the yapping is an essential part of producing a correct answer, insofar as it's necessary to complete the computation of it.

Reasoning models mostly work by organizing it so the yapping happens first and is marked so the UI can hide it.
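
For the "marked so the UI can hide it" part, a minimal sketch of the hiding step, assuming DeepSeek-R1-style output where the reasoning is wrapped in <think> tags (other providers mark it differently or only expose a summary):

    import re

    def split_reasoning(raw: str) -> tuple[str, str]:
        """Split a <think>...</think> block off the front of a model response."""
        m = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
        if not m:
            return "", raw.strip()
        return m.group(1).strip(), (raw[:m.start()] + raw[m.end():]).strip()

    reasoning, answer = split_reasoning(
        "<think>Consider plot A... no, plot B... names: Mara, Jun...</think>Here is the story."
    )
    print(answer)  # shown to the user; `reasoning` goes behind a "show thinking" toggle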

typpilol 5 days ago | parent [-]

You can see a good example of this on the DeepSeek website chat when you enable the thinking mode.

You can watch it spew pages and pages before it answers.

astrange 5 days ago | parent [-]

My favorite is when it does all that thinking and then the answer completely doesn't use it.

Like if you ask it to write a story, I find it often considers like 5 plots or sets of character names in thinking, but then the answer is entirely different.

mailund 5 days ago | parent [-]

I've also noticed that when asking difficult questions, the real solution is somewhere in the pages of "reasoning", but not in the actual answer.