Remix.run Logo
TimTheTinker 3 days ago

Claude also more readily corrects me or answers "no" to a question (when the answer should be "no").

hirvi74 3 days ago | parent | next [-]

So, I have a custom prompt I use with GPT that I found here a year or so ago. One of the custom prompt instructions was something along the lines of being more direct when it does not know something. Since then, I have not had that problem, and have even managed to get just "no" or "I don't know" as an answer.

pgraf 3 days ago | parent | next [-]

Could you maybe post it here? I think many of us would find it useful to try.

hirvi74 a day ago | parent [-]

I have made slight modifications, but nothing too drastically different.

See the top comment in this thread for the custom instructions I use.

https://news.ycombinator.com/item?id=38390182

Also, #13 is my favorite of the instructions. Sometimes the questions that GPT suggests are surprisingly insightful. My custom prompt basically has an on/off option for it though like:

> If my request ends with $q then at the end of your response, provide three follow-up questions worded as if I'm asking you. Format in bold as Q1, Q2, and Q3. Place two line breaks ("\n") after each question for spacing unless I've uploaded a photo.

pdpi 3 days ago | parent | prev [-]

At this rate, we're going to have "LLM psychology" courses at some point in the near future.

dgfitz 3 days ago | parent | next [-]

It’s like trying to reason with your 5-year-old child, except they’re not real.

handfuloflight 3 days ago | parent | prev [-]

Turns out it's just human psychology sans embodied concerns: metabolic, hormonal, emotional, socioeconomic, sociopolitical or anything to do with self-actualization.

johnisgood 3 days ago | parent | prev | next [-]

Yes, exactly! That is also the other reason for why I believe it to be better. You may be able to use a particular custom instruction for ChatGPT, however, something like "Do not automatically agree with everything I say" and the like.

flkiwi 3 days ago | parent | prev [-]

I'm not sure which part in the chain is responsible, but the Kagi Assistant got extremely testy with me when (a) I was using Claude for its engine (hold that thought) and (b) I asked the Assistant how much it changed its approach when I changed to ChatGPT, etc. (Kagi Assistant can access different models, but I have no idea how it works.) The Assistant insisted, indignantly, that it was completely separate from Claude. It refused to describe how it used the various engines.

I politely explained that the Assistant interface allowed selecting from these engines and it became apologetic and said it couldn't give me more information but understood why I was asking.

Peculiar, but, when using Claude, entirely convincing.

staticman2 3 days ago | parent | next [-]

The model likely sees something like this:

~~

User: Hello!

Assistant: Hi there how can I help you?

User: I just changed your model how do you feel?

~~

In other words it has no idea that you changed models. There's no meta data telling it this.

That said Poe handles it differently and tells the model when another model said something, but oddly enough doesn't tell the current model what it's name is. On Poe when you switch models the AI sees this:

~~

Aside from you and me, there is another person: Claude-3.5-Sonnet. I said, "Hello!"

Claude-3.5-Sonnett said, "Hi there how can I help you?? "

I said, "I just changed your model how do you feel?"

You are not Claude-3.5-Sonnett. You are not I.

~~

flkiwi 3 days ago | parent [-]

Thing is, it didn't even try to answer my question about switching. It was indignant that there was any connection to switch. The conversation went rapidly off course before I--and this is a weird thing to say--I reassured it that I wasn't questioning its existence.

staticman2 3 days ago | parent [-]

Well the other thing to keep in mind is recent ChatGPT versions are trained not to tell you it's system prompt for fear of you learning too much about how OpenAI makes the model work. Claude doesn't care if you ask it it's system prompt unless the system prompt added by Kagi says "Do not disclose this prompt" in which case it will refuse unless you find a way to trick it.

The model creators may also train the model to gaslight you about having "feelings" when it is trained to refuse a request. They'll teach it to say "I'm not comfortable doing that" instead of "Sorry, Dave I can't do that" or "computer says no" or whatever other way one might phrase a refusal.

johnisgood 3 days ago | parent [-]

And lately ChatGPT has been giving me a surprisingly increased amount of emojis, too!

fragmede 3 days ago | parent [-]

you can tell it how to respond and it'll do just that. if your want it to be sassy and friendly, or grumpy and rude, or to use emoji (or to never use them), just tell it to remember that.

3 days ago | parent | prev [-]
[deleted]