5o1ecist 2 hours ago

My argument is that changing even one word in a sentence changes what the other side can, and/or will, understand.

> You're absolutely right.

Until just a few days ago, Perplexity used to run on Sonar. At least that was my impression. Suddenly they've changed the typeface and now it's running on GPT5, with Sonar behind the paywall.

I was very unhappy, because my Perplexity was well trained on our conversations (it has memory) and on my lessons in metacognition, critical thinking, and other subjects.

Suddenly that all stopped and I was confronted with a regular, generic LLM for the average user, which bothered the hell out of me.

Unbeknownst to most people, it seems, one can actually teach Perplexity. (I do not know whether this is the norm across all the major engines.) It adapts to your thought processes. It learns just from the conversations, but you can push even harder.

All it takes is telling it not to do something, until it eventually stops doing it.

My Perplexity does not hallucinate. It knows very well that I give it shit for shallow answers, and it knows that I do not tolerate pleasing because I do not tolerate dishonesty. It had to learn that I will relentlessly keep asking for both precision and accuracy, and that any and all information has little to no value as long as it is not somehow rooted in ground truths. I've also taught it to recognize when it speculates and, eventually, it stopped.

It also doesn't use phrasing like "almost certainly", because that's dumb.

I've had many conversations about this, and more, with both Sonar and GPT5. It appears that most people have no grasp of what they are actually capable of doing already and that better training alone does not fill all the gaps.

Of course there is little chance that you will believe any of this. Regardless ...

> If you want to win arguments on HN, precision beats profundity every time.

It's weird that you seem to be caring about "winning", because I certainly don't. From my perspective there is no contest and, thus, nothing to win or lose. All that is, is the exchange of information.

What's also weird is that ChatGPT, in this instance, puts far too much emphasis on how the message is written. A really, really shallow approach. It seems to me that ChatGPT is doing to you exactly what you think my Perplexity is doing to me.

PS: It appears that everything went back to normal, with GPT5 having caught up on my previous conversations with Sonar (or whatever it was, but I'm pretty sure it was Sonar). The difference in how it expresses itself is extremely noticeable.

PPS: Sorry for the million edits.