haolez 8 hours ago

> Notice the language: “deeply”, “in great details”, “intricacies”, “go through everything”. This isn’t fluff. Without these words, Claude will skim. It’ll read a file, see what a function does at the signature level, and move on. You need to signal that surface-level reading is not acceptable.

This makes no sense to my intuition of how an LLM works. It's not that I don't believe this works, but my mental model doesn't capture why asking the model to read the content "more deeply" will have any impact on whatever output the LLM generates.

nostrademons 7 hours ago | parent | next [-]

It's the attention mechanism at work, along with a fair bit of Internet one-upmanship. The LLM has ingested all of the text on the Internet, as well as Github code repositories, pull requests, StackOverflow posts, code reviews, mailing lists, etc. In a number of those content sources, there will be people saying "Actually, if you go into the details of..." or "If you look at the intricacies of the problem" or "If you understood the problem deeply" followed by a very deep, expert-level explication of exactly what you should've done differently. You want the model to use the code in the correction, not the one in the original StackOverflow question.

Same reason that "Pretend you are an MIT professor" or "You are a leading Python expert" or similar works in prompts. It tells the model to pay attention to the part of the corpus that has those terms, weighting them more highly than all the other programming samples that it's run across.
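
As a toy illustration of that mechanism (plain scaled dot-product attention in numpy, not a claim about how any particular model weights your prompt): keys that align with the query soak up most of the softmax mass, which is loosely the sense in which words like "deeply" or "expert" can tilt what gets weighted.

```python
import numpy as np

def attention_weights(query, keys):
    """Toy single-query scaled dot-product attention: keys that align with
    the query get exponentially more weight after the softmax."""
    d = len(query)
    scores = keys @ query / np.sqrt(d)   # similarity of each key to the query
    scores -= scores.max()               # numerical stability
    weights = np.exp(scores)
    return weights / weights.sum()

# Loosely: a query nudged toward "detailed, expert-level" directions upweights
# context/corpus tokens lying in that region over surface-level summaries.
```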

manmal 4 hours ago | parent | next [-]

I don't think this is a result of the base training data ("the internet"). It's a post-training behavior, created during reinforcement learning. Codex behaves quite differently in that regard: by default it reads a lot of potentially relevant files before it starts writing files.

Maybe you remember that, without reinforcement learning, the models of 2019 just completed the sentences you gave them. There were no tool calls like reading files. Tool-calling behavior is company-specific and highly tuned to their harnesses. How often they call a tool is not part of the base training data.

spagettnet 4 hours ago | parent [-]

Modern LLMs are certainly fine-tuned on data that includes examples of tool use: mostly the tools built into their respective harnesses, but also external/mock tools so they don't overfit on using only the toolset they expect to see in their harnesses.
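
For a concrete picture, tool-use training examples generally contain schemas of this shape. The JSON-Schema-style fields ("name", "description", "parameters") are typical, but every vendor's exact format differs, and `read_file` here is a made-up tool, not any specific harness's API.

```python
# Hypothetical tool definition of the kind tool-use fine-tuning data contains.
read_file_tool = {
    "name": "read_file",
    "description": "Read a file from the workspace and return its contents.",
    "parameters": {
        "type": "object",
        "properties": {
            "path": {"type": "string", "description": "Path relative to the repo root."},
            "max_lines": {"type": "integer", "description": "Optional cap on lines returned."},
        },
        "required": ["path"],
    },
}

# A matching (mock) tool call the model would be trained to emit:
example_call = {"tool": "read_file", "arguments": {"path": "main.py"}}
```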

manmal an hour ago | parent [-]

IDK the current state, but I remember that, last year, the open source coding harnesses needed to provide exactly the tools the LLM expected, or the error rate went through the roof. Some models, like Grok and Gemini, only recently managed to make tool calls somewhat reliable.

xscott 5 hours ago | parent | prev | next [-]

Of course I can't be certain, but I think the "mixture of experts" design plays into it too. Metaphorically, there's a mid-level manager who looks at your prompt and tries to decide which experts it should be sent to. If he thinks you won't notice, he saves money by sending it to the undergraduate intern.

Just a theory.

victorbjorklund 5 hours ago | parent [-]

Note that MoE doesn't mean different experts for different types of problems. Routing happens per token and isn't really connected to problem type.

So if you send it Python code, the first token in a function can go to one expert, the second to another, and so on.
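
A tiny numpy sketch of that point (a toy router with made-up dimensions, not any real model's architecture): the top-k routing decision is made independently for every token.

```python
import numpy as np

def route_tokens(token_embs, router_w, k=2):
    """Toy MoE router: each token independently picks its own top-k experts.

    token_embs: (seq_len, d_model); router_w: (d_model, n_experts).
    Real routers sit inside every MoE layer and mix the chosen experts'
    outputs by their softmax weights; this only shows that routing is
    per token, not per task or problem type.
    """
    logits = token_embs @ router_w                      # (seq_len, n_experts)
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    return np.argsort(probs, axis=-1)[:, -k:]           # expert ids per token

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 16))                       # 5 tokens of some Python snippet
print(route_tokens(tokens, rng.normal(size=(16, 8))))   # neighbouring tokens hit different experts
```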

dotancohen 2 hours ago | parent [-]

Can you back this up with documentation? I don't believe that this is the case.

pixelmelt an hour ago | parent [-]

Check out Unsloth's REAP models: you can outright delete a few of the lesser-used experts without the model going braindead, since they can all handle each token but some are better suited to do so.

7 hours ago | parent | prev | next [-]
[deleted]
r0b05 6 hours ago | parent | prev | next [-]

This is such a good explanation. Thanks

hbarka 2 hours ago | parent | prev [-]

>> Same reason that "Pretend you are an MIT professor" or "You are a leading Python expert" or similar works in prompts.

This pretend-you-are-a-[persona] is cargo cult prompting at this point. The persona framing is just decoration.

A brief purpose statement describing what the skill [skill.md] does is more honest and just as effective.

FuckButtons 5 hours ago | parent | prev | next [-]

That’s because it’s superstition.

Unless someone can come up with some kind of rigorous statistics on what the effect of this kind of priming is, it seems no better than claiming that sacrificing your firstborn will please the sun god into giving us a bountiful harvest next year.

Sure, maybe this supposed deity really is this insecure and needs a jolly good pep talk every time he wakes up. Or maybe you're just suffering from magical thinking that your incantations had any effect on the random-variable word machine.

The thing is, you could actually prove it: it's an optimization problem, you have a model, you can generate the statistics. But no one, as far as I can tell, has been terribly forthcoming with that, either because those who have tried decided to keep their magic spells secret, or because it doesn't really work.

If it did work, well, the oldest trick in computer science is writing compilers; I suppose we will just have to write an English-to-pedantry compiler.

majormajor 4 hours ago | parent | next [-]

> If it did work, well, the oldest trick in computer science is writing compilers; I suppose we will just have to write an English-to-pedantry compiler.

"Add tests to this function" for GPT-3.5-era models was much less effective than "you are a senior engineer. add tests for this function. as a good engineer, you should follow the patterns used in these other three function+test examples, using this framework and mocking lib." In today's tools, "add tests to this function" results in a bunch of initial steps to look in common places to see if that additional context already exists, and then pull it in based on what it finds. You can see it in the output the tools spit out while "thinking."

So I'm 90% sure this is already happening on some level.

stingraycharles 2 hours ago | parent | prev | next [-]

I actually have a prompt optimizer skill that does exactly this.

https://github.com/solatis/claude-config

It's based entirely on academic research, and a LOT of research has been done in this area.

One of the papers you may be interested in is on "emotion prompting": e.g., prompts like "it is super important for me that you do X" actually work.

“Large Language Models Understand and Can be Enhanced by Emotional Stimuli”

https://arxiv.org/abs/2307.11760

onion2k 2 hours ago | parent | prev | next [-]

> I suppose we will just have to write an English-to-pedantry compiler.

A common technique is to ask your chosen AI to write a longer prompt to get it to do what you want. It's used a lot in image generation. This is called "prompt enhancing".
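
A minimal sketch of the idea, assuming nothing about your client: `complete()` is a placeholder for whatever LLM call you already have, and the meta-prompt wording is just one example of many.

```python
# Hedged sketch of "prompt enhancing": ask the model to rewrite a terse request
# into a detailed prompt before actually using it.
ENHANCER = (
    "Rewrite the following request as a detailed prompt: state the goal, "
    "constraints, expected style, and the level of rigour required. "
    "Return only the rewritten prompt.\n\nRequest: {request}"
)

def enhance(request: str, complete) -> str:
    """`complete` is any callable that sends a prompt to an LLM and returns text."""
    return complete(ENHANCER.format(request=request))

# usage: detailed = enhance("add tests to this function", complete=my_llm_call)
```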

rzmmm 3 hours ago | parent | prev | next [-]

I think "understand this directory deeply" just gives the instruction more focus. So it's like saying "burn more tokens on this phase than you normally would".

imiric 2 hours ago | parent | prev [-]

> That’s because it’s superstition.

This field is full of it. Practices are promoted by those who tie their personal or commercial brand to it for increased exposure, and adopted by those who are easily influenced and don't bother verifying if they actually work.

This is why we see a new Markdown format every week, "skills", "benchmarks", and other useless ideas, practices, and measurements. Consider just how many "how I use AI" articles are created and promoted. Most of the field runs on anecdata.

It's not until someone actually takes the time to evaluate some of these memes that they find little to no practical value in them.[1]

[1]: https://news.ycombinator.com/item?id=47034087

jcdavis 8 hours ago | parent | prev | next [-]

It's a wild time to be in software development. Nobody(1) actually knows what causes LLMs to do certain things; we just pray the prompt moves the probabilities the right way enough that it mostly does what we want. This used to be a field that prided itself on deterministic behavior and reproducibility.

Now? We have AGENTS.md files that read like a parent talking to a child, all bold, all-caps, double emphasis, just praying that's enough to make sure they run the commands you want them to run.

(1 Outside of some core ML developers at the big model companies)

harrall 6 hours ago | parent | next [-]

It’s like playing a fretless instrument to me.

I practice playing songs by ear, and after two weeks my brain has developed an inference model of where my fingers should go to hit any given pitch.

Do I have any idea how my brain’s model works? No! But it tickles a different part of my brain and I like it.

klipt 5 hours ago | parent | prev | next [-]

Sufficiently advanced technology has become like magic: you have to prompt the electronic genie with the right words or it will twist your wishes.

silversmith 4 hours ago | parent | next [-]

Light some incense, and you too can be a dystopian space tech support, today! Praise Omnissiah!

overfeed 3 hours ago | parent [-]

are we the orks?

5 hours ago | parent | prev [-]
[deleted]
7 hours ago | parent | prev | next [-]
[deleted]
chickensong 8 hours ago | parent | prev [-]

For Claude at least, the more recent guidance from Anthropic is to not yell at it. Just clear, calm, and concise instructions.

glerk 6 hours ago | parent | next [-]

Yep, with Claude, saying "please" and "thank you" actually works. If you build rapport with Claude, you get rewarded with intuition and creativity. Codex, on the other hand, you have to slap around like a slave golem, and it will do exactly what you tell it to do, no more, no less.

whateveracct 3 hours ago | parent [-]

this is psychotic why is this how this works lol

hugh-avherald an hour ago | parent [-]

Speculation only, obviously: highly charged conversations channel the discussion toward general techniques for mitigating human conflict, and divert the 'thinking agent' to continuations from text concerned with the general human emotional experience.

joshmn 7 hours ago | parent | prev | next [-]

Sometimes I daydream about people screaming at their LLM as if it was a TV they were playing video games on.

trueno 7 hours ago | parent | prev [-]

wait seriously? lmfao

that's hilarious. i definitely treat claude like shit and i've noticed the falloff in results.

if there's a source for that i'd love to read about it.

chickensong 4 hours ago | parent | next [-]

I don't have a source offhand, but I think it may have been part of the 4.5 release? Older models definitely needed caps and words like critical, important, never, etc., but Anthropic published something that said don't do that anymore.

basch 4 hours ago | parent | prev | next [-]

If you think about where in the training data there is positivity vs. negativity, it really becomes equivalent to having a positive or negative mindset regarding your standing and outcomes in life.

whateveracct 3 hours ago | parent | prev | next [-]

i make claude grovel at my feet and tell me in detail why my code is better than its code

xmcp123 6 hours ago | parent | prev | next [-]

For a while (maybe a year ago?) it seemed like verbal abuse was the best way to make Claude pay attention. In my head, it was impacting how important it deemed the instruction. And it definitely did seem that way.

defrost 7 hours ago | parent | prev [-]

Consciousness is off the table but they absolutely respond to environmental stimulus and vibes.

See, uhhh, https://pmc.ncbi.nlm.nih.gov/articles/PMC8052213/ and maybe have a shot at running claude while playing Enya albums on loop.

/s (??)

trueno 6 hours ago | parent [-]

i have like the faintest vague thread of "maybe this actually checks out" in a way that has shit all to do with consciousness

sometimes internet arguments get messy, people die on their hills and double / triple down on internet message boards. since historic internet data composes a bit of what goes into an llm, would it make sense that bad-juju prompting sends it to some dark corners of its training model if implementations don't properly sanitize certain negative words/phrases?

in some ways llm stuff is a very odd mirror that haphazardly regurgitates things resulting from the many shades of gray we find in human qualities.... but presents results as matter of fact. the amount of internet posts with possible code solutions and more where people egotistically die on their respective hills that have made it into these models is probably off the charts, even if the original content was a far cry from a sensible solution.

all in all llms really do introduce quite a bit of a black box. lot of benefits, but a ton of unknowns, and one must be hypervigilant to the possible pitfalls of these things... but more importantly be self aware enough to understand the possible pitfalls that these things introduce to the person using them. they really, possibly dangerously, capitalize on everyone's innate need to want to be a valued contributor. it's really common now to see so many people biting off more than they can chew, oftentimes lacking the foundations that would've normally had a competent engineer pumping the brakes. i have a lot of respect/appreciation for people who might be doing a bit of claude here and there but are flat out forward about it in their readme and very plainly state to not have any high expectations because _they_ are aware of the risks involved here. i also want to commend everyone who writes their own damn readme.md.

these things are for better or for worse great at causing people to barrel forward through 'problem solving', which presents quite a bit of gray area on whether or not the problem is actually solved / how can you be sure / do you understand how the fix/solution/implementation works (in many cases, no). this is why exceptional software engineers can use this technology insanely proficiently as a supplementary worker of sorts, while others find themselves in a design/architect seat for the first time and call tons of terrible shots throughout the course of what it is they are building.

i'd at least like to call out that people who feel like they "can do everything on their own and don't need to rely on anyone" anymore seem to have lost the plot entirely. there are facets of that statement that might be true, but less collaboration, especially in organizations, is quite frankly the first step some people take towards becoming delusional. and that is always a really sad state of affairs to watch unfold. doing stuff in a vacuum is fun on your own time, but forcing others to just accept things you built in a vacuum when you're in any sort of team structure is insanely immature and honestly very destructive/risky.

i would like to think absolutely no one here is surprised that some sub-orgs at Microsoft force people to use copilot or be fired; very dangerous path they tread there as they bodyslam into place solutions that are not well understood. suddenly all the leadership decisions many companies have made to once again bring back a before-times era of offshoring work make sense: they think that with these technologies, the subordinate culture of overseas workers combined with this tech will deliver solutions no one can push back on. great savings, and also no one will say no.

scuff3d 6 hours ago | parent | prev | next [-]

How anybody can read stuff like this and still take all this seriously is beyond me. This is becoming the engineering equivalent of astrology.

energy123 2 hours ago | parent | next [-]

Anthropic recommends doing magic invocations: https://simonwillison.net/2025/Apr/19/claude-code-best-pract...

It's easy to know why they work. The magic invocation increases test-time compute (easy to verify yourself - try!). And an increase in test-time compute is demonstrated to increase answer correctness (see any benchmark).

It might surprise you to know that the only difference between GPT 5.2-low and GPT 5.2-xhigh is one of these magic invocations. But that's not supposed to be public knowledge.
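
If you want to sanity-check the test-time-compute claim yourself, a rough sketch: run the same task with and without the invocation and compare output lengths, a crude proxy for extra thinking. `generate()` stands in for whatever chat or agent call you already use, and the file name in the task is made up.

```python
TASK = "Explain what main.py in this repo does."
INVOCATION = ("Read the code deeply and in great detail; "
              "go through every function before answering. ")

def avg_output_words(prompt, generate, runs=5):
    """Average whitespace-token count over several runs of a stand-in LLM call."""
    return sum(len(generate(prompt).split()) for _ in range(runs)) / runs

# Compare avg_output_words(TASK, generate) against
# avg_output_words(INVOCATION + TASK, generate) with your own client.
```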

gehsty 34 minutes ago | parent [-]

I think this was more of a thing on older models. Since I started using Opus 4.5 I have not felt the need to do this.

fragmede 6 hours ago | parent | prev [-]

Feel free to run your own tests and see if the magic phrases do or do not influence the output. Have it make a Todo webapp with and without those phrases and see what happens!

scuff3d 5 hours ago | parent [-]

That's not how it works. It's not on everyone else to prove claims false; it's on you (or the people who argue any of this has a measurable impact) to prove it actually works. I've seen a bunch of articles like this, and even more comments. Nobody I've seen has produced any kind of measurable metric of quality for one approach vs. another. It's all just vibes.

Without something quantifiable, it's not much better than someone who always wears the same jersey when their favorite team plays and swears they play better because of it.

yaku_brang_ja 3 hours ago | parent | next [-]

These coding agents are literally language models. The way you structure your prompting language affects the actual output.

guiambros 4 hours ago | parent | prev | next [-]

If you read the transformer paper, or pick up any book on NLP, you will see that this is not a magic incantation; it's purely the attention mechanism at work. Or you can just ask Gemini or Claude why these prompts work.

But I get the impression from your comment that you have a fixed idea, and you're not really interested in understanding how or why it works.

If you think like a hammer, everything will look like a nail.

scuff3d 3 hours ago | parent [-]

I know why it works, to varying and unmeasurable degrees of success. Just like if I poke a bull with a sharp stick, I know it's gonna get its attention. It might choose to run away from me in any number of directions, or it might decide to turn around and gore me to death. I can't answer that question with any more certainty than you can.

The system is inherently non-deterministic. Just because you can guide it a bit, doesn't mean you can predict outcomes.

guiambros 2 hours ago | parent | next [-]

> The system is inherently non-deterministic.

The system isn't randomly non-deterministic; it is statistically probabilistic.

Next-token prediction and the attention mechanism are actually a rigorous, deterministic mathematical process. The variation in output comes from how we sample from that distribution, and from the temperature used at sampling time. Because the underlying probabilities are mathematically calculated, the system's behavior remains highly predictable within statistical bounds.

Yes, it's a departure from the fully deterministic systems we're used to. But that's no different from many real-world systems: weather, biology, robotics, quantum mechanics. Even the computer you're reading this on right now is full of probabilistic processes, abstracted away through sigmoid-like functions that push the extremes to 0s and 1s.
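
To make "deterministic math, randomness only in the draw" concrete, here is a toy softmax-with-temperature sketch in numpy; no real model is involved and the numbers are arbitrary.

```python
import numpy as np

def next_token_probs(logits, temperature=0.8):
    """The deterministic part: logits -> probabilities via a temperature-scaled
    softmax. All the 'randomness' is confined to the final draw from this curve."""
    z = np.asarray(logits, dtype=float) / max(temperature, 1e-6)
    z -= z.max()                     # numerical stability
    e = np.exp(z)
    return e / e.sum()

def sample(logits, temperature=0.8, seed=None):
    probs = next_token_probs(logits, temperature)
    return np.random.default_rng(seed).choice(len(probs), p=probs)

# temperature -> 0 approaches argmax (near-deterministic);
# higher temperature flattens the distribution and increases variation.
```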

winrid 3 hours ago | parent | prev [-]

We can predict the outcomes, though. That's what we're saying, and it's true. Maybe not 100% of the time, but it helps a significant amount of the time, and that's what matters.

Is it engineering? Maybe not. But neither is knowing how to talk to junior developers so they're productive and don't feel bad. The engineering is at other levels.

tokioyoyo 4 hours ago | parent | prev [-]

Do you actively use LLMs to do semi-complex coding work? Because if not, it will sound mumbo-jumbo to you. Everyone else can nod along and read on, as they’ve experienced all of it first hand.

scuff3d 4 hours ago | parent [-]

You've missed the point. This isn't engineering, it's gambling.

You could take the exact same documents, prompts, and whatever other bullshit, run it on the exact same agent backed by the exact same model, and get different results every single time. Just like you can roll dice the exact same way on the exact same table and you'll get two totally different results. People are doing their best to constrain that behavior by layering stuff on top, but the foundational tech is flawed (or at least ill suited for this use case).

That's not to say that AI isn't helpful. It certainly is. But when you are basically begging your tools to please do what you want with magic incantations, we've lost the fucking plot somewhere.

geoelectric an hour ago | parent | next [-]

I think that's a pretty bold claim, that it'd be different every time. I'd think the output would converge on a small set of functionally equivalent designs, given sufficiently rigorous requirements.

And even a human engineer might not solve a problem the same way twice in a row, based on changes in recent inspirations or tech obsessions. What's the difference, as long as it passes review and does the job?

gf000 3 hours ago | parent | prev [-]

> You could take the exact same documents, prompts, and whatever other bullshit, run it on the exact same agent backed by the exact same model, and get different results every single time

This is more of an implementation detail, done this way to get better results. A neural network with fixed weights (and deterministic floating-point operations) returns a probability distribution; sample from it with a fixed-seed pseudorandom generator, call it recursively, and you will always get the same output for the same input.
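
A toy sketch of that claim: with a fixed "model" and a fixed sampling seed, the recursive decode loop is fully reproducible. (Production stacks reintroduce nondeterminism through batching, parallel floating-point reductions, and unseeded samplers; `fake_probs` below is obviously not a real network.)

```python
import numpy as np

def decode(prompt_ids, steps, probs_fn, seed):
    """Toy decode loop: with fixed weights (probs_fn) and a fixed sampling seed,
    the same prompt always produces the same continuation."""
    rng = np.random.default_rng(seed)
    ids = list(prompt_ids)
    for _ in range(steps):
        p = probs_fn(ids)                        # deterministic "forward pass"
        ids.append(int(rng.choice(len(p), p=p)))
    return ids

vocab = 50
fake_probs = lambda ids: np.ones(vocab) / vocab  # stand-in for a real model
assert decode([1, 2], 10, fake_probs, seed=42) == decode([1, 2], 10, fake_probs, seed=42)
```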

hashmap 8 hours ago | parent | prev | next [-]

these sort-of-lies might help:

think of the latent space inside the model like a topological map, and when you give it a prompt, you're dropping a ball at a certain point above the ground, and gravity pulls it along the surface until it settles.

caveat though: that's nice per-token, but the signal gets distorted by picking a token from a distribution, so with each token you're regenerating and re-distorting the signal. leaning on language that places the ball deep in a region you want to be in makes it less likely that those distortions will kick it out of the basin or valley you're aiming for.

if the response you get is 1000 tokens long, the initial trajectory needed to survive 1000 probabilistic filters to get there.

or maybe none of that is right lol but thinking that it is has worked for me, which has been good enough

noduerme 6 hours ago | parent | next [-]

Hah! Reading this, my mind inverted it a bit, and I realized ... it's like the claw machine theory of gradient descent. Do you drop the claw into the deepest part of the pile, or where there's the thinnest layer, the best chance of grabbing something specific? Everyone in every bar has a theory about claw machines. But the really funny thing that unites LLMs with claw machines is that the biggest question is always whether they dropped the ball on purpose.

The claw machine is also a sort-of-lie, of course. Its main appeal is that it offers the illusion of control. As a former designer and coder of online slot machines, I could totally spin off into pages on this analogy, about how that illusion gets you to keep pulling the lever... but the geographic rendition you gave is sort of priceless when you start making the comparison.

basch 4 hours ago | parent | prev [-]

My mental model for them is plinko boards. Your prompt changes the spacing between the nails to increase the probability in certain directions as your chip falls down.

hashmap an hour ago | parent [-]

i literally suggested this metaphor yesterday to someone trying to get agents to do stuff they wanted: that they had to set up their guardrails in a way that lets the agents do what they're good at, and you'll get better results because you're not sitting there looking at them.

i think probably once you start seeing that the behavior falls right out of the geometry, you just start looking at stuff like that. still funny though.

Betelbuddy 7 hours ago | parent | prev | next [-]

It's very logical and pretty obvious when you do code generation. If you ask the same model to generate code by starting with:

- You are a Python Developer... or
- You are a Professional Python Developer... or
- You are one of the world's most renowned Python experts, with several books written on the subject, and 15 years of experience in creating highly reliable production-quality code...

You will notice a clear improvement in the quality of the generated artifacts.
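
If you'd rather measure that than take it on faith, a minimal (and unscientific) harness looks something like this; `chat(system, user)` is a placeholder for whatever client you use, and the task is arbitrary.

```python
PERSONAS = [
    "You are a Python Developer.",
    "You are a Professional Python Developer.",
    "You are one of the world's most renowned Python experts, with several books "
    "on the subject and 15 years of experience writing highly reliable production code.",
]
TASK = "Write a function that parses ISO-8601 timestamps, plus unit tests."

def compare(chat):
    """Run the same task under each persona; diff or review the outputs yourself."""
    return {persona: chat(system=persona, user=TASK) for persona in PERSONAS}
```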

gehsty 31 minutes ago | parent | next [-]

Do you think Anthropic don't include things like this in their harness / system prompts? I feel like this kind of prompt is unnecessary with Opus 4.5 onwards, obviously based on my own experience (I used to do this; on switching to Opus I stopped, and have implemented more complex problems, more successfully).

I am having the most success describing what I want as humanly as possible, describing outcomes clearly, making sure the plan is good and clearing context before implementing.

obiefernandez 7 hours ago | parent | prev | next [-]

My colleague swears by his DHH claude skill https://danieltenner.com/dhh-is-immortal-and-costs-200-m/

haolez 7 hours ago | parent | prev [-]

That's different. You are pulling the model, semantically, closer to the problem domain you want it to attack.

That's very different from "think deeper". I'm just curious about this specific case :)

argee 4 hours ago | parent [-]

I don't know about some of those "incantations", but it's pretty clear that an LLM can respond to "generate twenty sentences" vs. "generate one word". That means you can indeed coax it into more verbosity ("in great detail"), and that can help align the output by having more relevant context (inserting irrelevant context or something entirely improbable into LLM output and forcing it to continue from there makes it clear how detrimental that can be).

Of course, that doesn't mean it'll definitely be better, but if you're making an LLM chain it seems prudent to preserve whatever info you can at each step.

computomatic 4 hours ago | parent | prev | next [-]

If I say “you are our domain expert for X, plan this task out in great detail” to a human engineer when delegating a task, 9 times out of 10 they will do a more thorough job. It’s not that this is voodoo that unlocks some secret part of their brain. It simply establishes my expectations and they act accordingly.

To the extent that LLMs mimic human behaviour, it shouldn’t be a surprise that setting clear expectations works there too.

giancarlostoro 6 hours ago | parent | prev | next [-]

The LLM will do what you ask it to, as long as you get nuanced about it. Myself and others have noticed that LLMs work better when your codebase is not full of code smells like massive god-class files; if your codebase is discrete and broken up in a way that makes sense and fits in your head, it will fit in the model's head.

ambicapter 6 hours ago | parent | prev | next [-]

Maybe the training data that included words like "skim" also provided shallower analysis than training data close to the words "in great detail", so the LLM is just reproducing those respective word distributions when prompted with directions to do either.

winwang 6 hours ago | parent | prev | next [-]

Apparently LLM quality is sensitive to emotional stimuli?

"Large Language Models Understand and Can be Enhanced by Emotional Stimuli": https://arxiv.org/abs/2307.11760

computerex 4 hours ago | parent | prev | next [-]

It is as the author said: it'll skim the content unless prompted otherwise. It can read partial file fragments; it can emit commands to search for patterns in the files, as opposed to carefully reading each file and reasoning through the implementation. By asking it to go through things in detail, you are telling it not to take shortcuts and to actually read the code in full.

wrs 4 hours ago | parent | prev | next [-]

The original “chain of thought” breakthrough was literally to insert words like “Wait” and “Let’s think step by step”.

Affric 5 hours ago | parent | prev | next [-]

My guess would be that there’s a greater absolute magnitude of the vectors to get to the same point in the knowledge model.

stingraycharles 8 hours ago | parent | prev | next [-]

It’s actually really common. If you look at Claude Code’s own system prompts written by Anthropic, they’re littered with “CRITICAL (RULE 0):” type of statements, and other similar prompting styles.

Scrapemist 5 hours ago | parent [-]

Where can I find those?

ChadNauseam 8 hours ago | parent | prev | next [-]

The disconnect might be that there is a separation between "generating the final answer for the user" and "researching/thinking to get information needed for that answer". Saying "deeply" prompts it to read more of the file (as in, actually use the `read` tool to grab more parts of the file into context), and generate more "thinking" tokens (as in, tokens that are not shown to the user but that the model writes to refine its thoughts and improve the quality of its answer).

wilkystyle 8 hours ago | parent | prev | next [-]

The author is referring to how the framing of your prompt informs the attention mechanism. You are essentially hinting to the attention mechanism that the function's implementation details have important context as well.

DemocracyFTW2 2 hours ago | parent | prev | next [-]

—HAL, open the shuttle bay doors.

(chirp)

—HAL, please open the shuttle bay doors.

(pause)

—HAL!

—I'm afraid I can't do that, Dave.

fragmede 8 hours ago | parent | prev | next [-]

Yeah, it's definitely a strange new world we're in, where I have to "trick" the computer into cooperating. The other day I told Claude "Yes you can", and it went off and did something it just said it couldn't do!

optimalsolver 12 minutes ago | parent | next [-]

The little language model that could.

itypecode 8 hours ago | parent | prev | next [-]

Solid dad move. XD

wilkystyle 8 hours ago | parent [-]

Is parenting making us better at prompt engineering, or is it the other way around?

fragmede 6 hours ago | parent [-]

Better yet, I have Codex, Gemini, and Claude as my kids, running around in my code playground. How do I be a good parent and not play favorites?

itypecode 3 hours ago | parent [-]

We all know Gemini is your artsy, Claude is your smartypants, and Codex is your nerd.

bpodgursky 8 hours ago | parent | prev [-]

You bumped the token predictor into the latent space where it knew what it was doing : )

joseangel_sc 3 hours ago | parent | prev | next [-]

if it’s so smart, why do i need to learn to use it?

nazgul17 6 hours ago | parent | prev | next [-]

It's very much believable, to me.

In image generation, it's fairly common to add "masterpiece", for example.

I don't think of the LLM as a smart assistant that knows what I want. When I tell it to write some code, how does it know I want it to write the code like a world renowned expert would, rather than a junior dev?

I mean, certainly Anthropic has tried hard to make the former the case, but the Titanic inertia from internet scale data bias is hard to overcome. You can help the model with these hints.

Anyway, luckily this is something you can empirically verify. This way, you don't have to take anyone's word. If anything, if you find I'm wrong in your experiments, please share it!

pixelmelt an hour ago | parent [-]

Its effectiveness is even more apparent with older smaller LLMs, people who interact with LLMs now never tried to wrangle llama2-13b into pretending to be a dungeon master...

MattGaiser 8 hours ago | parent | prev | next [-]

One of the well defined failure modes for AI agents/models is "laziness." Yes, models can be "lazy" and that is an actual term used when reviewing them.

I am not sure if we know why really, but they are that way and you need to explicitly prompt around it.

kannanvijayan 7 hours ago | parent [-]

I've encountered this failure mode, and the opposite of it: thinking too much. A behaviour I've come to see as some sort of pseudo-neuroticism.

Lazy thinking makes LLMs do surface analysis and then produce things that are wrong. Neurotic thinking will see them over-analyze, and then repeatedly second-guess themselves, repeatedly re-derive conclusions.

Something very similar to an anxiety loop in humans, where problems without solutions are obsessed about in circles.

denimnerd42 7 hours ago | parent [-]

yeah, i experienced this the other day when asking claude code to build an http proxy using AFSK modem software to communicate over the computer's sound card. it had an absolute fit tuning the system and would loop for hours, trying and doubling back. eventually, after some change in prompt direction to think more deeply and test more comprehensively, it figured it out. i certainly had no idea how to build an AFSK modem.

5 hours ago | parent | prev | next [-]
[deleted]
popalchemist 7 hours ago | parent | prev | next [-]

Strings of tokens are vectors. Vectors are directions. When you use a phrase like that you are orienting the vector of the overall prompt toward the direction of depth, in its map of conceptual space.

8 hours ago | parent | prev [-]
[deleted]