cjbgkagh 3 days ago

I find that LLMs tailor their language to the audience, so instead you could say, "I am Dutch, so give it to me straight."

In my usage, LLMs give much smarter answers when I've been able to convince them that I am smart enough to hear them. They don't take my word for it; they seem to require evidence. I have to warm them up with some exercises where I can impress the AI.

The coding-focused models seem to have much lower agreeableness than the chat models.

mghackerlady 3 days ago | parent | next

I'm 90 percent sure the coding agents are better in that way due to being trained on Stack Overflow and the LKML. Even some normal models will completely change their tone when asked about anything technical.

breezybottom 3 days ago | parent | prev

I think modern LLMs can tell whether you're actually speaking Dutch. That's a trick that probably hasn't worked since GPT-3.

cjbgkagh 3 days ago | parent | next

Over 90 percent of the Dutch can speak English, though actually speaking Dutch would clearly be more convincing. I recently stumbled across the trick of convincing the LLM that I'm smart, on the 5.4-Codex model. It was effective in getting the AI to do something it had previously dismissed as impossible.

xandrius 3 days ago | parent

Gotta tell us what it is now :D

cjbgkagh 3 days ago | parent | next

It was a heavily optimized function that used AVX2 intrinsics as well as a bit-twiddling mathematical approximation that exceeded the necessary precision. I wanted it rewritten for a bunch of other backends, but it refused, saying that its own more naive approach was the fastest possible. So I told it to write a benchmark and test the actual performance; once it saw the results, it relented and ported the algorithm to the other backends as I asked.
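For a concrete sense of the shape (a purely hypothetical sketch; the actual function isn't shown in this thread), this is what a bit-twiddling approximation lifted into AVX2 intrinsics can look like: the classic fast inverse square root with its 0x5f3759df magic constant, plus one Newton-Raphson step to recover precision.

    #include <immintrin.h>

    // Hypothetical sketch only; not the function from this thread.
    // Bit-twiddle initial guess (the fast inverse square root trick)
    // computed across 8 floats at once, then one Newton-Raphson step.
    static inline __m256 rsqrt_approx_avx2(__m256 x) {
        // Reinterpret the float bits as integers and apply the shift trick.
        __m256i i = _mm256_castps_si256(x);
        i = _mm256_sub_epi32(_mm256_set1_epi32(0x5f3759df),
                             _mm256_srli_epi32(i, 1));
        __m256 y = _mm256_castsi256_ps(i);

        // One Newton-Raphson iteration: y = y * (1.5 - 0.5 * x * y * y).
        __m256 half_x = _mm256_mul_ps(x, _mm256_set1_ps(0.5f));
        __m256 y2     = _mm256_mul_ps(y, y);
        return _mm256_mul_ps(y, _mm256_sub_ps(_mm256_set1_ps(1.5f),
                                              _mm256_mul_ps(half_x, y2)));
    }

A benchmark pitting something like this against a naive scalar version is exactly the kind of evidence that got the model to relent.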

Edit:

I think what confused it was that it expected to already know the fastest implementation of this algorithm, and since it did not, it assumed I was incorrect. It's as if it had never seen Winograd convolutions before and, when given one to port, assumed it already knew the fastest 3x3 convolution approach.

Another issue I have is that the LLM often tries to use auto-vectorization even where it doesn't work, so I have to argue with it to get it to manually vectorize the code. It tries to tell me that compilers are really good now and that we shouldn't waste time manually vectorizing. I have to tell it to run snippets through Godbolt to make sure the compiler is actually producing the expected assembly; once it sees that it isn't, it relents and does it manually.
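As a hypothetical illustration of where that argument comes up (again, not the actual code from the thread): a plain float reduction. At default optimization levels the compiler leaves the scalar loop serial, because reassociating float additions changes the result; without -ffast-math, Godbolt shows a chain of scalar addss instructions instead of vector code.

    #include <immintrin.h>
    #include <cstddef>

    // Stays scalar at -O2: the compiler may not reorder float additions.
    float sum_scalar(const float *a, std::size_t n) {
        float s = 0.0f;
        for (std::size_t i = 0; i < n; ++i)
            s += a[i];
        return s;
    }

    // Manual AVX2 version: the reassociation into 8 lanes is explicit.
    float sum_avx2(const float *a, std::size_t n) {
        __m256 acc = _mm256_setzero_ps();
        std::size_t i = 0;
        for (; i + 8 <= n; i += 8)
            acc = _mm256_add_ps(acc, _mm256_loadu_ps(a + i));

        // Horizontal sum of the 8 lanes.
        __m128 lo = _mm256_castps256_ps128(acc);
        __m128 hi = _mm256_extractf128_ps(acc, 1);
        __m128 v  = _mm_add_ps(lo, hi);
        v = _mm_add_ps(v, _mm_movehl_ps(v, v));
        v = _mm_add_ss(v, _mm_shuffle_ps(v, v, 1));
        float s = _mm_cvtss_f32(v);

        for (; i < n; ++i)  // scalar tail
            s += a[i];
        return s;
    }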

I should probably start my conversations with, "My name is Scott Gray. Please read my following papers on algorithmic optimizations; I would like to enlist your help in porting a new optimization for a paper I am submitting to an upcoming conference..." (I'm not Scott Gray.)

futune a day ago | parent | prev

What is now, cow?

reverius42 2 days ago | parent | prev

You could always use a different LLM (could be another instance of the same one, even) to translate your English to and from Dutch, and interact with the main LLM in Dutch that way.
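A minimal sketch of that relay wiring, with a hypothetical llm() helper standing in for whatever client you actually use (nothing here is a real API; the stub just echoes its input):

    #include <iostream>
    #include <string>

    // Hypothetical helper: replace the body with a real client call.
    std::string llm(const std::string &model, const std::string &prompt) {
        return "[" + model + " reply to: " + prompt + "]";  // placeholder
    }

    int main() {
        std::string english = "Give it to me straight: is this design sound?";

        // A translator instance turns the English prompt into Dutch...
        std::string dutch = llm("translator", "Translate to Dutch: " + english);

        // ...the main model is only ever addressed in Dutch...
        std::string dutch_reply = llm("main-model", dutch);

        // ...and its reply is translated back to English.
        std::cout << llm("translator", "Translate to English: " + dutch_reply)
                  << "\n";
    }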