benob 9 hours ago

This is the worst lay-people explanation of an AI component I have seen in a long time. It doesn't even seem AI generated.

BenoitP 8 hours ago | parent | next [-]

It is AI generated. Or was written by someone a bit far from the technical advances IMHO. The Johnson-Lindenstrauss Lemma is a very specific and powerful concept, but the article's QJL explanation is vacuous. A knowledgeable human would not have left the reader wondering how it relates to the Lemma.
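For reference, the Lemma says a random projection into roughly O(log n / ε²) dimensions approximately preserves pairwise distances. A quick NumPy sketch (all dimensions picked arbitrarily by me, just to show the effect):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 50, 1000, 512                       # points, original dim, target dim
X = rng.standard_normal((n, d))               # random point cloud
P = rng.standard_normal((k, d)) / np.sqrt(k)  # scaled Gaussian JL projection
Y = X @ P.T

# Ratio of projected to original pairwise distance should hug 1.0
ratios = [np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
          for i in range(n) for j in range(i + 1, n)]
print(min(ratios), max(ratios))
```

The distortion shrinks as k grows, and crucially k depends on the number of points, not the original dimension — that's the part the article never connects to QJL.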

spencerflem 9 hours ago | parent | prev [-]

I think it is, though:

“ TurboQuant, QJL, and PolarQuant are more than just practical engineering solutions; they’re fundamental algorithmic contributions backed by strong theoretical proofs. These methods don't just work well in real-world applications; they are provably efficient and operate near theoretical lower bounds.”

NoahZuniga 5 hours ago | parent | next [-]

Genius new idea: replace the em-dashes with semicolons so it looks less like AI.

tux3 4 hours ago | parent | next [-]

You're absolutely right. That's not just a genius idea; it's a radical new paradigm.

Quarrel an hour ago | parent | prev [-]

Damnit.

There goes another bit of my writing style that will get mistaken for an LLM.

zarzavat 5 hours ago | parent | prev | next [-]

I read "this clever step" and immediately came to the comments to see if anyone picked up on it.

It reads like a pop science article while at the same time being way too technical to be a pop science article.

Turing test ain't dead yet.

TeMPOraL 2 hours ago | parent [-]

> Turing test ain't dead yet.

Only because people are lazy, and don't bother with a simple post-processing step: attach a bunch of documents or text snippets written by a human (whether yourself or, say, some respected but stylistically boring author), and ask the LLM to match style/tone.

integralid 8 hours ago | parent | prev | next [-]

I also instinctively reacted to that fragment, but at this point I think this is overreacting to a single expression. It's not just a normal thing to say in English, it's something people have been saying for a long time before LLMs existed.

nvme0n1p1 7 hours ago | parent | next [-]

There are tells all over the page:

> Redefining AI efficiency with extreme compression

"Redefine" is a favorite word of AI. Honestly no need to read further.

> the key-value cache, a high-speed "digital cheat sheet" that stores frequently used information under simple labels

No competent engineer would describe a cache as a "cheat sheet". Cheat sheets are static, but caches dynamically update during execution. Students don't rewrite their cheat sheets during the test, do they? LLMs love their inaccurate metaphors.
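To make the distinction concrete, here's a toy memoized function (my own example, nothing to do with the article) — the "cheat sheet" gets rewritten mid-test, which is exactly what a static cheat sheet doesn't do:

```python
cache = {}  # starts empty and is written to while the program runs

def fib(n):
    if n in cache:               # fast path: reuse a previously stored result
        return cache[n]
    result = n if n < 2 else fib(n - 1) + fib(n - 2)
    cache[n] = result            # the cache grows dynamically during execution
    return result

print(fib(30), len(cache))      # one call populates 31 entries (fib(0)..fib(30))
```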

> QJL: The zero-overhead, 1-bit trick

> It reduces each resulting vector number to a single sign bit (+1 or -1). This algorithm essentially creates a high-speed shorthand that requires zero memory overhead.

Why does it keep emphasizing zero overhead? Why is storing a single bit a "trick?" Either there's currently an epidemic of algorithms that use more than one bit to store a bit, or the AI is shoving in extra plausible-sounding words to pad things out. You decide which is more likely.
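For what it's worth, the actual QJL idea is easy to sketch: project the key with a random Gaussian matrix, keep only the sign bits plus one norm scalar, and rescale at query time. A rough NumPy illustration — dimensions and the estimator form are my approximation of the scheme, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 64, 4096                   # head dim and projection dim (illustrative)
S = rng.standard_normal((m, d))   # shared random JL projection

def qjl_encode(k):
    """Store only the sign of each projected coordinate (1 bit each) plus one norm."""
    return np.sign(S @ k), np.linalg.norm(k)

def qjl_inner_product(q, key_bits, key_norm):
    """Estimate <q, k> from a full-precision query and a key's sign bits.

    For Gaussian s, E[sign(<s, k>) * <s, q>] = sqrt(2/pi) * <q, k> / ||k||,
    so rescaling by sqrt(pi/2) * ||k|| / m gives an unbiased estimate.
    """
    return np.sqrt(np.pi / 2) * key_norm / m * (key_bits @ (S @ q))

k = rng.standard_normal(d)
q = rng.standard_normal(d)
bits, norm = qjl_encode(k)
print(qjl_inner_product(q, bits, norm), q @ k)
```

So the 1 bit per projected coordinate is the whole point, and "zero overhead" presumably refers to not storing per-block scales the way other quantizers do — but the article never says that.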

It's 1:30am and I can't sleep, and I still regret wasting my time on this slop.

TeMPOraL an hour ago | parent | next [-]

I say you're fixating on the wrong signal here. "Redefine" and "cheat sheet" are normal words people frequently use, and I see worse metaphors in human-written text routinely.

It's the structure and rhythm at the sentence and paragraph levels that's the current tell, as SOTA LLMs all seem to overuse clarification constructs like "it's not X, it's Y" and "it's X, an Y and a Z", and "it's X, it's essentially doing Y".

Thing is, I actually struggle to pin down what's so off-putting about these constructs, given that they're usually used correctly. So far, the best hypothesis I have for what makes AI text stand out is that LLM output is too good. Most text written by real humans (including my own) is shit; the best of us care about communicating clearly, and most people don't even do that. Nobody spends time refining style and rhythm unless they're writing a poem. You don't expect a blog post or a random Internet article (much less an HN comment) to be written in the same style as a NYT bestseller for a general audience - but LLMs do that naturally. They write better at the paragraph level than most people ever could, and that's what stands out as jarring.

> Either there's currently an epidemic of algorithms that use more than one bit to store a bit, or the AI is shoving in extra plausible-sounding words to pad things out. You decide which is more likely.

Or, those things matter to authors and possibly the audience. Which is reasonable, because LLMs made the world suddenly hit hard against global capacity constraints in compute, memory, and power; between that and edge devices/local use, everyone who pays attention is interested in LLM efficiency.

veunes 6 hours ago | parent | prev | next [-]

Looks like Google canned all their tech writers just to pivot the budget into H100s for training these very same writers

snovv_crash 5 hours ago | parent [-]

Capex vs. opex

roywiggins 2 hours ago | parent | prev | next [-]

"The X Trick" or "The Y Dilemma" or similar snowclones in a header is also a big AI thing. Humans use this construction too, but LLMs love it out of all proportion. I call it The Ludlum Delusion (since that's how every Robert Ludlum book is titled).

pqs 7 hours ago | parent | prev [-]

There is also the possibility that the article went through the hands of the company's communications department, whose writers probably write at LLM level.

awesomelvin 5 hours ago | parent [-]

[dead]

g-mork 5 hours ago | parent | prev [-]

Another instinctive reaction here. This specific formulation pops out of AI all the time; there might as well have been an em-dash in the title.

benob 9 hours ago | parent | prev [-]

Maybe they quantized the model parameters a bit too much...