Aurornis 3 hours ago

This article is excessively LLM written to the point that there’s barely any real information in between all of the unnecessary pie charts and repeated points.

This move is being executed too broadly, in my opinion, but the “bad labs” problem, especially in China, is widely known in the industry. If you spend any time in the electronics industry at smaller companies, you will encounter people who know Chinese labs that will give your product passing test results every time, as long as you’re not so far past the limits that it’s too obvious.

milleramp 2 hours ago | parent | next [-]

Yes, when qualifying new equipment, the source of failing emissions is often the power supply (or some other third-party device) that has already 'passed' testing.

chambertime an hour ago | parent [-]

Interesting, so like a power supply module that has already gotten full modular approval? Do you have an example link to one of those? Thanks

EnderWT an hour ago | parent | prev | next [-]

The search site they point to (https://markready.io/labs) also lists incorrect locations on the map for labs. BUREAU VERITAS CONSUMER PRODUCTS SERVICES, INC. shows on the map in Littleton, Colorado but is actually located in Littleton, Massachusetts.

chambertime an hour ago | parent [-]

Thanks for finding a data quality issue and reporting it.

mh- 3 hours ago | parent | prev | next [-]

Is that why the pie chart colors don't match the colors in the legend at all..?

chambertime 2 hours ago | parent | prev [-]

Thanks for catching that, fixed! The legend didn't match because I made a mistake when I tried to clean up the contrast in the pie chart.

doctorpangloss 2 hours ago | parent | prev | next [-]

i think it's better to flag the article and move on. the author's replies in the thread are also LLM authored. and nothing in the article makes any sense.

chambertime 2 hours ago | parent [-]

[flagged]

FrustratedMonky 3 hours ago | parent | prev | next [-]

I've noticed a trend of calling every single article "This is AI or LLM, I can't stand it".

And really, you can't tell. Nobody can tell. Humans write badly and blandly also. It's just a trope at this point.

No, your comment is an LLM.

Night_Thastus 3 hours ago | parent | next [-]

LLMs often have a distinct writing style. It's not guaranteed; you can get false positives and false negatives, but if you start paying attention it becomes obvious in many cases.

chambertime 3 hours ago | parent | next [-]

My poor Reddit has been taken over by bots :(

2ndorderthought 2 hours ago | parent [-]

Reddit is extra cooked soon.

bombcar an hour ago | parent [-]

I’m going to assume everyone using “cooked” comes from uThermal and you won’t convince me otherwise.

CamperBob2 2 hours ago | parent | prev | next [-]

That obviously won't be true for much longer, assuming it's still true now, which I doubt. If you're an LLM content farmer, how hard could it possibly be to LoRA your way out of generating clichés like em dashes, 'You're absolutely right!' and 'It's not A, but B' rhetoric?

We should probably go ahead and get over it.

FrustratedMonky 37 minutes ago | parent | prev | next [-]

I guess my point was lost.

It is obvious, when it is obvious. When it is not, you don't know it.

There are a ton more false positives now. Everyone is calling everything 'LLM Slop'.

Because there is a lot of slop. Now every bad human writer is being called an AI just for being human.

And that obscures the fact that a ton of stuff is LLM-written and nobody can tell.

People that say they can tell the difference are fooling themselves.

gchamonlive 14 minutes ago | parent [-]

Don't beat yourself up over it. It's the new sport for HN upvote farmers to default to calling out any TLDR post that got "delve" in it, or some other cliché, as LLM Slop. I also think it's a waste of time. What's important is the content. Is the content of the article valuable? No? Just close it and move on. But we know the incentive of a few upvotes is just too good to pass up...

FrustratedMonky 3 hours ago | parent | prev [-]

Yes, if you are using a generic LLM.

But you can tell it to use different styles: to be formal or informal, to insert colloquialisms or to remove them.

People are depending on their own 'gut-sense' a lot, and not realizing they are really not correct.

If you think all it takes is paying attention, then you are missing it. It's both more widely used than assumed, and also now obscuring what is non-AI.

zahlman 2 hours ago | parent [-]

> But you can tell it to use different styles: to be formal or informal, to insert colloquialisms or to remove them.

And when you get it right, the result doesn't get called AI generated.

> People are depending on their own 'gut-sense' a lot, and not realizing they are really not correct.

TFA is very obvious about it.

A human who writes like this should be ashamed to do so, and should endeavour to understand why the writing comes across as "generic LLM"-like and fix it.

We have reached a point where people can end up training their writing on generic LLM output. This is a bad thing, because it's bad output.

Even beyond any clues from writing style, the general presentation is bad. It presents far too many facts and figures without giving anyone a good reason to care about most of them. And then it ends with a section on a separate topic (how to choose a lab, rather than how they're distributed across the world).

Most importantly, though, the submission is presented with a different title that implies a different purpose to the article that is not elaborated in the article. I would have expected personal insight a) on why people should care about the FCC's action (there is no mention of that action at all); b) on what the process was like of collecting this data. And I would have expected, you know, mapping of the lab locations rather than bar charts giving geographic breakdowns.

chownie 3 hours ago | parent | prev | next [-]

This article goes ham on the rule of threes, it does the "not just x, but y" cliché, em-dash with spaces on either side, bold heading-sentence paragraphs; it visibly has the hallmarks of AI-driven writing.

If you personally can't tell then just say that rather than casting aspersions on everyone else by claiming they can't.

godelski 3 hours ago | parent | prev | next [-]

Fun fact, the author admits to using a LLM.

https://news.ycombinator.com/item?id=47963465

FrustratedMonky 30 minutes ago | parent [-]

Not the first time this has happened.

Half the articles are now LLMs.

If he didn't admit it, we'd be arguing over 'style', which itself can be configured.

Prompt> LLM don't use em-dash.

LLM> OK.

alnwlsn 2 hours ago | parent | prev | next [-]

No human* would waste the time to write a piece that is both highly polished and so long that any useful information is spread so thinly it is essentially empty. This is how people "can tell" if it is written by AI.

Not a dig at this author by the way or saying it applies to this post, just in general.

*or if they did anyway, the result is the same: bad writing.

JumpCrisscross 2 hours ago | parent | next [-]

> a piece that is both long and highly polished while being devoid of useful information

Idk, I learned a little bit about our regulatory system, that a lot of these labs are in China and that those are now banned (and that the ones in India may be next).

The style is admittedly annoying. But I'm glad the author put in the work to highlight something they, and now I through them, found interesting.

CamperBob2 2 hours ago | parent | prev [-]

> No human would waste the time to write a piece that is both highly polished while being so long that any useful information is spread so thinly it is essentially empty.

LOL, some of us spent 12 years in public schools refining this very art to perfection.

ramon156 2 hours ago | parent | prev | next [-]

Haha, good one Altman!

RobRivera 3 hours ago | parent | prev [-]

I wake up, there is another psyop, I go to sleep

chambertime 3 hours ago | parent | prev [-]

Thanks for that insight on how the Chinese labs are perceived amongst hardware engineers.

I know pie charts are divisive. I thought they were visually helpful in this instance.

JumpCrisscross 3 hours ago | parent | next [-]

> thought they were visually helpful in this instance

If you're the author, can you comment on whether you used AI to write this? (Specifically, the text.)

Where it might be suffering is in its presentation of a list of facts unorganised around any thesis. It took me until your China Question section to see the meat of your piece.

If I had to suggest some edits, they would be making everything above that section more concise (by reducing the number of charts and/or moving them to footnotes) and adding a summarising subtitle.

There are also jargon jumps, e.g. from TFAB to TCB. (I initially assumed the FAA was a TCB, the latter being a generic international term.) This compounds the lack of conciseness presented by the accreditation-body breakdown and TCBs vs. test-only labs sections. If those sections were moved after your thesis section, you could dive into whether China's labs differ from the U.S. labs in those respects.

chambertime 3 hours ago | parent [-]

The content of the site is, as stated in my first comment and in the article itself, a nice-looking wrapper on top of, in essence, an LLM wiki that I put together with the help of Claude on the hardware certification universe. While I was building this data set out, I uncovered that the FCC had this vote today, so I thought it would be a good thing to share, since it's timely and because I had just collected all of the relevant information to help someone figure out how this impacts their hardware certification process. (I used voice transcription to write this comment.)

I very much appreciate your feedback. As I look at the article now, I totally see what you're saying. I should have led off with what was going on with the vote today, since that's what I referenced in the title of the post on here.

skeeter2020 3 hours ago | parent | prev [-]

the headline hints that there's some sort of non-obvious factor that's going to be revealed. I scrolled past countless redundant and information-sparse graph-like figures and never found it; moved on.

If you've only got a paragraph worth of information to share, say it and let us get on with our lives.

chambertime 3 hours ago | parent [-]

Well, I couldn't find any other thorough dataset on this topic, so in that sense this is non-obvious, since it took weeks to assemble the information. And it was fun doing it using the LLM Wiki technique.

JumpCrisscross 3 hours ago | parent [-]

> it was fun doing it using the LLM Wiki technique

What is that?

rnxrx 3 hours ago | parent | next [-]

The general idea is to have the LLM maintain longer-term context/background by storing it in a format/structure that's akin to a standard Wiki. The result is (hopefully) a series of human-readable and editable documents that's developed and maintained by the agent.

There's great coverage of it at https://gist.github.com/karpathy/442a6bf555914893e9891c11519...

It's actually also now a base capability in the Hermes agent and has been really helpful for me, at least.

chambertime 3 hours ago | parent | prev [-]

https://gist.github.com/karpathy/442a6bf555914893e9891c11519...

JumpCrisscross 3 hours ago | parent [-]

Could you share the tools you used to do this? I'd love to organise my esoteric research side quests like this.

chambertime 2 hours ago | parent [-]

Git + Claude Code in yolo mode. In the first prompt, I passed it Karpathy's gist and had it put together a high-level plan of all of the sections that needed to be written to complete a vision I provided: essentially, put together a complete wiki on everything for getting global hardware certification.

I then had it loop once an hour. It would pick the next wiki to write, research it, gather raw sources, and then synthesize the wiki for me and push. I could nudge it in between hours if I wanted.
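That hourly loop can be sketched as a small shell driver. This is an illustrative sketch only, not the author's actual setup: the prompt wording and the `PLAN.md` file are hypothetical, while `claude -p` (non-interactive print mode) and `--dangerously-skip-permissions` ("yolo mode") are real Claude Code CLI flags.

```shell
#!/usr/bin/env bash
# Hypothetical driver for the hourly LLM-wiki loop described above.
# Assumes the `claude` CLI is on PATH and the repo contains a PLAN.md
# (hypothetical name) listing the wiki pages still to be written.
set -euo pipefail

PROMPT='Pick the next unwritten page from PLAN.md, research it,
gather raw sources, synthesize the wiki page, then git add, commit, and push.'

while true; do
  # -p runs one non-interactive turn; --dangerously-skip-permissions
  # is "yolo mode": the agent may edit files and run git without asking.
  claude -p "$PROMPT" --dangerously-skip-permissions
  sleep 3600  # wake once an hour, as described; nudge manually in between
done
```

Each iteration is one self-contained agent turn, so a crash or bad page only costs that hour's run rather than the whole session's context.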

JumpCrisscross 2 hours ago | parent [-]

Do you add this as a skill or permanent prompt, making it always maintain wikis in the background? Or do you direct it to make these only when you're in the project?

chambertime 2 hours ago | parent [-]

There are skills if you want. I didn't want to do that, since I don't feel like I need the smarts to work on the LLM wiki in every coding session. I like to keep my context clean and scoped to what I'm working on.

I am running this in a long running session that spawns a subagent once an hour. So the context of the main session doesn't get out of control.