furyofantares · 2 days ago
I'm sorry, but is producing 10 lists of 15 items each really 50 people's worth of work? The amount of effort this would take today without a chatbot seems overblown. Or with a chatbot but fact-checked, for that matter.

Edit: I'm not questioning how much work it was in the days of print. I think it's false to paint it as if AI has much to do with the transition from high-effort lists to low-effort ones. It didn't go from 50 brains to 1 overnight; these lists have become easier to produce and far less valuable over the past few decades, and I suspect the number of people involved had dwindled a lot before anyone used a chatbot to do it.
|
throwway120385 · 2 days ago
In the days of print media, before you could google "top 10 X," the newspaper might well have been your only source of "listicles," and they took that responsibility seriously. Top-10 lists are garbage nowadays because the format is used to flood search engines with Amazon affiliate links for things like fartely brand leggings.

furyofantares · 2 days ago
Right, which is why I asked if it's that much work _today_. I don't think it went from passing through 50 brains to passing through 1 overnight because chatbots exist; it got there gradually as these lists became both easier to produce and less valuable.

AnthonyMouse · 2 days ago
High-quality lists didn't become easier to produce; search engines were created that were unable to distinguish between high-quality lists and low-quality ones, which created a market for lemons.

dentemple · 2 days ago
It was 3 people who were replaced in the making of one list. The number 50 was what Doctorow presumed to be the entire department that could potentially be replaced by AI, and making this list was only one of that department's tasks. At 3 interns per article, having 30 interns working on 10 simultaneous articles at any given time seems like reasonable output for an online zine.
|
rtkwe · 2 days ago
That wouldn't have been their full-time jobs, but that list would have passed through quite a few people's hands: gathering all the events/books/activities, picking out some that seem fun for several different 'types' of people, compiling, writing, reviewing/editing, fact-checking, and formatting.
|
cxr · a day ago
How many books should a person read before being able to pick 15 to recommend, and how long do you think it should take to read them?
|
smithkl42 · 2 days ago
I agree with Doctorow about the dehumanizing nature of this sort of work, but on your specific point about fact-checking, it would honestly be fastest to outsource that to a different LLM, maybe ChatGPT in "deep research" mode or something like that.

WolfeReader · 2 days ago
"Here's a technology which is known to be confidently wrong pretty frequently. I'm going to use it to fact-check things."

smallnix · 2 days ago
Querying an LLM for 'facts' is dangerous. Using an IR technique and incorporating LLMs to gauge relevancy and semantic alignment is a viable approach.
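
For what it's worth, here is a minimal sketch of that retrieve-then-judge pattern. The `search` and `llm` callables are hypothetical placeholders for whatever IR backend and model API you plug in; the point is that evidence comes from retrieval, and the LLM only judges alignment rather than recalling facts from its own weights.

    from typing import Callable

    def check_claim(
        claim: str,
        search: Callable[[str], list[str]],  # hypothetical IR backend (search API, index, ...)
        llm: Callable[[str], str],           # hypothetical LLM completion call
        top_k: int = 5,
    ) -> str:
        """Fact-check a claim without trusting the LLM's parametric memory."""
        passages = search(claim)[:top_k]
        if not passages:
            # No evidence found: report that, rather than falling back to LLM recall.
            return "unverifiable"

        verdicts = []
        for passage in passages:
            prompt = (
                "Judge only from the passage below; do not use outside knowledge.\n"
                f"Claim: {claim}\n"
                f"Passage: {passage}\n"
                "Answer with exactly one word: SUPPORTS, CONTRADICTS, or IRRELEVANT."
            )
            verdicts.append(llm(prompt).strip().upper())

        if "CONTRADICTS" in verdicts:
            return "contradicted"
        if "SUPPORTS" in verdicts:
            return "supported"
        return "unverifiable"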

sleepybrett · 2 days ago
Pay all the other 'AI's to crowdsource... or maybe cloudsource... a truth boolean. Then, when they all ingest each other's answers slowly over time, all the answers become the same, whether truthful or not.

lupusreal · 2 days ago
If you have tokens to burn, using new sessions to critique the work produced in other sessions greatly improves reliability. Asking the same question multiple different ways, and to more than one LLM, also helps a lot.

pipo234 · 2 days ago
That approach may be a viable heuristic, but it will only get you so far. It's like flagging an opinion because it doesn't rhyme with the opinions of others. That's not what humans do when they are fact-finding, though. It's not what a (proper) scientist would do upon discovering a great insight or theory and wondering whether it was true.

lupusreal · 2 days ago
If you're turning to LLMs for great insight or theory, you're definitely doing it wrong. These tools are for well-trod terrain, and when they're really just making shit up, it's almost always different shit each time. So yes, it's a heuristic, but for dealing with the stochastic weirdness these models sometimes spit out, it works pretty damn well.
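
A sketch of that cross-session consensus check, under the assumption stated above: confabulations vary run to run, while well-trodden facts converge. The `models` list and the rephrasings are hypothetical stand-ins for fresh sessions across one or more LLM APIs.

    import collections
    from typing import Callable

    def cross_check(
        rephrasings: list[str],              # the same question asked multiple different ways
        models: list[Callable[[str], str]],  # hypothetical one-shot sessions, possibly different LLMs
        agreement: float = 0.8,
    ) -> tuple[str | None, float]:
        """Accept an answer only if independent sessions converge on it."""
        answers = [
            model(prompt).strip().lower()
            for model in models
            for prompt in rephrasings
        ]
        top_answer, count = collections.Counter(answers).most_common(1)[0]
        share = count / len(answers)
        if share >= agreement:
            return top_answer, share  # convergent: likely well-trod terrain
        return None, share            # divergent: treat as unreliable, check by hand

Normalizing free-text answers so they can be compared is the hard part in practice; the majority vote is only straightforward for yes/no or multiple-choice checks.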
|
CamperBob2 · 2 days ago
It's a powerful tool that can be misused by the incompetent, like most other powerful tools.

CamperBob2 · 2 days ago
The other thing is that the missing books can be created on demand and published in a couple of hours, using the same tech that mistakenly added them to the list in the first place. They'll suck, of course, but so do most of the books on any given "Summer Reading Guide."

throwway120385 · 2 days ago
So it's a sort of "reverse memory hole," where things that never existed contemporaneously with the list can now be given life in order to defeat fact-checking.

CamperBob2 · 2 days ago
Which just reminds us that the committee who awarded the Nobel Prize to Kissinger and Arafat is the same one that overlooked Borges and Eco.

addaon · 2 days ago
No. The Nobel Peace Prize is awarded by the Norwegian Nobel Committee, while the Literature prize is awarded by the Swedish Academy (and the other prizes defined by Nobel's will by their own separate institutions).