halJordan 2 days ago

LLMs are already remarkably good at contextualizing and explaining things. HN is so allergic to AI, it's incredible. And it's leaving you behind.

bc569a80a344f9c 2 days ago | parent | next [-]

They are. I use LLMs. But they need to be given context, which is easy for things that are already on the Internet for them to pull from. When people stop writing news articles that connect events to one another, LLMs will have nothing to pull into their context. They are not capable of connecting two random sources on their own.

Edit: also, the primary point is that if everyone uses LLMs for reporting, the loss of revenue will kill off the investigative journalism that revenue funds, which LLMs sure as fuck aren't going to do.

ragequittah 2 days ago | parent [-]

Is this article investigative? Summarizing the court case PDF is trivial for an LLM, and most will probably do a better job than the linked article. The main difference is that you won't be bombarded with ads and other nonsense (at least for now). Hell, I wouldn't be surprised if the reporter had an LLM summarize the case before writing the article.
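To be concrete, a summary like that is a few lines of glue code. A rough sketch in Python, assuming the pypdf and openai packages and a hypothetical ruling.pdf standing in for the actual case file:

    # Sketch: summarize a court ruling PDF with an LLM.
    # Assumes the pypdf and openai packages; "ruling.pdf" is a stand-in.
    from openai import OpenAI
    from pypdf import PdfReader

    # Pull the plain text out of the PDF.
    reader = PdfReader("ruling.pdf")
    text = "\n".join(page.extract_text() or "" for page in reader.pages)

    # Hand the full text to the model as context and ask for a summary.
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Summarize this court ruling for a general reader."},
            {"role": "user", "content": text},
        ],
    )
    print(resp.choices[0].message.content)

(A very long ruling might need chunking to fit the context window, but the point stands.)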

Content that can't be easily made by an LLM will still be worth something. But go to most news sites and their content is mostly summarization of someone else's content. LLMs may make that a hard sell.

bongodongobob 2 days ago | parent | prev | next [-]

I think it's a mix of shortsightedness and straight up denial. A lot of people on here were the smart nerdy kid. They are good at programming or electronics or whatever. It became their identity and they are fuckin scared that the one thing they can do well will be taken away rather than putting the new tool in their toolbox.

grues-dinner 2 days ago | parent | prev | next [-]

The problem I may have with using an LLM for this is that I am not already familiar with the subject in detail and won't know when the thing has:

* Strayed from reality

* Strayed from the document and is freely admixing with other information from its training data without saying so. Done properly, this is a powerful tool for synthesis, and LLMs theoretically are great at it, but done improperly it just muddles things

* Has some kind of bias baked in: "in summary, this ruling is an example of judicial overreach by activist judges against a tech company which should morally be allowed to do what it wants". Not such a problem now, but I think we may see more of this once AI is firmly embedded into every information flow. Currently the AI companies' game is training people to trust the machine. Once they do, what a resource those people become!

Now, none of those points are unique to LLMs: inaccuracy, misunderstanding, wrong or confused synthesis and especially bias are all common in human journalism. Gell-Mann amnesia and institutional bias and all that.

Perhaps the problem is that I'm not sufficiently mistrustful of the status quo, even though I am already quite suspicious of journalistic analysis. Or maybe it's that AI, even as my brain screams "don't trust it, check everything, find the source", stays in the toolbox when I find problems, whereas when a journalist does the same, I roll my eyes, call them a hack, and leave the website.

Not that it's directly relevant to the immediate utility of AI today, but once AI is everything, or almost everything, then my next worry is what happens when you functionally only have published primary material and AI output to train on. Even without model collapse, what happens when AI journobots inherently don't "pick up the phone", so to speak, to dig up details? For the first year, the media runs almost for free. For the second year, there's no higher level synthesis for the past year to lean on and it all regresses to summarising press releases. Again, there are already many human publications that just repackage PRs, but when that's all there is? This problem isn't limited to journalism, but it's a good example.
