spyspy 4 days ago

It's not buried anywhere, it's literally the next paragraph after the lede.

> These acres are not necessarily owned by large conglomerates and investment firms. Corporate structures are also attractive vehicles for family businesses because they offer tax benefits and externalize losses.

mbreese 4 days ago | parent [-]

So it is... Chalk that up to my science reading skills -- I skimmed the text and skipped quickly to the charts, figures, and legends...

ASalazarMX 4 days ago | parent [-]

This is a use case where I think a current LLM shines. Ask it to summarize the important points of n papers, and slow read only the ones that pique your interest. It won't be perfect, but it will save you a ton of time while letting you focus on the things that need more attention.

freeopinion 4 days ago | parent [-]

I'm not anti-LLM even if the following statement sounds like it.

I don't trust LLMs, even to summarize for me. I have to fact-check every single statement. For instance, if I ask ChatGPT, "Is PLA more dense than ABS?" it answers, "No, PLA is not more dense than ABS." Those are direct quotes. In the third paragraph, ChatGPT says, "So technically, PLA is denser than ABS, not less — I misspoke earlier."
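For what it's worth, the answer I had to verify myself: typical published densities (which vary by filament brand and additives) are about 1.24 g/cm³ for PLA and 1.04 g/cm³ for ABS, so PLA is indeed the denser material. A trivial sanity check with those assumed numbers:

    #include <iostream>

    int main() {
        // Typical datasheet densities in g/cm^3 (assumed values; real
        // filaments vary by brand and additives).
        const double pla_density = 1.24;
        const double abs_density = 1.04;

        std::cout << "PLA denser than ABS? "
                  << (pla_density > abs_density ? "yes" : "no") << '\n';
    }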

I find LLMs good for using words that I didn't think of. I can then reword a search to get better search results.

To be fair, the cherry-picked example above sounds a lot like a human. Humans make such mistakes and corrections. If a human had given me that response, I would shrug and ask more questions. But that human wouldn't be my go-to source.

It makes me shudder to think about code that is written in such a manner.

dylan604 4 days ago | parent | next [-]

A human who makes a mistake, catches it, corrects it, and apologizes is someone I'd much rather interact with than one who makes a mistake, catches it, and doubles down on it, leaving me to discover the issue later. That second person would not be my go-to source.

Mistakes happen. If they are honest mistakes, then we can deal with them. If they are deliberate mistakes, well, we can deal with that too, but in a different manner. The problem I have is an answer phrased confidently when it's really a hedge to avoid sounding unsure. People apparently have issues with an unsure bot. I'd much rather get a response like "I'm not positive, but I think PLA is less dense than ABS," which leaves wiggle room to come back later with "So technically, PLA is more dense than ABS." Even if the bot never figures it out, phrasing it that way clues you in on what to fact-check.

ASalazarMX 3 days ago | parent | prev | next [-]

It might surprise you, but copy-paste coders have always existed; LLMs, and "vibe coding" in particular, are just making them more obvious. I don't use AI code assistants, but I can see why they could be valuable autocompleters in contexts where the user is knowledgeable.

It also makes me shudder to think of people who know just enough to compile and fix glaring mistakes, but largely trust the overall AI output. As if commercial software quality hasn't gone downhill enough these days.

HDThoreaun 3 days ago | parent | prev | next [-]

I am researching LLMs as a summarizing tool for medical research, and early results show that LLMs do a better job than the current status quo of volunteer doctors. A preprint is hopefully coming soon; until then you'll just have to trust me that they are really good at summarizing and critiquing research.

tialaramex 4 days ago | parent | prev [-]

> It makes me shudder to think about code that is written in such a manner.

Often it has the property that was good enough for generations of C and C++ programmers: it compiles. Does it work? Eh. Do the tests, if there even are tests, check anything useful? Eh.

The focus on "it doesn't matter so long as it compiles" justifies everything up to IFNDR†, the explicit choice in C++ that if what you've written is nonsense but it would not be easy to modify the compiler to notice, just don't worry about it and say it's somebody else's problem.

† "Ill-formed, No Diagnostic Required" these words or near equivalent occur frequently in the ISO definition of the language.