| ▲ | nateroling 5 hours ago |
| Seeing the Gemini 3 capabilities, I can imagine a near future where file formats are effectively irrelevant. |
|
| ▲ | qart 3 hours ago | parent | next [-] |
| I have family members with health conditions that require periodic monitoring. For some tests, a phlebotomist comes home. For some tests, we go to a hospital. For some other tests, we go to a specialized testing center. They all give us PDFs in their own formats. I manually enter the data into my spreadsheet, for easy tracking. I use LLMs for some extraction, but they still miss a lot. At least for the foreseeable future, no LLM will ever guarantee that all the data has been extracted correctly. By "guarantee", I mean someone's life may depend on it. For now, doctors take up the responsibility of ensuring the data is correct and complete. But not having to deal with PDFs would make at least a part of their job (and our shared responsibilities) easier. |
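The completeness gap described above can be made concrete with a minimal sketch: a stdlib-only Python check that flags any expected field missing from an LLM extraction, so a human knows exactly what to re-verify. The analyte names and the extracted dict here are hypothetical examples, not the commenter's actual data.

```python
# Completeness check for LLM-extracted lab results (illustrative sketch;
# the analyte names below are hypothetical examples).

EXPECTED_ANALYTES = {"hemoglobin", "glucose", "creatinine", "tsh"}

def missing_analytes(extracted: dict) -> set:
    """Return the expected analytes absent from an extraction.

    An empty result does NOT prove the values are correct -- only that
    every expected field is present. Value-level checks (units, plausible
    ranges) and human review are still required.
    """
    return EXPECTED_ANALYTES - {k.lower() for k in extracted}

# Example: an extraction that silently dropped one test
llm_output = {"Hemoglobin": "13.2 g/dL", "Glucose": "98 mg/dL", "TSH": "2.1 mIU/L"}
print(missing_analytes(llm_output))  # -> {'creatinine'}
```

A check like this catches omissions, which is the failure mode the commenter describes; it cannot catch a value the model transcribed incorrectly, which is why human review stays in the loop.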
|
| ▲ | s0rce 4 hours ago | parent | prev | next [-] |
| Can you elaborate? Are you never reading papers directly but only using Gemini to reformat or combine/summarize? |
| ▲ | nateroling 3 hours ago | parent [-] |
| I mean that when a computer can visually understand a document and reformat and reinterpret it in any imaginable way, who cares how it’s stored? When a png or a pdf or a markdown doc can all be read and reinterpreted into an infographic or a database or an audiobook or an interactive infographic, the original format won’t matter. |
|
|
| ▲ | DANmode 5 hours ago | parent | prev | next [-] |
| Files. Truth in general, if we aren't careful. |
|
| ▲ | sansseriff 4 hours ago | parent | prev [-] |
| Seriously. More people need to wake up to this. Older generations can keep arguing over display formats if they want. Meanwhile younger undergrad and grad students are getting more and more accustomed to LLMs forming the front end for any knowledge they consume. Why would research papers be any different? |
| ▲ | JadeNB 3 hours ago | parent [-] |
| > Meanwhile younger undergrad and grad students are getting more and more accustomed to LLMs forming the front end for any knowledge they consume. |
| Well, that's terrifying. I mean, I knew it about undergrads, but I sure hoped people going into grad school would be aware of the dangers of making your main contact with research, where subtle details are important, through a known-distorting filter. (I mean, I'd still be kinda terrified if you said that grad students first encounter papers through LLMs. But if it is the front end for all knowledge they consume? Absolutely dystopian.) |
| ▲ | sansseriff 2 hours ago | parent [-] |
| I admit it has dystopian elements. It’s worth deciding what specifically is scary, though. The potential fallibility or mistakes of the models? Check back in a few months. The fact they’re run by giant corps which will steal and train on your data? Then run local models. Their potential to incorporate bias or persuade via misalignment with the reader’s goals? Trickier to resolve, but various labs and nonprofits are working on it. In some ways I’m scared too. But that’s the way things are going, because younger people far prefer the interface of chat and question answering to flipping through a textbook. Even if AI makes more mistakes or is more misaligned with the reader’s intentions than a random human reviewer (which is debatable in certain fields since the latest models came out), the behavior of young people requires us to improve the reputability of these systems. (Make sure they use citations, make sure they don’t hallucinate, etc.) I think the technology is so much more user friendly that fixing the engineering bugs will be easier than forcing new generations to use the older systems. |
|
|