| ▲ | Unverified: What Practitioners Post About OCR, Agents, and Tables(idp-software.com) |
| 20 points by chelm 10 hours ago | 10 comments |
| |
|
| ▲ | bonsai_spool 5 hours ago | parent | next [-] |
Please write in your own words! I'm not inclined to read something that consists of what you copied and pasted from Claude
| |
| ▲ | obsidianbases1 3 hours ago | parent | next [-] | | Interesting complaint, because many people might not share any of their ideas if it weren't for LLMs making it easy. Not everyone has the incentive to dedicate a day to producing writing worth publishing. But maybe they would if it took significantly less time. Even considering HN's no-LLMs-for-comments rule, which I mostly agree with, I think we would all lose if the same rule were applied to publishing in general. | | |
| ▲ | curtisf 3 hours ago | parent [-] | | "I would rather read the prompt" https://claytonwramsey.com/blog/prompt/ discussion: https://news.ycombinator.com/item?id=43888803 All of the output beyond the prompt contains, definitionally, essentially no useful information. Unless it's being used to translate from one human language to another, you're wasting your reader's time and energy in exchange for your own. If you have useful ideas, share them, and if you believe in the age of LLMs, be less afraid of them being unpolished and simply ask your readers to rely on their preferred tools to piece through it. | | |
| ▲ | x1798DE 15 minutes ago | parent [-] | | I have also found that LLMs do not help me communicate my ideas in any way, because the bottleneck is getting the ideas out of my head and into the prompt in the first place, but I disagree with the idea that the output beyond the prompt contains no useful information. In the article you linked, the output he is complaining about probably had a prompt like this: "What are the downsides of using Euler angles for rotation representation in robotics? Please provide a bulleted list and suggest alternatives." The LLM expanded on it based on its knowledge of the domain, a search tool, or both. Charitably, the student looked it over, thought through the information, decided it was good (or tweaked it around the edges), and then sent it over - though in practice they probably just assumed it was correct and didn't check it. For writing an essay like "I would rather read the prompt," LLMs don't seem like they would speed up the process much, but for something that involves synthesizing or summarizing information, LLMs can definitely generate a useful essay (though, at least at the moment, the default system prompts generate something distinctively bland and awful). | | |
| ▲ | chelm 2 minutes ago | parent [-] | | Pretty balanced take. I think if a human gains information or saves time, it's still worthwhile. To be clear, I don't publish that kind of clickbait; that's AI slop. |
|
|
| |
| ▲ | chelm 12 minutes ago | parent | prev [-] | | Did you read the article? |
|
|
| ▲ | ChrisKnott 2 hours ago | parent | prev | next [-] |
Is there a SOTA OCR model that prioritises failing in a debuggable way? What I want is output that records which sections of the image contributed to each word/letter, preferably with per-word confidence levels and user-correctable identification information. I should be able to build a UI that says: no, this section is red-on-green vertically aligned Cyrillic characters; try again.
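The output ChrisKnott describes can be sketched as a data structure: each recognized word carries its source bounding box and a per-word confidence, so a review UI can flag and re-run the low-confidence regions. This is a minimal hypothetical sketch (the `OcrWord` type and threshold are illustrative, not any specific model's API):

```python
from dataclasses import dataclass

@dataclass
class OcrWord:
    text: str
    conf: float   # per-word confidence in [0.0, 1.0]
    bbox: tuple   # (left, top, width, height) in source-image pixels

def low_confidence_words(words, threshold=0.6):
    """Return words a correction UI should flag for human review."""
    return [w for w in words if w.conf < threshold]

words = [
    OcrWord("Invoice", 0.98, (12, 8, 110, 24)),
    OcrWord("Тotal", 0.41, (12, 40, 70, 24)),  # leading Cyrillic 'Т': low conf
]
print([w.text for w in low_confidence_words(words)])  # ['Тotal']
```

In practice, some classic OCR engines (e.g. Tesseract via `pytesseract.image_to_data`) already emit word-level boxes and confidences in roughly this shape; the harder, still-open part of the request is feeding a human correction ("this region is vertical Cyrillic") back into a re-recognition pass.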
|
| ▲ | quinndupont 2 hours ago | parent | prev | next [-] |
| Very helpful analysis that confirms everything I’ve encountered. OCR remains a thorny issue. The author talks about professional workflows struggling with tables and such, but I’ve found it challenging to get clean copies of long documents (books). The hybrid workflow (layout then OCR) sounds promising. |
|
| ▲ | bobajeff 2 hours ago | parent | prev | next [-] |
It's very surprising to me that state-of-the-art tools for data entry and digitization still require a lot of supervision. From the article, it's not that surprising that handwritten documents are harder for old-school OCR or AI, since they can be hard even for humans in some cases. But tables and varied layouts seem like low-hanging fruit for vision models.
|
| ▲ | jgalt212 3 hours ago | parent | prev [-] |
| > The Demo Works. Production Does Not. Truer words have never been spoken. LLMs make mind-blowing demos, but real-world performance is much weaker (though still useful). An example from yesterday: I asked Google / Nano Banana to repaint my house with a few options. It gave a nice write-up of three themes and, for each theme, a nice rendering of a 1/3 vertical slice in one image. Then I asked it to redraw the image entirely in one of the themes. It redrew the image 1/3 in the theme I asked for and 2/3 in a theme I did not ask for. Further prompting did not fix it. At the end of the day this was a useful exercise, and I got some sense of which color scheme would work better for my house, but the level of execution was miles away from the perfection portrayed in demos and by hypester / huckster bloggers and VCs. |
| > The Demo Works. Production Does Not. Truer words have never been spoken. LLMs make mind blowing demos, but real-world performance is much less (but still useful). An example from yesterday: I asked Google / Nano Banana to repaint my house with a few options. It gave a nice write up on three themes and a nice rendering of 1/3 vertical slices in one image of each theme. Then, I asked it to redraw the image entirely in one of the themes. It redrew the image 1/3 in the one theme I asked for and 2/3 in a theme I did not ask for. Further prompting did not fix it. At the end of the day, this was a useful exercise and I was able to get some sense of what color scheme would work better for my house, but the level of execution was miles away from the perfection portrayed in demos and hypester / huckster bloggers and VCs. |