fakedang | 4 days ago
Honestly, this is a very nitpicky argument. The issue for site contractors is not manually checking each entry for correctness; it's writing the stuff down in the first place. I'm exploring a similar but unrelated use case for generative AI, and in discovery interviews what I learnt was that site contractors and engineers do not request or expect 100% accuracy, and leave adequate room for doubt.

For them, the pain is the hours and hours of manually writing a TON of paperwork, which in some industries amounts to months and months of work produced by some of the poorest communicators on the planet. Because these tasks consume so much time, they forgo correct methodology, and some even fill reports with random bullshit just so the project moves forward. In most cases this writing is done for liability concerns, as mentioned above, rather than for the purposes of someone actually reading it. If the writing part were handled for many of these guys, most wouldn't have a problem with the reading and correcting part.
bambax | 4 days ago
It's unclear how filling reports with "random bullshit" protects anyone from liability... It seems you're saying the current situation is so bad that anything different would be an improvement, and that less-random bs is better than outright bs.

I'm sorry if my comment came across as nitpicky; it's just that every time I try to do actual work with LLMs (anything that isn't pure creativity, where hallucination is a feature), they never follow prompts exactly and quickly go off the rails. In the context of construction work, that sounded dangerous. But happy to be proved wrong.
| ||||||||||||||
arvindveluvali | 4 days ago
Totally agree! That's what we've observed as well.