| ▲ | tyushk 9 hours ago |
| > A BBC journalist ran the image through an AI chatbot which identified key spots that may have been manipulated.
| The image is likely AI generated in this case, but this does not seem like the best strategy for finding out if an image is AI generated.
|
| ▲ | 1659447091 9 hours ago | parent | next [-] |
| Under the other photos it says "A photo taken by a BBC North West Tonight reporter showed the bridge is undamaged" and "A BBC North West reporter visited the bridge today and confirmed it was undamaged". They may have first run the photo through an AI, but they also went out to verify. Or maybe they ran it after verification to understand it better.
| |
| ▲ | lazystar 8 hours ago | parent [-] | | So.. is this where the AI hype train starts to lose steam? One AI hallucinated and caused the incident, and another AI program just wasted everyone's time after it was unable to verify the issue. Sounds like AI was utterly useless to everyone involved. | | |
| ▲ | bigiain 6 hours ago | parent | next [-] | | > One AI hallucinated and caused the incident I suspect that AI was prompted to create the image, not that this was an incidental "hallucination". Cynical-me suspects this may have been a trial run by malicious actors experimenting with disrupting critical infrastructure. | | | |
| ▲ | 1659447091 8 hours ago | parent | prev | next [-] | | > Sounds like AI was utterly useless to everyone involved Maybe. Imo, the advances in AI and the hype toward generated everything will actually be our digitally-obsessed society's course-correction back to placing greater emphasis on things like theater, live music, and conversing in person with people, even strangers (the horror, I know), simply to connect and consume more meaningfully. It'll level out, integrating both instead of being so digitally lopsided, as humans adapt to enjoy both.* To me, this shows a need for more of the local journalism that has been decimated by the digital world. By journalism, I mean it in the more traditional sense, not bloggers and podcasters (no shade, some follow principled journalistic integrity -- just as some national "traditional" outlets don't). Local journalism is usually held to account by its community, and even though the worldwide BBC site has this story, it was their local reporters who were able to verify. If these AI stories/events accelerate a return to local reporting with a worldwide audience, then all the better. * I try to be a realist, but when I err, it tends to be on the optimist side | |
| ▲ | nicoburns 7 hours ago | parent [-] | | The tech giants sucking up all the ad revenue is what killed local journalism. Unless you can find a solution to that problem (or an alternative funding model), it's not coming back. | |
| ▲ | DrewADesign 5 hours ago | parent [-] | | But just think of all the people that didn’t have to receive a paycheck because of all this efficiency! It’s really incredible how the supposedly unassailable judgement of mass consumer preference consistently leads our society to produce worse shit so we can have more of it, and rewards the chief enshittifiers with mega yachts. | |
| ▲ | kiba 5 hours ago | parent [-] | | They have powerful untaxed monopolies that capture value well in excess of what the tech companies themselves generate. At some point, the value of their services comes from the people who use their sites.
|
|
| |
| ▲ | ceejayoz 8 hours ago | parent | prev [-] | | > Sounds like AI was utterly useless to everyone involved. Not the hoaxer! |
|
|
|
| ▲ | skissane 7 hours ago | parent | prev | next [-] |
| Someone I know is a high school English teacher (being vague because I don’t want to cause them trouble or embarrassment). They told me they were asking ChatGPT to tell them whether their students’ creative writing assignments were AI-generated or not. I pointed out that LLMs such as ChatGPT have poor reliability at this; classifier models trained specifically for this task perform somewhat better, yet they also have their limitations. In any event, if the student has access to whatever model the teacher is using to test for AI generation (or even comparable models), they can always respond adversarially by tinkering with an AI-generated story until it is no longer classified as AI-generated.
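|
| (A minimal sketch of that adversarial approach, assuming a black-box detector the student can query repeatedly; detector_score and perturb below are hypothetical placeholders standing in for whatever detector and editing strategy are actually used, not any real API.)
|
|     # Hypothetical sketch of the adversarial loop described above: keep
|     # editing an AI-generated draft until the detector stops flagging it.
|     def detector_score(text: str) -> float:
|         """Placeholder: the detector's estimated probability that `text`
|         is AI-generated (an LLM prompt, a dedicated classifier, etc.)."""
|         raise NotImplementedError("plug in the actual detector here")
|
|     def perturb(text: str) -> str:
|         """Placeholder edit step: rephrase a sentence, swap synonyms, add
|         a personal quirk -- by hand or with another model."""
|         raise NotImplementedError("plug in the editing strategy here")
|
|     def evade(draft: str, threshold: float = 0.5, max_rounds: int = 20) -> str:
|         """Tinker with the draft until the detector score drops below threshold."""
|         text = draft
|         for _ in range(max_rounds):
|             if detector_score(text) < threshold:
|                 return text       # no longer classified as AI-generated
|             text = perturb(text)  # try another small edit and re-check
|         return text               # gave up; still flagged
|
| As long as the student can query the same detector (or a close proxy), they can keep iterating until the draft stops being flagged.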
| |
| ▲ | frenchtoast8 6 hours ago | parent | next [-] | | A New York lawyer used ChatGPT to write a filing with references to fake cases. After a human told him they were hallucinated, he asked ChatGPT if that was true (which said they were real cases). He then screenshotted that answer and submitted it to the judge with the explanation "ChatGPT ... assured the reliability of its content." https://www.courtlistener.com/docket/63107798/54/mata-v-avia... (pages 19, 41-43) | | | |
| ▲ | techjamie 7 hours ago | parent | prev | next [-] | | Reminds me of a Reddit story that made the rounds about a professor asking ChatGPT if it wrote papers, to which it frequently responded affirmatively. He sent an angry email about it, and a student responded by showing a response from ChatGPT claiming it wrote his email. | |
| ▲ | gblargg 6 hours ago | parent [-] | | > student responded by showing a response from ChatGPT claiming it wrote his email Which is actually fine. Students need to do their own homework. A teacher can delegate writing emails. | | |
| ▲ | recursive 6 hours ago | parent | next [-] | | But if he didn't delegate, and it said he did, that would suggest that the methodology doesn't really work. | |
| ▲ | gblargg 3 hours ago | parent | prev | next [-] | | Apologies to everyone I upset by this comment. It was just an innocent mis-reading of the joke. Lesson learned. | |
| ▲ | arcanemachiner 6 hours ago | parent | prev | next [-] | | I believe you just got whooshed. | | |
| ▲ | gblargg 3 hours ago | parent | next [-] | | Yes, I missed the student using the teacher's trust in those tools to make them even more angry and neuter their angry email that they (probably) actually wrote themselves. Well-played. | |
| ▲ | MengerSponge 5 hours ago | parent | prev [-] | | A person arguing in favor of LLM use failed to comprehend the context or argument? Unpossible! | | |
| ▲ | gblargg 3 hours ago | parent [-] | | I don't think I was arguing for LLMs. I wish nobody used them. But the argument against a student using it for assignments is significantly different than that against people in general using them. It's similar to using a calculator or asking someone else for the answer: fine normally but not if the goal is to demonstrate that you learned/know something. I admit I missed the joke. I read it as the usual "you hypocrite teacher, you don't want us using tools but you use them" argument I see. There's no need to be condescending towards me for that. I see now that the "joke" was about the unreliability of AI checkers and making the teacher really angry by suggesting that their impassioned email wasn't even their writing, bolstered by their insistence that checkers are reliable. |
|
| |
| ▲ | weird-eye-issue 5 hours ago | parent | prev [-] | | You missed the entire point lol | | |
| ▲ | gblargg 3 hours ago | parent [-] | | Yeah, I'm really sorry. I didn't realize it would upset so many people. |
|
|
| |
| ▲ | ikr678 4 hours ago | parent | prev [-] | | Students (and some of my coworkers) are now learning new content by reading AI-generated text. Of course, when tested on this, they are going to respond in the style of AI.
|
|
| ▲ | jameslk 6 hours ago | parent | prev | next [-] |
| ChatGPT: This looks like AI. I can tell from some of the pixels and from seeing quite a bit of training data in my time. |
|
| ▲ | informal007 7 hours ago | parent | prev | next [-] |
| This is the fastest check they can try, but it shouldn't be treated as the most trustworthy method, and it shouldn't be in the report.
|
|
| ▲ | Davidzheng 7 hours ago | parent | prev | next [-] |
| If it's nano banana, you can give it to Gemini because it has artifacts
| |
| ▲ | RestartKernel 4 hours ago | parent [-] | | All these tool integrations are making it increasingly difficult to explain to non-tech people what these chatbots are capable of. Even more so as multi-modality improves (at some point, image generation went from a distinct tool to arguably an inherent part of the models).
|
|
| ▲ | hexbin010 3 hours ago | parent | prev | next [-] |
| Do you not think even BBC "journalists" are under immense pressure to use AI for efficiency? It's everywhere.
|
| ▲ | Blackthorn 8 hours ago | parent | prev | next [-] |
| Yeah, talk about begging the question. Yikes. |
|
| ▲ | vkou 9 hours ago | parent | prev | next [-] |
| It's not, but when you have 30 minutes to ship a story... |
| |
|
| ▲ | maxlin 7 hours ago | parent | prev [-] |
| Yeah, that hardly suggests the "journalist" is good at their job. At worst, they asked a leading question like "has this photo been AI generated, and if so, how?" People tend to think of AI as a specific kind of human that naturally knows more about other AI output. But we should expect better from people who write for a living.