| ▲ | whack 6 hours ago |
| > According to the court documents, the Fargo detective working the case then looked at Lipps' social media accounts and Tennessee driver's license photo. In his charging document, the detective wrote that Lipps appeared to be the suspect based on facial features, body type and hairstyle and color. > Once they were in hand, Fargo police met with him and Lipps at the Cass County jail on Dec. 19. She had already been in jail for more than five months. It was the first time police interviewed her. How is this the fault of AI? It flagged a possible match. A live human detective confirmed it. And the criminal justice system, for reasons that have nothing to do with AI, let this woman sit in jail for 5 months before even interviewing her or doing any due diligence. There's a reason why we don't let AI autonomously jail people. Instead of scapegoating an AI bogeyman, maybe we should look instead at the professional human-in-the-loop who shirked all responsibility, and a criminal justice system that thinks it is okay to jail people for 5 months before even starting to assess their guilt. |
|
| ▲ | rglover 6 hours ago | parent | next [-] |
| > How is this the fault of AI? It flagged a possible match. A live human detective confirmed it. Because we're seeing the first instances of what reality looks like with AI in the hands of the average bear. Just like the excuse was "but the computer said it was correct," now we're just shifting to "but the AI said it was correct." Don't underestimate how much authority and thinking people will delegate to machines. Not to mention the lengths they'll go to weasel out of taking responsibility for a screw up like this (saw another comment in this thread about the Chief of Police stepping down but it being framed as "retirement"). |
| |
| ▲ | andrepd 4 hours ago | parent | next [-] | | It's not even just incompetence, but malice. "AI says so" is going to be the perfect catch-all excuse for literally everything anyone might want to do that they shouldn't. You know how techbros love to excuse every horrifying outcome of their torment nexi with "don't blame me, the algorithm did it"? It's going to be like that, but now everyone can do it. | |
| ▲ | foxglacier 3 hours ago | parent | prev | next [-] | | So what? There were false arrests and convictions made by misuse of line-ups, DNA, eye-witnesses, photos, bloodstains, fingerprints, etc. since forever. You must also blame all those other technologies, so what do you think the police should use to find suspects? In your view, the more help police have, the worse a job they'll do. Is that actually the trend? | | |
| ▲ | catlikesshrimp 3 hours ago | parent | next [-] | | With all the other proof you mentioned, there was always a human putting his signature on it. Now that they can blame "AI," no specific officer(s) will take the blame, ever. If no one is responsible there will be many more false positives. And false positives destroy lives. | |
| ▲ | worik 3 hours ago | parent | prev [-] | | So what??? This woman lost most of her material possessions, was terrorised by "goons"... The police do this stuff regularly, as black people, immigrants, "white trash" etcetera know well. Another opportunity, presented BY AI models, for more routine police oppression. As the wise singer said: "Fuck the police!" | | |
| ▲ | foxglacier 3 hours ago | parent [-] | | Exactly, it's the police's fault, as well as the wider system they operate in that enables that kind of abuse, and they do it anyway even with out AI. |
|
| |
| ▲ | pj_mukh 5 hours ago | parent | prev [-] | | I'm sorry, but this is a piss-poor excuse. When I Claude-code broken features, I'm 100% responsible. Why are cops not treated the same way? OP is right, AI is totally irrelevant in this story. If the point is "cops can't be trusted," why do they have GUNS?! AI is the least of your problems. I feel like I'm going crazy with this narrative. | | |
| ▲ | jacquesm 4 hours ago | parent | next [-] | | > I feel like I'm going crazy with this narrative. We're only getting warmed up. There are programmers on HN that will take the output of their favorite AI, paste it and run it. And we're supposed to be the ones that know better. What do you think an ordinary person is going to do in the presence of something that they can not relate to anything else except for an oracle, assuming they know the term? You put anything in there and out pops this extremely polished looking document, something that looks better than whatever you would put together yourself with a bunch of information on it that contains all kinds of juicy language geared up to make you believe the payload. And it does that in a split second. It's absolutely magical to those in the know, let alone to those that are not. They're going to fall for it, without a second thought. And they're going to draw consequences from it that you thought could use a little skepticism. Too late now. | |
| ▲ | heavyset_go 3 hours ago | parent | prev | next [-] | | When you foster a culture of impunity and passing the buck, don't be surprised when they pass the buck to the inscrutable black box they bought. You might even argue that's the purpose of the inscrutable black box. | | | |
| ▲ | dml2135 4 hours ago | parent | prev | next [-] | | The “I” in “AI” stands for “intelligence”. Cops are using AI facial recognition because it is being sold to them as being smarter and better than what they are currently capable of. Why are we then surprised that they aren’t second-guessing the technology? | | |
| ▲ | lotsofpulp 4 hours ago | parent [-] | | Because they are supposed to possess minimum levels of intelligence found in homo sapiens, which includes not believing anything a salesperson says. Also, their whole job is dealing with people who constantly lie to them. | | |
| ▲ | pixl97 2 hours ago | parent | next [-] | | There are two things occurring here. Police get raises and recognition for closing cases. In general they don't care if you're guilty or not; that's someone else's problem. Same with the detective, same with the DA. The more cases they close, the 'tougher they are on crime'. The next thing occurring is https://en.wikipedia.org/wiki/Computer_says_no | | |
| ▲ | tharkun__ 4 hours ago | parent | prev [-] | | You're over-selling the minimum level of intelligence in homo sapiens. What you're stating is your wishful thinking. Don't get me wrong. I'd also like what you say to be true. It very much is not. Quite the opposite, which is why salespeople "work". The amount of AI bullshit Senior+ level developers just paste to me as truth is astonishing. |
|
| |
| ▲ | caconym_ 5 hours ago | parent | prev | next [-] | | As soon as we start to see a pattern of shitty vibe-coded software actually harming people via defects etc. (see: therac-25), I would hope that the conversation is about structural change to mitigate risk in aggregate rather than just punitive consequences for the individual programmers who are "responsible". The latter would be a fantastically stupid response and would do little or nothing to reduce future harm. | | |
| ▲ | pj_mukh 5 hours ago | parent [-] | | All accountability need not be punitive; we can certainly talk about systemic guardrails. What I find hard to believe is someone saying that the Chief of Police declaring "We are not going to talk about that today" is not the biggest scandal, but the AI is. | | |
| ▲ | caconym_ 4 hours ago | parent | next [-] | | > someone saying the Chief of Police saying "We are not going to talk about that today?" is not the biggest scandal, but the AI is. Who is this "someone"? OP's article and the discussion here are absolutely not neglecting the human factors and general institutional failure that made this possible. But it's also true that without these "AI" tools, it would never have happened. | |
| ▲ | pj_mukh 4 hours ago | parent [-] | | Yea, but this feels like when a Waymo ran over a cat and a human driver ran over a toddler, and both got the same level of coverage in the media (actually the cat got more follow-up coverage). And I'm supposed to believe both issues are equally important. No. That's gaslighting, and totally misplaced political activation. |
| |
| ▲ | toraway 4 hours ago | parent | prev [-] | | "Among his accomplishments has been establishing the department’s Real Time Crime Center that leverages technology and data to support officers in responding more effectively to incidents," the city's release said. "Zibolski also prioritized officer wellness initiatives to strengthen mental health resources and resilience within the department. He reinstituted the Traffic Safety Team to focus on roadway safety and proactive enforcement, and ... played an active role in statewide discussions on various issues affecting law enforcement."
From the same article... He spearheaded a push to "leverage technology and data to support officers in responding more effectively to incidents", then that same technology mistakenly ruined a woman's life by passing along a hit to an officer who compared it with her FB photos and said "sure, seems right". The technology seems highly relevant here. Plus, as we've seen in the software world, when a mandate comes from the top to use the shiny new magic AI tools as much as possible, the officer may have felt pressured to make arrests using the new system they paid a bunch of money for instead of second-guessing whatever it spits out. |
|
| |
| ▲ | ux266478 22 minutes ago | parent | prev | next [-] | | You can hold someone responsible only after they've actually fucked up. And with the way things move in the criminal justice system, that can take months to discover. Holding them responsible doesn't really fix anything, it's purely reactive. | |
| ▲ | jfengel 4 hours ago | parent | prev | next [-] | | You are exactly correct. Cops cannot be trusted. We spent a lot of time pointing that out in 2020. AI is the least of our problems with policing. Unfortunately, a lot of people are certain it won't happen to them, and it has been practically impossible to establish any kind of accountability. It has only gotten worse since 2020. | |
| ▲ | malwrar 3 hours ago | parent | prev | next [-] | | You are right IMO to question why North Dakota police were able to obtain this Tennessean woman in the first place; you’d think something like that should require far more substantial evidence than facial recognition. But then, what good is facial recognition? Would it have been okay for this woman’s life to have been merely invaded because she matched a facial recognition system? Maybe they can just secretly watch you so you’re not consciously aware of being investigated? Should that be our new standard: if a computer thinks you look like a suspect, you can be harassed by police in a state you’ve never even been in? I just don’t see a legitimate way for AI to empower officers here without risking these new harms. That’s why I lean towards blaming the AI tech, rather than historically intractable problems like the reality of law enforcement. | | |
| ▲ | mlinsey 2 hours ago | parent [-] | | Having a facial recognition match make you a suspect and cause the police to ask you some questions doesn't seem completely unreasonable to me. Investigations can certainly begin with weak forms of evidence (like an anonymous tip), you just require a higher standard of evidence for a search warrant, surveillance, or an arrest. A facial recognition match shouldn't be probable cause for an arrest warrant, but it still might be a useful starting point for a detective looking for actual evidence. | | |
| ▲ | crooked-v an hour ago | parent [-] | | It is absolutely not reasonable to use low-quality photos to decide someone halfway across the country with no history of even leaving their local area is 'a suspect'. |
|
| |
| ▲ | smcl 3 hours ago | parent | prev | next [-] | | You’re on the right track here but I don’t think it should be hand-waved away as “the least of your problems” - it’s yet another weapon that police in the USA can use against the population with impunity. They’re going to have to reckon with all of this in the coming years - cops having guns and armored cars, “qualified immunity”, the “stop resisting” workaround for brutality and now this AI | |
| ▲ | antod 4 hours ago | parent | prev | next [-] | | But it's not totally irrelevant in this story. Cops are already susceptible to confirmation bias, and for "efficiencies" they are delegating part of their job to apparently magical tools that will only increase their confirmation bias. And because it is for efficiency you can bet they won't be given extra time to validate the results. What or who is at fault isn't either/or, it's a bunch of compounding factors. | |
| ▲ | stego-tech 5 hours ago | parent | prev | next [-] | | You’re going crazy because up until this exact moment you’ve never had to confront the reality that these tools, placed into the hands of the common man, are viewed as authoritative and lack any accountability or consequence for misuse. For anyone who has been victimized by law enforcement or governments before, we’ve been warning about this shit for decades. About the lack of consequence for police brutality. The lack of consequence for LPR abuse. The lack of consequence for facial recognition failures and AI mismatches. You need to understand that by using these systems correctly and holding yourself accountable, you are in the minority. Most people do not think that critically, and are all too happy to point the finger at the computer when things go badly. Until you accept that, and work to actually hold folks accountable instead of deflecting blame away from the tool, this won’t actually change. | |
| ▲ | Nimitz14 19 minutes ago | parent [-] | | Your answer presumes we cannot hold people accountable. I think that is incorrect. |
| |
| ▲ | pear01 5 hours ago | parent | prev | next [-] | | It's called qualified immunity. Many support its repeal. I hope you join them, and convey the same to your local representatives and candidates. Until it is reformed, few if any officers or administrators of criminal justice in the United States will ever feel any type of accountability. Short of video evidence of a blatant gun-to-the-back-of-the-head style homicide, qualified immunity means most law enforcement officials are never held accountable for their miscarriages of justice. Criminal charges against officers are exceedingly rare. She should be able to sue this detective directly. Of course she can sue the government too, and should. But without any personal consequences for the people carrying out these acts, taxpayers will continue to bail out these practices without ever noticing. Your own government should not be a shield for a police officer who has violated you or your neighbors. | |
| ▲ | kelnos 4 hours ago | parent | next [-] | | > Many support its repeal. There's nothing to repeal. Qualified immunity is a doctrine that the judicial branch made up out of thin air, with no legislative backing. But agreed, we need legislatures to write laws that expressly hold police accountable, and declare that they are not shielded from liability when things go wrong due to their own failures and negligence. | |
| ▲ | jfengel 4 hours ago | parent | prev [-] | | > Short of video evidence of blatant gun to the back of the head style homicide qualified immunity means most law enforcement officials are never held accountable for their miscarriages of justice. And frequently not even then. |
| |
| ▲ | kelnos 4 hours ago | parent | prev | next [-] | | I mean, this is the USA we're talking about. Cops are given huge authority over everyone else, with poor accountability. AI just lets them pretend to be even less accountable. And by "pretend" I of course mean "get away with it". | |
| ▲ | wat10000 5 hours ago | parent | prev [-] | | When are cops ever treated the same way as the rest of us? | | |
| ▲ | deepsun 4 hours ago | parent [-] | | Well in most cases I would prefer to have a cop's word to outweigh a word of an average joe. | | |
| ▲ | amanaplanacanal 9 minutes ago | parent | next [-] | | Do you think police are inherently more honest than everybody else? Why would you think that? | |
| ▲ | wat10000 3 hours ago | parent | prev [-] | | Why should having that particular job give you that privilege? All should be equal before the law. |
|
|
|
|
|
| ▲ | caconym_ 5 hours ago | parent | prev | next [-] |
| This particular "AI bogeyman" isn't just AI; it's cops with AI and in particular cops with facial recognition tools, dragnet LPR surveillance tools, and all this other new technology that essentially picks somebody's name out of a hat to have their life temporarily (or [semi-]permanently) ruined by shithead cops who won't ever face any real accountability. This keeps happening, and the reason it keeps happening is that shithead cops have these tools and are using them. Until we can find a reliable way to prevent this from happening, which may or may not be possible, cops who may or may not be shitheads should not have access to these tools. |
| |
| ▲ | hinkley 5 hours ago | parent | next [-] | | It’s also cops Making the Numbers Go Up by marking down a case file as having progressed because someone is in custody. Which isn’t about justice. | | |
| ▲ | mothballed 5 hours ago | parent [-] | | They don't seem to give a single iota of a fuck about that when a private regular person has their money stolen or their car totaled by hit and run driver. Finding some innocent person to arrest would indicate they are at least pretending to give a fuck, yet they seem to only be bothered to even keep up appearances when it is the bank being robbed. | | |
| |
| ▲ | guelo 4 hours ago | parent | prev | next [-] | | It's not just the shithead cops, it's the voters. All the "Blue Lives Matter", "thin blue line", "back the blue" propaganda works towards giving police infinite powers with zero accountability. This is what voters want and they've said so loudly over and over again. | |
| ▲ | throwaway314155 5 hours ago | parent | prev [-] | | There’s nothing wrong with your comment per se, but it’s almost as if you didn’t even read the comment you’re responding to. | | |
| ▲ | caconym_ 5 hours ago | parent | next [-] | | Let me help you out with this comprehension issue. The point of my comment is that I disagree with the apparent premise of the comment I replied to, which is that "AI" is some generic investigative tool that we can neatly snip out of the picture to blame this incident on human factors at the individual level ("the professional human-in-the-loop who shirked all responsibility"). Said comment also implies that people are fixating on the AI aspect of this issue while ignoring the human factors, which IMO is a strawman. To me, the existence of AI in its current incarnations and the ways in which law enforcement will inevitably abuse it are, together, inseparably, the problem. AI (in the most general sense) opens up entire new dimensions for potential abuse. As a concrete example: > And the criminal justice system, for reasons that have nothing to do with AI, let this woman sit in jail for 5 months before even interviewing her or doing any due diligence. Let me state what should be obvious: without AI (as in, the facial recognition systems involved in this case), this woman would not have sat in jail for 5 months, or indeed for any length of time at all. So saying that it has "nothing to do with AI" is totally ridiculous. | |
| ▲ | fc417fc802 33 minutes ago | parent | next [-] | | > Let me state what should be obvious: without AI (as in, the facial recognition systems involved in this case), this woman would not have sat in jail for 5 months, or indeed for any length of time at all. How do you arrive at that conclusion? Because it happened, and it wasn't an AI overseeing (the lack of) due process. The police identifying suspects is part of their job. So are arrest warrants and all the rest of it. I honestly don't see what AI had to do with anything here. All I see is a gaping systemic issue that could have happened regardless of AI if the wrong person got the wrong idea or had a personal vendetta. Suppose ICE busts down someone's door, drags them off, holds them in an internment camp for months, and then finally goes "oh, oops, guess you were a citizen all along, sorry about that" and releases them. We don't blame the source of their faulty hit list. We blame the systemic practices and legal apparatus that permitted it all to happen in the first place. You might as well blame the SUV manufacturer because without vehicles the police wouldn't have been able to drive over to make the arrest, right? | |
| ▲ | throwaway314155 4 hours ago | parent | prev [-] | | Like I said, there wasn’t anything wrong with your comment. It just didn’t seem to directly address the parent comment. This does, thanks. |
| |
| ▲ | Retric 5 hours ago | parent | prev [-] | | Seems like a direct response to me. >> How is this the fault of AI? > This particular "AI bogeyman" isn't just AI; it's cops with AI You can’t separate the thing from how it will be used. It’s like arguing that cars on their own aren’t particularly dangerous, but the point of buying a car is to use it thus risking the general public. | | |
| ▲ | fc417fc802 28 minutes ago | parent [-] | | But you can in fact argue exactly that. If (arbitrary example) pedestrians are being killed due to poor road engineering practices it isn't reasonable to point at cars and say "see those are the root problem" when in fact it's due to a willful lack of sidewalks or marked crossings or whatever. Being adjacent to something bad doesn't equate to being the root cause. |
|
|
|
|
| ▲ | danso an hour ago | parent | prev | next [-] |
| > How is this the fault of AI? AI is being used by bureaucrats and enforcers to justify lazy, harmful conclusions. You don't live in the real world if you think "just punish the bureaucrats, don't make it about AI" is going to remotely rectify this toxic feedback loop and ecosystem. |
| |
| ▲ | fc417fc802 10 minutes ago | parent [-] | | No, we definitely should punish bureaucrats and enforcers who act negligently. If someone in a position of authority flagrantly fails to do his job and it directly harms someone he should be held accountable. That would provide a strong incentive for future actors to take their responsibilities seriously. If an engineer signs off on an obviously faulty building plan and people die as a result we hold him accountable. This is no different. |
|
|
| ▲ | obviouslynotme 5 hours ago | parent | prev | next [-] |
It's not. This is just an acceleration in the unraveling of society facilitated by AI. As someone whose childhood included so many "robots will kill humans" books and movies, I am flabbergasted that the AI apocalypse will be dumb humans overtrusting faulty AI in important matters until everything falls apart. Most humans cannot distinguish AI from actual intelligence. When you combine that with bureaucrats' innate tendency to say, "Computer said so," you end up with bizarre situations like this. If a person had made this facial match, another human would have relentlessly jeered him. Since a computer running AI did it, no one even cared to think about it. Computers are wildly dangerous, not because of anything innate but because of how humans act around them. |
| |
| ▲ | fc417fc802 18 minutes ago | parent | next [-] | | > If a person had made this facial match, another human would have relentlessly jeered him. The glaringly obvious problem here is that our justice system should not be constructed in such a way so as to be reliant on someone's coworker shaming him. That is not a sensible check against a systemic failure. We're supposed to have due process. If someone skips or otherwise subverts due process the justifications don't matter. The root issue is that due process was skipped. Why was that even possible to begin with? | |
| ▲ | throwaway173738 3 hours ago | parent | prev [-] | | > It's not. This is just an acceleration in the unraveling of society facilitated by AI. As someone whose childhood included so many "robots will kill humans" books and movies, I am flabbergasted that the AI apocalypse will be dumb humans overtrusting faulty AI in important matters until everything falls apart. This is literally the plot of most of those books and the way they differ is in how everything falls apart. In some of them the AI supplants us entirely and kills us all. In others it gets taught to kill us all. In others it gets really good at giving us what we ask for until everything falls apart. But it’s taken as a given that unless we change something innate in our culture AI will be our downfall. |
|
|
| ▲ | antonymoose 3 hours ago | parent | prev | next [-] |
| Reminds me of a case that just popped up in my neck of the woods. Man gets pulled over on an expired plate. They search based on this fact, find a pill bottle (for Irritable Bowel Syndrome) and magically find he’s trafficking cocaine and fentanyl. Months later a lab test exonerates the poor guy. https://www.wyff4.com/article/deputies-falsely-identify-ibs-... |
|
| ▲ | rightbyte 5 hours ago | parent | prev | next [-] |
| > How is this the fault of AI? The false positive rate combined with scanning millions of pictures might make the chance of arresting the wrong person really high. |
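The base-rate arithmetic behind that false-positive point can be sketched with assumed numbers (the false positive rate and database size below are purely illustrative, not figures from the article):

```python
def expected_false_matches(false_positive_rate: float, database_size: int) -> float:
    """Expected number of innocent people flagged per search:
    each non-matching photo has an independent chance of a false hit."""
    return false_positive_rate * database_size

# Assumed values for illustration only
fpr = 0.001          # a seemingly impressive 99.9%-accurate matcher
db = 10_000_000      # e.g. a multi-state driver's license photo pool

hits = expected_false_matches(fpr, db)
print(f"Expected false matches per search: {hits:,.0f}")  # -> 10,000
```

Under these assumptions, a single true suspect in the pool would be buried among thousands of false hits, which is why a match on its own can't responsibly serve as probable cause.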
|
| ▲ | stego-tech 5 hours ago | parent | prev | next [-] |
It's the fault of the tool because our society treats the tools' judgement as superior to humans' and trusts them completely as a means of deflecting accountability - something any and every minority group has been warning about for fucking decades. The reason everyone rushes to defend the tool's use is because holding humans accountable would mean throwing these tools out entirely in most cases, due to internal human biases and a decline in basic critical and cognitive thinking skills. The marketing has been the same since the 80s: the tool is superior (until it isn't), the tool shall be trusted completely (until it fails), the tool cannot make mistakes (until it does). If folks actually listened to the victims of this shit, companies like Flock and Palantir would be gutted and their founders barred from any sort of position of responsibility, at minimum. The fact so many deflect blame from the tool like the marketing manual demands shows they don't actually give a shit about the humans wrapped up in the harms, or the misuse and misappropriation of these tools by persons wholly unaccountable under the law, but only about defending a shiny thing they personally like. |
| |
| ▲ | pixl97 2 hours ago | parent [-] | | >rushes to defend the tool's use is because holding humans accountable would mean throwing these tools out entirely in most cases, due to internal human biases and a decline in basic critical and cognitive thinking skill The magical past where people had critical thinking skills never existed. We put a lot of trust in tools because people are un-fucking-reliable. Hence why in most cases actual physical evidence does a far better job than witness testimony. That said, people are lazy. It is one of our greatest and worst traits. When we are allowed to be lazy, especially with tools, bad things happen. |
|
|
| ▲ | lokar 3 hours ago | parent | prev | next [-] |
| Automation has a strong tendency to degrade diligence. I see this all the time in operational / production settings. Having a loop with automation reviewed and approved by a human degrades very fast. I only approve automation that has a quick path to unsupervised operation. |
|
| ▲ | RobRivera 6 hours ago | parent | prev | next [-] |
| I think it's more nuanced; it is one error in a Tragedy of Errors. |
|
| ▲ | 5 hours ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | culi 3 hours ago | parent | prev | next [-] |
Study after study has shown a very strong and consistent bias of humans to trust "automated systems" in the face of any ambiguity. |
|
| ▲ | themafia 6 hours ago | parent | prev | next [-] |
| > How is this the fault of AI? It could be the fault of the company that's selling this service. They often make wildly inaccurate claims about the utility and accuracy of their systems. [0] > There's a reason why we don't let AI autonomously jail people. Yes we do. [1] > and a criminal justice system that thinks it is okay to jail people for 5 months before even starting to assess their guilt. Her guilt was assessed; that's why she had no bail. The system assessed it incorrectly, but the error is more complicated than your reaction implies. [0]: https://thisisreno.com/2026/03/lawsuit-reno-police-ai-polici... [1]: https://projects.tampabay.com/projects/2020/investigations/p... |
|
| ▲ | type0 4 hours ago | parent | prev | next [-] |
| > Instead of scapegoating an AI bogeyman One big reason for AI adoption everywhere is that you can use it as a scapegoat |
|
| ▲ | unethical_ban 3 hours ago | parent | prev | next [-] |
| Someone from the government should be in jail for this kind of oversight. |
| |
|
| ▲ | worik 3 hours ago | parent | prev | next [-] |
| > How is this the fault of AI? It is not. It is the fault of the police AI models are tools. When mistakes are made they are the mistake of the operator of said tool This AI model was badly misused, this woman should get a metric shit tonne of compensation, but it was the fault of the police. |
|
| ▲ | blitzar 5 hours ago | parent | prev | next [-] |
| computer said yes |
|
| ▲ | an hour ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | ares623 3 hours ago | parent | prev [-] |
| I hope you take this as a teaching/learning opportunity |