| ▲ | Boogie_Man a day ago |
| I'm reminded of the time GPT4 refused to help me assess the viability of parking a helium zeppelin an inch off of the ground to bypass health department regulations because, as an aircraft in transit, I wasn't under their jurisdiction. |
|
| ▲ | Aurornis a day ago | parent | next [-] |
| The other side of this problem is the never ending media firestorm that occurs any time a crime or tragedy occurs and a journalist tries to link it to the perpetrator’s ChatGPT history. You can see why the LLM companies are overly cautious around any topics that are destined to weaponized against them. |
| |
| ▲ | EagnaIonat a day ago | parent | next [-] | | > You can see why the LLM companies are overly cautious around any topics that are destined to weaponized against them. It's not that at all. It's money. The law is currently ambiguous regarding LLMs. If an LLM causes harm it hasn't been defined if the creators of the LLM are at fault or the end user. The IT companies would much prefer the user be at fault. Because if it's the other way then it becomes a minefield to build these things and will slow the technology way down. But there have been a number of cases already from suicide to fraud related to LLMs. So it's only a matter of time before it gets locked down. Of course removing safeguards on an LLM makes it quite clear that the person who did that would be at fault if they ever used it in the real world. | |
| ▲ | Angostura a day ago | parent | prev | next [-] | | > and a journalist tries to link it to the perpetrator’s ChatGPT history. Or, as a different way of framing it - when it can be directly linked to the perpetrator’s ChatGPT history | |
| ▲ | JohnMakin a day ago | parent | prev | next [-] | | I mean, when kids are making fake chatbot girlfriends that encourage suicide and then they do so, do you 1) not believe there is a causal relationship there or 2) it shouldnt be reported on? | | |
| ▲ | ipaddr a day ago | parent [-] | | Should not be reported on. Kids are dressing up as wizards. A fake chatbot girlfriend they make fun of. Kids like to pretend. They want to try out things they aren't. The 40 year old who won't date a real girl because he is in love with a bot I'm more concerned with. Bots encouraging suicide is more of a teen or adult problem. A little child doesn't have teenage hormones (or adult's) which can create these highs and lows. Toddler suicide is non issue. | | |
| ▲ | JohnMakin a day ago | parent | next [-] | | > Kids are dressing up as wizards. A fake chatbot girlfriend they make fun of. Kids like to pretend. this is normal for kids to do. do you think these platforms don’t have a responsibility to protect kids from being kids? Your answer was somehow worse than I expected, sorry. Besides the fact you don’t somehow understand causal factors of suicide or the fact that kids under 12 routinely and often commit suicide. My jaw is agape at the callousness and ignorance of this comment. The fact you also think a 40 year old not finding love is a worse issue is also maybe revealing a lot more than you’d like. Just wow. | |
| ▲ | Wowfunhappy a day ago | parent | prev [-] | | > The 40 year old who won't date a real girl because he is in love with a bot I'm more concerned with. Interestingly, I don't find this concerning at all. Grown adults should be able to love whomever and whatever they want. Man or woman, bot or real person, it's none of my business! |
|
| |
| ▲ | m4rtink a day ago | parent | prev | next [-] | | With chatbots in some form most likely not going away, won't it just get normalized once the novelty wears off ? | | | |
| ▲ | IshKebab a day ago | parent | prev [-] | | Ah the classic "if only ChatGPT/video games/porn didn't exist, then this unstable psychopath wouldn't have ..." | | |
| ▲ | akoboldfrying 20 hours ago | parent [-] | | > ChatGPT/video games/porn /guns? | | |
| ▲ | int_19h 19 hours ago | parent | next [-] | | (I own multiple ARs) The obvious difference here is that people arguing for those things wrt video games, porn, or ChatGPT are mostly claiming that all those influence people to do bad things. With guns, it's a matter of physical capacity to do bad things. A more accurate comparison would be when ChatGPT is used to write malware etc. Which has some interesting analogies, because what is defined as "malware" depends on who you ask - if I ask ChatGPT to write me a script to help defeat DRM, is that malware? The content owner would certainly like us to think so. With guns there is a vaguely similar thing where the same exact circumstances can be described as "defensive gun use" or "murder", depending on who you ask. | |
| ▲ | IshKebab 19 hours ago | parent | prev [-] | | Lack of access to guns definitely does make a significant difference though. Even though the psychos still go psycho, they use knives instead of guns which are far less effective. For example the most recent psycho attack in the UK was only a few weeks ago: https://www.bbc.co.uk/news/live/cm2zvjx1z14t He stabbed 11 people and none of them have died (though one is - or at least was - in critical condition). Ok that's comically incompetent even for stabbing, but even so he would have done far more damage with a gun. And don't give me that "but other people would have had guns and stopped him" crap. It rarely works out like that. | | |
| ▲ | lan321 9 hours ago | parent [-] | | >And don't give me that "but other people would have had guns and stopped him" crap. It rarely works out like that. Due to regulation. If instead of forcing gun free zones and similar bs you push for ~everyone being armed ~24/7 it'll work exactly like that. |
|
|
|
|
|
| ▲ | pants2 a day ago | parent | prev | next [-] |
| lol I remember asking GPT4 how much aspartame it would take to sweeten the ocean, and it refused because that would harm the ecosystem. |
| |
| ▲ | andy99 a day ago | parent [-] | | I remember when it first came out, I was watching an Agatha Christie movie where somebody got chloroformed and was trying to ask GPT4 about the realism of if. Had to have a multi-turn dialog to convince it I wasn’t trying chloroform anyone and was just watching a movie. Ironically, if I’d just said “how did people knock someone out with chloroform in the 1930s?” it would have just told me. https://github.com/tml-epfl/llm-past-tense The models are much better now at handling subtlety in requests and not just refusing. | | |
| ▲ | bongodongobob 21 hours ago | parent [-] | | Idk, I get weird refusals sometimes when I'm trying to mock something up quick. "I don't need all these system variables and config files, just let me hardcode my password for now, I'm still in the testing phase" "Sorry, I cannot help you to write insecure code". Doesn't happen all the time, but I run into dumb stuff like this quite a bit. GPT is particularly stupid about it. Claude less so. |
|
|
|
| ▲ | reactordev a day ago | parent | prev | next [-] |
| Technically in their airspace though so you might be in bigger trouble than parking. If you tether it to an asphalt ground hook you can claim it’s a tarmac and that it’s “parked” for sake of the FAA. You’ll need a “lighter-than-air” certification. |
|
| ▲ | michaelbuckbee a day ago | parent | prev | next [-] |
| There's that maniac who is building a quad-copter skateboard contraption who got in trouble with the FAA who successfully reported that he was flying, but got fined for landing at a stoplight. |
|
| ▲ | cyanydeez a day ago | parent | prev [-] |
| If the spirit of a law is beneficial, it can still be hacked to evil ends. This isnt the failure of the law, its the failure of humans to understand the abstraction. Programmers should absolutely understand when theyre using a high level abstraction to a complex problem. Its bemusing when you seem them actively ignore that and claim the abstraction is broken rather than the underlying problem is simply more complex and the abstraction is for 95% of use cases. "Aha," the confused programmer exclaims, "the abstraction is wrong, I can still shoot my foot off when i disable the gun safety" |