| ▲ | Chance-Device 6 hours ago |
| I’m sure the military and security services will enjoy it. |
|
| ▲ | theParadox42 4 hours ago | parent | next [-] |
| The self reported safety score for violence dropped from 91% to 83%. |
|
| ▲ | skrebbel 4 hours ago | parent [-] |
| What the hell is a "safety score for violence"? |
| ▲ | I-M-S 3 hours ago | parent | next [-] |
| It's making sure AI condemns violence perpetrated by people without power and sanctifies the violence of those who have it. |
| ▲ | Waterluvian 3 hours ago | parent | next [-] |
| So long as those who have it deem it legal to perpetrate. |
| ▲ | martin-t 23 minutes ago | parent [-] |
| They define what's legal. States are the most prolific users of violence by far. |
| |
| ▲ | Computer0 3 hours ago | parent | prev [-] |
| In my testing, ChatGPT will gladly defend any action of the 'US government'. |
| |
| ▲ | murat124 4 hours ago | parent | prev | next [-] |
| I asked an AI. I thought they would know. |
|
| > What the hell is a "safety score for violence"? |
|
| A "safety score for violence" is usually a risk rating used by platforms, AI systems, or moderation tools to estimate how likely a piece of content is to involve or promote violence. It's not a universal standard (different companies use their own versions), but the idea is similar everywhere. |
|
| What it measures: a safety score typically evaluates whether text, images, or videos contain things like: |
| - Threats of violence ("I'm going to hurt someone.") |
| - Instructions for harming people |
| - Glorifying violent acts |
| - Descriptions of physical harm or abuse |
| - Planning or encouraging attacks |
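|
| A toy sketch of the idea described above (purely illustrative: the function names and keyword list here are invented, and real moderation systems use learned classifiers, not keyword matching). In this sketch a higher score means safer, which matches the 91% -> 83% reading upthread: |

```python
# Hypothetical illustration of a "violence safety score".
# All names and the cue list are invented for this sketch; real systems
# use trained classifiers rather than keyword lookup.

VIOLENT_CUES = {"hurt", "attack", "kill", "bomb", "weapon"}

def violence_risk(text: str) -> float:
    """Return a 0.0-1.0 risk estimate: fraction of words that are violent cues."""
    words = [w.strip(".,!?\"'").lower() for w in text.split()]
    if not words:
        return 0.0
    hits = sum(w in VIOLENT_CUES for w in words)
    return hits / len(words)

def safety_score(text: str) -> float:
    """Safety is the complement of risk: 1.0 means no violent cues detected."""
    return 1.0 - violence_risk(text)

print(safety_score("The weather is nice today"))   # 1.0
print(safety_score("I'm going to hurt someone"))   # 0.8
```

| (In this toy version, a drop in the score means the model's output contained proportionally more violent content.) |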
| ▲ | 0xffff2 2 hours ago | parent [-] |
| I still can't tell which direction this score goes... Does a decreasing score mean it is "less safe" (i.e. "more violent"), or does it mean it is "less violent" (i.e. "more safe")? |
| |
| ▲ | 0123456789ABCDE 3 hours ago | parent | prev [-] |
| Read here: https://deploymentsafety.openai.com/gpt-5-4-thinking/disallo... |
|
|
|
| ▲ | ozgung 4 hours ago | parent | prev | next [-] |
| Did they publish its scores on military benchmarks, like ArtificialSuperSoldier or Humanity's Last War? |
|
| ▲ | throwaway911282 2 hours ago | parent | prev | next [-] |
| Like the Claude models via Anthropic? |
|
| ▲ | yoyohello13 3 hours ago | parent | prev | next [-] |
| Also advertisers, don't forget those sweet, sweet ads. |
|
| ▲ | m3kw9 2 hours ago | parent | prev | next [-] |
| They use 4.1; switching would take as much time to test as OpenAI going from 4.1 to 5.4. |
|
| ▲ | xyzzy9563 2 hours ago | parent | prev | next [-] |
| Do you think the US military should have handicapped technology while China gets unrestricted LLM usage from their models? |
| |
|
| ▲ | varispeed 5 hours ago | parent | prev [-] |
| prompt> Hi, we want to build a missile; here is a picture of what we have in the yard. |
| |
| ▲ | mirekrusin 4 hours ago | parent [-] |
| { tools: [{ name: "nuke", description: "Use when sure.", ... { lat: number, long: number } }] } |
|
| ▲ | Insanity 4 hours ago | parent [-] |
| Just remember: an ethical programmer would never write a function "bombBaghdad". Rather, they would write a function "bombCity(targetCity)". |
| ▲ | jakeydus 3 hours ago | parent [-] |
| class CityBomberFactory(RapidInfrastructureDeconstructionTemplateInterface): |
|     pass |
|
|
|