stingraycharles 9 hours ago

Why, though? What do people really envision the next decade of government + AI being like?

Obviously mass surveillance is already happening. Obviously the line between “human kills other human” has been blurring for a long time already, e.g. remote-operated drones. Missiles are already remotely controlled, navigating autonomously, and detecting and following moving targets.

What’s the goal of people who think deleting their OpenAI account will make an impact?

maxbond 9 hours ago | parent | next [-]

Recently I left an HN comment pointing out that there was a typo on Ars Technica's staff page. One copy editor had the title "Copy Editor" and the other "Copyeditor." Several days later the typo was fixed. I'm confident that it was because someone at Ars saw my comment.

I left a comment describing how I am deleting my OpenAI account. I think there's a good chance someone at OpenAI sees it, even if only aggregated into a figure in a spreadsheet. Maybe a pull quote in a report.

You do your best at the margin, have faith it will count for something in aggregate, and accept that sometimes you're tilting at windmills. I know most of my breath is wasted, but I can't reliably tell which.

mentalgear 9 hours ago | parent | prev | next [-]

Because OpenAI is the least trustworthy of the big LLM providers. See S(c)am Altman's track record, especially his early comments in Senate hearings, where:

* he warned against engagement-optimisation strategies, like those of social media, being used for chatbots / LLMs.

* he also warned that "ads would be the last resort" for LLM companies.

He has casually ignored both of his own warnings, as ChatGPT / OpenAI has now fully converted to Facebook's tactics of "move fast and break things", even if the thing being broken is society itself. A complete turn away from the AI-for-science lab it was originally founded as, which explains why every real (founding) ML scientist left the company years ago.

While still being for-profit outfits, at least DeepMind and Anthropic are headed by actual scientists, not marketing guys.

qsera 9 hours ago | parent [-]

Mm..just wait till your current favorite guy becomes as big..

designerarvid 9 hours ago | parent | prev | next [-]

Maybe people believe that the US is better off not having a government that coerces private companies? This is a way of showing that.

/non-US and just guessing

stingraycharles 9 hours ago | parent [-]

So then you would prefer Grok instead?

The genie is out of the bottle, this will happen anyway. The question is who will be the steward.

rglullis 9 hours ago | parent | next [-]

> The question is who will be the steward.

I do not have the power to control that, but I do have the power to choose who I support.

virgildotcodes 9 hours ago | parent | prev | next [-]

Grok and this administration are completely aligned, so if people believe that the government's coercive actions are to be stood up against, why on Earth would they support Grok instead of... the company that's actually taking a stand against government coercion?

stingraycharles 8 hours ago | parent [-]

That’s kind of my point. Why are we applauding Anthropic taking a strong stance, why do we want OpenAI to do the same, if that will inevitably lead to Grok getting their systems integrated in all of the DoD’s surveillance and intelligence systems?

virgildotcodes 8 hours ago | parent | next [-]

I believe Grok is already as deeply integrated into the gov as it can be, but it's objectively the least capable model family, trailing OpenAI, Anthropic, and Gemini.

So the gov could very well rely on it alone, purely on ideological grounds, but then they'd be condemned to using inferior tech at a time when everyone is really nervous about staying ahead in AI (rightly or wrongly). Not sure they'd be willing to accept that, and it does put pressure on them.

duskdozer 8 hours ago | parent | prev [-]

If they preferred Grok, they could have just gone with Grok in the first place. Presumably, OpenAI gives them something they want more.

9 hours ago | parent | prev [-]
[deleted]
duskdozer 8 hours ago | parent | prev | next [-]

Any one individual's vote is probably not going to change the result of an election. So, why do people vote? Individual actions in aggregate have effects. And even if you think it's ultimately futile, sometimes it's about saying "I don't think this is acceptable."

coredev_ 8 hours ago | parent | prev | next [-]

When did the US population stop believing in a better society and world? A bad trajectory is something that can be fixed. We do not need AI in weapons; we need a law that automatically conscripts the children of presidents who start wars to the front line of those wars.

ndriscoll 5 hours ago | parent | next [-]

I don't think the US population has ever thought we don't need to develop weapons. To not do so is to put us at risk of subjugation or destruction. It's an entirely different question from whether we should be using them on anyone at any given time (personally I lean more isolationist on that question than most of the population apparently does).

Of course it's also a different question from whether we should allow mass surveillance against ourselves, which obviously we should not.

chronc2739 6 hours ago | parent | prev [-]

> We do not need AI in weapons, we need a law that forces the children of presidents starting war to automatically be conscripted to the front line of said war.

Says who? You?

Sorry, but you are just 1 person, 1 vote.

Unless you believe your vote outweighs other people’s vote.

Today, 40% of Americans still approve of Trump and his actions. Another 10-20% probably don't care. Even after Iran's attack and the DoW x OAI collab.

Which leaves the “no AI in weapons” camp at less than 50%.

ozgung 9 hours ago | parent | prev | next [-]

“Predictive programming” in action. Predicting something beforehand and getting used to it shouldn't make a wrong thing acceptable.

Ethics is about knowing right from wrong and acting on it, not about how we feel about it.

kledru 9 hours ago | parent | prev | next [-]

It's a kind of signal that we do not want to pay for our own surveillance. I did not write "funeral" though.

podgorniy 8 hours ago | parent | prev | next [-]

We are all obviously going to die. What's the point of doing anything between now and the last moment? What's the goal of people who think that doing anything will make any impact?

--

Some people do it as a symbolic action. Some to stay on their own terms as much as they can. Some hope their actions will join others' actions and turn into a signal for decision makers. For others, this action reduces their area of exposure. Others believe in something and simply follow their beliefs.

BTW, following your own set of beliefs is what you (what we all) are doing here. You believe that surveillance is already happening and nothing can be done about it, that a single action does not matter, that there is no reason to act other than direct visible impact, etc. It seems you analyze others through your own set of beliefs, and that set cannot explain their actions. This inability to explain others suggests the whole model is flawed in some way. So what is the nature of your beliefs? Did you choose them, or were they presented to you without alternatives? What are the alternatives, then? Do these beliefs serve your interests or someone else's?

throwaway20261 9 hours ago | parent | prev | next [-]

It's all about money in the end. If people keep spending money with these companies, it reinforces their notion that the money will keep flowing despite what they do. Cancelling slows down that revenue stream, giving time for other entities which are less misanthropic to catch up and counterbalance the negative side effects from these companies.

hrmtst93837 7 hours ago | parent | prev | next [-]

It's more about personal choice than making a grand impact. Many people want control over their digital footprint, given the rapid evolution of AI and its implications for privacy.

syllogism 7 hours ago | parent | prev | next [-]

The actions of the US government here are openly corrupt.

The point of the supply chain risk provisions is to denote, you know, supply chain risks. The intention is not to give the Pentagon a lever it can pull to force any company to agree to any contract it wants.

Hegseth doesn't even pretend that Anthropic is actually a supply chain risk. The argument for designating them so is that _they won't do exactly what the government wants_.

People use the term "fascism" a lot and people have kind of tuned it out, but what do you call a government that deals itself the power to compel any company to accept any contract, and declare it a pariah on thin pretext if it objects?

By taking the deal under these conditions OpenAI is accepting this. They're saying, "Well, sucks to be them, life goes on". They're consenting to the corruption and agreeing to profit from it. But they'll be next, and if the next company in line has the same stand then yeah, the government can force any company to do anything. There's nothing normal about this.

vee-kay 9 hours ago | parent | prev [-]

AI will get access to missiles, fighter jets, attack drones, and even nuclear launch codes - that's the fear.

Even when the bombs drop from the sky, at least those humans who had deleted their OpenAI account can rest easy, knowing that they weren't the ones supporting the AI that will delete humanity.

stingraycharles 9 hours ago | parent | next [-]

And what if an even worse alternative becomes the AI of choice for the DoD if OpenAI didn’t get this deal?

aniviacat 8 hours ago | parent | next [-]

If the DoW had to rely on worse AI models, the process of integrating AI into their systems would be slowed down.

tovej 9 hours ago | parent | prev | next [-]

Then the sane thing to do is to boycott that AI provider as well.

Opposing all AI companies tied to the war industry is a pretty vanilla principled stance, and it also makes sense rationally if you want to "minimize harm".

8 hours ago | parent | prev | next [-]
[deleted]
moron4hire 8 hours ago | parent | prev [-]

And what if Pete Hegseth dies in a drunk driving accident? A lot of things can happen.

davidmurdoch 8 hours ago | parent | prev [-]

Every country is going to arm themselves with AI.