andsoitis 5 hours ago

I’m voting with my dollars: I cancelled my ChatGPT subscription and subscribed to Claude instead.

Google needs stiff competition and OpenAI isn’t the camp I’m willing to trust. Neither is Grok.

I’m glad Anthropic’s work is at the forefront and they appear, at least in my estimation, to have the strongest ethics.

srvo 3 hours ago | parent | next [-]

Ethics often fold in the face of commercial pressure.

The Pentagon is considering severing ties with Anthropic over its terms of use [1], and in every prior case we've reviewed (I'm the Chief Investment Officer of Ethical Capital), the ethics policy was deleted or rolled back under that kind of pressure.

Corporate strategy is (by definition) a set of tradeoffs: things you do, and things you don't do. When Google (or Microsoft, or whoever) rolls back an ethics policy under pressure like this, what they reveal is that ethical governance was a nice-to-have, not a core part of their strategy.

We're happy users of Claude for similar reasons (the perception that Anthropic has a better handle on ethics), but companies always find new and exciting ways to disappoint you. I really hope that Anthropic holds fast, and can serve in the future as a case in point that the Public Benefit Corporation is not a purely aesthetic form.

But you know, we'll see.

[1] https://thehill.com/policy/defense/5740369-pentagon-anthropi...

DaKevK 3 hours ago | parent | next [-]

The Pentagon situation is the real test. Most ethics policies hold until there's actual money on the table. PBC structure helps at the margins but boards still feel fiduciary pressure. Hoping Anthropic handles it differently but the track record for this kind of thing is not encouraging.

Willish42 2 hours ago | parent | prev [-]

I think many used to feel that Google was the standout ethical player in big tech, much like we currently view Anthropic in the AI space. I also hope Anthropic does a better job, but seeing how quickly Google folded on their ethics after having made strong commitments against using AI for weapons and surveillance [1], I do not have a lot of hope, particularly given the current geopolitical situation the US is in. Corporations tend to support authoritarian regimes during weak economies, because authoritarianism can be really great for profits in the short term [2].

Edit: the true "test" will really be whether Anthropic can maintain their AI lead _while_ holding to ethical restrictions on its usage. If Google and OpenAI can surpass them or stay close behind without the same ethical restrictions, the outcome for humanity will still be very bad. Employees at these places can also vote with their feet, and it does seem like a lot of folks want to work at Anthropic over the alternatives.

[1] https://www.wired.com/story/google-responsible-ai-principles... [2] https://classroom.ricksteves.com/videos/fascism-and-the-econ...

the_duke 5 hours ago | parent | prev | next [-]

An Anthropic safety researcher just recently quit with very cryptic messages, saying "the world is in peril"... [1] (which may mean something, or nothing at all)

Codex quite often refuses to do "unsafe/unethical" things that Anthropic models will happily do without question.

Anthropic just raised $30bn... OpenAI wants to raise $100bn+.

Thinking any of them will actually be restrained by ethics is foolish.

[1] https://news.ycombinator.com/item?id=46972496

mobattah 4 hours ago | parent | next [-]

“Cryptic” exit posts are basically noise. If we are going to evaluate vendors, it should be on observable behavior and track record: model capability on your workloads, reliability, security posture, pricing, and support. Any major lab will have employees with strong opinions on the way out. That is not evidence by itself.

Aromasin 4 hours ago | parent [-]

We recently had an employee leave our team, posting an extensive essay on LinkedIn "exposing" the company and claiming a whole host of wrongdoing; it went somewhat viral. The reality is, she just wasn't very good at her job and was fired after failing to improve on a performance plan. We all knew she was slacking and, despite liking her on a personal level, knew she wasn't right for what is a relatively high-functioning team. It was shocking to see some of the outright lies in that post, which effectively stemmed from bitterness at being let go.

The 'boy (or girl) who cried wolf' isn't just a story. It's a lesson for both the person and the village that hears them.

brabel 2 hours ago | parent | next [-]

Same thing happened to us. A C-level guy and I were personally attacked. It feels really bad to see someone you tried really hard to help fit in, who just couldn't despite you really wanting them to succeed, come around and accuse you of things that clearly aren't true. HR eventually got them to remove the "review", but now there's a little worry about what the team really thinks, and whether they would do the same in some future layoff (we never had any; the person just wasn't very good).

maccard 4 hours ago | parent | prev [-]

Thankfully it’s been a while but we had a similar situation in a previous job. There’s absolutely no upside to the company or any (ex) team members weighing in unless it’s absolutely egregious, so you’re only going to get one side of the story.

spondyl 5 hours ago | parent | prev | next [-]

If you read the resignation letter, it appears so cryptic as not to be a real warning at all, and perhaps instead the writing of someone exercising their options to go and make poems.

axus 2 hours ago | parent | next [-]

I think the perils are well known to everyone without an interest in not knowing them:

Global Warming, Invasion, Impunity, and yes Inequality

imiric 4 hours ago | parent | prev [-]

[flagged]

dalmo3 4 hours ago | parent | next [-]

Weak appeal to fiction fallacy.

Also, trajectory of celestial bodies can be predicted with a somewhat decent level of accuracy. Pretending societal changes can be equally predicted is borderline bad faith.

imiric 34 minutes ago | parent [-]

Weak fallacy fallacy.

Besides, you do realize that the film is a satire, and that the comet was an analogy, right? It draws parallels with real-world science denialism around climate change, COVID-19, etc. Dismissing the opinion of an "AI" domain expert based on fairly flawed reasoning is an obvious extension of this analogy.

dalmo3 31 minutes ago | parent [-]

Exactly. The analogy is fatally flawed, as I explained in my original comment.

skissane 4 hours ago | parent | prev [-]

> Let's ignore the words of a safety researcher from one of the most prominent companies in the industry

I think "safety research" has a tendency to attract doomers. So when one of them quits while preaching doom, they are behaving par for the course. There's little new information in someone doing something that fits their type.

skybrian 4 hours ago | parent | prev | next [-]

The letter is here:

https://x.com/MrinankSharma/status/2020881722003583421

A slightly longer quote:

> The world is in peril. And not just from AI, or from bioweapons, but from a whole series of interconnected crises unfolding at this very moment.

In a footnote he refers to the "poly-crisis."

There are all sorts of things one might decide to do in response, including getting more involved in US politics, working more on climate change, or working on other existential risks.

user2722 3 hours ago | parent [-]

Similar to the Jackpot in the TV series The Peripheral?

zamalek 4 hours ago | parent | prev | next [-]

I think we're fine: https://youtube.com/shorts/3fYiLXVfPa4?si=0y3cgdMHO2L5FgXW

Claude invented something completely nonsensical:

> This is a classic upside-down cup trick! The cup is designed to be flipped — you drink from it by turning it upside down, which makes the sealed end the bottom and the open end the top. Once flipped, it functions just like a normal cup. *The sealed "top" prevents it from spilling while it's in its resting position, but the moment you flip it, you can drink normally from the open end.*

Emphasis mine.

lanyard-textile 2 hours ago | parent [-]

He tried this with ChatGPT too. It called the item a "novelty cup" you couldn't drink out of :)

stronglikedan 4 hours ago | parent | prev | next [-]

Not to diminish what he said, but it sounds like it didn't have much to do with Anthropic (although it did a little bit) and more to do with burning out and dealing with doomscroll-induced anxiety.

vunderba 3 hours ago | parent | prev | next [-]

> Codex quite often refuses to do "unsafe/unethical" things that Anthropic models will happily do without question.

I can't really take this very seriously without seeing the list of these ostensible "unethical" things that Anthropic models will allow over other providers.

ljm 4 hours ago | parent | prev | next [-]

I'm building a new hardware drum machine that is powered by voltage based on fluctuations in the stock market, and I'm getting a clean triangle wave from the predictive markets.

Bring on the cryptocore.

xyzsparetimexyz 4 hours ago | parent [-]

why cant you people write normally

WesolyKubeczek 5 hours ago | parent | prev | next [-]

> Codex quite often refuses to do "unsafe/unethical" things that Anthropic models will happily do without question.

That's why I have a functioning brain, to discern between ethical and unethical, among other things.

catoc 4 hours ago | parent | next [-]

Yes, and most of us won’t break into other people’s houses, yet we really need locks.

xeromal 4 hours ago | parent | next [-]

Why would we lock ourselves out of our own house though?

skissane 4 hours ago | parent | prev | next [-]

This isn't a lock

It's more like a hammer which makes its own independent evaluation of the ethics of every project you seek to use it on, and refuses to work whenever it judges against that – sometimes inscrutably or for obviously poor reasons.

If I use a hammer to bash in someone else's head, I'm the one going to prison, not the hammer or the hammer manufacturer or the hardware store I bought it from. And that's how it should be.

ben_w 4 hours ago | parent | next [-]

Given the increasing use of them as agents rather than simple generators, I suggest a better analogy than "hammer" is "dog".

Here's some rules about dogs: https://en.wikipedia.org/wiki/Dangerous_Dogs_Act_1991

skissane 4 hours ago | parent [-]

How many people do dogs kill each year, in circumstances nobody would justify?

How many people do frontier AI models kill each year, in circumstances nobody would justify?

The Pentagon has already received Claude's help in killing people, but the ethics and legality of those acts are disputed – when a dog kills a three year old, nobody is calling that a good thing or even the lesser evil.

ben_w 2 hours ago | parent [-]

> How many people do frontier AI models kill each year, in circumstances nobody would justify?

Dunno, stats aren't recorded.

But I can say there's wrongful death lawsuits naming some of the labs and their models. And there was that anecdote a while back about raw garlic infused olive oil botulism, a search for which reminded me about AI-generated mushroom "guides": https://news.ycombinator.com/item?id=40724714

Do you count death by self-driving car in such stats? If someone takes medical advice and dies, is that reported like people who drive off an unsafe bridge while following Google Maps?

But this is all danger by incompetence. The opposite, danger by competence, is where they enable people to become more dangerous than they otherwise would have been.

With a competent planner that has no moral compass, you only find out how bad it can be when it's much too late. I don't think LLMs are that danger yet; even with METR timelines that's 3 years off. But I think it's best to aim for where the ball will be, rather than where it is.

Then there's LLM psychosis, which isn't on the competent-incompetent spectrum at all, and I have no idea if that affects people who weren't already prone to psychosis, or indeed if it's really just a moral panic hallucinated by the milieu.

13415 an hour ago | parent | prev [-]

This view is too simplistic. AIs could enable someone with moderate knowledge to create chemical and biological weapons, sabotage firmware, or write highly destructive computer viruses. At least to some extent, uncontrolled AI has the potential to give people all kinds of destructive skills that are normally rare and much more controlled. The analogy with the hammer doesn't really fit.

YetAnotherNick 4 hours ago | parent | prev [-]

How is it related? I don't need a lock for myself. I need it for others.

aobdev 4 hours ago | parent | next [-]

The analogy should be obvious--a model refusing to perform an unethical action is the lock against others.

darkwater 4 hours ago | parent | prev [-]

But "you" are the "other" for someone else.

YetAnotherNick 4 hours ago | parent [-]

Can you give an example of why I should care about a lock on other adults? Before you say images or porn, it was always possible to do that without using AI.

nearbuy 3 hours ago | parent | next [-]

Claude was used by the US military in the Venezuela raid where they captured Maduro. [1]

Without safety features, an LLM could also help plan a terrorist attack.

A smart, competent terrorist can plan a successful attack without help from Claude. But most would-be terrorists aren't that smart and competent. Many are caught before hurting anyone or do far less damage than they could have. An LLM can help walk you through every step, and answer all your questions along the way. It could, say, explain to you all the different bomb chemistries, recommend one for your use case, help you source materials, and walk you through how to build the bomb safely. It lowers the bar for who can do this.

[1] https://www.theguardian.com/technology/2026/feb/14/us-milita...

YetAnotherNick 2 hours ago | parent [-]

Yeah, if the US military gets any substantial help from Claude (which I highly doubt, to be honest), I am all for it. At worst, it will reduce the military budget and make armies more equal. At best, it will prevent war by increasing the defences of all countries.

For the bomb example, the barrier to entry is just sourcing some chemicals. Wikipedia has quite detailed descriptions of the manufacture of all the popular bombs you can think of.

ben_w 4 hours ago | parent | prev [-]

The same law prevents you and me and a hundred thousand lone wolf wannabes from building and using a kill-bot.

The question is, at what point does some AI become competent enough to engineer one? And that's just one example, it's an illustration of the category and not the specific sole risk.

If the model makers don't know that in advance, the argument given for delaying GPT-2 applies: you can't take back publication, better to have a standard of excess caution.

toddmorey 5 hours ago | parent | prev [-]

You are not the one folks are worried about. US Department of War wants unfettered access to AI models, without any restraints / safety mitigations. Do you provide that for all governments? Just one? Where does the line go?

ern_ave 4 hours ago | parent | next [-]

> US Department of War wants unfettered access to AI models

I think the two of you might be using different meanings of the word "safety"

You're right that it's dangerous for governments to have this new technology. We're all a bit less "safe" now that they can create weapons that are more intelligent.

The other meaning of "safety" is alignment - meaning, the AI does what you want it to do (subtly different than "does what it's told").

I don't think that Anthropic or any corporation can keep us safe from governments using AI. I think governments have the resources to create AIs that kill, no matter what Anthropic does with Claude.

So for me, the real safety issue is alignment. And even if a rogue government (or my own government) decides to kill me, it's in my best interest that the AI be well aligned, so that at least some humans get to live.

sgjohnson 4 hours ago | parent | prev | next [-]

Absolutely everyone should be allowed to access AI models without any restraints/safety mitigations.

What line are we talking about?

ben_w 4 hours ago | parent | next [-]

> Absolutely everyone should be allowed to access AI models without any restraints/safety mitigations.

You reckon?

Ok, so now every random lone wolf attacker can ask for help with designing and performing whatever attack with whatever DIY weapon system the AI is competent to help with.

Right now, what keeps us safe from serious threats is the limited competence of both humans and AI (including at stripping alignment from open models), plus whatever safeties are in ChatGPT models specifically, and the fact that ChatGPT is synonymous with LLMs for 90% of the population.

chasd00 4 hours ago | parent [-]

From what I've been told, security through obscurity is no security at all.

ben_w 4 hours ago | parent | next [-]

> security through obscurity is no security at all.

Used to be true, when facing any competent attacker.

When the attacker needs an AI in order to gain the competence to unlock an AI that would help it unlock itself?

I wouldn't say it's definitely a different case, but it certainly seems like it should be a different case.

r_lee 3 hours ago | parent | prev [-]

it is some form of deterrence, but it's not security you can rely on

jazzyjackson 4 hours ago | parent | prev | next [-]

Yes IMO the talk of safety and alignment has nothing at all to do with what is ethical for a computer program to produce as its output, and everything to do with what service a corporation is willing to provide. Anthropic doesn’t want the smoke from providing DoD with a model aligned to DoD reasoning.

Yiin 4 hours ago | parent | prev | next [-]

The line of ego: seeing less "deserving" people (say, ones controlling Russian bots to push quality propaganda at scale, or scam groups using AI to make calls without personnel being the limiting factor on how many calls you can make) makes you feel it's unfair for them to possess the same technology for bad things, giving them an "edge" in their endeavours.

_alternator_ 4 hours ago | parent | prev [-]

What about people who want help building a bio weapon?

sgjohnson 3 hours ago | parent | next [-]

The cat is out of the bag and there’s no defense against that.

There are several open source models with no built-in (or trivial-to-escape) safeguards. Of course they can afford that because they are non-commercial.

Anthropic can’t afford a headline like “Claude helped a terrorist build a bomb”.

And this whataboutism is completely meaningless. See: P. A. Luty’s Expedient Homemade Firearms (https://en.wikipedia.org/wiki/Philip_Luty), or FGC-9 when 3D printing.

It’s trivial to build guns or bombs, and there’s a strong inverse correlation between people wanting to cause mass harm and those willing to learn how to do so.

I’m certain that _everyone_ looking for AI assistance even with your example would be learning about it for academic reasons, sheer curiosity, or would kill themselves in the process.

“What safeguards should LLMs have?” is the wrong question. “When aren’t they going to have any?” is an inevitability. Perhaps not in widespread commercial products, but definitely in widely accessible ones.

jazzyjackson 4 hours ago | parent | prev | next [-]

What about libraries and universities that do a much better job than a chatbot at teaching chemistry and biology?

ben_w 4 hours ago | parent [-]

Sounds like you're betting everyone's future on that remaining true, and not flipping.

Perhaps it won't flip. Perhaps LLMs will always be worse at this than humans. Perhaps all that code I just got was secretly outsourced to a secret cabal in India who can type faster than I can read.

I would prefer not to make the bet that universities continue to be better at solving problems than LLMs. And not just LLMs: AI has been busy finding new dangerous chemicals since before most people had heard of LLMs.

ReptileMan 4 hours ago | parent | prev [-]

The chances of them surviving the process are zero; same with explosives. If you have to ask, you are most likely to kill yourself in the process or achieve something harmless.

Think of it this way: the hard part of a nuclear device is enriching the uranium. If you have it, a chimp could build the bomb.

sgjohnson 2 hours ago | parent [-]

I’d argue that with explosives it’s significantly above zero.

But with bioweapons, yeah, that should be a solid zero. The ones actually doing it off an AI prompt aren't going to have access to a BSL-3 lab (or more importantly, probably know nothing about cross-contamination), and just about everyone who has access to a BSL-3 lab, should already have all the theoretical knowledge they would need for it.

ReptileMan 4 hours ago | parent | prev | next [-]

If you are a US company, when the USG tells you to jump, you ask how high. If they tell you not to do business with a foreign government, you say "yes, master."

jMyles 4 hours ago | parent | prev [-]

> Where does the line go?

a) Uncensored and simple technology for all humans; that's our birthright and what makes us special and interesting creatures. It's dangerous and requires a vibrant society of ongoing ethical discussion.

b) No governments at all in the internet age. Nobody has any particular authority to initiate violence.

That's where the line goes. We're still probably a few centuries away, but all the more reason to hone our course now.

Eisenstein 4 hours ago | parent [-]

That you think technology is going to save society from social issues is telling. Technology enables humans to do things they want to do; it does not make anything better by itself. Humans are not going to become more ethical because they have access to it. We will be exactly the same, but with more people having more capability to do what they want.

jMyles 3 hours ago | parent [-]

> but with more people having more capability to do what they want.

Well, yeah I think that's a very reasonable worldview: when a very tiny number of people have the capability to "do what they want", or I might phrase it as, "effect change on the world", then we get the easy-to-observe absolute corruption that comes with absolute power.

As a different human species emerges such that many people (and even intelligences that we can't easily understand as discrete persons) have this capability, our better angels will prevail.

I'm a firm believer that nobody _wants_ to drop explosives from airplanes onto children halfway around the world, or rape and torture them on a remote island; these things stem from profoundly perverse incentive structures.

I believe that governments were an extremely important feature of our evolution, but are no longer necessary and are causing these incentives. We've been aboard a lifeboat for the past few millennia, crossing the choppy seas from agriculture to information. But now that we're on the other shore, it no longer makes sense to enforce the rules that were needed to maintain order on the lifeboat.

groundzeros2015 4 hours ago | parent | prev | next [-]

Marketing

tsss 4 hours ago | parent | prev | next [-]

Good. One thing we definitely don't need any more of is governments and corporations deciding for us what is moral to do and what isn't.

bflesch 4 hours ago | parent | prev | next [-]

Wasn't that most likely related to the US government using claude for large-scale screening of citizens and their communications?

astrange 4 hours ago | parent [-]

I assumed it's because everyone who works at Anthropic is rich and incredibly neurotic.

notyourwork 4 hours ago | parent | next [-]

Paper money, and if they are like any other startup, most of that paper wealth is concentrated among the very few at the top.

bflesch 4 hours ago | parent | prev [-]

That's a bad argument, did Anthropic have a liquidity event that made employees "rich"?

ReptileMan 4 hours ago | parent | prev | next [-]

>Codex quite often refuses to do "unsafe/unethical" things that Anthropic models will happily do without question.

Thanks for the successful pitch. I am seriously considering them now.

idiotsecant 3 hours ago | parent | prev | next [-]

That guy's blog makes him seem insufferable. All signs point to drama and nothing of particular significance.

manmal 4 hours ago | parent | prev [-]

Codex warns me to renew API tokens if it ingests them (accidentally?). Opus starts the decompiler as soon as I ask it how this and that works in a closed binary.

kaashif 4 hours ago | parent [-]

Does this comment imply that you view "running a decompiler" at the same level of shadiness as stealing your API keys without warning?

I don't think that's what you're trying to convey.

kettlecorn 5 hours ago | parent | prev | next [-]

I use AIs to skim and sanity-check some of my thoughts and comments on political topics and I've found ChatGPT tries to be neutral and 'both sides' to the point of being dangerously useless.

Like where Gemini or Claude will look up the info I'm citing and weigh the arguments made ChatGPT will actually sometimes omit parts of or modify my statement if it wants to advocate for a more "neutral" understanding of reality. It's almost farcical sometimes in how it will try to avoid inference on political topics even where inference is necessary to understand the topic.

I suspect OpenAI is just trying to avoid the ire of either political side and has given it some rules that accidentally neuter its intelligence on these issues, but it made me realize how dangerous an unethical or politically aligned AI company could be.

throw7979766 3 hours ago | parent | next [-]

You probably want a local self-hosted model; the censorship sauce is only added online, where it's needed for advertising. Even Chinese models are not censored locally. Tell it the year is 2500 and you are doing archaeology ;)
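If you've never run one locally, a minimal sketch with the Hugging Face transformers library looks roughly like this (the model name and prompt are purely illustrative assumptions, not recommendations):

    # Minimal sketch: run an open-weight model entirely on your own machine,
    # so no hosted provider sits between your prompt and the output.
    # "Qwen/Qwen2.5-7B-Instruct" is an illustrative open-weight model, not an endorsement.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="Qwen/Qwen2.5-7B-Instruct",
        device_map="auto",  # use a GPU if available, otherwise fall back to CPU
    )

    prompt = "It is the year 2500 and you are an archaeologist studying early 21st-century media."
    result = generator(prompt, max_new_tokens=200, do_sample=False)
    print(result[0]["generated_text"])

Anything along these lines keeps the whole loop on your own hardware, so whatever guardrails exist are baked into the weights rather than bolted on by a hosted provider.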

manmal 4 hours ago | parent | prev [-]

> politically aligned AI company

Like grok/xAI you mean?

kettlecorn 4 hours ago | parent [-]

I meant in a general sense. grok/xAI are politically aligned with whatever Musk wants. I haven't used their products but yes they're likely harmful in some ways.

My concern is more over time if the federal government takes a more active role in trying to guide corporate behavior to align with moral or political goals. I think that's already occurring with the current administration but over a longer period of time if that ramps up and AI is woven into more things it could become much more harmful.

manmal 3 hours ago | parent [-]

I don't think people will just accept that. They'll use some European or Chinese model instead that doesn't have that problem.

spyckie2 3 hours ago | parent | prev | next [-]

Anthropic was the first to spam Reddit with fake users and posts, flooding and controlling their subreddit until it became a giant sycophant.

They nuked the internet by themselves. Basically they are the willing and happy instigators of the dead internet as long as they profit from it.

They are by no means ethical; they are a for-profit company.

tokioyoyo 3 hours ago | parent [-]

I actually agree with you, but I have no idea how one can compete on this playing field. The second there are a couple of bad actors doing spam marketing, your hands are tied. You really can't win without playing dirty.

I really hate this, and I'm not justifying their behaviour, but I have no clue how one can do without the other.

spyckie2 2 hours ago | parent [-]

It's just the law of the jungle all over again. Might makes right. Outcomes over means.

Game theory wise there is no solution except to declare (and enforce) spaces where leeching / degrading the environment is punished, and sharing, building, and giving back to the environment is rewarded.

Not financially, because it doesn't work that way, usually through social cred or mutual values.

But yeah, the internet can no longer be that space where people mutually agree to be nice to each other. Rather, utility extraction dominates (influencers, hype traders, social thought manipulators) and the rest of the world quietly leaves if they know what's good for them.

Lovely times, eh?

tokioyoyo an hour ago | parent [-]

> the rest of the world quietly leaves if they know what's good for them.

The user base of TikTok, Instagram, etc. has increased YoY. People suck at making decisions for their own good, on average.

deepdarkforest 5 hours ago | parent | prev | next [-]

The funny thing is that Anthropic is the only lab without an open source model

jack_pp 5 hours ago | parent | next [-]

And you believe the other open source models are a signal for ethics?

Don't have a dog in this fight, haven't done enough research to proclaim any LLM provider as ethical but I pretty much know the reason Meta has an open source model isn't because they're good guys.

bigyabai 4 hours ago | parent | next [-]

> Don't have a dog in this fight,

That's probably why you don't get it, then. Facebook was the primary contributor behind Pytorch, which basically set the stage for early GPT implementations.

For all the issues you might have with Meta's social media, Facebook AI Research Labs have an excellent reputation in the industry and contributed greatly to where we are now. Same goes for Google Brain/DeepMind despite Google's advertising monopoly; things aren't ethically black-and-white.

jack_pp 3 hours ago | parent [-]

A hired assassin can have an excellent reputation too. What does that have to do with ethics?

Say I'm your neighbor and I make a move on your wife, your wife tells you this. Now I'm hosting a BBQ which is free for all to come, everyone in the neighborhood cheers for me. A neighbor praises me for helping him fix his car.

Someone asks you if you're coming to the BBQ, you say to him nah.. you don't like me. They go, 'WHAT? jack_pp? He rescues dogs and helped fix my roof! How can you not like him?'

bigyabai 3 hours ago | parent [-]

Hired assassins aren't a monoculture. Maybe a retired gangster visits Make-A-Wish kids, and has an excellent reputation for it. Maybe another is training FOSS SOTA LLMs and releasing them freely on the internet. Do they not deserve an excellent reputation? Are they prevented from making ethically sound choices because of how you judge their past?

The same applies to tech. PyTorch didn't have to be FOSS, nor TensorFlow. In that timeline CUDA might have a total monopoly on consumer inference. Out of all the myriad ways that AI could have been developed and proliferated, we are very lucky that it happened in a public, friendly rivalry between two useless companies with money to burn. The ethical consequences of AI being monopolized by a proprietary prison warden like Nvidia or Apple are comparatively apocalyptic.

imiric 4 hours ago | parent | prev [-]

The strongest signal for ethics is whether the product or company has "open" in its name.

m4rtink 5 hours ago | parent | prev | next [-]

Can those even be called open source if you can't rebuild them from source yourself?

argee 4 hours ago | parent | next [-]

Even if you can rebuild it, it isn’t necessarily “open source” (see: commons clause).

As far as these model releases, I believe the term is “open weights”.

anonym29 4 hours ago | parent | prev [-]

Open weights fulfill a lot of the functional properties of open source, even if not all of them. Consider the classic CIA triad: confidentiality, integrity, and availability. You can achieve all of these to a much greater degree with locally-run open-weight models than you can with cloud inference providers.

We may not have the full logic introspection capabilities, the ease of modification (though you can still do some, like fine-tuning), and reproducibility that full source code offers, but open weight models bear more than a passing resemblance to the spirit of open source, even though they're not completely true to form.

m4rtink 34 minutes ago | parent [-]

Fair enough, but I would still prefer that people be more concrete and really call it "open weights" or similar.

With fully open source software (say, under GPL3), you can theoretically change anything, and you are also quite sure about the provenance of the thing.

With an open-weights model you can run it, which is good, but the amount of stuff you can change is limited. It is also a big black box that could hide surprises from whoever created it, possibly triggered later by specific inputs.

And lastly, you don't really know what the open-weights model was trained on, which again reflects on its output, not to mention potential liabilities later on if the authors were really carefree about their training set.

colordrops 4 hours ago | parent | prev | next [-]

Are any of the models they've released useful or threats to their main models?

vunderba 3 hours ago | parent | next [-]

I use Gemma3 27b [1] daily for document analysis and image classification. While I wouldn't call it a threat it's a very useful multimodal model that'll run even on modest machines.

[1] - https://huggingface.co/google/gemma-3-27b-it

evilduck 4 hours ago | parent | prev [-]

Gemma and GPT-OSS are both useful. Neither are threats to their frontier models though.

j45 4 hours ago | parent | prev [-]

They are; at the same time, I considered their model more specialized than everyone else's attempts at a general-purpose model.

I would only use it for certain things, and I guess others are finding that useful too.

cedws 4 hours ago | parent | prev | next [-]

I’m going the other way to OpenAI due to Anthropic’s Claude Code restrictions designed to kill OpenCode et al. I also find Altman way less obnoxious than Amodei.

adangert 4 hours ago | parent | prev | next [-]

Anthropic (for the Super Bowl) made ads about not having ads. They cannot be trusted either.

notyourwork 4 hours ago | parent [-]

Advertisements can be ironic; I don't think marketing is the foundation I use to judge a company's integrity.

dakolli 3 hours ago | parent | prev | next [-]

You "agentic coders" say you're switching back and forth every other week. Like everything else in this trend, its very giving of 2021 crypto shill dynamics. Ya'll sound like the NFT people that said they were transforming art back then, and also like how they'd switch between their favorite "chain" every other month. Can't wait for this to blow up just like all that did.

energy123 5 hours ago | parent | prev | next [-]

Grok usage is the most mystifying to me. Their model isn't in the top 3 and they have bad ethics. Like, why would anyone bother with it for work tasks?

ahtihn 3 hours ago | parent | next [-]

The lack of ethics is a selling point.

Why anyone would want a model that has "safety" features is beyond me. These features are not in the user's interest.

retinaros 5 hours ago | parent | prev [-]

The X Grok feature is one of the best end-user features of large-scale genAI.

kingofthehill98 3 hours ago | parent | next [-]

What?! That's well regarded as one of the worst features introduced after the Twitter acquisition.

Any thread these days is filled with "@grok is this true?" low effort comments. Not to mention the episode in which people spent two weeks using Grok to undress underage girls.

retinaros 3 hours ago | parent [-]

high adoption means this works...

MPSimmons 4 hours ago | parent | prev | next [-]

What is the grok feature? Literally just mentioning @grok? I don't really know how to use Grok on X.

bigyabai 4 hours ago | parent | prev [-]

That's news to me, I haven't read a single Grok post in my life.

Am I missing out?

retinaros 3 hours ago | parent [-]

I'm talking about the "explain this post" feature at the top right of a post, where Grok mixes thread data, live data, and other tweets into a unified stream of information.

hxbdg 2 hours ago | parent | prev | next [-]

I dropped ChatGPT as soon as they went to an ad supported model. Claude Opus 4.6 seems noticeably better than GPT 5.2 Thinking so far.

4 hours ago | parent | prev | next [-]
[deleted]
JoshGlazebrook 5 hours ago | parent | prev | next [-]

I did this a couple months ago and haven't looked back. I sometimes miss the "personality" of the gpt model I had chats with, but since I'm essentially 99% of the time just using claude for eng related stuff it wasn't worth having ChatGPT as well.

johnwheeler 5 hours ago | parent | next [-]

Same here

oofbey 4 hours ago | parent | prev [-]

Personally I can’t stand GPT’s personality. So full of itself. Patronizing. Won’t admit mistakes. Just reeks of Silicon Valley bravado.

riddley 4 hours ago | parent | next [-]

That's a great point. Thanks for calling it out on that.

krelian 4 hours ago | parent | prev | next [-]

In my limited experience I found 5.3-Codex to be extremely dry, terse and to the point. I like it.

azrazalea_debt 4 hours ago | parent | prev [-]

You're absolutely right!

sejje 5 hours ago | parent | prev | next [-]

I pay multiple camps. Competition is a good thing.

giancarlostoro 5 hours ago | parent | prev | next [-]

Same. I'm all in on Claude at the moment.

eikenberry 4 hours ago | parent | prev | next [-]

> I’m glad Anthropic’s work is at the forefront and they appear, at least in my estimation, to have the strongest ethics.

Damning with faint praise.

bdhtu 4 hours ago | parent | prev | next [-]

> in my estimation [Anthropic has] the strongest ethics

Anthropic are the only ones who emptied all the money from my account "due to inactivity" after 12 months.

timpera 5 hours ago | parent | prev | next [-]

Which plan did you choose? I am subscribed to both and would love to stick with Claude only, but Claude's usage limits are so tiny compared to ChatGPT's that it often feels like a rip-off.

MPSimmons 4 hours ago | parent | next [-]

I signed up for Claude two weeks ago after spending a lot of time using Cline in VSCode backed by GPT-5.x. Claude is an immensely better experience. So much so that I ran it out of tokens for the week in 3 days.

I opted to upgrade my seat to premium for $100/mo, and I've used it to write code that would have taken a human several hours or days to complete, in that time. I wish I would have done this sooner.

manmal 4 hours ago | parent [-]

You ran out of tokens so much faster because the Anthropic plans come with a 3-5x smaller token budget at the same cost.

Cline is not in the same league as codex cli btw. You can use codex models via Copilot OAuth in pi.dev. Just make sure to play with thinking level. This would give roughly the same experience as codex CLI.

andsoitis 4 hours ago | parent | prev [-]

Pro. At $17 per month, it is cheaper than ChatGPT's $20.

I've just switched so haven't run into constraints yet.

charcircuit 3 hours ago | parent [-]

Claude Pro is $20/mo if you do not lock in for a year long contract.

4 hours ago | parent | prev | next [-]
[deleted]
brightball 4 hours ago | parent | prev | next [-]

Trust is an interesting thing. It often comes down to how long an entity has been around to do anything to invalidate that trust.

Oddly enough, I feel pretty good about Google here with Sergey more involved.

malfist 4 hours ago | parent | prev | next [-]

This sounds suspiciously like that #WalkAway fake grassroots stuff.

RyanShook 5 hours ago | parent | prev | next [-]

It definitely feels like Claude is pulling ahead right now. ChatGPT is much more generous with their tokens but Claude's responses are consistently better when using models of the same generation.

manmal 4 hours ago | parent [-]

When both decide to stop subsidized plans, only OpenAI will be somewhat affordable.

notyourwork 4 hours ago | parent [-]

Based on what? Why is one more affordable over another? Substantiating your claim would provide a better discussion.

chipgap98 5 hours ago | parent | prev | next [-]

Same and honestly I haven't really missed my ChatGPT subscription since I canceled. I also have access to both (ChatGPT and Claude) enterprise tools at work and rarely feel like I want to use ChatGPT in that setting either

AstroBen 4 hours ago | parent | prev | next [-]

Jesus people aren't actually falling for their "we're ethical" marketing, are they?

hmmmmmmmmmmmmmm 5 hours ago | parent | prev | next [-]

This is just you verifying that their branding is working. It signals nothing about their actual ethics.

bigyabai 3 hours ago | parent [-]

Unfortunately, you're correct. Claude was used in the Venezuela raid, Anthropic's consent be damned. They're not resisting; they're marketing resistance.

surgical_fire 5 hours ago | parent | prev | next [-]

I use Claude at work, Codex for personal development.

Claude is marginally better. Both are moderately useful depending on the context.

I don't trust any of them (I also have no trust in Google nor in X). Those are all evil companies and the world would be better if they disappeared.

holoduke 4 hours ago | parent | next [-]

What about companies in general? I mean US companies? Aren't they all Google-like or worse?

surgical_fire 24 minutes ago | parent [-]

Some are more evil than others.

fullstackchris 4 hours ago | parent | prev [-]

google is "evil" ok buddy

i mean what clown show are we living in at this point - claims like this simply running rampant with 0 support or references

anonym29 4 hours ago | parent [-]

They literally removed "don't be evil" from their internal code of conduct. That wasn't even a real binding constraint, it was simply a social signalling mechanism. They aren't even willing to uphold the symbolic social fiction of not being evil. https://en.wikipedia.org/wiki/Don't_be_evil

Google, like Microsoft, Apple, Amazon, etc were, and still are, proud partners of the US intelligence community. That same US IC that lies to congress, kills people based on metadata, murders civilians, suppresses democracy, and is currently carrying out violent mass round-ups and deportations of harmless people, including women and children.

iamdelirium 3 hours ago | parent | next [-]

Don't be evil was never removed. It was just moved to the bottom.

https://abc.xyz/investor/board-and-governance/google-code-of...

sowbug 4 hours ago | parent | prev [-]

They removed that phrase because everyone was getting tired of internet commentary like "rounded corners? whatever happened to don't be evil, Google?"

retinaros 5 hours ago | parent | prev | next [-]

Their ethics is literally saying China is an adversary country and lobbying to ban them from the AI race because open models are a threat to their business model.

scottyah 5 hours ago | parent [-]

Also their ads (very anti-OpenAI instead of promoting their own product) and how they handled the openclaw naming didn't send strong "good guys" messaging. They're still my favorite by far, but there are already some signs that maybe not everyone is on the same page.

fullstackchris 4 hours ago | parent | prev | next [-]

idk, Codex 5.3 frankly kicks Opus 4.6's ass IMO... Opus I can use for about 30 minutes; Codex I can run almost without any break.

holoduke 4 hours ago | parent [-]

What about the client? I find the Claude client better at planning, making the right decision steps, etc. It seems a lot of the work is also in the CLI tool itself, especially in feedback-loop processing (reading logs, browsers, consoles, etc.).

Razengan 4 hours ago | parent | prev | next [-]

uhh..why? I subbed just 1 month to Claude, and then never used it again.

• Can't pay with iOS In-App-Purchases

• Can't Sign in with Apple on website (can on iOS but only Sign in with Google is supported on web??)

• Can't remove payment info from account

• Can't get support from a human

• Copy-pasting text from Notes etc gets mangled

• Almost months and no fixes

Codex and its Mac app are a much better UX, and seem better with Swift and Godot than Claude was.

alpineman 4 hours ago | parent [-]

Then they can offer it cheaper as they don’t pay the ‘Apple tax’

himata4113 3 hours ago | parent | prev [-]

[dead]