throwa356262 2 days ago

Seriously, why can't huge companies like OpenAI and Google produce documentation that is half this good??

https://api-docs.deepseek.com/guides/thinking_mode

No BS, just a concise description of exactly what I need to write my own agent.
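For a sense of how little glue that page demands, here is a minimal sketch of the request/response handling it describes. The OpenAI-compatible shape and the `reasoning_content` field are what the docs document; the model name and mock response here are illustrative only:

```python
def build_request(user_prompt):
    """Build a chat request for DeepSeek's reasoning model."""
    return {
        "model": "deepseek-reasoner",
        "messages": [{"role": "user", "content": user_prompt}],
    }

def split_reply(response):
    """Separate the chain-of-thought from the final answer.

    Per the docs, the assistant message carries its reasoning in
    'reasoning_content' and the answer in 'content'; only 'content'
    should be appended back into the conversation history.
    """
    message = response["choices"][0]["message"]
    return message.get("reasoning_content", ""), message["content"]

# Mocked response in the documented shape, for illustration only.
mock = {
    "choices": [{"message": {
        "reasoning_content": "The user asked 2+2; that is 4.",
        "content": "4",
    }}]
}
thinking, answer = split_reply(mock)
print(answer)  # -> 4
```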

u_sama 2 days ago | parent | next [-]

I am very partial to Mistral's API docs https://docs.mistral.ai/api

eshack94 2 days ago | parent [-]

Agreed, they also have great documentation. There's something to be said for documentation that is so concise, well laid out, and immediately actionable for those looking to get started quickly.

lykr0n 2 days ago | parent | prev | next [-]

It's because they're optimizing for a different problem.

Western models are being optimized to be used as an interchangeable product. Chinese models are being optimized to be built upon.

Barbing 2 days ago | parent | next [-]

>Western Models are optimizing to be used as an interchangeable product.

But so much investment in their platforms, not just their APIs?

raincole 2 days ago | parent | prev [-]

[flagged]

setr 2 days ago | parent | next [-]

First you clone the API of the winner, because you want to siphon users from its install-base and offer de-risked switch over cost.

Now that you’re winning, others start cloning your API to siphon your users.

Now that you’re losing, you start cloning the current winner, who is probably a clone of your clone.

Highly competitive markets tend to normalize, because lock-in is a cost you can’t charge and remain competitive. The customer holds power here, not the supplier.

That's also why everyone is trying to build into the less competitive spaces, where they could potentially build a moat: tooling, certs, specialized training data, etc.
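The "de-risked switch-over cost" above is visible in practice: several labs expose OpenAI-compatible chat endpoints, so moving between vendors is largely a configuration change. A minimal sketch of that property (the base URLs and model names below are assumptions from memory; verify them against each provider's own docs):

```python
# Because many providers clone the chat-completions shape, the request
# body is identical across vendors and only the endpoint config differs.
# All base URLs and model names here are illustrative assumptions.
PROVIDERS = {
    "openai":   {"base_url": "https://api.openai.com/v1", "model": "gpt-4o-mini"},
    "deepseek": {"base_url": "https://api.deepseek.com",  "model": "deepseek-chat"},
    "mistral":  {"base_url": "https://api.mistral.ai/v1", "model": "mistral-small-latest"},
}

def chat_endpoint(provider):
    """Return (url, payload skeleton) for a vendor; the payload shape is
    the same for all of them, which is the lock-in-free property at issue."""
    cfg = PROVIDERS[provider]
    url = cfg["base_url"].rstrip("/") + "/chat/completions"
    payload = {"model": cfg["model"], "messages": []}
    return url, payload

url, payload = chat_endpoint("deepseek")
print(url)  # -> https://api.deepseek.com/chat/completions
```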

hunter67 2 days ago | parent | prev | next [-]

Our (western) economic model forces competing individual companies to be profitable quickly. China can ignore DeepSeek losing money, because they know developing DeepSeek will help China. Not every institution needs to be profitable.

naveen99 2 days ago | parent | next [-]

You mean like intel, tesla, spacex, openai ?

deaux 2 days ago | parent | prev [-]

Ah yes, the Western economic model forcing individual American companies like Amazon, YouTube and Uber to become profitable after... *checks notes* ...14 years for Uber, 9 years for Amazon, many years for YouTube.

FuckButtons 2 days ago | parent | prev | next [-]

Yes, they want to win the same way they won more or less every other economic competition in the last 30 years: scale out, drop prices, and asphyxiate the competition.

simonjgreen 2 days ago | parent | prev | next [-]

Yeah, it’s an interesting one. I think inertia and expectations at this point? I don’t think the big labs anticipated how low the model switching costs would be and how quickly their leads would be eroded (by each other and the upstarts)

They are developing their moats with the platform tooling around it right now though. Look at Anthropic with Routines and OpenAI with Agents. Drop that capability into a business with loose controls and suddenly you have a very sticky product with high switching costs. Meanwhile if you stick with purely the 'chat' use cases, even Cowork and scheduled tasks, you maintain portability.

tick_tock_tick 2 days ago | parent | prev | next [-]

They are all racing to AGI. They aren't designing them to be interchangeable; they just happen to be.

rglullis 2 days ago | parent | next [-]

No, they are not. If they were "racing to AGI" they would be working together. OpenAI would still be focused on being a non-profit. Anthropic wouldn't be blocking distillation on their models.

koe123 2 days ago | parent | prev [-]

If by AGI you mean IPO, sure. I genuinely don't believe Dario nor Sam should be trusted at this point. Elon levels of overpromising and underdelivering.

djmips 2 days ago | parent [-]

If by AGI you mean IPO - I automatically read that in Fireship's voice. XD

peepee1982 2 days ago | parent | prev [-]

If you want other people to know whether you're being genuine or sarcastic, you'll have to put a bit more effort into your comments. Your comment just adds noise.

kennyloginz 2 days ago | parent [-]

What da?

madduci a day ago | parent | prev | next [-]

For me, DeepSeek has been the best so far in terms of coding skills, performance, and documentation all together. Too bad it gets flagged as 'concerning' when it comes to privacy, while Gemini, ChatGPT and Claude go way beyond that, especially their mobile apps, which require a lot of permissions.

vitorgrs 2 days ago | parent | prev | next [-]

Meanwhile, they don't actually say which model you are running on Deepseek Chat website.

alansaber 2 days ago | parent | prev | next [-]

Because they produce revenue from products which abstract this away

Alifatisk 2 days ago | parent | prev | next [-]

You might enjoy Z.ai's API docs as well

kubb 2 days ago | parent | prev | next [-]

Western orgs have been captured by Silicon Valley style patrimonialism, and aren’t based on merit anymore.


kccqzy 2 days ago | parent | prev [-]

I spent only two minutes reading their documentation and it’s clear no one did any proofreading and it’s full of mistakes made by non-native speakers.

Example: the second sentence on the first page says “softwares” but “software” is a mass noun that cannot be pluralized.

Example: the third page about tokens has some zipped code to “calculate the token usage for your intput/output” and obviously “intput” should be “input” but misspelled.

As a company that produces LLMs, they could have even used their own LLM to edit their documentation to fix grammar issues, and yet they did not.

Maybe I’m just extra sensitive to grammar and spelling issues but this kind of lack of attention to detail is a huge subconscious turnoff. I had to fight my urge to close the tab.

Maxatar 2 days ago | parent | next [-]

Yeah, I think those details are the least of most people's concerns. I can't vouch one way or another for DeepSeek's documentation, but for me what matters most when reading documentation is being able to get the information I want efficiently, not whether someone spelled "software" as "softwares", which is a very common spelling in Asia, as an FYI.

I read OpenAI or Anthropic's documentation nowadays and it's just so full of useless junk and self-congratulation that it's a miserable experience to go through. It's a real shame, because OpenAI used to write stellar documentation and publish really lucid papers just a few years ago.

aprdm 2 days ago | parent | prev | next [-]

No one cares about this kind of stuff. 99% of devs are not native English speakers, what do you expect? It works and we can all understand it.

kccqzy 2 days ago | parent [-]

I try hard not to care but subconsciously spelling errors and grammar issues scream low-quality work to me. It’s the kind of mistake that’s the easiest to correct, and they didn’t bother.

u_fucking_dork 2 days ago | parent | next [-]

Missing comma in your first sentence was such an egregious grammar error that I was unable to finish reading the rest.

kccqzy 2 days ago | parent [-]

The phrase “missing comma” is missing an article. You need “a” or “the” before that. As a result when reading your comment, I subconsciously think of it as low quality.

But it’s okay. HN comments aren’t supposed to be high quality anyways. I know mine aren’t. But the official product documentation ought to be.

komali2 a day ago | parent | prev [-]

Why ought it be?

Between you, me, and the Deepseek team, so far as I'm aware, only one entity has caused the Western frontier model companies to panic by delivering an open model that competes far more cheaply, to the point where people are running versions of it at home.

So they spelled software wrong. So what? Outside of this being the mental equivalent of a too-scratchy-sweater for the kinds of people sensitive to that sort of thing, I don't see why it matters.

Those of us who have spent a lot of time programming with non-native English speakers (the majority of software engineers on earth) learned long ago that English ability has no correlation with engineering ability.

diydsp a day ago | parent [-]

It may be a sign deepseek isn't "only for" Americans. Billions of non-native speakers communicate in "flawed" versions of English. Similar for other languages. Circling back to polish instructions for the picky among the Americans... hmm

If it tickles anyone's subconscious feelings, it would be their internal guiding myth of exceptionalism. With their recent forays into authoritarianism, it's becoming ever harder to paper over the reality.

aprdm 2 days ago | parent | prev [-]

That seems like a you problem

amluto 2 days ago | parent | prev | next [-]

The tool calling Python example would have benefitted from actually parsing the tool call. As is, it explains almost nothing.

dackdel a day ago | parent | prev | next [-]

i dont think deepseek will ever recover from this. huge loss for them. they will stop the pursuit of agi cause of one hn user and a comma.

squirrellous a day ago | parent | prev | next [-]

This tells me a real developer wrote the docs, instead of someone who has good English writing skills but is less technical.

> they could have even used their own LLM to edit their documentation to fix grammar issues

In my experience, companies that do this rarely stop at using LLMs to fix grammar issues. It becomes full-on LLM speak quite fast, especially if there isn't a native English speaker in the room who can discern good writing from bad.

replwoacause a day ago | parent | prev | next [-]

pedantry

slopinthebag 2 days ago | parent | prev | next [-]

i prefer it cuz it indicates they didnt use an LLM to write their documentations and that its human generated

jen20 2 days ago | parent | prev | next [-]

> Example: the second sentence on the first page says “softwares” but “software” is a mass noun that cannot be pluralized.

I constantly see and hear this mistake from actual humans too.

It's fairly ironic that your own comment contains run-on sentences, speculative claims and phrasing peculiarities like "could have even" instead of "could even have". Perhaps you are less sensitive to this than you think!

angry_octet 2 days ago | parent [-]

There is a difference between conversational speech and formal speech like documentation. It isn't rational to criticise use of the former when such speech is complaining about errors in the latter.

It's strange that you criticise "could have even" when it is a phrasing clearly being used for emphasis. "Could even have" makes no clearer sense in context.

No irony detected.

ChrisClark 2 days ago | parent | prev [-]

Nobody cares, we're talking about quality documentation here, not a couple spelling mistakes