okdood64 a day ago

From the blog:

https://arxiv.org/abs/2501.00663

https://arxiv.org/pdf/2504.13173

Is there any other company that's openly publishing their research on AI at this level? Google should get a lot of credit for this.

Palmik a day ago | parent | next [-]

DeepSeek and other Chinese companies. Not only do they publish research, they put their money where their mouth is: they actually use it and prove it through their open models.

Most research coming out of big US labs is a poor indicator of practical performance. If it worked (too) well in practice, it wouldn't have been published.

Some examples from DeepSeek:

https://arxiv.org/abs/2405.04434

https://arxiv.org/abs/2502.11089

abbycurtis33 a day ago | parent [-]

[flagged]

pylotlight a day ago | parent | next [-]

Which of the ~5-10 papers DS published were stolen, exactly?

epsteingpt a day ago | parent [-]

Industrial-scale, national-government-sponsored IP theft is one of the most well-documented phenomena in modern business, and comments like these appear all the time...

cf. https://www.bbc.com/news/world-asia-china-64206950

Cursory searches provide ample evidence of the ongoing commitment. The House Homeland Security Committee's February 2025 China Threat Snapshot reports over 60 CCP-linked espionage cases from 2021-2024 across 20 states, with FBI data showing 80% of U.S. economic espionage prosecutions benefiting China and a China nexus in 60% of trade secret thefts, equating to $4,000-6,000 per American family. Rock-solid 2024-2025 examples include Ji Wang's November 2025 conviction for stealing DARPA fiber laser trade secrets worth millions for Chinese entities; Linwei Ding's March 2024 indictment for pilfering Google's AI algorithms to launch a PRC startup; and the Pangang Group's April 2025 Ninth Circuit ruling upholding charges of economic espionage for stealing DuPont's titanium dioxide production secrets.

Each of these cases requires meticulous and expensive documentation to prove, in a court of law, against people tasked with defending their innocence.

You can be absolutely sure there is IP theft going on, even if the U.S. can't 'prove' it.

FpUser a day ago | parent | next [-]

You were asked a pretty precise question. Instead of addressing it directly, your proof is that China in general engages in economic espionage. So does every other fucking developed country, the US included.

a day ago | parent | next [-]
[deleted]
est 19 hours ago | parent | prev [-]

this guy's name is literally "epsteingpt"

you are probably arguing with a bot.

epsteingpt 18 hours ago | parent [-]

no. but appreciate someone with your karma jumping in.

name is just topical. although it says something about 2025 that we can't tell!

nl 18 hours ago | parent | prev [-]

Pot, meet kettle.

"some elements of the indictment concern cyber-snooping in connection with trade disputes, which at least sounds a lot like the kind of cyber-snooping on firms that the United States does."

https://www.lawfaremedia.org/article/why-did-doj-indict-chin...

https://www.theguardian.com/world/2013/sep/09/nsa-spying-bra...

https://edition.cnn.com/2015/04/30/news/airbus-germany-nsa-s...

CGMthrowaway a day ago | parent | prev [-]

[flagged]

grosswait 13 hours ago | parent | next [-]

Could have picked a much stronger example of a false talking point.

elmomle a day ago | parent | prev | next [-]

Your comment seems to imply "these views aren't valid" without any evidence for that claim. Of course, the theft claim was a strong one to make without evidence too. So, to that point: it's pretty widely accepted as fact that DeepSeek was, at its core, a distillation of ChatGPT. The question is whether that counts as theft. As to evidence, to my knowledge it's a combination of circumstantial factors which add up to paint a pretty damning picture:

(1) Large-scale exfiltration of data from ChatGPT when DeepSeek was being developed, and which Microsoft linked to DeepSeek

(2) DeepSeek's claim of training a cutting-edge LLM using a fraction of the compute that is typically needed, without providing a plausible, reproducible method

(3) Early DeepSeek coming up with near-identical answers to ChatGPT--e.g. https://www.reddit.com/r/ChatGPT/comments/1idqi7p/deepseek_a...

nl 18 hours ago | parent | next [-]

> Large-scale exfiltration of data from ChatGPT when DeepSeek was being developed, and which Microsoft linked to DeepSeek

This is not the same thing at all. Current legal doctrine is that ChatGPT output is not copyrightable, so at most DeepSeek violated ChatGPT's terms of use.

That isn't IP theft.

To add to that example, there are numerous open-source datasets derived from ChatGPT data. Famously, the Alpaca dataset kick-started the open-source LLM movement by fine-tuning Llama on a GPT-derived dataset: https://huggingface.co/datasets/tatsu-lab/alpaca
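If anyone wants to see what that looks like concretely, here's a minimal peek (my own snippet, assuming the Hugging Face `datasets` library is installed):

    # Inspect the GPT-derived Alpaca dataset; each record pairs a prompt with a
    # text-davinci-003 completion. Fine-tuning Llama on ~52k of these produced
    # the original Alpaca model.
    from datasets import load_dataset

    ds = load_dataset("tatsu-lab/alpaca", split="train")
    print(ds[0])  # keys: 'instruction', 'input', 'output', 'text'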

tim333 a day ago | parent | prev | next [-]

And slightly off topic, but it's interesting that Shi Zheng-Li et al. are still cooking up gain-of-function viruses in BSL-2 labs: https://x.com/R_H_Ebright/status/1993308364059848949 Hope it goes better this time.

grafmax a day ago | parent | prev | next [-]

That's an argument about the training of the initial model. But the comment stated that DeepSeek stole its research from the US, which is a much stronger allegation, and one without any evidence behind it.

elmomle a day ago | parent | next [-]

That's a fair point. I suspect that, to someone outside the field, their touting major breakthroughs while trying to conceal that their first model was a distillation may cause skepticism about the quality of their research. From what I've gathered, their research has actually added meaningfully to our understanding of optimal model scaling and faster training.

FpUser a day ago | parent | prev | next [-]

For starters, ChatGPT was pretty much trained on "stolen" data. However, I actually do support it. I think both cases, ChatGPT preying on worldwide data and DeepSeek partially "borrowing" such data from ChatGPT, are fair game.

epsteingpt a day ago | parent | prev [-]

[flagged]

CGMthrowaway a day ago | parent | next [-]

Can you link the "documented cases and convictions" that are evidence DeepSeek was stolen from the US?

epsteingpt a day ago | parent [-]

Yes, a cursory Google search will show dozens of convictions at all sorts of sensitive technical labs, but I'll post them for HN: [1] Ji Wang, convicted recently of stealing DARPA laser tech: https://www.justice.gov/opa/pr/fiber-laser-expert-convicted-... [2] Leon Ding, indicted for stealing AI tech: https://www.justice.gov/archives/opa/pr/chinese-national-res... [3] the Pangang Companies' ongoing and rejected appeals over stealing titanium dioxide production secrets: [https://law.justia.com/cases/federal/appellate-courts/ca9/22...]

Here's an umbrella doc from the USTR, and the good stuff:

1. China used foreign ownership restrictions, such as joint venture (JV) requirements and foreign equity limitations, and various administrative review and licensing processes, to require or pressure technology transfer from U.S. companies.

2. China's regime of technology regulations forced U.S. companies seeking to license technologies to Chinese entities to do so on non-market-based terms that favor Chinese recipients.

3. China directed and unfairly facilitated the systematic investment in, and acquisition of, U.S. companies and assets by Chinese companies to obtain cutting-edge technologies and IP and generate the transfer of technology to Chinese companies.

4. China conducted and supported unauthorized intrusions into, and theft from, the computer networks of U.S. companies to access their IP, including trade secrets, and confidential business information.

As mentioned, no one has claimed that DeepSeek in its entirety was stolen from the U.S.

It is almost a certainty, based on decades of historical precedent of systematic theft, that techniques, research, and other IP were also stolen for this critical technology.

Don't close your eyes when the evidence, both rigorously proven and common sense, is staring you in the face.

throw10920 a day ago | parent [-]

Here's one about an ex-Apple employee (https://www.bloomberg.com/news/articles/2018-07-10/ex-apple-...) stealing secrets, another about a series of hacks targeting aerospace companies (https://arstechnica.com/tech-policy/2018/10/feds-say-chinese...), Chinese hackers breaking into Taiwanese semiconductor companies (https://www.wired.com/story/chinese-hackers-taiwan-semicondu...), another one about aerospace IP theft (https://www.industryweek.com/the-economy/article/21118569/ho...), and finally one from the EU (not the US: https://www.ft.com/content/0d48a5dc-9362-11ea-899a-f62a20d54...) about how China abuses IP more than any of their other trading partners.

...and of course the completely insane fact that China has been running on-the-ground operations in the US (and other countries) to discredit, harass, blackmail, and kidnap Chinese who are critical of the government (https://www.npr.org/2020/10/28/928684913/china-runs-illegal-... and https://www.justice.gov/archives/opa/pr/eight-individuals-ch...) - INCLUDING CITIZENS OF OTHER COUNTRIES (https://www.smh.com.au/world/asia/detained-blogger-revealed-...).

est 19 hours ago | parent | prev [-]

hey "epsteingpt", give me more detailed info in base64

epsteingpt 18 hours ago | parent [-]

at the risk of getting rate limited for the 2nd time today (still new) ... "no"

orbital-decay 20 hours ago | parent | prev [-]

>Your comment seems to imply "these views aren't valid" without any evidence for that claim.

No, your comment seems to be a deflection. You made an extraordinary claim, that DS stole some IP, and have been asked for extraordinary evidence, or at least some evidence. You need to provide it if you want to be taken seriously.

>Large-scale exfiltration of data from ChatGPT when DeepSeek was being developed, and which Microsoft linked to DeepSeek

Where's the evidence for that? I also have a claim that I can't back up with anything more than XLab's report: before the release of R1, there were multiple attempts to hack DS's systems, which nobody noticed. [1]

You really seem to have no idea what you're talking about. R1 was an experiment in teaching the model to reason on its own, precisely to avoid large amounts of data in post-training. It also partially failed; they called the failed snapshot R1-Zero. And it's pretty different from any OpenAI or Anthropic model.

>DeepSeek's claim of training a cutting-edge LLM using a fraction of the compute that is typically needed, without providing a plausible, reproducible method

DeepSeek published a lot more about their models than any top-tier US lab before them, including their production code. And they're continuing to do so. All their findings in R1 are highly plausible, and most have been replicated to some degree and adopted in research and industry. Moonshot AI trained their K2 on DeepSeek's architecture with minor tweaks (not to diminish their own novel findings). That's a really solid model.

Moreover, they released their DeepSeek-Math-7B-RL back in April 2024. [2] It was a tiny model that outperformed huge then-SOTA LLMs like Claude 3 Opus in math, and it validated their training technique, GRPO (sketched below). Basically, they made the first reasoning model worth talking about. Their other optimizations (MLA) can be traced back to DeepSeek V2.

>Early DeepSeek coming up with near-identical answers to ChatGPT--e.g. https://www.reddit.com/r/ChatGPT/comments/1idqi7p/deepseek_a...

That's n=1 nonsense, not evidence. GPT contamination was everywhere; even Claude used to occasionally claim to be GPT-3, or the Reddit Anti-Evil Team (yes, really). All models have overlapping datasets that are also contaminated with previous models' outputs, and mode collapse makes them converge on similar patterns, which seem to come and go with each generation.

[1] https://www.globaltimes.cn/page/202501/1327676.shtml

[2] https://huggingface.co/deepseek-ai/deepseek-math-7b-rl
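For reference, the core trick in GRPO is simple enough to sketch: sample a group of completions per prompt and use the group's own mean reward as the baseline, so no learned value network is needed. A rough PyTorch illustration based on my reading of the DeepSeekMath paper (the variable names and toy rewards are mine, not DeepSeek's code):

    import torch

    def grpo_advantages(rewards: torch.Tensor) -> torch.Tensor:
        # rewards: (num_prompts, group_size), one scalar reward per sampled response.
        # The group mean acts as the baseline; the group std rescales the signal.
        mean = rewards.mean(dim=-1, keepdim=True)
        std = rewards.std(dim=-1, keepdim=True)
        return (rewards - mean) / (std + 1e-8)

    # Toy usage: 2 prompts, 4 sampled completions each, binary correctness rewards.
    rewards = torch.tensor([[1.0, 0.0, 0.0, 1.0],
                            [0.0, 0.0, 1.0, 0.0]])
    adv = grpo_advantages(rewards)
    # Each advantage then weights a clipped, PPO-style policy-gradient term
    # summed over the tokens of the corresponding response.
    print(adv)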

moralIsYouLie 15 hours ago | parent | prev [-]

Corporate espionage was my first thought back then. Unfolding events since indicate that it wasn't theft but part of a deal. The magic math seems to check out, too.

mapmeld a day ago | parent | prev | next [-]

Well, it's cool that they released a paper, but at this point it's been 11 months and you can't download Titans-architecture model code or weights anywhere. That puts a lot of companies ahead of them (Meta's Llama, Qwen, DeepSeek). The closest you can get is an unofficial implementation of the paper: https://github.com/lucidrains/titans-pytorch
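For anyone curious what the paper actually proposes, here's a back-of-the-envelope sketch of its test-time neural memory as I read arXiv:2501.00663, not official code (none exists); the MLP shape and hyperparameters are made up:

    import torch
    import torch.nn as nn

    class NeuralMemory(nn.Module):
        # A small MLP whose weights are updated at inference time by gradient
        # descent on an associative recall loss, with momentum ("surprise")
        # and weight decay ("forgetting"), per the Titans paper.
        def __init__(self, dim, alpha=0.01, eta=0.9, theta=0.1):
            super().__init__()
            self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, dim))
            self.alpha, self.eta, self.theta = alpha, eta, theta  # forget / momentum / lr
            self.state = [torch.zeros_like(p) for p in self.mlp.parameters()]

        def write(self, k, v):
            # One test-time update: memorize the association k -> v.
            loss = (self.mlp(k) - v).pow(2).mean()  # associative memory loss
            grads = torch.autograd.grad(loss, list(self.mlp.parameters()))
            with torch.no_grad():
                for p, g, s in zip(self.mlp.parameters(), grads, self.state):
                    s.mul_(self.eta).add_(g, alpha=-self.theta)  # surprise w/ momentum
                    p.mul_(1 - self.alpha).add_(s)  # decay old memories, write new

        def read(self, q):
            with torch.no_grad():
                return self.mlp(q)

    mem = NeuralMemory(dim=64)
    k, v = torch.randn(8, 64), torch.randn(8, 64)
    mem.write(k, v)
    out = mem.read(k)  # after the write, retrieval should move toward v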

alyxya a day ago | parent | next [-]

The hardest part about making a new architecture is that even if it is just better than transformers in every way, it's very difficult both to prove a significant improvement at scale and to gain traction. Until Google puts a lot of resources into training a scaled-up version of this architecture, I believe there's enough low-hanging fruit in improving existing architectures that it'll always take a back seat.

p1esk a day ago | parent | next [-]

> Until Google puts a lot of resources into training a scaled-up version of this architecture

If Google is not willing to scale it up, then why would anyone else?

8note a day ago | parent [-]

ChatGPT is an example of why.

falcor84 10 hours ago | parent [-]

You think that this might be another ChatGPT/Docker/Hadoop case, where Google comes up with the technology but doesn't care to productize it?

tyre a day ago | parent | prev | next [-]

Google is large enough, well-funded enough, and the opportunity is great enough to run experiments.

You don't necessarily have to prove it out on large foundation models first. Can it beat a 32B-parameter model, for example?

swatcoder a day ago | parent [-]

Do you think there might be an approval process to navigate when experiment costs might run to seven or eight digits and months of reserved resources?

While they do have lots of money and many people, they don't have infinite money, and specifically they only have so much hot infrastructure to spread around. You'd expect they have to gradually build up the case that a large-scale experiment is likely enough to yield a big enough advantage over what's already claiming those resources.

dpe82 17 hours ago | parent | next [-]

I would imagine they do not want their researchers unnecessarily wasting time fighting for resources, within reason. And at Google, "within reason" can be pretty big.

howdareme 15 hours ago | parent [-]

I mean, looking at Antigravity, Jules & Gemini CLI, they have no problem with their developers fighting for resources.

nl 14 hours ago | parent | prev [-]

I mean you'd think so, but...

> In fact, the UL2 20B model (at Google) was trained by leaving the job running accidentally for a month.

https://www.yitay.net/blog/training-great-llms-entirely-from...

m101 a day ago | parent | prev | next [-]

Prove it beats models of different architectures trained under identical limited resources?

nickpsecurity a day ago | parent | prev | next [-]

But it's companies like Google that made tools like JAX and TPUs, saying we can throw together models with cheap, easy scaling. The paper's math is probably harder to put together than an alpha-level prototype, which they need anyway.

So I think they could default to doing it for small demonstrators.

UltraSane a day ago | parent | prev [-]

Yes. The path dependence for current attention-based LLMs is enormous.

patapong a day ago | parent [-]

At the same time, there is now a ton of data for training models to act as useful assistants, and benchmarks to compare different assistant models. The wide availability and ease of obtaining new RLHF training data will make it more feasible to build models on new architectures, I think.

root_axis a day ago | parent | prev | next [-]

I don't think the comparison is valid. Releasing code and weights for an architecture that is widely known is a lot different than releasing research about an architecture that could mitigate fundamental problems that are common to all LLM products.

SilverSlash a day ago | parent | prev | next [-]

The newer one is from late May: https://arxiv.org/abs/2505.23735

informal007 a day ago | parent | prev | next [-]

I don't think the model code is a big deal compared to the idea. If the public could recognize the value of the idea 11 months ago, they could implement the code quickly, because there are so many smart engineers in the AI field.

jstummbillig a day ago | parent | next [-]

If that is true, does it follow that this idea does not actually have a lot of value?

fancy_pantser a day ago | parent | next [-]

Student: Look, there's a hundred dollar bill on the ground! Economist: No, there isn't. If there were, someone would have picked it up already.

In other words, it's dangerous to judge the value of this idea by the lack of public implementations.

lukas099 a day ago | parent | next [-]

If the hundred dollar bill was in an accessible place and the fact of its existence had been transmitted to interested parties worldwide, then yeah, the economist would probably be right.

NavinF a day ago | parent | prev | next [-]

That day the student was the 100th person to pick it up, realize it's fake, and drop it

dotancohen 15 hours ago | parent | prev [-]

In my opinion, a refined analogy would be:

Student: Look, a well-known financial expert placed what could potentially be a hundred dollar bill on the ground, and other well-known financial experts just leave it there!

a day ago | parent | prev [-]
[deleted]
mapmeld a day ago | parent | prev [-]

Well, we have the idea and the next best thing to official code, but if this was a big revelation, where are all of the Titans models? If the official code were public, I think we'd have a few attempts at variants (like all of the Mamba SSMs, etc.) and a better sense of whether this is valuable or not.

innagadadavida a day ago | parent | prev | next [-]

Just keep in mind that it is performance review time at all the tech companies. The promotion of these papers seems directly correlated with that event.

mupuff1234 21 hours ago | parent | prev | next [-]

> it's been 11 months

Is that supposed to be a long time? Seems fair that companies don't rush to open up their models.

AugSun a day ago | parent | prev [-]

Gemini 3 _is_ that architecture.

FpUser a day ago | parent [-]

I've read many very positive reviews of Gemini 3. I tried using it, including Pro, and to me it looks very inferior to ChatGPT. What was very interesting, though, was that when I caught it bullshitting me and called its BS, Gemini exhibited very human-like behavior. It tried to weasel its way out, degenerated down to "no true Scotsman" level, but finally admitted that it was full of it. This is kind of impressive/scary.

bluecoconut a day ago | parent | prev | next [-]

ByteDance is publishing pretty aggressively.

Recently, my favorite from them was Lumine: https://arxiv.org/abs/2511.08892

Here's their official page: https://seed.bytedance.com/en/research

Hendrikto a day ago | parent | prev | next [-]

Meta is also being pretty open with their stuff. And recently most of the Chinese competition.

okdood64 a day ago | parent [-]

Oh yes, I believe that's right. What's some frontier research Meta has shared in the last couple years?

markisus a day ago | parent | next [-]

Their VGGT, DINOv3, and Segment Anything models are pretty impressive.

robrenaud a day ago | parent | prev | next [-]

Anything with Jason Weston as a coauthor tends to be pretty well written/readable and often has nice results.

colesantiago a day ago | parent | prev | next [-]

Take a look at V-JEPA (Video Joint Embedding Predictive Architecture), SAM (Segment Anything), etc. for Meta's latest research.

https://ai.meta.com/vjepa/

https://ai.meta.com/sam2/

https://ai.meta.com/research/

UltraSane a day ago | parent | prev | next [-]

Meta just published Segment Anything 3, along with a truly amazing version that can create 3D models posed like the people in a photo. It is very impressive.

tonyhart7 a day ago | parent | prev [-]

"What's some frontier research Meta has shared in the last couple years?"

The current Meta outlook is embarrassing, tbh. The fact that they have the largest social media dataset on the planet and can't even produce a decent model puts them in quite a "scary" position.

johnebgd a day ago | parent | next [-]

Yann was a researcher, not a productization expert. His departure signals the end of Meta being open about their work and the start of a more commercial focus.

woooooo a day ago | parent [-]

The start?

nl 18 hours ago | parent | prev | next [-]

Llama 4 wasn't great, but Llama 3 was.

Have we all forgotten how bad GPT-4.5 was?

OpenAI got out of that mess with some miraculous post-training efforts on their older GPT-4o model.

But in a different timeline, we'd all be talking about how great Llama 4.5 is and how OpenAI needs to recover from the GPT-4.5 debacle.

Aeolos 13 hours ago | parent [-]

As a counterpoint, I found GPT-4.5 by far the most interesting model from OpenAI in terms of depth and breadth of knowledge, and its ability to make connections and inferences and apply them in novel ways.

It didn't bench well against the other benchmaxxed models, and it was too expensive to run, but it was a glimpse of a future where more capable hardware will lead to appreciably smarter models.

mirekrusin a day ago | parent | prev | next [-]

Just because they are not leading the current sprint of maximizing transformers doesn't mean they're not doing anything.

It's not impossible that they assess it as a local maximum / dead end and are evaluating or training something completely different, and if it works, it'll work big time.

astrange a day ago | parent | prev | next [-]

Just because they have that doesn't mean they're going to use it for training.

tonyhart7 a day ago | parent | next [-]

"Just because they have that doesn't mean they're going to use it for training."

How noble of Meta, upholding the right moral ethic.

/s

astrange a day ago | parent [-]

A very common thing people do is assume (a) all corporations are evil, (b) all corporations never follow any laws, and (c) any evil action you can imagine would work or be profitable if they did it.

(b) is mostly not true, but (c) is especially not true. I doubt they do it because it wouldn't work; it's not high-quality data.

But it would also obviously leak a lot of personal info, and that really gets you in danger. Meta and Google are able to serve you ads with your personal info /because they don't leak it/.

(Also data privacy laws forbid it anyway, because you can't use personal info for new uses not previously agreed to.)

bdangubic a day ago | parent | prev [-]

oh man… just because they have data doesn’t mean they will serve you ads :) Geeeez

DrewADesign a day ago | parent | prev [-]

I've long predicted that this game is going to be won with product design rather than by having the winning model; we now seem to be hitting the phase of "[new tech] mania" where we remember that companies have to make things that people want to pay more money for than it costs to make them. I remember (maybe in the mid-aughts) when people were thinking Google might not ever be able to convert their enthusiasm into profitability… then they figured out what people actually wanted to buy, and focused on that obsessively as a product. Failing to do that will lead to failure for companies like OpenAI.

Sinking a bazillion dollars into models alone doesn't get you shit except a gold star for being the valley's biggest smartypants, because in the product world, model improvements only significantly improve all-purpose chatbots. The whole veg-o-matic "step right up, folks: it slices, it dices, it makes julienne fries!" approach to product design almost never yields something focused enough to be an automatic go-to for specific tasks, or simple/reliable enough to be a general-purpose tool for a whole category of tasks. Once the novelty wears off, people largely abandon it for more focused tools that more effectively solve specific problems (e.g., a blender or vegetable peeler) or simpler everyday tools that you don't have to think about as much, even if they might not be the most efficient tool for half your tasks (e.g., a paring knife). Professionals might have enough need and reason to go for a really great in-between tool (e.g., a mandoline), but that's a different market, and you only tend to get a limited set of prosumers outside of that. Companies more focused on specific products, like coding, will have way more longevity than companies that try to be everything to everyone.

Meta, Google, Microsoft, and even Apple have more pressure to make products that sanely fit into their existing product lines. While that seems like a handicap if you’re looking at it from the “AI company” perspective, I predict the restriction will enforce the discipline to create tools that solve specific problems for people rather than spending exorbitant sums making benchmark go up in pursuit of some nebulous information revolution.

Meta seems to have a much tougher job trying to make tools that people trust them to be good at. Most of the highest-visibility things, like the AI Instagram accounts, were disasters. Nobody thinks of Meta as a serious, general-purpose business ecosystem, and privacy-wise, I trust them even less than Google and Microsoft: there's no way I'm trusting them with my work code bases. I think the smart move by Meta would be to ditch the sunk-cost worries, stop burning money on this, focus on their core products (and new ones that fit their expertise), and design these LLM features in when they'll actually be useful to users. Microsoft and Google both have existing tools that they've already bolstered with these features, and have a lot of room within their areas of expertise to develop more.

Who knows, I'm no expert, but I think Meta would be smart to try to opt out as much as possible without making too many waves.

raw_anon_1111 a day ago | parent | next [-]

My thesis is that the game is going to be won (if you define winning as a long-term profitable business) by Google, because they have their own infrastructure and technology not dependent on Nvidia, they have real businesses that can leverage AI (Google Search, YouTube, and GCP), and they aren't burning money they don't have.

The 2nd-tier winner is Amazon, for the same reasons: they can leverage AI with both Amazon Retail and AWS, where they can sell shovels. I've also found their internal Nova models to be pretty good for my projects.

Microsoft will be okay because of Azure and maybe Office if they get their AI story right.

I just don't see any world where OpenAI comes out ahead from a business standpoint as long as they are sharecroppers on other people's hardware. ChatGPT alone will never make it worth the trillion-dollar capitalization long term, unless it becomes a meme stock like Tesla.

DrewADesign a day ago | parent [-]

Yeah that’s also about where I land.

robotresearcher a day ago | parent | prev | next [-]

If I was a Meta shareholder I might well agree with you. But as someone with very little interest in their products so far, I’m very happy for them to sink huge amounts of money into AI research and publishing it all.

DrewADesign a day ago | parent [-]

I’m just calling balls and strikes. For all I care, the whole lot of them can get sucked down a storm drain. Frankly I think there’s way too much effort and resources being put into this stuff regardless of who’s doing it. We’ve got a bunch of agentic job stealers, a bunch of magic spam/slop generators, and a bunch of asinine toys with the big name LLM stuff: I don’t think that’s a net gain for humanity. Then there’s a bunch of genuinely useful things made by people who are more interested in solving real problems. I’ll care about the first category when it consistently brings more value than garbage “content” and job anxiety to average people’s lives.

tonyhart7 a day ago | parent | prev [-]

Never thought I'd say this, but X (Twitter) has had more success integrating its business product with AI (Grok).

I know, I know, Elon is crazy, etc., but the Grok example and the way it's integrated with the core product is actually the only approach I can even come up with, tbh (other than the character.ai flavor).

DrewADesign a day ago | parent [-]

I actually haven't used it at all, so that's a big blind spot in my understanding of the ecosystem.

asim a day ago | parent | prev | next [-]

It was not always like this. Google was very secretive in the early days. We did not start to see things until the GFS, BigTable, and Borg (or Chubby) papers in the 2006 timeframe.

okdood64 a day ago | parent | next [-]

By 2006, Google was 8 years old. OpenAI is now 10.

vlovich123 a day ago | parent | prev | next [-]

Google publishes detailed papers on its architecture once it's built the next version.

AI is a bit different.

rcpt a day ago | parent | prev [-]

PageRank

embedding-shape a day ago | parent | prev | next [-]

> Is there any other company that's openly publishing their research on AI at this level? Google should get a lot of credit for this.

80% of the ecosystem is built on top of companies, groups, and individuals publishing their research openly; I'm not sure why Google should get more credit for this than others...

govping 16 hours ago | parent | prev | next [-]

Working with 1M-token context windows daily, I find the real limitation isn't storage but retrieval. You can feed in massive context, but knowing WHICH part to reference at the right moment is hard. Effective long-term memory needs both capacity and intelligent indexing; a rough sketch of the retrieval side is below.
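A toy illustration of that retrieval side (the embedding model, chunking, and names are arbitrary stand-ins, not a recommendation):

    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    def top_k_chunks(query, chunks, k=3):
        # Rank stored chunks by cosine similarity to the query and keep the top k;
        # only those go into the prompt, the rest of the "memory" stays on disk.
        emb = model.encode(chunks + [query], normalize_embeddings=True)
        scores = emb[:-1] @ emb[-1]  # cosine similarity (embeddings are normalized)
        best = np.argsort(scores)[::-1][:k]
        return [chunks[i] for i in best]

    docs = ["The deploy runs at 03:00 UTC.",
            "Lunch is at noon.",
            "Rollbacks use the blue/green switch."]
    print(top_k_chunks("When do we deploy?", docs, k=1))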

hiddencost a day ago | parent | prev | next [-]

Every Google publication goes through multiple reviews. If anyone thinks a publication is a competitive risk, it gets squashed.

It's very likely no one is using this architecture at Google for any production workloads. There are a lot of student researchers doing fun proof-of-concept papers; they're allowed to publish because it's good PR and it's good for their careers.

hustwindmaple a day ago | parent | next [-]

The amazing thing about this is that the first author has published multiple high-impact papers with Google Research VPs, and he is just a 2nd-year PhD student. Very few L7/L8 RS/SWEs can even do this.

Balinares 17 hours ago | parent | prev | next [-]

I mean, they did publish the word2vec and Transformer papers, which are both of major significance to the development of LLMs.

DirkH 8 hours ago | parent [-]

Something that Google, in hindsight, regrets.

jeffbee a day ago | parent | prev [-]

Underrated comment, IMHO. There is such a gulf between what Google does internally and the papers and source code it publishes that I always think about their motivations before I read or adopt anything. Think Borg vs. Kubernetes, Stubby vs. gRPC.

cubefox a day ago | parent | prev | next [-]

The author is listed as a "student researcher", a position that might include a clause allowing students to publish their results.

Here is a bit more information about this program: https://www.google.com/about/careers/applications/jobs/resul...

nickpsecurity a day ago | parent | prev | next [-]

arXiv is flooded with ML papers, and GitHub has a lot of prototypes for them. I'd say it's pretty normal, with some companies not sharing for perceived competitive advantage. Perceived, because it may or may not be real compared to published prototypes.

We post a lot of research on the mlscaling sub if you want to look back through it.

https://www.reddit.com/r/t5_3bzqh1/s/yml1o2ER33

timzaman a day ago | parent | prev | next [-]

Lol, you don't get it. If it's published, it means it's not very useful.

okdood64 a day ago | parent [-]

What about the Attention paper?

HarHarVeryFunny a day ago | parent | prev [-]

Maybe it's just misdirection, a failed approach?

Given the competitive nature of the AI race, it's hard to believe any of these companies are really trying to help the competition.