| ▲ | mapmeld a day ago |
| Well it's cool that they released a paper, but at this point it's been 11 months and you can't download Titans-architecture model code or weights anywhere. That puts a lot of companies (Meta's Llama, Qwen, DeepSeek) ahead of them.
The closest you can get is an unofficial implementation of the paper: https://github.com/lucidrains/titans-pytorch |
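For context on what "Titans-architecture" refers to: the paper's core component is a small neural "long-term memory" whose weights are updated at test time by gradient descent on an associative recall loss, with a momentum term (the "surprise" signal) and a decay term (forgetting). A minimal PyTorch sketch of that update rule is below; the module shape, hyperparameters, and function names are illustrative only and are not the paper's exact formulation or the linked repo's API.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    dim = 64
    # the "long-term memory": a tiny MLP whose weights act as the memory state
    memory = nn.Sequential(nn.Linear(dim, dim * 2), nn.SiLU(), nn.Linear(dim * 2, dim))
    momentum = [torch.zeros_like(p) for p in memory.parameters()]
    lr, beta, decay = 1e-2, 0.9, 1e-3  # illustrative hyperparameters

    def memorize(key, value):
        # one test-time update: store the (key -> value) association
        loss = F.mse_loss(memory(key), value)                 # associative recall loss
        grads = torch.autograd.grad(loss, list(memory.parameters()))
        with torch.no_grad():
            for p, m, g in zip(memory.parameters(), momentum, grads):
                m.mul_(beta).add_(g)                          # accumulate "surprise" with momentum
                p.mul_(1.0 - decay).sub_(lr * m)              # forget a little, then update

    # toy usage: memorize one association at test time, then query it
    k, v = torch.randn(1, dim), torch.randn(1, dim)
    for _ in range(10):
        memorize(k, v)
    recalled = memory(k)  # approximate reconstruction of v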
|
| ▲ | alyxya a day ago | parent | next [-] |
| The hardest part about making a new architecture is that even if it is better than transformers in every way, it's very difficult both to prove a significant improvement at scale and to gain traction. Until Google puts a lot of resources into training a scaled-up version of this architecture, I believe there's plenty of low-hanging fruit in improving existing architectures, so it'll always take the back seat. |
| |
| ▲ | p1esk a day ago | parent | next [-] | | > Until google puts in a lot of resources into training a scaled up version of this architecture
If Google is not willing to scale it up, then why would anyone else? | | |
| ▲ | 8note a day ago | parent [-] | | ChatGPT is an example of why. | | |
| ▲ | falcor84 10 hours ago | parent [-] | | You think that this might be another ChatGPT/Docker/Hadoop case, where Google comes up with the technology but doesn't care to productize it? |
|
| |
| ▲ | tyre a day ago | parent | prev | next [-] | | Google is large enough, well-funded enough, and the opportunity is great enough to run experiments. You don't necessarily have to prove it out on large foundation models first. Can it beat out a 32b parameter model, for example? | | |
| ▲ | swatcoder a day ago | parent [-] | | Do you think there might be an approval process to navigate when experiment costs might run seven or eight digits and months of reserved resources? While they do have lots of money and many people, they don't have infinite money and only have so much hot infrastructure to spread around. You'd expect they have to gradually build up the case that a large-scale experiment is likely enough to yield a big enough advantage over what's already claiming those resources. | | |
| ▲ | dpe82 17 hours ago | parent | next [-] | | I would imagine they do not want their researchers unnecessarily wasting time fighting for resources - within reason. And at Google, "within reason" can be pretty big. | | |
| ▲ | howdareme 15 hours ago | parent [-] | | I mean, looking at Antigravity, Jules & Gemini CLI, they have no problem with their developers fighting for resources |
| |
| ▲ | nl 14 hours ago | parent | prev [-] | | I mean you'd think so, but... > In fact, the UL2 20B model (at Google) was trained by leaving the job running accidentally for a month. https://www.yitay.net/blog/training-great-llms-entirely-from... |
|
| |
| ▲ | m101 a day ago | parent | prev | next [-] | | Prove it beats models of different architectures trained under identical limited resources? | |
| ▲ | nickpsecurity a day ago | parent | prev | next [-] | | But it's companies like Google that made tools like JAX and TPUs, saying we can throw together models with cheap, easy scaling. Their paper's math is probably harder to put together than an alpha-level prototype, which they need anyway. So I think they could default to doing it for small demonstrators. | |
| ▲ | UltraSane a day ago | parent | prev [-] | | Yes. The path dependence for current attention based LLMs is enormous. | | |
| ▲ | patapong a day ago | parent [-] | | At the same time, there is now a ton of data for training models to act as useful assistants, and benchmarks to compare different assistant models. The wide availability and ease of obtaining new RLHF training data will make it more feasible to build models on new architectures I think. |
|
|
|
| ▲ | root_axis a day ago | parent | prev | next [-] |
| I don't think the comparison is valid. Releasing code and weights for an architecture that is widely known is a lot different than releasing research about an architecture that could mitigate fundamental problems that are common to all LLM products. |
|
| ▲ | SilverSlash a day ago | parent | prev | next [-] |
| The newer one is from late May: https://arxiv.org/abs/2505.23735 |
|
| ▲ | informal007 a day ago | parent | prev | next [-] |
| I don't think model code is a big deal compared to the idea. If the public could recognize the value of the idea 11 months ago, they could have implemented the code quickly, because there are so many smart engineers in the AI field. |
| |
| ▲ | jstummbillig a day ago | parent | next [-] | | If that is true, does it follow that this idea does not actually have a lot of value? | | |
| ▲ | fancy_pantser a day ago | parent | next [-] | | Student: Look, there's a hundred dollar bill on the ground!
Economist: No there isn't. If there were, someone would have picked it up already. To wit, it's dangerous to assume the value of this idea based on the lack of public implementations. | | |
| ▲ | lukas099 a day ago | parent | next [-] | | If the hundred dollar bill was in an accessible place and the fact of its existence had been transmitted to interested parties worldwide, then yeah, the economist would probably be right. | |
| ▲ | NavinF a day ago | parent | prev | next [-] | | That day the student was the 100th person to pick it up, realize it's fake, and drop it | |
| ▲ | dotancohen 15 hours ago | parent | prev [-] | | In my opinion, a refined analogy would be: Student: Look, a well-known financial expert placed what could potentially be a hundred dollar bill on the ground, and other well-known financial experts just leave it there! |
| |
| ▲ | a day ago | parent | prev [-] | | [deleted] |
| |
| ▲ | mapmeld a day ago | parent | prev [-] | | Well, we have the idea and the next best thing to official code, but if this was a big revelation, where are all of the Titan models? If this were public, I think we'd see a few attempts at variants (as with all of the Mamba SSMs, etc.) and get a better sense of whether this is valuable or not. |
|
|
| ▲ | innagadadavida a day ago | parent | prev | next [-] |
| Just keep in mind it is performance review time at all the tech companies. The promotion of these papers seems to be directly correlated with that event. |
|
| ▲ | mupuff1234 21 hours ago | parent | prev | next [-] |
| > it's been 11 months Is that supposed to be a long time? Seems fair that companies don't rush to open up their models. |
|
| ▲ | AugSun a day ago | parent | prev [-] |
| Gemini 3 _is_ that architecture. |
| |
| ▲ | FpUser a day ago | parent [-] | | I've read many very positive reviews of Gemini 3. I tried using it, including Pro, and to me it looks very inferior to ChatGPT. What was very interesting, though, was that when I caught it bullshitting me and called its BS, Gemini exhibited very human-like behavior. It did try to weasel its way out, degenerated down to "no true Scotsman" level, but finally admitted that it was full of it. This is kind of impressive / scary. |
|