| ▲ | admiralrohan 6 days ago |
| Everyone is so negative here, but we have reached the limit of AI scaling with conventional methods. Who knows, Mistral might find the next big breakthrough like DeepSeek did. We should be optimistic. |
|
| ▲ | lordofgibbons 6 days ago | parent | next [-] |
> but we have reached the limit of AI scaling with conventional methods
We've only just started RL training LLMs. So far, RL has not used more than 10-20% of the existing pre-training compute budget. There's a lot of scaling headroom left in RL training.
| |
| ▲ | am17an 6 days ago | parent | next [-] | | Isn't this factually wrong? Grok 4 used as much compute on RL as on pre-training, and I'm sure GPT-5 was the same (or even more). | | |
| ▲ | sigmoid10 6 days ago | parent [-] | | It was true for models up to o3, but there isn't enough public info to say much about GPT-5. Grok 4 seems to be the first major model that scaled RL compute 10x to near pre-training effort. |
| |
| ▲ | scellus 6 days ago | parent | prev | next [-] | | Even with pretraining, there's no hard limit or wall in raw performance, just diminishing returns for current applications, plus the business rationale to serve lighter models given current infrastructure and pricing. Algorithmic efficiency of inference at a given performance level has also advanced a couple of OOMs since 2022 (a major part of that is surely model architecture and training methods). And it seems research is bottlenecked by compute. | |
| ▲ | alcinos 6 days ago | parent | prev | next [-] | |
> We've only just started RL training LLMs
That's just factually wrong. Even the original ChatGPT model (based on GPT-3.5, released in 2022) was trained with RL (specifically RLHF). | | |
| ▲ | prasoon2211 6 days ago | parent | next [-] | | RLHF is not the "RL" the parent is posting about. RLHF is specifically a human-driven reward (subjective, doesn't scale, doesn't improve the model's "intelligence", just tweaks behavior), which is why the labs now call it post-training rather than RLHF. True RL is where you set up an environment in which an agent can "discover" solutions to problems by iterating against some kind of verifiable reward, AND the entire space of outcomes is theoretically largely explorable by the agent. Math and coding have proven amenable to this type of RL so far. | |
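(Rough sketch of what such a verifiable-reward loop looks like, in Python; policy.sample and policy.update are hypothetical stand-ins for a real LLM policy and a policy-gradient step like PPO/GRPO, and the checker is a made-up example.)

    # Toy sketch of RL with a verifiable reward, not a real training loop.
    def verifiable_reward(problem, answer):
        # Reward comes from an automatic checker, not a human rater:
        # e.g. compare to a known result, or run unit tests for code.
        return 1.0 if answer.strip() == problem["expected"] else 0.0

    def train_step(policy, problems, samples_per_problem=8):
        trajectories = []
        for problem in problems:
            for _ in range(samples_per_problem):
                completion = policy.sample(problem["prompt"])    # explore the solution space
                reward = verifiable_reward(problem, completion)  # objective, scalable signal
                trajectories.append((problem["prompt"], completion, reward))
        # Reinforce completions in proportion to their reward (hypothetical update step).
        policy.update(trajectories)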
| ▲ | manscrober 6 days ago | parent | prev | next [-] | | a) 2022 is not too long ago
b) this was an important first step toward usable AI, but not a scalable one. I'd say "RL training" is not the same as RLHF. | |
| ▲ | bigyabai 6 days ago | parent | prev [-] | | The original ChatGPT was like 3 years after the first usable transformer models. |
| |
| ▲ | whimsicalism 6 days ago | parent | prev [-] | | It is still an open question whether RL will (at least easily) scale the way pretraining does, or whether it is mainly effective at elicitation. |
|
|
| ▲ | 0x008 6 days ago | parent | prev | next [-] |
This move is mostly about expected EU subsidies. |
|
| ▲ | namero999 6 days ago | parent | prev | next [-] |
Especially with Euclyd entering the space (efficiency for AI workloads), whose founders have close ties to ASML, this is the move Europe needs. |
| |
|
| ▲ | tonkinai 6 days ago | parent | prev | next [-] |
I would make a wild guess that this is a political investment. It's hard to believe Mistral is the right choice to throw €1.7B at for economic reasons. |
| |
| ▲ | nirv 6 days ago | parent | next [-] | |
> It's hard to believe that Mistral isn't the right choice to invest €1.7B in for economic reasons.
Why? Cursor, essentially a VSCode fork, is valued at $10B. Perplexity AI, which, as far as I'm informed, doesn't have its own foundational models, boasts a $20B valuation according to recent news. Yet Mistral sits at just $14B. Meanwhile, Mistral was at the forefront of the LLM take-off, developing foundational models (very lean, performant and innovative at the time) from scratch and releasing them openly. They set up an API service, integrated with businesses by building custom models and fine-tunes, and secured partnership agreements. They launched a user-facing interface and mobile app that are on par with those of the leading companies, kept pace with "reasoning" and "research" advancements, and, in short, built a solid, commercially viable portfolio. So why on earth should Mistral AI be valued lower, let alone have a mere €1.7B investment in it questioned? Edit: Apologies, I misread your quote and missed the "isn't" part. | |
| ▲ | pyrale 6 days ago | parent | prev [-] | | Since 2024, it's hard to make an investment that has no political nature. |
|
|
| ▲ | rldjbpin 6 days ago | parent | prev | next [-] |
I recall them being one of the first to release a mixture-of-experts (MoE) model [1], which was quite novel at the time (a rough sketch of the routing idea is below). Since then, it has looked like a catch-up game for them in mainstream utility; just a week ago, for example, they announced support for custom MCP connectors in their chat offering [2]. More competition is always nice, but I wonder what these two companies, separated by several steps in the supply chain, can really achieve together. [1] https://mistral.ai/news/mixtral-of-experts
[2] https://mistral.ai/news/le-chat-mcp-connectors-memories |
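(A rough sketch of the top-k routing idea behind such sparse MoE layers, in plain Python/NumPy with simplified shapes; Mixtral's actual implementation differs in detail.)

    import numpy as np

    # Sparse mixture-of-experts routing (top-2, Mixtral-style): a gating network
    # picks 2 of N expert MLPs per token and mixes their outputs with renormalized
    # gate weights, so only the chosen experts run for each token.
    def moe_layer(x, gate_w, experts, top_k=2):
        logits = gate_w @ x                    # x: (d_model,), gate_w: (n_experts, d_model)
        top = np.argsort(logits)[-top_k:]      # indices of the selected experts
        weights = np.exp(logits[top] - logits[top].max())
        weights /= weights.sum()               # softmax over the selected experts only
        return sum(w * experts[i](x) for w, i in zip(weights, top))

    # Tiny usage example with 4 random "experts".
    rng = np.random.default_rng(0)
    d, n = 8, 4
    experts = [lambda v, W=rng.normal(size=(d, d)) / d: W @ v for _ in range(n)]
    out = moe_layer(rng.normal(size=d), rng.normal(size=(n, d)), experts)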
|
| ▲ | whimsicalism 6 days ago | parent | prev [-] |
What next big breakthrough are you claiming DeepSeek found? MLA? GRPO? Those are all small tweaks. |
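(For anyone unfamiliar with the acronym: GRPO's core trick is computing advantages from a group of sampled completions instead of training a separate value model. A minimal sketch of that computation, with details simplified:)

    import statistics

    # GRPO-style group-relative advantage: sample several completions per prompt,
    # score them (e.g. 0/1 from a verifier), and use the within-group z-score as
    # the advantage, so no separate critic/value model is needed.
    def group_relative_advantages(rewards, eps=1e-6):
        mean = statistics.mean(rewards)
        std = statistics.pstdev(rewards)
        return [(r - mean) / (std + eps) for r in rewards]

    # Example: four sampled answers to one math prompt, graded by a verifier.
    print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))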
| |
| ▲ | admiralrohan 6 days ago | parent [-] | | I am not an ML person, but my broad-level understanding is that the innovation was a more efficient training method, training the model much more cheaply than the US models, and it was dubbed the "Sputnik moment". | | |
|