| ▲ | ru552 5 hours ago |
| There's speculation that next Tuesday will be a big day for OpenAI and possibly GPT 6. Anthropic showed their hand today. |
|
| ▲ | varispeed 2 hours ago | parent | next [-] |
Sounds like a good opportunity to pause spending on the nerfed 4.6, wait for the new model to be released, then max out during the two weeks before it gets nerfed again.
|
| ▲ | enraged_camel 5 hours ago | parent | prev | next [-] |
| That does not sound very believable. Last time Anthropic released a flagship model, it was followed by GPT Codex literally that afternoon. |

| ▲ | cyanydeez 3 hours ago | parent [-] |
Y'all know they're teaching to the test. I'll wait until someone devises a novel test that isn't already contained in the training data. Sure, they're still powerful.
|
|
| ▲ | swalsh 2 hours ago | parent | prev [-] |
My understanding is that GPT 6 works via synaptic space reasoning... which I find terrifying. If true, I hope OpenAI does some safety testing on that beyond what they normally do.

| ▲ | coppsilgold 2 hours ago | parent | next [-] |
Likely an improvement on:
> We study a novel language model architecture that is capable of scaling test-time computation by implicitly reasoning in latent space. Our model works by iterating a recurrent block, thereby unrolling to arbitrary depth at test-time. This stands in contrast to mainstream reasoning models that scale up compute by producing more tokens. Unlike approaches based on chain-of-thought, our approach does not require any specialized training data, can work with small context windows, and can capture types of reasoning that are not easily represented in words. We scale a proof-of-concept model to 3.5 billion parameters and 800 billion tokens. We show that the resulting model can improve its performance on reasoning benchmarks, sometimes dramatically, up to a computation load equivalent to 50 billion parameters.
<https://arxiv.org/abs/2502.05171>
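
Mechanically, the idea is something like this (a minimal PyTorch sketch of one plain reading of that abstract, not the paper's actual code; RecurrentDepthLM and all the sizes here are made up): a prelude embeds the tokens into latent space, a single shared core block is iterated a variable number of times, and a coda decodes, so test-time compute scales with the iteration count rather than with extra output tokens.

    # Illustrative sketch only; based on the abstract above, not the paper's code.
    import torch
    import torch.nn as nn

    class RecurrentDepthLM(nn.Module):  # hypothetical name
        def __init__(self, vocab=32000, d=512, heads=8):
            super().__init__()
            self.embed = nn.Embedding(vocab, d)  # "prelude": tokens -> latent states
            self.core = nn.TransformerEncoderLayer(d, heads, batch_first=True)  # shared block
            self.head = nn.Linear(d, vocab)      # "coda": latent states -> logits

        def forward(self, tokens, steps=8):
            e = self.embed(tokens)
            h = torch.randn_like(e)              # random initial latent state
            for _ in range(steps):               # unroll the same block to arbitrary depth
                h = self.core(h + e)             # re-inject the input each iteration
            return self.head(h)

    model = RecurrentDepthLM()
    x = torch.randint(0, 32000, (1, 16))
    cheap = model(x, steps=4)   # low test-time compute
    deep = model(x, steps=64)   # same weights, more latent "reasoning"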

| ▲ | tyre 2 hours ago | parent | prev | next [-] |
From the recent New Yorker piece on Sam:
> “My vibes don’t match a lot of the traditional A.I.-safety stuff,” Altman said. He insisted that he continued to prioritize these matters, but when pressed for specifics he was vague: “We still will run safety projects, or at least safety-adjacent projects.” When we asked to interview researchers at the company who were working on existential safety—the kinds of issues that could mean, as Altman once put it, “lights-out for all of us”—an OpenAI representative seemed confused. “What do you mean by ‘existential safety’?” he replied. “That’s not, like, a thing.”

| ▲ | actionfromafar an hour ago | parent [-] |
Amusing! Even if they believe that, they should know the company communicated the opposite earlier.

| ▲ | levocardia 2 hours ago | parent | prev | next [-] |
Oh, you mean literally the thing in AI2027 that gets everyone killed? Wonderful.

| ▲ | notrealyme123 2 hours ago | parent | prev | next [-] |
That sounds really interesting. Do you have any hints on where to read more?

| ▲ | arm32 2 hours ago | parent | prev [-] |
Oh, of course they will. /s
|