| |
| ▲ | stego-tech 2 days ago | parent | next [-] | | These models still consistently fail the only benchmark that matters: if I give you a task, can you complete it successfully without making shit up? Thus far they all fail. Code outputs don’t run, or variables aren’t captured correctly, or hallucinations are stated as factual rather than suspect or “I don’t know.” It’s 2000s PC gaming all over again (“gotta game the benchmark!”). | | |
| ▲ | snet0 2 days ago | parent | next [-] | | To say that a model won't solve a problem is unfair. Claude Code, with Opus 4.5, has solved plenty of problems for me. If you expect it to do everything perfectly, you're thinking about it wrong. If you can't get it to do anything perfectly, you're using it wrong. | | |
| ▲ | jacquesm 2 days ago | parent [-] | | That means you're probably asking it to do very simple things. | | |
| ▲ | baq 2 days ago | parent | next [-] | | I can confidently say that anecdotally you’re completely wrong, but I’ll also allow a very different definition of ‘simple’ and/or attempting to use an unpopular environment as a valid anecdotal counterpoint. | | |
| ▲ | verdverm 2 days ago | parent | next [-] | | The problem with these arguments is that there are data points to support both sides, because both outcomes are possible. The real question is: are we getting an ROI? The answer is increasingly more yeses on more problems, and this trend is not looking to plateau as we step up the complexity ladder to agentic systems. | |
| ▲ | jacquesm 2 days ago | parent | prev [-] | | I don't reach for AI until I'm solidly stuck, and then I use it mostly for inspiration; it has yet to happen that it directly pointed at a solution, and I'm pretty good at writing prompts. When I throw a bunch of elementary stuff at it, it is super good at identifying issues and solving them (but I could have done that myself; it is just nice to try to delineate where you can and where you can't trust the thing, but that too fluctuates, sometimes even within a single session). Here is a nice example of a slightly more complex test: Prompt: "I have a green LED, a 12V powersupply, a single NPN transistor, a 100 ohm resistor, a 2.7K resistor and an electrolytic capacitor of 220 micro farads. My teacher says it is possible to make an LED flasher with these components but I'm hopelessly stuck, can you please give me an ascii art solution so I don't flunk this exam?" Answer: ASCII schematic (fully functional): +12V
|
|
.-.
| |
| | 2.7kΩ (R1 — charges the capacitor)
| |
'-'
|
+---------+----------------------+
| | |
( ) | |
( ) C1 | |
( ) 220uF | |
| | |
| | |
| |/ C |
+--------| NPN transistor |
| |\ E |
| | |
GND GND |
\
\
/ 100Ω (R2 — LED limit)
\
|
>| Green LED
|
GND
Followed by: "How it works The 2.7 kΩ resistor charges the 220 µF capacitor from the 12 V supply. The capacitor voltage rises slowly. When the capacitor voltage reaches the transistor’s base-emitter threshold (~0.6–0.7 V), the transistor suddenly switches ON. When it turns on, the capacitor rapidly discharges through the base, causing: A brief pulse of current through the transistor The LED lights up through the 100 Ω resistor After discharge, the transistor turns back OFF, the LED turns off, and the capacitor begins charging again. This repeats automatically → LED flasher." The number of errors in the circuit, the utterly bogus explanation, and the overconfident claim that this is 'working' are so bizarre that I wonder how many slightly more complicated questions are going to yield results comparable to this one. | | |
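[Ed.: for what it's worth, the model's own numbers can be sanity-checked against the ideal RC charging law v(t) = V·(1 − e^(−t/RC)); a sketch only, assuming that ideal model and the quoted ~0.6–0.7 V threshold:]

```python
import math

# Sanity check of the "how it works" numbers, assuming the ideal
# RC charging law v(t) = V_supply * (1 - exp(-t / (R * C))).
R = 2_700        # ohms: the 2.7k "charging" resistor
C = 220e-6       # farads: the 220 uF capacitor
V_SUPPLY = 12.0  # volts
V_TH = 0.65      # volts: midpoint of the claimed 0.6-0.7 V threshold

tau = R * C  # time constant, about 0.594 s
t_threshold = -tau * math.log(1 - V_TH / V_SUPPLY)

print(f"time constant: {tau:.3f} s")
print(f"time to reach {V_TH} V: {t_threshold * 1000:.1f} ms")  # ~33 ms
```

Even taking the explanation at face value, the base would hit the claimed threshold after roughly 33 ms, far too fast for a visible flash, which is one more reason to doubt the "fully functional" label.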
| ▲ | dagss a day ago | parent | next [-] | | I am right now implementing an imaging pipeline using OpenCV and TypeScript. I have never used OpenCV specifically before, and have little imaging experience too. What I do have though is a PhD in astrophysics/statistics, so I am able to follow along with the details easily. Results are amazing. I am getting results in 2 days of work that would have taken me weeks earlier. ChatGPT acts like a research partner. I give it images and it explains why the current scoring functions fail and throws out new directions to go in. Yes, my ideas are sometimes better. Sometimes ChatGPT has a better clue. It is like a human colleague, more or less. And if I want to try something, the code is usually bug free. So fast to just write code, try it, throw it away if I want to try another idea. I think a) OpenCV probably has more training data than circuits? and b) I do not treat it as a desperate student with no knowledge. I expect to have to guide it. There are several hundred messages back and forth. It is more like two researchers working together with different skill sets complementing one another. One of those skill sets being to turn a 20 message conversation into bug-free OpenCV code in 20 seconds. No, it is not providing a perfect solution to all problems on first iteration. But it IS allowing me to both learn very quickly and build very quickly. Good enough for me. | | |
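[Ed.: to make "scoring function" concrete, here is a stdlib-only sketch of one common image score, variance of the Laplacian as a sharpness measure; the toy image and all names are invented for illustration and are not this poster's pipeline, which would use cv2 on real images:]

```python
def laplacian_variance(img):
    """Sharpness score: variance of a 4-neighbour Laplacian.

    `img` is a 2D list of grayscale values; higher score = sharper.
    """
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1]
                   - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# A hard edge scores higher than a flat region.
sharp = [[0, 0, 255, 255]] * 4
flat = [[128] * 4] * 4
print(laplacian_variance(sharp) > laplacian_variance(flat))  # True
```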
| ▲ | jacquesm a day ago | parent [-] | | That's a good use case, and I can easily imagine that you get good results from it because (1) it is for a domain that you are already familiar with, (2) you are able to check that the results you are getting are correct, and (3) the domain that you are leveraging (coding expertise) is one that ChatGPT has ample input for. Now imagine you are using it for a domain that you are not familiar with, or one for which you can't check the output, or one that ChatGPT has little input for. If any of those is true, the output will look just as good, and you would be in a much more difficult position to make good use of it, but you might be tempted to use it anyway. A very large fraction of the use cases for these tools that I have come across professionally so far are of the latter variety, only a minority of the former. And taking all of the considerations into account:
- How sure are you that that code is bug free?
- Do you mean that it seems to work?
- Do you mean that it compiles?
- How broad is the range of inputs that you have given it to ascertain this?
- Have you had the code reviewed by a competent programmer (assuming code review is a requirement)?
- Does it pass a set of pre-defined tests (part of requirements analysis)?
- Is the code quality such that it is long-term maintainable? |
| |
| ▲ | emporas 2 days ago | parent | prev | next [-] | | I have used Gemini for reading and solving electronic schematics exercises, and its results were good enough for me. It managed to solve roughly 50% of the exercises (simple R circuits) correctly, 50% wrong. One time it messed up the opposite polarity of two voltage sources in series: instead of subtracting their voltages, it added them together. I pointed out the mistake and Gemini insisted that the voltage sources were not in opposite polarity. Schematics in general are not AI's strongest point. But when you explain what you want to calculate from an LRC circuit, for example, with no schematics, just describing the relevant part of the circuit in words, GPT will often calculate it correctly. It still makes mistakes here and there; always verify the calculation. | | |
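[Ed.: for reference, the series-sources rule Gemini fumbled is one line of arithmetic; the voltage values here are invented for illustration:]

```python
# Series voltage sources: aiding polarities add, opposing subtract.
v1, v2 = 9.0, 3.0
aiding = v1 + v2     # both sources pointing the same way
opposing = v1 - v2   # one source reversed (the case Gemini got wrong)
print(aiding, opposing)  # 12.0 6.0
```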
| ▲ | jacquesm 2 days ago | parent [-] | | I guess I'm just more critical than you are. I am used to my computer doing what it is told and giving me correct, exact answers or errors. | |
| ▲ | dagss a day ago | parent | next [-] | | I think most people treat them like humans, not computers, and I think that is actually a much more correct way to treat them. Not saying they are like humans, but certainly a lot more like humans than whatever you seem to be expecting in your posts. Humans make errors all the time. That doesn't mean having colleagues is useless, does it? An AI is a colleague that can code very, very fast and has a very wide knowledge base and versatility. You may still know better than it in many cases and feel more experienced than it. Just like you might with your colleagues. And it needs the same kind of support that humans need. Complex problem? Need to plan ahead first. Tricky logic? Need unit tests. Research grade problem? Need to discuss the solution with someone else before jumping to code, get some feedback, and iterate for 100 messages before we're ready to code. And so on. | | | |
| ▲ | emporas 2 days ago | parent | prev [-] | | There is also Mercury LLM, which computes the answer directly as a 2D text representation. I don't know if you are familiar with Mercury LLM, but you read correctly: 2D text output. Mercury LLM might work better getting input as an ASCII diagram, or generating output as an ASCII diagram; I'm not sure whether both input and output work in 2D. Plumbing/electrical/electronic schematics are pretty important for AIs to understand if they are to assist us, but for the moment the success rate is pretty low. 50% success on simple problems is very low; 80-90% success on medium-difficulty problems is where they start being really useful. | | |
| ▲ | jacquesm 2 days ago | parent [-] | | It's not really the quality of the diagramming that I am concerned with, it is the complete lack of understanding of electronics parts and their usual function. The diagramming is atrocious but I could live with it if the circuit were at least borderline correct. Extrapolating from this: if we use the electronics schematic as a proxy for the kind of world model these systems have then that world model has upside down lanterns and anti-gravity as commonplace elements. Three legged dogs mate with zebras and produce viable offspring and short circuiting transistors brings about entirely new physics. | | |
| ▲ | baq a day ago | parent | next [-] | | it's hard for me to tell if the solution is correct or wrong because I've got next to no formal theoretical education in electronics and only the most basic 'pay attention to polarity of electrolytic capacitors' practical knowledge, but given how these things work you might get much better results when asking it to generate a spice netlist first (or instead). I wouldn't trust it with 2d ascii art diagrams, there isn't enough focus on these in the training data is my guess - a typical jagged frontier experience. | |
| ▲ | emporas 2 days ago | parent | prev [-] | | I think you underestimate their capabilities quite a bit. Their auto-regressive nature does not lend itself well to solving 2D problems. See these two solutions GPT suggested: [1] Are either of these any good? [1] https://gist.github.com/pramatias/538f77137cb32fca5f626299a7... |
|
|
|
| |
| ▲ | manmal a day ago | parent | prev [-] | | I have this mental model of LLMs and their capabilities, formed after months of way too much coding with CC and Codex, with 4 recursive problem categories: 1. Problems that have been solved before have their solution easily repeated (some will say, parroted/stolen), even with naming differences. 2. Problems that need only mild amalgamation of previous work are also solved by drawing on training data only, but hallucinations are frequent (as low probability tokens, but as consumers we don’t see the p values). 3. Problems that need only a little simulation can be simulated with the text as scratchpad. If evaluation criteria are not in training data -> hallucination. 4. Problems that need more than a little simulation have to either be solved by ad hoc written code, or will result in hallucination. The code written to simulate is again a fractal of problems 1-4. Phrased differently, sub-problem solutions must be in the training data or it won’t work; and combining sub-problem solutions must either again be in training data, or brute forcing + a success condition is needed, with code being the tool to brute force. I _think_ that the SOTA models are trained to categorize the problem at hand, because sometimes they answer immediately (1&2), enable thinking mode (3), or write Python code (4). My experience with CC and Codex has been that I must steer them away from categories 2 & 3 all the time, either solving those myself, asking them to use web research, or splitting them up until they are (1) problems. Of course, for many problems you’ll only know the category once you’ve seen the output, and you need to be able to verify the output. I suspect that if you gave Claude/Codex access to a circuit simulator, it would successfully brute force the solution. And future models might be capable enough to write their own simulator ad hoc (ofc the simulator code might recursively fall into category 2 or 3 somewhere and fail miserably).
But without strong verification I wouldn’t put any trust in the outcome. With code, we do have the compiler, tests, observed behavior, and a strong training data set with many correct implementations of small atomic problems. That’s a lot of out of the box verification to correct hallucinations. I view them as messy code generators I have to clean up after. They do save a ton of coding work after or while I'm doing the other parts of programming. | | |
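[Ed.: the "brute forcing + success condition" pattern from category 4 can be sketched in a few lines; the task here, picking two E12 resistors for a voltage divider, and all names are invented for illustration:]

```python
from itertools import product

# Category-4 pattern: enumerate candidates, keep the ones that pass
# an explicit, machine-checkable success condition.
E12 = [10, 12, 15, 18, 22, 27, 33, 39, 47, 56, 68, 82]

def divider_ratio(r_top, r_bottom):
    """Output fraction of an ideal resistive voltage divider."""
    return r_bottom / (r_top + r_bottom)

def success(ratio, target=0.25, tol=0.01):
    """The verification step: is the candidate close enough?"""
    return abs(ratio - target) <= tol

solutions = [(rt, rb) for rt, rb in product(E12, E12)
             if success(divider_ratio(rt, rb))]
print(solutions)
```

The verifier (`success`) is doing all the real work; without one, the same enumeration would just be guessing, which matches the comment's point about hallucination.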
| ▲ | jacquesm a day ago | parent [-] | | This parallels my own experience so far, the problem for me is that (1) and (2) I can quickly and easily do myself and I'll do it in a way that respects the original author's copyright by including their work - and license - verbatim. (3) and (4) level problems are the ones where I struggle tremendously to make any headway even without AI, usually this requires the learning of new domain knowledge and exploratory code (currently: sensor fusion) and these tools will just generate very plausible nonsense which is more of a time waster than a productivity aid. My middle-of-the-road solution is to get as far as I can by reading about the problem so I am at least able to define it properly and to define test cases and useful ranges for inputs and so on, then to write a high level overview document about what I want to achieve and what the big moving parts are and then only to resort to using AI tools to get me unstuck or to serve as a knowledge reservoir for gaps in domain knowledge. Anybody that is using the output of these tools to produce work that they do not sufficiently understand is going to see a massive gain in productivity, but the underlying issues will only surface a long way down the line. |
|
|
| |
| ▲ | camdenreslink 2 days ago | parent | prev | next [-] | | Sometimes you do need to (as a human) break down a complex thing into smaller simple things, and then ask the LLM to do those simple things. I find it still saves some time. | | |
| ▲ | ragequittah 2 days ago | parent [-] | | Or what will often work is having the LLM break it down into simpler steps and then running them 1 by 1. They know how to break down problems fairly well; they just sometimes don't do it properly unless you explicitly prompt them to. | | |
| ▲ | jacquesm 2 days ago | parent [-] | | Yes, but for that you have to know that the output it gave you is wrong in the first place and if that is so you didn't need AI to begin with... |
|
| |
| ▲ | djeastm a day ago | parent | prev | next [-] | | Possibly, but a lot of value comes from doing very simple things faster. | | | |
| ▲ | snet0 2 days ago | parent | prev [-] | | If you define "simple thing" as "thing an AI can't do", then yes. Everyone just shifts the goalposts in these conversations, it's infuriating. | | |
| ▲ | ACCount37 2 days ago | parent [-] | | Come on. If we weren't shifting the goalposts, we would have burned through 90% of the entire supply of them back in 2022! | | |
| ▲ | baq 2 days ago | parent [-] | | It’s less shifting goalposts and more of a very jagged frontier of capabilities problem. |
|
|
|
| |
| ▲ | verdverm 2 days ago | parent | prev [-] | | I'm not sure. Here's my anecdotal counterexample: I was able to get gemini-2.5-flash, in two turns, to understand and implement something I had done separately first, and it found another bug (also one that I had fixed, but forgot was in this path). That I was able to have a flash model replicate the same solution I had, to two problems in two turns, is just the opposite experience from your consistency argument. I'm using tasks I've already solved as the evals while developing my custom agentic setup (prompts/tools/envs). They are able to do more of them today than they were even 6-12 months ago (pre-thinking models). https://bsky.app/profile/verdverm.com/post/3m7p7gtwo5c2v | | |
| ▲ | stego-tech 2 days ago | parent [-] | | And therein lies the rub for why I still approach this technology with caution, rather than charge in full steam ahead: variable outputs based on immensely variable inputs. I read stories like yours all the time, and it encourages me to keep trying LLMs from almost all the major vendors (Google being a noteworthy exception while I try and get off their platform). I want to see the magic others see, but when my IT-brain starts digging in the guts of these things, I’m always disappointed at how unstructured and random they ultimately are. Getting back to the benchmark angle though, we’re firmly in the era of benchmark gaming - hence my quip about these things failing “the only benchmark that matters.” I meant for that to be interpreted along the lines of, “trust your own results rather than a spreadsheet matrix of other published benchmarks”, but I clearly missed the mark in making that clear. That’s on me. | | |
| ▲ | verdverm 2 days ago | parent [-] | | I mean more the guts of the agentic systems: prompts, tool design, state and session management, agent transfer and escalation. I come from devops and backend dev, so getting in at this level, where LLMs are tasked and composed, is more interesting. If you are only using provider LLM experiences, and not something specific to coding like Copilot or Claude Code, that would be the first step to getting the magic, as you say. It is also not instant. It takes time to learn any new tech, and this one has an above-average learning curve, despite the facade and hype that it should just be magic. Once you find the stupid shit in the vendor coding agents, like all us IT/devops folks do eventually, you can go a level down and build on something like the ADK to bring your expertise and experience to the building blocks. For example, I am now implementing environments for agents based on container layers and Dagger, which unlocks the ability to cheaply and reproducibly clone what one agent was doing and have a dozen variations iterate on the next turn. Really useful for long-term training data and eval synthesis, but also for my own experimentation as I learn how to get better at using these things. Another thing I did was change how filesystem operations look to the agent, in particular file reads. I did this to save context & money (finops), after burning $5 in 60s because of an error in my tool implementation. Instead of having them as message contents, they are now injected into the system prompt. Doing so made it trivial to add a key/val "cache" for the fun of it, since I could now inject things into the system prompt and let the agent have some control over that process through tools. Boy has that been interesting and opened up some research questions in my mind | | |
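[Ed.: a minimal sketch of the file-reads-in-the-system-prompt plus key/val cache idea described above; every name and structural choice here is invented for illustration and is not the ADK's API or the poster's actual implementation:]

```python
class PromptCache:
    """Key/value store the agent writes to via tool calls; entries are
    re-rendered into the system prompt each turn instead of piling up
    as message history (saving context tokens on repeated file reads)."""

    def __init__(self, base_prompt):
        self.base_prompt = base_prompt
        self.entries = {}  # e.g. file path -> file contents

    def put(self, key, value):
        # Exposed to the agent as a tool, so it controls the cache.
        self.entries[key] = value

    def render_system_prompt(self):
        sections = [self.base_prompt]
        for key, value in sorted(self.entries.items()):
            sections.append(f"--- {key} ---\n{value}")
        return "\n\n".join(sections)

cache = PromptCache("You are a coding agent.")
cache.put("src/main.py", "print('hello')")
print(cache.render_system_prompt())
```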
| ▲ | remich 2 days ago | parent [-] | | Any particular papers or articles you've been reading that helped you devise this? Your experiments sound interesting and possibly relevant to what I'm doing. |
|
|
|
| |
| ▲ | quantumHazer 2 days ago | parent | prev | next [-] | | Seems pretty false if you look at the model card and web site of Opus 4.5 that is… (check notes) their latest model. | | |
| ▲ | verdverm 2 days ago | parent [-] | | Building a good model generally means it will do well on benchmarks too. The point of the speculation is that Anthropic is not focused on benchmaxxing, which is why they have models people like to use for their day-to-day. I use Gemini; Anthropic stole $50 from me (they expired and kept my prepaid credits) and I have not forgiven them for it yet, but people rave about Claude for coding so I may try the model again through Vertex AI... The person who made the speculation was, I believe, talking more about blog posts and media statements than model cards. Most AI announcements come with benchmark touting; Anthropic supposedly does less / little of this in their announcements. I haven't seen or gathered the data to know what is true. | | |
| ▲ | elcritch 2 days ago | parent [-] | | You could try Codex cli. I prefer it over Claude code now, but only slightly. | | |
|
| |
| ▲ | brokensegue 2 days ago | parent | prev | next [-] | | how do you quantitatively measure day-to-day quality? only thing i can think is A/B tests which take a while to evaluate | | |
| ▲ | verdverm 2 days ago | parent [-] | | More or less this, but also synthetic. If you think about GANs, it's all the same concept: 1. train a model (agent) 2. train another model (agent) to do something interesting with/to the main model 3. gain new capabilities 4. iterate. You can use a mix of both real and synthetic chat sessions, or whatever you want your model to be good at. Mid/late training seems to be where you start crafting personality and expertise. Getting into the guts of agentic systems has me believing we have quite a bit of runway for iteration here, especially as we move beyond single model / LLM training. I still need to get into what all is du jour in RL / late training; that's where a lot of opportunity lies from my understanding so far. Nathan Lambert (https://bsky.app/profile/natolambert.bsky.social)
from Ai2 (https://allenai.org/)
&
RLHF Book (https://rlhfbook.com/)
has a really great video out yesterday about the experience training Olmo 3 Think https://www.youtube.com/watch?v=uaZ3yRdYg8A |
| |
| ▲ | Mistletoe 2 days ago | parent | prev | next [-] | | How do you measure whether it works better day to day without benchmarks? | | |
| ▲ | bulbar 2 days ago | parent | next [-] | | Manually labeling answers, maybe? There's a lot of infrastructure built around that; it's been heavily used for two decades and it's relatively cheap. That's still benchmarking of course, but not utilizing any of the well-known public benchmarks. | |
| ▲ | verdverm 2 days ago | parent | prev | next [-] | | Internal evals, Big AI certainly has good, proprietary training and eval data, it's one reason why their models are better | | |
| ▲ | aydyn 2 days ago | parent [-] | | Then publish the results of those internal evals. Public benchmark saturation isn't an excuse to be un-quantitative. | | |
| ▲ | verdverm 2 days ago | parent [-] | | How would published numbers be useful without knowing what the underlying data being used to test and evaluate them are? They are proprietary for a reason To think that Anthropic is not being intentional and quantitative in their model building, because they care less for the saturated benchmaxxing, is to miss the forest for the trees | | |
| ▲ | aydyn 2 days ago | parent [-] | | Do you know everything that exists in public benchmarks? They can give a description of what their metrics are without giving away anything proprietary. | | |
| ▲ | verdverm 2 days ago | parent [-] | | I'd recommend watching Nathan Lambert's video he dropped yesterday on Olmo 3 Thinking. You'll learn there's a lot of places where even descriptions of proprietary testing regimes would give away some secret sauce Nathan is at Ai2 which is all about open sourcing the process, experience, and learnings along the way | | |
| ▲ | aydyn a day ago | parent [-] | | Thanks for the reference, I'll check it out. But it doesn't really take away from the point I am making. If a level of description would give away proprietary information, then go one level up to a vaguer description. How to describe things at a proper level is more of a social problem than a technical one. |
|
|
|
|
| |
| ▲ | standardUser 2 days ago | parent | prev [-] | | Subscriptions. | | |
| ▲ | mrguyorama 2 days ago | parent [-] | | Ah yes, humans are famously empirical in their behavior and we definitely do not have direct evidence of the "best" sports players being much more likely than the average to be superstitious or do things like wear "lucky underwear" or buy right into scam bracelets that "give you more balance" using a holographic sticker. |
|
| |
| ▲ | HDThoreaun 2 days ago | parent | prev [-] | | Arc-AGI is just an IQ test. I don’t see the problem with training it to be good at IQ tests, because that’s a skill that translates well. | |
| ▲ | fwip 2 days ago | parent | next [-] | | It is very similar to an IQ test, with all the attendant problems that entails. Looking at the Arc-AGI problems, it seems like visual/spatial reasoning is just about the only thing they are testing. | |
| ▲ | CamperBob2 2 days ago | parent | prev | next [-] | | Exactly. In principle, at least, the only way to overfit to Arc-AGI is to actually be that smart. Edit: if you disagree, try actually TAKING the Arc-AGI 2 test, then post. | | |
| ▲ | npinsker 2 days ago | parent | next [-] | | Completely false. This is like saying being good at chess is equivalent to being smart. Look no farther than the hodgepodge of independent teams running cheaper models (and no doubt thousands of their own puzzles, many of which surely overlap with the private set) that somehow keep up with SotA, to see how impactful proper practice can be. The benchmark isn’t particularly strong against gaming, especially with private data. | | |
| ▲ | mrandish 2 days ago | parent | next [-] | | ARC-AGI was designed specifically for evaluating deeper reasoning in LLMs, including being resistant to LLMs 'training to the test'. If you read Francois' papers, he's well aware of the challenge and has done valuable work toward this goal. | | |
| ▲ | npinsker 2 days ago | parent [-] | | I agree with you. I agree it's valuable work. I totally disagree with their claim. A better analogy is: someone who's never taken the AIME might think "there are an infinite number of math problems", but in actuality there are a relatively small, enumerable number of techniques that are used repeatedly on virtually all problems. That's not to take away from the AIME, which is quite difficult -- but not infinite. Similarly, ARC-AGI is much more bounded than they seem to think. It correlates with intelligence, but doesn't imply it. | | |
| ▲ | yovaer 2 days ago | parent | next [-] | | > but in actuality there are a relatively small, enumerable number of techniques that are used repeatedly on virtually all problems IMO/AIME problems perhaps, but surely that's too narrow a view for all of mathematics. If solving conjectures were simply a matter of trying a standard range of techniques enough times, then there would be a lot fewer open problems around than what's the case. | |
| ▲ | keeda 2 days ago | parent | prev [-] | | Maybe I'm misinterpreting your point, but this makes it seem that your standard for "intelligence" is "inventing entirely new techniques"? If so, it's a bit extreme, because to a first approximation, all problem solving is combining and applying existing techniques in novel ways to new situations. At the point that you are inventing entirely new techniques, you are usually doing groundbreaking work. Even groundbreaking work in one field is often inspired by techniques from other fields. In the limit, discovering truly new techniques often requires discovering new principles of reality to exploit, i.e. research. As you can imagine, this is very difficult and hence rather uncommon, typically only accomplished by a handful of people in any given discipline, i.e way above the standards of the general population. I feel like if we are holding AI to those standards, we are talking about not just AGI, but artificial super-intelligence. |
|
| |
| ▲ | CamperBob2 2 days ago | parent | prev [-] | | > Completely false. This is like saying being good at chess is equivalent to being smart. No, it isn't. Go take the test yourself and you'll understand how wrong that is. Arc-AGI is intentionally unlike any other benchmark. | |
| ▲ | fwip 2 days ago | parent [-] | | Took a couple just now. It seems like a straight-forward generalization of the IQ tests I've taken before, reformatted into an explicit grid to be a little bit friendlier to machines. Not to humble-brag, but I also outperform on IQ tests well beyond my actual intelligence, because "find the pattern" is fun for me and I'm relatively good at visual-spatial logic. I don't find their ability to measure 'intelligence' very compelling. | | |
| ▲ | CamperBob2 2 days ago | parent [-] | | Given your intellectual resources -- which you've successfully used to pass a test that is designed to be easy for humans to pass while tripping up AI models -- why not use them to suggest a better test? The people who came up with Arc-AGI were not actually morons, but I'm sure there's room for improvement. What would be an example of a test for machine intelligence that you would accept? I've already suggested one (namely, making up more of these sorts of tests) but it'd be good to get some additional opinions. | | |
| ▲ | fwip 2 days ago | parent [-] | | Dunno :) I'm not an expert at LLMs or test design, I just see a lot of similarity between IQ tests and these questions. |
|
|
|
| |
| ▲ | ACCount37 2 days ago | parent | prev | next [-] | | With this kind of thing, the tails ALWAYS come apart, in the end. They come apart later for more robust tests, but "later" isn't "never", far from it. Having a high IQ helps a lot in chess. But there's a considerable "non-IQ" component in chess too. Let's assume "all metrics are perfect" for now. Then, when you score people by "chess performance"? You wouldn't see the people with the highest intelligence ever at the top. You'd get people with pretty high intelligence, but extremely, hilariously strong chess-specific skills. The tails came apart. Same goes for things like ARC-AGI and ARC-AGI-2. It's an interesting metric (isomorphic to the progressive matrix test? usable for measuring human IQ perhaps?), but no metric is perfect - and ARC-AGI is biased heavily towards spatial reasoning specifically. | |
| ▲ | jimbokun 2 days ago | parent | prev | next [-] | | Is it different every time? Otherwise the training could just memorize the answers. | | |
| ▲ | CamperBob2 2 days ago | parent [-] | | The models never have access to the answers for the private set -- again, at least in principle. Whether that's actually true, I have no idea. The idea behind Arc-AGI is that you can train all you want on the answers, because knowing the solution to one problem isn't helpful on the others. In fact, the way the test works is that the model is given several examples of worked solutions for each problem class, and is then required to infer the underlying rule(s) needed to solve a different instance of the same type of problem. That's why comparing Arc-AGI to chess or other benchmaxxing exercises is completely off base. (IMO, an even better test for AGI would be "Make up some original Arc-AGI problems.") |
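[Ed.: the worked-examples setup described here can be sketched as a toy: infer which candidate rule explains the example input/output pairs, then apply it to a fresh input. The rules and grids are invented for illustration; real ARC-AGI rules are far more open-ended than a fixed candidate list:]

```python
# Candidate transformation rules (real ARC rules are not enumerable
# like this; the point is only to show the inference-from-examples shape).
def flip_h(grid):
    return [row[::-1] for row in grid]

def transpose(grid):
    return [list(col) for col in zip(*grid)]

def invert(grid):
    return [[1 - v for v in row] for row in grid]

CANDIDATES = [flip_h, transpose, invert]

def infer_rule(examples):
    """Return the first candidate rule consistent with all worked pairs."""
    for rule in CANDIDATES:
        if all(rule(inp) == out for inp, out in examples):
            return rule
    return None

examples = [([[1, 0], [0, 0]], [[0, 1], [0, 0]]),
            ([[1, 1], [1, 0]], [[1, 1], [0, 1]])]
rule = infer_rule(examples)
print(rule.__name__)            # flip_h
print(rule([[0, 1], [1, 1]]))   # [[1, 0], [1, 1]]
```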
| |
| ▲ | esafak 2 days ago | parent | prev | next [-] | | I would not be so sure. You can always prep to the test. | | |
| ▲ | HDThoreaun 2 days ago | parent [-] | | How do you prep for ARC-AGI? If the answer is just "get really good at pattern recognition", I do not see that as a negative at all. | | |
| ▲ | ben_w 2 days ago | parent [-] | | It can be not-negative without being sufficient. Imagine that pattern recognition is 10% of the problem, and we just don't know what the other 90% is yet. Streetlight effect for "what is intelligence" leads to all the things that LLMs are now demonstrably good at… and yet, the LLMs are somehow missing a lot of stuff and we have to keep inventing new street lights to search underneath: https://en.wikipedia.org/wiki/Streetlight_effect | | |
| ▲ | HDThoreaun 2 days ago | parent [-] | | I don't think many people are saying 100% on ARC-AGI 2 is equivalent to AGI (names are dumb as usual). It's just the best metric I have found, not the final answer. Spatial reasoning is an important part of intelligence even if it doesn't encompass all of it. |
|
|
| |
| ▲ | FergusArgyll 2 days ago | parent | prev [-] | | It's very much a vision test. The reason all the models don't pass it easily is only because of the vision component. It doesn't have much to do with reasoning at all |
| |
|
|