sigseg1v a day ago

Curious if you've tested something such as:

- "First, calculate the orbital radius. To do this accurately, measure the average diameter of each planet, p, and the average distance from the center of the image to the outer edge of the planets, x, and calculate the orbital radius r = x - p"

- "Next, write a unit test script that we will run that reads the rendered page and confirms that each planet is on the orbital radius. If a planet is not, output the difference you must shift it by to make the test pass. Use this feedback until all planets are perfectly aligned."
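To make the second prompt concrete, a minimal sketch of what such a check could look like, assuming planet centers have already been detected somehow (the helper name and tolerance are hypothetical, not from the project):

```python
import math

def check_orbits(centers, image_center, r, tolerance=1.0):
    """Return [(center, radial_error), ...] for every planet that is off
    the orbital radius r. A positive error means the planet sits too far
    out; negative means too far in. The error is the shift to feed back."""
    cx, cy = image_center
    failures = []
    for (x, y) in centers:
        dist = math.hypot(x - cx, y - cy)
        error = dist - r
        if abs(error) > tolerance:
            failures.append(((x, y), error))
    return failures
```

The agent would run this after each render and keep iterating until the list of failures is empty.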

Aurornis a day ago | parent | next [-]

This is my experience with using LLMs for complex tasks: if you're lucky they'll figure it out from a simple description, but getting most things done the way you expect requires a lot of explicit direction, test creation, iteration, and tokens.

One of the keys to being productive with LLMs is learning to recognize when babysitting the LLM into the right result will take much more effort than simply doing the work yourself.

jazzyjackson a day ago | parent | next [-]

Re: tokens, there is a point where you have to decide what it's worth to you. I'd been unimpressed with what I could get out of chat apps, but when I wanted a Rails app that would have cost me thousands in developer time, plus weeks of Zoom meetings and iteration cycles, I bit the bullet and kept topping up the Claude API. I spent about $500 on Opus over the course of a weekend, but the site is done and works great.

jacquesm a day ago | parent | prev [-]

It would not be the first time that an IT services provider makes more money the worse their products perform.

thecr0w a day ago | parent | prev | next [-]

Hm, I didn't try exactly this, but I probably should!

Wrt the unit test script, let's take Claude out of the equation: how would you design the unit test? I kept running into either Claude or some library not being able to consistently identify planet vs. non-planet, which was hindering Claude's ability to make decisions based on fine detail or "pixel coordinates", if that makes sense.

yfontana 9 hours ago | parent | next [-]

If I were to do this (and I might give it a try, this is quite an interesting case), I would try to run a detection model on the image, to find bounding boxes for the planets and their associated text. Even a small model running on CPU should be able to do this relatively quickly.
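If the planets render as solid shapes on a plain background, the detection step may not even need a model: thresholding plus connected-components labeling gives bounding boxes deterministically. A minimal numpy-only sketch (the function name is illustrative; a real pipeline would more likely use something like `scipy.ndimage.label`):

```python
import numpy as np
from collections import deque

def find_blobs(mask):
    """Label 4-connected components in a boolean mask and return their
    bounding boxes as (row_min, col_min, row_max, col_max) tuples."""
    visited = np.zeros_like(mask, dtype=bool)
    boxes = []
    rows, cols = mask.shape
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not visited[r, c]:
                # BFS flood fill to collect one blob
                q = deque([(r, c)])
                visited[r, c] = True
                rmin = rmax = r
                cmin = cmax = c
                while q:
                    y, x = q.popleft()
                    rmin, rmax = min(rmin, y), max(rmax, y)
                    cmin, cmax = min(cmin, x), max(cmax, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            q.append((ny, nx))
                boxes.append((rmin, cmin, rmax, cmax))
    return boxes
```

The center of each box approximates the planet center, and half the box width approximates its radius.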

cfbradford a day ago | parent | prev [-]

Do you give Claude the screenshot as a file? If so I’d just ask it to write a tool to diff each asset to every possible location in the source image to find the most likely position of each asset. You don’t really need recognition if you can brute force the search. As a human this is roughly what I would do if you told me I needed to recreate something like that with pixel perfect precision.
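The brute-force search described above is essentially template matching. A minimal numpy sketch (hypothetical helper; in practice `cv2.matchTemplate` does this much faster):

```python
import numpy as np

def locate_asset(asset, image):
    """Slide `asset` over every position in `image` (both 2-D grayscale
    arrays) and return the (row, col) offset with the smallest sum of
    absolute differences, i.e. the most likely placement of the asset."""
    ah, aw = asset.shape
    ih, iw = image.shape
    best_score, best_pos = None, None
    for r in range(ih - ah + 1):
        for c in range(iw - aw + 1):
            window = image[r:r + ah, c:c + aw]
            score = np.abs(window.astype(int) - asset.astype(int)).sum()
            if best_score is None or score < best_score:
                best_score, best_pos = score, (r, c)
    return best_pos
```

Running this once per asset gives exact pixel positions to compare against the intended layout, with no recognition model involved.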

thecr0w a day ago | parent [-]

Ok! I'll give it a shot. Over a few iterations I gave him screenshots, I gave him the ability to take screenshots, and I gave him the Playwright MCP. I kind of gave up on the path you're suggesting (though I didn't get super far along) because I felt like I would eventually run into this problem of needing a model to figure out what a planet is, where the edge of the planet is, etc.

But if that could be done deterministically, I totally agree this is the way to go. I'll put some more time into it over the next couple weeks.

bluedino a day ago | parent | prev | next [-]

Congratulations, we finally created 'plain English' programming languages. It only took 1/10th of the world's electricity and 40% of the semiconductor production.

turnsout a day ago | parent | prev [-]

Yes, this is a key step when working with an agent—if they're able to check their work, they can iterate pretty quickly. If you're in the loop, something is wrong.

That said, I love this project. haha

monsieurbanana a day ago | parent [-]

I'm trying to understand why this comment got downvoted. My best guess is that "if you're in the loop, something is wrong" is interpreted as there should be no human involvement at all.

The loop here, imo, refers to the feedback loop. And it's true that ideally there should be no human involvement there. A tight feedback loop is as important for llms as it is for humans. The more automated you make it, the better.

turnsout a day ago | parent [-]

Yes, maybe I goofed on the phrasing. If you're in the feedback loop, something is wrong. Obviously a human should be "in the loop" in the sense that they're aware of and reviewing what the agent is doing.