Show HN: Sculptor – A UI for Claude Code (imbue.com)
167 points by thejash 2 days ago | 82 comments

Hey, I'm Josh, cofounder of Imbue. We built Sculptor because we wanted a great UI for parallel coding agents.

We love Claude Code, but wanted to solve some of the problems that come from running multiple agents in parallel (ex: merge conflicts with multiple agents, reinstalling dependencies with git worktrees, Claude Code deleting your home directory, etc).

Sculptor is a desktop app that lets you safely run Claude Code agents by putting them in separate docker containers. This lets you use Claude without having to compromise on security or deal with annoying tool permission prompts. Then you can just tell Claude to keep running the code until it actually works.

To help you easily work with containerized agents, we created “Pairing Mode”: bidirectionally sync the agent’s code into your IDE and test/edit together in real time. You can also simply pull and push manually if you want.

We have some more cool features planned on our roadmap that are enabled by this approach, like the ability to “fork” conversations (and the entire state of the container), or roll back to a previous state.

It’s still very early, but we would love your feedback.

Sculptor itself is free to use, so please try it out and let us know what you think!

dalejh 2 days ago | parent | next [-]

Congrats on the launch Imbue team!

I used Sculptor to build most of https://lingolog.app/ (featured in this post).

It was a blast - I was cooking dinner and blasting out features, coming back to see what Sculptor had cooked up for me in the meantime. I also painted the landing page in procreate while Sculptor was whirring away.

Of course, this meant that my time shifted from producing code to reviewing code. I found the diffs, Sculptor's internal to-do list, and summaries all helpful to this end.

n.b. I'm not affiliated with the team, but I worked with some Imbue team members many years ago, which led to my becoming a beta tester.

kanjun 2 days ago | parent | next [-]

I'm so happy to hear this — your experience was what we hoped to enable!

bfogelman 2 days ago | parent | prev [-]

lffgggg excited to see where you take lingo log :)

jMyles 2 days ago | parent | prev | next [-]

So... are we all just working on various ways of using Claude Code in docker with git worktrees? Is that like, the whole world's project this month? :-)

nvader 2 days ago | parent | next [-]

Seems like an important project to unlock a whole lot of productivity.

That said, Sculptor does not use worktrees, but that is an implementation detail.

manojlds 2 days ago | parent | prev | next [-]

It's the new TODO app. Anthropic are just going to build one or acquire one of these soon and the rest will be dead.

bfogelman 2 days ago | parent | prev [-]

haha honestly a little bit, ya. One key thing we've learned from working on this is that lowering the barrier to working in parallel is key. Making it easy to merge, switch context, etc. is all important as you try to parallelize things. I'm pretty excited about "pairing mode" for this reason, as it mirrors an agent's branch locally so you can make your own edits quickly and test changes.

We've also shipped "suggestions" in beta (think CI pipelines for your parallel agents), which might feel a little different. The idea is to use LLMs and your regular coding tools (pytest, pyre, ...) to verify that the code produced by the agents is actually correct.

redhale 2 days ago | parent | prev | next [-]

This looks awesome!

I really hope there is planned support for other coding agents too, in particular OpenCode which seems to have relatively close feature parity coupled with wide model compatibility and open source.

thejash 2 days ago | parent [-]

Definitely! I'm very excited to add support both for other coding agents and for as many language models (and providers) as we can.

Eventually what we want is for the whole thing to be open -- Sculptor, the coding agent, the underlying language model, etc.

myflash13 2 days ago | parent | prev | next [-]

It's not clear to me what a "container" and "pairing" is in this context. What if my application is not dockerized? Can Claude Code execute tests by itself in the context of the container when not paired? This requires all the dependencies, database, etc. - do they all share the same database? Running full containerized applications with many versions of Postgres at the same time sounds very heavy for a dev laptop. But if you don't isolate the database across parallel agents that means you have to worry about database conflicts, which sounds nasty.

In general I'm not even sure if the extra cognitive overload of agent multiplexing would save me time in the long run. I think I still prefer to work on one task at a time for the sake of quality and thoroughness.

However the feature I was most looking forward to is a mobile integration to check the agent status while away from keyboard, from my phone.

thejash 2 days ago | parent | next [-]

Replying to each piece:

> What if my application is not dockerized?

Then claude runs in a container created from our default image, and any code it executes will run in that container as well.

> Can Claude Code execute tests by itself in the context of the container when not paired?

Yup! It can do whatever you tell it. The "pairing" is purely optional -- it's just there in case you want to directly edit the agent's code from your IDE.

> Do they all share the same database?

We support custom docker containers, so you should be able to configure it however you want (eg, separate databases, or a shared one).

> Running full containerized applications with many versions of Postgres at the same time sounds very heavy for a dev laptop

Yeah -- it's not quite as bad if you run a single containerized Postgres and they each connect to a different database within that instance, but it's still a good point.
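
For reference, that lighter setup is just vanilla Docker/psql, nothing Sculptor-specific (all names here are made up):

    # one shared Postgres container for the whole machine
    docker run -d --name shared-pg -e POSTGRES_PASSWORD=dev -p 5432:5432 postgres:16
    # ...then one logical database per agent task inside that instance
    # (give Postgres a few seconds to start before creating them)
    docker exec shared-pg createdb -U postgres agent_task_1
    docker exec shared-pg createdb -U postgres agent_task_2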

One of the features on our roadmap (that I'm very excited about) is the ability to use fully remote containers (which definitely gets rid of this "heaviness", though it can get a bit expensive if you're not careful)

> the feature I was most looking forward to is a mobile integration to check the agent status while away from keyboard, from my phone.

That's definitely on the roadmap!

penlu 2 days ago | parent | prev | next [-]

in this context, the container contains the running claude instance, and pairing synchronizes its worktree with your local worktree.

under sculptor, claude code CAN execute tests by itself when not paired. that will also work for non-dockerized applications.

sharing a postgres across containers may require a bit of manual tweaking, but we support the devcontainer spec, so if you can configure e.g. your network appropriately that way, you can use a shared database as you like!

regarding multiplexing: the cognitive overhead is real. we are investigating mechanisms for reducing it. more on that later.

regarding mobile integration: we also want that! more on that later.

stpedgwdgfhgdd 2 days ago | parent | prev [-]

You can use git worktree to work in parallel in multiple terminal tabs. It does come with a higher cognitive load, though.

kveykva 2 days ago | parent | prev | next [-]

Even design wise this looks virtually identical to https://terragonlabs.com/

itchytoo 2 days ago | parent [-]

Imbue team member here. The biggest difference between Sculptor and Terragon is the collaboration model. With Terragon, the agent outputs PRs. This works well for simple tasks that agents can one-shot, but is a bit clunky to use for more complex tasks that require closer human-agent collaboration imo. On the other hand, Sculptor is designed for local collaboration. Our agents run in containers too, but we let you (bidirectionally) sync to the containers, which lets you stream in the agent's uncommitted changes, and collaborate in real time. So basically, it feels like you are using Claude Code locally, but you get the safety and parallelism of running Claude in containers. I find this much more usable for real world engineering tasks!

rsyring 2 days ago | parent | prev | next [-]

> Sculptor is free while we're in beta.

Ok, and then what? Honest question.

thejash 2 days ago | parent | next [-]

Our current plan is to make the source code available and make it free for personal use, but we're not quite ready to open-source it.

Someday we'll probably have paid plans and business / enterprise licenses available as well, but our focus right now is on making it really useful for people.

To me, the whole point of our company is to make these kinds of systems more open, understandable, and modifiable, so at least as long as I'm here, that's what we'll be doing :)

giancarlostoro 2 days ago | parent [-]

If Anthropic doesn't buy you guys out before then. This looks a little too nice, I could see them trying to acquihire your efforts.

BatteryMountain 2 days ago | parent | prev [-]

It's not free though. You pay for it by supplying your email address.

lrobinovitch 2 days ago | parent | prev | next [-]

Been fortunate to get to try out Sculptor in pre-release - it's great. Like claude code with task parallelism and history built in, all with a clean UI.

mentalgear 2 days ago | parent | prev | next [-]

Based on vibekit (open source)?

"VibeKit is a safety layer for your coding agent. Run Claude Code, Gemini, Codex — or any coding agent — in a clean, isolated sandbox with sensitive data redaction and observability baked in."

https://docs.vibekit.sh/cli

thejash 2 days ago | parent | next [-]

Nope, not based on vibekit, but it looks like a cool project!

Our approach is a bit more custom and deeply integrated with the coding agents (ex: we understand when the turn has finished and can snapshot the docker container, allowing rollbacks, etc)

We do also have a terminal though, so if you really wanted, I suppose you could run any text-based agent in there (although I've never tried that). Maybe we'll add better support for that as a feature someday :)

bfogelman 2 days ago | parent | prev | next [-]

nope but vibekit looks interesting -- will take a look

cma 2 days ago | parent | prev [-]

It might be possible to ask claude to write a claude code hook to take a docker snapshot after each finished answer with vibekit to avoid deeply integrating with another third party.
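
Something like a hook that runs a small snapshot script after each turn might do it; a rough, untested sketch (the container name is made up):

    #!/usr/bin/env sh
    # snapshot-on-stop.sh: untested sketch of a script a "turn finished" hook could call
    CONTAINER=claude-sandbox                        # whatever the agent's container is named
    TAG="claude-snapshots:$(date +%Y%m%d-%H%M%S)"   # timestamped image tag
    docker commit "$CONTAINER" "$TAG"               # freeze the container's current state as an image
    echo "snapshotted $CONTAINER as $TAG"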

bfogelman 2 days ago | parent | prev | next [-]

Member of the team here, happy to answer questions. Took a lot of ups, downs and work to get here but excited to finally get this out. Even more excited to share other features we've been cooking behind the scenes. Give it a try and let us know what you think, we're hungry for feedback.

cgarvis 2 days ago | parent [-]

Are you doing git worktrees in the backend?

penlu 2 days ago | parent | next [-]

no sir. only the fullest featured repositories for our free-range* claudes

* containerized, but meets free range standards

bfogelman 2 days ago | parent | prev [-]

right now we're using docker -- we're planning to support modal (https://modal.com/) for remote sandboxes and a "local" mode that might use something like worktrees

kspacewalk2 2 days ago | parent | prev | next [-]

How soon is Sculptor for Mac (Intel) coming? Excited to try it, but still hanging on to my last x86 MBP.

bfogelman 2 days ago | parent [-]

Hopefully in the next couple of days! You can join the Discord and we'll post an announcement when it's ready: https://discord.gg/GvK8MsCVgk

deaux 2 days ago | parent | prev | next [-]

This looks really good! Would immediately use it if it worked with Opencode. Qwen, GLM and Kimi are so fast and good nowadays that for many tasks they're the quicker option, while being much cheaper at the same time. Claude Code runs into limits below the $200 tier, and GPT-5, while great and cheaper, can be really slow.

SOLAR_FIELDS 2 days ago | parent [-]

Yea generally I find GPT 5 does better at reasoning but holy moly even in medium mode the thing can take minutes to come back with a response

meowface 2 days ago | parent | prev | next [-]

Looks good. Does the app have a dark theme option?

lauren-ipsum 2 days ago | parent [-]

sure does :)

sawyerjhood 2 days ago | parent | prev | next [-]

Wow this looks just like https://terragonlabs.com

bfogelman 2 days ago | parent | next [-]

See this comment for some differences: https://news.ycombinator.com/item?id=45428185

NewsaHackO 2 days ago | parent | prev [-]

Yes, but look at Imbue's Investors/Advisors. They obviously have big money behind them.

thadd3us 2 days ago | parent | prev | next [-]

Really proud to be a part of this team! And really excited for the future of Sculptor -- it has quickly become my favorite agentic coding tool because of the way it lets you safely and locally execute untrusted LLM code in an agentic loop, using a containerized environment that you control!

nvader 2 days ago | parent | prev | next [-]

Incidentally, a research preview of Sculptor is what I used to build my voice practice app, Vocal Mirror: https://danverbraganza.com/tools/vocal-mirror

kelsolaar 14 hours ago | parent | prev | next [-]

How does this compare to Vibe Kanban?

warthog 2 days ago | parent | prev | next [-]

Wasn't imbue training models for coding, having raised a huge fund? Is this a pivot?

thejash 2 days ago | parent [-]

Since our launch 2 years ago, we've focused more on the "agents that code" part of our vision (so that everyone can make software) rather than the "training models from scratch" part (because there were so many good open source models released since then)

This is from our fundraising post 2 years ago:

> Our goal remains the same: to build practical AI agents that can accomplish larger goals and safely work for us in the real world. To do this, we train foundation models optimized for reasoning. Today, we apply our models to develop agents that we can find useful internally, starting with agents that code. Ultimately, we hope to release systems that enable anyone to build robust, custom AI agents that put the productive power of AI at everyone’s fingertips.

- https://imbue.com/company/introducing-imbue/

We have trained a bunch of our own models since then, and are excited to say more about that in the future (but it's not the focus of this release)

warthog a day ago | parent [-]

what is the difference between this and conductor?

cchance 2 days ago | parent | prev | next [-]

Silly question, but what about GPT? It feels like, with the experimental APIs that most of the CLI clients added for interacting with them, it should be possible for something like this to run for GPT, Claude, or Gemini, no?

bfogelman 2 days ago | parent [-]

in the works! we want it to be possible to always have the best models and agents available

jwong_ a day ago | parent | prev | next [-]

Does it work with OrbStack? It doesn't seem to be detecting it correctly.

atonse 17 hours ago | parent [-]

This is the issue I’m facing. I’d love to try this and even downloaded it, but it’s not detecting OrbStack as a container runtime.

jwong_ 4 hours ago | parent [-]

I ended up getting it working, just had to start up OrbStack with `orb`

mangonomnom 2 days ago | parent | prev | next [-]

got to try this a bit and really liked the UI! It felt very transparent and understandable even for someone without a coding background

thejash 2 days ago | parent [-]

Thanks!

Please feel free to join discord if you run into any bugs or have any issues at all, we're happy to help: https://discord.gg/sBAVvHPUTE

Suggestions welcome too!

abefetterman 2 days ago | parent | prev | next [-]

Looks very cool, congrats on the launch!

handfuloflight 2 days ago | parent | prev | next [-]

How does it compare with https://conductor.build/?

kanjun 2 days ago | parent [-]

Great question! Agents in Sculptor run in containers vs. locally on your machine, so they can all execute code simultaneously (and won't destroy your machine).

Containers also unlock a cool agent-switching workflow, Pairing Mode: https://loom.com/share/1b02a925be42431da1721597687f7065

Ultimately, our roadmaps are pretty different — we're focused on ways to help you easily verify agent code, so that over time you can trust it more and work at a higher level.

Towards this, today we have a beta feature, Suggestions, that catches issues/bugs/times when Claude lies to you, as you're working. That'll get built out a lot over the next few months.

handfuloflight 2 days ago | parent [-]

Excellent, I'll be giving it a go this week! All the best.

kanjun 2 days ago | parent [-]

So happy to hear this! We'd love to hear what you think — feel free to ping me on X. We're also very active on Discord: https://discord.com/invite/sBAVvHPUTE

micimize 2 days ago | parent | prev | next [-]

Exciting stuff! A big step towards an accelerated AI-assisted SWE approach that avoids the trap of turning engineers into AI slop janitors

byyoung3 19 hours ago | parent | prev | next [-]

does it support ssh where claude is running on the remote machine?

icar 2 days ago | parent | prev | next [-]

Is this an Electron app, or what tech stack did you use to build it?

thejash a day ago | parent [-]

Yup -- electron app, typescript / react on the front end, python on the backend

sushimaki 12 hours ago | parent | prev | next [-]

Hello, discord invite is down :(

ajanuary 2 days ago | parent | prev | next [-]

Unfortunately the page repeatedly crashes and reloads on my iPhone 13 mini until it gives up.

kanjun 2 days ago | parent | next [-]

That's super strange — would you mind trying again on a different device? We can't repro. Appreciate your trying!

HellsMaddy 2 days ago | parent | prev | next [-]

I'm also unable to load it in chrome on linux (wayland backend). Seems like some sort of GPU issue.

MitziMoto 2 days ago | parent [-]

Same. Chrome on Manjaro with Wayland, just crashes.

penlu 2 days ago | parent | next [-]

more info on setup, if you can: are you using a non-intel GPU for rendering?

HellsMaddy 2 days ago | parent [-]

nvidia GPU for me with hardware acceleration enabled (required some command-line flags passed to chrome to get it working on wayland):

    google-chrome-stable --enable-gpu --ozone-platform=wayland --enable-features=UseOzonePlatform,WaylandLinuxDrmSyncobj
bfogelman 2 days ago | parent | prev [-]

hmm not ideal -- will try and take a look and see what's going wrong

55555 2 days ago | parent | prev [-]

Just adding that on my iPhone 13 Pro it works fine.

6Az4Mj4D 2 days ago | parent | prev | next [-]

Is there any high level "How does it work" document?

thejash a day ago | parent [-]

We have some docs here: https://github.com/imbue-ai/sculptor

If you have any specific questions that aren't covered there, please let us know in Discord!

timothygao 2 days ago | parent | prev | next [-]

This is super cool!

purplecats 2 days ago | parent | prev | next [-]

this is incredible! can you make this interface usable from a web browser (such as chrome) rather than an app?

thejash a day ago | parent [-]

It actually originally worked that way, and you can still mostly kinda use it that way, though CSRF protection makes it obnoxious: run the app once to figure out the command line that the backend python process is started with (eg, via `ps -Fe` on linux), then shut down the app and run that process yourself. As long as you don't set the `ELECTRON_APP_SECRET` env var, CSRF will be disabled. Use `netstat -pant | grep -i listen` to figure out what port it is listening on.
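
Roughly, the dance looks like this (untested sketch; the grep pattern is just a guess):

    ps -Fe | grep -i sculptor          # find the backend python command line
    # quit the Sculptor app, then re-run that exact command yourself,
    # with ELECTRON_APP_SECRET left unset so CSRF stays disabled
    netstat -pant | grep -i listen     # find the port the backend is listening on
    # then point your browser at http://localhost:<that port>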

Obviously not the most user friendly or usable, but we found that people often got pretty confused when this was a browser tab instead of a standalone app (it's easy to lose the tab, etc)

vdm 2 days ago | parent | prev | next [-]

vs https://sketch.dev?

millanjp 2 days ago | parent | prev | next [-]

This is awesome, I love how it lets claude code "go rogue" safely inside a container. I don't like how agents can screw with my filesystem so this isolation is good

kate_dirky 2 days ago | parent | prev | next [-]

Congrats on the launch! Looks awesome

kuroko 2 days ago | parent | prev | next [-]

Congrats, looks good! Lacks an option to configure the Anthropic base URL though; hope you will add something to configure env variables.

thejash 2 days ago | parent | next [-]

I haven't tried it, but it might work if you set the env var yourself (I think you can create a `.env` file at `~/.sculptor/.env` and it will be injected into the environment for the agent).

I'd give it a good 20% chance of working if you set the right environment variables in there :) Feel free to experiment in the "Terminal" tab as well, you can call claude directly from there to confirm if it works.
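
If you want to experiment, something like the following is what I'd attempt first (completely untested; the URL is a placeholder, and ANTHROPIC_BASE_URL is just the env var Claude Code normally reads):

    # untested: drop the override into Sculptor's env file
    mkdir -p ~/.sculptor
    echo 'ANTHROPIC_BASE_URL=https://your-proxy.example.com' >> ~/.sculptor/.env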

bfogelman 2 days ago | parent | prev [-]

ooh this is a great idea -- thanks for sharing!

kate_dirky 2 days ago | parent | prev | next [-]

Congrats! Looks awesome

pmarreck 2 days ago | parent | prev [-]

Cool, now do it for Codex and gemini-cli

danielmewes 2 days ago | parent [-]

It's on our roadmap!

Any particular features you'd be looking for when we add support for those models/agents?