101008 13 hours ago

There is something you are not explaining (at least I couldn't find it; sorry if you did): how do you manage app state? Basically, databases?

Most of these agent solutions focus on git branches and worktrees, but none of them mention databases. How do you handle them? In my projects, for example, this would mean I need ten different copies of my database. What about the other services involved, like Redis, Celery, etc.? Are you duplicating (10-plicating) all of them as well?

If this works flawlessly it would be very powerful, but I think it still needs to solve more issues than just filesystem conflicts.

avipeltz 12 hours ago | parent | next [-]

Great question! Currently Superset manages worktrees and runs setup/teardown scripts you define at project setup. Those scripts can install dependencies, transfer env variables, and spin up branching services.

For example:

• if you're using Neon/Supabase, your setup script can create a DB branch per workspace

• if you're using Docker, the script can launch isolated containers for Redis/Postgres/Celery/etc.

Currently we only orchestrate when the scripts run and have the user define what they do for each project, because every stack is different. This is a point of friction we're also addressing by adding features that help users automatically generate setup/teardown scripts that work for their projects.
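To make the orchestration idea concrete, here's a minimal sketch of what running a user-defined setup script per worktree could look like. The `scripts/setup.sh` path convention and the `WORKTREE` environment variable are illustrative assumptions, not Superset's actual interface:

```python
import os
import subprocess

def run_setup(worktree: str, script: str = "scripts/setup.sh") -> None:
    """Run a user-defined setup script inside a worktree, passing the
    worktree name so the script can namespace DB branches, container
    names, ports, etc. The teardown side would mirror this."""
    env = dict(os.environ, WORKTREE=os.path.basename(worktree))
    subprocess.run(["sh", script], cwd=worktree, env=env, check=True)
```

The tool only decides *when* this runs; what the script does (Neon branch, docker compose up, etc.) stays user-defined.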

We are also building cloud workspaces that will hopefully solve this issue for you, without limiting users to their local hardware.

jitl 6 hours ago | parent | prev | next [-]

I have my agent run all docker commands in the main worktree. Sometimes this is awkward, but mostly docker stuff is slow-changing. I never run the stuff I'm developing in docker; I always run it on the host directly.

For my current project (a Postgres proxy like PgBouncer) I had Claude write a benchmark system that's worktree-aware. I have flags like -a-worktree=… and -b-worktree=… so I can A/B benchmark between worktrees. Works great.
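For what it's worth, that flag shape parses fine with stock argparse (single-dash long options are supported); a sketch with the flag names from the comment, benchmark logic elided:

```python
import argparse

def parse_ab_args(argv):
    """Parse -a-worktree / -b-worktree so one harness can run the same
    workload against two checkouts of the code and compare results."""
    p = argparse.ArgumentParser(description="A/B benchmark two worktrees")
    p.add_argument("-a-worktree", dest="a_worktree", required=True,
                   help="path to the baseline worktree")
    p.add_argument("-b-worktree", dest="b_worktree", required=True,
                   help="path to the candidate worktree")
    return p.parse_args(argv)
```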

jpalomaki 9 hours ago | parent | prev | next [-]

Just use docker compose and spin up 10 stacks? That shouldn't be too much for a modern laptop. But it would be great if a tool like this could manage the ports (allocate a unique set for each worktree and add them to .env).
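That port management could be approximated by deriving a stable port block from the worktree name; a hypothetical sketch (the service list, base port, and .env key names are all assumptions):

```python
import hashlib
from pathlib import Path

BASE_PORT = 20000                      # assumed free range on the dev box
SERVICES = ["postgres", "redis", "web"]

def allocate_ports(worktree_name: str) -> dict:
    """Derive a stable block of host ports per worktree by hashing its
    name, so each compose stack binds non-conflicting ports."""
    offset = int(hashlib.sha256(worktree_name.encode()).hexdigest(), 16) % 1000
    base = BASE_PORT + offset * len(SERVICES)
    return {svc: base + i for i, svc in enumerate(SERVICES)}

def write_env(worktree: Path, ports: dict) -> None:
    """Write the allocated ports into the worktree's .env so
    docker compose can interpolate them."""
    lines = [f"{svc.upper()}_PORT={port}" for svc, port in ports.items()]
    (worktree / ".env").write_text("\n".join(lines) + "\n")
```

Hash-derived ports can still collide between worktrees, so a real tool would probably track allocations rather than derive them, but this covers the common case.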

For some cases Testcontainers [1] is an option as well. I'm using it for integration tests that need Postgres.

[1] https://testcontainers.com/

avipeltz 8 hours ago | parent [-]

That's what our setup/teardown scripts are for, but we plan on making their generation automatic.

reactordev 13 hours ago | parent | prev | next [-]

Why aren't you mocking your dependencies? I should be able to run a microservice without its 3rd-party dependencies and have it still work. If it doesn't, it's a distributed monolith.

For databases, if you can't see a connection string in the env vars, use an in-memory SQLite database (:memory:) and make a test DB like you do for unit testing.
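As a sketch of that fallback (the `DATABASE_URL` variable name and `connect_for_tests` helper are illustrative, not a standard):

```python
import os
import sqlite3

def connect_for_tests(schema_sql: str):
    """Honor a real connection string when one is configured; otherwise
    fall back to an in-memory SQLite database with the test schema."""
    if os.environ.get("DATABASE_URL"):
        # production path: hand the URL to the real driver (elided here)
        raise RuntimeError("real DB wiring is out of scope for this sketch")
    conn = sqlite3.connect(":memory:")
    conn.executescript(schema_sql)
    return conn
```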

For Redis, provide a mock implementation that gets/sets keys in a hash table or dictionary.
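A dict-backed stand-in of the kind described can be tiny; a minimal sketch covering only get/set/delete (no TTLs, pipelines, or pub/sub):

```python
class FakeRedis:
    """Minimal stand-in for a Redis client, backed by a plain dict.
    Enough for code that treats Redis as a key-value cache."""

    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value
        return True                     # real clients return an OK status

    def get(self, key):
        return self._data.get(key)      # None when missing, like redis-py

    def delete(self, key):
        return 1 if self._data.pop(key, None) is not None else 0
```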

Stop bringing your whole house to the camp site.

esafak 12 hours ago | parent [-]

Because the real thing is higher fidelity, but it can be expensive to boot up many times.

reactordev 11 hours ago | parent | next [-]

Higher fidelity?

What does that mean in this context?

What higher fidelity do you get with a real Postgres over an in-memory SQLite, or even pglite, or whatever?

The point isn't that you shouldn't have a database; the point is: what are your concerns? For me and my teams, we care about our code, the performance of that code, and the correctness of that code, and we don't test against a live database, so that we keep the separation of concerns between our app and its storage. We expect a database to be there. We expect it to have such-and-such schema. We don't expect it to live at a certain address or have a certain configuration, as that is the database's concern.

We tell our app at startup where that address is, or we don't. The app should only care whether we did or not; if not, it will need to make one in order to work.

This is the same logic as unit testing. If you're unit testing against a real database, that isn't unit testing; that's an integration test.

If you do care about the speed of your database and how your app scales, you aren't going to be measuring that on your local machine anyway.

esafak 9 hours ago | parent [-]

There is your idealization, and there is reality. Mocks are to be avoided. I reserve them for external dependencies.

> What higher fidelity do you get with a real postgres over a SQLite in memory or even pglite or whatever

You want them to have the same syntax and features, to the extent that you use them, or you'll have one code path for testing and another for production. For example, SQLite does not support ARRAYs or UUIDs natively, so you'll have to write a separate implementation. This is a vector for bugs.
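The extra code path is concrete: a Postgres `uuid` column just works, while SQLite needs an adapter/converter shim. A sketch using the stdlib sqlite3 hooks (the `uuid` column declaration is what triggers the converter under PARSE_DECLTYPES):

```python
import sqlite3
import uuid

# The extra code path SQLite forces on you: marshal UUIDs to/from text.
sqlite3.register_adapter(uuid.UUID, lambda u: str(u))
sqlite3.register_converter("uuid", lambda b: uuid.UUID(b.decode()))

conn = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
conn.execute("CREATE TABLE t (id uuid PRIMARY KEY)")

uid = uuid.uuid4()
conn.execute("INSERT INTO t VALUES (?)", (uid,))
(roundtripped,) = conn.execute("SELECT id FROM t").fetchone()
```

None of this exists in the Postgres path, which is exactly the "separate implementation" being warned about.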

reactordev 8 hours ago | parent [-]

You're right that SQLite doesn't support arrays or UUIDs natively. SQLite was only a suggestion for how one might go about separating database-engine concerns from data-layer concerns.

If you fail to understand why this separation is important, you'll fail to reason about why you'd do it in the first place, and you'll keep building apps like it's 1999: tightly coupled, needing the whole stack just to run your thing. God forbid you expand beyond just one team.

Leynos 12 hours ago | parent | prev [-]

pglite might be an option.

desireco42 10 hours ago | parent | prev [-]

For PG you can use that near-instant database copy they have: copy the DB per git worktree, work on the copy, then tear it down. With SQLite you would obviously just copy the file.
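If that refers to Postgres template databases, the per-worktree lifecycle reduces to two statements. A hypothetical helper that generates them (the `wt_` naming scheme and `appdb` template name are assumptions):

```python
def branch_db_sql(worktree: str, template: str = "appdb") -> tuple:
    """Build (setup, teardown) SQL for a per-worktree copy of a template
    database. CREATE DATABASE ... TEMPLATE does a file-level copy, which
    is fast for small/medium DBs, but requires that no one is connected
    to the template while it runs."""
    name = "wt_" + worktree.replace("-", "_")
    setup = f'CREATE DATABASE "{name}" TEMPLATE "{template}";'
    teardown = f'DROP DATABASE IF EXISTS "{name}";'
    return setup, teardown
```

A setup script would pipe the first statement through psql on worktree creation and the second on teardown.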