richardlblair 2 hours ago

I too am very excited. I had a voice recorder lying around and have worked it into my workflow over the past few months, though my AI assistant is just a cobbled-together set of Python scripts.

What are you using for your AI assistant?

stavros 2 hours ago

I made my own, as I thought OpenClaw was a bit too insecure:

https://github.com/skorokithakis/stavrobot

I love it; it's amazing. I want to add a small section to the README about how to use it well (how to manage memory and the database, basically), but it's just fantastic. It has had basically zero bugs as well.

justanotherunit 2 hours ago

Interesting, would you mind sharing your architectural setup? How does your index communicate with your agent server, and what is the main agent framework/engine you use?

Sounds like a cool concept to speak into your watch/wearable and have it automatically save or perform tasks on the fly.

What is the general execution time from:

Prompt received -> final task executed?

stavros 2 hours ago

So basically there's a /chat endpoint that goes to the LLM (a Pi agent). The LLM can call specific tools (web search, SQL execution, cron) but has no filesystem access, so the only thing it can do is exfiltrate data it can already see (a pretty big caveat, but you can't really avoid that, and it doesn't have access to anything on the host system).

There's a Signal bridge that runs in another container to connect to Signal, a Telegram webhook, and the other big component is a coding agent plus a tool container. The coding agent can write files to a directory that's also mounted in the tool container, and the tool container runs the tools. That way the coder is separated from everything else, and nothing has access to any of your keys.
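The "specific tools, no filesystem" setup could be sketched roughly like this; none of these names (call_tool, ALLOWED_TOOLS, the stub tools) are from the actual stavrobot codebase, they're just illustrative:

```python
# Hypothetical sketch of an LLM tool whitelist: the agent can only
# dispatch through call_tool(), and there is deliberately no
# filesystem tool in the registry.

ALLOWED_TOOLS = {
    # Stub implementations; a real bot would call out to actual services.
    "web_search": lambda query: f"results for {query!r}",
    "run_sql": lambda statement: "(query results would go here)",
    "schedule_cron": lambda spec, task: f"scheduled {task!r} at {spec}",
}

def call_tool(name, **kwargs):
    """Dispatch a tool call, refusing anything outside the whitelist.

    The worst a prompt-injected model can do here is misuse these three
    tools or leak data it can already see -- it cannot reach the host
    filesystem, because no such tool exists.
    """
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {name!r} is not allowed")
    return ALLOWED_TOOLS[name](**kwargs)
```

The security property comes from the registry being closed: adding capability means consciously adding an entry, not the model discovering one.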

You can't really avoid the coder exfiltrating your tool secrets, but at least it's separated. I also want to add a secondary container of "trusted" tools that the main LLM can call but no other LLM can change.
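One way to picture the trusted/untrusted split is two mount points: a read-only directory of trusted tools and a shared directory the coding agent writes into. The paths and function names below are hypothetical, not taken from stavrobot:

```python
# Sketch of a trusted vs. coder-written tool split, assuming two
# hypothetical mount points. Trusted tools live where no LLM can write;
# coder-written tools go in the shared directory the tool container runs.
import pathlib
import subprocess

TRUSTED_DIR = pathlib.Path("/opt/trusted-tools").resolve()  # read-only to all LLMs
SHARED_DIR = pathlib.Path("/srv/agent-tools").resolve()     # coding agent writes here

def resolve_tool(name: str, trusted: bool = False) -> pathlib.Path:
    """Map a tool name to a script path, blocking path traversal."""
    root = TRUSTED_DIR if trusted else SHARED_DIR
    script = (root / name).resolve()
    if root not in script.parents:
        # Rejects names like "../../etc/passwd" that escape the mount.
        raise PermissionError(f"{name!r} escapes {root}")
    return script

def run_tool(name: str, *args: str, trusted: bool = False) -> str:
    """Execute a tool script and return its stdout (runs in the tool container)."""
    script = resolve_tool(name, trusted=trusted)
    result = subprocess.run(
        ["python", str(script), *args],
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```

The main agent would call trusted tools with `trusted=True`; anything the coding agent produces only ever lands in the shared directory.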

This way you're assured that, for example, the agent can't contact anyone you don't want it to contact, or that it can read your emails but not send or delete them, things like that. It makes it very easy to enforce ACLs for things you don't want LLM-coded, while still enabling LLM coding of less-trusted programs.
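The "read but not send/delete" idea can be enforced at the object level: hand the agent a wrapper that simply doesn't have the dangerous methods. Everything here (EmailClient, ReadOnlyEmail) is a made-up stand-in, not the commenter's actual code:

```python
# Sketch of a capability-style ACL: the agent gets a wrapper object on
# which forbidden operations do not exist at all.

class EmailClient:
    """Hypothetical full-access client, kept out of the agent's reach."""
    def list_messages(self):
        return ["msg1", "msg2"]
    def read(self, msg_id):
        return f"body of {msg_id}"
    def send(self, to, body):
        ...  # never exposed to the agent
    def delete(self, msg_id):
        ...  # never exposed to the agent

class ReadOnlyEmail:
    """The agent-facing view: reads are forwarded, sends/deletes don't exist."""
    def __init__(self, client):
        self._client = client
    def list_messages(self):
        return self._client.list_messages()
    def read(self, msg_id):
        return self._client.read(msg_id)
```

Because the restriction lives in what the wrapper exposes rather than in a prompt, no amount of prompt injection can talk the agent into an operation the object doesn't have.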