tekacs 4 hours ago

This looks really interesting!

I _would_ be curious to try it, but...

My first question was whether I could use this for sensitive tasks, given that it's not running on our machines. And after poking around for a while, I didn't find a single mention of security anywhere (as far as I could tell!)

The only thing that I did find was zero data retention, which is mentioned as being 'on request' and only on the Enterprise plan.

I totally understand that you guys need to train and advance your model, but with suggested features like scraping behind login walls, it's a little hard to take the product seriously with neither a security story nor a retention policy anywhere on the site, so anything you could do to address those concerns would be amazing.

Again, you seem to have done some really cool stuff, so I'd love for it to be possible to use!

Update: The homepage says this in a feature box, which is... almost worse than saying nothing, because it doesn't mean anything? -> "Enterprise-grade security; End-to-end encryption, enterprise-grade standards, and zero-trust access controls keep your data protected in transit and at rest."

antves 3 hours ago

Thanks for bringing this point up!

We take security very seriously. One of the main advantages of using Smooth over running things on your personal device is that your agent gets a browser in a sandboxed machine with no credentials or permissions by default, so the agent can only see what you allow it to see. We also have some degree of guard-railing, which we will continue to mature over time: for example, you can control which URLs the agent is allowed to view and which are off limits.
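
As a concrete sketch of what a URL guard-rail like that can look like, here's a minimal Python example; the policy shape, host names, and path prefixes are all hypothetical illustrations, not Smooth's actual API:

    from urllib.parse import urlparse

    # Hypothetical guard-rail: gate every navigation against an allowlist
    # of hosts and a denylist of path prefixes. Illustrative only.
    ALLOWED_HOSTS = {"docs.example.com", "app.example.com"}
    BLOCKED_PATH_PREFIXES = ("/admin", "/billing")

    def navigation_allowed(url: str) -> bool:
        parts = urlparse(url)
        if parts.hostname not in ALLOWED_HOSTS:
            return False
        return not parts.path.startswith(BLOCKED_PATH_PREFIXES)

    assert navigation_allowed("https://docs.example.com/guide")
    assert not navigation_allowed("https://app.example.com/admin/users")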

Until we're able to run everything locally on device, there has to be a level of trust in the organizations that control the technology stack, from the LLM provider all the way down to the infrastructure providers. And this applies to any personal information you disclose, at any touchpoint, to any AI company.

I believe this trust is something that we, and every other company in the space, will need to continue to grow and mature with our community and our users.

johnys 4 hours ago

Curious: what are people using as the best open-source, locally hosted tools to have agents browse the web?

verdverm 3 hours ago

Playwright, the same thing we use when doing non-AI automation.

Fun fact: AI can use the same tools you do. We don't have to reinvent everything and slap a "built for AI" label on it.
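
For reference, driving a browser this way is a few lines with Playwright's Python API; a minimal sketch (the URL is just a placeholder):

    from playwright.sync_api import sync_playwright

    # Launch a headless browser, load a page, and read some content --
    # the same API you'd use for non-AI automation or testing.
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://example.com")   # placeholder URL
        print(page.title())                # page title
        print(page.inner_text("h1"))       # text of the first <h1>
        browser.close()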

antves 2 hours ago

We love these tools, but they were designed for testing, not for automation, and they are too low-level for an AI agent to use as-is.

For example, the Playwright MCP is unreliable and inefficient to use. To mention a few issues: it does not correctly pierce through the different frames on a page, and it does not handle the variety of edge cases that exist on the web, so it often can't click the button it needs to click. And because it lacks control over context design, it can't optimize for contextual operations, so your LLM trace gets polluted with an incredible number of useless tokens. This increases cost, task complexity for the LLM, and latency.
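
To make the frame point concrete, here's roughly what piercing an iframe looks like in raw Playwright; the page URL, iframe selector, and button name are hypothetical:

    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com/checkout")  # hypothetical page
        # A button nested inside an iframe can't be reached by a plain
        # page-level locator; you have to pierce the frame explicitly.
        page.frame_locator("#payment-iframe") \
            .get_by_role("button", name="Pay").click()
        browser.close()

A tool that doesn't do this kind of frame traversal automatically will simply fail to find the element.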

On top of that, these tools rely on the accessibility tree, which is just not a viable approach for a huge number of websites.
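
For context, the accessibility-tree view such tools consume can be dumped in recent Playwright versions (a sketch using aria_snapshot; on canvas-heavy or poorly labeled sites the tree comes back sparse or misleading):

    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com")
        # Dump the page's accessibility tree as YAML -- the kind of
        # representation snapshot-based agent tools feed to the model.
        print(page.locator("body").aria_snapshot())
        browser.close()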

verdverm 2 hours ago

Again (see my other comment): you are not listening to users and asking questions, you are telling them they are wrong.

You describe problems I don't have. I'm happy with Playwright and other scraping tools, and certainly not frustrated enough to pay to send my data to a third party.

antves an hour ago

Have you tried any other AI browser automation tools? We'd be curious to hear about your use cases, because the ones we've been working on with our customers involve scenarios where traditional Playwright automations are not viable, e.g. they operate on net-new websites and net-new tasks for each execution.

verdverm an hour ago

I'm unwilling to send my data to a third party that is so new on the scene.

Consider me a late adopter, because I care about the security of my data. (And no, whatever you say about security will not change my mind; a track record and broader industry penetration might.)

Make it self-hostable and the conversation can change.