▲ A verification layer for browser agents: Amazon case study (sentienceapi.com)
43 points by tonyww 18 hours ago | 19 comments
A common approach to automating Amazon shopping or similar complex websites is to reach for large cloud models (often vision-capable). I wanted to test the opposite: can a ~3B-parameter local LLM complete the flow using only structural page data (DOM) plus deterministic assertions?

This post summarizes four runs of the same task (search → first product → add to cart → checkout on Amazon). The key comparison is Demo 0 (cloud baseline) vs Demo 3 (local autonomy); Demos 1–2 are intermediate controls. More technical detail (architecture, code excerpts, additional log snippets): https://www.sentienceapi.com/blog/verification-layer-amazon-...

Demo 0 vs Demo 3:

Demo 0 (cloud, GLM-4.6 + structured snapshots)
    success: 1/1 run
    tokens: 19,956 (~43% reduction vs ~35k estimate)
    time: ~60,000ms
    cost: cloud API (varies)
    vision: not required

Demo 3 (local, DeepSeek R1 planner + Qwen ~3B executor)
    success: 7/7 steps (re-run)
    tokens: 11,114
    time: 405,740ms
    cost: $0.00 incremental (local inference)
    vision: not required

Latency note: the local stack is slower end-to-end here largely because inference runs on local hardware (a Mac Studio with M4); the cloud baseline benefits from hosted inference but has per-token API cost.

Architecture

This worked because we changed the control plane and added a verification loop.

1) Constrain what the model sees (DOM pruning). We don't feed the entire DOM or screenshots. We collect raw elements, then run a WASM pass to produce a compact "semantic snapshot" (roles/text/geometry) and prune the rest (often on the order of ~95% of nodes).

2) Split reasoning from acting (planner vs executor). The planner (DeepSeek R1, local) generates each step's intent plus what must be true afterward. The executor (Qwen ~3B, local) selects concrete DOM actions like CLICK(id) / TYPE(text).

3) Gate every step with Jest-style verification. After each action, we assert state changes (URL changed, element exists/doesn't exist, modal/drawer appeared). If a required assertion fails, the step fails with artifacts and bounded retries. Minimal shape:

    ok = await runtime.check(
        exists("role=textbox"),
        label="search_box_visible",
        required=True,
    ).eventually(timeout_s=10.0, poll_s=0.25, max_snapshot_attempts=3)

What changed between "agents that look smart" and agents that work

Two examples from the logs:

Deterministic override to enforce the "first result" intent:

    "Executor decision … [override] first_product_link -> CLICK(1022)"

Drawer handling that verifies and forces the correct branch:

    "result: PASS | add_to_cart_verified_after_drawer"

The important point is that these are not post-hoc analytics. They are inline gates: the system either proves it made progress or it stops and recovers.

Takeaway

If you're trying to make browser agents reliable, the highest-leverage move isn't a bigger model. It's constraining the state space and making success/failure explicit with per-step assertions. Reliability in agents comes from verification (assertions on structured snapshots), not just scaling model size.
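To make the "semantic snapshot" idea concrete, here is a minimal, self-contained sketch in plain Python (an illustration only, not the actual WASM pass or its output schema): keep nodes that are visible and either interactive or carrying text, and record just role, text, and geometry.

    # Minimal sketch of a "semantic snapshot": keep only nodes an agent can act on.
    # Illustration only; not the actual WASM pruning pass or its output schema.
    from dataclasses import dataclass

    INTERACTIVE_ROLES = {"button", "link", "textbox", "combobox", "checkbox"}

    @dataclass
    class RawNode:
        id: int
        role: str            # ARIA-style role, e.g. "button"
        text: str            # accessible name / visible text
        bbox: tuple          # (x, y, width, height)
        visible: bool

    def semantic_snapshot(raw_nodes):
        """Prune raw DOM-derived nodes down to a compact, actionable snapshot."""
        kept = []
        for n in raw_nodes:
            if not n.visible:
                continue
            if n.role not in INTERACTIVE_ROLES and not n.text.strip():
                continue
            kept.append({"id": n.id, "role": n.role, "text": n.text, "bbox": n.bbox})
        return kept

    # Two of these three raw nodes survive pruning; wrappers and invisible
    # nodes are dropped, which is where most of the token savings come from.
    raw = [
        RawNode(1, "textbox", "Search Amazon", (10, 10, 300, 32), True),
        RawNode(2, "div", "", (0, 0, 1280, 800), True),
        RawNode(3, "button", "Add to Cart", (900, 400, 120, 40), True),
    ]
    print(semantic_snapshot(raw))

On a real product page, a filter along these lines is what makes the ~95% node reduction plausible: most DOM nodes are invisible, purely structural, or text-free.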
▲ augusteo an hour ago
The shift from "click and hope" to explicit post-conditions is the right framing. We've been building agent-based automation and the reliability problem is brutal. An agent can be 95% accurate on each step, but chain ten steps together and you're at a 60% success rate. That's not usable.

Curious about the failure modes though. What happens when the verification itself is wrong? Like, the cart shows as updated on screen but the verification layer checks a stale element?
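(For concreteness, the compounding figure quoted above works out as follows, assuming independent steps:)

    # Per-step reliability compounds multiplicatively over a chained task
    # (assuming independent steps; real failures are often correlated).
    per_step = 0.95
    steps = 10
    print(f"{per_step ** steps:.2%}")  # ~59.87%, i.e. the ~60% mentioned above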
▲ tonyww 17 hours ago
A quick clarification on intent, since "browser automation" means different things to different people: this isn't about making scripts smarter or replacing Playwright/Selenium. The problem I'm exploring is reliability: how to make agent-driven browser execution fail deterministically and explainably instead of half-working when layouts change.

Concretely, the agent doesn't just "click and hope". Each step is gated by explicit post-conditions, similar to how tests assert outcomes:

    ready = runtime.assert_(
        all_of(url_contains("checkout"), exists("role=button")),
        "checkout_ready",
        required=True,
    )

If the condition isn't met, the run stops with artifacts instead of drifting forward. Vision models are optional fallbacks, not the primary control signal.

Happy to answer questions about the design tradeoffs or where this approach falls short.
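To make "stops with artifacts" concrete, here is a self-contained sketch of that failure path; the names below (assert_step, StepAssertionError, the artifact layout) are illustrative and not the actual runtime API:

    # Self-contained sketch of "fail with artifacts instead of drifting forward".
    # All names here (assert_step, StepAssertionError, the artifact layout) are
    # illustrative; they are not the actual SentienceAPI runtime API.
    import json
    import time

    class StepAssertionError(Exception):
        def __init__(self, label, artifacts):
            super().__init__(f"step assertion failed: {label}")
            self.label = label
            self.artifacts = artifacts

    def assert_step(label, condition, snapshot):
        """Gate one step: pass silently on success, halt with saved artifacts on failure."""
        if condition:
            return
        artifacts = {"label": label, "snapshot": snapshot, "ts": time.time()}
        with open(f"failure_{label}.json", "w") as fh:
            json.dump(artifacts, fh, indent=2)   # leave an explainable trail
        raise StepAssertionError(label, artifacts)

    # Example: a "checkout_ready"-style gate over a structured snapshot.
    snapshot = {"url": "https://example.com/cart", "roles": ["button", "link"]}
    try:
        assert_step(
            "checkout_ready",
            "checkout" in snapshot["url"] and "button" in snapshot["roles"],
            snapshot,
        )
    except StepAssertionError as err:
        print(f"stopped: {err} (artifacts in failure_{err.label}.json)")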
▲ wewtyflakes an hour ago
I have found that a hybrid approach (viewport screenshot plus a textual 'semantic snapshot') leads to the best outcomes, though sometimes text-only can be fine if the underlying page isn't a complete mess of frameworks that would otherwise confuse normal click handlers, etc. I think using a logical diff to do pass/fail checking is clever, though I wonder if there are failure modes that may confuse things, such as verifying highly dynamic webpages that change their content even without active user interaction.
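(As a concrete illustration of the "logical diff" idea, and one way to blunt the dynamic-content failure mode raised above: compare only stable fields and ignore volatile roles. This sketch is illustrative, not the article's implementation.)

    # Illustrative "logical diff" between two semantic snapshots: pass/fail is
    # based on fields an action is expected to change, ignoring volatile content
    # (ads, timestamps, recommendation carousels).
    STABLE_FIELDS = ("id", "role", "text")   # geometry and styling are ignored

    def normalize(snapshot, ignore_roles=frozenset({"advertisement"})):
        """Reduce a snapshot to a comparable set of stable (id, role, text) tuples."""
        return {
            tuple(node[f] for f in STABLE_FIELDS)
            for node in snapshot
            if node["role"] not in ignore_roles
        }

    def logical_diff(before, after):
        b, a = normalize(before), normalize(after)
        return {"added": a - b, "removed": b - a}

    # Pass/fail example: assert an "Added to cart" confirmation appeared.
    before = [{"id": 1, "role": "button", "text": "Add to Cart"}]
    after = before + [{"id": 9, "role": "alert", "text": "Added to cart"}]
    diff = logical_diff(before, after)
    assert any(text == "Added to cart" for _, _, text in diff["added"]), "step failed"
    print("PASS:", diff["added"])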
▲ Akranazon 2 hours ago
This is interesting subject matter; I'm working on something similar. But the descriptions are quite terse. Maybe I just failed to glean it:

* When you "run a WASM pass", how is that generated? Do you use an agent to do the pruning step, or is it deterministic?

* Where do the "deterministic overrides" come from? I assume they are generated by the verifier agent?
▲ vilecoyote 3 hours ago
I took a look at the quickstart with the aim of running this locally and found that an API key is needed for the importance ranking. What exactly is importance ranking? Does the verification layer still exist without this ranking?
▲ joeframbach 5 hours ago
Does the browser expose its accessibility tree, rather than just the raw DOM element tree? The accessibility tree should be enough: it's all that's needed for vision-impaired customers, and technically the AI agent _is_ a vision-impaired customer. For a fair comparison, try the accessibility tree.
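(For anyone who wants to experiment with this suggestion: browsers do expose an accessibility tree, and Playwright can dump a filtered view of it. A minimal sketch, assuming Playwright for Python; note that page.accessibility.snapshot() is deprecated in newer releases but still available, with CDP's Accessibility.getFullAXTree as an alternative.)

    # Minimal sketch: dump the page's accessibility tree with Playwright (Python).
    # Assumes `pip install playwright` and `playwright install chromium`.
    # Note: page.accessibility.snapshot() is deprecated in newer Playwright
    # releases but still works; CDP's Accessibility.getFullAXTree is an alternative.
    import json
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com")
        ax_tree = page.accessibility.snapshot(interesting_only=True)
        print(json.dumps(ax_tree, indent=2))  # roles + accessible names, no layout divs
        browser.close()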
▲ asyncadventure 4 hours ago
Great point about the accessibility tree, @joeframbach. The "vision-impaired customer" analogy is spot on: if an interface works for screen readers, it should work for AI agents.

What I find most compelling about this approach is the explicit verification layer. Too many browser automation projects fail silently or drift into unexpected states. The Jest-style assertions create a clear contract: either the step definitively succeeded or it didn't, with artifacts for debugging. This reminds me of property-based testing: instead of hoping the agent "gets it right," you're encoding what success actually looks like.
▲ ewuhic 5 hours ago
Slop shit discussing slop shit.