Show HN: Inspection Credit – read your inspection, draft the negotiation (inspectioncredit.com)
2 points by scarsam 6 hours ago

Hi HN. I built Inspection Credit because the inspection-response window in a home purchase is one of the worst-designed parts of the entire transaction.

You get a 60 to 90 page inspection report at 11pm. You have 5 days to respond. Half the report is photos of outlets. A third is non-issues. The 3 or 4 things that actually matter are buried on page 47 next to a missing smoke detector. Your inspector legally cannot advise on negotiation. Your agent makes commission when you close, not when you walk. So you decide, alone, in a panic.

The product takes the PDF and returns a packet with the negotiable findings, a credit-request letter, an escalation script for seller pushback, walk-away guidance that references your state's standard inspection-response form, and a UPL-defensive disclaimer. $79, refundable if the report turns up nothing worth negotiating.

A few engineering notes:

(1) The aggregator sites (HomeAdvisor, Fixr, Angi, HomeGuide) disagree by 30 to 50 percent on the same repair scope. I pulled current 2026 pricing for 67 inspection-finding categories from 3 to 5 sources each, took the median across them, and stored `last_verified` plus source URLs per entry so the data is auditable.
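A minimal sketch of what an auditable entry like that might look like. The field names and the `median` helper are illustrative, not the actual schema; the placeholder URLs and prices are hypothetical:

```typescript
// Illustrative shape for one verified cost entry: 3-5 aggregator quotes,
// one source URL per quote, a last-verified date, and the stored median.
interface CostEntry {
  category: string;        // e.g. "gfci_replacement" (hypothetical key)
  sourcePrices: number[];  // quotes for the same repair scope
  sourceUrls: string[];    // one URL per quote, for auditability
  lastVerified: string;    // ISO date the prices were last checked
  medianNational: number;  // median of sourcePrices
}

// Median across an odd or even number of source quotes.
function median(prices: number[]): number {
  const sorted = [...prices].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 1
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Hypothetical entry; the quotes span the kind of 30-50% disagreement
// described above, and the median damps the outlier.
const gfci: CostEntry = {
  category: "gfci_replacement",
  sourcePrices: [130, 210, 260],
  sourceUrls: ["https://example-aggregator-a", "https://example-aggregator-b", "https://example-aggregator-c"],
  lastVerified: "2026-01-15",
  medianNational: median([130, 210, 260]), // 210
};
```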

(2) Replaced 4-region BLS CPI cost multipliers (range 0.93 to 1.06, basically useless for skilled-trade pricing) with state-level BEA Regional Price Parities plus 25 metro-zip overrides anchored to the RSMeans City Cost Index for HCOL metros. Manhattan and SF land at 1.65x national, Seattle at 1.38x, the Southeast at 0.95x. The default 4-region CPI under-quotes Seattle construction labor by 25 to 40 percent because the consumer basket gets dragged around by housing rents and groceries, not actual trades.
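The lookup order that implies — metro-zip override first, then state RPP, then a 1.0 default — can be sketched like this. The multipliers shown are the ones quoted above; the table contents and zips are otherwise illustrative:

```typescript
// Zip-level overrides anchored to the RSMeans City Cost Index for HCOL
// metros (25 in the real dataset; three shown here as examples).
const metroZipOverrides: Record<string, number> = {
  "10001": 1.65, // Manhattan
  "94103": 1.65, // San Francisco
  "98101": 1.38, // Seattle
};

// State-level BEA Regional Price Parities. WA's value here is a
// hypothetical placeholder; Seattle zips never reach it anyway.
const stateRpp: Record<string, number> = {
  GA: 0.95, // Southeast cluster around 0.95x
  WA: 1.10,
};

function regionalMultiplier(zip: string, state: string): number {
  return metroZipOverrides[zip] ?? stateRpp[state] ?? 1.0;
}

regionalMultiplier("98101", "WA"); // 1.38 - Seattle zip hits the metro override
regionalMultiplier("98901", "WA"); // 1.10 - non-metro WA falls back to state RPP
```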

(3) An asymmetric override gate on the model's cost estimates. Within plus-or-minus 30 percent of the verified value, keep the model (it has PDF context the static entry doesn't). Outside the band, snap up if the model under-priced, but keep the model if it over-priced up to 10x. Most over-pricing turns out to be finding-scope bundling (the model prices "carport: open wires + non-functional GFCI + extension cord wired into a switch" against a GFCI-only entry), not hallucination. The first symmetric version was snapping down on perfectly reasonable bundled findings.
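The gate logic condenses to a few lines. The ±30 percent band and the 10x ceiling are from the description above; clamping to 10x (rather than snapping all the way down) when the model exceeds the ceiling is my assumption:

```typescript
// Asymmetric gate on a model cost estimate vs. a verified median.
function gateEstimate(modelCost: number, verifiedCost: number): number {
  const lower = verifiedCost * 0.7;
  const upper = verifiedCost * 1.3;

  if (modelCost >= lower && modelCost <= upper) {
    return modelCost; // within band: model has PDF context the static entry lacks
  }
  if (modelCost < lower) {
    return verifiedCost; // under-priced: snap up to the verified value
  }
  // Over-priced: usually finding-scope bundling, so keep the model up to
  // 10x the verified value; clamping beyond that is an assumption.
  return Math.min(modelCost, verifiedCost * 10);
}

gateEstimate(250, 200);  // 250  - within +/-30%, keep the model
gateEstimate(90, 200);   // 200  - under-priced, snap up
gateEstimate(1500, 200); // 1500 - bundled scope, keep the model
gateEstimate(5000, 200); // 2000 - beyond 10x, clamp
```

A symmetric version of this (snapping down whenever the model exceeds the band) is exactly the first-version bug described above: it penalizes the model for correctly pricing a bundled finding against a single-item entry.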

(4) A 4-reviewer-agent feedback loop during prompt iteration. Content reviewer, WA real-estate attorney, PDF designer, veteran Seattle buyer's agent. Each surfaced different failure modes across three iterations. The lawyer agent specifically flagged the UPL-exposure shape of the early drafts (under WA General Rule 24, drafting bespoke contractual language for a third party is regulated even when wrapped in a disclaimer), which led to reframing the walk-away page as a talking-points memo for the buyer's licensed agent rather than a fileable termination notice.
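The loop structure is simple; all the value is in the personas. A sketch of the shape, where `reviewDraft` and `revise` are hypothetical stand-ins for the actual model calls:

```typescript
type Finding = { reviewer: string; issue: string };

// The four personas named above.
const reviewers = [
  "content reviewer",
  "WA real-estate attorney",
  "PDF designer",
  "veteran Seattle buyer's agent",
];

// Run every persona over the draft, collect findings, revise, repeat.
// Three rounds matches the iteration count described above.
async function iteratePacket(
  draft: string,
  reviewDraft: (persona: string, draft: string) => Promise<Finding[]>,
  revise: (draft: string, findings: Finding[]) => Promise<string>,
  rounds = 3,
): Promise<string> {
  for (let i = 0; i < rounds; i++) {
    const findings = (
      await Promise.all(reviewers.map((r) => reviewDraft(r, draft)))
    ).flat();
    if (findings.length === 0) break; // all four personas signed off
    draft = await revise(draft, findings);
  }
  return draft;
}
```

Running the personas in parallel per round matters less than keeping them separate: a single "review this from four angles" prompt tends to blur the failure modes that distinct personas surface independently.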

Stack: Next.js, Neon Postgres, Drizzle, Resend, Stripe, Anthropic batch API for the 50 percent discount, react-pdf for the deliverable. About a week of focused work for v1 packet quality, plus another two days on the cost-data verification.

Sample is free. Email yourself a real packet from the landing page (no card, no signup) if you want to see what the deliverable actually looks like before deciding it's snake oil. Happy to answer questions about the prompt iterations, the data work, the regional multiplier methodology, or the legal-defensive framing.