advikipedia 3 days ago

We recently spoke with 30+ startup founders and 40+ enterprise practitioners who are building and deploying agentic AI systems across industries like financial services, healthcare, cybersecurity, and developer tooling.

A few patterns emerged that might be relevant to anyone working on applied AI or automation:

- The main blockers aren’t technical. Most founders pointed to workflow integration, employee trust, and data privacy as the toughest challenges — not model performance.

- Incremental deployment beats ambition. Successful teams focus on narrow, verifiable use cases that deliver measurable ROI and build user trust before scaling autonomy.

- Enterprise adoption is uneven. Many companies have “some agents” in production, but most use them with strong human oversight. The fully autonomous cases remain rare.

- Pricing is unresolved. Hybrid models dominate; pure outcome-based pricing is uncommon due to attribution and monitoring challenges.

- Infrastructure is mostly homegrown. Over half of surveyed startups build their own agentic stacks, citing limited flexibility in existing frameworks.

The article also includes detailed case studies, commentary on autonomy vs. accuracy trade-offs, and what’s next for ambient and proactive agents.

If you’re building in this space, the full report is free here: https://mmc.vc/research/state-of-agentic-ai-founders-edition...

Would be interested to hear how others on HN are thinking about real-world deployment challenges — especially around trust, evaluation, and scaling agentic systems.

Etheryte 3 days ago | parent | next [-]

Perhaps I simply don't understand what you mean, but it sounds like the first point could be rephrased in some way. To me, workflow integration and data privacy sound very much like technical blockers.

barrenko 3 days ago | parent | next [-]

But if you define them as non-technical blockers, agents are just swell.

advikipedia 3 days ago | parent | prev | next [-]

The "perception" of the problem is often worse than the "actual" problem. Workflow integration is more about users having to rethink their workflows, their roles, and how they work with AI. As for data privacy, even where startups have taken measures to address the concerns, enterprises very often remain wary (making this more a perception problem than an actual one). That's why I focused on the non-technical aspect of it!

DrScientist 3 days ago | parent [-]

When I see vendors complain about workflow and integration issues, it's usually because the vendor's software is written around an expectation of a certain workflow and certain integration points, and they find out that in reality every customer does it slightly differently.

A key challenge around workflow is that while the fundamental whiteboard task flow is the same, different companies may distribute those tasks between people, and over time, in different ways.

Workflow is about flowing the task and associated information between people - not just doing the tasks.

The same goes for integration: the timing of when certain necessary information becomes available is, again, not uniform, and timing concerns are often missed on the high-level whiteboard.

Here's a classic example of ignoring timing issues.

https://www.harrowell.org.uk/blog/2017/03/19/universal-credi...

refactor_master 3 days ago | parent | prev | next [-]

Consider this simple example: Storing all your sensitive user data in one centralized location (e.g. a US server) would be great for any kind of analytics and modeling to tap into, and is technically very easy to do, but it also violates virtually every country's data privacy laws. So then you have to set up siloed servers around the world, deal with data governance, legal stuff, etc.

Sure, it then becomes a technical challenge to work around those limits, but that may be cost/time prohibitive.
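The siloing described above often reduces to a routing layer that keeps each user's data in-region. A minimal sketch, assuming a hypothetical country-to-region mapping and placeholder endpoints (none of these names come from a real system):

```python
# Minimal sketch of region-aware data routing for residency compliance.
# All endpoints and the country-to-region mapping are hypothetical.

REGION_ENDPOINTS = {
    "EU": "https://eu.storage.example.com",
    "US": "https://us.storage.example.com",
    "APAC": "https://apac.storage.example.com",
}

# Which region's laws govern a given user's data (illustrative only).
COUNTRY_TO_REGION = {"DE": "EU", "FR": "EU", "US": "US", "JP": "APAC"}

def storage_endpoint(country_code: str) -> str:
    """Return the storage endpoint that keeps this user's data in-region."""
    region = COUNTRY_TO_REGION.get(country_code)
    if region is None:
        # Fail closed: better to reject than to store data in the wrong region.
        raise ValueError(f"No residency mapping for country {country_code!r}")
    return REGION_ENDPOINTS[region]
```

The analytics-unfriendly part is exactly what this enforces: there is no single endpoint to query, so any cross-region modeling has to be designed around the silos rather than on top of one pool.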

1718627440 3 days ago | parent [-]

That sounds more like saying the problem would be solvable if it had different requirements.

refactor_master 3 days ago | parent [-]

If you ask Silicon Valley, any organizational problem can be a technical problem if you try hard enough.

IanCal 3 days ago | parent | prev [-]

There are two sides to workflow integration.

One is technical (it’s a hassle to connect things to a specific system because you’d need to deal with the api or there is no api)

The other isn’t, because it’s figuring out how and where to use these new tools in an existing workflow. Maybe you could design something from scratch but you have lots of business processes right now, how do you smoothly modify that? Where does it make sense?
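The technical side (a system with a real API versus one with none) often ends up hidden behind a thin adapter layer, so the agent sees one interface either way. A minimal sketch under that assumption; the interface and both backends are hypothetical:

```python
# Sketch of an adapter layer: one internal interface, one backend per
# customer system. Both backends here are hypothetical placeholders.
from abc import ABC, abstractmethod

class TicketSource(ABC):
    """Interface the agent consumes, regardless of the customer's system."""
    @abstractmethod
    def fetch_open_tickets(self) -> list[dict]: ...

class RestApiSource(TicketSource):
    """Customer exposes a real API: wrap it."""
    def __init__(self, base_url: str):
        self.base_url = base_url

    def fetch_open_tickets(self) -> list[dict]:
        # In practice this would be an HTTP call; stubbed to keep the sketch runnable.
        return [{"id": 1, "source": self.base_url}]

class CsvExportSource(TicketSource):
    """No API at all: the customer provides a periodic CSV export instead."""
    def __init__(self, rows: list[dict]):
        self.rows = rows

    def fetch_open_tickets(self) -> list[dict]:
        return [r for r in self.rows if r.get("status") == "open"]
```

The adapter absorbs the per-customer variation; the harder, non-technical question of *where* in the workflow the agent should sit is untouched by it.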

Frankly, understanding what these systems can and can't do takes at least some time, if only because the field is moving so fast. (I worked with a small local firm whom I was able to help simply by showing them the recent dramatic improvements in transcription quality versus cost. People here are more used to Whisper and the like, but it's not common knowledge how and where you can use these things.)

woeirua 3 days ago | parent | prev | next [-]

Lack of employee trust in these systems is caused by model (under)performance. There's a HUGE disconnect between the C-suite right now and the people on the ground using these models. Anyone who builds something with the models would tell you that they can't be trusted.

baxtr 3 days ago | parent | prev | next [-]

> The main blockers aren’t technical. Most founders pointed to workflow integration, employee trust, and data privacy as the toughest challenges — not model performance.

What does that even mean? Are you trying to say that the problem isn’t that the AI models are bad — it’s that it’s hard to get people to use them naturally in their daily work?

Arnechos 3 days ago | parent [-]

For example, where I work, business users required model output to be 100% correct, which wasn't possible, so they decided to stick to the old manual workflow.
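A common compromise in that situation is a confidence gate: outputs above a threshold are applied automatically, everything else falls back to the existing manual workflow. A minimal sketch (the threshold value and record shape are made up for illustration):

```python
# Sketch of a human-in-the-loop gate: model outputs below a confidence
# threshold are routed to the old manual workflow instead of auto-applied.
# The 0.95 threshold is an arbitrary illustrative choice.

CONFIDENCE_THRESHOLD = 0.95

def route(prediction: str, confidence: float) -> tuple[str, str]:
    """Return (prediction, handler) for a single model output."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return prediction, "auto"          # accepted without review
    return prediction, "human_review"      # queued for the manual workflow
```

This reframes "the model must be 100% correct" as "the *combined* process must be", which is often an easier conversation with business users.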

sigwinch 3 days ago | parent [-]

That’s our definition of a process: when your objective is well-defined, a process is guaranteed to succeed. Not everything is a process. And sometimes people mistake what the desired success must be. For example, a piece of surgical equipment might not have features guaranteeing profitability.

thatjoeoverthr 3 days ago | parent | prev [-]

Honestly, just sad seeing AI posts on HN now.

ChrisMarshallNY 3 days ago | parent [-]

I’m not sure this is the case, here (although it’s always a possibility, sadly).

It just looks like the highly-polished marketing copy I’ve read, all my career. It’s entirely possible that it was edited by AI (a task that I have found useful), but I think that it’s actually a fairly important (to the firm) paper, and was likely originally written by their staff (or a consultant), and carefully edited.

I do feel as if it’s a promotional effort, but HN often features promotional material, if it is of interest to our community.