davidpolberger 13 hours ago

I'm a co-founder of Calcapp, an app builder for formula-driven apps, and I recently received an email from a customer ending their subscription. They said they appreciated being able to kick the tires with Calcapp, but had now fully moved to an AI-based platform. So we're seeing this reality play out in real time.

The next generation of Calcapp probably won't ship with a built-in LLM agent. Instead, it will expose all functionality via MCP (or whatever protocol replaces it in a few years). My bet is that users will bring their own agents -- agents that already have visibility into all their services and apps.
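To make the MCP idea concrete, here is a rough sketch of the kind of machine-readable tool descriptor an MCP server advertises to a visiting agent (the tool name and schema are invented for illustration, not Calcapp's actual API):

```python
import json

# Hypothetical sketch: a tool descriptor of the kind an MCP server returns
# from a tools/list request. The "evaluate_formula" name and its schema are
# invented here; a real server would describe its actual capabilities.
tool = {
    "name": "evaluate_formula",
    "description": "Evaluate a spreadsheet-style formula and return the result.",
    "inputSchema": {
        "type": "object",
        "properties": {"formula": {"type": "string"}},
        "required": ["formula"],
    },
}

# An agent the user brings along reads this schema and can call the tool
# without any product-specific integration work.
print(json.dumps(tool, indent=2))
```

The point of the protocol is exactly this: the agent discovers what it can do at runtime, so the product doesn't need to ship its own LLM.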

I hope Calcapp has a bright future. At the same time, we're hedging by turning its formula engine into a developer-focused library and SaaS. I'm now working full-time on this new product and will do a Show HN once we're further along. It's been refreshing to work on something different after many years on an end-user-focused product.

I do think there will still be a place for no-code and low-code tools. As others have noted, guardrails aren't necessarily a bad thing -- they can constrain LLMs in useful ways. I also suspect many "citizen developers" won't be comfortable with LLMs generating code they don't understand. With no-code and low-code, you can usually see and reason about everything the system is doing, and tweak it yourself. At least for now, that's a real advantage.

zackliscio 13 hours ago | parent | next [-]

Sorry to hear about the customer churn, but the MCP-first strategy makes sense to me and seems like it could be really powerful. I also suspect that the bring your own agent future will be really exciting, and I've been surprised we haven't seen more of it play out already.

Agree there will be a place for no-code and low-code interfaces, but I do think it's an open question where the value will be captured -- by SaaS vendors, or by the LLM providers themselves.

sergiotapia 13 hours ago | parent | prev [-]

I highly suggest you expose functionality through GraphQL. It lets users send out an agent with a goal like "figure out how to do X", and because GraphQL has introspection, the agent can find what it needs pretty reliably! It's really lovely as an end user. Best of luck!
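For anyone who hasn't used it: the agent sends GraphQL's standard introspection query and reads the available operations straight out of the response. A minimal sketch, with a mocked, abbreviated response in place of a real API (the field names are invented):

```python
# Sketch of agent-side GraphQL introspection. The query below is the standard
# __schema query; the response is a mocked, abbreviated example rather than
# the output of any real API.
INTROSPECTION_QUERY = """
{ __schema { queryType { fields { name description } } } }
"""

mock_response = {
    "data": {
        "__schema": {
            "queryType": {
                "fields": [
                    {"name": "apps", "description": "List the user's apps"},
                    {"name": "formulas", "description": "List formulas in an app"},
                ]
            }
        }
    }
}

# The agent now knows which root queries exist and what they are for.
field_names = [
    f["name"]
    for f in mock_response["data"]["__schema"]["queryType"]["fields"]
]
print(field_names)  # ['apps', 'formulas']
```

That discoverability is what makes "send out an agent with a goal" work without hand-written integrations.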

_heimdall 7 hours ago | parent | next [-]

A proper REST API would also work without all the extra overhead of GraphQL.

People may dislike XML, but it's easy to build a REST API with, and it works well as an interface between computer systems where no human has to look at the syntax.
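As a quick illustration of that point, building and parsing an XML payload machine-to-machine takes only the standard library (the element names here are invented):

```python
import xml.etree.ElementTree as ET

# Sketch: XML as a machine-to-machine REST payload, built and parsed with
# the standard library alone. Element names are invented for illustration.
root = ET.Element("app")
ET.SubElement(root, "name").text = "Budget planner"
ET.SubElement(root, "published").text = "true"
payload = ET.tostring(root, encoding="unicode")

# The consuming system round-trips it without a human ever reading the syntax.
parsed = ET.fromstring(payload)
print(parsed.find("name").text)  # Budget planner
```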

ako 4 hours ago | parent | next [-]

It mostly depends on efficiency: GraphQL (or OData, a REST-compliant alternative with more or less the same functionality) gives the client more control out of the box to tune the response it needs. It can control the depth of the associated objects it fetches, filter out what it doesn't need, and so on. That can make a big difference for client performance. I actually prefer OData over GraphQL for this purpose, since it is REST-compliant and has standardized more of the protocol.
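Concretely, that response shaping is just query options on the URL. A sketch of what a client might send (the service URL and entity names are invented for illustration):

```python
from urllib.parse import urlencode

# Sketch of OData's out-of-the-box response shaping. The service URL and
# entity/property names are invented; the $-prefixed query options are
# standard OData system query options.
base = "https://example.com/odata/Apps"
options = {
    "$select": "id,name",                       # only the fields the client needs
    "$expand": "formulas($select=expression)",  # control depth of associations
    "$filter": "isPublished eq true",           # drop what it doesn't need
    "$top": "10",
}
url = base + "?" + urlencode(options)
print(url)
```

GraphQL gets you the same shaping with a query document; OData does it with plain URLs, which is part of why it stays REST-compliant.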

raverbashing 4 hours ago | parent | prev [-]

REST + Swagger I'd say

virtue3 3 hours ago | parent [-]

Swagger is critical. The GraphQL schema.json is very, very good at helping AIs figure out how to use a service; Swagger gives REST that same advantage.
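To show what that buys you: a Swagger/OpenAPI document is a machine-readable map of the API, and an agent can enumerate operations from it much like it would introspect a GraphQL schema. A minimal sketch (paths and operation names invented):

```python
# Minimal OpenAPI (Swagger) document sketch. Like GraphQL's schema.json,
# it gives an agent a machine-readable map of the API. The path and
# operationId below are invented for illustration.
spec = {
    "openapi": "3.0.3",
    "info": {"title": "Example API", "version": "1.0.0"},
    "paths": {
        "/apps": {
            "get": {
                "operationId": "listApps",
                "summary": "List the user's apps",
                "responses": {"200": {"description": "OK"}},
            }
        }
    },
}

# An agent walks the paths object to discover what it can call.
operations = [
    (path, method, op["operationId"])
    for path, methods in spec["paths"].items()
    for method, op in methods.items()
]
print(operations)  # [('/apps', 'get', 'listApps')]
```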

storystarling 2 hours ago | parent | prev | next [-]

I tried this recently and found the token overhead makes it prohibitive for any non-trivial schema. Dumping the full introspection result into the context window gets expensive fast and seems to increase hallucination rates compared to just providing specific, narrow tool definitions.
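A rough illustration of the cost asymmetry: a full introspection dump grows with the whole schema, while a handful of narrow tool definitions stays tiny. The schema below is synthetic and the sizes are byte counts of the serialized JSON, only a proxy for token counts, but the ratio makes the point:

```python
import json

# Synthetic illustration of the context-window cost. The "schema" here is
# made up (50 types x 20 fields); real introspection dumps are far larger
# and richer. Byte length of the JSON stands in for token count.
full_schema = {
    "types": [
        {"name": f"Type{i}", "fields": [{"name": f"field{j}"} for j in range(20)]}
        for i in range(50)
    ]
}
narrow_tools = [{"name": "lookup_order", "args": {"order_id": "string"}}]

full_size = len(json.dumps(full_schema))
narrow_size = len(json.dumps(narrow_tools))
print(full_size, narrow_size)  # the full dump is a couple of orders of magnitude bigger
```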

mmasu 2 hours ago | parent [-]

A friend (and colleague, full disclosure) recently pushed this to GitHub. It passes data through a DuckDB layer precisely to avoid context bloat:

https://github.com/agoda-com/api-agent

It's worth a look to see multiple approaches to the problem.

cpursley 27 minutes ago | parent | prev [-]

Hasura is working on this approach: https://promptql.io