polskibus 3 days ago

Are you saying that a current-gen LLM can answer such queries with EnrichMCP directly, or does it need guidance via prompts (for example, telling it which tables to look at)? I have exposed a db schema to an LLM before, and it was ok-ish, but often the devil was in the details (one wrong join, etc.), causing the whole thing to deliver junk answers.

What is your experience with non-trivial db schemas?

simba-k 3 days ago | parent [-]

So one big difference is that we aren't doing text2sql here, and the framework requires clear descriptions on all fields, entities, and relationships (it literally won't run otherwise).

We also generate a few tools for the LLM specifically to explain the data model to it. It works quite well, even on complex schemas.

The use case is more transactional than analytical, though we've seen it used for both.

I recommend running the openai_chat_agent in examples/ (it also supports Ollama for local runs), connecting it to the shop_api server, and asking it a question like: "Find and explain fraud transactions"
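
Roughly, an entity definition looks something like the sketch below (illustrative and simplified; the exact class and decorator names are assumptions, so don't treat it as the precise API):

    # Illustrative sketch only; simplified, exact API names may differ.
    from pydantic import Field
    from enrichmcp import EnrichMCP, EnrichModel, Relationship

    app = EnrichMCP(title="Shop API", description="Customers, orders, payments")

    @app.entity
    class Customer(EnrichModel):
        """A registered shopper; the anchor for fraud analysis."""
        id: int = Field(description="Primary key for the customer")
        email: str = Field(description="Login email, unique per customer")
        # Every field and relationship needs a description;
        # without one the app refuses to start.
        orders: list["Order"] = Relationship(
            description="All orders this customer has placed"
        )

    @app.entity
    class Order(EnrichModel):
        """A purchase made by a customer."""
        id: int = Field(description="Primary key for the order")
        total: float = Field(description="Order total in USD")
        flagged: bool = Field(description="True if the fraud model flagged this order")
        customer: "Customer" = Relationship(
            description="Customer who placed the order"
        )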

polskibus 3 days ago | parent [-]

So an explicit model description (kind of repeating the schema as an explicit model definition) provides better results when used with an LLM because it's closer to the business domain (or maybe the extra step from DDL to business model is what confuses the LLM?). I think I'm failing to grasp why this approach works better than a schema fed straight to the LLM.

simba-k 3 days ago | parent [-]

Yeah, think of it like onboarding a data analyst. If I gave you a Postgres account with all of our tables in it, you wouldn't even know where to start and would spend tons of time just running queries to figure out what you were looking at.

If I explain the semantic graph (entities, relationships, etc.) with proper documentation and descriptions, you'd be able to reason about it much faster and more accurately.

A Postgres schema might only give you a data type, a column name, and a table name, vs. all the rich metadata that EnrichMCP requires.
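
For example (a hypothetical column; names and descriptions are invented for illustration), a reflected schema gives the LLM something like the first record below, while the enriched model carries the second:

    # Hypothetical illustration; table, column, and descriptions are invented.
    raw_column = {
        "table": "txn",
        "column": "st",
        "type": "smallint",
    }

    enriched_field = {
        "entity": "Transaction",
        "field": "status",
        "type": "int",
        "description": "0=pending, 1=settled, 2=charged back",
        "relationship": {"customer": "Customer who made the transaction"},
    }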