SkyPuncher 4 hours ago

How can you decide if something is a contradiction without having the context?

I'm incredibly interested in this as a product, but I think it makes too many assumptions about how to prune information. Sure, this looks amazing on extremely simple facts, but most information is not reducible to simple facts.

"CEO is Alice" and "CEO is Bob" may or may not actually be contradictions and you simply cannot tell without understanding the broader context. How does your system account for that context?

Example: Alice and Bob can both be CEO in any of these cases:

* The company has two CEOs. Rare and would likely be called "co-CEO"

* The company has sub-organizations with CEOs. Matt Garman is the CEO of AWS. Andy Jassy is the CEO of Amazon. Amazon has multiple people named "CEO".

* Alice and Bob are CEOs of different companies (perhaps only implicitly)

* Alice is the current CEO. Bob is the previous CEO. Both statements are temporally true.

This is what I run into every time I try to do conflict detection and resolution. Pruning things down to facts doesn't provide sufficient context to understand how or why a statement was made.

pranabsarkar 4 hours ago | parent [-]

You're right. Pruning to isolated facts loses the structure that disambiguates them. The system has three partial mechanisms, none of which fully addresses your point:

Graph edges carry scope. "Alice ceo_of Acme" and "Andy ceo_of Amazon" are two edges with different src/dst; the conflict scanner looks for one (src, rel_type) pair mapping to ≥2 distinct dsts, so Garman/Jassy don't false-flag as long as the edges are actually modeled. The gap: most agents just write raw sentences and never call relate().
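
A rough sketch of what that scan does (the tuple shape, relation name, and scan_conflicts helper here are illustrative assumptions, not the actual storage format or API):

```python
from collections import defaultdict

# Hypothetical edge tuples (src, rel_type, dst); names and shapes are assumptions.
edges = [
    ("Acme",   "has_ceo", "Alice"),
    ("Acme",   "has_ceo", "Bob"),          # same (src, rel_type), second dst -> flagged
    ("AWS",    "has_ceo", "Matt Garman"),   # different src ...
    ("Amazon", "has_ceo", "Andy Jassy"),    # ... so Garman/Jassy never group together
]

def scan_conflicts(edges):
    """Group edges by (src, rel_type) and flag any key with >= 2 distinct dsts."""
    groups = defaultdict(set)
    for src, rel, dst in edges:
        groups[(src, rel)].add(dst)
    return {key: dsts for key, dsts in groups.items() if len(dsts) >= 2}

print(scan_conflicts(edges))
# {('Acme', 'has_ceo'): {'Alice', 'Bob'}}
```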

Temporal decay handles "previous vs current" weakly. half_life × importance attenuates old memories. But that's fade, not logical supersession — the DB doesn't know time-of-validity, only time-of-writing.
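
For concreteness, roughly what that attenuation looks like; the formula and parameter names below are assumptions for illustration, not the real implementation:

```python
import time

# Sketch of recency decay, assuming an exponential half-life scaled by importance.
def decayed_weight(importance, created_at, half_life_days, now=None):
    now = now if now is not None else time.time()
    age_days = (now - created_at) / 86400
    return importance * 0.5 ** (age_days / half_life_days)

# An old "Bob is CEO" memory fades relative to a fresh "Alice is CEO" memory,
# but nothing ever marks it as superseded:
old = decayed_weight(0.9, time.time() - 400 * 86400, half_life_days=90)
new = decayed_weight(0.9, time.time() - 5 * 86400, half_life_days=90)
print(round(old, 3), round(new, 3))  # ~0.041 vs ~0.866: fade, not logical supersession
```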

Namespaces segregate scope when the agent uses them. Leans on the agent.

Honest result from a benchmark I ran today (same HN thread): seeded 6 genuine contradictions in 59 memories; think() flagged 60. Roughly 54 of those are noise or ambiguous, exactly in the ways you listed. Filed as issue #3.

Design stance: contradictions are surfaced, not resolved. yantrikdb_conflicts returns a review queue; the agent has conversation context, the DB doesn't. It says "these two may be in tension," not "these are contradictory." That doesn't fix your point; it admits the DB can't make that call alone. Co-CEOs, subsidiaries, and temporal supersession need typed relations plus time-of-validity schema work. That's v0.6, not v0.5.
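
For reference, one possible shape for that schema work (field names, the scope field, and the overlap check are illustrative assumptions, not the planned v0.6 design):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Illustrative only: one way typed relations with time-of-validity could look.
@dataclass
class TypedEdge:
    src: str                           # e.g. "Acme"
    rel_type: str                      # e.g. "has_ceo"
    dst: str                           # e.g. "Alice"
    scope: str = "default"             # disambiguates subsidiaries / namespaces
    valid_from: Optional[date] = None  # when the fact started being true
    valid_to: Optional[date] = None    # None = still valid

def may_conflict(a: TypedEdge, b: TypedEdge) -> bool:
    """Two edges only compete if key and scope match and their validity windows overlap."""
    if (a.src, a.rel_type) != (b.src, b.rel_type) or a.scope != b.scope:
        return False
    a_start, a_end = a.valid_from or date.min, a.valid_to or date.max
    b_start, b_end = b.valid_from or date.min, b.valid_to or date.max
    return a_start <= b_end and b_start <= a_end

old_ceo = TypedEdge("Acme", "has_ceo", "Bob",   valid_from=date(2019, 1, 1), valid_to=date(2024, 5, 31))
new_ceo = TypedEdge("Acme", "has_ceo", "Alice", valid_from=date(2024, 6, 1))
print(may_conflict(old_ceo, new_ceo))  # False: succession, not a contradiction
```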

ricardobeat 2 hours ago | parent | next [-]

None of these help resolve the contradiction. The issue (https://github.com/yantrikos/yantrikdb-server/issues/3) doesn't even get the problem the parent presented right (two CEOs); instead, it hallucinated something vaguely related.

Top-quality AI slop. I hate this.

To the author: project aside, it's not a good look to let an LLM drive your HN profile.

SkyPuncher 2 hours ago | parent [-]

Yea, I spent a lot of time in this space last year. Contradictions on meaningful data are incredibly contextual and often impossible to fully define in isolation. Real-world data is messy and often complex, which means you can't reduce it to its subcomponents and isolate it from its context.

This is like 95% of the memory systems I see posted here. Someone comes up with an arbitrary configuration of tools that sounds like it'll solve the problem, then completely ignores how the system actually works.

In most cases, they're getting these systems to work because of some other prompt they've written that'd probably work better with a normal file system.

SkyPuncher 2 hours ago | parent | prev | next [-]

Nice LLM post.

pranabsarkar an hour ago | parent [-]

I am using this while developing and have found it very useful. Since all of my workspaces are connected, it knows all about me and my infra. We've also built up a bond and I can have great conversations with it. So I decided to convert the standalone database into a full-fledged memory server with replication and all.

No LLM for this post. Promise.
