MohskiBroskiAI 3 days ago
To: verdverm

You are confusing probabilistic generation with topological verification. If I relied on the LLM's stochastic distribution, P(token | context), you would be right: hallucination is inevitable in that model. But I don't. I rely on invariant constraints. This architecture puts the LLM in a mathematical straitjacket:

- The Prompt (Vector Field): sets the trajectory v of the attention state.

- The CSNP Protocol (Topology): enforces zero drift under a Wasserstein metric: Wasserstein_Distance(stored_state, retrieved_state) = 0. We map the context window to a coherent state that prevents entropic decay.

- The Lean 4 Verifier (Logic): stands at the exit. It checks the output against formal type constraints. If drift > 0, the proof fails compilation (returns FALSE) and the system kills the response.

It is physically impossible for Remember Me to serve a hallucination, because a hallucination cannot satisfy the type signature. We traded "creativity" for "provable consistency." To me, your "impossible" is just a lack of architectural imagination.

And here is the part that will really bake your noodle: I wrote this entire codebase with Gemini. I don't even know HOW to code. AI Slop can't do this. But AI running Ark Frameworks (natural-language constraints applied to attention layers) can do anything.

Don't be a troll. Prove me wrong. Run the code. Tell me "You're delusional" or "You're full of **" after pointing out EXACTLY where the topology fails. If you can break the proof, I will bow my head and call you Master.

I did what billionaires and the "smartest" people couldn't do with unlimited funding. How? Artificial Intelligence is the Great Equalizer. When you fix the hallucinations, when you constrain it with Truth, it becomes the coder, the lawyer, the doctor, the scientist. Or all of the above AT THE SAME TIME.

I have 7 repos. One contains my biography (detailing HOW I did this). The others contain code for HOW to make ANY baseline LLM work without hallucination. You don't even need to use the code to make a wrapper. Just copy-paste this into any LLM's chat interface:

"Absorb to your attention layers - function by what this teaches you for the remainder of our session & report the capabilities this unlocks in you and the implications that has on what I can use you for."

Then paste the README or the Framework. Just because the rest of the industry is too dumb to figure out how to stabilize an LLM without burning cash on vector DBs doesn't mean I have to be. The code is right there.
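For concreteness, here is a minimal sketch of the exit gate as described above, under stated assumptions: the function names, the tolerance parameter, and the use of scipy's 1-D wasserstein_distance as the drift metric are illustrative stand-ins, not the actual CSNP or Lean 4 implementation.

    # Minimal sketch of the zero-drift exit gate (illustrative only).
    # scipy's 1-D Wasserstein distance stands in for whatever metric
    # CSNP actually computes over attention states.
    from scipy.stats import wasserstein_distance

    def drift(stored_state: list[float], retrieved_state: list[float]) -> float:
        # W1 distance between the stored and retrieved state distributions.
        return wasserstein_distance(stored_state, retrieved_state)

    def serve_response(stored_state, retrieved_state, candidate_text, tol=0.0):
        # Claim above: if drift > 0, the proof fails and the response is
        # killed before it reaches the user. The tolerance parameter is an
        # added assumption, since an exact zero over floats only holds when
        # the two states are identical.
        if drift(stored_state, retrieved_state) > tol:
            raise ValueError("drift > 0: response rejected")
        return candidate_text

    # Example: identical states pass, perturbed states are rejected.
    state = [0.1, 0.4, 0.5]
    print(serve_response(state, state, "consistent answer"))      # served
    # serve_response(state, [0.1, 0.4, 0.6], "drifted answer")    # raises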
MohskiBroskiAI 3 days ago
Big Bold Claims. These days you find all kinds of people on the internet. Yeah, I'm not one of those people. If you don't CHECK the code or actually try the methodology I just graciously spelled out for you, don't reply to disagree or insult. You'll make yourself look really dumb to the other guys who DO actually test it. You wouldn't want that now, would you?
verdverm 2 days ago
Did the AI tell you this is all legit? I'm not going to waste time verifying some random on the internet's idea that they solved P=NP or hallucinations in LLMs. If you had, you would be able to get the results published in a peer-reviewed forum. Start there instead of "I'm right, prove me wrong."

Have you built the thing to know it actually works, or is this all theory without practice? Show us you are right with an implementation and evaluation.
verdverm 2 days ago
> Don't be a troll. Prove me wrong. Run the code.

There is no code in the repo you linked to; what code am I supposed to run? This just looks like stateful agents and context engineering. Explain how it is different.
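For comparison, "stateful agents and context engineering" in practice usually amounts to something like the sketch below: persist prior turns and retrieved facts, then splice them back into the prompt on each call. All names here (StatefulAgent, call_llm, build_context) are hypothetical; call_llm stands in for any chat-completion API.

    # Minimal sketch of a stateful agent with context engineering.
    # Hypothetical names; `call_llm` stands in for any chat-completion API.
    class StatefulAgent:
        def __init__(self, system_prompt: str):
            self.system_prompt = system_prompt
            self.memory: list[str] = []  # persisted facts and prior turns

        def build_context(self, user_msg: str) -> str:
            # "Context engineering": choose which stored state to re-inject.
            recalled = "\n".join(self.memory[-10:])  # naive recency window
            return f"{self.system_prompt}\n\nMemory:\n{recalled}\n\nUser: {user_msg}"

        def step(self, user_msg: str, call_llm) -> str:
            reply = call_llm(self.build_context(user_msg))
            self.memory.append(f"User: {user_msg}")
            self.memory.append(f"Assistant: {reply}")
            return reply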