LiamPowell 15 hours ago

Did you really use an LLM to generate the sample output in your readme instead of just running the application? I noticed the borders were all misaligned and wondered if you had hardcoded the number of spaces, but I looked at the code and you haven't.

If you did generate the output with an LLM instead of just running it... why?

Also:

> It uses Claude AI for smart classification, but runs entirely locally: your emails never leave your machine.

How can both of these things be true? How can Claude be used as a classifier without sending your emails to Claude? From looking at the code it appears that you do in fact just send off emails to Claude, or at least the first 300-400 characters, so that line is just a complete lie.
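For illustration only, the pattern under discussion (truncating each email to a short snippet before anything is sent to Anthropic) could be sketched as follows. The function names, snippet limit, prompt wording, and model choice here are all hypothetical, not taken from the project's actual code:

```python
# Hypothetical sketch of "send only a snippet" classification.
# Only the subject plus the first ~300 characters of the body would
# leave the machine; the full email stays local.

SNIPPET_LIMIT = 300  # "first 300-400 characters", per the comment above

def build_snippet(body: str, limit: int = SNIPPET_LIMIT) -> str:
    """Trim the email body to a short snippet before any remote call."""
    return body[:limit]

def build_classification_payload(subject: str, body: str) -> dict:
    """Assemble the request body that would go to a remote LLM API."""
    snippet = build_snippet(body)
    return {
        "model": "claude-3-5-haiku-latest",  # hypothetical model choice
        "max_tokens": 16,
        "messages": [{
            "role": "user",
            "content": f"Classify this email.\nSubject: {subject}\nSnippet: {snippet}",
        }],
    }

# In a real tool, this payload would be passed to the Anthropic SDK's
# Messages API (e.g. anthropic.Anthropic().messages.create(**payload)),
# which does send the snippet off-machine -- hence the objection above.
```

Even truncated, the snippet leaves the machine, which is why "runs entirely locally" was the contested claim.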

chevuru 7 hours ago | parent | next [-]

The share text and README have been updated to accurately say "Core cleanup runs locally — AI commands send only subjects/snippets to Anthropic." The terminal sample outputs were illustrative; I'm recording a real asciinema session to replace them. PR #8 landed the README fix.

CobrastanJorji 15 hours ago | parent | prev | next [-]

I think the idea is that SOME of the classification (the "stats" command) works without AI, but it also supports some fancy and definitely-not-local Anthropic processing options.
