zbentley 5 days ago

I was also on board with the GP’s comment until I got to this part.

Audit data in the same DB is great, because it can be written transactionally and relatively cheaply (multi-table updates, triggers, actual transactions with multiple writes, etc.).
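
Roughly what I mean, as a minimal sketch (sqlite3 standing in for your real RDBMS; the table and column names are made up):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE accounts (id INTEGER PRIMARY KEY, email TEXT NOT NULL);
        CREATE TABLE accounts_audit (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            account_id INTEGER NOT NULL,
            action TEXT NOT NULL,
            new_email TEXT,
            changed_at TEXT DEFAULT CURRENT_TIMESTAMP
        );
    """)

    # The business write and the audit write commit (or roll back) together.
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("INSERT INTO accounts (id, email) VALUES (?, ?)",
                     (1, "a@example.com"))
        conn.execute(
            "INSERT INTO accounts_audit (account_id, action, new_email) VALUES (?, ?, ?)",
            (1, "create", "a@example.com"),
        )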

After that, sure, ship it elsewhere and prune the audit tables if you like. But having the audit writes go directly to Kafka or whatnot is a pain, because it a) requires your client logic to implement a distributed publish-event transaction (which can work more easily here than distributed transactions in general, with careful use of idempotency keys, read-back, or transactional outboxes, but it’s complicated and requires everyone writing to auditable tables to play along), and b) reduces your reliability, because now the audit store or its message queue needs to be online for every write, in addition to your database.
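
For the transactional-outbox variant I mentioned, a rough sketch (names like outbox and publish_to_kafka are hypothetical): the event row commits in the same transaction as the business write, and a separate relay publishes it to Kafka afterwards.

    import json
    import sqlite3

    conn = sqlite3.connect("app.db")
    conn.executescript("""
        CREATE TABLE IF NOT EXISTS accounts (id INTEGER PRIMARY KEY, email TEXT NOT NULL);
        CREATE TABLE IF NOT EXISTS outbox (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            payload TEXT NOT NULL,
            published INTEGER NOT NULL DEFAULT 0
        );
    """)

    def update_email(account_id, new_email):
        with conn:  # business write and outbox write commit atomically
            conn.execute("UPDATE accounts SET email = ? WHERE id = ?",
                         (new_email, account_id))
            conn.execute(
                "INSERT INTO outbox (payload) VALUES (?)",
                (json.dumps({"event": "email_changed", "account_id": account_id}),),
            )

    def relay_once(publish_to_kafka):
        # Runs out of band. Publishing must be idempotent (or consumers must
        # dedupe), since a crash between publish and the UPDATE re-delivers.
        rows = conn.execute("SELECT id, payload FROM outbox WHERE published = 0").fetchall()
        for row_id, payload in rows:
            publish_to_kafka(payload)
            with conn:
                conn.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))

Note everyone writing to auditable tables has to go through something like update_email; a direct write that skips the outbox silently loses the audit event.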

And there are plenty of good reasons for business logic to use audit data (for reads only). What else would business logic do if an audit table existed and there was a business need to, e.g., show customers a change history for something? Build another, redundant audit system instead?
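
If the audit table is there anyway, the customer-facing change history is just a read of it. A hypothetical query against the accounts_audit table sketched above:

    def change_history(conn, account_id):
        # Most recent changes first; nothing here mutates the audit data.
        rows = conn.execute(
            "SELECT changed_at, action, new_email FROM accounts_audit"
            " WHERE account_id = ? ORDER BY changed_at DESC",
            (account_id,),
        )
        return [dict(zip(("changed_at", "action", "new_email"), row)) for row in rows]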