| ▲ | Event Sourcing in Go: From Zero to Production(skoredin.pro) |
| 44 points by tdom 4 days ago | 29 comments |
| |
|
| ▲ | techn00 2 minutes ago | parent | next [-] |
| Good article, you might like my lib https://github.com/DeluxeOwl/chronicle - covers a lot of event sourcing pains for Go |
|
| ▲ | quapster an hour ago | parent | prev | next [-] |
| The funny thing about event sourcing is that most teams adopt it for the sexy parts (time travel, Kafka, sagas), but the thing that actually determines whether it survives contact with production is discipline around modeling and versioning. You don’t pay the cost up front, you pay it 2 years in when the business logic has changed 5 times, half your events are “v2” or “DeprecatedFooHappened”, and you realize your “facts” about the past were actually leaky snapshots of whatever the code thought was true at the time. The hard part isn’t appending events, it’s deciding what not to encode into them so you can change your mind later without a migration horror show. There’s also a quiet tradeoff here: you’re swapping “schema complexity + migrations” for “event model complexity + replay semantics”. In a bank-like domain that genuinely needs an audit trail, that trade is usually worth it. In a CRUD-ish SaaS where the real requirement is “be able to see who edited this record”, a well-designed append-only table with explicit revisions gets you 80% of the value at 20% of the operational and cognitive overhead. Using Postgres as the event store is interesting because it pushes against the myth that you need a specialized log store from day one. But it also exposes the other myth: that event sourcing is primarily a technical choice. It isn’t. It’s a commitment to treat “how the state got here” as a first-class part of the domain, and that cultural/organizational shift is usually harder than wiring up SaveEvents and a Kafka projection. |
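The "v2 events / DeprecatedFooHappened" pain described above is usually contained with an explicit upcaster layer at the read boundary, so old stored versions are normalized before domain code sees them. A minimal Go sketch; the event names and the tax-splitting rule are made up for illustration, not from the article:

```go
package main

import "fmt"

// Hypothetical event types; names are illustrative only.
type Event interface{ Name() string }

// v1: stored only an opaque amount.
type FooHappenedV1 struct{ Amount int }

func (FooHappenedV1) Name() string { return "FooHappened.v1" }

// v2: the business later split the amount into net + tax.
type FooHappenedV2 struct{ Net, Tax int }

func (FooHappenedV2) Name() string { return "FooHappened.v2" }

// upcast normalizes any stored version to the current one at read time,
// so the interpretation of old events lives in exactly one place.
func upcast(e Event) Event {
	switch v := e.(type) {
	case FooHappenedV1:
		// Assumption baked in once, here, instead of scattered
		// everywhere: old events were recorded with 0% tax.
		return FooHappenedV2{Net: v.Amount, Tax: 0}
	default:
		return e
	}
}

func main() {
	stream := []Event{FooHappenedV1{Amount: 100}, FooHappenedV2{Net: 90, Tax: 10}}
	for _, e := range stream {
		fmt.Println(upcast(e).Name())
	}
}
```

The point of the pattern is the one quoted above: you still pay for every schema you ever published, but the payment is concentrated in the upcaster rather than spread through the domain logic.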
| |
| ▲ | pdhborges 8 minutes ago | parent | next [-] | | I would upvote this comment more if I could. I have already refrained from introducing event sourcing to tackle weird dependencies multiple times, just by juxtaposing the amount of discipline the team has shown to reach the current state against the discipline required to keep an event-sourced solution going. | |
| ▲ | simonw an hour ago | parent | prev [-] | | This comment just made it finally click for me why event sourcing sounds so good on paper but rarely seems to work out for real-world projects: it expects a level of correct-design-up-front which isn't realistic for most teams. |
|
|
| ▲ | zknill 3 hours ago | parent | prev | next [-] |
| Anyone who's built, run, evolved, and operated any reasonably sized event sourced system will know it's a total nightmare. Immutable history sounds like a good idea, until you're writing code to support every event schema you ever published. And all the edge cases that inevitably creates. CQRS sounds good, until you just want to read a value that you know has been written. Event sourcing probably has some legitimate applications, but I'm convinced the hype around it is predominantly just excellent marketing of an inappropriate technology by folks and companies who host queueing technologies (like Kafka). |
| |
| ▲ | anthonylevine 2 hours ago | parent [-] | | > CQRS sounds good, until you just want to read a value that you know has been written. This is for you and the author apparently: Practicing CQRS does not mean you're splitting up databases. CQRS is simply using different models for reading and writing. That's it. Nothing about different databases or projections or event sourcing. This quote from the article is just flat out false: > CQRS introduces eventual consistency between write and read models: No it doesn't. Eventual consistency is a design decision made independently of using CQRS. Just because CQRS might make it easier to split, it doesn't in any way have an opinion on whether you should or not. > by folks and companies who host queueing technologies (like Kafka). Well that's good because Kafka isn't an event-sourcing technology and shouldn't be used as one. | | |
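The "different models, same consistency" claim above can be made concrete: the read and write models have different shapes, but both are updated in the same synchronous call, so a read immediately after a write sees the new value. A minimal in-memory Go sketch; the `Account`/`AccountSummary` names are hypothetical:

```go
package main

import (
	"fmt"
	"strings"
)

// Write model: owns invariants, validates commands.
type Account struct {
	ID      string
	Balance int
}

// Read model: a different, denormalized shape over the same data.
type AccountSummary struct {
	ID          string
	DisplayName string
	Balance     int
}

// Store updates both models in one synchronous step: segregated
// models, no eventual consistency.
type Store struct {
	accounts  map[string]*Account
	summaries map[string]*AccountSummary
}

func NewStore() *Store {
	return &Store{
		accounts:  map[string]*Account{},
		summaries: map[string]*AccountSummary{},
	}
}

// Deposit is the command side.
func (s *Store) Deposit(id string, amount int) error {
	if amount <= 0 {
		return fmt.Errorf("amount must be positive")
	}
	a, ok := s.accounts[id]
	if !ok {
		a = &Account{ID: id}
		s.accounts[id] = a
	}
	a.Balance += amount
	// Read model refreshed in the same call, before returning.
	s.summaries[id] = &AccountSummary{
		ID:          id,
		DisplayName: strings.ToUpper(id),
		Balance:     a.Balance,
	}
	return nil
}

// Summary is the query side; it never touches the write model.
func (s *Store) Summary(id string) (AccountSummary, bool) {
	sum, ok := s.summaries[id]
	if !ok {
		return AccountSummary{}, false
	}
	return *sum, true
}

func main() {
	s := NewStore()
	_ = s.Deposit("acct-1", 100)
	sum, _ := s.Summary("acct-1")
	fmt.Println(sum.DisplayName, sum.Balance)
}
```

In a real system the two map writes would share a database transaction; the sketch only shows that segregation of models does not by itself imply a replication lag.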
| ▲ | mrsmrtss 2 hours ago | parent | next [-] | | Yes, I don't know where the misconception that CQRS or Event Sourcing automatically means eventual consistency comes from. We have built, run, evolved, and operated quite a few reasonably sized event sourced systems successfully, and these systems are running to this day without any major incidents. We added eventually consistent projections where performance justified it, fully aware of the implications, but kept most of the system synchronous. | | |
| ▲ | anthonylevine an hour ago | parent [-] | | I think people lump CQRS, Event Sourcing, and event-driven into this a single concept and then use those words interchangeably. |
| |
| ▲ | zknill 2 hours ago | parent | prev | next [-] | | Please explain how you intend to use different models for reading and writing without there being some temporal separation between the two? Most all CQRS designs have some read view or projection built off consuming the write side. If this is not the case, and you're just writing your "read models" in the write path, where is the 'S' from CQRS (S for segregation)? You wouldn't have a CQRS system here. You'd just be writing read-optimised data. | |
| ▲ | azkalam an hour ago | parent | next [-] | | - Write side is a Postgres INSERT - Read side is a SELECT on a Postgres view | | |
| ▲ | zknill an hour ago | parent [-] | | I think you might struggle to "scale the read and write sides independently". It's a real stretch to be describing a postgres view as CQRS | | |
| |
| ▲ | anthonylevine an hour ago | parent | prev [-] | | > Most all CQRS designs have some read view or projection built off consuming the write side. This is flat out false. |
| |
| ▲ | mrkeen 2 hours ago | parent | prev [-] | | > Just because CQRS might make it easier to split Or segregate even. |
|
|
|
| ▲ | liampulles 40 minutes ago | parent | prev | next [-] |
| If you are considering event sourcing, run an event/audit log for a while and see if that does not get you most of the way there. You get similar levels of historical insight, with the disadvantage that to replay things you might need to put a little CLI or script together to infer commands out of the audit log (and if you do that a lot, a small library makes building those one-off tools quite simple - I've done that). But you avoid all the many well documented footguns that come from trying to run an event sourced system in a typical evolving business. |
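The audit-log alternative above is worth spelling out, because it answers the most common real requirement ("who edited this record?") without making the log the source of truth. A minimal Go sketch with a hypothetical `AuditEntry` shape; a real version would be an append-only database table:

```go
package main

import (
	"fmt"
	"time"
)

// AuditEntry records who changed what. It is not the source of
// truth; the current-state table still is.
type AuditEntry struct {
	At     time.Time
	Actor  string
	Record string
	Field  string
	Old    string
	New    string
}

type AuditLog struct{ entries []AuditEntry }

// Append adds an entry; nothing is ever updated or deleted.
func (l *AuditLog) Append(e AuditEntry) { l.entries = append(l.entries, e) }

// History answers "who edited this record?" directly. A replay
// tool would instead walk these entries and infer commands.
func (l *AuditLog) History(record string) []AuditEntry {
	var out []AuditEntry
	for _, e := range l.entries {
		if e.Record == record {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	log := &AuditLog{}
	log.Append(AuditEntry{Actor: "alice", Record: "invoice-7", Field: "status", Old: "draft", New: "sent"})
	log.Append(AuditEntry{Actor: "bob", Record: "invoice-7", Field: "status", Old: "sent", New: "paid"})
	for _, e := range log.History("invoice-7") {
		fmt.Printf("%s: %s -> %s (%s)\n", e.Actor, e.Old, e.New, e.Field)
	}
}
```

Because the log is advisory rather than authoritative, schema drift in old entries degrades reporting, not correctness, which is exactly the footgun trade the comment describes.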
|
| ▲ | fleahunter 3 hours ago | parent | prev | next [-] |
| The part people underestimate is how much organizational discipline event sourcing quietly demands. Technically, sure, you can bolt an append-only table on Postgres and call it a day. But the hard part is living with the consequences of “events are facts” when your product manager changes their mind, your domain model evolves, or a third team starts depending on your event stream as an integration API. Events stop being an internal persistence detail and become a public contract. Now versioning, schema evolution, and “we’ll just rename this field” turn into distributed change management problems. Your infra is suddenly the easy bit compared to designing events that are stable, expressive, and not leaking implementation details. And once people discover they can rebuild projections “any time”, they start treating projections as disposable, which works right up until you have a 500M event stream and a 6 hour replay window that makes every migration a scheduled outage. Event sourcing shines when the business actually cares about history (finance, compliance, complex workflows) and you’re willing to invest in modeling and ops. Used as a generic CRUD replacement it’s a complexity bomb with a 12-18 month fuse. |
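The "6 hour replay window" problem above is typically mitigated by giving each projection a checkpoint, so rebuilds resume from the last applied sequence number instead of starting over. A minimal Go sketch; the `Projection` shape is hypothetical and a real checkpoint would be persisted alongside the read model:

```go
package main

import "fmt"

// Event carries a monotonically increasing sequence number from
// the store.
type Event struct {
	Seq  uint64
	Kind string
}

// Projection tracks its own checkpoint so an interrupted rebuild
// can resume rather than replaying the whole stream.
type Projection struct {
	Checkpoint uint64
	Counts     map[string]int
}

func NewProjection() *Projection {
	return &Projection{Counts: map[string]int{}}
}

// Apply is idempotent with respect to already-seen events: a batch
// that overlaps the checkpoint is partially skipped, not recounted.
func (p *Projection) Apply(events []Event) {
	for _, e := range events {
		if e.Seq <= p.Checkpoint {
			continue // already applied
		}
		p.Counts[e.Kind]++
		p.Checkpoint = e.Seq
	}
}

func main() {
	p := NewProjection()
	p.Apply([]Event{{Seq: 1, Kind: "deposited"}, {Seq: 2, Kind: "withdrawn"}})
	// Overlapping redelivery is safe.
	p.Apply([]Event{{Seq: 2, Kind: "withdrawn"}, {Seq: 3, Kind: "deposited"}})
	fmt.Println(p.Checkpoint, p.Counts)
}
```

Checkpointing doesn't remove the cost of a from-scratch rebuild over 500M events, but it does stop every interruption or deploy from restarting the clock.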
| |
| ▲ | javier2 an hour ago | parent | next [-] | | This. This is also a reason why it's so impressive that Google Docs/Sheets has managed to stay largely the same for so long | |
| ▲ | mrkeen 2 hours ago | parent | prev | next [-] | | > Event sourcing shines when the business actually cares about history (finance, compliance, complex workflows) Flip it on its head. Would those domains be better off with simple crud? Did the accountants make a wrong turn when they switched from simple-balances to single-entry ledgers? | |
| ▲ | mexicocitinluez 2 hours ago | parent | prev [-] | | > or a third team starts depending on your event stream as an integration API. > Events stop being an internal persistence detail and become a public contract. You can't blame event sourcing for people not doing it correctly, though. The events aren't a public contract and shouldn't be treated as such. Treating them that way will result in issues. > Used as a generic CRUD replacement it's a complexity bomb with a 12-18 month fuse. This is true, but all you're really saying is "Use the right tool for the right job". | | |
| ▲ | simonw an hour ago | parent | next [-] | | > You can't blame event sourcing for people not doing it correctly, though. You really can. If there's a technology or approach which the majority of people apply incorrectly that's a problem with that technology or approach. | | |
| ▲ | anthonylevine 42 minutes ago | parent [-] | | No you can't. You can blame the endless number of people who jump into these threads with hot takes about technologies they neither understand nor have experience with. How many event sourced systems have you built? If the answer is 0, I'd have a real hard time understanding how you can even make that judgement. In fact, half of this thread can't even be bothered to look up the definition of CQRS, so the idea that "storing facts" is to blame for people abusing it is a bit of a stretch, no? | |
| |
| ▲ | zknill an hour ago | parent | prev [-] | | > You can't blame event sourcing for people not doing it correctly, though. Perhaps not, but you can criticise articles like this that suggest that CQRS will solve many problems for you, without touching on _any_ of its difficulties or downsides, or the mistakes that many people end up making when implementing these systems. | | |
| ▲ | anthonylevine an hour ago | parent [-] | | CQRS is simply splitting your read and write models. That's it. It's not complicated or complex. |
|
|
|
|
| ▲ | xlii 5 hours ago | parent | prev | next [-] |
| I'm going to have a word with my ISP. It seems the site's SSL certificate has expired. That's not a good thing, but my ISP decided I'm an idiot and gave me a condescending message about accepting the expired certificate - unacceptable in my book. VPN helped. Too much dry code for my taste and not many remarks/explanations - that's not bad, because for prose I'd recommend Martin Fowler's articles on event processing, but _could be better_ ;-) WRT the tech itself - personally I think Go is one of the best languages to go for Event Sourcing today (with Haskell maybe being second). I've been doing complexity analysis for ES in various languages, and the Go implementation was mostly free (due to Event being an interface and not a concrete structure). |
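The "Event as an interface" point above is the usual Go idiom for event-sourced aggregates: new event types satisfy a small interface, and state is rebuilt by folding events through a type switch, so adding an event touches no existing types. A minimal sketch with hypothetical event names:

```go
package main

import "fmt"

// Event is a small interface rather than a concrete struct, so new
// event types plug in without modifying existing code.
type Event interface {
	EventName() string
}

type AccountOpened struct{ Owner string }

func (AccountOpened) EventName() string { return "AccountOpened" }

type MoneyDeposited struct{ Amount int }

func (MoneyDeposited) EventName() string { return "MoneyDeposited" }

// apply folds one event into the current state; unknown events are
// ignored, which keeps old replays working as the schema grows.
func apply(balance int, e Event) int {
	switch v := e.(type) {
	case MoneyDeposited:
		return balance + v.Amount
	default:
		return balance
	}
}

func main() {
	events := []Event{
		AccountOpened{Owner: "alice"},
		MoneyDeposited{Amount: 50},
		MoneyDeposited{Amount: 25},
	}
	balance := 0
	for _, e := range events {
		balance = apply(balance, e)
	}
	fmt.Println(balance)
}
```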
| |
| ▲ | azkalam 2 hours ago | parent | next [-] | | > Go is one of the best languages to go for Event Sourcing today Can you explain this? Go has a very limited type system. | |
| ▲ | mrsmrtss 3 hours ago | parent | prev [-] | | Have you also considered C# for Event Sourcing? We've built many successful ES projects with C# and the awesome Marten library (https://martendb.io/). It's a real productivity multiplier for us. |
|
|
| ▲ | azkalam 3 hours ago | parent | prev [-] |
| How does event sourcing handle aggregates that may be larger than memory? |
| |