alexisread a day ago

I think that’s the beauty of PG here, you can find solutions to most of this:

Index creation https://stackoverflow.com/questions/23876479/will-postgresql...

JSON->DB schema https://jsonschema2db.readthedocs.io/en/latest/index.html

Pg shared-disk failover is similar, but RAC is quite unique. You're not going to be using it with a rented cluster though, are you?

https://www.postgresql.org/docs/current/different-replicatio...

Personally for me any technical advantages don’t outweigh the business side, YMMV :)

mike_hearn a day ago | parent

RAC is a default part of any cloud Oracle DB, I think! I must admit I'm not an expert in all the different SKUs, so there might be some that aren't, but if you rent an Autonomous Database in the cloud you're probably running on Exadata/RAC. That's why the uptime is advertised as 99.95% even without a separate region.

> Index creation https://stackoverflow.com/questions/23876479/will-postgresql...

I was ambiguous. That's an answer explaining how to create indexes manually, and noting that you get an index for primary keys and unique constraints automatically. Sure, all databases do that. Oracle can create arbitrary indexes for any relation in the background without being asked, if it notices that common queries would benefit from them.
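For reference, in Oracle 19c+ this automatic indexing is driven by the DBMS_AUTO_INDEX package; roughly like this (check the docs for the exact modes on your version):

```sql
-- Enable automatic indexing: the database monitors the workload and
-- creates and validates secondary indexes on its own.
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE', 'IMPLEMENT');

-- Or only report what it would create, for DBA review:
-- EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE', 'REPORT ONLY');
```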

Forgetting to create indexes is one of the most common issues people hit when writing database apps, because performance is fine on your laptop, or while the feature is new, and then it degrades as you scale up. Or worse, you deploy to prod and the site tanks because a new query that "works fine on my machine" is dog slow against real-world amounts of data. Oracle will just fix it for you; Postgres requires manual diagnosis and intervention. So this isn't the same capability.
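To make the failure mode concrete, here's a minimal sketch (using SQLite as a stand-in for any SQL database, with made-up table and index names): the same query stays a full table scan until someone remembers to create the index by hand.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, i * 1.5) for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry the human-readable plan in column 3.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT total FROM orders WHERE customer_id = 42"
plan_before = plan(query)   # full table scan: fine on a laptop, slow at scale

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan_after = plan(query)    # now an index search

print(plan_before)
print(plan_after)
```

With a manual-index database, the second step only happens after a human diagnoses the slow query; the claim above is that Oracle does it unprompted.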

> JSON->DB schema https://jsonschema2db.readthedocs.io/en/latest/index.html

Again I didn't provide enough detail, sorry.

What that utility is doing is quite different. For one, it assumes you start with a schema already. Oracle can infer a schema from a collection of documents even if you don't have one by figuring out which fields are often repeated, which values are unique, etc.
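A toy sketch of the inference idea (my own illustration, nothing like Oracle's actual algorithm): tally which fields appear across documents and with what types, and flag fields whose values are unique everywhere as key candidates.

```python
from collections import defaultdict

def infer_schema(docs):
    """Guess a schema from a collection of JSON-like dicts."""
    types = defaultdict(set)
    values = defaultdict(list)
    for doc in docs:
        for field, value in doc.items():
            types[field].add(type(value).__name__)
            values[field].append(value)
    schema = {}
    for field, seen in types.items():
        vals = values[field]
        schema[field] = {
            "types": sorted(seen),
            "present_in": len(vals),  # how many docs carry this field
            # unique in every doc and present in every doc -> key candidate
            "unique": len(set(map(str, vals))) == len(vals) == len(docs),
        }
    return schema

docs = [
    {"id": 1, "name": "ann", "vip": True},
    {"id": 2, "name": "bob"},
    {"id": 3, "name": "ann"},
]
schema = infer_schema(docs)
# "id" is unique and present everywhere, so it's a primary-key candidate;
# "vip" is rare, so it might become a nullable column.
```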

For another, what you get after running that utility is relational tables that you then have to access relationally via normal SQL. What JSON duality views give you is something that keeps the original document layout and access mode - you GET/PUT whole documents - and behind the scenes that's mapped to a schema, and through to the underlying SQL needed to update the tables the DB generated for you. So you get the performance of normalized relations without having to change your code.
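A rough illustration of the shape of that API (my own sketch, not Oracle's implementation; table and field names are invented): the app only ever PUTs and GETs whole documents, while the storage layer shreds them into, and reassembles them from, normalized tables.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY,
                     customer_id INTEGER REFERENCES customers(id),
                     total REAL);
""")

def put_doc(doc):
    # Shred the incoming document into the relational tables.
    conn.execute("INSERT OR REPLACE INTO customers (id, name) VALUES (?, ?)",
                 (doc["id"], doc["name"]))
    conn.execute("DELETE FROM orders WHERE customer_id = ?", (doc["id"],))
    for o in doc["orders"]:
        conn.execute("INSERT INTO orders (id, customer_id, total) VALUES (?, ?, ?)",
                     (o["id"], doc["id"], o["total"]))

def get_doc(customer_id):
    # Reassemble the document; the app never sees the tables.
    (name,) = conn.execute("SELECT name FROM customers WHERE id = ?",
                           (customer_id,)).fetchone()
    orders = [{"id": i, "total": t} for i, t in
              conn.execute("SELECT id, total FROM orders WHERE customer_id = ?",
                           (customer_id,))]
    return {"id": customer_id, "name": name, "orders": orders}

put_doc({"id": 1, "name": "ann", "orders": [{"id": 10, "total": 9.5}]})
doc = get_doc(1)
```

The point is that the mapping layer can be reshaped (or generated by the database, in Oracle's case) without the document-facing code changing at all.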

The nice thing about this is it lets developers focus on application features and semantics in the early stages of a startup by just reshaping their JSON documents at will, whilst someone else focuses on improving performance and data rigor fully asynchronously. The app doesn't know how the data is stored, it just sees documents, and the database allows a smooth transition from one data model to another.

I don't think Postgres has anything like this. If it does it'll be in the form of an obscure extension that cloud vendors won't let you use, because they don't want to/can't support every possible Postgres extension out there.