| ▲ | theanirudh 4 days ago |
| We just migrated to PlanetScale Postgres Metal over the weekend and are already seeing major query improvements. The migration was pretty smooth. Post-migration we hit a few issues (it turned out they weren't PlanetScale's fault), and the PlanetScale team jumped in immediately to help us out, even on a Saturday morning, so support's been amazing. The Insights tab also surfaced missing indexes, which we added and which sped things up further. Early days, but so far so good. |
|
| ▲ | benterix 4 days ago | parent | next [-] |
| Out of curiosity: how do you connect your databases to the external services that consume this data? In places where I do similar work, databases are usually on the same private network as the instances that read and write data to them. If you put them somewhere on the internet, security aside, doesn't that affect latency? |
| |
| ▲ | theanirudh 4 days ago | parent | next [-] | | Their databases are hosted on AWS and GCP, so latency isn't much of an issue. They also offer AWS PrivateLink, and if that's configured, traffic won't go over the internet. | | |
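As a rough illustration of the PrivateLink side (a hedged sketch, not PlanetScale's documented setup: the service name, VPC, subnet, and security group IDs below are all hypothetical), creating an interface VPC endpoint with boto3 looks roughly like this:

    import boto3  # assumes AWS credentials are already configured

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Hypothetical values; the real ServiceName would come from the
    # provider's PrivateLink setup for your database.
    response = ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-EXAMPLE",
        SubnetIds=["subnet-0123456789abcdef0"],
        SecurityGroupIds=["sg-0123456789abcdef0"],
        PrivateDnsEnabled=True,
    )
    print(response["VpcEndpoint"]["VpcEndpointId"])

With the endpoint in place, the application connects to the private DNS name instead of a public hostname, so traffic stays inside the cloud provider's network.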
| ▲ | sreekanth850 3 days ago | parent [-] | | No matter whether it's hosted on Azure, GCP, or AWS, latency is real. Cloud providers don't magically eliminate geography and physics, and a private network doesn't magically eliminate latency either. Any small latency increase can create performance bottlenecks for write operations in a strongly consistent database like Postgres or MySQL, because each write is a round trip from your server to the remote PlanetScale server, which adds transaction overhead. Complex transactions with multiple statements amplify the latency, since each statement is its own round trip. You could reduce it by hosting your app close to where PlanetScale hosts your DB cluster, but that's a dependency or a compromise.
Edit: A few writes per second? Probably fine. Hundreds of writes per second? Those extra milliseconds become a real bottleneck. | | |
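To make the round-trip arithmetic concrete, here is a minimal sketch (assuming psycopg2; the DSN and the orders table are made up): with, say, 5 ms to the database, a transaction of N statements pays roughly N x 5 ms before it can commit.

    import time
    import psycopg2  # assumes psycopg2 is installed; the DSN is hypothetical

    conn = psycopg2.connect("host=db.example.internal dbname=app user=app")
    conn.autocommit = False

    start = time.perf_counter()
    with conn.cursor() as cur:
        # Each execute() is a separate round trip to the remote server, so a
        # multi-statement transaction pays the network latency once per statement.
        for order_id in range(100):
            cur.execute(
                "UPDATE orders SET status = %s WHERE id = %s",
                ("shipped", order_id),
            )
    conn.commit()  # one more round trip for the COMMIT itself
    print(f"elapsed: {time.perf_counter() - start:.3f}s")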
| ▲ | mattrobenolt 3 days ago | parent | next [-] | | You can simply place your database in the same AWS or GCP region and the same AZs. | |
| ▲ | aiisthefiture 3 days ago | parent | prev | next [-] | | Your database will get slower before the latency is an issue. | |
| ▲ | hobofan 3 days ago | parent | prev [-] | | > Hundreds of writes per second? Those extra milliseconds become a real bottleneck. Of course it's nicer if the database can handle it, but if you are doing hundreds of sequential non-pipelined writes per second, there is a good chance that there is something wrong with your application logic. | | |
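As a hedged sketch of what fixing that application logic might look like (psycopg2 again; the events table and row contents are made up), batching rows into one statement collapses hundreds of round trips into one:

    import psycopg2
    from psycopg2.extras import execute_values  # sends many rows per statement

    conn = psycopg2.connect("host=db.example.internal dbname=app user=app")
    rows = [(i, "click") for i in range(500)]  # hypothetical event rows

    with conn, conn.cursor() as cur:
        # One multi-row INSERT: a single round trip instead of 500 separate ones.
        execute_values(
            cur,
            "INSERT INTO events (id, kind) VALUES %s",
            rows,
            page_size=len(rows),
        )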
| ▲ | sreekanth850 3 days ago | parent [-] | | Not universal; there are systems that need high-frequency, low-latency, strongly consistent writes. | | |
| ▲ | hobofan 3 days ago | parent [-] | | Yes, but for the majority of those, these would be individual transactions per request (for example), so the impact would be a fixed latency penalty rather than a multiplicative one. |
|
|
|
| |
| ▲ | oefrha 4 days ago | parent | prev | next [-] | | PlanetScale runs in AWS/GCP, so not really “somewhere on the internet” if your workload is already there. | |
| ▲ | siquick 3 days ago | parent | prev [-] | | This is the thought I always come back to with the non-big-cloud services. It's pretty much always been mandatory at non-startups to have all databases hidden away from the wider internet. |
|
|
| ▲ | oefrha 4 days ago | parent | prev | next [-] |
| Would you mind sharing what you were migrating from, and what kind of issues you ran into? |
|
| ▲ | ProofHouse 4 days ago | parent | prev | next [-] |
| appreciate you sharing |
|
| ▲ | endorphine 4 days ago | parent | prev [-] |
| Care to elaborate on what kind of issues? Looking into migrating as well. |
| |
| ▲ | theanirudh 4 days ago | parent [-] | | The issues weren't PlanetScale related. We use Hasura, and when we did the cutover we connected to the DB via PGBouncer, and some features don't work right through it. We started seeing a lot of errors, so we paged PlanetScale and they helped out. We had been connecting directly to PG previously, but we missed that when we cut over. |
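The comment doesn't say which features broke, but a common failure mode behind PGBouncer in transaction-pooling mode is session state (prepared statements, LISTEN/NOTIFY, SET) not surviving between transactions, since consecutive queries may land on different backend connections. A minimal sketch with hypothetical hosts and tables:

    import psycopg2

    # Port 6432 stands in for PGBouncer in transaction-pooling mode;
    # port 5432 would be Postgres directly. Everything here is hypothetical.
    conn = psycopg2.connect("host=db.example.internal port=6432 dbname=app user=app")
    conn.autocommit = True  # each statement becomes its own transaction

    with conn.cursor() as cur:
        cur.execute("PREPARE get_user AS SELECT * FROM users WHERE id = $1")

    with conn.cursor() as cur:
        # Behind a transaction pooler this can fail with
        # 'prepared statement "get_user" does not exist', because this
        # statement may be routed to a different backend connection.
        cur.execute("EXECUTE get_user(1)")

Connecting the application directly to Postgres (or using session pooling) avoids this class of problem, at the cost of more backend connections.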
|