| ▲ | raw_anon_1111 3 hours ago |
| Yes, I can see Claude Code making it easier to reproduce - Redshift (or Snowflake) - or anything else you need to be reliable and performant at scale. |
|
| ▲ | vjerancrnjak 2 hours ago | parent [-] |
| Both products are anything but reliable. Redshift can’t even get around partitioning limits, or S3 limits. But what’s funny is that Claude Code is from a US company, so it can’t be used in a boycott scenario. |
| ▲ | raw_anon_1111 2 hours ago | parent [-] |
| Redshift is used at the largest e-commerce site in the world and was built specifically to “shift” away from “Big Red” (Oracle). |
| ▲ | vjerancrnjak 2 hours ago | parent [-] |
| What can I say, I expected more than what they actually offer. A Redshift job can fail because S3 tells it to slow down. How can I make this HA, performance-oriented product any slower, given that its whole moat is an S3-based input/output interface? As a compute engine, its SQL capabilities are worse than the slowest pretend time-series DB like Elasticsearch. |
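For context on the throttling complaint: S3 can return a 503 "SlowDown" response when request rates spike, and clients are expected to back off and retry rather than fail outright. A minimal sketch of that retry pattern, with a simulated flaky call standing in for the real S3 client (the `SlowDown` exception and `fetch` function here are stand-ins, not the actual boto3 API):

```python
import random
import time

class SlowDown(Exception):
    """Stand-in for S3's 503 SlowDown throttling response."""

def retry_with_backoff(operation, max_attempts=5, base_delay=0.01):
    """Retry `operation` with exponential backoff plus jitter on SlowDown."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except SlowDown:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the throttling error
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))

# Simulated flaky call: throttled twice, then succeeds.
calls = {"n": 0}
def fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise SlowDown()
    return "ok"

result = retry_with_backoff(fetch)
print(result)  # succeeds on the third attempt
```

Whether a managed warehouse should hide this from the user is the point of contention above; the pattern itself is what AWS documents for any high-request-rate S3 workload.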
| ▲ | raw_anon_1111 2 hours ago | parent [-] |
| Are you trying to treat an OLAP database with columnar storage like an OLTP database? If you are, you would probably have the same issue with Snowflake. As far as S3 goes, are you trying to ingest a lot of small files or one large file? Again, Redshift is optimized for bulk imports. |
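To illustrate what "optimized for bulk imports" means in practice: Redshift's documented ingestion path is a single COPY statement over an S3 prefix of reasonably large files, loaded in parallel across slices, rather than many small per-file loads or row-by-row INSERTs. A sketch that just assembles such a statement (the bucket, table, and IAM role names are hypothetical):

```python
def build_copy(table, s3_prefix, iam_role):
    """Build a Redshift COPY statement that bulk-loads every file under an S3 prefix."""
    return (
        f"COPY {table}\n"
        f"FROM '{s3_prefix}'\n"
        f"IAM_ROLE '{iam_role}'\n"
        "FORMAT AS PARQUET;"  # columnar input keeps the load parallelizable
    )

sql = build_copy(
    "analytics.events",                              # hypothetical target table
    "s3://example-bucket/events/2024/",              # hypothetical prefix of large files
    "arn:aws:iam::123456789012:role/RedshiftCopyRole",  # hypothetical role ARN
)
print(sql)
```

Ingesting thousands of tiny objects through this path is where the S3 request-rate limits mentioned above tend to bite, which is why the bulk-vs-small-files question matters.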
|
|
|