rkagerer 5 hours ago

My understanding is Optane is still unbeaten when it comes to latency. Has anyone examined its use as an OS volume, compared to today's leading SSDs? I know the throughput won't be as high, but in my experience that's not as important as latency to how responsive your machine feels.
hamdingers 4 hours ago

> Has anyone examined its use as an OS volume, compared to today's leading SSDs?

Late last year I switched from a 1.5 TB Optane 905P to a 4 TB WD Blue SN5000 NVMe drive in a gaming machine and saw improved load times, which makes sense given the read and write speeds are roughly double. No observable difference otherwise, so I'm sure that's not the use case you were looking for. I could probably tease out the latency difference with benchmarks, but that's not how I use the computer.

The 905P is now in service as an SSD cache for a large media server, which brought a big performance boost, but the baseline I'm comparing against is just spinning drives.
speedgoose 5 hours ago

I configured a Hetzner AX101 bare-metal server with a 480 GB 3D XPoint SSD some years ago. It's used as the boot volume and it seems fast despite the server being heavily over-provisioned, but I can't really compare because I don't have a baseline without it.
aaronmdjones 4 hours ago

I have a 16 GiB Optane NVMe M.2 drive in my router as a boot drive, running OpenWRT. It's so incredibly fast and responsive that the LuCI interface loads completely the moment I hit enter on the login form.
rkagerer 5 hours ago

Before people claim it doesn't matter due to OS write buffering, I should point out that (a) today's bloated software, and the many-layered, abstracted I/O stack it's built on, tends to issue lots of unnecessary flushes, and (b) read latency is just as important as write latency (if not more so) to how responsive your OS feels, particularly if the whole thing doesn't fit in (or preload into) memory.
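The flush cost in point (a) is easy to observe directly: timing small writes with and without fsync() shows the device round-trip that each application-level flush forces. A minimal sketch (synthetic workload on a temp file, not a rigorous benchmark; numbers vary enormously by drive and filesystem):

```python
import os
import tempfile
import time

def timed_writes(n, block, use_fsync):
    """Write n blocks to a temp file; return mean latency per write in microseconds."""
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        for _ in range(n):
            os.write(fd, block)
            if use_fsync:
                os.fsync(fd)  # force the round-trip an app-level flush causes
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
        os.unlink(path)
    return elapsed / n * 1e6

block = b"\0" * 4096  # one 4 KiB page
buffered = timed_writes(200, block, use_fsync=False)
flushed = timed_writes(200, block, use_fsync=True)
print(f"buffered: {buffered:.1f} us/write, fsync'd: {flushed:.1f} us/write")
```

On NAND SSDs the fsync'd path is typically orders of magnitude slower than the buffered one, which is exactly the gap where Optane's low write latency shows up.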
dmayle 2 hours ago

I run two 1.5 TB Optanes in RAID-0 with XFS (I picked them up for $300 each on sale about two years ago). They're limited to PCIe 3.0 x4 (about 4 GB/s max each). I also have a 64 GB Optane drive I use as my boot drive.

It's hard to say, because it's subjective; I don't swap back and forth between an SSD and the Optane drives. My old system (5950X) has a 2 TB Samsung 980 Pro NVMe drive (PCIe 4.0 x4, or 8 GB/s max) as root and a 4 TB Sabrent Rocket 4 Plus (also PCIe 4.0) as secondary, so I ran sysbench on both systems to share the differences (new system is a 9950X3D). It feels snappier, especially when doing compilations...

Sequential reads: I started with a 150 GB fileset, but it was being served from the kernel cache on my newer system (256 GB RAM vs 128 GB on the old), so I switched to 300 GB of data. The Optanes gave me 5000 MiB/s sequential read, as opposed to 2800 MiB/s for the 980 Pro and 4340 MiB/s for the Rocket 4 Plus.

Random writes alone (no read workload): the Optane system gets 2184 MiB/s, the 980 Pro gets 32 MiB/s, and the Rocket 4 Plus gets 53 MiB/s.

Mixed workload (random read/write): the Optanes get 725/483 MiB/s, as opposed to 9/6 for the 980 Pro and 42/28 for the Rocket 4 Plus.

2x 1.5 TB Optane RAID-0 prep time:
`sysbench fileio --file-total-size=150G prepare`
161061273600 bytes written in 50.41 seconds (3047.27 MiB/sec).
2 TB NAND Samsung 980 Pro prep time:
`sysbench fileio --file-total-size=150G prepare`
161061273600 bytes written in 87.15 seconds (1762.53 MiB/sec).

4 TB Sabrent Rocket 4 Plus prep time:
`sysbench fileio --file-total-size=300G prepare`
322122547200 bytes written in 152.39 seconds (2015.92 MiB/sec).
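For anyone wanting to reproduce the random read/write numbers above, here's a sketch of the full sysbench fileio cycle. The exact run/cleanup flags weren't shown, so these are my assumptions, not the precise invocations used:

```shell
# Sketch of a sysbench fileio mixed random read/write benchmark.
# --file-total-size should exceed installed RAM so the page cache
# can't serve reads (the reason for moving from 150G to 300G above).
sysbench fileio --file-total-size=300G prepare
sysbench fileio --file-total-size=300G --file-test-mode=rndrw --time=120 run
sysbench fileio --file-total-size=300G cleanup
```

Other `--file-test-mode` values (`seqrd`, `seqwr`, `rndrd`, `rndwr`) cover the sequential and write-only cases reported above.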