The Student Room Group

Poor SSD performance

Got another SSD (500GB Samsung 840 Evo) today, hooked it up, set up the partition, did a rudimentary speed check using Dxtory and it came up at ~250MB/s write. It is in a GSATA III port, and I know that the Marvell-controlled SATA ports are slower than the Intel ones, but I would still expect them to be faster than this. HDtune gives it a fairly stable read speed just under 300MB/s, occasionally spiking up then down or down then up (kinda like a stereotypical heart-rate monitor)

I then went on to check my other two SSDs. The 250GB Samsung 840 (boot drive) on the Dxtory test came out at ~100MB/s (so it reckons my HDDs are faster), and I went on to check it using HDtune and it came out with this plot:

which in the grand scheme of things is fairly stable; most runs have come out jumping wildly between ~100MB/s (as low as 60) and 400 for most of the test. It's running on the Intel SATA III port 0. The third SSD seems to be running just fine: the 250GB 840 Evo in Intel port 1 came out on the Dxtory test at a good 450MB/s+ (and I didn't bother running it on HDtune).
Maybe when you are monitoring the read and write speed... actually write something to the SSD, and also open a program installed on it, and you'll see what the actual speeds are when the SSD is in use?
Reply 2
Writing from the good SSD (G) to C started at about 400, then dropped off to 80; doing the same in reverse (C->G) gave 300-400
G->F (new SSD) started at 400, dropped to about 250; in reverse, a flat 300
Still seems on the low side, especially compared to what C used to manage
(2.71GB file used)
SSDs become extremely slow when filled up because wear-leveling can no longer be done properly, and all blocks need to be erased before being written again.

So it depends on what the program actually does, and how much of the SSD is marked as "used" at the firmware level (not the filesystem level). Maybe try running TRIM?

This should only affect writes, not reads. So if it only happens in one direction, you know which SSD is the bottleneck.
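A quick way to check whether Windows is actually issuing TRIM, and to force a manual re-TRIM, is sketched below. This is an assumption-laden example, not a step from the thread: `G` is a placeholder drive letter, and `Optimize-Volume` only exists on Windows 8 and later (on Windows 7 the `fsutil` query is the relevant check).

```shell
:: Query whether Windows sends TRIM commands to SSDs.
:: DisableDeleteNotify = 0 means TRIM is enabled; 1 means it is disabled.
fsutil behavior query DisableDeleteNotify

:: Manually re-TRIM a volume from an elevated prompt (Windows 8+).
:: Replace G with the drive letter of the SSD under test.
powershell -Command "Optimize-Volume -DriveLetter G -ReTrim -Verbose"
```

Both commands are read-only-ish diagnostics as far as user data goes: the first just reports a system setting, and a re-TRIM only tells the drive which blocks the filesystem considers free.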
Reply 4
Original post by ihavemooedtoday
SSDs become extremely slow when filled up because wear-leveling can no longer be done properly, and all blocks need to be erased before being written again.

So it depends on what the program actually does, and how much of the SSD is marked as "used" at the firmware level (not the filesystem level). Maybe try running TRIM?

This should only affect writes, not reads. So if it only happens in one direction, you know which SSD is the bottleneck.

Well, the one on the Marvell chip was brand new, so it should have the grand total of jack **** being used. I'm pretty sure TRIM is running anyway (IIRC it was running by default on the first one when I put it in). With C I had just recently cleared about 40% of it, though.
Reply 5
Original post by Jammy Duel
Writing from the good SSD (G) to C started at about 400, then dropped off to 80; doing the same in reverse (C->G) gave 300-400
G->F (new SSD) started at 400, dropped to about 250; in reverse, a flat 300
Still seems on the low side, especially compared to what C used to manage
(2.71GB file used)


Transferring from one drive to another (which I think is what you're doing there) may not be the best test, as you don't know where the bottleneck is (source drive, destination drive, controllers, CPU, memory, filesystem, operating system, etc.)

You'll do much better with specialist test tools, or things that do just reads or writes to a single drive (on Linux I'd probably start off using dd with /dev/zero or /dev/null). Similarly, single large files will generally show better speeds than lots of small files, as it reduces the OS/filesystem overheads.
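A minimal single-drive dd test on Linux, along the lines suggested above, might look like this (a sketch: the file name and sizes are arbitrary, and the scratch file needs to live on the drive you want to measure):

```shell
# Write test: 256 MiB of zeros to a scratch file on the drive under test.
# conv=fdatasync forces the data onto the disk before dd reports a speed,
# so you measure the drive rather than the write-back cache.
dd if=/dev/zero of=./ssd_test.bin bs=1M count=256 conv=fdatasync

# Read test: stream the file back into /dev/null. Drop the page cache first
# (as root: echo 3 > /proc/sys/vm/drop_caches) or you'll measure RAM, not the SSD.
dd if=./ssd_test.bin of=/dev/null bs=1M

# Clean up the scratch file.
rm ./ssd_test.bin
```

dd prints the throughput on completion; running the write and read separately tells you which direction is slow without a second drive in the path.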
Reply 6
Original post by mfaxford
Transferring from one drive to another (which I think is what you're doing there) may not be the best test, as you don't know where the bottleneck is (source drive, destination drive, controllers, CPU, memory, filesystem, operating system, etc.)

You'll do much better with specialist test tools, or things that do just reads or writes to a single drive (on Linux I'd probably start off using dd with /dev/zero or /dev/null). Similarly, single large files will generally show better speeds than lots of small files, as it reduces the OS/filesystem overheads.


Well, I somehow doubt it would be the memory, CPU or source drive (given that the drive I was using as the source was the one the software was reporting as still working at high speed)

Posted from TSR Mobile
Original post by Jammy Duel
Got another SSD (500GB Samsung 840 Evo) today, hooked it up, set up the partition, did a rudimentary speed check using Dxtory and it came up at ~250MB/s write. It is in a GSATA III port, and I know that the Marvell-controlled SATA ports are slower than the Intel ones, but I would still expect them to be faster than this. HDtune gives it a fairly stable read speed just under 300MB/s, occasionally spiking up then down or down then up (kinda like a stereotypical heart-rate monitor)

I then went on to check my other two SSDs. The 250GB Samsung 840 (boot drive) on the Dxtory test came out at ~100MB/s (so it reckons my HDDs are faster), and I went on to check it using HDtune and it came out with this plot:

which in the grand scheme of things is fairly stable; most runs have come out jumping wildly between ~100MB/s (as low as 60) and 400 for most of the test. It's running on the Intel SATA III port 0. The third SSD seems to be running just fine: the 250GB 840 Evo in Intel port 1 came out on the Dxtory test at a good 450MB/s+ (and I didn't bother running it on HDtune).


do everything on this list

http://www.computing.net/howtos/show/solid-state-drive-ssd-tweaks-for-windows-7/552.html
