While researching poor write performance with Oracle, I discovered that the server was using the LSI SAS1068E. We had a RAID1 setup with 300GB 10K RPM SAS drives. Google provided some possible insight into why the write performance was so bad (1, 2). The main problem with this card is that it has no battery-backed write cache, which means the write cache is disabled by default. I was able to turn on the write cache using the LSI utility.
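The LSI utility is menu-driven, so I won't reproduce its screens here, but as a general Linux cross-check (this is sdparm, not the LSI tool itself, and it assumes your SAS disks show up as /dev/sdX) you can verify whether the drive-level write cache actually changed:

    # Query the Write Cache Enable (WCE) bit on the drive itself
    sdparm --get=WCE /dev/sda

    # Turn the drive-level write cache on (adjust the device to match your setup)
    sdparm --set=WCE /dev/sda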
This change, however, did not seem to make any difference in performance. At this point I came to the conclusion that the card itself is to blame. I believe this is an inexpensive RAID card that is fine for general RAID0 and RAID1 use; however, for anything where write throughput is important, it might be better to spring for something a little more expensive.
When it was all said and done we ended up replacing all of these LSI cards with Dell Perc 6i cards. These cards do come with a battery-backed cache, which allowed us to enable the write cache; needless to say, the performance improved significantly.
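For reference, here is a sketch of how you could check the battery and flip the cache policy on a Perc 6i from Linux. The Perc 6i is MegaRAID-based, so LSI's MegaCli utility talks to it; the exact binary name (MegaCli vs. MegaCli64) and adapter numbering vary per system, so treat these as assumptions to adjust:

    # Confirm the battery backup unit is present and charged
    MegaCli -AdpBbuCmd -GetBbuStatus -aAll

    # Show the current cache policy for all logical drives
    MegaCli -LDGetProp -Cache -LAll -aAll

    # Switch all logical drives to write-back caching
    MegaCli -LDSetProp WB -LAll -aAll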
Hey, I’ve got the same LSI SAS controller and I’m thinking about replacing it with a Dell Perc. Can you tell me how big the performance boost is? Have you got any benchmarks?
Well, I don’t have any hard benchmarks to show you; however, since we were running Proxmox on these servers, we were using a Perl script that comes with that distro as our benchmarking tool.
The script is called ‘pveperf’ (there is a quick note on running it yourself after the numbers below). Here is the output from the command:
Original LSI SAS1068E output:
BUFFERED READS: 115.99 MB/sec
AVERAGE SEEK TIME: 6.58 ms
FSYNCS/SECOND: 272.47
New Dell Perc output:
BUFFERED READS: 101.11 MB/sec
AVERAGE SEEK TIME: 6.33 ms
FSYNCS/SECOND: 2509.68
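For anyone wanting to reproduce these numbers, pveperf just takes the path you want to test (it defaults to / if you omit it). On a stock Proxmox install the VM storage lives under /var/lib/vz, but point it at whatever filesystem you care about:

    # Benchmark the filesystem backing your VM storage
    pveperf /var/lib/vz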
Clearly the seek time should still be about the same, as should the buffered reads. The major performance gain is in write speed (FSYNCS/SECOND), where there is almost a 10X increase by this metric.
I can tell you for a fact that if you have an I/O-intensive application (MySQL, Oracle, or a Samba/NFS server with heavy writes, for example) you will see a huge increase in performance, and it is all due to the ability of the controller to cache those writes.
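If you want a crude stand-in for the FSYNCS/SECOND number without installing Proxmox, dd with oflag=dsync forces every block to be flushed to stable storage before the next write, so the reported rate is dominated by how the controller handles synchronous writes (the file path and block count here are just illustrative):

    # Each 4K block is synced before the next one is written;
    # with no write-back cache this will crawl, with one it flies
    dd if=/dev/zero of=/tmp/fsync-test bs=4k count=1000 oflag=dsync
    rm /tmp/fsync-test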
We ended up replacing these LSI cards with Perc ones in 10 to 15 of our Dell PowerEdge servers after this discovery.
We’re having the very same issues and are working on a solution: replacing the controller with the Perc 6i.
Do you know if these controllers are interchangeable without losing data?
Well, we replaced our LSI cards with the Perc ones without any issues. When you install the new card and create the new virtual disks, the controller will present a popup screen suggesting that you initialize the disks. We had existing data on our servers, so I chose NOT to do that, and we have not seen any negative side effects.
I do not see any reason why you would lose data just because you switch between these two RAID cards.
I was following your instructions for the “write cache” on that controller, but I cannot find the menu option in the LSI tool to enable the write cache.
Please help me.
Well, if you provide me with a little more detail I might be able to offer some help; however, as the article states, turning on the write cache option via the admin tools has little to no actual impact on the performance you will get from this card.
In fact, it may be the case that the tool simply reports the cache as enabled while no actual change is made in hardware or software. Anyway, let me know if you still need help.
I’ve used a Dell Perc 6/i (aka 1078) and a Dell SAS 6/iR (aka 1068) and get the same bad read performance of 130 MB/s max. I guess it may be the backplane, cable, or drivers.
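A quick way to check whether the controller path is even the bottleneck is to measure sequential reads straight off the block device, bypassing the filesystem; if the raw device also tops out around 130 MB/s, the limit is below the filesystem (controller, cable/backplane, or driver) rather than in the workload. The device name is a placeholder, so substitute your own:

    # Timed buffered disk reads via the kernel, no filesystem involved
    hdparm -t /dev/sda

    # Direct I/O sequential read of 2GB off the raw device
    dd if=/dev/sda of=/dev/null bs=1M count=2048 iflag=direct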