
Linux guest NFS server performance mystery


Hi all,

 

We need to present additional storage to some physical RHEL 5 Oracle database servers that currently use EMC PowerPath for the databases. We don't want to introduce any more Fibre Channel LUNs to this environment, especially since they would have to come from a different SAN vendor. It's a temporary requirement, so I suggested NFS, but the Oracle DBAs are balking over performance concerns.

To address those concerns, I ran some performance tests. I created two CentOS VMs on the same Fibre Channel datastore, one as the NFS server and one as the client, using NFS v3 over TCP with 32k read/write blocks. Measuring with FIO, I get 16,000 IOPS against the VM's local vmdk/ext4 file system. Running the exact same FIO job against the NFS mount with the two guests on separate ESXi hosts, I average 17,000 IOPS. If I put both VMs on the same ESXi host, the result jumps to 33,000 IOPS, which I can't believe, given that the NFS server is working against the same vmdk file system that only gets 16,000 IOPS locally.

The array (Winchester) has no SSD or other cache tier beyond what's in the controller. There is a direct 1 Gbit cable between the two hosts, with jumbo frames enabled on the vSwitch and on the NICs in Linux, and the NFS traffic passes over this port group. I can replicate these numbers all day long.

Any thoughts on why we're seeing this exceptional NFS performance?
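For concreteness, the setup looks roughly like this. The hostname, export path, and the exact FIO parameters below are illustrative rather than the verbatim job I ran; the mount options match the NFS v3/TCP/32k configuration described above:

    # On the client: NFS v3 over TCP with 32k read/write blocks
    mount -t nfs -o vers=3,proto=tcp,rsize=32768,wsize=32768 \
        nfs-server:/export/oradata /mnt/nfstest

    # FIO job against the mount; direct=1 bypasses the client page cache
    fio --name=nfstest --directory=/mnt/nfstest \
        --ioengine=libaio --direct=1 --rw=randrw --bs=32k \
        --iodepth=32 --numjobs=4 --size=4g \
        --runtime=60 --time_based --group_reporting

Pointing the same job at a local ext4 directory on the vmdk is what produced the 16,000 IOPS baseline.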

 

Thanks!

