One of the biggest causes of SAN performance issues, especially in the virtual space, is a lack of I/O availability: that is, I/O problems with either reading or writing data. Trouble starts when the idea of disk space versus disk utilization (performance) rears its ugly head. The problem is an understandable one. A department, let's say, purchases 2 TB of disk space with the idea that this will be enough to accommodate their data as a RAID 5 volume. The problem comes in when you ask the following questions.
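As a rough illustration of what that 2 TB purchase actually implies (a sketch with assumed disk sizes, not the department's real layout), RAID 5 gives you N minus one disks' worth of usable space because one disk's worth goes to parity:

```python
def raid5_usable_tb(disk_count: int, disk_size_tb: float) -> float:
    """RAID 5 stores one disk's worth of parity, so usable space is (N - 1) disks."""
    if disk_count < 3:
        raise ValueError("RAID 5 requires at least 3 disks")
    return (disk_count - 1) * disk_size_tb

# Hypothetical example: five 0.5 TB disks yield the 2 TB the department asked for.
print(raid5_usable_tb(5, 0.5))  # 2.0
```

The catch, as the questions below show, is that this math only answers the capacity question, not the performance one.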
1. What disk group will this go into?
As a standard storage best practice, there should always be at least three disk groups. Breaking out disk groups this way creates buffered sections of disk that are not being tapped, polled, or affected in any way at the SAN level. This means high I/O and threshold maximization on one disk group will not leave the others out to dry.
2. What will the VM guests be used for?
A VM functioning as an IIS server will probably not have the same I/O load as the database server on the back end, or worse, an MS Exchange server that is constantly sending and receiving mail.
So what do you do if you are faced with an application server or database server that is going to be an I/O hog? This is where breaking out your disk groups comes into play. One disk group is made into a RAID 10 disk group using four or more of your disks. This disk group is where your SQL, Exchange, or application transaction logs will go, as well as any other drive that you think may be capable of generating high usage.
The other disk group is where the base OS disks and non-intensive disks, such as data stores or data drives in SQL, will live. These can be left on a standard RAID 5 configuration.
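To see why the log-heavy disk group goes on RAID 10, the classic write-penalty arithmetic helps (a simplified sketch; the per-spindle IOPS figure below is an assumed round number, not a measurement). RAID 5 turns each logical write into four back-end I/Os (read data, read parity, write data, write parity), while RAID 10 needs only two (one per mirror side):

```python
def effective_write_iops(disk_count: int, iops_per_disk: int, write_penalty: int) -> float:
    """Raw back-end IOPS divided by the RAID write penalty gives front-end write IOPS."""
    return disk_count * iops_per_disk / write_penalty

# Assumed: four 150-IOPS spindles in each disk group.
raid5 = effective_write_iops(4, 150, write_penalty=4)    # 150.0
raid10 = effective_write_iops(4, 150, write_penalty=2)   # 300.0
print(raid5, raid10)
```

On the same four disks, RAID 10 sustains roughly twice the write IOPS of RAID 5, which is exactly the workload profile of transaction logs.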
3. Raw Device Mappings or VMDK?
No one will argue that it is exceptionally neat to have everything on a drive in one file and be able to take VMware snapshots on the fly. But when you are dealing with mission-critical servers that need performance, you have to make the hard choices, and one of those is Raw Device Mappings (RDMs). This means, of course, that the disk is carved out and presented to the VM as if it were a LUN being presented to a physical server. There are ups and downs to this idea, one being that you instantly lose the portability you gain from VMware tools such as Storage vMotion, a feature I have used to move VMs off one old failing cluster to another almost seamlessly.
What you gain, though, is that you have removed one layer of complexity from server performance issues: in this scenario, VMware itself is no longer a factor for these disks.
The next post will discuss: still having issues?