Virtualization in the database world
Posted by decipherinfosys on October 26, 2007
Virtualization is rapidly changing how we consolidate our servers for testing and deployment, for training, and for disaster recovery. We have VMWare machines running in our office with clustered SQL Server and clustered Oracle environments. It is a great way to test new software, play around with new functionality, and rehearse your disaster recovery scenarios without investing millions in hardware. It is also a good way to consolidate your servers and use them for development and QA purposes.
Both VMWare’s VMWare Server and MSFT’s Virtual Server 2005 R2 support a 64-bit architecture on the host, which means more usable memory on the host server (up to 1 terabyte) and, in turn, the capability to run many more active VMs. When you are getting ready to consolidate your servers into VMs, remember to allocate an extra 32 MB per VM to account for virtualization overhead. So, if you have an Oracle database to which you have allocated, say, 2 GB of memory overall, the VM would need (2 * 1024) + 32 = 2080 MB. You also need to ensure that enough RAM is left over for the host itself.

Another thing to consider is the use of a SAN for the host server. You should create the VM’s virtual hard drive on a drive different from the one holding the host’s operating system – this reduces the possibility of drive and spindle contention. Better still, use a SAN to help improve I/O for the VMs. One more thing to remember: with their default settings, virtual hard drives are configured to dynamically expand as needed. However, this is not good for performance. We would recommend pre-allocating a fixed size in order to avoid the performance hit of expansion.
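The sizing rule above is easy to get wrong when you are consolidating many servers at once, so here is a minimal sketch of the arithmetic in Python. The function names and the 2 GB host reserve are our own illustrative assumptions; the 32 MB per-VM overhead is the figure mentioned above.

```python
# Hypothetical sizing helpers for the rule described above:
# each VM needs its allocated memory plus ~32 MB of virtualization
# overhead, and the host must keep some RAM in reserve for itself.

VM_OVERHEAD_MB = 32  # per-VM overhead figure cited in the post


def vm_memory_mb(allocated_mb: int) -> int:
    """Total memory a single VM consumes on the host."""
    return allocated_mb + VM_OVERHEAD_MB


def host_ram_needed_mb(vm_allocations_mb, host_reserve_mb=2048):
    """Minimum host RAM for a set of VMs, plus a reserve for the host OS.

    host_reserve_mb is an assumed figure; size it for your own host.
    """
    return sum(vm_memory_mb(a) for a in vm_allocations_mb) + host_reserve_mb


# The 2 GB Oracle example from the text:
print(vm_memory_mb(2 * 1024))  # 2080 MB for that one VM
```

For example, consolidating that 2 GB Oracle VM alongside a 1 GB VM onto a host with a 2 GB OS reserve would call for at least `host_ram_needed_mb([2048, 1024])` = 5184 MB.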
Here is the link to the whitepapers from VMWare on this topic: