Systems Engineering and RDBMS


Archive for the ‘Hardware’ Category

CISCO’s Data Center Plans

Posted by decipherinfosys on March 20, 2009

CISCO unveiled its data center plans with its new UCS B-Series blades – you can read more on that here.

Posted in Hardware, Technology | Leave a Comment »

1TB Drives for Arrays

Posted by decipherinfosys on February 18, 2008

IBM is also using 1TB drives for its storage arrays now. Think about what this could mean for the data storage needs of large data centers. You can read more on this at this eWeek link:

Posted in Hardware, News | Leave a Comment »

Difference between RAID 0+1 vs RAID 1+0

Posted by decipherinfosys on January 15, 2008

We have covered RAID levels before in our posts. You can read about the different RAID levels here and the I/O characteristics here.  While building a DR (Disaster Recovery) environment for one of our clients, the client asked: “How is RAID 1+0 different from RAID 0+1?”.  Both RAID 0+1 and RAID 1+0 are nested RAID levels, which means that they are created by taking a number of disks and dividing them into sets.  Within each of these sets, a single RAID level is applied to form the arrays.  Then, the second RAID level is applied on top of those arrays to form the nested array.  RAID 1+0 is also called a stripe of mirrors, and RAID 0+1 a mirror of stripes, based on the nomenclature for RAID 1 (mirroring) and RAID 0 (striping).  Let’s follow this up with an example:

Suppose that we have 20 disks with which to form either a RAID 1+0 or a RAID 0+1 array.

a) If we choose RAID 1+0 (RAID 1 first and then RAID 0), we divide those 20 disks into 10 sets of two.  We turn each set into a RAID 1 array, and then stripe across the 10 mirrored sets.

b) If, on the other hand, we choose RAID 0+1 (i.e. RAID 0 first and then RAID 1), we divide the 20 disks into 2 sets of 10 each.  We turn each set into a RAID 0 array of 10 disks, and then mirror those two arrays.

So, is there a difference at all?  The storage capacity is the same, the drive requirements are the same, and in testing there is not much difference in performance either.  The difference is actually in the fault tolerance.  Let’s look at the two layouts mentioned above in more detail:

RAID 1+0:
Drives 1+2    = RAID 1 (Mirror Set A)
Drives 3+4    = RAID 1 (Mirror Set B)
Drives 5+6    = RAID 1 (Mirror Set C)
Drives 7+8    = RAID 1 (Mirror Set D)
Drives 9+10   = RAID 1 (Mirror Set E)
Drives 11+12  = RAID 1 (Mirror Set F)
Drives 13+14  = RAID 1 (Mirror Set G)
Drives 15+16  = RAID 1 (Mirror Set H)
Drives 17+18  = RAID 1 (Mirror Set I)
Drives 19+20  = RAID 1 (Mirror Set J)

Now, we do a RAID 0 stripe across sets A through J.  If drive 5 fails, only mirror set C is affected.  The set still has drive 6, so it will continue to function, and the entire RAID 1+0 array will keep functioning.  Now, suppose that while drive 5 is being replaced, drive 17 fails; the array is still fine because drive 17 is in a different mirror set.  So, the bottom line is that in the above configuration at most 10 drives can fail without affecting the array, as long as they are all in different mirror sets.

Now, let’s look at what happens in RAID 0+1:

RAID 0+1:
Drives 1+2+3+4+5+6+7+8+9+10           = RAID 0 (Stripe Set A)
Drives 11+12+13+14+15+16+17+18+19+20  = RAID 0 (Stripe Set B)

Now, these two stripe sets are mirrored.  If one of the drives, say drive 5, fails, the entire stripe set A fails.  The RAID 0+1 array is still fine since we have stripe set B.  But if drive 17 then also goes down, you are down.  One can argue that this is not always the case and that it depends on the type of controller you have.  Say you had a smart controller that continued to stripe to the other 9 drives in stripe set A when drive 5 failed; if drive 17 later failed, the controller could use drive 7, since it holds the same data.  If the controller can do that, then theoretically RAID 0+1 would be as fault tolerant as RAID 1+0.  Most controllers do not do that, though.
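The fault-tolerance argument above can be checked with a small Python sketch. This is purely illustrative (the drive numbering matches the 20-disk example, and it assumes the typical controller behavior described above, where losing any one drive kills its whole RAID 0 stripe set):

```python
from itertools import combinations

def raid10_survives(failed, pairs=10):
    # RAID 1+0: drives (1,2), (3,4), ..., (19,20) are mirror sets.
    # The array fails only if BOTH drives of some mirror set fail.
    for m in range(pairs):
        if {2 * m + 1, 2 * m + 2} <= failed:
            return False
    return True

def raid01_survives(failed, stripe_size=10):
    # RAID 0+1: drives 1..10 form stripe set A, 11..20 form stripe set B.
    # With a typical controller, the array fails once each stripe set
    # has lost at least one drive.
    set_a_dead = any(d <= stripe_size for d in failed)
    set_b_dead = any(d > stripe_size for d in failed)
    return not (set_a_dead and set_b_dead)

# Drive 5 fails, then drive 17 fails, as in the example above:
print(raid10_survives({5, 17}))  # True  -- different mirror sets
print(raid01_survives({5, 17}))  # False -- one drive lost in each stripe set

# Count how many two-drive failure combinations each layout survives:
n10 = sum(raid10_survives({a, b}) for a, b in combinations(range(1, 21), 2))
n01 = sum(raid01_survives({a, b}) for a, b in combinations(range(1, 21), 2))
print(n10, n01)  # 180 vs 90, out of 190 possible two-drive failures
```

Counting all two-drive failures makes the difference concrete: RAID 1+0 only dies when the two failed drives happen to be the same mirror pair, while RAID 0+1 dies whenever the failures straddle the two stripe sets.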

Posted in Hardware | 7 Comments »

RAID Levels and IOs per disk

Posted by decipherinfosys on March 1, 2007

Here is the calculation of the I/Os per disk for the different RAID levels:

  • RAID 0 — I/Os per disk = (reads + writes) / number of disks
  • RAID 1 — I/Os per disk = [reads + (2 * writes)] / 2
  • RAID 5 — I/Os per disk = [reads + (4 * writes)] / number of disks
  • RAID 10 — I/Os per disk = [reads + (2 * writes)] / number of disks

So, as you can see from the above, RAID 5 incurs a higher overhead for writes than RAID 10, which is why, for a highly performant and transactional RDBMS, RAID 5 is rarely recommended for the logs.

Also, how can you use this information to see whether you are running into an I/O bottleneck?  What is presented above is just theory, meant to prove a well-known point about RAID 5 vs. RAID 10.  Taking the Windows OS as an example, use the perfmon (Performance Monitor) utility to measure these counters:

PhysicalDisk object: Disk Reads/sec, Disk Writes/sec, Avg. Disk Queue Length (you can get descriptions of these by hitting the Explain button in perfmon when selecting the counters). Now, assume that you have put your log files on a RAID 1 system with 2 physical disks. Since logs are written to sequentially, RAID 1 is a good choice. Suppose the counter measurements over a period of time yield these values:

Disk Reads/sec = 90
Disk Writes/sec = 80
Avg. Disk Queue Length = 5

In that case, you are encountering (90 + (2 * 80)) / 2 = 125 I/Os per disk, and your disk queue length per disk = 5 / 2 = 2.5, which indicates a borderline I/O bottleneck (any value over 2 is a cause for concern and should be evaluated along with the I/Os seen per disk).
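The formulas and the perfmon example above can be rolled into a small helper. This is a sketch of the arithmetic only (the counter values are the sample numbers from above, not live measurements):

```python
def ios_per_disk(raid, reads, writes, disks):
    """I/Os each physical disk services, per the formulas above."""
    if raid == 0:
        return (reads + writes) / disks
    if raid == 1:
        return (reads + 2 * writes) / 2      # RAID 1: 2 disks, writes doubled
    if raid == 5:
        return (reads + 4 * writes) / disks  # 4 physical I/Os per logical write
    if raid == 10:
        return (reads + 2 * writes) / disks
    raise ValueError("unsupported RAID level")

# The perfmon readings from the example above (log files on 2-disk RAID 1):
reads, writes, queue_len, disks = 90, 80, 5, 2
print(ios_per_disk(1, reads, writes, disks))  # 125.0 I/Os per disk
print(queue_len / disks)                      # 2.5 -> borderline bottleneck
```

Plugging the same workload into the RAID 5 formula shows the write penalty directly: (90 + 4 * 80) / 2 = 205 I/Os per disk on a two-disk equivalent, versus 125 for RAID 1.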

From an RDBMS perspective, all three leading RDBMS (Oracle, SQL Server and DB2 LUW) provide system-level information for looking into wait events for further diagnosis. You can look at pending I/O requests, latch waits, etc., but these views do not (of course) provide any visibility into which physical disks are experiencing the problem. In a future whitepaper on our site, we will cover in detail the different wait events in the three leading RDBMS and how they can help you troubleshoot performance issues.

Posted in Hardware | Leave a Comment »

SAN vs. NAS
Posted by decipherinfosys on February 9, 2007

SAN stands for Storage Area Network

NAS stands for Network Attached Storage

The two are very different. A SAN is an external storage system that allows multiple computer systems to access the same storage. The fiber controller inside the external storage system takes requests for different logical volumes from different HBAs (Host Bus Adapters). A NAS is similar; however, unlike a SAN, where the storage is connected via a Fibre Channel connection, a NAS system is accessed over a network connection. Of the two choices, I prefer a SAN over a NAS for a production environment. The problem with NAS is that it is IP-based and data must travel over your network, which is limited by the top speed of the network and by other shared network traffic. This can greatly increase I/O latency, harming performance.

There is also the question of whether to go with individual RAID arrays or with a SAN. SANs do outperform individual RAID arrays. Servers can have more than two fiber connections to a SAN, so you can increase bandwidth to the SAN when necessary. Another big value point for SANs is efficient disk usage. In a well-tuned RAID configuration, you might need half a terabyte of disk to support a 100GB database just to have enough heads reading and writing data. SANs give systems only what they need and do not over-allocate disk space the way dedicated RAID arrays do. In addition, you can use a SAN to support multiple servers simultaneously, and dual-path and multi-path SANs match the performance of RAID 1 and RAID 10 volumes. The drawbacks of a SAN include the expense and the difficulty of setup. In addition, properly sizing and configuring a SAN requires you to measure the bandwidth and I/O requirements of all the systems that will use the SAN simultaneously.

Posted in Hardware | Leave a Comment »

What RAID is Best for You?

Posted by decipherinfosys on January 30, 2007

Most of you are familiar with the basic RAID technologies available today, but it is always better to have too much information on this topic than not enough. Here is a brief yet informative summary of the most popular hardware RAID configurations, including pros and cons for each:

RAID-0 (Striped)

  • Does not provide fault tolerance
  • Minimum number of disks required = 2
  • Usable storage capacity = 100%
  • This is the fastest of the RAID configurations from a read-write standpoint
  • Is the least expensive RAID solution because there is no duplicate data
  • Recommended use for temporary data only

RAID-1 (Mirrored)

  • Fault tolerant – you can lose multiple disks as long as a mirrored pair is not lost
  • Minimum number of disks required = 2
  • Usable storage capacity = 50%
  • Good read performance, relatively slow write performance
  • Recommended for the operating system and log files

RAID-5 (Striped with Parity)

  • Fault tolerant – can afford to lose one disk only
  • Minimum number of disks required = 3
  • Usable storage capacity = subtract 1 whole disk from the total number in the array (e.g. three 60GB hard drives would provide 120GB of usable disk space)
  • Generally good performance, and increases with concurrency – the more drives in the array the faster the performance
  • Recommended for operating system files, shared data, and application files

RAID-0+1 (Striped with Mirrors)

  • Fault tolerant – you can lose multiple disks as long as both are not part of a mirrored pair
  • Minimum number of disks required = 4
  • Usable storage capacity = 50%
  • Generally good performance, and increases with concurrency – the more drives in the array the faster the performance
  • Recommended for operating systems, shared data, application files, and log files

Additional Things to Keep in Mind

  • If you are using more than two disks, RAID 0+1 is a better solution than RAID 1
  • Usable storage capacity increases as the amount of disks increases, but so does the cost of the configuration
  • Performance increases as you add disks, but again, so does cost
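The capacity rules in the summary above reduce to simple arithmetic. As an illustrative sketch (the function name and level encoding are mine, not part of any standard API):

```python
def usable_capacity(raid, disks, disk_size_gb):
    """Usable space in GB for the RAID levels summarized above."""
    if raid == 0:
        return disks * disk_size_gb          # striped: 100% usable, no redundancy
    if raid in (1, "0+1"):
        return disks * disk_size_gb // 2     # mirrored: 50% usable
    if raid == 5:
        return (disks - 1) * disk_size_gb    # one disk's worth goes to parity
    raise ValueError("unsupported RAID level")

print(usable_capacity(5, 3, 60))      # 120 -- the three-drive RAID-5 example above
print(usable_capacity("0+1", 4, 60))  # 120 -- minimum four-disk RAID 0+1
print(usable_capacity(0, 4, 60))      # 240 -- all capacity, no fault tolerance
```

Running the same disk count through each level makes the cost trade-off visible at a glance: for a given number of spindles, RAID 0 gives the most space, RAID 5 gives back all but one disk, and the mirrored levels halve it.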

Posted in Hardware | 7 Comments »