Data ONTAP 8.3 cDOT is one of NetApp's biggest releases. Among its key features is the Advanced Drive Partitioning (ADP) technology. In the previous article I looked at applying this technology to Root-Data Partitioning; in this one I suggest looking at how Storage Pools work internally. More detail on what is new in cDOT 8.3 is available here.

StoragePools is similar to Root-Data Partitioning: it also relies on partitioning, and it provides a new way to distribute the SSD cache among hybrid aggregates.

Hybrid aggregates


The StoragePool technology was developed specifically for hybrid aggregates, to distribute the SSD cache among them more rationally. For example, if your system has only 4 SSDs installed and you want to build a cache for 2, 3 or even 4 aggregates, ADP comes to the rescue.

So, to start you need to create a StoragePool and add a set of SSD disks to it.
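A minimal sketch of doing this from the clustershell; the pool name sp1 is a placeholder, and the exact parameters should be verified against your ONTAP version:

# create a storage pool from four spare SSDs
storage pool create -storage-pool sp1 -disk-count 4
# verify the pool
storage pool show -storage-pool sp1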

[Figure: Data ONTAP 8.3 ADP: FlashPool StoragePools]

All disks in a StoragePool are split into four equal partitions. This is not configurable anywhere; the system always splits them into four equal parts. There can be several StoragePools. When a StoragePool is created, the partitions are by default divided equally between the two nodes of the HA pair, but this can be changed.

The sets made up of the first (P1), second (P2), third (P3) and fourth (P4) partitions of the StoragePool's disks are called Allocation Units (AU1, AU2, AU3 and AU4 respectively).
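To see how the partitions map to allocation units, and to change the default split between the HA partners, something like the following should work (sp1 and the node names are placeholders; the reassign syntax is my assumption, check it against the documentation):

# list the disks in the pool and their partitions
storage pool show-disks -storage-pool sp1
# show how many allocation units each node owns
storage pool show-available-capacity
# hand one allocation unit over to the HA partner
storage pool reassign -storage-pool sp1 -from-node node1 -to-node node2 -allocation-units 1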

[Figure: Data ONTAP 8.3 ADP: FlashPool StoragePools]

  • Now we can build separate RAID groups (RG) out of Allocation Units (AU).
  • Allocation Units are created only from SSDs.
  • A RAID group that already contains an AU can consist only of AUs.
  • Several RAID groups consisting of AUs can coexist in one aggregate or live in several different aggregates, and the aggregates can live on the first and/or the second node (see the sketch after this list).
  • An AU is used in its entirety when creating a RAID group.
  • You can use any number of the available AUs, or none at all.
  • As long as the RAID group has no more than 14 disks, it can be converted on the fly from RAID4 to RAID-DP and back.
  • As an exception to the rule, SSDs can be added as a cache to an aggregate consisting of HDDs (a hybrid aggregate, also known as FlashPool). As a rule, NetApp does not allow mixing heterogeneous disks in one aggregate.
  • As an exception to the rule, SSD RAID groups are allowed to have a RAID protection type (RAID4 or RAID-DP) different from the one configured for the HDDs. As a rule, all RAID groups must have the same protection type. In our case the recommendation works like this: all RAID groups consisting of HDDs must have the same protection type; RAID groups consisting of SSDs/AUs may have a protection type different from the HDD groups, but all SSD/AU RAID groups must share one protection type.
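As an illustration of handing out the cache in portions, a hedged sketch that gives one AU each to two hybrid aggregates, following the same command syntax as the conversion example later in the article (the aggregate and pool names are placeholders):

# mark both aggregates as hybrid (Flash Pool) aggregates
storage aggregate modify -aggregate aggr1 -hybrid-enabled true
storage aggregate modify -aggregate aggr2 -hybrid-enabled true
# give each aggregate one allocation unit from the same pool
storage aggregate add aggr1 -storage-pool sp1 -allocation-units 1
storage aggregate add aggr2 -storage-pool sp1 -allocation-units 1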

[Figure: Data ONTAP 8.3 ADP: FlashPool StoragePools]

Recommendations on RAID and spare disks


If you have only 4 disks:

  • You can build RAID4 (and lose 1 disk to parity) or RAID-DP (and lose 2 disks to parity).
  • Since there are so few disks, RAID-DP is allowed to go without a spare disk.
  • RAID4 requires one spare disk.
  • A hot spare disk is always strongly recommended in all configurations.
  • When choosing between RAID4 (with a spare disk) and RAID-DP (without a spare disk), NetApp prefers RAID4.

The preference for RAID4 with a small number of SSDs comes down to several reasons:

  1. SSDs are in fact more reliable than ordinary disks, so with a small number of disks RAID4 on SSD is more reliable than the same RAID on HDD.
  2. SSDs rebuild an order of magnitude faster, which makes a second disk failing during reconstruction far less likely than with HDD.
  3. Since SSDs have a limited number of write cycles, the spare disk will sit completely unused (and will not burn through its write cycles).
  4. If the number of SSDs exceeds 7 disks, it is recommended to use RAID-DP (converting the group to it; a sketch of that conversion follows this list).
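By analogy with the RAID4 conversion example shown later in the article, converting the SSD cache RAID group back to RAID-DP should look like this (aggr_name is a placeholder):

storage aggregate modify -aggregate aggr_name -raidtype raid_dp -disktype SSD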

In production systems with four SSDs, RAID4 with a spare disk is typically used (we lose two disks out of four). It is also possible to build RAID-DP with a spare disk on four disks and lose 3 disks out of 4.

It is important to note that if any single RAID4 or RAID-DP group in an aggregate fails (be it SSD, AU or HDD), the whole aggregate goes into Degraded status and offline. This is why NetApp always recommends using spare disks and a protection level for your RAID groups that meets your expectations.
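To verify that spares are actually in place before and after such operations, the standard spare listing can be consulted:

storage aggregate show-spare-disks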

Example of converting RAID-DP to RAID4 for the SSD cache (AU):
# enable hybrid (Flash Pool) mode on the aggregate
storage aggregate modify -aggregate aggr_name -hybrid-enabled true
# check how many allocation units are available
storage pool show-available-capacity
# add SSD cache to the aggregate from the storage pool
storage aggregate add aggr_name -storage-pool sp_name -allocation-units number_of_units
# convert the SSD cache RAID group to RAID4
storage aggregate modify -aggregate aggr_name -raidtype raid4 -disktype SSD

storage aggregate show-status test

Aggregate test (online, mixed_raid_type, hybrid) (block checksums)
  Plex /test/plex0 (online, normal, active, pool0)
    RAID Group /test/plex0/rg0 (normal, block checksums, raid-dp)
                                                              Usable Physical
     Position Disk                        Pool Type     RPM     Size     Size Status
     -------- --------------------------- ---- ----- ------ -------- -------- ----------
     dparity  1.2.3                        0   BSAS    7200  827.7GB  828.0GB (normal)
     parity   1.2.4                        0   BSAS    7200  827.7GB  828.0GB (normal)
     data     1.2.5                        0   BSAS    7200  827.7GB  828.0GB (normal)
     data     1.2.6                        0   BSAS    7200  827.7GB  828.0GB (normal)
     data     1.2.8                        0   BSAS    7200  827.7GB  828.0GB (normal)



    RAID Group /test/plex0/rg1 (normal, block checksums, raid4)
                                                              Usable Physical
     Position Disk                        Pool Type     RPM     Size     Size Status
     -------- --------------------------- ---- ----- ------ -------- -------- ----------
     parity   1.3.3                        0   SSD        -  82.59GB  82.81GB (normal)
     data     1.4.0                        0   SSD        -  82.59GB  82.81GB (normal)
     data     1.4.1                        0   SSD        -  82.59GB  82.81GB (normal)
     data     1.4.2                        0   SSD        -  82.59GB  82.81GB (normal)



Advanced Workload Analyzer (AWA)


Before buying SSDs for FlashPool you can estimate how many disks of what capacity would improve throughput and response time, based on the number of cache hits you would have had if a cache were present.
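AWA is started from the nodeshell at the advanced privilege level; a hedged sketch of the sequence, assuming an aggregate named aggr1 (verify the exact syntax for your ONTAP version):

system node run -node local
priv set advanced
wafl awa start aggr1
(let AWA gather statistics under the real workload, then:)
wafl awa print
wafl awa stop aggr1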

Example of AWA output:
### FP AWA Stats ###
Host lada66a Memory 93788 MB
ONTAP Version NetApp Release Rfullsteam_awarc_2662016_1501071654: Wed
Jan 7 17:43:42 PST 2015
Basic Information
Aggregate lada66a_aggr1
Current-time Fri Jan 9 16:14:29 PST 2015
Start-time Fri Jan 9 12:30:16 PST 2015
Total runtime (sec) 13452
Interval length (sec) 600
Total intervals 24
In-core Intervals 1024
Summary of the past 20 intervals
max
------------
Read Throughput (MB/s): 134.059
Write Throughput (MB/s): 1333.279
Cacheable Read (%): 27
Cacheable Write (%): 22
Max Projected Cache Size (GiB): 216.755
Summary Cache Hit Rate vs. Cache Size
Referenced Cache Size (GiB): 216.755
Referenced Interval: ID 22 starting at Fri Jan 9 16:04:38 PST 2015
Size 20% 40% 60% 80% 100%
Read Hit (%) 1 3 6 11 21
Write Hit (%) 5 11 13 14 23
AWA Summary for top 8 volumes
Vol interval len (sec) 19200
In-core volume intervals 8
Volume #1 lada66a_vol8
Summary of the past 32 intervals
max
------------
Read Throughput (MB/s): 1.751
Write Throughput (MB/s): 18.802
Cacheable Read (%): 11
Cacheable Write (%): 16
Max Projected Cache Size (GiB): 29.963
Projected Read Hit (%): 31
Projected Write Hit (%): 16
Volume #2 lada66a_vol7
Summary of the past 32 intervals
max
------------
Read Throughput (MB/s): 1.640
Write Throughput (MB/s): 17.691
Cacheable Read (%): 14
Cacheable Write (%): 13
Max Projected Cache Size (GiB): 28.687
Projected Read Hit (%): 29
Projected Write Hit (%): 13



Shortcomings


If an SSD fails, all RAID groups (AUs) that use its partitions are affected, and as a result so are all aggregates that use those AUs. This is why it is always recommended to have a spare disk; then again, that recommendation has always been there.

Conclusions


StoragePools is one more technology under the hood of cDOT that is built to be as simple as possible to use. Just like Root-Data Partitioning, it is simply there and it simply works. On the whole, the technology copes perfectly with its task: distributing the SSD cache among hybrid aggregates more flexibly. Indirectly it lets you save on SSDs: whereas previously you had to buy at least 4 SSDs for every hybrid aggregate, now one set of 4 SSDs can serve 4 aggregates. Understanding the subtleties of how StoragePool works, you can distribute the cache more rationally by doling it out to aggregates in portions: for example, give an aggregate one AU to start with and add more later if needed.

A FAQ on how ADP works is available on Fieldportal.

Please report errors in the text via private message.
Notes, additions and questions about the article, on the contrary, go in the comments.

This article is a translation of the original post at habrahabr.ru/post/270169/