NetApp AFF and Advanced Drive Partition v2, part 1

NetApp has announced a new version of its storage operating system, Ontap 9. One of the improved features is Advanced Drive Partitioning v2 (ADPv2).

With ADP you can slice SSDs or HDDs into two partitions, one for root aggregates and one for data aggregates. This means that you don’t have to reserve separate drives for the root aggregates; instead you can use a small “slice” of each disk for the root aggregate and the remaining “slice” for data aggregates. With ADP the overhead related to root aggregates is smaller, and you get more usable space for data.

ADP is supported in three use cases:

  1. Entry-level FAS systems, the FAS2500 series. These usually ship with only a few disks, and with the traditional scheme of separate disks for root aggregates the ratio of usable to marketing capacity was low
  2. Flashpool SSD drives can be put into a storage pool with ADP. Parity and spare drives can be shared, so the usable cache capacity is higher
  3. All Flash FAS (AFF). Similar usage as with the entry-level FAS systems

In this blog entry I will concentrate on AFF and ADPv2. With ADPv1 there was a slight hiccup with entry-level AFF configurations. Two root aggregates are needed in an HA configuration, and the required usable size per root aggregate is around 390 GiB. The smallest AFF ships with 12 x 400 GB SSD drives. The smaller the disk size and the smaller the number of disks, the higher the percentage of disk space that goes to the root aggregate slices. Especially with 400 GB drives there wasn’t much space left for data aggregates after the root aggregates had been built.
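To illustrate why small drives hurt so much, here is a rough back-of-the-envelope sketch in Python. The ~390 GiB root aggregate figure is from this post; the per-disk layout and the decision to ignore RAID parity overhead are my simplifying assumptions, so treat the result as a lower bound on the real overhead, not an exact ONTAP number.

```python
# Rough illustration: what fraction of total raw capacity goes to the
# two root aggregates in an HA pair, as a function of drive size.
# Assumptions (not exact ONTAP sizing): ~390 GiB usable per root
# aggregate, 12 drives, and RAID parity overhead ignored.

GIB_PER_GB = 1000**3 / 1024**3  # decimal GB -> binary GiB (~0.931)

def root_overhead_pct(disk_gb: float, n_disks: int = 12,
                      root_usable_gib: float = 390.0) -> float:
    """Percentage of total raw capacity consumed by two root aggregates."""
    total_gib = disk_gb * n_disks * GIB_PER_GB
    # Two HA nodes -> two root aggregates.
    return 2 * root_usable_gib / total_gib * 100

for size_gb in (400, 960, 3840):
    print(f"{size_gb} GB drives: {root_overhead_pct(size_gb):.1f}% lost to root")
```

With 400 GB drives well over a sixth of the raw capacity disappears into root aggregates before any parity or spare slices are counted, which is exactly the squeeze described above; with bigger drives the overhead quickly becomes negligible.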

With 12 SSD disks you had to make a decision between maximizing capacity and maximizing performance.

Either maximize capacity by using only one data aggregate (and fewer parity and spare drive slices). The downside of that decision is that you leave about 50% of the performance potential unused: only one storage controller is serving data, while the other controller has only a root aggregate and is waiting for a failover event to take over the one and only data aggregate. Out of the marketing capacity of 4.8 TB (12 x 400 GB), you get roughly 1.99 TB of usable space in the data aggregate, or 41.44% of the marketing capacity.

Example of Asymmetric configuration with 12 x 400GB SSD drives


The other option is to maximize performance and build two data aggregates on top of the two root aggregates. With two data aggregates both storage controllers are actively serving data, and you have more CPU and memory to produce performance. This decision comes with a penalty in usable capacity, as you have to use more disk slices for parity and spares.

Example of Symmetric configuration with 12x400GB SSD drives


With this symmetric configuration you only get 1.33 TB of usable space for data, compared to 1.99 TB with the asymmetric configuration. That is a measly 27.62% of the marketing raw capacity.
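As a sanity check, a few lines of Python reproduce the arithmetic behind the two layouts. The usable-capacity figures are the ones quoted in this post; the small differences from the quoted percentages come from rounding in those inputs.

```python
# Compare the two ADPv1 layouts for 12 x 400 GB SSD drives,
# using the usable-capacity figures quoted in this post.
MARKETING_TB = 12 * 0.4  # 4.8 TB raw "marketing" capacity

layouts = {
    "asymmetric (1 data aggregate)": 1.99,  # usable TB
    "symmetric (2 data aggregates)": 1.33,  # usable TB
}

for name, usable_tb in layouts.items():
    pct = usable_tb / MARKETING_TB * 100
    print(f"{name}: {usable_tb} TB usable ({pct:.1f}% of marketing capacity)")
```

The symmetric layout buys active/active performance at the cost of roughly a third of the already modest usable capacity.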

To get the most out of their valuable investment, most customers using AFF with only 12 SSD drives chose the asymmetric version to maximize their usable capacity.

With Ontap 9 and ADPv2 the situation with small AFF configurations is much better. Instead of slicing a disk into two partitions, ADPv2 slices it into three: one for the root aggregates, with the remaining space split into two equal-sized partitions for data aggregates. This means that you no longer have to choose between maximizing capacity and maximizing performance. With ADPv2 you get the same usable capacity as with the ADPv1 asymmetric configuration (only one controller serving data), while having two data aggregates, with both controllers serving data and producing performance.

Example of ADPv2 with 12x400GB SSD

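The difference between the two schemes can be sketched in a few lines of Python. The 55 GB root-slice size here is a made-up placeholder purely for illustration; real ONTAP sizes the partitions itself and the slice sizes vary by configuration.

```python
# Hypothetical sketch of how ADPv1 and ADPv2 slice a single drive.
# The root-slice size is an assumption for illustration only; real
# ONTAP determines partition sizes itself.

def slice_disk(disk_gb: float, scheme: str, root_slice_gb: float = 55.0):
    """Return the partition layout of one drive under the given scheme."""
    data_gb = disk_gb - root_slice_gb
    if scheme == "root-data":        # ADPv1: one root + one data partition
        return [("root", root_slice_gb), ("data", data_gb)]
    if scheme == "root-data-data":   # ADPv2: one root + two equal data partitions
        return [("root", root_slice_gb),
                ("data1", data_gb / 2), ("data2", data_gb / 2)]
    raise ValueError(f"unknown scheme: {scheme}")

print(slice_disk(400, "root-data"))
print(slice_disk(400, "root-data-data"))
```

The key point is that the total data capacity per drive is the same under both schemes; ADPv2 just splits it in two so each controller can own half and build its own data aggregate.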

In part 2 I will cover 12-disk AFF setups with larger SSD drives (3.8 TB and 15.3 TB).

7 thoughts on “NetApp AFF and Advanced Drive Partition v2, part 1”

  1. Nice article, but I need to convert one FAS25xx with a fresh install to ADPv2, and when the wipe process finishes it only configures two partitions.


    1. Thanks for the first ever comment to my blog 🙂

      I haven’t played around with ADPv2 on spinning disks; it might be that the new three-partition root-root-data scheme is only used with AFF configurations, and FAS25xx with spinning disks reverts back to the original two-partition root-data scheme.


      1. Hi there, I’m very interested in ADPv2 to gain some space, but I can’t find any documentation about it.
        I always get errors when I try to remove the partitions, because the root partitions are still there. Do you have any idea?


  2. Mistyped my previous answer: it should say “root-data-data”, not “root-root-data”. First of all, this feature requires a minimum of Ontap 9. Furthermore, it is only available on AFF, or on FAS with only SSD drives. The minimum size for an SSD is 400GB. If these “rules” are not met, ADPv2 reverts back to the method used with the Ontap 8.3 version (only two partitions per drive, root-data).

    In order to use the new slicing scheme you basically have to wipe the system and do a reinstall; instructions for that can be found on the NetApp support site.


    1. Thank you, I thought that ADPv2 was also possible on entry-level mechanical-disk models. This explains why it works on AFF.


  3. Nice blog post! Because an AFF HA pair shares the same SSDs in an ADPv2 setup, how does SSD firmware upgrade work in Ontap 9?


  4. I haven’t done any disk firmware updates for systems running ADPv2, but I would imagine that it works the same way as on systems with Flashpool sharing SSD drives between controllers. If I remember correctly, either controller can own the physical disks, and the owning controller is probably in charge of updating the firmware.

