Visio by DPTPB: NetApp AFF A700s

Feb 10, 2017 update notes for Visio by DPTPB stencil collection, part 1

This became quite a lengthy post, or in fact a series of posts, due to the new NetApp AFF A700s platform, which is a very different “beast” than any other NetApp AFF / FAS platform:

  • First platform with internal disks that does not share a chassis with an existing disk shelf
  • Only 40GbE & 32G Fibre Channel ports and adapters
  • 10GbE connectivity is only available with 40GbE to 4x10GbE breakout cables.

This in turn required additional new shapes for 40GbE / 32G FC switches (part 2) and 40GbE to 4x10GbE breakout cables (part 3). Sample drawings for NetApp AFF A700s are available in the last post of this series (part 4).

40GbE / 32G FC switch shapes can also be used with other new NetApp platforms: AFF A700, the “big brother” of the AFF A700s, also uses 40GbE Cluster Interconnect by default, and FAS9000 / AFF A700 / FAS8200 / AFF A300 have 40GbE / 32G FC adapter options.

I had to make a few sample cabling diagrams and do some research in order to understand what is required from the new shapes. Some of my findings & links to data sheets / setup instructions are scribbled down below.

As always, if you find an error, please leave a comment and I am more than happy to learn from my mistakes and make any corrections required.

General NetApp AFF A700s Notes

NetApp AFF A700s is a 4U single-chassis All-Flash Array with up to 24 internal SSD drives. It has the same number of CPU cores and amount of RAM as the standard NetApp AFF A700, but uses less rack space, hence the letter “s” in the product name, “s” for slim.

NetApp has published quite impressive performance figures for the AFF A700s:

  • 2.4 million IOPS at 0.69 ms latency (with a 12-node AFF A700s cluster)
  • SPC-1 results
  • A top-three result among vendors publishing SPC-1 results

It is also more eco-friendly than the standard AFF A700:

  • Uses less power and generates less heat (when configured with the same number of SSDs)

However, there is no free lunch, and the smaller form factor brings some limitations:

  • Fewer PCIe slots and IO ports than the standard AFF A700
    • Only 40GbE, 32G FC and 12G SAS adapters are supported
    • Can use 40GbE-to-4x10GbE breakout cables for 10GbE connectivity
  • No initiator mode for FC ports
    • = no foreign LUN import
    • = no FC tape backup
    • might change with future ONTAP releases if/when initiator mode support is added for the dual-port 32G FC adapter
  • No FCoE
    • not really a problem
    • FCoE was a nice idea, but rarely used in the field
  • No MetroCluster support
    • Not feasible with controllers equipped with internal disks
    • MetroCluster FCVI adapter not supported with AFF A700s
    • if MetroCluster is required, use AFF A300 or AFF A700
  • Requires 200-240V power
    • Not really a concern in EMEA
    • might be a problem for some of our American friends

To simplify design, configuration and ordering, there are only two configurations available:

  • AFF A700s Ethernet
    • Uses 40GbE for Cluster Interconnect and Host-Side connections
      • 4x 40GbE ports on motherboard
      • 1x dual port 40GbE adapter in slot 5
  • AFF A700s Unified
    • Additional 32G FC dual port adapters in slots 2 & 3
  • This is a welcome change for making architectural drawings as well
    • No more complex adapter slot assignment rules to follow
    • Can make just two Visio shapes with pre-populated adapter slots
  • Slot 1 is reserved for the NVRAM adapter
    • NVRAM node-to-node mirroring uses the same Mini SAS HD cables as used with 12Gbps SAS stacks
    • Still figuring out if this warrants a new cable shape, maybe a black cable with the same cable-end shape as used with SAS cables
    • for the moment, I won’t draw NVRAM mirroring cabling; using blue SAS cables would just be confusing
  • Slot 4 is reserved for a 12Gbps 4-port SAS adapter
    • For up to 8 external SSD shelves (in two SAS stacks)
    • with QSFP+ connectors
      • Transitional QSFP+ to Mini SAS HD cables required with the new DS224C shelves
      • Old DS2246 shelves are supported with QSFP+ to QSFP+ SAS cables
    • Maximum (1+8) x 24 = 216 SSD drives per HA pair (see the quick check after this list)
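
As a quick sanity check of that drive-count maximum, here is a minimal Python sketch; the shelf and drive counts are the figures quoted above, nothing else is assumed:

    # Maximum SSD count per AFF A700s HA pair:
    # the internal drive bays count as one "shelf" of 24 drives,
    # plus up to 8 external SSD shelves behind the slot 4 SAS adapter.
    INTERNAL_SHELF_EQUIVALENT = 1
    MAX_EXTERNAL_SHELVES = 8      # in two SAS stacks
    DRIVES_PER_SHELF = 24

    max_drives = (INTERNAL_SHELF_EQUIVALENT + MAX_EXTERNAL_SHELVES) * DRIVES_PER_SHELF
    print(max_drives)  # 216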

AFF A700s Systems Installation and Setup Instructions

A Few Examples of AFF A700s Cabling

AFF A700s Ethernet, 40GbE Cluster Interconnect & Host networks


  • 12Gbps SAS cabling (dark blue for Node1 & light blue for Node2)
    • 4-port SAS adapter in slot 4
  • 1GbE Management cabling (red)
    • e0M & BMC ports
  • 2 x 40GbE Cluster Interconnect cabling with DAC cables (black)
    • No Cluster Interconnect switch
    • ports e0a & e0f
    • NetApp branded 40GbE DAC cables
    • Can also use fiber instead of DAC
    • Ports using different ASICs
  • 4 x 40GbE host network cabling with fiber (orange)
    • ports e0e, e0j, e5a & e5b
    • Can also be DAC; Cisco-branded DAC required for host-side connections

AFF A700s Unified, 40 GbE Cluster Interconnect & Host networks + 32 Gb FC host network


  • Otherwise the same as above, but with 32Gb Fibre Channel connectivity
    • 2 x Dual port 32Gb target mode FC adapters (per controller)
      • orange cables with square ends
    • adapters in slots 2 & 3

AFF A700s Ethernet, 40 GbE Cluster Interconnect & 10GbE Host Network


  • 2 x 40GbE Cluster Interconnect cabling with DAC cables (black)
    • No Cluster Interconnect switch
    • both ports now behind a single ASIC: ports e0a & e0e
    • Can also use fiber instead of DAC
    • NetApp branded 40GbE DAC cables
  • 8 x 10GbE Host Network
    • 40GbE to 4x10GbE breakout cables
      • Cisco branded
      • Only Cisco DAC supported with host-side connections
      • ports e0f, e0g, e0h, e0i & e5a, e5b, e5c, e5d (naming pattern sketched after this list)
    • Can also use fiber breakout cables instead of DAC
  • Fewer ports available for connectivity
    • 40GbE ports e0j & e5b disabled
    • a limitation of the Intel ASIC design when breakout cables are used
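
The breakout port naming visible in the list above follows a simple pattern: each 40GbE port that is broken out shows up as four 10GbE ports with consecutive letters, while one of the remaining physical 40GbE ports is disabled. A minimal Python sketch of that naming pattern (the port names come from the diagram above, the helper itself is purely illustrative):

    def breakout_ports(parent):
        """Return the four 10GbE port names a 40GbE port presents in
        4x10GbE breakout mode, e.g. 'e0f' -> ['e0f', 'e0g', 'e0h', 'e0i']."""
        prefix, letter = parent[:-1], parent[-1]
        return [prefix + chr(ord(letter) + i) for i in range(4)]

    print(breakout_ports("e0f"))  # ['e0f', 'e0g', 'e0h', 'e0i']
    print(breakout_ports("e5a"))  # ['e5a', 'e5b', 'e5c', 'e5d']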

AFF A700s Ethernet, 10 GbE Cluster Interconnect & 10GbE Host Network


  • 40GbE Cluster Interconnect is recommended to get the most performance out of the box
  • 10GbE Cluster Interconnect is supported so that an AFF A700s can be connected to an existing cluster with a 10GbE Cluster Interconnect network
  • Even fewer ports / less bandwidth available for connectivity
    • 40GbE ports e0e,e0j & e5b disabled
  • 8 x 10GbE towards host network
  • 4 x 10GbE towards Cluster Interconnect Network
  • Some breakout cables split between Cluster Interconnect & Host Networks
    • The 10GbE Cluster Interconnect switch (CN1610) uses NetApp-branded DAC cables by default
    • While the host side should use Cisco-branded DAC cables
    • -> Only fiber breakout connections would work in this use case?

NetApp AFF A700s Shapes

Normal front and rear view shapes. Since there are internal disks, the rear shape follows the same logic as other controller models with internal disks, like FAS2650 / AFF A200: available disks are listed in the Visio document master shapesheet, and these lists are referenced by the individual shapes. There are separate shapes for the “Ethernet” and “Unified” variants.
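
For anyone who wants to poke at the stencil programmatically, here is a minimal Python / pywin32 sketch that lists the masters in a stencil and reads a user-defined cell from the document ShapeSheet. It assumes Visio and pywin32 on Windows; the stencil path and the cell name "User.DiskList" are placeholders for illustration, not the actual names used in the DPTPB shapes.

    # Minimal sketch: open a Visio stencil via COM, list its masters and read a
    # user-defined cell from the document ShapeSheet ("TheDoc").
    from win32com.client import gencache

    visio = gencache.EnsureDispatch("Visio.Application")
    visio.Visible = False

    stencil = visio.Documents.Open(r"C:\path\to\DPTPB_NetApp.vss")  # placeholder path

    # Every master in the stencil, e.g. front and rear views of each model
    for master in stencil.Masters:
        print(master.Name)

    # User-defined cells on the document ShapeSheet can hold shared lists,
    # such as available disk types referenced by individual shapes.
    doc_sheet = stencil.DocumentSheet
    if doc_sheet.CellExistsU("User.DiskList", 0):   # hypothetical cell name
        print(doc_sheet.CellsU("User.DiskList").ResultStr(""))

    stencil.Close()
    visio.Quit()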

Picture: AFF A700s front view shape


  • In the stencil there are separate front view shapes for the “Eth” and “Uni” variants. There is no difference between the variants; it is just easier to organize and add shapes in pairs.

Picture: AFF A700s Ethernet rear view shape


  • Pre-populated with a 4-port SAS adapter in slot 4 and a 2-port 40GbE adapter in slot 5

Picture: AFF A700s Unified rear view shape


  • Same basic shape as the “Ethernet” variant; the only difference is the 32G FC adapters in slots 2 & 3

 

Call-to-Action

Please visit the Downloads page for the updated DPTPB stencil package and sample drawings, or see the “Visio by DPTPB” manual pages for more info.
