Developers Club geek daily blog

FlexPod Express: UCS-Managed configuration

1 year, 10 months ago
In addition to the three previous Small/Medium/Large configurations of the FlexPod Express architecture, one more has appeared, called Cisco UCS-Managed. This article is about this new configuration. FlexPod Express and FlexPod Datacenter designs are divided into two main connection types: direct connection of the storage system to the servers (no switch between the storage and the servers) or connection through a switch. As a reminder, Fabric Interconnect is not a switch but part of the UCS server domain.

Several important differences between the new configuration and the previous three should be noted.
  • First, Fabric Interconnect appeared in the architecture, albeit as internal devices installed in the UCS Mini chassis together with the blades.
  • Second, it became possible to connect the storage system directly to the Fabric Interconnect; previously a switch was required between the servers and the storage, and that switch had to be a Nexus (3048/3500/9300).
  • Third, if we have a FlexPod Express Cisco UCS-Managed configuration with direct connection, the switch used to connect end users does not have to be a Nexus. It can now be any standard switch that supports fault tolerance along the lines of Multi-Chassis EtherChannel. But if a switch is needed between UCS and FAS, that switch must be a Nexus.

Read more »


NetApp announces the acquisition of SolidFire

1 year, 10 months ago
NetApp, Inc. has signed an agreement to acquire SolidFire, Inc. for 870 million US dollars in cash.

Founded in 2010, SolidFire is a vendor of All-Flash storage for next-generation data centers, where a "set up simple scaling and management once and forget it" approach delivers performance in a shared multi-tenancy environment.



With SolidFire, NetApp will have a new offering that covers each of the three segments of the All-Flash storage market:

  • For customers with traditional Enterprise infrastructures, the NetApp All Flash FAS (AFF) product line, which provides Enterprise-level features and capabilities.
  • For application owners, the NetApp EF series, products showing extremely high performance with low latency in open SPC-1 testing, as well as "six nines" high availability.
  • For customers with next-generation infrastructures, the SolidFire products with their distributed, self-healing, scale-out architecture, reasonable price, and simple management.


SolidFire is active in the cloud community and provides storage management integration via a VMware vCenter plug-in, VAAI, SRM, an OpenStack driver, a CloudStack plug-in, a Microsoft VSS provider, PowerShell, VMware SRM/SRA, and other cloud solutions.

Over time, the SolidFire products will be integrated into the NetApp Data Fabric paradigm, providing seamless management of cloud resources as well as flash and disk arrays.

Read more »


FlexPod DataCenter: Direct-Attached Storage

1 year, 10 months ago
In the previous article I described the "non-FlexPod DC" architecture, which can also be covered by single-source support under the Cisco "Solution Support for Critical Infrastructure" (SSCI) program. Its main feature is that there are no Nexus series switches in it; if you add them, such an architecture can become a full-fledged FlexPod DataCenter.

This article is about the new network design for FlexPod DataCenter with direct attachment of the NetApp storage to the UCS domain. The difference from the standard FlexPod DataCenter architecture is that the Nexus switches are located not between UCS and NetApp but "above" UCS.

Even though NetApp FAS storage could be connected directly to the Fabric Interconnect (FI) before, the FlexPod DataCenter architecture did not officially provide for such a design. Now the direct-attached design is supported as a FlexPod DataCenter architecture.

The general design of the FC and FCoE network with direct attachment
The switching schemes described here are shown in the image above.
Simultaneous connection over both FC and FCoE is shown for two reasons:
  1. It really can be done this way, and it will work.
  2. To show that either FC and/or FCoE can be used.

The Ethernet connection between the two NetApp FAS controllers is shown for two reasons:
  1. To show that these are two nodes of one NetApp system (if there were more nodes, cluster switches would certainly appear in the picture).
  2. The external cluster link is a mandatory attribute of the Clustered Data ONTAP operating system.

The FC link from the FI to the Nexus switch is shown for two reasons:
  1. For the future, when we need to move NetApp behind the Nexus switches while the FI retains access to the LUNs. The scheme then becomes more scalable, and more UCS domains can be added.
  2. To provide storage resources to other servers that are not part of the UCS domain, for example UCS Rack servers (the UCS C series) not connected to the FI, or servers from other vendors.



For SAN traffic with direct attachment, whether over the iSCSI protocol or the FCP protocol, the multipathing built into these protocols means there are no problems setting up fault tolerance and load balancing across the links.
For NAS protocols with direct attachment (NFSv2/NFSv3 and CIFS v1/v2), however, since these protocols lack load balancing and multipathing, that function has to be performed by other, underlying protocols such as LACP and vPC (the FI does not support vPC), so fault tolerance for the Ethernet network has to be built in some other way. For example, fault tolerance can be implemented at the level of a virtual switch (which may cause performance problems for such a switch) or by means of active-passive failover of the aggregated network link without LACP (which will not balance traffic across all available links); for this, the aggregated ifgrp link on the storage side has to be configured in single-mode, as shown in the sketch below.
The direct-attachment question is less acute for NFSv4 and CIFS (SMB) 3.0, since both protocols have finally acquired a certain form of multipathing, but this requires support for these protocols on both the client side and the storage side (all FAS systems with cDOT support NFSv4 and SMB 3.0).
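For illustration, here is a minimal sketch of creating a single-mode interface group on the storage side in clustered Data ONTAP; the node, ifgrp, and port names are hypothetical:

cluster1::> network port ifgrp create -node fas-01 -ifgrp a0a -distr-func ip -mode singlemode
cluster1::> network port ifgrp add-port -node fas-01 -ifgrp a0a -port e0e
cluster1::> network port ifgrp add-port -node fas-01 -ifgrp a0a -port e0f
cluster1::> network port ifgrp show -node fas-01

In single-mode only one member port carries traffic at a time and the rest are standby, so this gives failover but, as noted above, no load balancing.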
To configure FCoE and CIFS/NFS traffic over one link:
  • First, Cisco UCS firmware version 2.1 or above is required.
  • Second, storage with 10Gb CNA/UTA ports is required.

Now for the settings.
On the NetApp storage, the ports must be switched to CNA mode (CNA ports are required; ordinary 1/10 Gbps Ethernet ports do not support this) using the ucadmin command on the storage (a storage reboot will be required). The system will then display separate "virtual" Ethernet ports and "virtual" FC ports, even though one physical port is used for each such pair of one "virtual" Ethernet port and one "virtual" FC port. These ports are configured individually, just like ordinary physical ports.
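As a hedged sketch of this step in the clustershell (the node and adapter names are hypothetical; ucadmin is the nodeshell name for the same operation):

cluster1::> system node hardware unified-connect show -node fas-01
cluster1::> system node hardware unified-connect modify -node fas-01 -adapter 0e -mode cna -type target
cluster1::> system node hardware unified-connect modify -node fas-01 -adapter 0f -mode cna -type target

After the reboot mentioned above, each converted port shows up both as a 10GbE port (for example e0e) and as a virtual FC/FCoE target port.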
On the FI, the FC mode must be set to "Switching mode" in the Fabric A/B settings on the Equipment tab. This setting will require a reboot of the FI.
After the FI reboots, on the Equipment tab the converged ports must be switched to Appliance Port mode; after a few seconds the port will come online. Then reconfigure the port into FCoE Storage Port mode; on the right panel you will see the port type Unified Storage. It will now be possible to select a VSAN and a VLAN for such a port. An important point: the VSAN created earlier must have "FC zoning" enabled on the FI so that the FI performs the zoning.

Zoning setup on the FI:
SAN-> Storage Cloud-> Fabric X-> VSANs-> Create "NetApp-VSAN-600"->
VSAN ID: 600
FCoE VLAN ID: 3402
FC Zoning Settings: FC Zoning-> Enabled

SAN-> Policies-> vHBA Templates-> Create "vHBA-T1"-> VSAN "NetApp-VSAN-600"

SAN-> Policies-> Storage Connectivity Policies-> Create "My-NetApp-Connectivity"-> Zoning Type-> Single Initiator Single Target (or Single Initiator Multiple Targets if needed)-> Create->
FC Target Endpoint: "NetApp LIF's WWPN" (begins with 20:) — see the sketch after these steps for how to list the LIF WWPNs

SAN-> Policies-> SAN Connectivity Policies-> Create "NetApp-Connectivity-Pol1"-> vHBA Initiator Group->
Create "iGroup1"-> Select vHBA Initiators "vHBA-T1"
Select Storage Connectivity Policy: "My-NetApp-Connectivity"

When creating a Server Profile, use the created policies and the vHBA template.
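To find the WWPNs of the NetApp LIFs to enter as FC Target Endpoints in the Storage Connectivity Policy above, something along these lines can be run on the storage (the SVM name is hypothetical):

cluster1::> vserver fcp interface show -vserver svm1

The WWPNs listed there are the target addresses beginning with 20: referred to above.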

Read more »


An example of an express storage performance review using the free Mitrend service

1 year, 10 months ago


Investigating performance problems and searching for solutions is something many people know first-hand. There are many tools for visualizing and parsing I/O statistics. Now automated, intelligent analysis based on Internet services is gaining momentum.

In this post I want to share an example of analyzing a storage performance problem with the help of one such service (Mitrend) and to suggest ways of solving it. In my opinion, this example is an interesting case study which, I think, can be useful to a wide range of IT readers.

So, a customer asked EMC to look at the performance of a hybrid VNX5500 storage system deployed in its SAN. The storage is connected to VMware servers on which a bit of everything runs: from infrastructure tasks to file shares and DB servers. The reason for this express assessment was complaints about the applications deployed on the servers connected to the VNX hanging intermittently.

Read more »


Why upgrade to Data ONTAP Cluster-Mode?

1 year, 11 months ago
As I already wrote in previous posts, Data ONTAP 8.3.x is one of the most significant releases of the operating system for NetApp FAS series storage systems.

In this article I will list the most significant, from my point of view, new functions of NetApp storage systems in the current release of Clustered Data ONTAP. By tradition, a car example: imagine you have a Tesla, you updated the firmware and got an autopilot with auto-parking for free, even though it was not there before. Nice, isn't it? So the most important arguments for upgrading your system to Cluster-Mode are protecting your investment and the opportunity to get the most modern functionality on old hardware:

  • Inline detection (deduplication) of zeroes on the fly, which can be very useful for databases and for provisioning virtual machines.
  • Inline deduplication for FlashPool (and AFF) systems, which helps prolong the service life of SSD drives. The function is available starting with 8.3.2.
  • If you upgrade to VMware vSphere 6, you get vVOL support for both NAS and SAN.
  • Support for NFS 4.1, which is also present in VMware vSphere 6.
  • Support for pNFS, which allows NFS to be parallelized and paths from the client to the file share to be switched without remounting it; supported with RHEL 6.4 and above.
  • Support for SMB (CIFS) 3.0, which works with clients starting from Windows 8 and Windows 2012.
  • Support for closing files and sessions for SMB 3.0 from within Data ONTAP.
  • Support for SMB 3.0 Encryption.
  • SMB Continuous Availability (SMB CA), which gives the ability to switch between paths and storage controllers without breaking the connection, which is very important for SQL/Hyper-V workloads.
  • ODX, which when working with Microsoft SAN/NAS offloads routine tasks, such as filling a data block with a certain pattern, and avoids shuttling redundant data between the host and the storage.
  • Online migration of volumes between aggregates, including to other cluster nodes.
  • Online migration of LUNs between volumes, including to other cluster nodes.
  • Online switchover of aggregates between the nodes of an HA pair.
  • The ability to combine heterogeneous systems in one cluster. Upgrades are performed without stopping data access; thanks to this capability NetApp calls the cluster "Immortal". During a cluster upgrade its nodes can run different cDOT versions. I cannot miss the opportunity to mention that most competitors, if they have clustering at all, are, first, very limited in the number of nodes and, second, require all cluster nodes to be identical (a homogeneous cluster).
  • ADP StoragePool — a technology for more rational allocation of SSDs for caching (hybrid aggregates). For example, you have only 4 SSDs and you want 2, 3, or 4 aggregates to benefit from SSD caching.
  • ADP Root-Data Partitioning, which allows you to do without dedicated root aggregates on FAS22XX/25XX and AFF8XXX systems.
  • Space Reclamation for SAN — returns deleted blocks to the storage (see the sketch after this list). As a reminder, without SCSI3 UNMAP, even if data blocks were deleted on your LUN, on a thin LUN these blocks were still marked as used in the storage and occupied disk space, and a thin LUN could previously only grow, simply because there was no feedback mechanism between the host and the storage. For Space Reclamation support the hosts must run ESXi 5.1 or above, Windows 2012 or above, or RHEL 6.2 or above.
  • Adaptive compression — improves the read speed of compressed data.
  • Improvements to FlexClone for files and LUNs. It is now possible to set a deletion policy for clones of files or LUNs (useful, for example, with vVOL).
  • The ability to authenticate storage administrators via Active Directory (the CIFS license is not required).
  • Support for Kerberos 5: 128-bit AES and 256-bit AES encryption, IPv6 support.
  • Support for SVM DR (based on SnapMirror), i.e. the ability to replicate an entire SVM to a backup site. An important point is the ability, at the stage of setting up the replication relationship, to pre-assign new network addresses (Identity discard mode), since backup sites often use network address ranges different from the main site's. The Identity discard function will be very convenient for smaller companies that cannot afford the equipment and communication channels to stretch an L2 domain from the main site to the backup one. For clients to switch to the new network addresses it is enough to change the DNS records (which can easily be automated with a simple script). The Identity preserve mode, in which all LIF, volume, and LUN settings are kept on the remote site, is also supported.
  • The ability to restore a file or a LUN from a SnapVault backup copy without restoring the whole volume.
  • The ability to integrate the storage with antivirus systems for scanning file shares. Computer Associates, McAfee, Sophos, Symantec, Trend Micro, and Kaspersky are supported.
  • Optimized FlashPool/FlashCache operation: compressed data and large blocks can now be cached (previously neither of these data types made it into the cache).
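As an illustration of the Space Reclamation item above, here is a minimal sketch of enabling space allocation (SCSI UNMAP) on a LUN in clustered Data ONTAP; the SVM, volume, and LUN names are hypothetical, and the LUN normally has to be taken offline briefly to change this setting:

cluster1::> lun offline -vserver svm1 -path /vol/vol1/lun1
cluster1::> lun modify -vserver svm1 -path /vol/vol1/lun1 -space-allocation enabled
cluster1::> lun online -vserver svm1 -path /vol/vol1/lun1
cluster1::> lun show -vserver svm1 -path /vol/vol1/lun1 -fields space-allocation

The host then still has to issue UNMAP (for example VMFS unmap on ESXi or automatic TRIM/UNMAP on Windows 2012) for the freed blocks to actually be returned to the storage.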


Read more »


Data ONTAP 8.3 ADP: FlashPool StoragePools

1 year, 11 months ago
The Data ONTAP 8.3 cDOT OS is one of NetApp's biggest releases. One of the key features of the release is the Advanced Drive Partitioning (ADP) technology. In the previous article I looked at using this technology for Root-Data Partitioning; in this one I suggest looking at how StoragePools work internally. More details on what is new in cDOT 8.3 are available here.

StoragePools is similar to Root-Data Partitioning in that it also uses partitioning, providing a new way to distribute SSD cache among hybrid aggregates.

Hybrid aggregate


The StoragePool technology was developed specifically for hybrid aggregates, to distribute the SSD cache among them more rationally. For example, only 4 SSDs are installed in your system, and you want to build a cache for 2, 3, or even 4 aggregates; this is where ADP comes to the rescue.

So, to begin with, you need to create a StoragePool and add the set of SSD disks to it.



All disks in a StoragePool are split into 4 equal parts. This is not configurable anywhere; the system always splits them into 4 equal parts. There can be several StoragePools. When a StoragePool is created, the partitions are by default divided equally between the two nodes of the HA system, but this can be changed.

The sets of the first (P1), second (P2), third (P3), and fourth (P4) partitions of the StoragePool's disks are called Allocation Units (AU1, AU2, AU3, AU4 respectively).
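A minimal sketch of what this looks like in the clustershell, assuming four SSDs with hypothetical disk names and an existing aggregate aggr1 that we want to turn into a hybrid one:

cluster1::> storage pool create -storage-pool sp1 -disk-list 1.0.20,1.0.21,1.0.22,1.0.23
cluster1::> storage pool show-available-capacity
cluster1::> storage aggregate modify -aggregate aggr1 -hybrid-enabled true
cluster1::> storage aggregate add-disks -aggregate aggr1 -storage-pool sp1 -allocation-units 1

Each allocation unit handed to an aggregate consumes one quarter of the pool's SSD capacity, which corresponds to the P1-P4 partitioning described above.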


Read more »


non-FlexPod DC: Direct-Attached Storage, single-source support

1 year, 12 months ago
In one of my articles I described what the FlexPod DC architecture is and what it consists of. The physical components of FlexPod DC are: NetApp FAS series storage, Cisco UCS servers, and Nexus switches. There is a large variety of supported FlexPod DC designs built from these three main components. To get cooperative support from a single source, the corresponding support service is required for each of these components.

What if you have the Cisco SmartNet and NetApp SupportEdge services, but there are no Nexus switches in the architecture, and the storage is connected directly to the UCS Fabric Interconnect?

This is the "non-FlexPod DC" architecture this article is about; it, too, can be supported from a single source under the Cisco "Solution Support for Critical Infrastructure" (SSCI) program.


The general design of a SAN network with direct attachment

Read more »


Removing NetApp storage systems from a cluster

2 years, 1 month ago
Adding a NetApp FAS storage system to a cluster is very simple:
the cluster interconnect ports are connected to the switch and the following command is executed:
cluster setup
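A small hedged sketch of checking the result afterwards (the cluster name is just an example):

cluster1::> cluster show
cluster1::> network port show -ipspace Cluster

The first command lists the nodes of the cluster with their health and eligibility; the second confirms that the cluster interconnect ports are up.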


But how do you take a node out of the cluster?

Read more »


Inexpensive 10GbE infrastructure for clusters

2 years, 2 months ago
At HOSTKEY we regularly face the need to set up 10 Gbit VLANs for virtualization clusters, both our own and our clients'. This technology is needed for communicating with storage, for backup, for database access, and for live migration of virtual machines. The question is always the same: how to do it reliably and at minimum cost?

Until recently, the minimum cost of such a solution was substantial. The smallest 10GbE switch had 24 ports, and the simplest card, an Intel X520, cost 500 dollars. The budget per port came to about 700-1000 dollars, and the entry ticket was very high.

Progress does not stand still: at the beginning of 2015 a new class of 10GbE devices appeared, at an acceptable price, in stock in Moscow and under warranty.
Since we at HOSTKEY regularly build dedicated servers and private clouds on their basis, we want to share our experience.

So, our client has 5 machines in a cluster and needs a 10GbE VLAN: two filers, one backup machine, and a few compute nodes. On gigabit everything is slow, and nobody wants to put teamed quad-port gigabit cards into the machines. 10GbE is needed and the budget is limited. Sounds familiar, doesn't it?

Read more »


Zoning for cluster storage in pictures

2 years, 4 months ago
NetApp FAS storage systems can be combined into a cluster of up to 8 nodes for access over SAN networks and up to 24 nodes for Ethernet networks. Let's look at an example of the zoning setup and the connection scheme for such cluster systems.

The general scheme of connection for SAN and NAS.
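As a hedged illustration (not the exact configuration from the pictures), a single-initiator zone on a Cisco MDS/Nexus fabric might look like this; the VSAN number and all WWPNs are hypothetical placeholders:

zone name Z_ESX01_HBA0_NETAPP vsan 10
  member pwwn 20:00:00:25:b5:01:0a:00
  member pwwn 20:01:00:a0:98:11:22:33
  member pwwn 20:03:00:a0:98:11:22:33
zoneset name ZS_FABRIC_A vsan 10
  member Z_ESX01_HBA0_NETAPP
zoneset activate name ZS_FABRIC_A vsan 10

The important point for a cDOT cluster is that the zone contains the WWPNs of the SVM's FC LIFs on every node that may serve the LUN, not the physical port WWPNs of the controllers.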

Read more »