This tip is a list of the steps I use to set up a Cisco UCS B-Series solution. Setting up a C-Series system with Fabric Interconnects is a nearly identical process, except the steps for installing the chassis are replaced with steps for installing an external rack-mount FEX device such as the Nexus 2232. This tip is a draft work in progress.
Since the steps below detail how to set up your system, I feel the need to remind you that this is NOT an official Cisco web site or resource. This page, as with all information found on this site, http://ciscoquicklinks.com, does NOT take any legal responsibility for the results of performing any of these steps in your environment. The content of this page and site may include errors and inaccuracies, is provided for informational purposes only, and should be considered "use at your own risk". Please refer to http://cisco.com/go/ucs and its related support web sites for "official" and "legally responsible" information.
Most key UCS documentation is found on the documentation roadmap pages on the Cisco web site. These links are also found on the Data Center page at http://ciscoquicklinks.com.
- Cisco UCS B-Series Servers Documentation Roadmap - Links to most of the documentation associated with the B-Series platform
- Cisco UCS C-Series Servers Documentation Roadmap - Links to most of the documentation associated with the C-Series rack server platform.
You will need several things before getting started.
- Physical parts: You need the hardware on hand in order to install it. The UCS Siteprep guide can be found here.
- Proper Power: MOST installations need some form of power work, except in cases where the customer is already familiar with powering blade systems. The UCS Siteprep guide can be found here.
- Cisco offers several power options. B-Series installs use the RP208-30-1P-U-1 PDU and C-Series installs use the RP208-30-1P-U-2 PDU. Specifications can be found here.
- The 5108 chassis requires high voltage power. The chassis has 4x C20 connectors so you need cables with a C19 on one end and a connector for your power source on the other.
- The Fabric Interconnects can use high or low voltage power. Each has 2x C14 connections so you need cables with a C13 on one end and a connector for your power source on the other.
- IP Addresses: The system will need a number of addresses. Ideally all should be on the same network. You will need the subnet masks and gateways that go with all of these.
- Typically you will need 3 IP addresses for the environment + 1 IP address per server + optional: 1 IP address per service profile (1 per server).
- IP addresses for Fabric Interconnects: 3x IP addresses required: 1x for the UCS Manager cluster and 1x for each Fabric Interconnect node.
- IP addresses for Server Hardware: 1x IP address FOR EACH server for physical hardware management including Remote KVM and Remote Media.
- IP addresses for Service Profiles (optional): 1x IP address FOR EACH service profile. Easiest if they are on the same network as the Fabric Interconnects but can be on other networks.
- IP address(es) for NTP server(s)
- IP address(es) for DNS server(s)
- IP address(es) for SMTP server(s). The SMTP server will need to be set up to accept incoming mail from the Fabric Interconnect IPs and the cluster IP address.
- IP address(es) for syslog server(s)
- IP address(es) for SNMP server(s).
- UCS Domain IDs, MAC address scheme, WWN address scheme
- Determine a Domain ID for this pod of UCS servers. I use A1 as the first instance at a site: the letter represents the location and the number represents the instance at that site.
- Ports, optics and/or cables from the Fabric Interconnects to the northbound LAN switch(es).
- 10Gb uplinks: Typically you want 1 or 2 uplinks per Fabric Interconnect. More servers = more uplinks.
- 1Gb uplinks: Typically 4 to 6 per FI at 1Gb. If using 1Gb uplinks you will need GLC-T or GLC-SC transceivers for the Fabric Interconnect ports. More servers = more uplinks.
- List of VLAN(s) to be presented to some or all of the LAN interfaces of the servers
- IP address(es) for the host OS on the blade(s). This is generally on a different subnet than management
- Ports, optics and cables from the Fabric Interconnects to the northbound SAN switch(es). Typically this is 2 per Fabric Interconnect.
- The northbound SAN switch must support NPIV regardless of brand. Verify the software version of the SAN switch and ensure the feature is enabled.
- If Cisco MDS, a list of VSANs that need to be presented to the Cisco UCS blades.
- If boot from SAN: WWPNs for the storage processors
- If boot from SAN: appropriately sized boot LUN(s) for installation
- OS installation media and details
- Software media to install the operating systems
- License keys as needed for installing the operating systems
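The address math from the IP requirements above (3 for the environment, plus 1 per server, plus an optional 1 per service profile) can be sketched as a quick calculator. This is an illustrative helper, not a Cisco tool; the function name is my own.

```python
def ucs_mgmt_ip_count(servers: int, service_profile_ips: bool = False) -> int:
    """Total management IP addresses needed for one UCS domain.

    3 base addresses (UCS Manager cluster IP + one per Fabric Interconnect
    node), plus 1 per server for hardware management (Remote KVM/Media),
    plus optionally 1 per service profile (1 per server).
    """
    base = 3  # cluster IP + FI-A node + FI-B node
    total = base + servers
    if service_profile_ips:
        total += servers  # one service profile per server, per the note above
    return total

print(ucs_mgmt_ip_count(8))        # 3 + 8 = 11
print(ucs_mgmt_ip_count(8, True))  # 3 + 8 + 8 = 19
```

Run it with your planned server count before requesting address space from the network team.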
NOTE: The Fabric Interconnects install with the ports IN THE BACK of the rack. For some models of Fabric Interconnect you will want to make sure you have a path to run power and network cables to the front of the rack. The 6200 series Fabric Interconnect setup instructions can be found here. The 6100 series Fabric Interconnect setup instructions can be found here.
- Install side rails. The fixed side of the rail should be flush with the front of the Fabric Interconnect. The floating rail should be installed in the farthest back set of holes.
- Install rack clips at the top and bottom of the desired Rack Unit. Normally I put them towards the top of the rack. The rack clips and screws are NOT provided with the Fabric Interconnect.
- Install the lower Fabric Interconnect. Tighten the screws most of the way, but leave it loose enough to still move.
- Install the upper Fabric Interconnect. Tighten the screws all the way.
- Tighten the screws on the lower Fabric Interconnect.
- Install the cluster cables between the two Fabric Interconnects.
- Connect both Fabric Interconnects to the management network. Configure the management network.
- Use the provided serial cable to configure the top UCS Fabric Interconnect as the A node in the UCS Manager cluster in console mode.
- Use the provided serial cable to configure the bottom UCS Fabric Interconnect as the B node in console mode.
- Validate the UCS Manager cluster and nodes by pinging the cluster IP and the node IPs across the network.
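The validation step above can be scripted. This is a minimal sketch assuming a Linux/macOS `ping -c` syntax; the function names and the 192.0.2.x addresses in the commented example are hypothetical placeholders, so substitute your own cluster and node addresses.

```python
import subprocess

def ping(ip: str, count: int = 2) -> bool:
    """Return True if the host answers ICMP echo (uses the OS ping command)."""
    result = subprocess.run(
        ["ping", "-c", str(count), ip],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def ucs_mgmt_targets(cluster_ip: str, fi_a_ip: str, fi_b_ip: str) -> list[str]:
    """The three management addresses a healthy UCS cluster should answer on."""
    return [cluster_ip, fi_a_ip, fi_b_ip]

# Example usage (substitute your own addresses):
# for ip in ucs_mgmt_targets("192.0.2.10", "192.0.2.11", "192.0.2.12"):
#     print(ip, "reachable" if ping(ip) else "UNREACHABLE")
```

If the cluster IP answers but a node IP does not, check the management cabling on that Fabric Interconnect before proceeding.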
NOTE: Lift the chassis ONLY by the handles and DO NOT attempt to lift it by yourself. The 5108 chassis installation instructions can be found here
- Install the rack clips into the appropriate locations using the chassis racking template included with the chassis. Generally the lower in the rack the better.
- Install the chassis rails into the rack.
- Optional but recommended: Remove server(s), power supplies and fans from the chassis
- Using at least two people, insert the chassis into the rack.
- Screw the chassis into the rack using the rack clips installed earlier.
- Connect the power cables to the chassis in the rear. BE SURE TO PLUG THEM ALL THE WAY IN.
- Connect 1, 2, 4, or 8 cables from the top Fabric Interconnect to the IO module on the left side of the chassis. Typically I install 2 or 4 cables using the lowest sequential ports on the FIs.
- Connect the same number of cables as above from the bottom Fabric Interconnect to the IO module in the middle of the chassis. Plug them into the same ports as above.
These steps do not have to be performed in the exact order listed; however, some steps depend on others being completed first. The number of possible combinations is huge, so I will describe one method that should work in the order provided.
- Determine the
- Make sure you are using unique MAC addresses on the server data network. For this reason I highly recommend coming up with a Pod Domain ID such as A1 (see the Domain ID discussion above).
- Make sure your object naming conventions are appropriate.
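One way to keep MAC addresses unique across UCS domains is to embed the Pod Domain ID and fabric in each pool block. This sketch assumes the 00:25:B5 prefix that Cisco reserved for UCS MAC pools; the function name and the exact octet layout are my own illustrative convention, not an official scheme.

```python
def mac_block(domain_id: str, fabric: str, size: int = 256) -> list[str]:
    """Generate a MAC pool block embedding domain and fabric identifiers.

    Layout: 00:25:B5 (Cisco-reserved UCS prefix) : domain octet (e.g. 'A1')
    : fabric octet ('0A' for A, '0B' for B) : serial. One serial octet
    allows up to 256 addresses per block.
    """
    if not 1 <= size <= 256:
        raise ValueError("one serial octet supports at most 256 addresses")
    fabric_octet = {"A": "0A", "B": "0B"}[fabric]
    return [f"00:25:B5:{domain_id}:{fabric_octet}:{i:02X}" for i in range(size)]

print(mac_block("A1", "A", 3))
# ['00:25:B5:A1:0A:00', '00:25:B5:A1:0A:01', '00:25:B5:A1:0A:02']
```

With this layout, a MAC seen on the wire can be traced back to a specific UCS domain and fabric at a glance.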
All objects referenced here are created in the root. If sub-organizations are going to be used, you can create all policies and pools in the root and reference these root items from the sub-organizations.
- Equipment tab > Equipment object > Policies - Set Port channel if desired, number of links, and power policies.
- Equipment tab > Fabric Interconnects object - Configure unified ports on the subordinate Fabric Interconnect. It will reboot after committing the change.
- Equipment tab > Fabric Interconnects object - Configure unified ports on the primary Fabric Interconnect. It will reboot after committing the change, and the GUI will disconnect while the cluster fails over; log back in once the failover completes.
- Admin tab > Communications Management > Management IP (ext-management) - Create a pool of IP addresses with at least 1 per physical server.
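When sizing the ext-management pool, it helps to enumerate the block before typing it into the GUI. A small sketch using the standard library; the starting address shown is a documentation placeholder, so use your own range.

```python
import ipaddress

def mgmt_ip_block(first_ip: str, count: int) -> list[str]:
    """Enumerate a contiguous block of addresses for the ext-mgmt IP pool."""
    start = ipaddress.IPv4Address(first_ip)
    return [str(start + i) for i in range(count)]

print(mgmt_ip_block("192.0.2.100", 4))
# ['192.0.2.100', '192.0.2.101', '192.0.2.102', '192.0.2.103']
```

Cross-check the resulting list against your DNS and DHCP reservations so nothing in the block is already in use.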
The settings in this tab involve integration between UCS Manager and various hypervisors. This is a great solution but currently outside the scope of this article. Typically day-1 installations do not include this integration, although I would encourage every UCS user to investigate its use as it provides significant performance benefits beyond traditional virtual network options.
- SAN tab > Pools > WWNN Pools > default - Create block of 512 addresses. Be sure to follow best practices for addressing in the pool.
- SAN tab > Pools > WWPN Pools > default - Delete default pool
- SAN tab > Pools > WWPN Pools - Create pool called Fabric-A and add a block of 512 addresses. Be sure to follow best practices for addressing in the pool.
- SAN tab > Pools > WWPN Pools - Create pool called Fabric-B and add a block of 512 addresses. Be sure to follow best practices for addressing in the pool.
- SAN tab > Policies > vHBA Templates - Create a vHBA template called Fabric-A and associate it with the Fabric-A WWPN pool.
- SAN tab > Policies > vHBA Templates - Create a vHBA template called Fabric-B and associate it with the Fabric-B WWPN pool.
- LAN tab > Pools > MAC Pools > default - Create block of 512 addresses. Be sure to follow best practices for addressing in the pool.
- LAN tab > Policies > Network Control Policies - Create a network control policy called CDP-Enabled and turn on CDP.
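The Fabric-A and Fabric-B WWPN blocks of 512 addresses above can also follow an addressing scheme that encodes the domain and fabric. This sketch assumes the 20:00:00:25:B5 prefix that UCS Manager's default pools start with; the octet layout (fabric nibble plus a 12-bit serial, so a single block can exceed 256 addresses) is my own illustrative convention, so adapt it to your addressing plan.

```python
def wwpn_block(domain_id: str, fabric: str, size: int = 512) -> list[str]:
    """Generate a WWPN pool block embedding domain and fabric identifiers.

    Layout: 20:00:00:25:B5 (common UCS pool prefix) : domain octet
    (e.g. 'A1') : fabric nibble ('A' or 'B') + high serial nibble : low
    serial octet. The 12-bit serial allows up to 4096 addresses per block.
    """
    if not 1 <= size <= 4096:
        raise ValueError("12-bit serial supports at most 4096 addresses")
    fabric_nibble = {"A": 0xA, "B": 0xB}[fabric]
    return [
        f"20:00:00:25:B5:{domain_id}:{fabric_nibble:X}{i >> 8:X}:{i & 0xFF:02X}"
        for i in range(size)
    ]

pool_a = wwpn_block("A1", "A")
print(len(pool_a), pool_a[0])
# 512 20:00:00:25:B5:A1:A0:00
```

Like the MAC scheme, this lets you identify the originating UCS domain and fabric from a WWPN seen in the SAN switch's flogi database.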
Work in progress.
Work in progress.
See the following article on a more expanded set of redundancy tests for #CiscoUCS
Article: Redundancy and availability scenarios for Cisco UCS - Testing #CiscoUCS internal and external availability and redundancy. Added 2012/10/08.