Monday, November 24, 2014

UCS and Fibre Interconnect

Wikipedia has a rather clear explanation of what UCS is. I am just summarizing things in one-liners before I get to the topic of my interest ("Fabric Interconnect").

The Cisco Unified Computing System (UCS) is an x86-architecture data center server platform, introduced in 2009, composed of computing hardware, virtualization support, switching fabric, and management software. Just-in-time deployment of resources and 1:N redundancy can be configured with UCS systems.



Computing
The computing component of UCS is available in two versions: the B-Series (a powered chassis with full- and/or half-slot blade servers) and the C-Series for 19-inch racks (which can also be used with fabric interconnects). The servers are marketed with converged network adapters and port virtualization.

Virtualization
Cisco UCS supports several hypervisors, including VMware ESX/ESXi, Microsoft Hyper-V, Citrix XenServer, and others.

Networking
The Cisco 6100 or 6200 Series switch (called a "Fabric Interconnect") provides network connectivity for the chassis, blade servers, and rack servers connected to it through 10 Gigabit Ethernet and Fibre Channel over Ethernet (FCoE). The Fabric Interconnects are derived from the Nexus 5000 and run NX-OS as well as the UCS Manager software. The FCoE component is necessary for connection to SAN storage, since the UCS blade servers have very little local storage capacity.

I am more interested in the Fabric Interconnect, because I am currently exploring these for my work, so I will provide more information in another post.


Management
Management of the system devices is handled by the Cisco UCS Manager software embedded in the 6100/6200 series Fabric Interconnects. The administrator accesses it through a common browser such as Internet Explorer or Firefox, through a command-line interface such as Windows PowerShell, or programmatically through an API.
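For the programmatic route, here is a minimal sketch using the Cisco UCS Python SDK (ucsmsdk); I am assuming the SDK is installed, and the IP address and credentials below are placeholders, not real values:

    from ucsmsdk.ucshandle import UcsHandle

    # Placeholder UCS Manager VIP and credentials -- replace with real values.
    handle = UcsHandle("192.0.2.10", "admin", "password")
    handle.login()

    # Query all blade servers known to UCS Manager and print a few attributes.
    for blade in handle.query_classid("ComputeBlade"):
        print(blade.dn, blade.model, blade.serial)

    handle.logout()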

Stateless Computing
A key benefit is the concept of Stateless Computing. Each compute node has no set configuration. MAC addresses, UUIDs, firmware, and BIOS settings, for example, are all configured in UCS Manager as a Service Profile and applied to the servers. This allows for consistent configuration and ease of re-purposing. A new profile can be applied within a matter of minutes.
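To make the idea concrete, here is a toy sketch (not the actual UCS Manager API or object model, just an illustration with made-up values) of what a Service Profile conceptually carries and how it gets bound to a blade:

    # Conceptual illustration only -- identities live in the profile, not on the blade.
    service_profile = {
        "name": "web-server-01",
        "uuid": "0000-00000000000A",            # drawn from a UUID pool
        "macs": ["00:25:b5:00:0a:01"],          # drawn from a MAC pool
        "wwpns": ["20:00:00:25:b5:00:0a:01"],   # drawn from a WWPN pool
        "firmware_policy": "host-fw-2.2",
        "bios_policy": "default-bios",
    }

    def apply_profile(blade_slot: str, profile: dict) -> None:
        """Pretend to associate a profile with a physical blade slot."""
        print(f"Associating profile {profile['name']} with {blade_slot}")

    # Re-purposing a server is just re-associating the same profile elsewhere.
    apply_profile("chassis-1/blade-3", service_profile)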


Now that's UCS. Let's move on to the "Fabric Interconnect".

The UCS fabric interconnect is part of a range of products provided by Cisco that are used to uniformly connect servers to networks and storage networks. These devices are usually installed as head units at the top of server racks. All the server components are attached to the fabric interconnect, which acts as a switch to provide access to the core network and storage networks of the data center.

The high-end model is the UCS 6296UP 96-port fabric interconnect, which is touted to promote flexibility, scalability and convergence. It has the following features:
a) Bandwidth of up to 1920 Gbps (see the quick arithmetic after this list)
b) High port density of 96 ports
c) High performance and low-latency capability, with lossless 1/10 Gigabit Ethernet and Fibre Channel over Ethernet
d) Reduced port-to-port latency of only about 2 us (microseconds)
e) Centralized management under the Cisco UCS Manager
f) Efficient cooling and serviceability
g) Virtual machine-optimized services through the VM-FEX technology, which enables a consistent operational model and visibility between the virtual and the physical environments
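As a rough sanity check of the 1920 Gbps figure in item (a), it appears to simply count all 96 unified ports at 10 Gbps in both directions (my assumption; Cisco may calculate it differently):

    ports = 96
    per_port_gbps = 10
    duplex_factor = 2   # transmit and receive counted separately
    print(ports * per_port_gbps * duplex_factor)   # 1920 Gbps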



In the above diagram, the rack-mount servers are shown connected to Nexus 2232s, which are nothing more than remote line cards of the fabric interconnects, known as Fabric Extenders. Fabric Extenders provide a localized connectivity point (10GE/FCoE in this case) without expanding the number of management points by adding another switch.


UCS Logical Connectivity

In the last diagram we see several important things to note about UCS Ethernet networking:
UCS is a Layer 2 system, meaning only Ethernet switching is provided within UCS; any routing (L3 decisions) must occur upstream.
All switching occurs at the Fabric Interconnect level. This means that all frame forwarding decisions are made on the Fabric Interconnect and no intra-chassis switching occurs.
The only connectivity between the Fabric Interconnects is the cluster links. Both Interconnects are active from a switching perspective, but the management system, known as UCS Manager (UCSM), is an Active/Standby clustered application; this clustering occurs across those links. The links do not carry data traffic, which means there is no inter-fabric communication within the UCS system, and A-to-B traffic must be handled upstream.
The Fabric Interconnects themselves operate at approximately 3.2us (microseconds), and the Fabric Extenders operate at about 1.5us each. This means the total round-trip time blade to blade is approximately 6.2us (as the quick sum below shows), right in line with or lower than most Access Layer solutions.
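Putting those numbers together for a blade-to-blade path (ingress Fabric Extender, Fabric Interconnect, egress Fabric Extender, which is my reading of the description above):

    fex_us = 1.5   # Fabric Extender latency, incurred on the way in and the way out
    fi_us = 3.2    # Fabric Interconnect switching latency
    print(fex_us + fi_us + fex_us)   # 6.2 microseconds, matching the figure above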


For other questions, such as how traffic between the two fabrics is handled, please refer to the "Inter-Fabric Traffic in UCS" link (reference 4) in the references section.


References:
1) http://en.wikipedia.org/wiki/Cisco_Unified_Computing_System
2) https://interestingevan.wordpress.com/tag/fabric-interconnect/
3) http://www.techopedia.com/definition/30473/ucs-fabric-interconnect
4) http://www.definethecloud.net/inter-fabric-traffic-in-ucs/


Thursday, November 20, 2014

Difference between WWN, WWPN and WWNN

Have you ever wondered about the differences between WWPN, WWN and WWNN?
Well, I had to dig through Google, so here is what I found, explained in a way I could understand.

WWN:
World Wide Name (WWN) or World Wide Identifier (WWID) is a unique identifier used in storage technologies including Fibre Channel, Advanced Technology Attachment (ATA) or Serial Attached SCSI (SAS). A WWN may be employed in a variety of roles, such as a serial number or for addressability; for example, in Fibre Channel networks, a WWN may be used as a WWNN (World Wide Node Name) to identify a switch, or a WWPN (World Wide Port Name) to identify an individual port on a switch. Two WWNs which do not refer to the same thing should always be different even if the two are used in different roles, i.e. a role such as WWPN or WWNN does not define a separate WWN space. The use of burned-in addresses and specification compliance by vendors is relied upon to enforce uniqueness.

WWPN:
A World Wide Port Name (WWPN, or WWpN) is a World Wide Name assigned to a port in a Fibre Channel fabric. Used on storage area networks, it performs a function equivalent to the MAC address in the Ethernet protocol, as it is supposed to be a unique identifier in the network.

WWNN:
A World Wide Node Name (WWNN, or WWnN) is a World Wide Name assigned to a node (an endpoint, a device) in a Fibre Channel fabric. It is valid for the same WWNN to be seen on many different ports (different addresses) on the network, identifying those ports as multiple network interfaces of a single network node.
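To put the three terms side by side, here is a small illustration with made-up example values; real WWNs are 64-bit identifiers usually written as eight colon-separated hex octets:

    # Illustrative sketch (example values only): a single FC node (WWNN)
    # exposing two ports, each with its own WWPN.
    node_wwnn = "20:00:00:25:b5:00:00:0f"
    port_wwpns = ["20:00:00:25:b5:00:0a:01",   # port 1 (e.g. fc0)
                  "20:00:00:25:b5:00:0b:01"]   # port 2 (e.g. fc1)

    def to_int(wwn: str) -> int:
        """Convert colon-separated WWN text into its 64-bit integer form."""
        return int(wwn.replace(":", ""), 16)

    for wwpn in port_wwpns:
        print(f"node {node_wwnn} -> port {wwpn} ({to_int(wwpn):#018x})")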





Wednesday, November 19, 2014

Cloud Computing and OpenStack


"Cloud" has become a household term these days, more from a buzz word. I was curious to know what exactly people refer as a cloud and how it works. so here is some light over cloud computing and OpenStack. { Also, I was involved in a one-week training for Red Hat OpenStack at work.}


Cloud Computing is typically defined as a type of computing that relies on sharing computing resources rather than having local servers or personal devices to handle applications.

In a cloud computing system, there's a significant workload shift. Local computers no longer have to do all the heavy lifting when it comes to running applications. The network of computers that make up the cloud handles them instead. Hardware and software demands on the user's side decrease. The only thing the user's computer needs to be able to run is the cloud computing system's interface software, which can be as simple as a Web browser, and the cloud's network takes care of the rest.

When talking about a cloud computing system, it's helpful to divide it into two sections: the front end and the back end. They connect to each other through a network, usually the Internet. The front end is the side the computer user, or client, sees. The back end is the "cloud" section of the system.

The front end includes the client's computer (or computer network) and the application required to access the cloud computing system.

On the back end of the system are the various computers, servers and data storage systems that create the "cloud" of computing services. In theory, a cloud computing system could include practically any computer program you can imagine, from data processing to video games. Usually, each application will have its own dedicated server.


Cloud computing has started to gain mass appeal in corporate data centers, as it enables the data center to operate like the Internet: computing resources can be accessed and shared as virtual resources in a secure and scalable manner.



OpenStack is a free and open-source cloud computing software platform. OpenStack began in 2010 as a joint project of Rackspace Hosting and NASA. Currently, it is managed by the OpenStack Foundation, a non-profit corporate entity established in September 2012 to promote OpenStack software and its community.
Rackspace donated the code that powers its storage and content delivery service (Cloud Files) and its production servers (Cloud Servers). NASA contributed the technology that powers Nebula, its high-performance computing, networking, and data storage cloud service that allows researchers to work with large scientific data sets.

OpenStack has a modular architecture that currently has eleven components:

Nova - provides virtual machines (VMs) upon demand.
Swift - provides a scalable storage system that supports object storage.
Cinder - provides persistent block storage to guest VMs.
Glance - provides a catalog and repository for virtual disk images.
Keystone - provides authentication and authorization for all the OpenStack services (a small authentication example follows this list).
Horizon - provides a modular web-based user interface (UI) for OpenStack services.
Neutron - provides network connectivity-as-a-service between interface devices managed by OpenStack services.
Ceilometer - provides a single point of contact for billing systems.
Heat - provides orchestration services for multiple composite cloud applications.
Trove - provides database-as-a-service provisioning for relational and non-relational database engines.
Sahara - provides data processing services for OpenStack-managed resources.
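As an example of how a client talks to one of these services, here is a minimal sketch of authenticating against Keystone's v3 password API with Python's requests library; the endpoint URL, user, project, and password below are placeholder assumptions:

    import requests

    auth_url = "http://controller:5000/v3/auth/tokens"   # assumed Keystone endpoint
    body = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": "admin",
                        "domain": {"name": "Default"},
                        "password": "secret",
                    }
                },
            },
            "scope": {"project": {"name": "admin", "domain": {"name": "Default"}}},
        }
    }

    resp = requests.post(auth_url, json=body)
    resp.raise_for_status()
    # Keystone returns the token in a response header; it is then passed to
    # the other services (Nova, Neutron, Cinder, ...) on each request.
    token = resp.headers["X-Subject-Token"]
    print("Got token:", token[:16], "...")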


OpenStack officially became an independent non-profit organization in September 2012. The OpenStack community, which is overseen by a board of directors, comprises many direct and indirect competitors, including IBM, Intel, and VMware.


Red Hat Enterprise Linux OpenStack Platform delivers an integrated foundation to create, deploy, and scale a secure and reliable public or private OpenStack cloud. Red Hat Enterprise Linux OpenStack Platform combines the world’s leading enterprise Linux and the fastest-growing cloud infrastructure platform to give you the agility to scale and quickly meet customer demands without compromising on availability, security, or performance.


References:

You can get the basic data from Wikipedia, so I am posting other references.

Cloud Computing

OpenStack



Red Hat OpenStack


A/N: I have collected data from different sources, so credit for the write-up goes to the original authors.