Deployment Guide
affinity prevents VMware Distributed Resource Scheduler (DRS) from moving the virtual machines to new hosts.
If the VSMs end up on the same host due to VMware High Availability, VMware DRS will issue a five-star
recommendation to move one of the VSMs.
CPU and memory for the VSM virtual machine need to be guaranteed: that is, the 2 GB of memory required by each
virtual machine should not be shared with other virtual machines. In addition, a minimum 1-GHz CPU capability
should be assured for each VSM virtual machine.
The mgmt0 interface on the VSM does not necessarily require its own VLAN. In fact, you could simply use the same
VLAN to which VMware vCenter Server belongs. The VSM management VLAN is really no different than any other
virtual machine data VLAN. Alternatively, network administrators can have a special VLAN designated for network
device management.
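As a simple illustration of this point, a minimal mgmt0 configuration sketch is shown below. The IP address, mask, and gateway are placeholder values chosen for this example; the interface simply lives on whatever management or virtual machine VLAN the administrator selects.

    interface mgmt0
      ip address 192.168.10.5/24

    vrf context management
      ip route 0.0.0.0/0 192.168.10.1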

Adjacency

The VSM and VEM control and packet interfaces are Layer 2 interfaces. They do not support IP-based
communication. Layer 2 adjacency from the VSMs to each VEM is required. Also, Layer 2 adjacency must be
maintained between VSMs that are configured as a high-availability pair.
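As a hedged sketch of what this Layer 2 model looks like on the VSM, the domain configuration below associates the control and packet interfaces with dedicated VLANs. The domain ID and VLAN numbers are placeholder values, and the exact syntax can vary by software release.

    svs-domain
      domain id 100
      control vlan 260
      packet vlan 261
      svs mode L2

The control and packet VLANs (260 and 261 in this sketch) are the VLANs that must be carried, as ordinary Layer 2 VLANs, on every switch between the VSM and each of its VEMs.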

Latency

The control protocol used by the VSM to communicate with the VEMs is similar to the protocols used in Cisco modular
chassis such as the Cisco MDS 9000 Family and the Cisco Nexus 7000 Series chassis. This protocol was designed
to operate in a tightly controlled, lossless, low-latency Layer 2 network with no possibility of network contention (for
example, the Ethernet out-of-band channel [EoBC] in a Cisco chassis). In a Cisco Nexus 1000V Series implementation,
this protocol runs over the data center network.
The control protocol has been modified for the Cisco Nexus 1000V Series to take into account various performance
characteristics of a data center network, but there are design limitations. The Cisco Nexus 1000V Series was
designed to run in a single data center. The Cisco Nexus 1000V Series does not support long inter-data center
distances between the VSM and a VEM.
As a general guideline, round-trip latency between the VSM and the VEM should be less than 50 milliseconds.
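There is no dedicated latency test for the control path, but a rough check can be made from the VSM by pinging the management address of an ESX host that carries a VEM; the address below is a placeholder, the hostname prompt is generic, and the routed management path is only a proxy for the Layer 2 control path.

    n1000v# ping 192.168.10.21 vrf management

If the reported round-trip times regularly approach or exceed the 50-millisecond guideline, the VSM and that VEM are likely too far apart for a supported design.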

Traditional Cisco Network

This section describes various scenarios regarding the deployment of the Cisco Nexus 1000V Series in a traditional
access layer design. "Traditional" in this context refers to a VMware ESX host with multiple NICs connected to two
independent access layer switches.

Two-NIC Design Examples

Hosts with two NICs are fairly common when deploying VMware ESX, particularly for blades and 10-Gbps connected
hosts. This design is also the simplest from the perspective of the Cisco Nexus 1000V Series, mainly because there is
little possibility for configuration variation.
With this design, both NICs are part of a single PortChannel configured in virtual PortChannel host mode (vPC-HM)
and connected to two access switches (Figure 4). A single uplink profile is applied to both NICs. The load-balancing
algorithm can vary, but it must be a source-based hashing algorithm. VLAN hashing and source MAC address
hashing are described here, with source MAC address hashing being the preferred method.
VLAN hashing in this design may lead to undesirable load balancing on 1-Gbps NICs. Using two 1-Gbps NICs
creates the potential for the data VLAN and the VMware VMotion VLAN to be hashed down the same uplink port. If
VMware VMotion is initiated, all virtual machines will be contending for the same bandwidth as the VMware VMotion
session. With 10-Gbps NICs, the impact may be negligible because of the substantial bandwidth available.
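A hedged sketch of the uplink configuration this design implies is shown below. The profile name, VLAN ranges, and the use of CDP-based subgroups are illustrative assumptions, and the exact port-profile syntax differs between Cisco Nexus 1000V Series releases.

    port-channel load-balance ethernet source-mac

    port-profile system-uplink
      capability uplink
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 260-264
      channel-group auto mode on sub-group cdp
      no shutdown
      system vlan 260,261
      state enabled

With source MAC address hashing, each virtual machine is pinned to one of the two uplinks by its own MAC address, which spreads the virtual machine and VMware VMotion traffic across both NICs more evenly than hashing on the VLAN.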
