Federation in NSX-T 3.0

NSX-T Federation

With the release of VMware NSX-T 3.0, Federation is now supported in NSX-T.

With Federation:

– You can scale NSX-T deployments by having an NSX-T Manager cluster at each location; the NSX-T Manager at each location can integrate with multiple vCenters present locally.

– Each location has a Local NSX-T Manager with its vCenters, clusters, and hosts.
 
– The NSX-T Global Manager provides a single pane of glass for managing the Local NSX-T Managers in the individual locations.

– Active and Standby Global Managers are placed in two different locations.

– Configuration changes are made on the Active Global Manager and are then synced to the Standby Global Manager.
Fabric preparation, which includes creating transport zones, host transport nodes, and edge transport nodes, is done using the Local Manager.

– Tier 0 Gateways and Tier 1 Gateways can be stretched across multiple locations. The number of locations is not limited to two; you could also have a third location.
Segments can be stretched across multiple locations as well.

– Ability to have non-stretched Tier 1 Gateways in individual locations.
In that case, the span of the segment can be limited to one location.

– The span of a Tier 1 Gateway has to be less than or equal to the span of its Tier 0 Gateway.


– The span of a stretched, DR-only Tier 1 Gateway has to be equal to the span of the stretched Tier 0 Gateway.

– A stretched Tier 1 Gateway can run services like edge firewall or NAT if required.

– NSX-T Edge Clusters at each site are configured with RTEP IPs.
RTEPs handle cross-site traffic and are required only if the Edge cluster is used to configure a gateway that spans more than one location.
RTEPs across sites must have IP connectivity to each other; a Geneve tunnel is established between the RTEPs.
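The span rules above can be expressed as a simple check. This is a conceptual sketch of the rules, not NSX-T API code; the location names are just examples from this lab.

```python
# Conceptual sketch of the Federation span rules (not NSX-T API code).
# Rule 1: a Tier 1 span must be a subset of its Tier 0 span.
# Rule 2: a stretched, DR-only Tier 1 span must equal the Tier 0 span.

def t1_span_valid(t0_span, t1_span, dr_only=False):
    """Return True if the Tier 1 span is legal under the Tier 0 span."""
    t0, t1 = set(t0_span), set(t1_span)
    if dr_only and len(t1) > 1:          # stretched DR-only Tier 1
        return t1 == t0                  # span must equal Tier 0 span
    return t1 <= t0                      # otherwise: subset is enough

t0 = ["Bangalore", "Delhi"]
print(t1_span_valid(t0, ["Bangalore"]))                 # True: non-stretched T1
print(t1_span_valid(t0, ["Bangalore", "Delhi"], True))  # True: DR-only, equal span
print(t1_span_valid(t0, ["Bangalore", "Chennai"]))      # False: outside T0 span
```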

Intra-location tunnel endpoints (TEPs) and inter-location tunnel endpoints (RTEPs) must use separate VLANs and Layer 3 subnets.
This point is documented in the admin guide of NSX-T version 3.0.
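This separation requirement can be sanity-checked programmatically. The VLAN and subnet values below are illustrative assumptions, not values from this lab.

```python
# Sketch: verify that TEP and RTEP use separate VLANs and
# non-overlapping Layer 3 subnets, per the NSX-T 3.0 admin guide.
# VLAN/subnet values here are illustrative assumptions.
import ipaddress

tep  = {"vlan": 9,  "subnet": ipaddress.ip_network("192.168.9.0/24")}
rtep = {"vlan": 10, "subnet": ipaddress.ip_network("192.168.10.0/24")}

assert tep["vlan"] != rtep["vlan"], "TEP and RTEP must use separate VLANs"
assert not tep["subnet"].overlaps(rtep["subnet"]), \
    "TEP and RTEP must use separate Layer 3 subnets"
print("TEP/RTEP separation OK")
```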

Lab Setup:

vCenters, Clusters, Hosts and Edge Nodes


Lab Setup


In the lab setup:
1. There is a vCenter in each location.
2. An NSX-T Manager is also deployed in each location.
3. There are two locations here: Bangalore and Delhi.
4. NSX-T Edge Clusters comprising two edges are deployed in each location: two edges in Bangalore and two edges in Delhi.
5. Hosts in each location are prepared as Host Transport Nodes.
6. In my lab setup, only an Active Global Manager is deployed. In a production setup there would also be a Standby Global Manager in the other location.
7. The Tier 0 Gateway uses Active-Active availability mode.
8. Bangalore is the Primary location and Delhi is configured as the Secondary location with respect to NSX-T Federation.

Deployment of the Global Manager is similar to deployment of a Local Manager; you need to specify the location name and, very importantly, the Global Manager role while deploying the Global NSX-T Manager.

Once the Global Manager is deployed, you can add locations.
While adding a location, you need to specify:
– Location Name
– FQDN
– Username
– Password
– SHA-256 thumbprint, which is retrieved from the Local Manager GUI.
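The fields above map naturally to a registration payload. This is an illustrative model of the GUI inputs, not the exact NSX-T API schema; the FQDN and credential values are placeholders.

```python
# Sketch of the information needed to add a location on the Global
# Manager. Field names mirror the GUI inputs listed above; this is
# not an exact NSX-T API schema, and all values are placeholders.

location = {
    "name": "Bangalore",
    "fqdn": "lm-bangalore.lab.local",     # assumed Local Manager FQDN
    "username": "admin",
    "password": "<password>",
    "thumbprint": "<SHA-256 thumbprint from Local Manager GUI>",
}

required = {"name", "fqdn", "username", "password", "thumbprint"}
missing = required - set(location)
assert not missing, f"missing fields: {missing}"
print("location payload complete")
```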


Adding location on Global Manager



Locations on Global Manager

Above you see that two locations, Bangalore and Delhi, have been added to the Global Manager.

Fabric setup in Bangalore:

In each location, you need to prepare the NSX-T Fabric, which includes:
– Configuring Transport Zones in each location
For now, the default nsx-overlay-transportzone works for overlay traffic in each location.
– Preparing Compute Transport Nodes
– Configuring Edge Transport Nodes
– Configuring an Edge Cluster
– Configuring RTEPs on the edge cluster

Sharing the screenshots for these steps from the Bangalore location.
Please note that the same steps are to be followed in the other location, Delhi, as part of fabric preparation.

IP Address Pools for Compute TEPs, Edge TEPs and RTEPs in Bangalore:

Compute TEP Pool in Bangalore


Edge TEP Pool in Bangalore


 RTEP Pool in Bangalore
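An IP pool like the RTEP pool above is just a subnet plus an allocation range. The sketch below models one and checks that the range sits inside the pool's subnet; the addresses are illustrative lab assumptions, not values from the screenshots.

```python
# Sketch: an RTEP IP pool like the one shown above, with a sanity
# check that the allocation range lies inside the pool's subnet.
# All addresses here are illustrative assumptions.
import ipaddress

pool = {
    "name": "RTEP-Pool-Bangalore",
    "cidr": "192.168.20.0/24",
    "range_start": "192.168.20.11",
    "range_end": "192.168.20.20",
}

net = ipaddress.ip_network(pool["cidr"])
start = ipaddress.ip_address(pool["range_start"])
end = ipaddress.ip_address(pool["range_end"])
assert start in net and end in net and start <= end
print(f"{pool['name']}: {int(end) - int(start) + 1} usable RTEP addresses")
```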


Uplink Profiles for Host TNs and Edge TNs

Uplink profiles are used while configuring NSX on the edges and the hosts.
The above screenshot shows the uplink profiles created for the edges and hosts in the Bangalore location.
Since the edge is uplinked to the VDS (used for configuring NSX), separate VLANs are used for the edge TEP and compute TEP respectively:
VLAN for compute TEP is 7
VLAN for edge TEP is 9
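The two uplink profiles described above can be modeled as follows. This is a conceptual sketch, not the exact NSX-T uplink-profile schema; the teaming policy value is an assumption for illustration.

```python
# Sketch of the two uplink profiles above: separate transport VLANs
# for compute TEP (7) and edge TEP (9). Not the exact NSX-T schema;
# the teaming policy shown is an assumption.

profiles = {
    "host-uplink-profile": {"transport_vlan": 7, "teaming_policy": "FAILOVER_ORDER"},
    "edge-uplink-profile": {"transport_vlan": 9, "teaming_policy": "FAILOVER_ORDER"},
}

# TEP VLANs must differ because the edge is uplinked to the same VDS.
assert (profiles["host-uplink-profile"]["transport_vlan"]
        != profiles["edge-uplink-profile"]["transport_vlan"])
print("compute TEP VLAN 7, edge TEP VLAN 9: OK")
```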



Edge Transport Node Configuration for edge in Bangalore


Compute TN Configuration for host in Bangalore

The above screenshots show the edge transport node configuration and the compute transport node configuration respectively.


Edge Cluster in Bangalore

The above shows the edge cluster in Bangalore.

Configure RTEP on edge cluster of Bangalore


Configure RTEPs on the edge cluster of Bangalore for cross-site traffic.

Before proceeding to configurations on the Global Manager, ensure that the same steps are followed to prepare the fabric in the Delhi location.
RTEPs are to be created on the edge cluster in Delhi too; cross-site traffic uses RTEP-to-RTEP communication.

Configurations on Global Manager:

With the fabric set up properly in both locations, Bangalore and Delhi, configuration can now be applied on the Global Manager.


VLAN 5 and VLAN 51 are used for uplink connectivity from edge to the physical routers in Bangalore.
VLANs 55 and 56 are used for uplink connectivity from edge to the physical routers in Delhi.

Segment for uplink on Tier 0 Gateway using VLAN 5 in Bangalore


Segment for uplink on Tier 0 Gateway using VLAN 51 in Bangalore





Segment for uplink on Tier 0 Gateway using VLAN 55 in Delhi


Segment for uplink on Tier 0 using VLAN 56 in Delhi



The above segments are created on the Global Manager.

These segments are used while creating Layer 3 interfaces on the stretched Tier 0 Gateway from the Global Manager.


Next, create the stretched Tier 0 Gateway from the Global Manager.

Create Tier 0 Gateway

While creating the Tier 0 Gateway, the Bangalore location is set as primary for Federation, and the Delhi location is set as secondary.
This Tier 0 Gateway uses Active-Active availability mode, as there is no requirement for services like edge firewall or NAT on it.
Active-Active availability mode provides greater throughput for North-South traffic.
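The Federation intent of this gateway can be summarized as follows. This is a conceptual model of the Global Manager settings described above, not the exact NSX-T API schema; the gateway name is an assumption.

```python
# Sketch of the stretched Tier 0 Gateway's Federation intent: two
# locations with Bangalore primary, Active-Active HA mode. This is
# a conceptual model, not the exact NSX-T API schema; the display
# name is assumed.

tier0 = {
    "display_name": "Stretched-T0",
    "ha_mode": "ACTIVE_ACTIVE",
    "locations": [
        {"name": "Bangalore", "mode": "PRIMARY"},
        {"name": "Delhi", "mode": "SECONDARY"},
    ],
}

primaries = [l for l in tier0["locations"] if l["mode"] == "PRIMARY"]
assert len(primaries) == 1, "exactly one location must be primary"
print("Stretched-T0: Active-Active, primary =", primaries[0]["name"])
```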

Layer 3 interfaces on Stretched Tier 0 Gateway


Create Layer 3 interfaces on the stretched Tier 0 Gateway as shown above.

Here B1-V1-E1 corresponds to the Bangalore location, VLAN 5 (the first uplink VLAN there), and edge node 1 in Bangalore.
D1-V2-E4 corresponds to Delhi, VLAN 56 (the second uplink VLAN there), and edge node 4 in Delhi.

Similar logic is used to create the rest of the Layer 3 interfaces on the Tier 0 Gateway.
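The naming convention can be decoded mechanically; the VLAN mapping below reflects this lab (Bangalore V1/V2 are VLANs 5/51, Delhi V1/V2 are VLANs 55/56).

```python
# Sketch: decode the interface naming convention used above
# (e.g. B1-V1-E1 = Bangalore, first uplink VLAN, edge node 1).
# VLAN mapping reflects this lab's uplink VLANs.

UPLINK_VLANS = {"B": {"V1": 5, "V2": 51}, "D": {"V1": 55, "V2": 56}}
LOCATION = {"B": "Bangalore", "D": "Delhi"}

def decode(name):
    loc, vlan, edge = name.split("-")        # e.g. "D1-V2-E4"
    site = loc[0]                            # first letter = location
    return (LOCATION[site], UPLINK_VLANS[site][vlan], int(edge[1:]))

print(decode("B1-V1-E1"))   # ('Bangalore', 5, 1)
print(decode("D1-V2-E4"))   # ('Delhi', 56, 4)
```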

At this stage, we can test reachability between the physical routers and the Tier 0 Gateway interfaces.



BGP Setup:

BGP Setup

The physical routers are in BGP AS 65001.
The stretched Tier 0 Gateway uses BGP AS 65000.





In the BGP configs, the BGP AS number is configured and BGP neighbors are specified.

Enable route redistribution for connected Tier 0/Tier 1 subnets on the stretched Tier 0 Gateway for both locations – Bangalore and Delhi.

The physical routers in Delhi prepend the AS path on routes received from the edges in Delhi.
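The effect of this inbound prepending can be sketched as a best-path comparison: BGP prefers the shortest AS path (other attributes being equal), so inflating the path received from the local edges steers traffic via Bangalore. The prepend count below is an assumption for illustration.

```python
# Sketch of the BGP best-path effect of inbound AS-path prepending
# on the Delhi routers. BGP prefers the shortest AS path when other
# attributes are equal; the prepend count (2) is an assumption.

paths = {
    # learned directly from the Delhi edge, prepended on receipt
    "via-delhi-edge": [65000, 65000, 65000],
    # learned across the physical network from Bangalore
    "via-bangalore": [65000],
}

best = min(paths, key=lambda p: len(paths[p]))
print("preferred path:", best)   # via-bangalore (AS path length 1 vs 3)
```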



Create stretched Tier 1 Gateway from Global Manager
 

From the Global Manager, create a stretched Tier 1 Gateway.

In the lab setup, I have not used a custom span, so no edge cluster association is required on the Tier 1.
This Tier 1 Gateway is a DR-only Tier 1.
The span of this Tier 1 Gateway is the same as the span of the Tier 0 Gateway, covering both locations, Bangalore and Delhi.

Advertise connected subnets on Tier 1 Gateway.




Create stretched segment from Global Manager

While creating the overlay segment, specify the Tier 1 Gateway and the gateway IP.
The traffic type is Overlay.
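A stretched segment boils down to a name, a Tier 1 attachment, and a gateway address in CIDR form. The sketch below models that; the segment name, Tier 1 name, and addressing are illustrative lab assumptions.

```python
# Sketch of the stretched overlay segment's settings: attached to a
# stretched Tier 1 Gateway, gateway IP given in CIDR form. Names
# and addresses are illustrative assumptions, not lab values.
import ipaddress

segment = {
    "display_name": "Stretched-Segment",
    "traffic_type": "Overlay",
    "connectivity": "Stretched-T1",        # assumed Tier 1 name
    "gateway_cidr": "172.16.10.1/24",
}

iface = ipaddress.ip_interface(segment["gateway_cidr"])
print(f"gateway {iface.ip} serves subnet {iface.network}")
```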

Now attach workloads to this segment and test East-West and North-South connectivity.

Validation:

Ping from VM in Bangalore to VM in Delhi


Ping from VM in Delhi to VM in Bangalore

From the above, we verify the MAC addresses associated with the VMs and that there is reachability between the VMs in the different locations.

Verify RTEP to RTEP communication

Above we see that the RTEP-to-RTEP tunnel is established properly.

Traffic flow from loopback of physical router in Delhi to VM on the segment

The above trace shows that traffic from the loopback of the physical router in Delhi destined to the VM on the segment (this VM is on host ESXi 12 in the Delhi location) goes through Bangalore.
The output also shows that the route through the Bangalore location has a shorter AS path, because AS-path prepending is applied on the physical router in Delhi.
 
Routing tables of Tier 1 DR and Tier 0 DR on ESXi 12 in Delhi


The VM in Delhi is on ESXi 12.
The above shows the routing tables corresponding to the Tier 1 DR and Tier 0 DR on ESXi 12.

Trace from VM in Delhi to loopback on physical router 2 of Delhi

Trace from VM in Bangalore to loopback address on physical router 2 of Delhi
Traceflow from VM in Delhi location to loopback of physical router in Delhi


RTEP interface on Edge Node 2 in Bangalore location


From the above output, we see that traffic lands on Edge Node 4 in Delhi, then goes through the tunnel between RTEPs and lands on Edge Node 2 in Bangalore location. Edge Node 2 then routes it towards the physical network.




