Build Load Balancing Service in VMC on AWS with Avi Load Balancer – Part4

This blog is Part 4 of this series. If you have not gone through Part 1, Part 2 and Part 3, please check them out first.

In Part3, we set up an active-active global load balancing service for our testing application (https://www.sddc.vmconaws.link).

Some applications require stickiness between a client and a server: all requests in a long-lived transaction from a client must be sent to the same server, or the application session may break. Unfortunately, an active-active GSLB setup cannot guarantee that a client session always reaches the same back-end server. For example, a client initially served by a back-end server in SDDC01 may be redirected to SDDC02 when a new DNS query is resolved to the SDDC02 VIP after the DNS TTL expires.

Avi Networks GSLB site cookie persistence is designed to handle this use case. When traffic is received by the Avi LB in SDDC02, it checks the cookie within the request and finds that the session was initially served by SDDC01. The SDDC02 Avi LB then works as a proxy, forwarding the client’s traffic to the SDDC01 Avi LB. Please note that the source IP of the client traffic is changed to the local load balancing virtual IP (in our case, 192.168.100.100) by the SDDC02 Avi LB before the traffic is forwarded to SDDC01. This source NAT is required to ensure that the return traffic from the back-end server uses the same path as the incoming traffic.
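
The routing decision the local Avi LB makes on each request can be sketched in a few lines. This is a minimal illustration, not Avi's implementation; the cookie name matches the persistence profile we create in Step 2 below, and the site names are our own labels.

```python
# Minimal sketch of the GSLB site-cookie routing decision (illustrative only;
# not Avi code). Cookie name matches the persistence profile in Step 2 below.
COOKIE = "site-affinity-persistence"

def select_serving_site(local_site: str, cookies: dict) -> str:
    """Return the site whose back-end pool should serve this request."""
    origin = cookies.get(COOKIE)
    if origin is None or origin == local_site:
        # New session, or the client already belongs to this site: serve locally.
        return local_site
    # Client is pinned to the peer site: the local LB acts as a proxy and
    # forwards the request (source-NATed to its own VIP) to the peer's LB.
    return origin

# A client first served by SDDC01 later lands on SDDC02 after DNS re-resolution:
print(select_serving_site("sddc02", {COOKIE: "sddc01"}))  # proxied back to sddc01
```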

Step 1: Add a Federated PKI Profile

Go to Templates—>Security—>PKI Profile and click the Create button to create a PKI profile. Input the parameters as below:

  • Name: gslb-pki-server
  • Enable CRL Check: No
  • Is Federated: Yes. That is to say that the PKI profile will be replicated across the federation: SDDC01 and SDDC02.
  • Certificate Authority: Add the self-signed certificate which we created in Part 2 as the CA. This ensures that the Avi load balancer trusts the self-signed certificate presented by the peering SDDC when it works as a proxy for the client.

Step 2: Add a Federated Persistence Profile

Go to Templates—>Profiles—>Persistence and click the Create button to create a GSLB persistence profile. Input the parameters as below:

  • Name: gslb-persistence01
  • Type: GSLB Site
  • Application Cookie Name: site-affinity-persistence
  • Is Federated: Yes. That is to say that the persistence profile will be replicated across the federation: SDDC01 and SDDC02.
  • Persistence Timeout: 30 mins

Step 3: Add a Federated Health Monitor

Go to Templates—>Profiles—>Health Monitors and click the Create button to create a GSLB health monitor. Input the parameters as below:

  • Name: gslb-hm-https01
  • Type: HTTPS
  • Is Federated: Yes. That is to say that the health monitor will be replicated across the federation: SDDC01 and SDDC02.
  • Health Monitor Port: 443
  • Response Code: 2xx, 3xx
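
A response-code list like "2xx, 3xx" is matched per status-code class. A quick sketch of that matching logic (my own illustration, not Avi code):

```python
def response_code_ok(status: int, patterns: list) -> bool:
    """Match an HTTP status against health-monitor patterns like '2xx' or '301'."""
    for pat in patterns:
        pat = pat.strip().lower()
        if pat.endswith("xx"):                 # class match: 2xx, 3xx, ...
            if status // 100 == int(pat[0]):
                return True
        elif status == int(pat):               # exact match
            return True
    return False

print(response_code_ok(302, ["2xx", "3xx"]))  # a redirect counts as healthy here
print(response_code_ok(503, ["2xx", "3xx"]))  # a 5xx fails the check
```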

Step 4: Change the GSLB service

Go to Applications—>GSLB Service and edit the existing GSLB service gslb-vs01.

  • Health Monitor: gslb-hm-https01
  • Site Persistence: Enabled
  • Site Cookie Application Persistence Profile: gslb-persistence01

After we have completed the above configuration, a new pool called SP-gslb-vs01-sddc01-vs01 is added into the local load balancing virtual service sddc01-vs01 on the SDDC01 Avi LB.

When we check the member information of the new pool, the virtual IP of the local virtual service in the peering SDDC (SDDC02) is shown as the only pool member. Please note that this pool is created automatically by the Avi LB, so its settings cannot be changed by users.

Similarly, on SDDC02 Avi LB, a new pool called SP-gslb-vs01-sddc02-vs01 is created and added to the local load balancing virtual service: sddc02-vs01.

Let’s verify our work.

First, the GSLB DNS resolves the DNS name (www.sddc.vmconaws.link) of our testing application to the VIP in SDDC01. When we input the URL https://www.sddc.vmconaws.link into the browser, we are served by centos02 in SDDC01 as expected.

Now change the DNS resolution to point to the SDDC02 VIP. Go to our testing application again: we are still served by the same back-end server, centos02 in SDDC01.

The cross-site traffic between the Avi LBs can be verified via a packet capture on the SDDC02 Avi LB. From the capture, we can see that the HTTPS session destined for SDDC02 is now forwarded from SDDC02 to SDDC01 by the SDDC02 Avi LB.

Because GSLB site persistence is based on an HTTP cookie, it comes with a few restrictions:

  • Site persistence applies only to Avi VIPs.
  • Site persistence across multiple virtual services within the same Controller cluster is not supported.
  • For site persistence to be turned on for a global application, all of its individual members must run on active sites.
  • Site persistence applies only to HTTP or HTTPS traffic, and only when the Avi LB terminates the TLS/SSL session.

This is the end of this blog. Thank you very much for reading!

Build Load Balancing Service in VMC on AWS with Avi Load Balancer – Part3

This blog is Part 3 of this series. If you have not gone through Part 1 and Part 2, please check them out first.

In Part 1 and Part 2, we deployed the Avi load balancers and completed the local load balancing setup in VMC SDDC01. To achieve high availability across different SDDCs, global load balancing is required. In this blog, let’s set up an active-active global load balancing service for our testing web application so that the web servers in both SDDCs can serve clients simultaneously.

Section 1: Infrastructure

Task 1: Follow Part 1 and Part 2 to deploy the Avi load balancer and set up local load balancing in VMC SDDC02 as shown in the above diagram.

  • Avi Controller Cluster
    • Cluster IP: 192.168.101.3
    • Controller Node1: 192.168.101.4
    • Controller Node2: 192.168.101.5
    • Controller Node3: 192.168.101.6
  • SE Engine
    • SE01: 192.168.101.10
    • SE02: 192.168.101.11
  • LB Virtual Service:
    • VIP: 192.168.100.100 with back-end member server Centos03 (192.168.100.25)
  • NAT: 52.26.167.214<->192.168.100.100

Task 2: Connectivity between VMC SDDC and TGW

Please refer to my friend Avnish Tripathi’s blog (https://vmtechie.blog/2019/09/15/connect-aws-transit-gateway-to-vmware-cloud-on-aws/) to connect VMC SDDC01 and VMC SDDC02 to AWS Transit Gateway with Route-based VPN.

Task 3: Set up NAT for DNS Service virtual IP in VMC Console

SDDC01: static NAT 54.201.246.64<->192.168.96.101. Here 192.168.96.101 is the DNS virtual service VIP in SDDC01.

SDDC02: static NAT 52.32.129.180<->192.168.100.101. Here 192.168.100.101 is the DNS virtual service VIP in SDDC02.

Task 4: Add Firewall rules for GSLB

  • Allow inter-SDDC traffic as the below in VMC console.
  • Allow DNS traffic from the Internet to GSLB DNS service virtual IP

Task 5: DNS sub-domain delegation

In the DNS server, delegate the sub-domain (sddc.vmconaws.link) to the public IPs corresponding to the two DNS virtual service VIPs, which means the two to-be-defined DNS virtual services will work as the name servers for the sub-domain.
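
Conceptually, delegation tells the parent zone which name servers are authoritative for the sub-domain. A hypothetical sketch of that lookup decision (the IPs are the public NAT addresses from Task 3; the parent name server name is made up):

```python
# Sketch of sub-domain delegation: queries under the delegated zone go to the
# Avi DNS virtual services' public IPs (from Task 3); everything else stays
# with the parent zone's name servers.
DELEGATIONS = {"sddc.vmconaws.link": ["54.201.246.64", "52.32.129.180"]}

def authoritative_servers(qname: str, parent_ns: list) -> list:
    name = qname.rstrip(".").lower()
    for zone, servers in DELEGATIONS.items():
        if name == zone or name.endswith("." + zone):
            return servers
    return parent_ns

print(authoritative_servers("www.sddc.vmconaws.link", ["ns1.example.net"]))
```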

Section 2: Enable GSLB

Task 1: Create a DNS virtual service

In the SDDC01 Avi Controller GUI, go to Application—>Virtual Services—>Create Virtual Service then input the parameters as below:

  • Name: sddc01-g-dns01
  • IPv4 VIP: 192.168.96.101
  • Application Profile: System-DNS
  • TCP/UDP Profile: System-UDP-Per-Pkt
  • Service Port: 53
  • Service Port: 53, with the TCP/UDP profile overridden to System-TCP-Proxy
  • Pool: leave blank

Leave the rest of the settings as default.

In SDDC02, create a similar DNS virtual service (sddc02-g-dns01) with VIP 192.168.100.101.

Task 2: GSLB Site

Avi uses GSLB sites to define different data centers. GSLB sites fall into two broad categories — Avi sites and external sites. This blog focuses on Avi sites. Each Avi site is characterized as either an active or a passive site. Active sites are further classified into two types — GSLB leader and followers. The active site from which the initial GSLB site configuration is performed is the designated GSLB leader. GSLB configuration changes are permitted only by logging into the leader, which propagates those changes to all accessible followers. The only way to switch the leadership role to a follower is by overriding the configuration of the leader from a follower site. This override can be invoked in the case of site failures or for maintenance.
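
The leader/follower flow above can be summarized in a toy model (a sketch of the propagation behavior, not the Avi API):

```python
class GslbSite:
    """Toy model of a GSLB site holding replicated configuration."""
    def __init__(self, name: str, role: str):
        self.name, self.role, self.config = name, role, {}

def apply_gslb_change(site: GslbSite, followers: list, key: str, value) -> None:
    """Accept a config change only on the leader, then push it to followers."""
    if site.role != "leader":
        raise PermissionError("GSLB config changes are only accepted on the leader")
    site.config[key] = value
    for follower in followers:       # leader propagates to reachable followers
        follower.config[key] = value

leader = GslbSite("sddc01-gslb", "leader")
follower = GslbSite("sddc02-gslb", "follower")
apply_gslb_change(leader, [follower], "subdomain", "sddc.vmconaws.link")
print(follower.config)               # change replicated to the follower
```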

In our setup, SDDC01 will work as “Leader” site and SDDC02 will work as “Follower” site.

In SDDC01 Avi Controller GUI, go to Infrastructure—>GSLB and click the Edit icon to enable GSLB Service.

In the “New GSLB Configuration” window, input the parameters as below:

  • Name: sddc01-gslb
  • Username: admin
  • Password: Password for Avi Controller
  • GSLB Subdomain: sddc.vmconaws.link

Then click “Save and Set DNS Virtual Service”.

Select the DNS virtual service newly defined in Task 1 as the “DNS Virtual Services” entry and configure “sddc.vmconaws.link” as the subdomain, then save the change.

Now the GSLB setup is as below.

Click the “Add New Site” button to add SDDC02 as the second GSLB site, then input the parameters below in the “New GSLB Site” window:

  • Name: sddc02-gslb
  • Username: admin
  • Password: Password for Avi Controller
  • IP Address: 192.168.101.3 (SDDC02 Avi Cluster VIP)
  • Port: 443

Click the “Save and Set DNS Virtual Services” button. Then select the DNS virtual service newly defined in Task 1 as the “DNS Virtual Services” entry, configure “sddc.vmconaws.link” as the subdomain, and save the change.

Now the GSLB Site configuration is completed. We can see that “sddc01-gslb” works as the “Leader” site and “sddc02-gslb” works as the “Active” site.

Typically, the VIP configured in a local virtual service (configured as a GSLB pool member) is a private IP address. In our configuration, the VIPs are 192.168.x.x, which are not reachable by Internet clients. To handle this, we have to enable the NAT-aware Public-Private GSLB feature. Go to Infrastructure—>GSLB—>Active Members—>sddc01-gslb, then click the Edit icon. In the advanced settings, input the following parameters:

  • Type: Private
  • IP Address:
    • 10.0.0.0/8
    • 172.16.0.0/12
    • 192.168.0.0/16
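
The effect of this setting is that the GSLB DNS answers with the private VIP for clients inside the RFC 1918 ranges and with the public (NAT) IP for everyone else. A sketch using Python's `ipaddress` module (my illustration; the addresses are the ones from our setup):

```python
import ipaddress

# RFC 1918 private ranges, matching the NAT-aware GSLB settings above.
PRIVATE_NETS = [ipaddress.ip_network(n)
                for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def gslb_dns_answer(client_ip: str, private_vip: str, public_ip: str) -> str:
    """Return the private VIP for internal clients, the NAT public IP otherwise."""
    client = ipaddress.ip_address(client_ip)
    if any(client in net for net in PRIVATE_NETS):
        return private_vip
    return public_ip

print(gslb_dns_answer("192.168.1.50", "192.168.96.100", "34.216.94.228"))
print(gslb_dns_answer("8.8.8.8", "192.168.96.100", "34.216.94.228"))
```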

Section 3: Create a GSLB Service

We are now ready to create a GSLB service for our application (www.sddc.vmconaws.link). To achieve an active-active GSLB service and distribute the load evenly across the 3 back-end servers (2 in SDDC01 and 1 in SDDC02), we developed the following GSLB service design:

  • The GSLB service includes 1 GSLB pool.
  • There is one GSLB pool member in each SDDC.
  • Groups Load Balancing Algorithm: Priority-based
  • Pool Members Load Balancing Algorithm: Round Robin

Go to Application—>GSLB Services—>Create, click the “Advanced Setup” button. In the “New GSLB Service” input the following parameters:

  • Name: gslb-vs01
  • Application Name: www
  • Subdomain: .sddc.vmconaws.link
  • Health Monitor: System-GSLB-HTTPS
  • Group Load Balancing Algorithm: Priority-based
  • Health Monitor Scope: All Members
  • Controller Health Status: Enabled

Then click “Add Pool” and input the following parameters:

  • Name: gslb-vs01-pool
  • Pool Members Load Balancing Algorithm: Round Robin (Note this means the client will be sent to the local load balancer in both SDDC01 and SDDC02).
  • Pool Member (SDDC01):
    • Site Cluster Controller: sddc01-gslb
    • Virtual Service: sddc01-vs01
    • Public IP: 34.216.94.228
    • Ratio: 2 (The virtual service will receive 2/3 load.)
  • Pool Member (SDDC02):
    • Site Cluster Controller: sddc02-gslb
    • Virtual Service: sddc02-vs01
    • Public IP: 52.26.167.214
    • Ratio: 1 (The virtual service will receive 1/3 load.)
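
With round-robin and ratios of 2 and 1, the DNS answers rotate so that SDDC01 receives two of every three resolutions. A sketch of that weighted rotation (illustrative only, not Avi's scheduler):

```python
import itertools

def weighted_round_robin(members: list):
    """Yield member names in rotation, each repeated per its configured ratio."""
    schedule = [m["name"] for m in members for _ in range(m["ratio"])]
    return itertools.cycle(schedule)

members = [{"name": "sddc01-vs01", "ratio": 2},
           {"name": "sddc02-vs01", "ratio": 1}]
rr = weighted_round_robin(members)
answers = [next(rr) for _ in range(6)]   # two full cycles of DNS answers
print(answers)
```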

We will change the following parameters as well for this GSLB service.

Now we have completed the setup of active-active GSLB for our web service.

Let’s verify our work.

  • The GSLB DNS service will respond to DNS queries for www.sddc.vmconaws.link with the public IP of the SDDC01 web virtual service or the public IP of the SDDC02 web virtual service via the round-robin algorithm.
  • The web servers in both SDDCs serve the clients simultaneously.

This is the end of this blog. Thank you very much for reading!

Build Load Balancing Service in VMC on AWS with Avi Load Balancer – Part2

This blog is Part 2 of this series. If you have not gone through Part 1, please check it out first.

In Part 2, we will demo how to set up a local load balancing virtual service for a web-based application on our deployed Avi load balancer. The IP Address allocation and network connectivity are shown below.

Hundreds of features are available when setting up a local load balancing service on the Avi load balancer. In this blog, we will focus on the most widely used features in enterprise load balancing solutions:

  • TLS/SSL Termination
  • Session Persistence
  • Health Monitor

Section 1: TLS/SSL Termination

The following deployment architectures are supported by Avi Load balancer (LB) for SSL:

  • None: SSL traffic is handled as pass-through (layer 4), flowing through Avi LB without terminating the encrypted traffic.
  • Client-side: Traffic from the client to Avi LB is encrypted, with unencrypted HTTP to the back-end servers.
  • Server-side: Traffic from the client to Avi LB is unencrypted HTTP, with encrypted HTTPS to the back-end servers.
  • Both: Traffic from the client to Avi LB is encrypted and terminated at Avi LB, which then re-encrypts traffic to the back-end server.
  • Intercept: Terminate client SSL traffic, send it unencrypted over the wire for taps to intercept, then encrypt to the destination server.
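
One way to keep the five modes straight is by which legs carry TLS and whether the LB terminates the client session. A small summary table in code (my condensed reading of the list above, not an Avi data structure):

```python
# mode: (client-to-LB protocol, LB-to-server protocol, LB terminates client TLS?)
SSL_MODES = {
    "none":        ("https", "https", False),  # layer-4 pass-through
    "client-side": ("https", "http",  True),
    "server-side": ("http",  "https", False),
    "both":        ("https", "https", True),   # terminate, then re-encrypt
    "intercept":   ("https", "https", True),   # decrypted copy exposed to taps
}

def lb_needs_server_certificate(mode: str) -> bool:
    """The LB needs a certificate only when it terminates the client TLS session."""
    return SSL_MODES[mode][2]

print(lb_needs_server_certificate("client-side"))  # our chosen mode: True
```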

We will use Client-side deployment architecture here.

Step 1: Get or Generate a certificate

Please note that a CA-signed certificate is highly recommended for any production system. We will use a self-signed certificate here for simplicity. Go to Templates—>Security—>SSL/TLS Certificates, where all installed certificates are listed. A self-signed certificate is shown; its subject name is www.sddc.vmconaws.link.

Step 2: Create a customized SSL/TLS profile

The system default SSL/TLS profile still includes support for TLS 1.0, which is no longer considered secure. So we will go to Templates—>Security—>SSL/TLS Profile to create a new SSL/TLS profile which excludes TLS 1.0 as below:

Section 2: Session Persistence

Cookie persistence is the most common persistence mechanism for web-based applications. Here we will define a persistence profile for our testing web application. Go to Templates—>Profiles—>Persistence and click the “Create” button, then input the parameters as below:

  • Name: sddc01-vs01-pp01
  • Type: HTTP Cookie
  • HTTP Cookie Name: vmconaws-demo
  • Persistence Timeout: 30 mins

Please note that the cookie payload contains the back-end server IP address and port, which is encrypted with AES-256. 
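
The persistence flow is: on the first response the LB sets a cookie identifying the chosen server; on later requests it decodes the cookie and sends the client straight back to that server. A dependency-free sketch (Avi encrypts the payload with AES-256; base64 below is only a stand-in so the example runs without a crypto library):

```python
import base64

def make_persistence_cookie(server_ip: str, port: int) -> str:
    # Avi encrypts this payload with AES-256; base64 is only a stand-in here.
    return base64.urlsafe_b64encode(f"{server_ip}:{port}".encode()).decode()

def server_from_cookie(cookie: str) -> tuple:
    """Recover the pinned back-end server from the persistence cookie."""
    ip, port = base64.urlsafe_b64decode(cookie).decode().rsplit(":", 1)
    return ip, int(port)

cookie = make_persistence_cookie("192.168.96.25", 80)
print(server_from_cookie(cookie))
```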

Section 3: Health Monitor

The Avi load balancer uses health monitors to check whether the back-end servers in the load balancing pool are healthy enough to provide the required service. There are two kinds of health monitors:

  • Active Health Monitor: Active health monitors send proactive queries to servers, synthetically mimicking a client. Send and receive timeout intervals may be defined, which statically determine the server response as successful or failed.
  • Passive Health Monitor: While active health monitors provide a binary good/bad analysis of server health, passive health monitors provide a more subtle check by attempting to understand and react to the client-to-server interaction. For example, if a server is quickly responding with valid responses (such as HTTP 200), then all is well; however, if the server is sending back errors (such as TCP resets or HTTP 5xx errors), the server is assumed to have errors. 
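
The passive monitor's reaction can be modeled as a simple consecutive-failure counter (a sketch of the idea, not Avi's exact algorithm or thresholds):

```python
class PassiveHealthMonitor:
    """Mark a server down after consecutive bad responses; recover on success."""
    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0

    def observe(self, status: int) -> bool:
        """Feed one response's status code; return True while the server is up.
        A TCP reset can be modeled here as status 0."""
        if 200 <= status < 500:
            self.failures = 0            # valid response: reset the counter
        else:
            self.failures += 1           # 5xx (or reset): count a failure
        return self.failures < self.max_failures

mon = PassiveHealthMonitor(max_failures=3)
for code in (200, 500, 500, 500):
    up = mon.observe(code)
print(up)                                # down after three consecutive 5xx
```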

Only active health monitors may be edited. The passive monitor has no settings.

Note: Best practice is to enable both a passive and an active health monitor for each pool.

Let’s start to create an active health monitor for our application. Go to Templates—>Profiles—>Health Monitors and click “Create” button, then input the parameters as below:

  • Name: sddc01-vs01-hm01
  • Server Response Data: sddc01
  • Server Response Code: 2xx
  • Health Monitor Port: 80 (Please note that we keep the default setting here, but this option can be very useful for some cluster-based applications.)

Section 4: Create a Load Balancing Pool

Now it is time to create the load balancing pool. Go to Application—>Pools and click “Create Pool”. In the Step 1 window, input the parameters as below:

  • Load Balance: Least Connections
  • Persistence: sddc01-vs01-pp01

Add an active health monitor: sddc01-vs01-hm01.

Add two member servers:

  • centos01: 192.168.96.25
  • centos02: 192.168.96.26

Section 5: Create a Virtual Service

We will use the “Advanced Setup” to create a virtual service for our web application.

In “Step 1: Setting” window, input the parameters as below:

We use the system pre-defined application profile “System-HTTP” as the applied Application Profile for simplicity here. The “System-HTTP” profile includes comprehensive configuration options for a web application, which would require a separate blog to cover fully. Let’s list a few here:

  • X-Forwarded-For: Avi SE will insert an X-Forwarded-For (XFF) header into the HTTP request headers when the request is passed to the server. This feature is enabled.
  • Preserve Client IP Address: Avi SE will use the client-IP rather than SNAT IP for load-balanced connections from the SE to back-end application servers. This feature is disabled.
  • HTTP-to-HTTPS Redirect: Client requests received via HTTP will be redirected to HTTPS. This feature is disabled.
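
XFF insertion itself is simple: the SE appends the client IP to any existing X-Forwarded-For chain before passing the request to the server. A sketch (illustrative, not SE code):

```python
def insert_xff(headers: dict, client_ip: str) -> dict:
    """Append the client IP to X-Forwarded-For, creating the header if absent."""
    out = dict(headers)
    prior = out.get("X-Forwarded-For")
    out["X-Forwarded-For"] = f"{prior}, {client_ip}" if prior else client_ip
    return out

print(insert_xff({"Host": "www.sddc.vmconaws.link"}, "203.0.113.9"))
```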

Leave all settings as default for Step 2 and 3.

In “Step 4: Advanced”, input the parameters as below:

  • Use VIP as SNAT: enabled
  • SE Group: Default-Group

Section 6: VMC Setup

To enable users’ access to our testing web application, two changes are required in the VMC SDDC.

  • Network Address Translation
  • A CGW firewall rule to allow traffic from the Internet to the LB VIP (192.168.96.100) on HTTPS


So far, we have completed all load balancing configurations. Let’s verify our work.

Application web page (https://www.sddc.vmconaws.link):

Session Persistence Cookie:

This is the end of this blog. Thank you very much for reading!

Build Load Balancing Service in VMC on AWS with Avi Load Balancer – Part1

When we design a highly available (HA) infrastructure for a mission-critical application, local load balancing and global load balancing are always the essential components of the solution. This series of blogs will demonstrate how to build an enterprise-level local load balancing and global load balancing service in VMC on AWS SDDC with Avi Networks load balancer.

This series of blogs will cover the following topics:

  1. How to deploy Avi load balancer in a VMC SDDC;
  2. How to set up local load balancing service to achieve HA within a VMC SDDC (https://davidwzhang.com/2019/09/21/build-load-balancing-service-in-vmc-on-aws-with-avi-load-balancer-part2/)
  3. How to set up global load balancing service to achieve HA across different SDDCs which are in different AWS Availability Zones (https://davidwzhang.com/2019/09/30/build-load-balancing-service-in-vmc-on-aws-with-avi-load-balancer-part3/)
  4. How to set up global load balancing site affinity (https://davidwzhang.com/2019/10/08/build-load-balancing-service-in-vmc-on-aws-with-avi-load-balancer-part4/)
  5. How to automate Avi LB with Ansible (https://davidwzhang.com/2019/10/14/automate-avi-lb-service-with-ansible/)

By the end of this series, we will complete an HA infrastructure build as the following diagram: this design leverages local load balancing service and global load balancing service to provide 99.99%+ SLA to a web-based mission-critical application.

The Avi load balancer platform is built on software-defined architectural principles which separate the data plane and control plane. The product components include:

  • Avi Controller (control plane): The Avi Controller stores and manages all policies related to services and management. HA of the Avi Controller requires 3 separate Controller instances, configured as a 3-node cluster.
  • Avi Service Engines (data plane): Each Avi Service Engine runs on its own virtual machine. The Avi SEs provide the application delivery services to end-user traffic, and also collect real-time end-to-end metrics for traffic between end-users and applications.

In Part 1, we will cover the deployment of Avi load balancer. The diagram below shows the controller and service engine (SE) network connectivity and IP address allocation.

Depending on the level of vCenter access provided, the Avi load balancer supports 3 modes of deployment. In VMC on AWS, only the “no-access” mode is supported. Please refer to https://avinetworks.com/docs/ for more information about Avi load balancer deployment modes in VMware Cloud.

Section 1: Controller Cluster

Let’s start to deploy the Avi controllers and set up the controller cluster. First, download the ova package for the controller appliance. In this demo, the version of Avi load balancer controller is v18.2.5. After the download, deploy the controller virtual appliance via “Deploying OVF Template” wizard in VMC SDDC vCenter. In the “Customize template” window, input parameters as below:

  • Management interface IP: 192.168.80.4
  • Management interface Subnet mask: 255.255.255.0
  • Default gateway: 192.168.80.1
  • Sysadmin login authentication key: Password

After this 1st controller appliance is deployed and powered on, we are ready to start the initial controller configuration. Go to the controller management GUI at https://192.168.80.4.

(1) Username/Password

(2) DNS and NTP

(3) SMTP

(4) Multiple-Tenants? Select No here for simplification.

The initial configuration for the 1st controller is completed. As the first controller of the cluster, it takes the “Leader” role; the second and third controllers will work as “Followers”. Once logged in to the GUI of this first controller, go to Administration—>Controller, as shown below.

Similarly, deploy and perform the initial configuration for the 2nd controller (192.168.80.5) and the 3rd controller (192.168.80.6).

In the management GUI of the 1st controller, go to Administration—>Controller and click “Edit”. In “Edit Controller Configuration” window, add the second node and third node into the cluster as below.

After a few minutes, the cluster is set up successfully.

Section 2: Service Engine

Now we are ready to deploy the SE virtual appliances. In this demo, two SEs will be deployed. These 2 SEs are added into the default Service Engine Group with the default HA mode (N+M).

Step 1: Create and download the SE image.

Go to Infrastructure—>Clouds, click the download icon and select the ova format. Please note that this SE ova package is only for the linked controller cluster; it cannot be used for another controller cluster.

Step 2: Get the cluster UUID and authentication token for SE deployment.

Step 3: In the SDDC vCenter, run the “Deploy OVF Template” wizard to import the SE ova package. In the “Customize template” window, input the parameters:

  • IP Address of the Avi Controller: 192.168.80.3 (cluster IP of the controller)
  • Authentication token for Avi Controller: as obtained in Step 2
  • Controller Cluster UUID for Avi Controller: as obtained in Step 2
  • Management Interface IP Address: 192.168.80.10
  • Management Interface Subnet Mask: 255.255.255.0
  • Default Gateway: 192.168.80.1
  • DNS Information: 10.1.1.151
  • Sysadmin login authentication key: Password

Please note that the second vNIC will be used as the SE data interface.

Then continue to deploy the second SE (mgmt IP: 192.168.80.11/24).

The deployed SEs will register themselves with the controller cluster as below.

Step 4: Now the SEs have established the control and management plane communication with the controller cluster. It is time to set up the SE’s data network.

During the setup, I found that the vNICs of the virtual appliance VM and the SE Ethernet interfaces are not properly mapped. For example, the data interface is the 2nd vNIC of the SE VM in vCenter, but it is shown as Ethernet 5 in the SE network setup. To get the correct mapping, the MAC address of the data vNIC can be leveraged. Go to the SDDC vCenter and get the MAC address of the SE data interface.

In the controller management GUI, go to Infrastructure—>Service Engine and edit the selected SE. In the interface list, select the interface with the same MAC address, then provide the IP address and subnet mask.
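
The MAC-based matching described above amounts to joining the two interface lists on MAC address. A sketch with hypothetical field names (the MAC values are examples, not from our lab):

```python
def map_vnics_to_se_interfaces(vcenter_nics: list, se_interfaces: list) -> dict:
    """Join vCenter vNICs to SE interfaces by MAC, since the names don't line up.
    Field names ('label', 'mac', 'name') are hypothetical for this sketch."""
    by_mac = {i["mac"].lower(): i["name"] for i in se_interfaces}
    return {nic["label"]: by_mac.get(nic["mac"].lower()) for nic in vcenter_nics}

mapping = map_vnics_to_se_interfaces(
    [{"label": "Network adapter 2", "mac": "00:50:56:AA:BB:CC"}],
    [{"name": "Ethernet 5", "mac": "00:50:56:aa:bb:cc"}])
print(mapping)
```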

The final step is to add a gateway for this data interface. Go to Infrastructure—>Routing—>Static Route and create a new static default route.

Tip: A VM-VM anti-affinity policy is highly recommended to enhance the HA of the controller and service engine virtual appliances.

This is the end of the blog. Thank you very much for reading!

How to lock down your Softlayer Vyatta

In Softlayer, the Vyatta Network Gateway is offered to provide routing, firewall and VPN gateway functions. As a network security device, the Vyatta gateway itself must be properly protected.

Softlayer suggests:

“This Vyatta gateway is administered directly by the customer. The customer has the ability to login directly to the device and make extensive configurations for servicing their network traffic. The customer is responsible for maintaining proper backups of the device’s configuration files.”

So as a customer of Softlayer, it is YOUR responsibility to secure the Vyatta gateway.

Here I will try to give you a few tips to lock down your Vyatta gateway:

  1. Disable insecure and unused services running on the Vyatta gateway. We are lucky: only SSH and HTTPS are enabled by default with the Softlayer Vyatta build.
  2. The Softlayer Vyatta build allows you to SSH to the Vyatta gateway through the Internet by default. You have two ways to make this more secure:
     • Set the SSH service to listen only on the Vyatta private network:

       set service ssh listen-address private-ip

     • Apply firewall rules on the Vyatta gateway public interface to allow only trusted networks to access your Vyatta gateway. Note the firewall rules should be applied as “local”.
  3. Apply the principle of least privilege by using role-based access control (RBAC). Vyatta defines 3 roles (operator, administrator and superuser) by default.
  4. Integrate with your central AAA server, if you have one, for access control. TACACS+ and RADIUS are supported by the Vyatta gateway.
  5. Configure SNMP and syslog to monitor the operation of the Vyatta gateway.
  6. By default, the Softlayer Vyatta gateway is configured to sync its clock with the Softlayer NTP server. You can change this to sync with your own NTP server if you like. Don’t forget to change the time zone to reflect your local time!
  7. Control device access in the customer portal so that only your network administrators have access to the Vyatta gateway. Make sure the Vyatta username and password are visible only to them.
  8. Follow your password management policy and change your password regularly.

Citrix Netscaler L2 and L3 mode

Citrix NetScaler as an L2 Device

A NetScaler functioning as an L2 device is said to operate in L2 mode. In L2 mode, the NetScaler forwards packets between network interfaces when all of the following conditions are met:

  • The packets are destined to another device’s media access control (MAC) address.
  • The destination MAC address is on a different network interface.
  • The network interface is a member of the same virtual LAN (VLAN).

By default, all network interfaces are members of a pre-defined VLAN, VLAN 1. Address Resolution Protocol (ARP) requests and responses are forwarded to all network interfaces that are members of the same VLAN. To avoid bridging loops, L2 mode must be disabled if another L2 device is working in parallel with the NetScaler.
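
The three L2-mode forwarding conditions combine with a logical AND; a compact restatement in code (my illustration of the rule, not NetScaler internals):

```python
def l2_forwards(dst_mac: str, own_mac: str,
                in_iface: str, out_iface: str,
                in_vlan: int, out_vlan: int) -> bool:
    """All three L2-mode conditions must hold for the frame to be forwarded."""
    return (dst_mac.lower() != own_mac.lower()  # destined to another device's MAC
            and out_iface != in_iface           # reachable via a different interface
            and in_vlan == out_vlan)            # both interfaces in the same VLAN

print(l2_forwards("aa:aa:aa:aa:aa:01", "bb:bb:bb:bb:bb:02", "0/1", "0/2", 1, 1))
```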

Citrix NetScaler as a Packet Forwarding Device

A NetScaler can function as a packet forwarding device, and this mode of operation is called L3 mode. With L3 mode enabled, the NetScaler forwards any received unicast packets that are destined for an IP address that it does not have internally configured, if there is a route to the destination. A NetScaler can also route packets between VLANs.

In both modes of operation, L2 and L3, a NetScaler generally drops packets that are:

  • Multicast frames
  • Unknown protocol frames destined for a NetScaler’s MAC address
  • Spanning Tree protocol frames (unless BridgeBPDUs is ON)

Citrix Netscaler CloudBridge L3 mode lab

The following lab is going to run through the steps to build a working L3 NetScaler Cloud Bridge tunnel. The lab is built on VMware workstation 9.2.

The solution shown in this lab can be applied to the Softlayer cloud offering for secure connectivity from your own environment to a Softlayer DC.

 

Lab environment component

NetScaler VPX Platinum Evaluation version 10.1-119.7. (You can download this edition from Citrix.com)

Vyatta Router (Note: No routing at Vyatta for 192.168.108.0/24 or 192.168.175.0/24)

Lab Topology

CloudBridge L3

IP addressing

Please see the above diagram for the IP addressing

Lab Steps

Step 0. Perform the initial configuration of the Netscaler, including NSIP, SNIP and gateway, as in the above topology

Step 1. Log in to the Netscaler management GUI and verify that the Netscaler works in L3 mode under System-Settings-Configure Modes on both Netscalers

Netscaler l3mode

Step 2. Enable the CloudBridge feature under System-Settings-Configure Advanced Features on both Netscalers

CloudBridge feature

Step 3. Under System-CloudBridge Connector, click Getting Started to open the CloudBridge configuration wizard at Netscaler@DC-A

CloudBridge Wizard

Step 4. In the wizard, select Netscaler icon

CloudBridge Wizard1

Step 5. Type in the remote Netscaler@DC-B NSIP and username/password

CloudBridge Wizard2

Step 6. Configure the CloudBridge Connector

CloudBridge Wizard3

After clicking the Continue button, the wizard will complete the configuration for you on both Netscalers.

Step 7. Configure the bridge SNIP. Netscaler@DC-A: 172.16.31.1/30; Netscaler@DC-B: 172.16.31.2/30

Netscaler BridgeSNIP

Netscaler BridgeSNIP1

Step 8. Add routing from the local DC to the remote DC for the networks in the peering DC

On Netscaler@DC-A

Netsacler Routing1

On Netscaler@DC-B

Netsacler Routing2

Step 9. Verify the CloudBridge Tunnel works well

In the GUI, you can see the tunnel status is up as below:

Netscaler IP Tunnels

Personally, I prefer to perform the status check via the CLI.

Netscaler@DC-A

> ping 172.16.31.2

PING 172.16.31.2 (172.16.31.2): 56 data bytes

64 bytes from 172.16.31.2: icmp_seq=0 ttl=255 time=18.184 ms

64 bytes from 172.16.31.2: icmp_seq=1 ttl=255 time=2.586 ms

64 bytes from 172.16.31.2: icmp_seq=2 ttl=255 time=3.075 ms

64 bytes from 172.16.31.2: icmp_seq=3 ttl=255 time=2.590 ms

^C

--- 172.16.31.2 ping statistics ---

4 packets transmitted, 4 packets received, 0% packet loss

 

round-trip min/avg/max/stddev = 2.586/6.609/18.184/6.686 ms

> show arp

IP               MAC                Iface VLAN  TD     Origin     TTL

—               —                —– —-  —     ——     —

1)      127.0.0.1        00:0c:29:93:a6:c7  LO/1  1     0      PERMANENT  N/A

2)      172.16.31.2      00:0c:29:17:ea:7f  TUN1  1     0      DYNAMIC    1196

3)      192.168.107.20   00:0c:29:93:a6:c7  LO/1  1     0      PERMANENT  N/A

4)      192.168.107.21   00:0c:29:93:a6:c7  LO/1  1     0      PERMANENT  N/A

5)      192.168.107.10   00:0c:29:86:7a:18  0/1   1     0      DYNAMIC    1189

6)      192.168.107.100  00:0c:29:1a:15:a2  0/1   1     0      DYNAMIC    1184

Done

> show ip

Ipaddress        TD    Type             Mode     Arp      Icmp     Vserver  State

———        —    —-             —-     —      —-     ——-  ——

1)      192.168.107.20   0     NetScaler IP     Active   Enabled  Enabled  NA       Enabled

2)      192.168.107.21   0     SNIP             Active   Enabled  Enabled  NA       Enabled

3)      172.16.31.1      0     SNIP             Active   Enabled  Enabled  NA       Enabled

> show iptunnel

1) Domain…….:               0

Name………:  cbbridge1 (TUN1)

Remote…….:  192.168.174.21   Mask……: 255.255.255.255

Local……..:  192.168.107.21   Encap…..:  192.168.107.21

Protocol…..:             GRE   Type……:               C

IPSec Profile Name…….:       cbbridge1

IPSec Tunnel Status……:              UP

 

Done

> show route

Network          Netmask          Gateway/OwnedIP  State   TD     Type

——-          ——-          —————  —–   —     —-

1)      0.0.0.0          0.0.0.0          192.168.107.10   UP      0     STATIC

2)      127.0.0.0        255.0.0.0        127.0.0.1        UP      0     PERMANENT

3)      172.16.31.0      255.255.255.252  172.16.31.1      UP      0     DIRECT

4)      192.168.107.0    255.255.255.0    192.168.107.20   UP      0     DIRECT

Done

> stat ipsec counters

 

Secure tunnel(s) summary

Rate (/s)                Total

Bytes Received                                     0                  176

Bytes Sent                                         0                  352

Packets Received                                   0                    2

Packets Sent                                       0                    4

Done

NetScaler@DC-B

```
> ping 172.16.31.1
PING 172.16.31.1 (172.16.31.1): 56 data bytes
64 bytes from 172.16.31.1: icmp_seq=0 ttl=255 time=0.485 ms
64 bytes from 172.16.31.1: icmp_seq=1 ttl=255 time=0.559 ms
^C
--- 172.16.31.1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.485/0.522/0.559/0.037 ms
Done

> show arp
        IP               MAC                Iface VLAN  TD     Origin     TTL
        --               ---                ----- ----  --     ------     ---
1)      127.0.0.1        00:0c:29:17:ea:7f  LO/1  1     0      PERMANENT  N/A
2)      172.16.31.1      00:0c:29:93:a6:c7  TUN1  1     0      DYNAMIC    1065
3)      192.168.174.10   00:0c:29:86:7a:22  0/1   1     0      DYNAMIC    1190
4)      192.168.174.20   00:0c:29:17:ea:7f  LO/1  1     0      PERMANENT  N/A
5)      192.168.174.21   00:0c:29:17:ea:7f  LO/1  1     0      PERMANENT  N/A
Done

> show ip
        Ipaddress        TD    Type             Mode     Arp      Icmp     Vserver  State
        ---------        --    ----             ----     ---      ----     -------  ------
1)      192.168.174.20   0     NetScaler IP     Active   Enabled  Enabled  NA       Enabled
2)      192.168.174.21   0     SNIP             Active   Enabled  Enabled  NA       Enabled
3)      172.16.31.2      0     SNIP             Active   Enabled  Enabled  NA       Enabled
Done

> show iptunnel
1)      Domain.......:               0
        Name.........:  cbbridge1 (TUN1)
        Remote.......:  192.168.107.21   Mask......: 255.255.255.255
        Local........:  192.168.174.21   Encap.....:  192.168.174.21
        Protocol.....:             GRE   Type......:               C
        IPSec Profile Name.......:       cbbridge1
        IPSec Tunnel Status......:              UP
Done

> show route
        Network          Netmask          Gateway/OwnedIP  State   TD     Type
        -------          -------          ---------------  -----   --     ----
1)      0.0.0.0          0.0.0.0          192.168.174.10   UP      0     STATIC
2)      127.0.0.0        255.0.0.0        127.0.0.1        UP      0     PERMANENT
3)      172.16.31.0      255.255.255.252  172.16.31.2      UP      0     DIRECT
4)      192.168.174.0    255.255.255.0    192.168.174.20   UP      0     DIRECT

> stat ipsec counters
Secure tunnel(s) summary
                                           Rate (/s)                Total
Bytes Received                                     0                  304
Bytes Sent                                         0                  204
Packets Received                                   0                    4
Packets Sent                                       0                    2
Done
```

Ping test from the DC-A data network to the DC-B data network:

```
> ping -S 192.168.108.20 192.168.175.20
PING 192.168.175.20 (192.168.175.20) from 192.168.108.20: 56 data bytes
64 bytes from 192.168.175.20: icmp_seq=0 ttl=255 time=9.419 ms
64 bytes from 192.168.175.20: icmp_seq=1 ttl=255 time=2.559 ms
64 bytes from 192.168.175.20: icmp_seq=2 ttl=255 time=3.598 ms
64 bytes from 192.168.175.20: icmp_seq=3 ttl=255 time=2.561 ms
64 bytes from 192.168.175.20: icmp_seq=4 ttl=255 time=2.592 ms
64 bytes from 192.168.175.20: icmp_seq=5 ttl=255 time=3.107 ms
```