Setting Up L2VPN in VMC on AWS

In VMC on AWS SDDC, you can extend your on-premise network to VMC SDDC via HCX or L2VPN.

In this blog, I will show you how to set up L2VPN in VMC on AWS to extend network VLAN 100 to SDDC.

This blog applies to a VMC SDDC running version 1.9, which is backed by NSX-T 2.5. The SDDC end will work as the L2VPN server and your on-premise NSX autonomous edge will work as the L2VPN client.


  • UDP 500/4500 and ESP (IP protocol 50) are allowed from the on-premise L2VPN client to the VMC SDDC L2VPN server

Let’s start the setup from the VMC SDDC end.

Section 1: Set up L2VPN at VMC SDDC End

Step 1: Log in to your VMC Console, go to Networking & Security—>Network—>VPN—>Layer 2 and click “ADD VPN TUNNEL”.

Select Public IP from the Local IP Address drop-down and input the public IP of the L2VPN's remote end. As the on-premise NSX Edge is behind a NAT device, the remote private IP is also required. In my case, the remote private IP is

Step 2: Create an extended network.

Go to Network—>Segment and add a new segment as below.

  • Segment Name: l2vpn;
  • Connectivity: Extended;
  • VPN Tunnel ID: 100 (please note that the tunnel ID needs to match the on-prem tunnel ID)

After the network segment is created, you will see the below in layer 2 VPN.

Now we can begin to download the AUTONOMOUS EDGE from the highlighted hyperlink above.

While the file is downloading, we can download the peer-code which will be used for authentication between on-premise L2VPN client and SDDC L2VPN server.

The downloaded config is similar to below:


Section 2: Deploy and Setup On-premise NSX autonomous edge

Step 1: Prepare Port Groups.

Create 4 port-groups for NSX autonomous Edge.

  • pg-uplink (no vlan tagging)
  • pg-mgmt
  • pg-trunk01 (trunk)
  • pg-ha

We need to change the trunk port-group pg-trunk01 security setting to accept promiscuous mode and forged transmits. This is required for L2VPN.

Step 2: Deploy NSX Autonomous Edge

We follow the standard process to deploy an OVF template from your vCenter. In “Select Network” of the “Deploy OVF Template” wizard, map the right port-group to different networks. Please note Network 0 is always the management network port for the NSX autonomous edge. To make it simpler, I only deployed a single edge here.

The table below shows the interface/network/adapter mapping relationship in different systems/UI under my setup.

Edge CLI | Edge VM vNIC | OVF Template | Edge GUI | Purpose
eth0 | Network Adapter 1 | Network 0 | Management | Management
fp-eth0 | Network Adapter 2 | Network 1 | eth1 | Uplink
fp-eth1 | Network Adapter 3 | Network 2 | eth2 | Trunk
fp-eth2 | Network Adapter 4 | Network 3 | eth3 | HA

In the “Customize template” section, provide the password for the root, admin and auditor.

Input the hostname (l2vpnclient), management IP (, gateway ( and network mask (

Input DNS and NTP setting:

Provide the input for external port:

  • Port: 0,eth1,,24.
    • VLAN 0 means no VLAN tagging for this port.
    • eth1 means that the external port will be attached to eth1 which is network 1/pg-uplink port group.
    • IP address:
    • Prefix length: 24
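For clarity, the comma-separated value above follows the pattern VLAN,exit-interface,IP-address,prefix-length. A small Python sketch of how such a value breaks down (the helper and class names are my own; the IP field is left empty because it is omitted in the example above):

```python
from dataclasses import dataclass

@dataclass
class ExternalPort:
    vlan: int            # 0 means no VLAN tagging
    exit_interface: str  # e.g. eth1 -> Network 1 / pg-uplink
    ip_address: str      # left empty here, as in the blog
    prefix_length: int

def parse_port_value(value: str) -> ExternalPort:
    """Split a 'VLAN,exit-interface,IP,prefix' string into its fields."""
    vlan, iface, ip, prefix = value.split(",")
    return ExternalPort(int(vlan), iface, ip, int(prefix))

port = parse_port_value("0,eth1,,24")  # IP omitted, as in the example above
```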

There is no need to set up the internal port for the autonomous edge deployment, so I left it blank.

Step 3: Autonomous Edge Setup

After the edge is deployed and powered on, you can log in to the edge UI via

Go to L2VPN and add a L2VPN session, input the Local IP (, Remote IP (SDDC public IP) and Peer Code which I got from the downloaded config in section 1.

Go to Port and add port:

  • Port Name: vlan100
  • Subnet: leave as blank
  • VLAN: 100
  • Exit Interface: eth2 (Note: eth2 is connected to the port-group pg-trunk01).

Then go back to L2VPN and attach the newly created port vlan100 to the L2VPN session as below. Please note that the Tunnel ID is 100, the same tunnel ID as on the SDDC end.

After the port is attached successfully, we will see something similar to below.

This is the end of this blog. Thank you very much for reading!

Build Load Balancing Service in VMC on AWS with Avi Load Balancer – Part4

This blog is Part 4 of this series. If you have not gone through Part 1, Part 2 and Part 3, please go and check them out now.

In Part3, we set up an active-active global load balancing service for our testing application (

Some applications require stickiness between a client and a server. That is to say, all requests in a long-lived transaction from a client must be sent to the same server; otherwise, the application session may be broken. Unfortunately, in an active-active GSLB setup, we cannot guarantee that a client session is always sent to the same back-end server in the following use case: a client which was initially served by a back-end server in SDDC01 may be redirected to SDDC02 when the new DNS query is resolved to the SDDC02 VIP after the DNS TTL is expired.

Avi Networks GSLB site cookie persistence is designed to handle the above use case. When the traffic is received by the Avi LB in SDDC02, the Avi LB checks the cookie within the request and finds out that the session was initially connected to SDDC01. So the SDDC02 Avi LB will work as a proxy and forward the client's traffic to the SDDC01 Avi LB. Please note that the source IP of the client traffic will be changed by the SDDC02 Avi LB to the local load balancing virtual IP (in our case, the source IP will be before the traffic is forwarded to SDDC01. This source NAT is required, as it ensures that the return traffic from the back-end server uses the same path as the incoming traffic.
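The forwarding decision described above can be sketched in a few lines of Python (a conceptual simplification of Avi's behavior, not its implementation; the function name is mine, while the cookie name matches the one we configure below):

```python
def handle_request(local_site: str, cookies: dict) -> str:
    """Decide whether to serve a request locally or proxy it to the
    site that originally served the client (GSLB site persistence)."""
    origin_site = cookies.get("site-affinity-persistence")
    if origin_site is None or origin_site == local_site:
        # First request, or the client already belongs to this site.
        return f"serve locally at {local_site}"
    # The client was first served elsewhere: source-NAT the traffic to the
    # local VIP and forward it to the origin site's load balancer.
    return f"proxy from {local_site} to {origin_site}"

# A client that first landed on SDDC01 is later resolved to the SDDC02 VIP:
print(handle_request("SDDC02", {"site-affinity-persistence": "SDDC01"}))
```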

Step 1: Add a Federated PKI Profile

Go to Templates—>Security—>PKI Profile and click Create button to create a PKI profile. Input the parameters as below:

  • Name: gslb-pki-server
  • Enable CRL Check: No
  • Is Federated: Yes. That is to say that the PKI profile will be replicated across the federation: SDDC01 and SDDC02.
  • Certificate Authority: Add the self-signed certificate which we created in Part 2 as the CA. This will ensure that the Avi load balancer will trust the self-signed certificate presented by the peering SDDC when it works as a proxy for the client.

Step 2: Add a Federated Persistence Profile

Go to Templates—>Profiles—>Persistence and click Create button to create a GSLB persistence profile. Input the parameters as below:

  • Name: gslb-persistence01
  • Type: GSLB Site
  • Application Cookie Name: site-affinity-persistence
  • Is Federated: Yes. That is to say that the persistence profile will be replicated across the federation: SDDC01 and SDDC02.
  • Persistence Timeout: 30 mins

Step 3: Add a Federated Health Monitor

Go to Templates—>Profiles—>Health Monitors and click Create button to create a GSLB health monitor. Input the parameters as below:

  • Name: gslb-hm-https01
  • Type: HTTPS
  • Is Federated: Yes. That is to say that the health monitor will be replicated across the federation: SDDC01 and SDDC02.
  • Health Monitor Port: 443
  • Response Code: 2xx, 3xx

Step 4: Change the GSLB service

Go to Applications—>GSLB Service and edit the existing GSLB service gslb-vs01.

  • Health Monitor: gslb-hm-https01
  • Site Persistence: Enabled
  • Site Cookie Application Persistence Profile: gslb-persistence01

After we have completed the above configuration, a new pool called SP-gslb-vs01-sddc01-vs01 is added into the local load balancing virtual service: sddc-vs01 on SDDC01 Avi LB.

When we check the member information of the new pool, the virtual IP of local virtual service in the peering SDDC (SDDC02) is shown as the only pool member. Please note that this pool is created by Avi LB automatically so the settings cannot be changed by any users.

Similarly, on SDDC02 Avi LB, a new pool called SP-gslb-vs01-sddc02-vs01 is created and added to the local load balancing virtual service: sddc02-vs01.

Let’s verify our work.

First, the GSLB DNS resolves the DNS name ( of our testing application to the VIP in SDDC01. When we input the URL into the browser, we are served by centos02 in SDDC01 as expected.

Now change the DNS resolution to point to SDDC02 VIP. Go to our testing application again and we are still served by the same back-end server centos02 in SDDC01.

The cross-site traffic between the Avi LBs can be verified via a packet capture on the SDDC02 Avi LB. From the packet capture, we can see that the HTTPS session destined for SDDC02 is now forwarded from SDDC02 to SDDC01 by the SDDC02 Avi LB.

As the GSLB site cookie is based on an HTTP cookie, there are a few restrictions:

  • Site persistence applies only to Avi VIPs.
  • Site persistence across multiple virtual services within the same Controller cluster is not supported.
  • For site persistence to be turned on for a global application, all of its individual members must run on active sites.
  • Site persistence applies only to HTTP or HTTPS traffic when the Avi LB terminates the TLS/SSL session.

This is the end of this blog. Thank you very much for reading!

Build Load Balancing Service in VMC on AWS with Avi Load Balancer – Part3

This blog is Part 3 of this series. If you have not gone through Part 1 and Part 2, please go and check them out now.

In Part 1 and Part 2, we deployed the Avi Load Balancers and completed the local load balancing setup in VMC SDDC01. To achieve high availability across different SDDCs, global load balancing is required. In this blog, let's set up an active-active global load balancing service for our testing web application so that the web servers in both SDDCs can serve clients simultaneously.

Section 1: Infrastructure

Task 1: Follow Part 1 and Part 2 to deploy the Avi load balancer and set up local load balancing in VMC SDDC02, as shown in the diagram above.

  • Avi Controller Cluster
    • Cluster IP:
    • Controller Node1:
    • Controller Node2:
    • Controller Node3:
  • SE Engine
    • SE01:
    • SE02:
  • LB Virtual Service:
    • VIP: with back-end member server Centos03 (
  • NAT:<->

Task 2: Connectivity between VMC SDDC and TGW

Please refer to my friend Avnish Tripathi’s blog ( to connect VMC SDDC01 and VMC SDDC02 to AWS Transit Gateway with Route-based VPN.

Task 3: Set up NAT for DNS Service virtual IP in VMC Console

SDDC01: static NAT<-> Here, the private address is the DNS virtual service VIP in SDDC01.

SDDC02: static NAT<-> Here, the private address is the DNS virtual service VIP in SDDC02.

Task 4: Add Firewall rules for GSLB

  • Allow inter-SDDC traffic as the below in VMC console.
  • Allow DNS traffic from the Internet to GSLB DNS service virtual IP

Task 5: DNS Sub-domain Delegation

In the DNS server, delegate the sub-domain ( to the public IPs corresponding to the two DNS virtual service VIPs; in other words, the two to-be-defined DNS virtual services will work as the name servers of the sub-domain.

Section 2: Enable GSLB

Task 1: Create a DNS virtual service

In the SDDC01 Avi Controller GUI, go to Application—>Virtual Services—>Create Virtual Service then input the parameters as below:

  • Name: sddc01-g-dns01
  • IPv4 VIP:
  • Application Profile: System-DNS
  • TCP/UDP Profile: System-UDP-Per-Pkt
  • Service Port: 53 (UDP)
  • Service Port: 53, with the TCP/UDP profile overridden to System-TCP-Proxy (for DNS over TCP)
  • Pool: leave blank

Leave the rest of the settings as default.

In SDDC02, create a similar DNS virtual service (sddc02-g-dns01) with VIP

Task 2: GSLB Site

Avi uses GSLB sites to define different data centers. GSLB sites fall into two broad categories: Avi sites and external sites. This blog focuses on Avi sites. Each Avi site is characterized as either an active or a passive site. Active sites are further classified into two types, GSLB leader and followers. The active site from which the initial GSLB site configuration is performed is the designated GSLB leader. GSLB configuration changes are permitted only by logging into the leader, which propagates those changes to all accessible followers. The only way to switch the leadership role to a follower is by overriding the configuration of the leader from a follower site. This override can be invoked in the case of site failures or for maintenance.

In our setup, SDDC01 will work as “Leader” site and SDDC02 will work as “Follower” site.

In SDDC01 Avi Controller GUI, go to Infrastructure—>GSLB and click the Edit icon to enable GSLB Service.

In the “New GSLB Configuration” window, input the parameters as below:

  • Name: sddc01-gslb
  • Username: admin
  • Password: Password for Avi Controller
  • GSLB Subdomain:

Then click “Save and Set DNS Virtual Service”.

Select the newly defined DNS service in task1 as “DNS Virtual Services” and configure “” as the subdomain, then save the change.

Now the GSLB setup is as below.

Click “Add New Site” button to add SDDC02 as the second GSLB site. Then Input the parameters below in the “New GSLB Site” window:

  • Name: sddc02-gslb
  • Username: admin
  • Password: Password for Avi Controller
  • IP Address: (SDDC02 Avi Cluster VIP)
  • Port: 443

Click “Save and Set DNS Virtual Services” button. Then select the newly defined DNS service in task1 as “DNS Virtual Services” and configure “” as the subdomain, then save the change.

Now the GSLB Site configuration is completed. We can see that “sddc01-gslb” works as the “Leader” site and “sddc02-gslb” works as the “Active” site.

Typically, the VIP configured in a local virtual service (configured as a GSLB pool member) is a private IP address. In our configurations, the VIPs are 192.168.x.x. But these IP addresses are not reachable by the Internet client. To handle this, we have to enable NAT-aware Public-Private GSLB feature. Go to Infrastructure—>GSLB—>Active Members—>sddc01-gslb then click the Edit icon. In the advanced settings, input the following parameters:

  • Type: Private
  • IP Address:

Section 3: Create a GSLB Service

We are ready to create a GSLB service for our application ( now. To achieve an active-active GSLB service and distribute the load evenly across 3 back-end servers (2 in SDDC01 and 1 in SDDC02), we developed the following GSLB service design:

  • The GSLB service includes 1 GSLB pool.
  • There is one GSLB pool member in each SDDC.
  • Groups Load Balancing Algorithm: Priority-based
  • Pool Members Load Balancing Algorithm: Round Robin

Go to Application—>GSLB Services—>Create, click the “Advanced Setup” button. In the “New GSLB Service” input the following parameters:

  • Name: gslb-vs01
  • Application Name: www
  • Subdomain:
  • Health Monitor: System-GSLB-HTTPS
  • Group Load Balancing Algorithm: Priority-based
  • Health Monitor Scope: All Members
  • Controller Health Status: Enabled

Then click “Add Pool” and input the following parameters:

  • Name: gslb-vs01-pool
  • Pool Members Load Balancing Algorithm: Round Robin (Note this means the client will be sent to the local load balancer in both SDDC01 and SDDC02).
  • Pool Member (SDDC01):
    • Site Cluster Controller: sddc01-gslb
    • Virtual Service: sddc01-vs01
    • Public IP:
    • Ratio: 2 (The virtual service will receive 2/3 load.)
  • Pool Member (SDDC02):
    • Site Cluster Controller: sddc02-gslb
    • Virtual Service: sddc02-vs01
    • Public IP:
    • Ratio: 1 (The virtual service will receive 1/3 load.)
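With ratio 2 on the SDDC01 member and ratio 1 on the SDDC02 member, roughly two out of every three DNS answers point to SDDC01, whose local pool holds two of the three back-end servers. A toy Python simulation of ratio-weighted round robin (not Avi's actual scheduler):

```python
from itertools import cycle

def build_rotation(members: dict) -> list:
    """Expand {member: ratio} into a flat round-robin rotation."""
    rotation = []
    for member, ratio in members.items():
        rotation.extend([member] * ratio)
    return rotation

members = {"sddc01-vs01": 2, "sddc02-vs01": 1}
# Answer 9 consecutive DNS queries from the weighted rotation.
answers = [m for m, _ in zip(cycle(build_rotation(members)), range(9))]

# 9 queries -> 6 answers for SDDC01 and 3 for SDDC02 (a 2/3 : 1/3 split).
print(answers.count("sddc01-vs01"), answers.count("sddc02-vs01"))
```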

We will change the following parameters as well for this GSLB service.

Now we have completed the setup of active-active GSLB for our web service.

Let’s verify our work.

  • The GSLB DNS service will respond to DNS queries for the DNS name with either the public IP of the SDDC01 web virtual service or the public IP of the SDDC02 web virtual service, via the round-robin algorithm.
  • The web servers in both SDDCs serve the clients simultaneously.

This is the end of this blog. Thank you very much for reading!

Build Load Balancing Service in VMC on AWS with Avi Load Balancer – Part2

This blog is Part 2 of this series. If you have not gone through Part 1, please go and check it out now.

In Part 2, we will demo how to set up a local load balancing virtual service for a web-based application on our deployed Avi load balancer. The IP Address allocation and network connectivity are shown below.

Hundreds of features are available when setting up a local load balancing service in the Avi load balancer. In this blog, we will focus on the most widely used features in an enterprise load balancing solution:

  • TLS/SSL Termination
  • Session Persistence
  • Health Monitor

Section 1: TLS/SSL Termination

The following deployment architectures are supported by Avi Load balancer (LB) for SSL:

  • None: SSL traffic is handled as pass-through (layer 4), flowing through Avi LB without terminating the encrypted traffic.
  • Client-side: Traffic from the client to Avi LB is encrypted, with unencrypted HTTP to the back-end servers.
  • Server-side: Traffic from the client to Avi LB is unencrypted HTTP, with encrypted HTTPS to the back-end servers.
  • Both: Traffic from the client to Avi LB is encrypted and terminated at Avi LB, which then re-encrypts traffic to the back-end server.
  • Intercept: Terminate client SSL traffic, send it unencrypted over the wire for taps to intercept, then encrypt to the destination server.

We will use Client-side deployment architecture here.

Step 1: Get or Generate a certificate

Please note that a CA-signed certificate is highly recommended for any production system. We will use a self-signed certificate here for simplicity. Go to Templates—>Security—>SSL/TLS Certificates, where all installed certificates are listed. A self-signed certificate is shown; its subject name is

Step 2: Create a customized SSL/TLS profile

The system default SSL/TLS profile still includes support for TLS 1.0, which is no longer considered a secure protocol. So we will go to Templates—>Security—>SSL/TLS Profile to create a new SSL/TLS profile which excludes TLS 1.0, as below:
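For illustration only, the same policy expressed with Python's standard ssl module rather than Avi configuration: a server-side TLS context that refuses TLS 1.0 and 1.1 handshakes.

```python
import ssl

# Build a server-side TLS context that rejects TLS 1.0/1.1 clients,
# mirroring the intent of the custom Avi SSL/TLS profile above.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Any client offering at most TLS 1.1 now fails the handshake.
```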

Section 2: Session Persistence

Cookie persistence is the most common persistence mechanism for a web-based application. Here we will define a persistence profile for our testing web application. Go to Templates—>Profiles—>Persistence and click the “Create” button, then input the parameters as below:

  • Name: sddc011-vs01-pp01
  • Type: HTTP Cookie
  • HTTP Cookie Name: vmconaws-demo
  • Persistence Timeout: 30mins

Please note that the cookie payload contains the back-end server IP address and port, which is encrypted with AES-256. 

Section 3: Health Monitor

Avi load balancer uses the health monitor to check if the back-end servers in the load balancing pool are healthy to provide the required service or not. There are two kinds of health monitors:

  • Active Health Monitor: Active health monitors send proactive queries to servers, synthetically mimicking a client. Send and receive timeout intervals may be defined, which statically determine the server response as successful or failed.
  • Passive Health Monitor: While active health monitors provide a binary good/bad analysis of server health, passive health monitors provide a more subtle check by attempting to understand and react to the client-to-server interaction. For example, if a server is quickly responding with valid responses (such as HTTP 200), then all is well; however, if the server is sending back errors (such as TCP resets or HTTP 5xx errors), the server is assumed to have errors. 

Only active health monitors may be edited. The passive monitor has no settings.

Note: Best practice is to enable both a passive and an active health monitor to each pool.

Let’s start to create an active health monitor for our application. Go to Templates—>Profiles—>Health Monitors and click “Create” button, then input the parameters as below:

  • Name: sddc01-vs01-hm01
  • Server Response Data: sddc01
  • Server Response Code: 2xx
  • Health Monitor Port: 80 (Please note that we don’t change the default setting here. But this option can be very useful for some cluster-based application)

Section 4: Create a Load Balancing Pool

Now it is time to create the load balancing pool. Go to Application—>Pools and click “Create Pool”. In the Step 1 window, input the parameters as below:

  • Load Balance: Least Connections
  • Persistence: sddc-vs01-pp01

Add an active health monitor: sddc01-vs01-hm01.

Add two member servers:

  • centos01:
  • centos02:
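The Least Connections algorithm simply picks the pool member with the fewest active connections at the moment a new connection arrives; a minimal sketch with hypothetical connection counts:

```python
def least_connections(pool: dict) -> str:
    """Pick the pool member with the fewest active connections."""
    return min(pool, key=pool.get)

# Hypothetical in-flight connection counts for the two members above.
active = {"centos01": 12, "centos02": 7}
print(least_connections(active))  # centos02
```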

Section 5: Create a Virtual Service

We will use the “Advanced Setup” to create a virtual service for our web application.

In “Step 1: Setting” window, input the parameters as below:

We use the system pre-defined application profile “System-HTTP” as the applied Application Profile for simplicity here. The “System-HTTP” profile includes comprehensive configuration options for a web application, which would probably require a separate blog to cover. Let's list a few here:

  • X-Forwarded-For: Avi SE will insert an X-Forwarded-For (XFF) header into the HTTP request headers when the request is passed to the server. This feature is enabled.
  • Preserve Client IP Address: Avi SE will use the client-IP rather than SNAT IP for load-balanced connections from the SE to back-end application servers. This feature is disabled.
  • HTTP-to-HTTPS Redirect: Client requests received via HTTP will be redirected to HTTPS. This feature is disabled.

Leave all settings as default for Step 2 and 3.

In “Step 4: Advanced”, input the parameters as below:

  • Use VIP as SNAT: enabled
  • SE Group: Default-Group

Section 6: VMC Setup

To enable users' access to our testing web application, two changes are required in the VMC SDDC:

  • Network Address Translation
  • A CGW firewall rule to allow traffic from the Internet to the LB VIP ( on HTTPS


So far, we have completed all load balancing configurations. Let’s go to verify our work.

Application web page (

Session Persistence Cookie:

This is the end of this blog. Thank you very much for reading!

Build Load Balancing Service in VMC on AWS with Avi Load Balancer – Part1

When we design a highly available (HA) infrastructure for a mission-critical application, local load balancing and global load balancing are always the essential components of the solution. This series of blogs will demonstrate how to build an enterprise-level local load balancing and global load balancing service in VMC on AWS SDDC with Avi Networks load balancer.

This series of blogs will cover the following topics:

  1. How to deploy Avi load balancer in a VMC SDDC;
  2. How to set up local load balancing service to achieve HA within a VMC SDDC (
  3. How to set up global load balancing service to achieve HA across different SDDCs which are in different AWS Availability Zones (
  4. How to set up global load balancing site affinity (
  5. How to automate Avi LB with Ansible (

By the end of this series, we will complete an HA infrastructure build as shown in the following diagram: this design leverages local load balancing and global load balancing services to provide a 99.99%+ SLA to a web-based mission-critical application.

The Avi load balancer platform is built on software-defined architectural principles which separate the data plane and control plane. The product components include:

  • Avi Controller (control plane): stores and manages all policies related to services and management. HA of the Avi Controller requires 3 separate Controller instances, configured as a 3-node cluster.
  • Avi Service Engines (data plane): each Avi Service Engine runs on its own virtual machine. The Avi SEs provide the application delivery services to end-user traffic, and also collect real-time end-to-end metrics for traffic between end-users and applications.
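The reason controller HA calls for 3 nodes rather than 2 is quorum: the cluster needs a strict majority of nodes up, so a 3-node cluster survives the loss of one node. The arithmetic:

```python
def quorum(cluster_size: int) -> int:
    """Smallest strict majority for a cluster of the given size."""
    return cluster_size // 2 + 1

def tolerated_failures(cluster_size: int) -> int:
    """How many nodes may fail while a majority survives."""
    return cluster_size - quorum(cluster_size)

# A 3-node Avi Controller cluster keeps quorum with one node down;
# a 2-node cluster tolerates no failures at all.
print(quorum(3), tolerated_failures(3))  # 2 1
```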

In Part 1, we will cover the deployment of Avi load balancer. The diagram below shows the controller and service engine (SE) network connectivity and IP address allocation.

Depending on the level of vCenter access provided, the Avi load balancer supports 3 modes of deployment. In VMC on AWS, only the “no-access” mode is supported. Please refer to for more information about Avi load balancer deployment modes in VMware Cloud.

Section 1: Controller Cluster

Let’s start to deploy the Avi controllers and set up the controller cluster. First, download the ova package for the controller appliance. In this demo, the version of Avi load balancer controller is v18.2.5. After the download, deploy the controller virtual appliance via “Deploying OVF Template” wizard in VMC SDDC vCenter. In the “Customize template” window, input parameters as below:

  • Management interface IP:
  • Management interface Subnet mask:
  • Default gateway:
  • Sysadmin login authentication key: Password

After this 1st controller appliance is deployed and powered on, we are ready to start the controller's initial configuration. Go to the controller management GUI

(1) Username/Password

(2) DNS and NTP

(3) SMTP

(4) Multiple-Tenants? Select No here for simplification.

The initial configuration of the 1st controller is now complete. As the first controller of the cluster, it receives the “Leader” role; the second and third controllers will work as “Followers”. Once logged in to the GUI of this first controller, go to Administration—>Controller, as shown below.

Similarly, go to deploy and perform the initial configuration for the 2nd ( and 3rd controller (

In the management GUI of the 1st controller, go to Administration—>Controller and click “Edit”. In “Edit Controller Configuration” window, add the second node and third node into the cluster as below.

After a few minutes, the cluster is set up successfully.

Section 2: Service Engine

Now we are ready to deploy the SE virtual appliances. In this demo, two SEs will be deployed. These 2 SEs are added into the default Service Engine Group with the default HA mode (N+M).

Step 1: Create and download the SE image.

Go to Infrastructure—>Clouds, click the download icon and select the ova format. Please note that this SE ova package is only for the linked controller cluster. It cannot be used with another controller cluster.

Step 2: Get the cluster UUID and authentication token for SE deployment.

Step 3: In SDDC vCenter, run the “Deploy OVF Template” wizard to import SE ova package. In the “Customize template” window, the input parameters:

  • IP Address of the Avi Controller: (cluster IP of the controller)
  • Authentication token for Avi Controller: as Step2
  • Controller Cluster UUID for Avi Controller: as Step 2
  • Management Interface IP Address:
  • Management Interface Subnet Mask:
  • Default Gateway:
  • DNS Information:
  • Sysadmin login authentication key: Password

Please note that the second vNIC will be used as the SE data interface.

Then continue to deploy the second SE (mgmt IP:

The deployed SEs will register themselves with the controller cluster as below.

Step 4: Now the SEs have established the control and management plane communication with the controller cluster. It is time to set up the SE’s data network.

During the setup, I found that the vNICs of the virtual appliance VM and the SE Ethernet interfaces are not mapped one-to-one: for example, the data interface is the 2nd vNIC of the SE VM in vCenter, but it is shown as Ethernet 5 in the SE network setup. To get the correct mapping, the MAC address of the data vNIC can be leveraged. Go to the SDDC vCenter and note the MAC address of the SE data interface.
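The matching step can be expressed as a small lookup (the interface names and MAC addresses below are made up for illustration):

```python
def find_interface_by_mac(interfaces: dict, target_mac: str) -> str:
    """Return the SE interface whose MAC matches the data vNIC
    seen in vCenter (comparison is case-insensitive)."""
    target = target_mac.lower()
    for name, mac in interfaces.items():
        if mac.lower() == target:
            return name
    raise LookupError(f"no SE interface has MAC {target_mac}")

# Hypothetical SE interface list vs. the data vNIC's MAC from vCenter.
se_interfaces = {
    "Ethernet 1": "00:50:56:aa:bb:01",
    "Ethernet 5": "00:50:56:aa:bb:02",
}
print(find_interface_by_mac(se_interfaces, "00:50:56:AA:BB:02"))  # Ethernet 5
```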

In the controller management GUI, go to Infrastructure—>Service Engine and edit the selected SE. In the interface list, select the interface with the matching MAC address, then provide the IP address and subnet mask.

The final step is to add a gateway for this data interface. Go to Infrastructure—>Routing—>Static Route and create a new static default route.

Tip: VM-VM anti-affinity policy is highly recommended to enhance the HA of the controller and service engine virtual appliances.

This is the end of the blog. Thank you very much for reading!

How to lock down your Softlayer Vyatta

In SoftLayer, the Vyatta Network Gateway is offered to provide routing, firewall and VPN gateway functions. As a network security device, the Vyatta gateway itself has to be properly protected.

SoftLayer states:

“This Vyatta gateway is administered directly by the customer.  The customer has the ability to login directly to the device and make extensive configurations for servicing their network traffic. The customer is responsible for maintaining proper backups of the device’s configuration files.”

So, as a customer of SoftLayer, it is YOUR responsibility to secure the Vyatta gateway.

Here I will try to give you a few tips to lock down your Vyatta gateway:

  1. Disable insecure and unused services running on the Vyatta gateway. We are lucky: only SSH and HTTPS are enabled by default in the SoftLayer Vyatta build.
  2. The SoftLayer Vyatta build allows you to SSH to the Vyatta gateway over the Internet by default. You have two ways to make it more secure:
     • Set the SSH service to listen only on the Vyatta private network:

set service ssh listen-address private-ip

     • Apply firewall rules on the Vyatta gateway public interface to allow only trusted networks to access your Vyatta gateway. Note that the firewall rules should be applied as “local”.
  3. Apply the principle of least privilege by using role-based access control (RBAC). Vyatta defines 3 roles (operator, administrator and superuser) by default.
  4. Integrate with your central AAA server for access control if you have one. TACACS+ and RADIUS are supported by the Vyatta gateway.
  5. Configure SNMP and syslog to monitor the operation of the Vyatta gateway.
  6. By default, the SoftLayer Vyatta gateway is configured to sync its clock with the SoftLayer NTP server. You can change it to sync with your own NTP server if you like. Don't forget to change the time zone to reflect your local time!
  7. Control device access in the customer portal so that only your network administrators have access to the Vyatta gateway. Make sure the Vyatta username and password are visible only to them.
  8. Follow your password management policy and change your password regularly.

Citrix Netscaler L2 and L3 mode

Citrix NetScaler as an L2 Device

A NetScaler functioning as an L2 device is said to operate in L2 mode. In L2 mode, the NetScaler forwards packets between network interfaces when all of the following conditions are met:

  • The packets are destined to another device's media access control (MAC) address.
  • The destination MAC address is on a different network interface.
  • The network interface is a member of the same virtual LAN (VLAN).

By default, all network interfaces are members of a pre-defined VLAN, VLAN 1. Address Resolution Protocol (ARP) requests and responses are forwarded to all network interfaces that are members of the same VLAN. To avoid bridging loops, L2 mode must be disabled if another L2 device is working in parallel with the NetScaler.
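The three forwarding conditions above can be condensed into a single predicate (a conceptual Python sketch, not NetScaler code; the MACs and interface names are made up):

```python
def l2_forward(frame_dst_mac: str, own_mac: str,
               ingress_if: str, dst_if: str,
               ingress_vlan: int, dst_vlan: int) -> bool:
    """True when an L2-mode NetScaler would bridge the frame:
    destined to another device's MAC, out a different interface,
    and within the same VLAN."""
    return (frame_dst_mac.lower() != own_mac.lower()
            and dst_if != ingress_if
            and dst_vlan == ingress_vlan)

# A frame for another host, learned on a different interface, same VLAN 1:
print(l2_forward("00:00:5e:00:53:10", "00:00:5e:00:53:01",
                 "1/1", "1/2", 1, 1))  # True
```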

Citrix NetScaler as a Packet Forwarding Device

A NetScaler can function as a packet forwarding device, and this mode of operation is called L3 mode. With L3 mode enabled, the NetScaler forwards any received unicast packets that are destined for an IP address that it does not have internally configured, if there is a route to the destination. A NetScaler can also route packets between VLANs.

In both modes of operation, L2 and L3, a NetScaler generally drops packets that are in:

  • Multicast frames
  • Unknown protocol frames destined for a NetScaler's MAC address
  • Spanning Tree protocol (unless BridgeBPDUs is ON)