Setting Up L2VPN in VMC on AWS

In a VMC on AWS SDDC, you can extend your on-premise network to the SDDC via HCX or L2VPN.

In this blog, I will show you how to set up L2VPN in VMC on AWS to extend VLAN 100 from on-premise to the SDDC.

This blog is based on a VMC SDDC running version 1.9, which is backed by NSX-T 2.5. The SDDC end works as the L2VPN server and your on-premise NSX autonomous edge works as the L2VPN client.

Prerequisite

  • UDP 500/4500 and ESP (IP protocol 50) are allowed from the on-premise L2VPN client to the VMC SDDC L2VPN server

Let’s start the setup from the VMC SDDC end.

Section 1: Set up L2VPN at VMC SDDC End

Step 1: Log in to your VMC Console, go to Networking & Security—>Network—>VPN—>Layer 2 and click “ADD VPN TUNNEL”.

Select Public IP from the Local IP Address drop-down and input the public IP of the L2VPN's remote end. As the on-premise NSX edge is behind a NAT device, the remote private IP is also required. In my case, the remote private IP is 10.1.1.240.

Step 2: Create an extended network.

Go to Network—>Segment and add a new segment as below.

  • Segment Name: l2vpn;
  • Connectivity: Extended;
  • VPN Tunnel ID: 100 (please note that the tunnel ID needs to match the on-prem tunnel ID)

After the network segment is created, you will see the below in layer 2 VPN.

Now we can begin to download the AUTONOMOUS EDGE from the highlighted hyperlink above.

While the file is downloading, we can download the peer code, which will be used for authentication between the on-premise L2VPN client and the SDDC L2VPN server.

The downloaded config is similar to below:

[{"transport_tunnel_path":"/infra/tier-0s/vmc/locale-services/default/ipsec-vpn-services/default/sessions/7998a0c0-52b7-11ea-b949-d95049696f90","peer_code":"MCxiNmY2NTg1LHsic2l0ZU5hbWUiOiJMMlZQTiIsInNyY1RhcElwIjoiMTY5LjI1NC4yMC4yIiwiZHxxxxxxxxxxxxxxxxxGgxNCIsImVuY3J5cHRBbmREaWdlc3QiOiJhZXMtZ2NtL3NoYS0yNTYiLCJwc2siOiJOb25lIiwidHVubmVscyI6W3sibG9jYWxJZCI6IjEwLjEuMS4yNDAiLCJwZWVySWQiOiI1Mi4zMy4xMjAuMTk4IiwibG9jYWxWdGlJcCI6IjE2OS4yNTQuMzEuMjU0LzMwIn1dfQ=="}]

Section 2: Deploy and Setup On-premise NSX autonomous edge

Step 1: Prepare Port Groups.

Create 4 port-groups for NSX autonomous Edge.

  • pg-uplink (no vlan tagging)
  • pg-mgmt
  • pg-trunk01 (trunk)
  • pg-ha

We need to change the security settings of the trunk port-group pg-trunk01 to accept promiscuous mode and forged transmits; this is required for L2VPN.

Step 2: Deploy NSX Autonomous Edge

Follow the standard process to deploy an OVF template from vCenter. In “Select Network” of the “Deploy OVF Template” wizard, map the right port-group to each network. Please note that Network 0 is always the management network port for the NSX autonomous edge. To keep it simple, I only deployed a single edge here.

The table below shows the interface/network/adapter mapping relationship in different systems/UI under my setup.

Edge CLI   Edge VM vNIC        OVF Template   Edge GUI     Purpose
eth0       Network Adapter 1   Network 0      Management   Management
fp-eth0    Network Adapter 2   Network 1      eth1         Uplink
fp-eth1    Network Adapter 3   Network 2      eth2         Trunk
fp-eth2    Network Adapter 4   Network 3      eth3         HA

In the “Customize template” section, provide the passwords for the root, admin and auditor accounts.

Input the hostname (l2vpnclient), management IP (10.1.1.241), gateway (10.1.1.1) and network mask (255.255.255.0).

Input DNS and NTP setting:

Provide the input for external port:

  • Port: 0,eth1,10.1.1.240,24.
    • VLAN 0 means no VLAN tagging for this port.
    • eth1 means that the external port will be attached to eth1 which is network 1/pg-uplink port group.
    • IP address: 10.1.1.240
    • Prefix length: 24

There is no need to set up an internal port for this autonomous edge deployment, so I left it blank.

Step 3: Autonomous Edge Setup

After the edge is deployed and powered on, you can log in to the edge UI via https://10.1.1.241.

Go to L2VPN and add an L2VPN session: input the Local IP (10.1.1.240), the Remote IP (the SDDC public IP) and the Peer Code obtained from the config downloaded in Section 1.

Go to Port and add port:

  • Port Name: vlan100
  • Subnet: leave as blank
  • VLAN: 100
  • Exit Interface: eth2 (Note: eth2 is connected to the port-group pg-trunk01).

Then go back to L2VPN and attach the newly created port VLAN100 to the L2VPN session as below. Please note that the Tunnel ID is 100, which is the same tunnel ID as the SDDC end.

After the port is attached successfully, we will see something similar to below.

This is the end of this blog. Thank you very much for reading!

Integrate VMware NSX-T with Kubernetes

Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications. K8s uses a network plugin to provide the required networking functions like routing, switching, firewalling and load balancing. VMware NSX-T provides a network plugin for K8s as well, called NCP (NSX Container Plug-in). If you want to know more about VMware NSX-T, please go to docs.vmware.com.

In this blog, I will show you how to integrate VMWare NSX-T with Kubernetes.

Here, we will build a three-node, single-master K8s cluster. All three nodes are RHEL 7.5 virtual machines.

  • master node:
    • Hostname: master.k8s
    • Mgmt IP: 10.1.73.233
  • worker node1:
    • Hostname: node1.k8s
    • Mgmt IP: 10.1.73.234
  • worker node2:
    • Hostname: node2.k8s
    • Mgmt IP: 10.1.73.235

Each node has two vNICs attached. The first vNIC, ens192, is for management; the second vNIC, ens224, is for K8s transport and is connected to an overlay logical switch.

  • NSX-T version: 2.3.0.0.0.10085405
  • NSX-T NCP version: 2.3.1.10693410
  • Docker version: 18.03.1-ce
  • K8s version: 1.11.4

1. Prepare K8s Cluster Setup

1.1 Get Offline Packages and Docker Images

As there is no Internet access in my environment, I have to prepare my K8s cluster offline. To do that, I need to get the following packages:

  • Docker offline installation packages
  • Kubeadm offline installation packages which will be used to set up the K8s cluster;
  • Docker offline images;

1.1.1 Docker Offline Installation Packages

Regarding how to get Docker offline installation packages, please refer to my other blog: Install Docker Offline on Centos7.

1.1.2 Kubeadm Offline Installation Packages

Getting the kubeadm offline installation packages is quite straightforward as well. You can use yum with the --downloadonly option.

yum install --downloadonly --downloaddir=/root/ kubelet-1.11.0
yum install --downloadonly --downloaddir=/root/ kubeadm-1.11.0
yum install --downloadonly --downloaddir=/root/ kubectl-1.11.0

1.1.3 Docker Offline Images

Below are the required Docker images for K8s cluster.

  • k8s.gcr.io/kube-proxy-amd64 v1.11.4
  • k8s.gcr.io/kube-apiserver-amd64 v1.11.4
  • k8s.gcr.io/kube-controller-manager-amd64 v1.11.4
  • k8s.gcr.io/kube-scheduler-amd64 v1.11.4
  • k8s.gcr.io/coredns 1.1.3
  • k8s.gcr.io/etcd-amd64 3.2.18
  • k8s.gcr.io/pause-amd64 3.1
  • k8s.gcr.io/pause 3.1

You may notice that the list above includes two identical pause images, although they have different repository names. There is a story behind this. Initially, I only loaded the first image, “k8s.gcr.io/pause-amd64”. The setup passed the “kubeadm init” pre-flight checks but failed at the actual cluster setup stage. When I checked the logs, I found that the cluster setup process kept requesting the second image. I guess it is a bug in kubeadm v1.11.0, which I am using.

Here is an example of using the “docker pull” CLI to download a Docker image, in case you are not familiar with it.

docker pull k8s.gcr.io/kube-proxy-amd64:v1.11.4

Once you have all the Docker images, export them as offline images via “docker save”.

docker save k8s.gcr.io/pause-amd64:3.1 -o /pause-amd64:3.1.docker
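If you would rather not type a docker save command per image, a small loop over the list above does the job (a sketch; the file names simply reuse the image name and tag, and are written to the current directory):

# Save every required image as an offline .docker file in the current directory
for img in \
  k8s.gcr.io/kube-proxy-amd64:v1.11.4 \
  k8s.gcr.io/kube-apiserver-amd64:v1.11.4 \
  k8s.gcr.io/kube-controller-manager-amd64:v1.11.4 \
  k8s.gcr.io/kube-scheduler-amd64:v1.11.4 \
  k8s.gcr.io/coredns:1.1.3 \
  k8s.gcr.io/etcd-amd64:3.2.18 \
  k8s.gcr.io/pause-amd64:3.1 \
  k8s.gcr.io/pause:3.1
do
  docker save "$img" -o "$(basename "$img").docker"
done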

Now it is time to upload all the installation packages and offline images to all three K8s nodes, including the master node.

1.2 Disable SELinux and Firewalld

# disable SELinux
setenforce 0
# Change SELINUX to permissive for /etc/selinux/config
vi /etc/selinux/config
SELINUX=permissive
# Stop and disable firewalld
systemctl disable firewalld && systemctl stop firewalld

1.3 Config DNS Resolution

# Update the /etc/hosts file as below on all three K8s nodes
10.1.73.233   master.k8s
10.1.73.234   node1.k8s
10.1.73.235   node2.k8s

1.4 Install Docker and Kubeadm

To install Docker and kubeadm, first put the required packages for each into its own directory; for example, all the required packages for kubeadm go into a directory called kubeadm. Then use rpm to install kubeadm as below:

[root@master kubeadm]# rpm -ivh --replacefiles --replacepkgs *.rpm
warning: 53edc739a0e51a4c17794de26b13ee5df939bd3161b37f503fe2af8980b41a89-cri-tools-1.12.0-0.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 3e1ba8d5: NOKEY
warning: socat-1.7.3.2-2.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Preparing...                          ########################## [100%]
Updating / installing...
   1:socat-1.7.3.2-2.el7              ########################## [ 17%]
   2:kubernetes-cni-0.6.0-0           ########################## [ 33%]
   3:kubelet-1.11.0-0                 ########################## [ 50%]
   4:kubectl-1.11.0-0                 ########################## [ 67%]
   5:cri-tools-1.12.0-0               ########################## [ 83%]
   6:kubeadm-1.11.0-0                 #########################3 [100%]

After Docker and kubeadm are installed, enable and start the docker and kubelet services:

systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet

In addition, you need to perform some OS level setup so that your K8s environment can run properly.

# ENABLING THE NET.BRIDGE.BRIDGE-NF-CALL-IPTABLES KERNEL OPTION
sysctl -w net.bridge.bridge-nf-call-iptables=1
echo "net.bridge.bridge-nf-call-iptables=1" > /etc/sysctl.d/k8s.conf
# Disable Swap
swapoff -a && sed -i '/ swap / s/^/#/' /etc/fstab

1.5 Load Docker Offline Images

Now let us load all the offline Docker images into the local Docker repo on every K8s node via the “docker load” CLI.

docker load -i kube-apiserver-amd64:v1.11.4.docker
docker load -i coredns:1.1.3.docker
docker load -i etcd-amd64:3.2.18.docker
docker load -i kube-controller-manager-amd64:v1.11.4.docker
docker load -i kube-proxy-amd64:v1.11.4.docker
docker load -i kube-scheduler-amd64:v1.11.4.docker
docker load -i pause-amd64:3.1.docker
docker load -i pause:3.1.docker

1.6 NSX NCP Plugin

Now you can upload your NSX NCP plugin to all 3 nodes and load the NCP images into local Docker repo.

1.6.1 Load NSX Container Image

docker load -i nsx-ncp-rhel-2.3.1.10693410.tar 

Now the docker image list on your K8s nodes will be similar to below:

[root@master ~]# docker image list
REPOSITORY                                   TAG                 IMAGE ID            CREATED             SIZE
registry.local/2.3.1.10693410/nsx-ncp-rhel   latest              97d54d5c80db        5 months ago        701MB
k8s.gcr.io/kube-proxy-amd64                  v1.11.4             5071d096cfcd        5 months ago        98.2MB
k8s.gcr.io/kube-apiserver-amd64              v1.11.4             de6de495c1f4        5 months ago        187MB
k8s.gcr.io/kube-controller-manager-amd64     v1.11.4             dc1d57df5ac0        5 months ago        155MB
k8s.gcr.io/kube-scheduler-amd64              v1.11.4             569cb58b9c03        5 months ago        56.8MB
k8s.gcr.io/coredns                           1.1.3               b3b94275d97c        11 months ago       45.6MB
k8s.gcr.io/etcd-amd64                        3.2.18              b8df3b177be2        12 months ago       219MB
k8s.gcr.io/pause-amd64                       3.1                 da86e6ba6ca1        16 months ago       742kB
k8s.gcr.io/pause                             3.1                 da86e6ba6ca1        16 months ago       742kB

1.6.2 Install NSX CNI

rpm -ivh --replacefiles nsx-cni-2.3.1.10693410-1.x86_64.rpm

Please note that the --replacefiles option is required due to a known bug with NSX-T 2.3. If you don't include it, you will see an error like below:

[root@master rhel_x86_64]# rpm -i nsx-cni-2.3.1.10693410-1.x86_64.rpm
   file /opt/cni/bin/loopback from install of nsx-cni-2.3.1.10693410-1.x86_64 conflicts with file from package kubernetes-cni-0.6.0-0.x86_64

1.6.3 Install and Config OVS

# Go to OpenvSwitch directory
rpm -ivh openvswitch-2.9.1.9968033.rhel75-1.x86_64.rpm
systemctl start openvswitch.service && systemctl enable openvswitch.service
ovs-vsctl add-br br-int
ovs-vsctl add-port br-int ens224 -- set Interface ens224 ofport_request=1
ip link set br-int up
ip link set ens224 up
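To confirm the integration bridge and the uplink port were created as expected, you can dump the OVS configuration with the standard Open vSwitch tooling:

# br-int should be listed with ens224 attached as a port
ovs-vsctl show
ovs-vsctl list-ports br-int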

2. Setup K8s Cluster

Now you are ready to set up your K8s cluster. I will use a kubeadm config file to define the cluster when I initiate the setup. Below is the content of my kubeadm config file.

apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: v1.11.4
api:
  advertiseAddress: 10.1.73.233
  bindPort: 6443

From the above, you can see that Kubernetes version v1.11.4 will be used and that the API server IP is 10.1.73.233, which is the master node IP. Run the following CLI from the K8s master node to create the K8s cluster.

kubeadm init --config kubeadm.yml

After the K8s cluster is set up, you can join the remaining two worker nodes to the cluster via the CLI below:

kubeadm join 10.1.73.233:6443 --token up1nz9.iatqv50bkrqf0rcj --discovery-token-ca-cert-hash sha256:3f9e96e70a59f1979429435caa35d12270d60a7ca9f0a8436dff455e4b8ac1da

Note: You can get the required token and discovery-token-ca-cert-hash from the output of “kubeadm init”.
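If you did not record them, you can generate a fresh join command from the master at any time; this should work with the kubeadm 1.11 build used here:

# Prints a ready-to-run "kubeadm join ..." command with a new token and the CA cert hash
kubeadm token create --print-join-command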

3. NSX-T and K8s Integration

3.1 Prepare NSX Resource

Before the integration, you have to make sure that you have the NSX-T resources configured in NSX Manager. The required resources include:

  • Overlay Transport Zone: overlay_tz
  • Tier 0 router: tier0_router
  • K8s Transport Logical Switch
  • IP Blocks for Kubernetes Pods: container_ip_blocks
  • IP Pool for SNAT: external_ip_pools
  • Firewall Marker Sections: top_firewall_section_marker and bottom_firewall_section_marker

Please refer to the NSX Container Plug-in for Kubernetes and Cloud Foundry – Installation and Administration Guide for details on how to create these NSX-T resources. The following are the UUIDs of the resources I created:

  • tier0_router = c86a625e-54e0-4510-9185-e9e1b7e26eb9
  • overlay_tz = f6d90300-c56e-4d26-8684-8eff64cdf5a0
  • container_ip_blocks = f9e411f5-654e-4f0d-99e8-2e5a9812f295
  • external_ip_pools = 84ffd635-640f-41c6-be85-71337e112e69
  • top_firewall_section_marker = ab07e559-79aa-4bc9-a6f0-126ea59278c2
  • bottom_firewall_section_marker = 35aaa6c5-0870-4ac4-bf47-114780863956

In addition, make sure that you tag the logical switch ports which the three K8s nodes are attached to in the following way:

{'ncp/node_name': '<node_name>'}
{'ncp/cluster': '<cluster_name>'}

node_name is the FQDN of the K8s node, and cluster_name is the name you give this cluster in NSX (not the K8s cluster name). Below are my K8s nodes' tags.

k8s master switching port tags
k8s node1 switching port tags

k8s node2 switching port tags
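After tagging, you can double-check a port via the NSX-T Manager API; a minimal curl sketch, where the manager address, admin password and logical-port UUID are placeholders for your own environment:

# Lists the scope/tag pairs on one logical port; expect ncp/node_name and ncp/cluster
curl -k -s -u admin:'<password>' \
  https://<nsx-manager>/api/v1/logical-ports/<logical-port-uuid> \
  | python -m json.tool | grep -E '"(scope|tag)"'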

3.2 Install NSX NCP Plugin

3.2.1 Create Name Space

kubectl create ns nsx-system

3.2.2 Create Service Account for NCP

kubectl apply -f rbac-ncp.yml -n nsx-system

3.2.3 Create NCP ReplicationController

kubectl apply -f ncp-rc.yml -n nsx-system

3.2.4 Create NCP nsx-node-agent and nsx-kube-proxy DaemonSet

kubectl create -f nsx-node-agent-ds.yml -n nsx-system 

You can find the above 3 YAML files on GitHub:
https://github.com/insidepacket/nsxt-k8s-integration-yaml

Now you have completed the NSX-T and K8s integration. If you check the pods running on your K8s cluster, you will see something similar to the below:

[root@master ~]# k get pods --all-namespaces 
NAMESPACE     NAME                                   READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-pg4dz               1/1       Running   0          9d
kube-system   coredns-78fcdf6894-q727q               1/1       Running   128        9d
kube-system   etcd-master.k8s                        1/1       Running   3          14d
kube-system   kube-apiserver-master.k8s              1/1       Running   2          14d
kube-system   kube-controller-manager-master.k8s     1/1       Running   3          14d
kube-system   kube-proxy-5p482                       1/1       Running   2          14d
kube-system   kube-proxy-9mnwk                       1/1       Running   0          12d
kube-system   kube-proxy-wj8qw                       1/1       Running   3          14d
kube-system   kube-scheduler-master.k8s              1/1       Running   3          14d
ns-test1000   http-echo-deployment-b5bbfbb86-j4dxq   1/1       Running   0          2d
nsx-system    nsx-ncp-rr989                          1/1       Running   0          11d
nsx-system    nsx-node-agent-kbsld                   2/2       Running   0          9d
nsx-system    nsx-node-agent-pwhlp                   2/2       Running   0          9d
nsx-system    nsx-node-agent-vnd7m                   2/2       Running   0          9d
nszhang       busybox-756b4db447-2b9kx               1/1       Running   0          5d
nszhang       busybox-deployment-5c74f6dd48-n7tp2    1/1       Running   0          9d
nszhang       http-echo-deployment-b5bbfbb86-xnjz6   1/1       Running   0          2d
nszhang       jenkins-deployment-8546d898cd-zdzs2    1/1       Running   0          11d
nszhang       whoami-deployment-85b65d8757-6m7kt     1/1       Running   0          6d
nszhang       whoami-deployment-85b65d8757-b4m99     1/1       Running   0          6d
nszhang       whoami-deployment-85b65d8757-pwwt9     1/1       Running   0          6d

In the NSX-T Manager GUI, you will see the following resources created for the K8s cluster.

Logical Switches for K8s
Tier1 Router for K8s
NSX LB for K8s

Tips:

I ran into a few issues during this journey. The following CLIs were used a lot when troubleshooting; I share them here in the hope they help you as well.

  • How to check kubelet service’s log
journalctl -xeu kubelet
  • How to check log for a specific pod
kubectl logs nsx-ncp-rr989 -n nsx-system

“nsx-ncp-rr989” is the name of pod and “nsx-system” is the namespace which we created for NCP.

  • How to check the log of a specific container when there is more than one container in the pod
kubectl logs nsx-node-agent-n7n7g -c nsx-node-agent -n nsx-system

“nsx-node-agent-n7n7g” is the pod name and “nsx-node-agent” is the container name.

  • Show details of a specific pod
kubectl describe pod nsx-ncp-rr989 -n nsx-system
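  • How to list recent events in the namespace (standard kubectl, nothing NCP-specific; handy when a pod is stuck in ContainerCreating)
kubectl get events -n nsx-system --sort-by='.lastTimestamp'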

Automate NSX-T Build with Terraform

Terraform is a widely adopted Infrastructure as Code tool that allows you to define your infrastructure using a simple, declarative language, and to deploy and manage it across public cloud providers such as AWS, Azure, Google Cloud and IBM Cloud, as well as other infrastructure providers like VMware NSX-T and F5 BIG-IP.

In this blog, I will show you how to leverage the Terraform NSX-T provider to define an NSX-T tenant environment in minutes.

To build the new NSX-T environment, I am going to:

  1. Create a new Tier1 router named tier1_router;
  2. Create three logical switches under newly created Tier1 router for web/app/db security zone;
  3. Connect the newly created Tier1 router to the existing Tier0 router;
  4. Create a new network service group including SSH and HTTPS;
  5. Create a new firewall section and add a firewall rule to allow outbound SSH/HTTPS traffic from any workload in the web logical switch to any workload in the app logical switch;

First, I define a Terraform module as below. Note: a Terraform module is normally used to define reusable components; for example, the module defined here can be reused to build both non-prod and prod environments when given different inputs.

/*
provider "nsxt" {
  allow_unverified_ssl = true
  max_retries = 10
  retry_min_delay = 500
  retry_max_delay = 5000
  retry_on_status_codes = [429]
}
*/

data "nsxt_transport_zone" "overlay_transport_zone" {
  display_name = "tz-overlay"
}

data "nsxt_logical_tier0_router" "tier0_router" {
  display_name = "t0"
}

data "nsxt_edge_cluster" "edge_cluster" {
  display_name = "edge-cluster"
}

resource "nsxt_logical_router_link_port_on_tier0" "tier0_port_to_tier1" {
  description = "TIER0_PORT1 provisioned by Terraform"
  display_name = "tier0_port_to_tier1"
  logical_router_id = "${data.nsxt_logical_tier0_router.tier0_router.id}"
  tag {
    scope = "ibm"
    tag   = "blue"
  }
}

resource "nsxt_logical_tier1_router" "tier1_router" {
  description = "RTR1 provisioned by Terraform"
  display_name = "${var.nsxt_logical_tier1_router_name}"
  #failover_mode = "PREEMPTIVE"
  edge_cluster_id = "${data.nsxt_edge_cluster.edge_cluster.id}"
  enable_router_advertisement = true
  advertise_connected_routes = false
  advertise_static_routes = true
  advertise_nat_routes = true
  tag {
    scope = "ibm"
    tag   = "blue"
  }
}

resource "nsxt_logical_router_link_port_on_tier1" "tier1_port_to_tier0" {
  description  = "TIER1_PORT1 provisioned by Terraform"
  display_name = "tier1_port_to_tier0"
  logical_router_id = "${nsxt_logical_tier1_router.tier1_router.id}"
  linked_logical_router_port_id = "${nsxt_logical_router_link_port_on_tier0.tier0_port_to_tier1.id}"
  tag {
    scope = "ibm"
    tag   = "blue"
  }
}

resource "nsxt_logical_switch" "LS-terraform-web" {
  admin_state = "UP"
  description = "LogicalSwitch provisioned by Terraform"
  display_name = "${var.logicalswitch1_name}"
  transport_zone_id = "${data.nsxt_transport_zone.overlay_transport_zone.id}"
  replication_mode  = "MTEP"
  tag {
    scope = "ibm"
    tag = "blue"
  }
}

resource "nsxt_logical_switch" "LS-terraform-app" {
  admin_state = "UP"
  description = "LogicalSwitch provisioned by Terraform"
  display_name = "${var.logicalswitch2_name}"
  transport_zone_id = "${data.nsxt_transport_zone.overlay_transport_zone.id}"
  replication_mode  = "MTEP"
  tag {
    scope = "ibm"
    tag = "blue"
  }
}


resource "nsxt_logical_switch" "LS-terraform-db" {
  admin_state = "UP"
  description = "LogicalSwitch provisioned by Terraform"
  display_name = "${var.logicalswitch3_name}"
  transport_zone_id = "${data.nsxt_transport_zone.overlay_transport_zone.id}"
  replication_mode  = "MTEP"
  tag {
    scope = "ibm"
    tag = "blue"
  }
}

resource "nsxt_logical_port" "lp-terraform-web" {
  admin_state = "UP"
  description = "lp provisioned by Terraform"
  display_name = "lp-terraform-web"
  logical_switch_id = "${nsxt_logical_switch.LS-terraform-web.id}"

  tag {
    scope = "ibm"
    tag   = "blue"
  }
}

resource "nsxt_logical_port" "lp-terraform-app" {
  admin_state = "UP"
  description = "lp provisioned by Terraform"
  display_name = "lp-terraform-app"
  logical_switch_id = "${nsxt_logical_switch.LS-terraform-app.id}"

  tag {
    scope = "ibm"
    tag   = "blue"
  }
}

resource "nsxt_logical_port" "lp-terraform-db" {
  admin_state = "UP"
  description = "lp provisioned by Terraform"
  display_name = "lp-terraform-db"
  logical_switch_id = "${nsxt_logical_switch.LS-terraform-db.id}"

  tag {
    scope = "ibm"
    tag   = "blue"
  }
}

resource "nsxt_logical_router_downlink_port" "lif-terraform-web" {
  description = "lif provisioned by Terraform"
  display_name = "lif-terraform-web"
  logical_router_id = "${nsxt_logical_tier1_router.tier1_router.id}"
  linked_logical_switch_port_id = "${nsxt_logical_port.lp-terraform-web.id}"
  ip_address = "${var.logicalswitch1_gw}"

  tag {
    scope = "ibm"
    tag   = "blue"
  }
}

resource "nsxt_logical_router_downlink_port" "lif-terraform-app" {
  description = "lif provisioned by Terraform"
  display_name = "lif-terraform-app"
  logical_router_id = "${nsxt_logical_tier1_router.tier1_router.id}"
  linked_logical_switch_port_id = "${nsxt_logical_port.lp-terraform-app.id}"
  ip_address = "${var.logicalswitch2_gw}"

  tag {
    scope = "ibm"
    tag   = "blue"
  }
}

resource "nsxt_logical_router_downlink_port" "lif-terraform-db" {
  description = "lif provisioned by Terraform"
  display_name = "lif-terraform-db"
  logical_router_id = "${nsxt_logical_tier1_router.tier1_router.id}"
  linked_logical_switch_port_id = "${nsxt_logical_port.lp-terraform-db.id}"
  ip_address = "${var.logicalswitch3_gw}"

  tag {
    scope = "ibm"
    tag   = "blue"
  }
}

resource "nsxt_l4_port_set_ns_service" "ns_service_tcp_443_22_l4" {
  description = "Service provisioned by Terraform"
  display_name = "web_to_app"
  protocol = "TCP"
  destination_ports = ["443", "22"]
  tag {
    scope = "ibm"
    tag   = "blue"
  }
}

resource "nsxt_firewall_section" "terraform" {
  description = "FS provisioned by Terraform"
  display_name = "Web-App"
  tag {
    scope = "ibm"
    tag = "blue"
  }
  
  applied_to {
    target_type = "LogicalSwitch"
    target_id = "${nsxt_logical_switch.LS-terraform-web.id}"
  }

  section_type = "LAYER3"
  stateful = true

  rule {
    display_name = "out_rule"
    description  = "Out going rule"
    action = "ALLOW"
    logged = true
    ip_protocol = "IPV4"
    direction = "OUT"

    source {
      target_type = "LogicalSwitch"
      target_id = "${nsxt_logical_switch.LS-terraform-web.id}"
    }

    destination {
      target_type = "LogicalSwitch"
      target_id = "${nsxt_logical_switch.LS-terraform-app.id}"
    }
    service {
      target_type = "NSService"
      target_id = "${nsxt_l4_port_set_ns_service.ns_service_tcp_443_22_l4.id}"
    }
    applied_to {
      target_type = "LogicalSwitch"
      target_id = "${nsxt_logical_switch.LS-terraform-web.id}"
    }
  }
}  

output "edge-cluster-id" {
  value = "${data.nsxt_edge_cluster.edge_cluster.id}"
}

output "edge-cluster-deployment_type" {
  value = "${data.nsxt_edge_cluster.edge_cluster.deployment_type}"
}

output "tier0-router-port-id" {
  value = "${nsxt_logical_router_link_port_on_tier0.tier0_port_to_tier1.id}"
}

Then I use the below to call this newly created module:

provider "nsxt" {
  allow_unverified_ssl = true
  max_retries = 10
  retry_min_delay = 500
  retry_max_delay = 5000
  retry_on_status_codes = [429]
}

module "nsxtbuild" {
  source = "/root/terraform/modules/nsxtbuild"
  nsxt_logical_tier1_router_name = "tier1-npr-vr"
  logicalswitch1_name = "npr-web"
  logicalswitch2_name = "npr-app"
  logicalswitch3_name = "npr-db"
  logicalswitch1_gw = "192.168.80.1/24"
  logicalswitch2_gw = "192.168.81.1/24"
  logicalswitch3_gw = "192.168.82.1/24"
}
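With the provider block and the module call in place, building the environment is just the standard Terraform workflow, run from the directory that holds the file above:

terraform init    # downloads the nsxt provider and loads the local module
terraform plan    # review the resources that will be created
terraform apply   # create the Tier-1 router, logical switches, service and DFW section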

After “terraform apply” completes, you can see in NSX Manager that the required environment has been built successfully.

Logical Switches
T1 vRouter
Service
DFW Rules

NSX-T Routing Path

In this blog, I will show you the routing path for different NSX-T Edge cluster deployment options.

  • The 1st is the simplest scenario: we have an Edge Cluster and there is no Tier-1 SR, so only a Tier-0 DR and a Tier-0 SR run in this NSX Edge Cluster. In the routing path diagram, I used the orange line to show the northbound path and the dark green line to show the southbound path.

Pattern1

  • In the 2nd scenario, the Tier-1 vRouter includes a Tier-1 DR and a Tier-1 SR. Both the Tier-1 SR and the Tier-0 SR run in the same NSX Edge Cluster. This design provides NAT and firewall functions at the Tier-1 level via the Tier-1 SR. In the routing path diagram, I used the orange line to show the northbound path and the dark green line to show the southbound path.

Pattern2

 

  • In the 3rd scenario, we have 2 Edge clusters:
    • NSX-T T1 Edge Cluster: dedicated to Tier-1 SRs, which run centralized services (e.g. NAT);
    • NSX-T T0 Edge Cluster: dedicated to Tier-0 SRs, which provide uplink connectivity to the physical infrastructure;

This option gives better scalability and creates isolated service domains for Tier-0 and Tier-1. Similarly, I used the orange line to show the northbound path and the dark green line to show the southbound path in the diagram below:

 

Pattern3

Setup NSX L2VPN on Standalone Edge

With NSX L2VPN, you can extend your VLANs/VXLANs across multiple data centers. You can achieve this even in a non-NSX environment by using a standalone edge. In this blog, I will show you how to set up NSX L2VPN between a standalone edge and an NSX edge.

Topology:

2018-06-27_162648

As shown above, we have one NSX edge as the L2VPN server and one standalone edge that resides in the remote DC, which is a non-NSX environment. Our target is to stretch two VXLAN-backed networks (172.16.136.0/24 and 172.16.137.0/24) to two VLAN-backed networks (VLAN 100 and VLAN 200) in the remote DC via L2VPN. In addition, we will use four virtual machines for L2VPN communication testing.

2 virtual machines in the NSX environment:

test1000: 172.16.136.100, gw 172.16.136.1, connected to VXLAN 10032;

test1002: 172.16.137.100, gw 172.16.137.1, connected to VXLAN 10033;

2 virtual machines in the non-NSX environment:

test1001: 172.16.136.101, gw 172.16.136.1, connected to a dVS port-group with access VLAN 100;

test1003: 172.16.137.101, gw 172.16.137.1, connected to a dVS port-group with access VLAN 200;

Step 1: Configure NSX Edge as L2VPN Server

  • Create two sub-interfaces (sub100: 172.16.136.1/24 and sub200: 172.16.137.1/24) backed by the two VXLANs under the trunk port

L2VPN Server03

Two VXLAN sub-interfaces; please note that the 1st sub-interface is mapped to vNic10 and the 2nd sub-interface is mapped to vNic11.

L2VPN Server04

Sub-interface sub100: tunnel Id 100/172.16.136.1 (VXLAN 10032)

L2VPN Server05

Sub-interface sub200 tunnel Id 200/172.16.137.1 (VXLAN 10033)

L2VPN Server06

  • L2VPN Server setting as below:
    • Listener IP: 172.16.133.1
    • Listener Port: 443
    • Encryption Algorithm: AES128-GCM-SHA256
    • Site Configuration:
      • name: remote
      • User Id/Password: admin/credential
      • Stretched Interfaces: sub100 and sub200

L2VPN Server01

L2VPN Server02

Step 2: Deploy and Setup L2VPN virtual appliance

Use the standard process for deploying a virtual appliance.

  • Start the deploy OVF template wizard

1.2

  • Select the standalone edge OVF file, which is downloaded from vmware.com

1.3

1.4

  • Accept extra configuration options

1.5

  • Select name and folder

1.6

1.7

  • Select storage

1.8

  • Set up Networks: here we use one dVS port-group for the standalone edge trunk interface. We will provide more details on the settings of this port-group later.

1.9

  • Customize template. We will configure the L2VPN client here as well.

The configuration includes multiple parts:

Part 1: standalone edge admin credentials:

1.10

Part 2: standalone edge network settings:

1.11

Part 3: L2VPN settings, which are required to exactly match the L2VPN server configuration from Step 1, including the cipher suite, the L2VPN server address/service port and the L2VPN username/password for authentication

1.12

Part 4: L2VPN sub-interfaces

1.13.1

Part 5: other settings, e.g. a proxy if your standalone edge needs one to establish connectivity to the L2VPN server.

1.14

  • Accept all settings and submit the standalone edge deployment.

1.14.1

Once the standalone edge is deployed and powered on, you should be able to see that the L2VPN tunnel is up on either the NSX edge L2VPN server or the standalone edge via the CLI "show service l2vpn".

On NSX edge L2VPN server:

L2VPN up

On standalone edge:

l2vpn status_client

Step 3: Verification of communication

I simply use ping to verify the communication. My initial test failed: you still need to configure the port group DPortGroup_ClientTrunk to support L2VPN even though the L2VPN tunnel is up. You don't need to do the same on the NSX edge side, as it is done automatically when you configure L2VPN on it.

  • VLAN trunking with VLAN100 and VLAN200

PG_ClientTrunk03

PG_ClientTrunk02

After completing the above configuration, you will be able to ping between all of the test virtual machines:

  • test1001 to test1000 (communication within 172.16.136.0/24 via L2VPN)

test01

  • test1003 to test1002 (communication for 172.16.137.0/24 via L2VPN)

test02

  • test1001 to test1003 (communication between 172.16.136.0/24 and 172.16.137.0/24 via L2VPN)

test03
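For reference, these checks are nothing more than ICMP tests run from the guest OS; from test1001, for example (assuming a Linux guest with the standard ping utility):

# From test1001 (172.16.136.101)
ping -c 3 172.16.136.100   # test1000: same stretched subnet, across the L2VPN tunnel
ping -c 3 172.16.137.101   # test1003: routed via the gateway on the NSX edge sub-interface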

You can check the MAC address to L2VPN mapping via the CLI "show service l2vpn bridge".

show_service_l2vpn_bridge

You may have noticed an interface called na1 in the above; this is the tunnel interface created on the NSX edge for L2VPN. You can find more details via "show interface na1".

interface_na1

On the standalone edge L2VPN client end, you will find that two new vNICs (vNic_110 and vNic_210) for VLAN 100 and 200 are created as well, just like vNic10 and vNic11 on the NSX edge L2VPN server end.

L2VPN client new vNic

In addition, you can find an L2VPN tunnel interface, tap0, on the standalone edge.

l2vpn client trunk

Export NSX-v DFW Rules as CSV File

From version 6.4.0, the NSX-v API supports JSON format for its responses, not only XML as before. From my own experience, I prefer JSON to XML, as JSON data is easier to decode and encode. So I took a weekend to rewrite my old Python code. The code now retrieves the NSX-v DFW rules from NSX Manager in JSON format and places them into a CSV file so that you can view and search your DFW rules easily.

Below is a sample of the CSV file generated by my Python code.

dfw-csv

I have put the source code in Github:

https://github.com/insidepacket/NSX-Toolkit/blob/master/export_dfw_to_csv.py
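If you only need the raw rule set, the retrieval itself boils down to a single API call; a minimal curl sketch, assuming the standard NSX-v DFW configuration endpoint and placeholder credentials:

# <nsx-manager> and the password are placeholders for your environment;
# the Accept header asks the NSX-v 6.4+ API to return JSON instead of XML
curl -k -u admin:'<password>' \
  -H 'Accept: application/json' \
  https://<nsx-manager>/api/4.0/firewall/globalroot-0/config \
  -o dfw-rules.json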

Feel free to enjoy.

 

Install PowerCLI and PowerNSX Offline on RHEL7

With the release of PowerCLI 10.0.0, VMware added support for macOS and Linux! Now you can install PowerCLI and PowerNSX on Linux systems including RHEL, CentOS and Ubuntu, as well as on macOS. To complete the installation of VMware PowerCLI 10 and PowerNSX, you first need to install PowerShell Core 6.0.

In most enterprise environments, we won't be so lucky as to have Internet access on all of our Red Hat RHEL systems. In this blog, I will show you how to install PowerShell, PowerCLI and PowerNSX offline on Red Hat Enterprise Linux Server.

Software version:

Red Hat Enterprise Linux Server release 7.5 (Maipo)

PowerShell v6.0.2

VMware PowerCLI 10.1.1

VMware PowerNSX 3.0.1110

Step 0: Prerequisite

You need another Windows or Linux workstation/server that has Internet access and PowerShell installed, so that you can download all the required packages.

In addition, make sure that your RHEL system meets the following prerequisites:

  • openssl-devel (version 1.0.2k and above) package installed

[root@localhost Powershell]# rpm -qa | grep openssl
openssl-1.0.2k-12.el7.x86_64
xmlsec1-openssl-1.2.20-7.el7_4.x86_64
openssl-libs-1.0.2k-12.el7.x86_64
openssl-devel-1.0.2k-12.el7.x86_64

  • “Development tools” packages installed

You can find out which packages are included in the "Development Tools" group via the CLI: yum group info "Development Tools"

Step 1: Install PowerShell v6.0.2

Go to website https://packages.microsoft.com/rhel/7/prod/ to download the required packages including dotnet and powershell.

dotnet

pwsh

  • Install the following dotnet packages via "rpm -ivh"

[root@localhost yum.repos.d]# rpm -qa | grep dotn
dotnet-runtime-2.0.5-2.0.5-1.x86_64
dotnet-runtime-deps-2.1-2.1.0-1.x86_64
dotnet-hostfxr-2.0.5-2.0.5-1.x86_64
dotnet-sdk-2.1.4-2.1.4-1.x86_64
dotnet-host-2.1.0-1.x86_64

  • Install PowerShell 6.0.2

rpm -ivh powershell-6.0.2-1.rhel.7.x86_64.rpm

After you have successfully installed PowerShell, you need to create a "Modules" directory for the PowerCLI and PowerNSX modules. This "Modules" directory is /home/username/.local/share/powershell/Modules for the current user, or /usr/local/share/powershell/Modules for all users.
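For example, to create the per-user Modules directory for root (the path used throughout the rest of this blog):

mkdir -p /root/.local/share/powershell/Modules
# or, to make modules available to all users:
# mkdir -p /usr/local/share/powershell/Modules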

Step 2: Install PowerCLI Core

Since PowerCLI version 6.5, you can no longer download the PowerCLI package directly from VMware; you have to connect to the PowerShell Gallery over the Internet to install PowerCLI. As our RHEL system has no Internet access, we first need to use "Save-Module" on an Internet-connected machine to download the latest PowerCLI package and then upload it to our RHEL system for installation.

Save-Module -Name VMware.PowerCLI -Path /root/powershell/powercli10

After uploading all the sub-directories to the RHEL server, copy all of the directories/files into the "Modules" directory which you created in Step 1.

[root@localhost powershell]# cd Modules/
[root@localhost Modules]# ls -al
total 4
drwxr-xr-x. 24 root root 4096 Jun 19 13:59 .
drwxr-xr-x. 5 root root 54 Jun 18 19:51 ..
drwxr-xr-x. 3 root root 27 Jun 19 08:51 VMware.DeployAutomation
drwxr-xr-x. 3 root root 27 Jun 19 08:51 VMware.ImageBuilder
drwxr-xr-x. 3 root root 28 Jun 19 08:51 VMware.PowerCLI
drwxr-xr-x. 3 root root 27 Jun 19 08:51 VMware.Vim
drwxr-xr-x. 3 root root 28 Jun 19 08:51 VMware.VimAutomation.Cis.Core
drwxr-xr-x. 3 root root 28 Jun 19 08:51 VMware.VimAutomation.Cloud
drwxr-xr-x. 3 root root 28 Jun 19 08:51 VMware.VimAutomation.Common
drwxr-xr-x. 3 root root 28 Jun 19 08:51 VMware.VimAutomation.Core
drwxr-xr-x. 3 root root 27 Jun 19 08:51 VMware.VimAutomation.HA
drwxr-xr-x. 3 root root 27 Jun 19 08:51 VMware.VimAutomation.HorizonView
drwxr-xr-x. 3 root root 28 Jun 19 08:51 VMware.VimAutomation.License
drwxr-xr-x. 3 root root 28 Jun 19 08:51 VMware.VimAutomation.Nsxt
drwxr-xr-x. 3 root root 28 Jun 19 08:51 VMware.VimAutomation.PCloud
drwxr-xr-x. 3 root root 28 Jun 19 08:51 VMware.VimAutomation.Sdk
drwxr-xr-x. 3 root root 28 Jun 19 08:52 VMware.VimAutomation.Srm
drwxr-xr-x. 3 root root 28 Jun 19 08:52 VMware.VimAutomation.Storage
drwxr-xr-x. 3 root root 21 Jun 19 08:52 VMware.VimAutomation.StorageUtility
drwxr-xr-x. 3 root root 28 Jun 19 08:52 VMware.VimAutomation.Vds
drwxr-xr-x. 3 root root 28 Jun 19 08:52 VMware.VimAutomation.Vmc
drwxr-xr-x. 3 root root 28 Jun 19 08:52 VMware.VimAutomation.vROps
drwxr-xr-x. 3 root root 27 Jun 19 08:52 VMware.VumAutomation

Now your PowerCLI is nearly ready for use.

Issue “pwsh” from bash to start PowerShell:

[root@localhost Modules]# pwsh
PowerShell v6.0.2
Copyright (c) Microsoft Corporation. All rights reserved.

https://aka.ms/pscore6-docs
Type ‘help’ to get help.

PS /root/.local/share/powershell/Modules>

As the VMware PowerCLI 10 release notes state, not all modules are supported with PowerShell Core 6.0 on RHEL. So before you import the PowerCLI modules, you have to change the “VMware.PowerCLI.psd1” file to load only the supported modules. The location of the “VMware.PowerCLI.psd1” file is as below:

[root@localhost 10.1.1.8827524]# pwd
/root/.local/share/powershell/Modules/VMware.PowerCLI/10.1.1.8827524
[root@localhost 10.1.1.8827524]# ls -al
total 64
drwxr-xr-x. 2 root root 115 Jun 19 09:45 .
drwxr-xr-x. 3 root root 28 Jun 19 08:51 ..
-rw-r--r--. 1 root root 15196 Jun 18 21:57 PSGetModuleInfo.xml
-rw-r--r--. 1 root root 16413 Jun 14 10:36 VMware.PowerCLI.cat
-rw-r--r--. 1 root root 11603 Jun 14 10:36 VMware.PowerCLI.ps1
-rw-r--r--. 1 root root 14692 Jun 19 09:45 VMware.PowerCLI.psd1

Edit the above file as below (comment out each line that includes an unsupported module by adding # at the beginning):

# Modules that must be imported into the global environment prior to importing this module
RequiredModules = @(
@{"ModuleName"="VMware.VimAutomation.Sdk";"ModuleVersion"="10.1.0.8342078"}
@{"ModuleName"="VMware.VimAutomation.Common";"ModuleVersion"="10.1.0.8342134"}
@{"ModuleName"="VMware.Vim";"ModuleVersion"="6.7.0.8343295"}
@{"ModuleName"="VMware.VimAutomation.Core";"ModuleVersion"="10.1.0.8344055"}
#@{"ModuleName"="VMware.VimAutomation.Srm";"ModuleVersion"="10.0.0.7893900"}
#@{"ModuleName"="VMware.VimAutomation.License";"ModuleVersion"="10.0.0.7893904"}
@{"ModuleName"="VMware.VimAutomation.Vds";"ModuleVersion"="10.1.0.8344219"}
@{"ModuleName"="VMware.VimAutomation.Vmc";"ModuleVersion"="10.0.0.7893902"}
@{"ModuleName"="VMware.VimAutomation.Nsxt";"ModuleVersion"="10.1.0.8346947"}
#@{"ModuleName"="VMware.VimAutomation.vROps";"ModuleVersion"="10.0.0.7893921"}
@{"ModuleName"="VMware.VimAutomation.Cis.Core";"ModuleVersion"="10.1.0.8377811"}
#@{"ModuleName"="VMware.VimAutomation.HA";"ModuleVersion"="6.5.4.7567193"}
#@{"ModuleName"="VMware.VimAutomation.HorizonView";"ModuleVersion"="7.5.0.8827468"}
#@{"ModuleName"="VMware.VimAutomation.PCloud";"ModuleVersion"="10.0.0.7893924"}
#@{"ModuleName"="VMware.VimAutomation.Cloud";"ModuleVersion"="10.0.0.7893901"}
#@{"ModuleName"="VMware.DeployAutomation";"ModuleVersion"="6.7.0.8250345"}
#@{"ModuleName"="VMware.ImageBuilder";"ModuleVersion"="6.7.0.8250345"}
@{"ModuleName"="VMware.VimAutomation.Storage";"ModuleVersion"="10.1.0.8313015"}
@{"ModuleName"="VMware.VimAutomation.StorageUtility";"ModuleVersion"="1.2.0.0"}
#@{"ModuleName"="VMware.VumAutomation";"ModuleVersion"="6.5.1.7862888"}
)

If the unsupported modules have not been removed from the required-modules list, you will see an error like below:

Import-Module : The VMware.ImageBuilder module is not currently supported on the Core edition of PowerShell.
At line:1 char:1
+ import-module VMware.PowerCLI
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : OperationStopped: (The VMware.Imag… of PowerShell.:String) [Import-Module], RuntimeException
+ FullyQualifiedErrorId : The VMware.ImageBuilder module is not currently supported on the Core edition of PowerShell.,Microsoft.PowerShell.Commands.ImportModuleCommand 

Now you are ready to import PowerCLI modules.

PS /root/.local/share/powershell/Modules> import-module VMware.PowerCLI
Welcome to VMware PowerCLI!

Log in to a vCenter Server or ESX host: Connect-VIServer
To find out what commands are available, type: Get-VICommand
To show searchable help for all PowerCLI commands: Get-PowerCLIHelp
Once you’ve connected, display all virtual machines: Get-VM
If you need more help, visit the PowerCLI community: Get-PowerCLICommunity

Copyright (C) VMware, Inc. All rights reserved.

PS /root/.local/share/powershell/Modules>

However, when you use the Connect-VIServer cmdlet to connect to a vCenter server, you will see an error similar to this:

Error
Connect-VIServer : 06/22/18 11:22:26 AM Connect-VIServer The libcurl library in use (7.29.0) and its SSL backend (“NSS/3.21 Basic ECC”) do not support custom handling of certificates. A libcurl built with OpenSSL is required.

The cause of this error is that the RHEL libcurl library is too old and is not built with OpenSSL. Please refer to the following link, which shows how to fix the issue by getting curl 7.52.1 installed.

https://www.opentechshed.com/powercli-core-on-centos-7/

[root@localhost ~]# curl --version
curl 7.52.1 (x86_64-pc-linux-gnu) libcurl/7.52.1 OpenSSL/1.0.2k zlib/1.2.7
Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
Features: IPv6 Largefile NTLM NTLM_WB SSL libz UnixSockets HTTPS-proxy

When we try the “Connect-VIServer” cmdlet again, we see another error. This happens when you connect to vCenter via IP, or when your RHEL system considers the received certificate invalid:

Connect-VIServer : 6/21/18 11:40:16 AM Connect-VIServer Error: Invalid server certificate. Use Set-PowerCLIConfiguration to set the value for the InvalidCertificateAction option to Ignore to ignore the certificate errors for this connection.
Additional Information: Could not establish trust relationship for the SSL/TLS secure channel with authority ‘10.1.1.2’.
At line:1 char:1
+ Connect-VIServer -Server 10.1.1.2
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : SecurityError: (:) [Connect-VIServer], ViSecurityNegotiationException
+ FullyQualifiedErrorId : Client20_ConnectivityServiceImpl_Reconnect_CertificateError,VMware.VimAutomation.ViCore.Cmdlets.Commands.ConnectVIServer

We have two options here:

  1. Get a valid certificate for vCenter;
  2. Change the PowerCLI configuration to disable SSL certificate verification;

Although option 2 is not good from a security point of view, I show it here so that I can go ahead with the PowerNSX installation.

PS /root/.local/share/powershell/Modules> Set-PowerCLIConfiguration -InvalidCertificateAction ignore -confirm:$false

Scope    ProxyPolicy    DefaultVIServerMode InvalidCertificateAction DisplayDeprecationWarnings WebOperationTimeout
                                                                                                Seconds
-----    -----------    ------------------- ------------------------ -------------------------- -------------------
Session  UseSystemProxy Multiple            Ignore                   True                       300
User                                        Ignore
AllUsers

Step 3: Install PowerNSX

  • Create a sub-directory called "PowerNSX" under the "Modules" directory

[root@localhost powershell]# cd Modules/
[root@localhost Modules]# ls -al
total 4
drwxr-xr-x. 24 root root 4096 Jun 19 13:59 .
drwxr-xr-x. 5 root root 54 Jun 18 19:51 ..
drwxr-xr-x. 2 root root 48 Jun 19 14:01 PowerNSX

  • Download the PowerNSX package from GitHub (https://github.com/vmware/powernsx) and upload the downloaded zip file to the RHEL server. Then unzip the file and copy the following 2 files into the PowerNSX directory:

PowerNSX.psd1
PowerNSX.psm1

[root@localhost Modules]# ls -al PowerNSX/
total 1572
drwxr-xr-x. 2 root root 48 Jun 19 14:01 .
drwxr-xr-x. 24 root root 4096 Jun 19 13:59 ..
-rwxr-xr-x. 1 root root 15738 Jun 19 14:01 PowerNSX.psd1
-rwxr-xr-x. 1 root root 1588500 Jun 19 14:00 PowerNSX.psm1

Now you are ready to start using PowerNSX on RHEL. In my example, I query the current transport zone and create a logical switch called PowerNSX within the transport zone that was found.

PS /root/.local/share/powershell/Modules/PowerNSX> Import-Module PowerNSX
PS /root/.local/share/powershell/Modules/PowerNSX> Get-Module

ModuleType Version Name ExportedCommands
---------- ------- ---- ----------------
Manifest 3.1.0.0 Microsoft.PowerShell.Management {Add-Content, Clear-Content, Clear-Item, Clear-ItemProperty…}
Manifest 3.1.0.0 Microsoft.PowerShell.Utility {Add-Member, Add-Type, Clear-Variable, Compare-Object…}
Script 3.0.1110 PowerNSX {Add-NsxDynamicCriteria, Add-NsxDynamicMemberSet, Add-NsxEdgeInterfaceAddress, Add-NsxFirewallExclusionListMember…}
Script 1.2 PSReadLine {Get-PSReadlineKeyHandler, Get-PSReadlineOption, Remove-PSReadlineKeyHandler, Set-PSReadlineKeyHandler…}
Manifest 10.1.1…. VMware.PowerCLI
Script 6.7.0.8… VMware.Vim
Script 10.1.0…. VMware.VimAutomation.Cis.Core {Connect-CisServer, Disconnect-CisServer, Get-CisService}
Script 10.1.0…. VMware.VimAutomation.Common
Script 10.1.0…. VMware.VimAutomation.Core {Add-PassthroughDevice, Add-VirtualSwitchPhysicalNetworkAdapter, Add-VMHost, Add-VMHostNtpServer…}
Script 10.1.0…. VMware.VimAutomation.Nsxt {Connect-NsxtServer, Disconnect-NsxtServer, Get-NsxtService}
Script 10.1.0…. VMware.VimAutomation.Sdk {Get-InstallPath, Get-PSVersion}
Script 10.1.0…. VMware.VimAutomation.Storage {Add-KeyManagementServer, Copy-VDisk, Export-SpbmStoragePolicy, Get-KeyManagementServer…}
Script 1.2.0.0 VMware.VimAutomation.StorageUtility Update-VmfsDatastore
Script 10.1.0…. VMware.VimAutomation.Vds {Add-VDSwitchPhysicalNetworkAdapter, Add-VDSwitchVMHost, Export-VDPortGroup, Export-VDSwitch…}
Script 10.0.0…. VMware.VimAutomation.Vmc {Connect-Vmc, Disconnect-Vmc, Get-VmcService}

PS /root/.local/share/powershell/Modules/PowerNSX> Connect-NsxServer -vCenterServer 10.1.1.2

Windows PowerShell credential request
vCenter Server SSO Credentials
Password for user user1@davidwzhang.com: ***********
 
Using existing PowerCLI connection to 10.1.1.2
 
 
Version             : 6.4.0
BuildNumber         : 7564187
Credential          : System.Management.Automation.PSCredential
Server              : 10.1.1.4
Port                : 443
Protocol            : https
UriPrefix           :
ValidateCertificate : False
VIConnection        : 10.1.1.2
DebugLogging        : False
DebugLogfile        : \PowerNSXLog-user1@davidwzhang.com:@-2018_06_15_15_25_45.log
 

PS /root/.local/share/powershell/Modules/PowerNSX> Get-NsxTransportZone

objectId           : vdnscope-1
objectTypeName     : VdnScope
vsmUuid            : 42267595-0C79-1E95-35FE-E0A186F24C3B
nodeId             : 0598778a-9c46-46e7-a9c7-850beb6ac7f3
revision           : 14
type               : type
name               : transport-1
description        : transport-1
clientHandle       :
extendedAttributes :
isUniversal        : false
universalRevision  : 0
id                 : vdnscope-1
clusters           : clusters
virtualWireCount   : 59
controlPlaneMode   : UNICAST_MODE
cdoModeEnabled     : false
cdoModeState       : cdoModeState
 
PS /root/.local/share/powershell/Modules/PowerNSX> Get-NsxTransportZone  transport-1 | New-NsxLogicalSwitch -name PowerNSX
objectId              : virtualwire-65
objectTypeName        : VirtualWire
vsmUuid               : 42267595-0C79-1E95-35FE-E0A186F24C3B
nodeId                : 0598778a-9c46-46e7-a9c7-850beb6ac7f3
revision              : 2
type                  : type
name                  : PowerNSX
description           :
clientHandle          :
extendedAttributes    :
isUniversal           : false
universalRevision     : 0
tenantId              :
vdnScopeId            : vdnscope-1
vdsContextWithBacking : vdsContextWithBacking
vdnId                 : 6059
guestVlanAllowed      : false
controlPlaneMode      : UNICAST_MODE
ctrlLsUuid            : d6f2c975-8927-429c-86f7-3ae0b9ecd9fa
macLearningEnabled    : false

 

When we check NSX Manager, we can see that the PowerNSX logical switch has been created with VXLAN ID 6059.

vxlan