NSX-T Automation with Terraform


Do you want to maintain your network and security infrastructure as code? Do you want to automate NSX-T? One more option has just been added for you!

Following my previous post about NSX-T: OpenAPI and SDKs, you might have figured out how easy it is to generate different language bindings for NSX-T. Thanks to this, we have generated a Go SDK for NSX-T that serves as the foundation of the new NSX-T Terraform provider.

Terraform is open-source infrastructure-as-code software by HashiCorp. It allows creation, modification, and deletion of infrastructure using high-level configuration files that can be shared between team members, treated as code, edited, reviewed, and versioned. These configuration files are written in HCL (HashiCorp Configuration Language), a JSON-compatible language with a more human-friendly syntax. Plain JSON can also be used.

There are several important components in Terraform:

1. Providers are responsible for managing the lifecycle of resources: create, read, update, delete. Providers usually require some configuration to supply authentication credentials, endpoint URLs, etc. By default, resources are matched to a provider based on the prefix of the resource name. For example, a resource called nsxt_logical_switch is associated with the provider called nsxt.

Example of configuring NSX-T provider:

provider "nsxt" {
  host                 = "${var.nsx_ip}"
  username             = "admin"
  password             = "${var.nsx_password}"
  allow_unverified_ssl = true
}

2. Data sources allow data to be fetched or computed for use elsewhere in a Terraform configuration. They present read-only views into pre-existing data. Every data source is mapped to a provider based on prefix matching. For example, the nsxt_transport_zone data source maps to the nsxt provider.

Data Source Example:

data "nsxt_transport_zone" "overlay_tz" {
    display_name = "tz1"
}

Currently supported data sources:

  • transport_zone
  • switching_profile
  • ns_service
  • logical_tier0_router
  • edge_cluster

3. Resources are the most important components we will configure. They are the objects that we would like to create, read, update, and delete.

Resource Example:

resource "nsxt_logical_switch" "switch1" {
  admin_state       = "UP"
  description       = "LS created by Terraform"
  display_name      = "terraform_switch1"
  transport_zone_id = "${data.nsxt_transport_zone.overlay_tz.id}"
  replication_mode  = "MTEP"

  tag {
    scope = "project"
    tag   = "terraform-demo"
  }

  tag {
    scope = "tenant"
    tag   = "terraform-demo-tenant"
  }
}

The resource block creates a resource of the given type (nsxt_logical_switch) with the given name (switch1). The combination of type and name must be unique. Note that the second parameter is the name of the Terraform resource, not of the NSX object. The name of the logical switch that will be created in NSX is provided in the configuration block (display_name). You must use the Terraform name (switch1) if you want to refer to this logical switch later in your code. We also bind this logical switch to the transport zone data source that refers to a transport zone pre-created in NSX.
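For instance, referring to the switch by its Terraform name lets other resources connect to it. A minimal sketch (the resource name port1 is mine; the attributes follow the logical port example later in this post):

```hcl
# A logical port attached to the switch defined above. It references the
# switch by its Terraform name (switch1), not by its NSX display_name.
resource "nsxt_logical_port" "port1" {
  admin_state       = "UP"
  display_name      = "LP1"
  logical_switch_id = "${nsxt_logical_switch.switch1.id}"
}
```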

Currently supported resources:

  • logical_switch
  • logical_port
  • logical_tier1_router
  • logical_router_downlink_port
  • logical_router_link_port_on_tier0
  • logical_router_link_port_on_tier1
  • l4_port_set_ns_service
  • icmp_type_ns_service
  • igmp_type_ns_service
  • ether_type_ns_service
  • alg_type_ns_service
  • ip_protocol_ns_service
  • ns_group
  • firewall_section
  • nat_rule
  • ip_set
  • dhcp_relay_profile
  • dhcp_relay_service
  • static_route

In the example below we create a logical switch based on an overlay transport zone, as well as a T1 router connected both to an upstream T0 router and to the newly created logical switch.

# configure some variables first 
variable "nsx_ip" {
    default = "10.29.15.173"
}
variable "nsx_password" {
    default = "VMware1!"
}

variable "nsx_tag_scope" {
    default = "project"
}
variable "nsx_tag" {
    default = "terraform-demo"
}
variable "nsx_t1_router_name" {
    default = "terraform-demo-router"
}
variable "nsx_t1_ip" {
    default = "192.168.1.1/24"
}
variable "nsx_switch_name" {
    default = "terraform-demo-ls"
}
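Hard-coding credentials as defaults is convenient for a demo, but the defaults above can also be overridden without editing the configuration. As a sketch, a terraform.tfvars file in the same directory is loaded automatically (the values below are placeholders):

```hcl
# terraform.tfvars — loaded automatically; overrides the variable defaults above
nsx_ip       = "10.29.15.173"
nsx_password = "VMware1!"
```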

# Configure the VMware NSX-T Provider
provider "nsxt" {
  host                 = "${var.nsx_ip}"
  username             = "admin"
  password             = "${var.nsx_password}"
  allow_unverified_ssl = true
}

# Create the data sources we will need to refer to later
data "nsxt_transport_zone" "overlay_tz" {
  display_name = "tz1"
}

data "nsxt_logical_tier0_router" "tier0_router" {
  display_name = "DefaultT0Router"
}

data "nsxt_edge_cluster" "edge_cluster1" {
  display_name = "EdgeCluster1"
}

# Create NSX-T Logical Switch
resource "nsxt_logical_switch" "switch1" {
  admin_state       = "UP"
  description       = "LS created by Terraform"
  display_name      = "${var.nsx_switch_name}"
  transport_zone_id = "${data.nsxt_transport_zone.overlay_tz.id}"
  replication_mode  = "MTEP"

  tag {
    scope = "${var.nsx_tag_scope}"
    tag   = "${var.nsx_tag}"
  }

  tag {
    scope = "tenant"
    tag   = "second_example_tag"
  }
}

# Create T1 router
resource "nsxt_logical_tier1_router" "tier1_router" {
  description                 = "Tier1 router provisioned by Terraform"
  display_name                = "${var.nsx_t1_router_name}"
  failover_mode               = "PREEMPTIVE"
  edge_cluster_id             = "${data.nsxt_edge_cluster.edge_cluster1.id}"
  enable_router_advertisement = true
  advertise_connected_routes  = true
  advertise_static_routes     = true
  advertise_nat_routes        = true

  tag {
    scope = "${var.nsx_tag_scope}"
    tag   = "${var.nsx_tag}"
  }
}

# Create a port on the T0 router. We will connect the T1 router to this port
resource "nsxt_logical_router_link_port_on_tier0" "link_port_tier0" {
  description       = "TIER0_PORT1 provisioned by Terraform"
  display_name      = "TIER0_PORT1"
  logical_router_id = "${data.nsxt_logical_tier0_router.tier0_router.id}"

  tag {
    scope = "${var.nsx_tag_scope}"
    tag   = "${var.nsx_tag}"
  }
}

# Create a T1 uplink port and connect it to T0 router
resource "nsxt_logical_router_link_port_on_tier1" "link_port_tier1" {
  description                   = "TIER1_PORT1 provisioned by Terraform"
  display_name                  = "TIER1_PORT1"
  logical_router_id             = "${nsxt_logical_tier1_router.tier1_router.id}"
  linked_logical_router_port_id = "${nsxt_logical_router_link_port_on_tier0.link_port_tier0.id}"

  tag {
    scope = "${var.nsx_tag_scope}"
    tag   = "${var.nsx_tag}"
  }
}

# Create a switchport on our logical switch
resource "nsxt_logical_port" "logical_port1" {
  admin_state       = "UP"
  description       = "LP1 provisioned by Terraform"
  display_name      = "LP1"
  logical_switch_id = "${nsxt_logical_switch.switch1.id}"

  tag {
    scope = "${var.nsx_tag_scope}"
    tag   = "${var.nsx_tag}"
  }
}

# Create downlink port on the T1 router and connect it to the switchport we created earlier
resource "nsxt_logical_router_downlink_port" "downlink_port" {
  description                   = "DP1 provisioned by Terraform"
  display_name                  = "DP1"
  logical_router_id             = "${nsxt_logical_tier1_router.tier1_router.id}"
  linked_logical_switch_port_id = "${nsxt_logical_port.logical_port1.id}"
  ip_address                    = "${var.nsx_t1_ip}"

  tag {
    scope = "${var.nsx_tag_scope}"
    tag   = "${var.nsx_tag}"
  }
}

 

There are several CLI commands that you might want to use within the folder that contains your Terraform configuration file(s).

1. terraform init is used to initialize a working directory containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration. It is safe to run this command multiple times to bring the working directory up to date.

2. terraform plan is used to create an execution plan. This command is a convenient way to check whether the execution plan matches your expectations specified in the configuration files without making any changes to real resources or to the state.

3. terraform apply is used to apply the changes required to reach the desired state. You may run this command for the initial creation of resources as well as for modification of existing resources in order to achieve the desired state.

4. terraform destroy will destroy the Terraform-managed infrastructure.

5. terraform graph can be used to generate a visual representation of either a configuration or an execution plan. The output is in DOT format, which can be used by GraphViz to generate charts. Once GraphViz is installed you can use the following command:

terraform graph | dot -Tpng > graph.png

If we use terraform graph for the example above, we will get an image like this:

In order to attach virtual machines to the newly created logical switches, we need to combine this configuration with the vSphere provider. The only NSX-specific part is that we need a vsphere_network data source with the name of the created logical switch, which is then referenced in the network_interface section of the vsphere_virtual_machine resource.

 

data "vsphere_network" "terraform_switch1" {
    name = "${nsxt_logical_switch.switch1.display_name}"
    datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

resource "vsphere_virtual_machine" "vm1" {
    name             = "terraform-test1"
    resource_pool_id = "${data.vsphere_resource_pool.pool.id}"
    datastore_id     = "${data.vsphere_datastore.datastore.id}"
    num_cpus = 1
    memory   = 1024
    guest_id = "${data.vsphere_virtual_machine.template.guest_id}"
    scsi_type = "${data.vsphere_virtual_machine.template.scsi_type}"
    # Attach the VM to the network data source that refers to the newly created logical switch
    network_interface {
      network_id = "${data.vsphere_network.terraform_switch1.id}"
    }
--- code omitted ---

Please check the YouTube video below, where I demo a complete example of deploying and securing a three-tier application, including a firewall section, NAT, and connecting (cloning) virtual machines to NSX-T logical switches.

I hope you enjoy automating NSX-T. Stay tuned, there is more to come …

 

Additional info:

Three-Tier Demo Application: https://github.com/yasensim/nsxt-terraform-three-tier-app

Terraform Documentation: https://www.terraform.io/docs/providers/nsxt/index.html

vSphere Terraform Provider: https://www.terraform.io/docs/providers/vsphere/index.html 

VMware NSX-T Documentation: https://docs.vmware.com/en/VMware-NSX-T/index.html

Go lang bindings: https://github.com/vmware/go-vmware-nsxt

The post NSX-T Automation with Terraform appeared first on Network Virtualization.
