Terraform automation: Virtual/physical network (Cumulus Network)

To provide North/South connectivity and to test routing, I am deploying predefined virtual Top of Rack switches with Layer 3 routing capabilities and a single “Core” router.


The best way to think of this is in levels:
* LEVEL 1 (INFRA)
** My physical network infrastructure, consisting of my infra-network and my infra-NSX-T-network.
* LEVEL 2 (INFRA)
** The “virtual” network infrastructure of the Lab Tenant (in this case Lab 1) that simulates a physical network with a Core Router and two Top of Rack switches.
** The underlay networking I am using here is also NSX-T and is part of my infra-NSX-T-network.
* LEVEL 3 (VIRTUAL)
** The “virtual” NSX-T network infrastructure living inside the Lab Tenant (in this case Lab 1).
** These components and NSX-T network segments are part of the virtual-NSX-T-network, specifically for the Lab Tenant (in this case Lab 1).


The drawing below gives you an overview of how the networking is done specifically for Lab 1.
 
[[File:12E40F6E-D5D1-4FCD-8D84-11D7D2C4CE2B.png]]
 
A description of what each device does can be found below:
 
* '''Ubiquiti EdgeRouter 4'''
** ''This is my physical internal home router that does all the routing and connectivity between my home networks and the internet.''
* '''VX00'''
** ''This is a virtual core router that is part of a certain lab (in this case Lab 1) and simulates the core network of the Lab Tenant.''
** ''This virtual machine is a Cumulus VX router that will be preconfigured with the correct IP addresses and BGP peers.''
** ''This virtual machine will be cloned from a template using Terraform.''
* '''VX01'''
** ''This is a virtual router that is part of a certain lab (in this case Lab 1) and simulates the first Top of Rack switch of the Lab Tenant.''
** ''This virtual machine is a Cumulus VX router that will be preconfigured with the correct IP addresses and BGP peers.''
** ''This virtual machine will be cloned from a template using Terraform.''
* '''VX02'''
** ''This is a virtual router that is part of a certain lab (in this case Lab 1) and simulates the second Top of Rack switch of the Lab Tenant.''
** ''This virtual machine is a Cumulus VX router that will be preconfigured with the correct IP addresses and BGP peers.''
** ''This virtual machine will be cloned from a template using Terraform.''
* '''T0-L1-01'''
** ''This is the T0 Gateway that is managed by the Nested NSX-T Manager within Lab 1.''
* '''T1-L1-01'''
** ''This is the T1 Gateway that is managed by the Nested NSX-T Manager within Lab 1.''
** ''This T1 Gateway will host nested networks that live inside the Lab Tenant (in this case Lab 1).''
 
Now, the whole point of doing it like this is that I can create different lab tenants, each using a different NSX-T version, or even test a specific NSX-T version against another vSphere version.
 
Using Terraform and predefined template virtual machines (VX00, VX01 and VX02) allows me to quickly set up complete SDDC environments and tear them down again when I no longer need them.
 
The  latest version of the Cumulus VX deployment image can be downloaded [https://cumulusnetworks.com/products/cumulus-vx/download/ here].
 
Make sure you download the VMware OVA image.
After the download it is time to deploy the image.
For the first image I used the name “VX00-template”.
During the deployment wizard you can only select the network for a single network interface.
After the deployment you will see that the Cumulus Virtual Router VM has 4 vNIC interfaces.
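
As an alternative to clicking through the deployment wizard, the OVA could also be pushed to vCenter from the command line with VMware's ovftool. This is only a sketch: the OVA file name and the cluster in the vi:// inventory path are placeholders, and networks can additionally be remapped with the --net option if needed:

{{console|body=
ovftool --acceptAllEulas --name=VX00-template --datastore=vsanDatastore cumulus-linux-vx.ova "vi://administrator%40vsphere.local@vcsa-01.home.local/HOME/host/<cluster>"
}}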
 
[[File:screenshot_1538.png|600px]]
 
I will make the following vNIC-to-Port-Group mapping in order to create the topology shown in the picture above:
 
'''VX00'''
Network adapter 1 <==> L1-APP-MGMT-11
Network adapter 2 <==> SEG-VLAN16
Network adapter 3 <==> L1-TRANSIT-LEAF-SPINE-20
Network adapter 4 <==>  VM Network (not used)
 
'''VX01'''
Network adapter 1 <==> L1-APP-MGMT-11
Network adapter 2 <==> L1-TRANSIT-LEAF-SPINE-20
Network adapter 3 <==> L1-BGP-UPLINK-01-18
Network adapter 4 <==>  VM Network (not used)
 
'''VX02'''
Network adapter 1 <==> L1-APP-MGMT-11
Network adapter 2 <==> L1-TRANSIT-LEAF-SPINE-20
Network adapter 3 <==> L1-BGP-UPLINK-02-19
Network adapter 4 <==>  VM Network (not used)
 
In order to provide the uplink towards my physical network through VLAN 16, I first need to configure a VLAN 16 interface:
 
{{console|body=
EDGE4
!
configure
set interfaces ethernet eth1 vif 16 address 10.11.16.253/24
set service dns forwarding listen-on eth1.16
set service nat rule 5016 description "VLAN16 - 82.94.132.155"
set service nat rule 5016 log disable
set service nat rule 5016 outbound-interface eth0
set service nat rule 5016 source address 10.11.16.0/24
set service nat rule 5016 type masquerade
set firewall group network-group LAN_NETWORKS network 10.11.16.0/24
commit
save
!
}}
 
{{console|body=
EDGE-16XG
!
configure
vlan database
vlan 16
vlan name 16 NESTED-UPLINK
exit
do copy system:running-config nvram:startup-config
!
}}
 
{{console|body=
Cisco 4849
!
conf t
vlan 16
name NESTED-UPLINK
exit
end
copy run start
!
}}
 
{{console|body=
UniFi Network
!
See screenshot below
}}
 
[[File:screenshot_1539.png|1000px]]
 
{{console|body=
NSX-T
!
See screenshot below
}}
 
[[File:screenshot_1540.png|1000px]]
 
The initial configuration of the Cumulus Routers is done through the vSphere (web) console.
Once the eth0 interface is configured, you can access each router through SSH and do the rest of the configuration there.
We are going to do all the configuration using NCLU commands.
 
The default credentials are:
Username: cumulus
Password: CumulusLinux!
 
[[File:screenshot_1542.png|1000px]]
 
The configuration for the Cumulus Routers can be found below:


<div class="toccolours mw-collapsible mw-collapsed">
'''CLICK ON EXPAND ===> ON THE RIGHT ===> TO SEE THE OUTPUT (VX00 config) ===>''' :
<div class="mw-collapsible-content">{{console|body=
VX00
!
cumulus@cumulus:~$ sudo vi /etc/network/interfaces
!
auto eth0
iface eth0
address 192.168.12.21/24
gateway 192.168.12.1


cumulus@cumulus:~$ sudo ifreload -a
===========================================================
net add interface eth0 ip address 192.168.12.21/24
net add interface eth0 ip gateway 192.168.12.1
!
net add hostname VX00
!
net pending
net commit
===========================================================
cumulus@cumulus:~$ sudo vi /etc/frr/daemons
!
zebra=yes
bgpd=yes
!
cumulus@cumulus:~$ sudo systemctl enable frr.service
cumulus@cumulus:~$ sudo systemctl start frr.service
!
===========================================================
cumulus@cumulus:~$ sudo vi /etc/network/interfaces
!
# The loopback network interface
auto lo
iface lo inet loopback
address 1.1.1.1/32
#
auto swp1
iface swp1
address 10.11.16.10/24
#
auto swp2
iface swp2
address 192.168.20.10/24
#
cumulus@cumulus:~$ sudo ifreload -a
!
===========================================================
net add loopback lo ip address 1.1.1.1/32
net add interface swp1 ip address 10.11.16.10/24
net add interface swp2 ip address 192.168.20.10/24
!
net add bgp autonomous-system 65030
net add bgp router-id 1.1.1.1
!
#net add bgp neighbor 10.11.16.253 remote-as 64512
net add bgp neighbor 192.168.20.11 remote-as 65031
net add bgp neighbor 192.168.20.12 remote-as 65032
!
net add bgp ipv4 unicast network 1.1.1.1/32
net add bgp ipv4 unicast redistribute connected
!
net commit
===========================================================
net show bgp summary
net show bgp neighbors
net show route bgp
}}</div>
</div>
                "VMware's Customer Experience Improvement Program (CEIP) ",
 
                "provides VMware with information that enables VMware to ",
<div class="toccolours mw-collapsible mw-collapsed">
                "improve its products and services, to fix problems, ",
'''CLICK ON EXPAND ===> ON THE RIGHT ===> TO SEE THE OUTPUT (VX01 config) ===>''' :
                "and to advise you on how best to deploy and use our ",
<div class="mw-collapsible-content">{{console|body=
                "products. As part of CEIP, VMware collects technical ",
VX01
                "information about your organization's use of VMware ",
!
                "products and services on a regular basis in association ",
cumulus@cumulus:~$ sudo vi /etc/network/interfaces
                "with your organization's VMware license key(s). This ",
!
                "information does not personally identify any individual. ",
auto eth0
                "",
iface eth0
                "Additional information regarding the data collected ",
address 192.168.12.22/24
                "through CEIP and the purposes for which it is used by ",
gateway 192.168.12.1
                "VMware is set forth in the Trust & Assurance Center at ",
                "http://www.vmware.com/trustvmware/ceip.html . If you ",
                "prefer not to participate in VMware's CEIP for this ",
                "product, you should disable CEIP by setting ",
                "'ceip_enabled': false. You may join or leave VMware's ",
                "CEIP for this product at any time. Please confirm your ",
                "acknowledgement by passing in the parameter ",
                "--acknowledge-ceip in the command line.",
                "++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++"
            ]
        },
        "settings": {
            "ceip_enabled": false
        }
    }
}


cumulus@cumulus:~$ sudo ifreload -a
===========================================================
net add interface eth0 ip address 192.168.12.22/24
net add interface eth0 ip gateway 192.168.12.1
!
net add hostname VX01
!
net pending
net commit
===========================================================
cumulus@cumulus:~$ sudo vi /etc/frr/daemons
!
zebra=yes
bgpd=yes
!
cumulus@cumulus:~$ sudo systemctl enable frr.service
cumulus@cumulus:~$ sudo systemctl start frr.service
!
===========================================================
cumulus@cumulus:~$ sudo vi /etc/network/interfaces
!
# The loopback network interface
auto lo
iface lo inet loopback
address 2.2.2.2/32
#
auto swp1
iface swp1
address 192.168.20.11/24
#
auto swp2
iface swp2
address 192.168.18.1/24
#
cumulus@cumulus:~$ sudo ifreload -a
!
===========================================================
net add loopback lo ip address 2.2.2.2/32
net add interface swp1 ip address 192.168.20.11/24
net add interface swp2 ip address 192.168.18.1/24
!
net add bgp autonomous-system 65031
net add bgp router-id 2.2.2.2
!
net add bgp neighbor 192.168.20.10 remote-as 65030
!
net add bgp ipv4 unicast network 2.2.2.2/32
net add bgp ipv4 unicast redistribute connected
!
net commit
===========================================================
net show bgp summary
net show bgp neighbors
net show route bgp
}}</div>
</div>


<div class="toccolours mw-collapsible mw-collapsed">
Other prerequisites that need to be in place are:
'''CLICK ON EXPAND ===> ON THE RIGHT ===> TO SEE THE OUTPUT (VX02 config) ===>''' :
* Create “L1-APP-MGMT11” NSX-T Segment on NSX-T infra
<div class="mw-collapsible-content">{{console|body=
* Clone the control server VM and make sure a new DNS zone “lab1.local” on the DNS server
VX02
* Created new A record for new VCSA deployment on the control server
!
cumulus@cumulus:~$ sudo vi /etc/network/interfaces
!
auto eth0
iface eth0
address 192.168.12.23/24
gateway 192.168.12.1


cumulus@cumulus:~$ sudo ifreload -a
NOTE: Before deploying the vCenter Server (template) the Control Server needs to be online and the DNS + DNS records need to be configured because the vCenter Server needs this for the initial deployment. When the install is done you can turn it off for the full lab deployment (clone) later.
===========================================================
net add interface eth0 ip address 192.168.12.23/24
net add interface eth0 ip gateway 192.168.12.1
!
net add hostname VX02
!
net pending
net commit
===========================================================
cumulus@cumulus:~$ sudo vi /etc/frr/daemons
!
zebra=yes
bgpd=yes
!
cumulus@cumulus:~$ sudo systemctl enable frr.service
cumulus@cumulus:~$ sudo systemctl start frr.service
!
===========================================================
cumulus@cumulus:~$ sudo vi /etc/network/interfaces
!
# The loopback network interface
auto lo
iface lo inet loopback
address 3.3.3.3/32
#
auto swp1
iface swp1
address 192.168.20.12/24
#
auto swp2
iface swp2
address 192.168.19.1/24
#
cumulus@cumulus:~$ sudo ifreload -a
!
===========================================================
net add loopback lo ip address 3.3.3.3/32
net add interface swp1 ip address 192.168.20.12/24
net add interface swp2 ip address 192.168.19.1/24
!
net add bgp autonomous-system 65032
net add bgp router-id 3.3.3.3
!
net add bgp neighbor 192.168.20.10 remote-as 65030
!
net add bgp ipv4 unicast network 3.3.3.3/32
net add bgp ipv4 unicast redistribute connected
!
net commit
===========================================================
net show bgp summary
net show bgp neighbors
net show route bgp
}}</div>
</div>


After the configuration of the VX routers was done, I verified that the BGP peering is working correctly, and it is:


{{console|body=
cumulus@VX00:mgmt:~$ net show bgp summary
show bgp ipv4 unicast summary
=============================
BGP router identifier 1.1.1.1, local AS number 65030 vrf-id 0
BGP table version 7
RIB entries 13, using 2392 bytes of memory
Peers 2, using 41 KiB of memory
 
Neighbor            V        AS MsgRcvd MsgSent  TblVer  InQ OutQ  Up/Down State/PfxRcd
VX01(192.168.20.11) 4      65031      44      44        0    0    0 00:01:43            3
VX02(192.168.20.12) 4      65032      36      36        0    0    0 00:01:20            3
 
Total number of neighbors 2
 
 
show bgp ipv6 unicast summary
=============================
% No BGP neighbors found
 
 
show bgp l2vpn evpn summary
===========================
% No BGP neighbors found
cumulus@VX00:mgmt:~$
}}


{{console|body=
cumulus@VX01:mgmt:~$ net show bgp summary
show bgp ipv4 unicast summary
=============================
BGP router identifier 2.2.2.2, local AS number 65031 vrf-id 0
BGP table version 7
RIB entries 13, using 2392 bytes of memory
Peers 1, using 21 KiB of memory
 
Neighbor            V        AS MsgRcvd MsgSent  TblVer  InQ OutQ  Up/Down State/PfxRcd
VX00(192.168.20.10) 4      65030      51      51        0    0    0 00:02:04            5
 
Total number of neighbors 1
 


show bgp ipv6 unicast summary
=============================
% No BGP neighbors found
 
 
show bgp l2vpn evpn summary
===========================
% No BGP neighbors found
cumulus@VX01:mgmt:~$
}}
 
{{console|body=
cumulus@VX02:mgmt:~$ net show bgp summary
show bgp ipv4 unicast summary
=============================
BGP router identifier 3.3.3.3, local AS number 65032 vrf-id 0
BGP table version 7
RIB entries 13, using 2392 bytes of memory
Peers 1, using 21 KiB of memory
 
Neighbor            V        AS MsgRcvd MsgSent  TblVer  InQ OutQ  Up/Down State/PfxRcd
VX00(192.168.20.10) 4      65030      49      49        0    0    0 00:01:58            5
 
Total number of neighbors 1
 
 
show bgp ipv6 unicast summary
=============================
% No BGP neighbors found
 
 
show bgp l2vpn evpn summary
===========================
% No BGP neighbors found
cumulus@VX02:mgmt:~$
}}
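
As an additional check you can verify on the leaf routers that the loopbacks learned via BGP actually made it into the routing table. A small sketch from VX01 (the equivalent check applies to VX02); output omitted here:

{{console|body=
cumulus@VX01:mgmt:~$ net show route 1.1.1.1/32
cumulus@VX01:mgmt:~$ net show route 3.3.3.3/32
}}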
 
Now that the Cumulus VX virtual machines have been configured correctly, we are ready to create the Terraform script in order to clone them properly.
 
The Terraform script can be found below:


{{console|body=
❯ tree
├── main.tf
├── terraform.tfvars
├── variables.tf
}}

'''terraform.tfvars'''
<div class="toccolours mw-collapsible mw-collapsed">
'''CLICK ON EXPAND ===> ON THE RIGHT ===> TO SEE THE OUTPUT (terraform.tfvars code) ===>''' :
<div class="mw-collapsible-content">{{console|body=
vsphere_user = "administrator@vsphere.local"
vsphere_password = "<my vCenter Server password>"
vsphere_server = "vcsa-01.home.local"
vsphere_datacenter = "HOME"
vsphere_datastore = "vsanDatastore"
vsphere_resource_pool = "Lab1"
vsphere_network_01 = "L1-APP-MGMT11"
vsphere_network_02 = "SEG-VLAN16"
vsphere_network_03 = "L1-TRANSIT-LEAF-SPINE-20"
vsphere_network_04 = "VM Network"
#
vsphere_network_05 = "L1-BGP-UPLINK-01-18"
vsphere_network_06 = "L1-BGP-UPLINK-02-19"
#
vsphere_virtual_machine_template_vx00 = "VX00-template"
vsphere_virtual_machine_name_vx00 = "l1-vx00"
#
vsphere_virtual_machine_template_vx01 = "VX01-template"
vsphere_virtual_machine_name_vx01 = "l1-vx01"
#
vsphere_virtual_machine_template_vx02 = "VX02-template"
vsphere_virtual_machine_name_vx02 = "l1-vx02"
}}</div>
</div>

'''variables.tf'''
<div class="toccolours mw-collapsible mw-collapsed">
'''CLICK ON EXPAND ===> ON THE RIGHT ===> TO SEE THE OUTPUT (variables.tf code) ===>''' :
<div class="mw-collapsible-content">{{console|body=
# vsphere login account. defaults to admin account
variable "vsphere_user" {
  default = "administrator@vsphere.local"
}

# vsphere account password. empty by default.
variable "vsphere_password" {
  default = "<my vCenter Server password>"
}

# vsphere server, defaults to localhost
variable "vsphere_server" {
  default = "vcsa-01.home.local"
}

# vsphere datacenter the virtual machine will be deployed to. empty by default.
variable "vsphere_datacenter" {}

# vsphere resource pool the virtual machine will be deployed to. empty by default.
variable "vsphere_resource_pool" {}

# vsphere datastore the virtual machine will be deployed to. empty by default.
variable "vsphere_datastore" {}

# vsphere network the virtual machine will be connected to. empty by default.
variable "vsphere_network_01" {}
variable "vsphere_network_02" {}
variable "vsphere_network_03" {}
variable "vsphere_network_04" {}
variable "vsphere_network_05" {}
variable "vsphere_network_06" {}

# vsphere virtual machine template that the virtual machine will be cloned from. empty by default.
variable "vsphere_virtual_machine_template_vx00" {}
variable "vsphere_virtual_machine_template_vx01" {}
variable "vsphere_virtual_machine_template_vx02" {}

# the name of the vsphere virtual machine that is created. empty by default.
variable "vsphere_virtual_machine_name_vx00" {}
variable "vsphere_virtual_machine_name_vx01" {}
variable "vsphere_virtual_machine_name_vx02" {}
}}</div>
</div>

'''main.tf'''
<div class="toccolours mw-collapsible mw-collapsed">
'''CLICK ON EXPAND ===> ON THE RIGHT ===> TO SEE THE OUTPUT (main.tf code) ===>''' :
<div class="mw-collapsible-content">{{console|body=
# use terraform 0.11
provider "vsphere" {
  user                 = "${var.vsphere_user}"
  password             = "${var.vsphere_password}"
  vsphere_server       = "${var.vsphere_server}"
  allow_unverified_ssl = true
}

data "vsphere_datacenter" "dc" {
  name = "${var.vsphere_datacenter}"
}

data "vsphere_datastore" "datastore" {
  name          = "${var.vsphere_datastore}"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_resource_pool" "pool" {
  name          = "${var.vsphere_resource_pool}"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_network" "network_01" {
  name          = "${var.vsphere_network_01}"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}
data "vsphere_network" "network_02" {
  name          = "${var.vsphere_network_02}"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}
data "vsphere_network" "network_03" {
  name          = "${var.vsphere_network_03}"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}
data "vsphere_network" "network_04" {
  name          = "${var.vsphere_network_04}"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}
data "vsphere_network" "network_05" {
  name          = "${var.vsphere_network_05}"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}
data "vsphere_network" "network_06" {
  name          = "${var.vsphere_network_06}"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_virtual_machine" "template_vx00" {
  name          = "${var.vsphere_virtual_machine_template_vx00}"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_virtual_machine" "template_vx01" {
  name          = "${var.vsphere_virtual_machine_template_vx01}"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_virtual_machine" "template_vx02" {
  name          = "${var.vsphere_virtual_machine_template_vx02}"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

resource "vsphere_virtual_machine" "cloned_virtual_machine_vx00" {
  name             = "${var.vsphere_virtual_machine_name_vx00}"

  wait_for_guest_net_routable = false
  wait_for_guest_net_timeout  = 0

  resource_pool_id = "${data.vsphere_resource_pool.pool.id}"
  datastore_id     = "${data.vsphere_datastore.datastore.id}"
  num_cpus = 2
  memory   = 1024

  guest_id = "${data.vsphere_virtual_machine.template_vx00.guest_id}"

  network_interface {
    network_id   = "${data.vsphere_network.network_01.id}"
    adapter_type = "${data.vsphere_virtual_machine.template_vx00.network_interface_types[0]}"
  }

  network_interface {
    network_id   = "${data.vsphere_network.network_02.id}"
    adapter_type = "${data.vsphere_virtual_machine.template_vx00.network_interface_types[0]}"
  }

  network_interface {
    network_id   = "${data.vsphere_network.network_03.id}"
    adapter_type = "${data.vsphere_virtual_machine.template_vx00.network_interface_types[0]}"
  }

  network_interface {
    network_id   = "${data.vsphere_network.network_04.id}"
    adapter_type = "${data.vsphere_virtual_machine.template_vx00.network_interface_types[0]}"
  }

  disk {
    label = "disk0"
    size  = "6"
  #  unit_number = 0
  }

  clone {
    template_uuid = "${data.vsphere_virtual_machine.template_vx00.id}"
  }
}

resource "vsphere_virtual_machine" "cloned_virtual_machine_vx01" {
  name             = "${var.vsphere_virtual_machine_name_vx01}"

  wait_for_guest_net_routable = false
  wait_for_guest_net_timeout  = 0

  resource_pool_id = "${data.vsphere_resource_pool.pool.id}"
  datastore_id     = "${data.vsphere_datastore.datastore.id}"
  num_cpus = 2
  memory   = 1024

  guest_id = "${data.vsphere_virtual_machine.template_vx01.guest_id}"

  network_interface {
    network_id   = "${data.vsphere_network.network_01.id}"
    adapter_type = "${data.vsphere_virtual_machine.template_vx01.network_interface_types[0]}"
  }

  network_interface {
    network_id   = "${data.vsphere_network.network_03.id}"
    adapter_type = "${data.vsphere_virtual_machine.template_vx01.network_interface_types[0]}"
  }

  network_interface {
    network_id   = "${data.vsphere_network.network_05.id}"
    adapter_type = "${data.vsphere_virtual_machine.template_vx01.network_interface_types[0]}"
  }

  network_interface {
    network_id   = "${data.vsphere_network.network_04.id}"
    adapter_type = "${data.vsphere_virtual_machine.template_vx01.network_interface_types[0]}"
  }

  disk {
    label = "disk0"
    size  = "6"
    unit_number = 0
  }

  clone {
    template_uuid = "${data.vsphere_virtual_machine.template_vx01.id}"
  }
}

resource "vsphere_virtual_machine" "cloned_virtual_machine_vx02" {
  name             = "${var.vsphere_virtual_machine_name_vx02}"

  wait_for_guest_net_routable = false
  wait_for_guest_net_timeout  = 0

  resource_pool_id = "${data.vsphere_resource_pool.pool.id}"
  datastore_id     = "${data.vsphere_datastore.datastore.id}"
  num_cpus = 2
  memory   = 1024

  guest_id = "${data.vsphere_virtual_machine.template_vx02.guest_id}"

  network_interface {
    network_id   = "${data.vsphere_network.network_01.id}"
    adapter_type = "${data.vsphere_virtual_machine.template_vx02.network_interface_types[0]}"
  }

  network_interface {
    network_id   = "${data.vsphere_network.network_03.id}"
    adapter_type = "${data.vsphere_virtual_machine.template_vx02.network_interface_types[0]}"
  }

  network_interface {
    network_id   = "${data.vsphere_network.network_06.id}"
    adapter_type = "${data.vsphere_virtual_machine.template_vx02.network_interface_types[0]}"
  }

  network_interface {
    network_id   = "${data.vsphere_network.network_04.id}"
    adapter_type = "${data.vsphere_virtual_machine.template_vx02.network_interface_types[0]}"
  }

  disk {
    label = "disk0"
    size  = "6"
    unit_number = 0
  }

  clone {
    template_uuid = "${data.vsphere_virtual_machine.template_vx02.id}"
  }
}
}}</div>
</div>
So we are ready to execute the Terraform code on a per-directory basis.


Validate your code:
{{console|body=
ihoogendoor-a01:#Test iwanhoogendoorn$ tfenv use 0.12.24
[INFO] Switching to v0.12.24
[INFO] Switching completed
ihoogendoor-a01:Test iwanhoogendoorn$ terraform validate
}}

Plan your code:
{{console|body=
ihoogendoor-a01:Test iwanhoogendoorn$ terraform plan
}}

Execute your code to implement the configuration:
{{console|body=
ihoogendoor-a01:Test iwanhoogendoorn$ terraform apply
}}

When the virtual machines need to be removed again you can revert the implementation:
{{console|body=
ihoogendoor-a01:Test iwanhoogendoorn$ terraform destroy
}}

This Terraform script will clone the three virtual machines with the IP addresses and BGP peers preconfigured.

== Sources ==
* [https://www.terraform.io/docs/providers/vsphere/r/virtual_machine.html#disk-options Source 1]
* [https://sdorsett.github.io/post/2018-12-24-using-terraform-to-clone-a-virtual-machine-on-vsphere/ Source 2]


[[Category:Articles]]

Terraform automation: vCenter Server

Because we want to deploy these lab instances multiple times, we need to pre-install and pre-configure the vCenter Server a little bit before we can deploy it in our nested home labs. When we have a fully installed vCenter Server instance, we can clone it into each lab instance that we create. In the article The nested labs project overview/introduction (http://www.iwan.wiki/The_nested_labs_project_overview/introduction) I explain how many lab instances I can create (or replicate).

There are plenty of blog articles on how to deploy a vCenter Server, and there are a few ways to do this. I use the CLI deployment, where we have to use a .JSON file with input parameters to deploy the full vCenter Server. I learned this a few years ago from my colleague Wesley van Ede.

So before I started the deployment I prepared the .JSON file:

vcsatemplate.json


{
    "__version": "2.13.0",
    "__comments": "Sample template to deploy a vCenter Server Appliance with an embedded Platform Services Controller on a vCenter Server instance.",
    "new_vcsa": {
        "vc": {
            "__comments": [
                "'datacenter' must end with a datacenter name, and only with a datacenter name. ",
                "'target' must end with an ESXi hostname, a cluster name, or a resource pool name. ",
                "The item 'Resources' must precede the resource pool name. ",
                "All names are case-sensitive. ",
                "For details and examples, refer to template help, i.e. vcsa-deploy {install|upgrade|migrate} --template-help"
            ],
            "hostname": "vcsa-01.home.local",
            "username": "administrator@vsphere.local",
            "password": "<my vCenter Server password>",
            "deployment_network": "L1-APP-MGMT11",
            "datacenter": [
                "HOME"
            ],
            "datastore": "vsanDatastore",
            "target": [
                "Compute-New"
            ]
        },
        "appliance": {
            "__comments": [
                "You must provide the 'deployment_option' key with a value, which will affect the VCSA's configuration parameters, such as the VCSA's number of vCPUs, the memory size, the storage size, and the maximum numbers of ESXi hosts and VMs which can be managed. For a list of acceptable values, run the supported deployment sizes help, i.e. vcsa-deploy --supported-deployment-sizes"
            ],
            "thin_disk_mode": true,
            "deployment_option": "small",
            "name": "Embedded-vCenter-Server-Appliance"
        },
        "network": {
            "ip_family": "ipv4",
            "mode": "static",
            "ip": "192.168.11.10",
            "dns_servers": [
                "192.168.11.11"
            ],
            "prefix": "24",
            "gateway": "192.168.11.1",
            "system_name": "192.168.11.10"
        },
        "os": {
            "password": "<password for the vCenter Server you are deploying>",
            "ntp_servers": "192.168.11.11",
            "ssh_enable": true
        },
        "sso": {
            "password": "<SSO password for the vCenter Server you are deploying>",
            "domain_name": "vsphere.local"
        }
    },
    "ceip": {
        "description": {
            "__comments": [
                "++++VMware Customer Experience Improvement Program (CEIP)++++",
                "VMware's Customer Experience Improvement Program (CEIP) ",
                "provides VMware with information that enables VMware to ",
                "improve its products and services, to fix problems, ",
                "and to advise you on how best to deploy and use our ",
                "products. As part of CEIP, VMware collects technical ",
                "information about your organization's use of VMware ",
                "products and services on a regular basis in association ",
                "with your organization's VMware license key(s). This ",
                "information does not personally identify any individual. ",
                "",
                "Additional information regarding the data collected ",
                "through CEIP and the purposes for which it is used by ",
                "VMware is set forth in the Trust & Assurance Center at ",
                "http://www.vmware.com/trustvmware/ceip.html . If you ",
                "prefer not to participate in VMware's CEIP for this ",
                "product, you should disable CEIP by setting ",
                "'ceip_enabled': false. You may join or leave VMware's ",
                "CEIP for this product at any time. Please confirm your ",
                "acknowledgement by passing in the parameter ",
                "--acknowledge-ceip in the command line.",
                "++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++"
            ]
        },
        "settings": {
            "ceip_enabled": false
        }
    }
}

Other prerequisites that need to be in place are:

  • Create the “L1-APP-MGMT11” NSX-T Segment on the NSX-T infra
  • Clone the control server VM and make sure a new DNS zone “lab1.local” exists on the DNS server
  • Create a new A record for the new VCSA deployment on the control server

NOTE: Before deploying the vCenter Server (template), the control server needs to be online and the DNS zone + DNS records need to be configured, because the vCenter Server needs these for the initial deployment. When the install is done you can turn it off again until the full lab deployment (clone) later.
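
Before kicking off the installer it is a good idea to confirm that the new A record actually resolves against the lab DNS server. A quick check could look like this; the FQDN is a placeholder for whatever record you created, and 192.168.11.11 is the DNS server from the JSON template:

nslookup <FQDN-of-the-new-VCSA> 192.168.11.11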

This installation is done using the CLI deployment tool using the command:

E:\vcsa-cli-installer\win32>vcsa-deploy.exe install C:\Scripts\PowerCLI\Terraform\VCSA\vcsatemplate.json --accept-eula --no-ssl-certificate-verification
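
If you first want to check the JSON for input errors without deploying anything, the same installer can be pointed at the template in verification mode (assuming the --verify-template-only option is available in your installer version):

E:\vcsa-cli-installer\win32>vcsa-deploy.exe install C:\Scripts\PowerCLI\Terraform\VCSA\vcsatemplate.json --accept-eula --verify-template-only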

After deployment, I turned off the VCSA and downgraded the memory from 16 GB to 8 GB.
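
As a sketch, the memory downgrade can also be done with PowerCLI instead of the vSphere Client (assuming the appliance VM still has the name from the JSON template and is already powered off):

Connect-VIServer -Server vcsa-01.home.local -User administrator@vsphere.local
Set-VM -VM "Embedded-vCenter-Server-Appliance" -MemoryGB 8 -Confirm:$false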

Now that the vCenter Server (template) is fully installed we are ready to clone it with Terraform. The scripts for cloning the vCenter Server can be found below:

❯ tree
├── main.tf    
├── terraform.tfvars
├── variables.tf

terraform.tfvars


vsphere_user = "administrator@vsphere.local"
vsphere_password = "<my vCenter Server Password>"
vsphere_server = "vcsa-01.home.local"
vsphere_datacenter = "HOME"
vsphere_datastore = "vsanDatastore"
vsphere_resource_pool = "Lab1"
vsphere_network = "L1-APP-MGMT11"
vsphere_virtual_machine_template = "vcsa-template"
vsphere_virtual_machine_name = "l1-vcsa"
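
Note that the password does not have to live in terraform.tfvars: Terraform also reads variables from TF_VAR_-prefixed environment variables, so it could be supplied like this instead:

export TF_VAR_vsphere_password='<my vCenter Server Password>'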

variables.tf


# vsphere login account. defaults to admin account
variable "vsphere_user" {
  default = "administrator@vsphere.local"
}

# vsphere account password. empty by default.
variable "vsphere_password" {
  default = "<my vCenter Server Password>"
}

# vsphere server, defaults to localhost
variable "vsphere_server" {
  default = "vcsa-01.home.local"
}

# vsphere datacenter the virtual machine will be deployed to. empty by default.
variable "vsphere_datacenter" {}

# vsphere resource pool the virtual machine will be deployed to. empty by default.
variable "vsphere_resource_pool" {}

# vsphere datastore the virtual machine will be deployed to. empty by default.
variable "vsphere_datastore" {}

# vsphere network the virtual machine will be connected to. empty by default.
variable "vsphere_network" {}

# vsphere virtual machine template that the virtual machine will be cloned from. empty by default.
variable "vsphere_virtual_machine_template" {}

# the name of the vsphere virtual machine that is created. empty by default.
variable "vsphere_virtual_machine_name" {}

main.tf


provider "vsphere" {
  user           = "${var.vsphere_user}"
  password       = "${var.vsphere_password}"
  vsphere_server = "${var.vsphere_server}"
  allow_unverified_ssl = true
}

data "vsphere_datacenter" "dc" {
  name = "${var.vsphere_datacenter}"
}

data "vsphere_datastore" "datastore" {
  name          = "${var.vsphere_datastore}"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_resource_pool" "pool" {
  name          = "${var.vsphere_resource_pool}"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_network" "network" {
  name          = "${var.vsphere_network}"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_virtual_machine" "template" {
  name          = "${var.vsphere_virtual_machine_template}"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

resource "vsphere_virtual_machine" "cloned_virtual_machine" {
  name             = "${var.vsphere_virtual_machine_name}"
  resource_pool_id = "${data.vsphere_resource_pool.pool.id}"
  datastore_id     = "${data.vsphere_datastore.datastore.id}"
  num_cpus = 4
  memory   = 8192

  #num_cpus = "${data.vsphere_virtual_machine.template.num_cpus}"
  #memory   = "${data.vsphere_virtual_machine.template.memory}"
  guest_id = "${data.vsphere_virtual_machine.template.guest_id}"

  scsi_type = "${data.vsphere_virtual_machine.template.scsi_type}"

  network_interface {
    network_id   = "${data.vsphere_network.network.id}"
    adapter_type = "${data.vsphere_virtual_machine.template.network_interface_types[0]}"
  }

  disk {
    label = "disk0"
    size  = "12"
  #  unit_number = 0
  }

  disk {
    label = "disk1"
    size  = "2"
    unit_number = 1
  }

  disk {
    label = "disk2"
    size  = "25"
    unit_number = 2
  }

  disk {
    label = "disk3"
    size  = "50"
    unit_number = 3
  }

  disk {
    label = "disk4"
    size  = "10"
    unit_number = 4
  }

  disk {
    label = "disk5"
    size  = "10"
    unit_number = 5
  }

  disk {
    label = "disk6"
    size  = "15"
    unit_number = 6
  }

  disk {
    label = "disk7"
    size  = "25"
    unit_number = 7
  }

  disk {
    label = "disk8"
    size  = "1"
    unit_number = 8
  }

  disk {
    label = "disk9"
    size  = "10"
    unit_number = 9
  }

  disk {
    label = "disk10"
    size  = "10"
    unit_number = 10
  }

  disk {
    label = "disk11"
    size  = "100"
    unit_number = 11
  }

  disk {
    label = "disk12"
    size  = "50"
    unit_number = 12
  }

  clone {
    template_uuid = "${data.vsphere_virtual_machine.template.id}"
  }
}

So we are ready to execute the Terraform code on a per-directory basis.
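
Depending on the Terraform version, the working directory may first need to be initialized so the vSphere provider gets downloaded:

ihoogendoor-a01:Test iwanhoogendoorn$ terraform init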

Validate your code:

ihoogendoor-a01:#Test iwanhoogendoorn$ tfenv use 0.11.14
[INFO] Switching to v0.11.14
[INFO] Switching completed
ihoogendoor-a01:Test iwanhoogendoorn$ terraform validate

Plan your code:

ihoogendoor-a01:Test iwanhoogendoorn$ terraform plan

Execute your code to implement the Segments:

ihoogendoor-a01:Test iwanhoogendoorn$ terraform apply

When the segments need to be removed again you can revert the implementation:

ihoogendoor-a01:Test iwanhoogendoorn$ terraform destroy
