provider/azurerm: Example of VM Scale Set with Ubuntu (#15290)
* initial commit - 101-vm-from-user-image
* changed branch name
* not deploying - storage problems
* provisions vm but image not properly prepared
* storage not correct
* provisions properly
* changed main.tf to azuredeploy.tf
* added tfvars and info for README
* tfvars ignored and corrected file ext
* added CI config; added sane defaults for variables; updated deployment script, added mac specific deployment for local testing
* deploy.sh to be executable
* executable deploy files
* added CI files; changed vars
* prep for PR
* removal of old folder
* prep for PR
* wrong args for travis
* more PR prep
* updated README
* commented out variables in terraform.tfvars
* Topic 101 vm from user image (#2)
* initial commit - 101-vm-from-user-image
* added tfvars and info for README
* added CI config; added sane defaults for variables; updated deployment script, added mac specific deployment for local testing
* prep for PR
* added new template
* oops, left off master
* prep for PR
* correct repository for destination
* renamed scripts to be more intuitive; added check for docker
* merge vm simple; vm from image
* initial commit
* deploys locally
* updated deploy
* changed to allow http & https (like ARM tmplt)
* changed host_name & host_name variable desc
* merge master
* added new constructs/naming for deploy scripts, etc.
* suppress az login output
* merge of CI config
* prep for PR
* took out armviz button and minor README changes
* changed host_name
* fixed merge conflicts
* changed host_name variable
* updating Hashicorp's changes to merged simple linux branch
* updating files to merge w/master and prep for Hashicorp pr
* Revert "updating files to merge w/master and prep for Hashicorp pr". This reverts commit b850cd5d2a858eff073fc5a1097a6813d0f8b362.
* Revert "updating Hashicorp's changes to merged simple linux branch". This reverts commit dbaf8d14a9cdfcef0281919671357f6171ebd4e6.
* work in progress; waiting on support for lb inbound nat & autoscale settings
* changing .travis.yml for this branch
* updated deploy validation; readme; travis.yml
* in progress; lb inbound nat pool id argument added
* deploys vmss, not autoscale (no resource)
* merging hashicorp master into this branch
* chmod for deploy scripts
* cleaned up main.tf
* ran tf fmt
* fixed typo in travis.yml
* pinning azuresdk/azure-cli-python version
* typo
* adding comments
* provisions without autoscale
* fixed clean up to destroy rg
* renamed example directory
* reverted to Hashicorp's travis.yml
* merge conflict - return line
* merge conflict - white space
* updated README
This commit is contained in:
parent a37a70b133
commit 36956d863b

@@ -0,0 +1,22 @@
# Linux VM Scale Set

This template deploys a Linux VM Scale Set with the desired instance count. Once the VMSS is deployed, the user can deploy an application inside each of the VMs (either by directly logging into the VMs or via a [`remote-exec` provisioner](https://www.terraform.io/docs/provisioners/remote-exec.html)).

## main.tf

The `main.tf` file contains the actual resources that will be deployed. It also contains the Azure Resource Group definition and any defined variables.

## outputs.tf

This data is output when `terraform apply` is called, and can be queried using the `terraform output` command.

## provider.tf

You may leave the provider block in `main.tf`, as it is in this template, or you can create a file called `provider.tf` and add it to your `.gitignore` file.

Azure requires that an application be added to Azure Active Directory to generate the `client_id`, `client_secret`, and `tenant_id` needed by Terraform (`subscription_id` can be recovered from your Azure account details). Please go [here](https://www.terraform.io/docs/providers/azurerm/) for full instructions on how to create these credentials and populate your `provider.tf` file.

## terraform.tfvars

If a `terraform.tfvars` file is present in the current directory, Terraform automatically loads it to populate variables. We don't recommend saving usernames and passwords to version control, but you can create a local secret variables file and use `-var-file` to load it.
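For illustration, such a local secrets file can be written and then loaded with `-var-file`; the file name and values below are hypothetical placeholders, not part of this template:

```shell
# Write a hypothetical secrets file (placeholder values, not real credentials).
cat > secret.tfvars <<'EOF'
admin_username = "azureuser"
admin_password = "ExamplePassw0rd"
EOF

# It would then be loaded with:
#   terraform plan -var-file=secret.tfvars
cat secret.tfvars
```

Adding `secret.tfvars` to `.gitignore` keeps the credentials out of version control while `-var-file` still makes them available to Terraform.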
## variables.tf

The `variables.tf` file contains all of the input parameters that the user can specify when deploying this Terraform template.

![`terraform graph`](/examples/azure-vmss-ubuntu/graph.png)
@@ -0,0 +1,35 @@

#!/bin/bash

set -o errexit -o nounset

docker run --rm -it \
  -e ARM_CLIENT_ID \
  -e ARM_CLIENT_SECRET \
  -e ARM_SUBSCRIPTION_ID \
  -e ARM_TENANT_ID \
  -v $(pwd):/data \
  --entrypoint "/bin/sh" \
  hashicorp/terraform:light \
  -c "cd /data; \
      /bin/terraform get; \
      /bin/terraform validate; \
      /bin/terraform plan -out=out.tfplan -var admin_username=$KEY -var hostname=$KEY -var vmss_name=$KEY -var resource_group=$KEY -var admin_password=$PASSWORD; \
      /bin/terraform apply out.tfplan"

# list deployed azure resources via azure-cli
docker run --rm -it \
  azuresdk/azure-cli-python:0.2.10 \
  sh -c "az login --service-principal -u $ARM_CLIENT_ID -p $ARM_CLIENT_SECRET --tenant $ARM_TENANT_ID > /dev/null; \
         az resource list -g $KEY;"

# cleanup deployed azure resources via terraform
docker run --rm -it \
  -e ARM_CLIENT_ID \
  -e ARM_CLIENT_SECRET \
  -e ARM_SUBSCRIPTION_ID \
  -e ARM_TENANT_ID \
  -v $(pwd):/data \
  --workdir=/data \
  --entrypoint "/bin/sh" \
  hashicorp/terraform:light \
  -c "/bin/terraform destroy -force -var resource_group=$KEY -var admin_username=$KEY -var hostname=$KEY -var vmss_name=$KEY -var admin_password=$PASSWORD;"
@@ -0,0 +1,15 @@

#!/bin/bash

set -o errexit -o nounset

if docker -v; then

  # generate a unique string for CI deployment
  export KEY=$(cat /dev/urandom | env LC_CTYPE=C tr -cd 'a-z' | head -c 12)
  export PASSWORD=$KEY$(cat /dev/urandom | env LC_CTYPE=C tr -cd 'A-Z' | head -c 2)$(cat /dev/urandom | env LC_CTYPE=C tr -cd '0-9' | head -c 2)

  /bin/sh ./deploy.ci.sh

else
  echo "Docker is used to run terraform commands, please install it before running: https://docs.docker.com/docker-for-mac/install/"
fi
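The unique-string generation in the script above can be exercised on its own: `KEY` is 12 random lowercase letters, and `PASSWORD` appends two uppercase letters and two digits to it, giving three character classes. A sketch of the same pipeline:

```shell
# Same generation pipeline as the deploy script:
# 12 lowercase letters, then 2 uppercase letters, then 2 digits.
KEY=$(cat /dev/urandom | env LC_CTYPE=C tr -cd 'a-z' | head -c 12)
PASSWORD=$KEY$(cat /dev/urandom | env LC_CTYPE=C tr -cd 'A-Z' | head -c 2)$(cat /dev/urandom | env LC_CTYPE=C tr -cd '0-9' | head -c 2)

echo "$KEY"      | grep -Eq '^[a-z]{12}$' && echo "key format ok"
echo "$PASSWORD" | grep -Eq '^[a-z]{12}[A-Z]{2}[0-9]{2}$' && echo "password format ok"
```

Because the same `$KEY` is reused for the resource group, hostname, and VMSS name, every CI run gets an isolated, cleanly destroyable set of resources.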
Binary file not shown.
After Width: | Height: | Size: 202 KiB
@@ -0,0 +1,127 @@

# provider "azurerm" {
#   subscription_id = "${var.subscription_id}"
#   client_id       = "${var.client_id}"
#   client_secret   = "${var.client_secret}"
#   tenant_id       = "${var.tenant_id}"
# }

resource "azurerm_resource_group" "rg" {
  name     = "${var.resource_group}"
  location = "${var.location}"
}

resource "azurerm_virtual_network" "vnet" {
  name                = "${var.resource_group}vnet"
  location            = "${azurerm_resource_group.rg.location}"
  address_space       = ["10.0.0.0/16"]
  resource_group_name = "${azurerm_resource_group.rg.name}"
}

resource "azurerm_subnet" "subnet" {
  name                 = "subnet"
  address_prefix       = "10.0.0.0/24"
  resource_group_name  = "${azurerm_resource_group.rg.name}"
  virtual_network_name = "${azurerm_virtual_network.vnet.name}"
}

resource "azurerm_public_ip" "pip" {
  name                         = "${var.hostname}-pip"
  location                     = "${azurerm_resource_group.rg.location}"
  resource_group_name          = "${azurerm_resource_group.rg.name}"
  public_ip_address_allocation = "Dynamic"
  domain_name_label            = "${var.hostname}"
}

resource "azurerm_lb" "lb" {
  name                = "LoadBalancer"
  location            = "${azurerm_resource_group.rg.location}"
  resource_group_name = "${azurerm_resource_group.rg.name}"
  depends_on          = ["azurerm_public_ip.pip"]

  frontend_ip_configuration {
    name                 = "LBFrontEnd"
    public_ip_address_id = "${azurerm_public_ip.pip.id}"
  }
}

resource "azurerm_lb_backend_address_pool" "backlb" {
  name                = "BackEndAddressPool"
  resource_group_name = "${azurerm_resource_group.rg.name}"
  loadbalancer_id     = "${azurerm_lb.lb.id}"
}

resource "azurerm_lb_nat_pool" "np" {
  resource_group_name            = "${azurerm_resource_group.rg.name}"
  loadbalancer_id                = "${azurerm_lb.lb.id}"
  name                           = "NATPool"
  protocol                       = "Tcp"
  frontend_port_start            = 50000
  frontend_port_end              = 50119
  backend_port                   = 22
  frontend_ip_configuration_name = "LBFrontEnd"
}

resource "azurerm_storage_account" "stor" {
  name                = "${var.resource_group}stor"
  location            = "${azurerm_resource_group.rg.location}"
  resource_group_name = "${azurerm_resource_group.rg.name}"
  account_type        = "${var.storage_account_type}"
}

resource "azurerm_storage_container" "vhds" {
  name                  = "vhds"
  resource_group_name   = "${azurerm_resource_group.rg.name}"
  storage_account_name  = "${azurerm_storage_account.stor.name}"
  container_access_type = "blob"
}

resource "azurerm_virtual_machine_scale_set" "scaleset" {
  name                = "autoscalewad"
  location            = "${azurerm_resource_group.rg.location}"
  resource_group_name = "${azurerm_resource_group.rg.name}"
  upgrade_policy_mode = "Manual"
  overprovision       = true
  depends_on          = ["azurerm_lb.lb", "azurerm_virtual_network.vnet"]

  sku {
    name     = "${var.vm_sku}"
    tier     = "Standard"
    capacity = "${var.instance_count}"
  }

  os_profile {
    computer_name_prefix = "${var.vmss_name}"
    admin_username       = "${var.admin_username}"
    admin_password       = "${var.admin_password}"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  network_profile {
    name    = "${var.hostname}-nic"
    primary = true

    ip_configuration {
      name                                   = "${var.hostname}ipconfig"
      subnet_id                              = "${azurerm_subnet.subnet.id}"
      load_balancer_backend_address_pool_ids = ["${azurerm_lb_backend_address_pool.backlb.id}"]
      load_balancer_inbound_nat_rules_ids    = ["${element(azurerm_lb_nat_pool.np.*.id, count.index)}"]
    }
  }

  storage_profile_os_disk {
    name           = "${var.hostname}"
    caching        = "ReadWrite"
    create_option  = "FromImage"
    vhd_containers = ["${azurerm_storage_account.stor.primary_blob_endpoint}${azurerm_storage_container.vhds.name}"]
  }

  storage_profile_image_reference {
    publisher = "${var.image_publisher}"
    offer     = "${var.image_offer}"
    sku       = "${var.ubuntu_os_version}"
    version   = "latest"
  }
}
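The NAT pool above gives each instance its own SSH entry point: backend port 22 on instance N is typically exposed on frontend port 50000 + N of the load balancer's public IP, and the range 50000-50119 covers up to 120 instances. A small sketch of that mapping (the hostname below is a hypothetical placeholder for the public IP's FQDN):

```shell
# NAT pool mapping sketch: frontend_port_start + instance index -> backend port 22.
frontend_port_start=50000
instance_index=3
port=$((frontend_port_start + instance_index))

# The real FQDN would be "<hostname>.<location>.cloudapp.azure.com".
echo "ssh -p $port azureuser@example.southcentralus.cloudapp.azure.com"
```

Note that the index-to-port assignment is made by Azure when instances join the pool, so it is stable for a given deployment but not guaranteed to stay aligned with instance IDs across scale-in/scale-out operations.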
@@ -0,0 +1,3 @@

output "hostname" {
  value = "${var.vmss_name}"
}
@@ -0,0 +1,59 @@

# variable "subscription_id" {}
# variable "client_id" {}
# variable "client_secret" {}
# variable "tenant_id" {}

variable "resource_group" {
  description = "The name of the resource group in which to create the virtual network."
}

variable "location" {
  description = "The location/region where the virtual network is created. Changing this forces a new resource to be created."
  default     = "southcentralus"
}

variable "storage_account_type" {
  description = "Specifies the type of the storage account."
  default     = "Standard_LRS"
}

variable "hostname" {
  description = "A string that determines the hostname/IP address of the origin server. This string could be a domain name, IPv4 address or IPv6 address."
}

variable "vm_sku" {
  description = "Size of VMs in the VM Scale Set."
  default     = "Standard_A1"
}

variable "ubuntu_os_version" {
  description = "The Ubuntu version for the VM. This will pick a fully patched image of this given Ubuntu version. Allowed values are: 16.04.0-LTS, 15.10, 14.04.4-LTS."
  default     = "16.04.0-LTS"
}

variable "image_publisher" {
  description = "The name of the publisher of the image (az vm image list)."
  default     = "Canonical"
}

variable "image_offer" {
  description = "The name of the offer (az vm image list)."
  default     = "UbuntuServer"
}

variable "vmss_name" {
  description = "String used as a base for naming resources. Must be 3-61 characters in length and globally unique across Azure. A hash is prepended to this string for some resources, and resource-specific information is appended."
}

variable "instance_count" {
  description = "Number of VM instances (100 or less)."
  default     = "5"
}

variable "admin_username" {
  description = "Admin username on all VMs."
}

variable "admin_password" {
  description = "Admin password on all VMs."
}
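Since `disable_password_authentication` is false, `admin_password` must satisfy Azure's password policy, which requires at least three of four character classes (lowercase, uppercase, digit, special); the minimum length used below is an assumption, as the exact bounds vary by OS image. A hypothetical local pre-flight check:

```shell
# Hypothetical pre-flight check: minimum length (assumed) plus
# 3-of-4 character classes (lowercase, uppercase, digit, special).
check_password() {
  p="$1"
  [ "${#p}" -ge 6 ] || return 1
  classes=0
  case "$p" in *[a-z]*) classes=$((classes + 1)) ;; esac
  case "$p" in *[A-Z]*) classes=$((classes + 1)) ;; esac
  case "$p" in *[0-9]*) classes=$((classes + 1)) ;; esac
  case "$p" in *[!a-zA-Z0-9]*) classes=$((classes + 1)) ;; esac
  [ "$classes" -ge 3 ]
}

check_password "Passw0rdExample" && echo "accepted"
check_password "alllowercase" || echo "rejected"
```

Validating locally avoids waiting for the scale set deployment to fail with a password-policy error from the Azure API.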