# OpenShift Origin Deployment Template

This Terraform template is based on this Azure Quickstart Template. Changes made to the ARM template since this example was created may not be reflected here.

## OpenShift Origin with Username / Password

The current template deploys OpenShift Origin 1.5 RC0.

This template deploys OpenShift Origin with basic username / password authentication to OpenShift. You can select either CentOS or RHEL for the OS. It includes the following resources:

| Resource | Properties |
|----------|------------|
| Virtual Network | Address prefix: 10.0.0.0/16<br>Master subnet: 10.0.0.0/24<br>Node subnet: 10.0.1.0/24 |
| Load Balancer | 2 probes and 2 rules for TCP 80 and TCP 443 |
| Public IP Addresses | OpenShift Master public IP<br>OpenShift Router public IP attached to Load Balancer |
| Storage Accounts | 2 Storage Accounts |
| Virtual Machines | Single master<br>User-defined number of nodes<br>All VMs include a single attached data disk for the Docker thin pool logical volume |

If you have a Red Hat subscription and would like to deploy an OpenShift Container Platform (formerly OpenShift Enterprise) cluster, please visit: https://github.com/Microsoft/openshift-container-platform

## Generate SSH Keys

You'll need to generate an SSH key pair in order to provision this template. Ensure that you do not set a passphrase on the private key.
If you are using a Windows computer, you can download puttygen.exe. You will need to export to OpenSSH format (from the Conversions menu) to get a valid private key for use in the template.
On Linux or macOS, you can simply use the ssh-keygen command. Once you are finished deploying the cluster, you can always generate a new key pair that uses a passphrase and replace the original one used during the initial deployment.
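For example, a minimal invocation on Linux or macOS might look like the sketch below (the key file name `openshift_rsa` is only an illustration; use any path you prefer):

```bash
# Generate a 4096-bit RSA key pair with an empty passphrase (-N "")
# so the Terraform provisioners can use it non-interactively.
ssh-keygen -t rsa -b 4096 -f ~/.ssh/openshift_rsa -N ""

# ~/.ssh/openshift_rsa.pub is the public key referenced by the template;
# ~/.ssh/openshift_rsa is the private key you will store in Key Vault below.
```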

## Create Key Vault to store SSH Private Key

You will need to create a Key Vault to store your SSH Private Key that will then be used as part of the deployment.

  1. Create Key Vault using PowerShell
    a. Create a new resource group: `New-AzureRMResourceGroup -Name 'ResourceGroupName' -Location 'West US'`
    b. Create the Key Vault: `New-AzureRmKeyVault -VaultName 'KeyVaultName' -ResourceGroup 'ResourceGroupName' -Location 'West US'`
    c. Create a variable with the sshPrivateKey: `$securesecret = ConvertTo-SecureString -String '[copy ssh Private Key here - including line feeds]' -AsPlainText -Force`
    d. Create the secret: `Set-AzureKeyVaultSecret -Name 'SecretName' -SecretValue $securesecret -VaultName 'KeyVaultName'`
    e. Enable the Key Vault for template deployments: `Set-AzureRmKeyVaultAccessPolicy -VaultName 'KeyVaultName' -ResourceGroupName 'ResourceGroupName' -EnabledForTemplateDeployment`

  2. Create Key Vault using Azure CLI 1.0
    a. Create a new resource group: `azure group create <name> <location>`
       Ex: `azure group create ResourceGroupName 'East US'`
    b. Create the Key Vault: `azure keyvault create -u <vault-name> -g <resource-group> -l <location>`
       Ex: `azure keyvault create -u KeyVaultName -g ResourceGroupName -l 'East US'`
    c. Create the secret: `azure keyvault secret set -u <vault-name> -s <secret-name> --file <private-key-file-name>`
       Ex: `azure keyvault secret set -u KeyVaultName -s SecretName --file ~/.ssh/id_rsa`
    d. Enable the Key Vault for template deployment: `azure keyvault set-policy -u <vault-name> --enabled-for-template-deployment true`
       Ex: `azure keyvault set-policy -u KeyVaultName --enabled-for-template-deployment true`

  3. Create Key Vault using Azure CLI 2.0 (a consolidated sketch of these commands follows this list)
    a. Create a new resource group: `az group create -n <name> -l <location>`
       Ex: `az group create -n ResourceGroupName -l 'East US'`
    b. Create the Key Vault: `az keyvault create -n <vault-name> -g <resource-group> -l <location> --enabled-for-template-deployment true`
       Ex: `az keyvault create -n KeyVaultName -g ResourceGroupName -l 'East US' --enabled-for-template-deployment true`
    c. Create the secret: `az keyvault secret set --vault-name <vault-name> -n <secret-name> --file <private-key-file-name>`
       Ex: `az keyvault secret set --vault-name KeyVaultName -n SecretName --file ~/.ssh/id_rsa`

  4. Clone the OpenShift repository here
    a. Note the local script path; this will be needed for the remote-exec provisioners on the remote machines.
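Taken together, the Azure CLI 2.0 variant of these steps looks roughly like the sketch below; the resource group, vault and secret names are the same placeholder values used above, and the key path assumes the key generated earlier:

```bash
# 1. Resource group that will hold the Key Vault
az group create -n ResourceGroupName -l 'East US'

# 2. Key Vault enabled for template deployment so the secret can be
#    referenced while provisioning the VMs
az keyvault create -n KeyVaultName -g ResourceGroupName -l 'East US' \
  --enabled-for-template-deployment true

# 3. Store the SSH private key as a secret
az keyvault secret set --vault-name KeyVaultName -n SecretName --file ~/.ssh/id_rsa
```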

## Deploy Template

Once you have collected all of the prerequisites for the template, you can deploy it with Terraform.
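A typical run from this example's directory might look like the following sketch; it assumes a reasonably recent Terraform CLI and that your variable values are supplied through a terraform.tfvars file or -var flags:

```bash
# Initialize the working directory (providers, modules)
terraform init

# Review the resources that will be created, then apply
terraform plan
terraform apply

# After a successful apply, the outputs include the console URL
terraform output
```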

Monitor the deployment via Terraform and get the console URL from the outputs of a successful deployment. It will look something like the following (if using the sample parameters file and the "West US 2" location):

https://me-master1.westus2.cloudapp.azure.com:8443/console

The cluster will use self-signed certificates. Accept the warning and proceed to the login page.

NOTE

Ensure that the `openshiftMasterPublicIpDnsLabelPrefix` and `nodeLbPublicIpDnsLabelPrefix` parameters, combined with the deployment location, give you globally unique URLs for the cluster, or the deployment will fail at the step of allocating public IPs with fully-qualified domain names as above.

NOTE

This template deploys a bastion host solely so that the connection provisioner and remote-exec can run commands on machines without public IPs; note the explicit dependencies on the order in which VMs are created for this to work properly.

NOTE

The OpenShift Ansible playbook does take a while to run when using VMs backed by Standard Storage. VMs backed by Premium Storage are faster. If you want Premium Storage, select a DS or GS series VM.


Be sure to follow the OpenShift instructions to create the necessary DNS entry for the OpenShift Router to allow access to applications.

## Post-Deployment Operations

This template creates an OpenShift user but does not grant it cluster administrator privileges. To do that, perform the following steps.

  1. SSH in to the master node

  2. Execute the following command:

    sudo oadm policy add-cluster-role-to-user cluster-admin <user>
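
A quick way to confirm the change is to log in as that OpenShift user with the `oc` CLI and list a cluster-scoped resource; `<user>` below is whatever user name you granted the role to above:

```bash
# Log in as the OpenShift user that was just granted cluster-admin
oc login -u <user>

# Listing nodes requires cluster-level permissions, so this should now succeed
oc get nodes
```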
    

## Additional OpenShift Configuration Options

You can configure additional settings per the official OpenShift Origin Documentation.

A few options you have:

  1. Deployment Output

    a. `openshiftConsoleUrl` - the OpenShift console URL
    b. `openshiftMasterSsh` - the SSH command for the master node
    c. `openshiftNodeLoadBalancerFQDN` - the node load balancer FQDN

    To get the deployment output data:

    a. portal.azure.com -> choose 'Resource groups', select your group, select 'Deployments' and there the deployment 'Microsoft.Template'. The output of the deployment contains information about the OpenShift console URL, the SSH command and the load balancer URL.
    b. With the Azure CLI: `azure group deployment list <resource group name>`

  2. Add additional users. You can find more detail about this in the openshift.org documentation under 'Cluster Administration' and 'Managing Users'. This installation uses htpasswd as the identity provider. To add more users, SSH in to the master node and execute the following command:

    sudo htpasswd /etc/origin/master/htpasswd user1

Now this user can log in with the 'oc' CLI tool or via the OpenShift console URL.
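
For example, logging in from your workstation against the console URL shown earlier might look like the sketch below (the host name comes from the sample output above and `user1` is the illustrative user added with htpasswd):

```bash
# Authenticate against the public master endpoint; the cluster uses
# self-signed certificates, hence the flag to skip TLS verification.
oc login https://me-master1.westus2.cloudapp.azure.com:8443 -u user1 --insecure-skip-tls-verify=true

# List the projects this user can see
oc get projects
```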