Fixes #7374
The introduction of the AzureRM SDK 3.0.0-beta means that the
`name_servers` for the DNS Zone are now returned from the API.
This PR has a dependency on #7420 being merged first.
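A minimal sketch of how those name servers might be flattened into state, assuming the usual helper/schema pattern; the function and attribute names here are illustrative rather than the exact provider code:

```go
package azurerm

import "github.com/hashicorp/terraform/helper/schema"

// flattenDNSZoneNameServers records the name servers Azure computes for the
// zone onto the resource's read-only `name_servers` attribute.
func flattenDNSZoneNameServers(d *schema.ResourceData, nameServers []string) error {
	servers := make([]interface{}, 0, len(nameServers))
	for _, ns := range nameServers {
		servers = append(servers, ns)
	}
	return d.Set("name_servers", servers)
}
```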
```
make testacc TEST=./builtin/providers/azurerm TESTARGS='-run=TestAccAzureRMDnsZone_'
==> Checking that code complies with gofmt requirements...
/Users/stacko/Code/go/bin/stringer
go generate $(go list ./... | grep -v /vendor/)
2016/06/30 15:20:01 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/azurerm -v
-run=TestAccAzureRMDnsZone_ -timeout 120m
=== RUN TestAccAzureRMDnsZone_basic
--- PASS: TestAccAzureRMDnsZone_basic (92.42s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm
92.444s
```
This fixes #7122, where the SG was not fully configured before a
dependent service was created.
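The essence of the fix is to wait until the security group reports a terminal provisioning state before returning, so dependent resources only ever see a fully configured group. The sketch below assumes a hypothetical `getState` helper standing in for the real SDK call:

```go
package azurerm

import (
	"fmt"
	"time"
)

// waitForNSGProvisioned polls until the network security group reaches the
// "Succeeded" provisioning state before Create returns.
func waitForNSGProvisioned(getState func() (string, error), timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		state, err := getState()
		if err != nil {
			return err
		}
		if state == "Succeeded" {
			return nil
		}
		time.Sleep(10 * time.Second)
	}
	return fmt.Errorf("timed out waiting for the network security group to provision")
}
```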
```
make testacc TEST=./builtin/providers/azurerm
TESTARGS='-run=TestAccAzureRMNetworkSecurityGroup_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /vendor/)
TF_ACC=1 go test ./builtin/providers/azurerm -v
-run=TestAccAzureRMNetworkSecurityGroup_ -timeout 120m
=== RUN TestAccAzureRMNetworkSecurityGroup_basic
--- PASS: TestAccAzureRMNetworkSecurityGroup_basic (128.93s)
=== RUN TestAccAzureRMNetworkSecurityGroup_withTags
--- PASS: TestAccAzureRMNetworkSecurityGroup_withTags (164.52s)
=== RUN TestAccAzureRMNetworkSecurityGroup_addingExtraRules
--- PASS: TestAccAzureRMNetworkSecurityGroup_addingExtraRules (178.20s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm
471.677s
```
There were lots of hacky checks in the ARM VM resource to verify that a
param block had a length of 1. MaxItems has since been introduced, so this
PR updates the resource to use MaxItems instead.
When using the winrm config block we hit an addressing issue. It is no
longer a set but a list (a list fits it more appropriately).
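As an illustration, the schema shape this describes would look roughly like the following; the nested attribute names are assumptions, not a copy of the provider code:

```go
package azurerm

import "github.com/hashicorp/terraform/helper/schema"

func osProfileWindowsConfigSchema() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeSet,
		Optional: true,
		MaxItems: 1, // replaces the hand-rolled "length must be 1" checks in the CRUD code
		Elem: &schema.Resource{
			Schema: map[string]*schema.Schema{
				"winrm": {
					Type:     schema.TypeList, // a list suits ordered listener blocks better than a set
					Optional: true,
					Elem: &schema.Resource{
						Schema: map[string]*schema.Schema{
							"protocol": {
								Type:     schema.TypeString,
								Required: true,
							},
						},
					},
				},
			},
		},
	}
}
```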
```
make testacc TEST=./builtin/providers/azurerm
TESTARGS='-run=TestAccAzureRMVirtualMachine_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /vendor/)
TF_ACC=1 go test ./builtin/providers/azurerm -v
-run=TestAccAzureRMVirtualMachine_ -timeout 120m
=== RUN TestAccAzureRMVirtualMachine_basicLinuxMachine
--- PASS: TestAccAzureRMVirtualMachine_basicLinuxMachine (587.80s)
=== RUN TestAccAzureRMVirtualMachine_tags
--- PASS: TestAccAzureRMVirtualMachine_tags (554.53s)
=== RUN TestAccAzureRMVirtualMachine_updateMachineSize
--- PASS: TestAccAzureRMVirtualMachine_updateMachineSize (612.52s)
=== RUN TestAccAzureRMVirtualMachine_basicWindowsMachine
--- PASS: TestAccAzureRMVirtualMachine_basicWindowsMachine (765.90s)
=== RUN TestAccAzureRMVirtualMachine_windowsUnattendedConfig
--- PASS: TestAccAzureRMVirtualMachine_windowsUnattendedConfig
(770.53s)
=== RUN TestAccAzureRMVirtualMachine_winRMConfig
--- PASS: TestAccAzureRMVirtualMachine_winRMConfig (827.90s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm
4119.192s
```
* provider/azurerm: DNS CNAME resource wasn't posting records
Azure changed the API for CNAME at some point and since then we haven't
been creating CNAME records. The API changed from a list of records to a
single record.
This PR changes the schema for DNS CNAMEs to have a `record` parameter and
adds a deprecation warning around `records`. Talked with @jen20 about this
and we decided that, since it's currently broken, we should handle this as
part of 0.7 where there are other breaking changes.
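A sketch of the schema change described above, assuming the usual helper/schema layout; the deprecation message wording is illustrative:

```go
package azurerm

import "github.com/hashicorp/terraform/helper/schema"

func dnsCNameRecordSchema() map[string]*schema.Schema {
	return map[string]*schema.Schema{
		// The API now takes a single CNAME value rather than a list.
		"record": {
			Type:     schema.TypeString,
			Optional: true,
		},
		// Kept only to warn existing configurations ahead of 0.7.
		"records": {
			Type:       schema.TypeSet,
			Optional:   true,
			Elem:       &schema.Schema{Type: schema.TypeString},
			Set:        schema.HashString,
			Deprecated: "use `record` instead; the Azure API accepts a single CNAME value",
		},
	}
}
```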
```
TF_LOG=1 make testacc TEST=./builtin/providers/azurerm
TESTARGS='-run=TestAccAzureRMDnsCNameRecord' 2>~/tf.log
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /vendor/)
TF_ACC=1 go test ./builtin/providers/azurerm -v
-run=TestAccAzureRMDnsCNameRecord -timeout 120m
=== RUN TestAccAzureRMDnsCNameRecord_basic
--- PASS: TestAccAzureRMDnsCNameRecord_basic (97.22s)
=== RUN TestAccAzureRMDnsCNameRecord_subdomain
--- PASS: TestAccAzureRMDnsCNameRecord_subdomain (94.94s)
=== RUN TestAccAzureRMDnsCNameRecord_updateRecords
--- PASS: TestAccAzureRMDnsCNameRecord_updateRecords (116.62s)
```
* Change DNS Records to removed rather than deprecated
Fixes #7053 where, when using `additional_unattend_config` in
`os_profile_windows_config`, we got an error as follows:
```
azurerm_virtual_machine.test: [DEBUG] Error setting Virtual Machine
Storage OS Profile Windows Configuration: &errors.errorString{s:"Invalid
address to set: []string{\"os_profile_windows_config\", \"1534614206\",
\"additional_unattend_config\"}"}
```
With this fix, the apply completes as expected.
The IP Configuration block of `azurerm_network_interface` didn't have its
hash computed in a way that changes to the optional params were picked
up:
```
~ azurerm_network_interface.test
ip_configuration.273485505.name: "testconfiguration1" => ""
ip_configuration.273485505.private_ip_address_allocation: "dynamic" => ""
ip_configuration.273485505.subnet_id: "/subscriptions/34ca515c-4629-458e-bf7c-738d77e0d0ea/resourceGroups/acctestrg/providers/Microsoft.Network/virtualNetworks/acctvn/subnets/acctsub" => ""
ip_configuration.~273485505.load_balancer_backend_address_pools_ids.#: "" => "<computed>"
ip_configuration.~273485505.load_balancer_inbound_nat_rules_ids.#: "" => "<computed>"
ip_configuration.~273485505.name: "" => "testconfiguration1"
ip_configuration.~273485505.private_ip_address: "" => "<computed>"
ip_configuration.~273485505.private_ip_address_allocation: "" => "dynamic"
ip_configuration.~273485505.public_ip_address_id: "" => "${azurerm_public_ip.test.id}"
ip_configuration.~273485505.subnet_id: "" => "/subscriptions/34ca515c-4629-458e-bf7c-738d77e0d0ea/resourceGroups/acctestrg/providers/Microsoft.Network/virtualNetworks/acctvn/subnets/acctsub"
```
This caused the following error:
```
Error applying plan:
1 error(s) occurred:
* azurerm_network_interface.test: diffs didn't match during apply. This is a bug with Terraform and should be reported as a GitHub Issue.
Please include the following information in your report:
```
Notice that the hash didn't change. This change adds the remaining optional params to the hash so that the hash id changes whenever they do.
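For illustration, a set hash function along these lines would fold the optional attributes in as well; this is a sketch only, with attribute names taken from the plan output above rather than from the provider source:

```go
package azurerm

import (
	"bytes"
	"fmt"

	"github.com/hashicorp/terraform/helper/hashcode"
)

// resourceArmNetworkInterfaceIpConfigurationHash hashes every attribute of
// the ip_configuration block, including the optional ones, so that changing
// any of them yields a new set hash.
func resourceArmNetworkInterfaceIpConfigurationHash(v interface{}) int {
	var buf bytes.Buffer
	m := v.(map[string]interface{})
	buf.WriteString(fmt.Sprintf("%s-", m["name"].(string)))
	buf.WriteString(fmt.Sprintf("%s-", m["subnet_id"].(string)))
	buf.WriteString(fmt.Sprintf("%s-", m["private_ip_address_allocation"].(string)))
	// These optional params were previously left out, so edits to them never
	// changed the hash and were silently dropped from the diff.
	if val, ok := m["private_ip_address"]; ok {
		buf.WriteString(fmt.Sprintf("%s-", val.(string)))
	}
	if val, ok := m["public_ip_address_id"]; ok {
		buf.WriteString(fmt.Sprintf("%s-", val.(string)))
	}
	return hashcode.String(buf.String())
}
```

With the extra attributes included, the plan now produces a fresh hash: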
```
~ azurerm_network_interface.test
ip_configuration.4255411321.load_balancer_backend_address_pools_ids.#: "" => "<computed>"
ip_configuration.4255411321.load_balancer_inbound_nat_rules_ids.#: "" => "<computed>"
ip_configuration.4255411321.name: "" => "testconfiguration1"
ip_configuration.4255411321.private_ip_address: "" => "<computed>"
ip_configuration.4255411321.private_ip_address_allocation: "" => "dynamic"
ip_configuration.4255411321.public_ip_address_id: "" => "/subscriptions/34ca515c-4629-458e-bf7c-738d77e0d0ea/resourceGroups/acctestrg/providers/Microsoft.Network/publicIPAddresses/public-ip"
ip_configuration.4255411321.subnet_id: "" => "/subscriptions/34ca515c-4629-458e-bf7c-738d77e0d0ea/resourceGroups/acctestrg/providers/Microsoft.Network/virtualNetworks/acctvn/subnets/acctsub"
ip_configuration.966273186.name: "testconfiguration1" => ""
ip_configuration.966273186.private_ip_address_allocation: "dynamic" => ""
ip_configuration.966273186.subnet_id: "/subscriptions/34ca515c-4629-458e-bf7c-738d77e0d0ea/resourceGroups/acctestrg/providers/Microsoft.Network/virtualNetworks/acctvn/subnets/acctsub" => ""
```
This allows the Update to work as expected :)
```
azurerm_network_interface.test: Modifications complete
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
```
Expose `azurerm_storage_account` access keys.
Please note that we do NOT have the ability to manage the access keys -
we are just reading the keys that the account creates for us. To manage
the keys, you would still need to use the Azure portal.
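A minimal sketch of recording those keys in state, assuming they come back from the account's ListKeys call; the attribute names here are illustrative:

```go
package azurerm

import "github.com/hashicorp/terraform/helper/schema"

// setStorageAccountKeys records the keys Azure generated for the account.
// They are read-only from Terraform's point of view: rotating them still
// has to happen in the Azure portal.
func setStorageAccountKeys(d *schema.ResourceData, primary, secondary string) error {
	if err := d.Set("primary_access_key", primary); err != nil {
		return err
	}
	return d.Set("secondary_access_key", secondary)
}
```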
* Adding private ip address reference
* Updating the docs.
* Removing optional attribute from private_ip_address
Removing the optional attribute from private_ip_address; this element is only used in the read.
* Selecting the first element instead of using a loop for now (see the sketch below).
Change this to a loop when https://github.com/Azure/azure-sdk-for-go/issues/259 is fixed.
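A rough sketch of "take the first element for now", assuming the NIC's IP configurations come back as a slice with a private IP field; the type below is a stand-in for the SDK's:

```go
package azurerm

// ipConfiguration is a stand-in for the SDK's IP configuration type.
type ipConfiguration struct {
	PrivateIPAddress *string
}

// firstPrivateIPAddress returns the private IP of the first configuration.
// Once Azure/azure-sdk-for-go#259 is fixed this should loop over all of them.
func firstPrivateIPAddress(configs []ipConfiguration) string {
	if len(configs) == 0 || configs[0].PrivateIPAddress == nil {
		return ""
	}
	return *configs[0].PrivateIPAddress
}
```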
The `storage_data_disk` was trying to use vhd_url rather than vhd_uri.
This was causing an error when creating a new data_disk as part of a VM.
Also added validation, as data disks can only be 1 - 1023 GB in size.
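The size validation could look something like the sketch below, written as a plain ValidateFunc; the exact wiring in the provider may differ:

```go
package azurerm

import "fmt"

// validateDiskSizeGB enforces Azure's 1 - 1023 GB range for data disks.
func validateDiskSizeGB(v interface{}, k string) (ws []string, errors []error) {
	size := v.(int)
	if size < 1 || size > 1023 {
		errors = append(errors, fmt.Errorf("%q must be between 1 and 1023 GB, got %d", k, size))
	}
	return
}
```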
ssh_keys were throwing an error similar to this:
```
* azurerm_virtual_machine.test: [DEBUG] Error setting Virtual Machine
Storage OS Profile Linux Configuration: &errors.errorString{s:"Invalid
address to set: []string{\"os_profile_linux_config\", \"0\",
\"ssh_keys\"}"}
```
This was because of nesting a Set within a Set in the schema. By
changing this to a List within a Set, the schema works as expected. This
means we can now set SSH keys on VMs. This has been tested using a
remote-exec provisioner and a connection block with the SSH key.
```
azurerm_virtual_machine.test: Still creating... (2m10s elapsed)
azurerm_virtual_machine.test (remote-exec): Connected!
azurerm_virtual_machine.test (remote-exec): CONNECTED!
```
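A sketch of the schema shape that fixes the error, with `ssh_keys` as a list nested inside the `os_profile_linux_config` set; the nested attribute names follow the error message but are otherwise assumptions:

```go
package azurerm

import "github.com/hashicorp/terraform/helper/schema"

func osProfileLinuxConfigSchema() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeSet,
		Required: true,
		Elem: &schema.Resource{
			Schema: map[string]*schema.Schema{
				// A set nested inside a set could not be addressed when
				// setting state; a list nested inside the set can.
				"ssh_keys": {
					Type:     schema.TypeList,
					Optional: true,
					Elem: &schema.Resource{
						Schema: map[string]*schema.Schema{
							"path":     {Type: schema.TypeString, Required: true},
							"key_data": {Type: schema.TypeString, Optional: true},
						},
					},
				},
			},
		},
	}
}
```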
Exclude adminPassword from VM updates.
The Azure API never returns the AdminPassword (as is correct) from the
Read API call. Therefore, on Create, we do not set the AdminPassword of
the VM as part of the state. The same func is used for Create & Update,
so when we changed anything on the VM we were getting the
following error:
```
statusCode:Conflict
serviceRequestId:f498a6c8-6e7a-420f-9788-400f18078921
statusMessage:{"error":{"code":"PropertyChangeNotAllowed","target":"adminPassword","message":"Changing property 'adminPassword' is not allowed."}}
```
To fix this, we need to exclude the AdminPassword from the Update func
if it is empty.
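A minimal sketch of that exclusion, using a hypothetical struct in place of the SDK's OS profile type:

```go
package azurerm

// osProfile is a stand-in for the SDK's OS profile type.
type osProfile struct {
	AdminUsername *string
	AdminPassword *string
}

// expandOSProfile only sends adminPassword when we actually have one. The
// Read call never returns the password, so on Update it is empty in state;
// sending an empty value triggers PropertyChangeNotAllowed.
func expandOSProfile(username, password string) osProfile {
	profile := osProfile{AdminUsername: &username}
	if password != "" {
		profile.AdminPassword = &password
	}
	return profile
}
```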
The Azure tests relating to CDN endpoints (TestAccAzureRMCdnEndpoint_basic
etc.) are breaking because ARM does not accept port values of 0. The
following error is received:
statusCode:BadRequest
statusMessage:{"error":{"code":"BadRequest","message":"Invalid port \"0\". Port value must be a number between 1 and 65535."}}
This patch sets the ports for example.com to 443 and 80.
This commit uses Riviera to register the Microsoft.Compute provider as a
canary for whether or not the Azure account credentials are set up. It
used to use the MS client, but that appeared to panic internally if the
credentials were bad. It's possible that we were using it wrong, but
there are no docs so ¯\_(ツ)_/¯.
As part of this, we parallelise the registration of the other providers.
This shaves the latency of one provider request, times the number of
providers minus one, off the "startup" time of the AzureRM provider. The
result is quite noticeable.
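A sketch of the parallel registration described above, assuming a per-namespace `registerProvider` helper; the error-collection details are illustrative:

```go
package azurerm

import "sync"

// registerProviders registers each resource provider namespace concurrently,
// so total latency is roughly one request rather than one per provider.
func registerProviders(namespaces []string, registerProvider func(string) error) []error {
	var (
		wg   sync.WaitGroup
		mu   sync.Mutex
		errs []error
	)
	for _, ns := range namespaces {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			if err := registerProvider(name); err != nil {
				mu.Lock()
				errs = append(errs, err)
				mu.Unlock()
			}
		}(ns)
	}
	wg.Wait()
	return errs
}
```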