Added the hosted_zone_id attribute, which exposes the Route 53 zone ID
that Alias Resource Record Sets can be routed to.
This fixes hashicorp/terraform#6489.
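A minimal sketch of how such an attribute is typically declared, assuming the helper/schema package; the change description does not name the exact resource, so the helper below is illustrative:
```go
package aws

import "github.com/hashicorp/terraform/helper/schema"

// hostedZoneIDSchema sketches the new attribute. It is Computed because the
// value comes back from the AWS API and is only read so that Route 53 alias
// record sets can reference it.
func hostedZoneIDSchema() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeString,
		Computed: true,
	}
}
```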
ssh_keys were throwing an error similar to this:
```
* azurerm_virtual_machine.test: [DEBUG] Error setting Virtual Machine
* Storage OS Profile Linux Configuration: &errors.errorString{s:"Invalid
* address to set: []string{\"os_profile_linux_config\", \"0\",
* \"ssh_keys\"}"}
```
This was because a Set was nested within a Set in the schema. Changing
this to a List within a Set makes the schema work as expected, which
means we can now set SSH keys on VMs (a schema sketch follows the test
output below). This has been tested using a remote-exec provisioner and
a connection block with the SSH key:
```
azurerm_virtual_machine.test: Still creating... (2m10s elapsed)
azurerm_virtual_machine.test (remote-exec): Connected!
azurerm_virtual_machine.test (remote-exec): CONNECTED!
```
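For reference, a minimal sketch of the corrected schema shape, assuming the helper/schema package of the time; the field names shown are illustrative and not the provider's full schema:
```go
package azurerm

import "github.com/hashicorp/terraform/helper/schema"

// osProfileLinuxConfigSchema sketches the fixed nesting: the outer block is
// still a TypeSet, but ssh_keys is now a TypeList, since a Set nested inside
// a Set could not be written back with d.Set().
func osProfileLinuxConfigSchema() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeSet,
		Optional: true,
		Elem: &schema.Resource{
			Schema: map[string]*schema.Schema{
				"ssh_keys": &schema.Schema{
					Type:     schema.TypeList, // previously schema.TypeSet
					Optional: true,
					Elem: &schema.Resource{
						Schema: map[string]*schema.Schema{
							"path":     &schema.Schema{Type: schema.TypeString, Required: true},
							"key_data": &schema.Schema{Type: schema.TypeString, Optional: true},
						},
					},
				},
			},
		},
	}
}
```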
Change the AWS DB Instance to include the DB Option Group parameter, and add a test to prove that it works.
Add acceptance tests for the AWS DB Option Group work, ensuring that Options can be added and updated.
Add documentation for the AWS DB Option Group resource.
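A hedged sketch of wiring the new parameter through to the RDS create call, assuming the attribute is named `option_group_name`; the helper name is illustrative, not the provider's exact code:
```go
package aws

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/rds"
	"github.com/hashicorp/terraform/helper/schema"
)

// setOptionGroup copies the option group name from configuration onto the
// CreateDBInstanceInput when it has been set.
func setOptionGroup(d *schema.ResourceData, input *rds.CreateDBInstanceInput) {
	if v, ok := d.GetOk("option_group_name"); ok {
		input.OptionGroupName = aws.String(v.(string))
	}
}
```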
automated_snapshot_retention_period
The default value for `automated_snapshot_retention_period` is 1.
Therefore, it can be included in the `CreateClusterInput` without
needing to check that it is set.
The old check was actually stopping people from setting the value to 0
(disabling the snapshots), because `d.GetOk()` treats 0 as unset for int
values.
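A minimal sketch of the pattern, assuming the Redshift cluster resource and the aws-sdk-go `CreateClusterInput` of that era; the helper name is illustrative:
```go
package aws

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/redshift"
	"github.com/hashicorp/terraform/helper/schema"
)

// setRetentionPeriod shows why the GetOk guard had to go.
func setRetentionPeriod(d *schema.ResourceData, input *redshift.CreateClusterInput) {
	// Broken: GetOk reports ok == false for the zero value, so a
	// user-supplied 0 (snapshots disabled) never reached the API.
	//
	//   if v, ok := d.GetOk("automated_snapshot_retention_period"); ok {
	//       input.AutomatedSnapshotRetentionPeriod = aws.Int64(int64(v.(int)))
	//   }

	// Fixed: the schema default is 1, so the value is always populated and
	// can be sent unconditionally, which lets 0 through as well.
	input.AutomatedSnapshotRetentionPeriod = aws.Int64(int64(d.Get("automated_snapshot_retention_period").(int)))
}
```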
adminPassword
The Azure API never returns the AdminPassword (as is correct) from the
Read API call. Therefore, on Create we do not set the AdminPassword of
the VM as part of the state. The same func is used for Create & Update,
so when we changed anything on the VM, we were getting the following
error:
```
statusCode:Conflict
serviceRequestId:f498a6c8-6e7a-420f-9788-400f18078921
statusMessage:{"error":{"code":"PropertyChangeNotAllowed","target":"adminPassword","message":"Changing property 'adminPassword' is not allowed."}}
```
To fix this, we need to exclude the AdminPassword from the Update func
if it is empty.
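A hedged sketch of the guard, assuming the Azure SDK's OSProfile shape; the helper name and surrounding fields are illustrative rather than the provider's exact code:
```go
package azurerm

import "github.com/Azure/azure-sdk-for-go/arm/compute"

// expandOSProfile only includes the admin password when it is actually set.
// On Update, the password read back from the API (and hence from state) is
// empty, and sending an empty value triggers PropertyChangeNotAllowed.
func expandOSProfile(computerName, adminUsername, adminPassword string) compute.OSProfile {
	profile := compute.OSProfile{
		ComputerName:  &computerName,
		AdminUsername: &adminUsername,
	}

	if adminPassword != "" {
		profile.AdminPassword = &adminPassword
	}

	return profile
}
```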
On Create, notify_no_data was being ignored.
On Read and Update, no_data_timeframe was being misused.
There was also a redundant read of escalation_message on Create.
For a long time now, the diff logic has relied on the behavior of
`mapstructure.WeakDecode` to determine how various primitives are
converted into strings. The `schema.DiffString` function is used for
all primitive field types: TypeBool, TypeInt, TypeFloat, and TypeString.
The `mapstructure` library's string representation of booleans is "0"
and "1", which differs from `strconv.FormatBool`'s "false" and "true"
(which is used in writing out boolean fields to the state).
Because of this difference, diffs have long had the potential for
cosmetically odd but semantically neutral output like:
"true" => "1"
"false" => "0"
So long as `mapstructure.Decode` or `strconv.ParseBool` are used to
interpret these strings, there's no functional problem.
We had our first clear functional problem with #6005 and friends, where
users noticed diffs like the above showing up unexpectedly and causing
trouble when `ignore_changes` was in play.
This particular bug occurs down in Terraform core's EvalIgnoreChanges.
There, the diff is modified to account for ignored attributes, and
special logic attempts to properly handle the situation where the
ignored attribute was going to trigger a resource replacement. That
logic relies on the string representations of the Old and New fields in
the diff being the same so that it filters properly.
We therefore get a bug when a diff includes `Old: "0", New: "false"`:
the strings do not match, so `ignore_changes` is not handled properly.
Here, we introduce `TypeBool`-specific normalizing into `finalizeDiff`.
I spiked out a full `diffBool` function, but figuring out which pieces
of `diffString` to duplicate there got hairy. This seemed like a simpler
and more direct solution.
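A hedged sketch of the normalization, not the exact shape of the change inside `finalizeDiff`:
```go
package schema

// normalizeBoolString is illustrative of the TypeBool-specific handling:
// mapstructure.WeakDecode renders booleans as "0"/"1" while the state uses
// strconv.FormatBool's "false"/"true", so Old and New are normalized to the
// latter before the diff is returned, keeping EvalIgnoreChanges' string
// comparison consistent.
func normalizeBoolString(s string) string {
	switch s {
	case "0":
		return "false"
	case "1":
		return "true"
	}
	return s
}
```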
Fixes #6005 (and potentially others!)
- Addresses the issue where the local state file has logging_config
  populated and the user disables the configuration via the UI (or, in
  this case, by applying the TF config). The read operation will now
  properly set logging_config and identify the state as diverging.
Fixes #6390
* TF-6256 - SG Rule Retry
- Preferring slower but consistent runs when AWS API calls do not properly return the SG Rule in the list of ingress/egress rules.
- Testing has shown several times that we had to exceed 20 attempts
  before the SG rule was actually returned.
* TF-6256 - Refactor of rule lookup
- Adjusting to use resource.Retry
- Extracting a lookup method for matching the ipPermissions set
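A hedged sketch of the retry pattern, assuming the helper/resource retry helpers; `findMatchingRule` below is a deliberately simplified, hypothetical matcher rather than the provider's exact lookup:
```go
package aws

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/hashicorp/terraform/helper/resource"
)

// waitForRule polls until the freshly created rule shows up in the security
// group's rule list, preferring a slower but consistent run over failing on
// AWS's eventual consistency.
func waitForRule(conn *ec2.EC2, sgID string, perm *ec2.IpPermission) error {
	return resource.Retry(5*time.Minute, func() *resource.RetryError {
		out, err := conn.DescribeSecurityGroups(&ec2.DescribeSecurityGroupsInput{
			GroupIds: []*string{aws.String(sgID)},
		})
		if err != nil {
			return resource.NonRetryableError(err)
		}
		if len(out.SecurityGroups) == 0 || findMatchingRule(out.SecurityGroups[0], perm) == nil {
			return resource.RetryableError(fmt.Errorf("security group rule not yet visible"))
		}
		return nil
	})
}

// findMatchingRule is a simplified matcher; the real lookup compares the
// full ipPermissions set (CIDR blocks, source groups, and so on).
func findMatchingRule(group *ec2.SecurityGroup, perm *ec2.IpPermission) *ec2.IpPermission {
	for _, p := range group.IpPermissions {
		if aws.Int64Value(p.FromPort) == aws.Int64Value(perm.FromPort) &&
			aws.Int64Value(p.ToPort) == aws.Int64Value(perm.ToPort) &&
			aws.StringValue(p.IpProtocol) == aws.StringValue(perm.IpProtocol) {
			return p
		}
	}
	return nil
}
```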
Here is an example that will set up the following:
+ An SSH key resource.
+ A virtual server resource that uses an existing SSH key.
+ A virtual server resource using an existing SSH key and a Terraform managed SSH key (created as "test_key_1" in the example below).
(Create this as sl.tf and run Terraform commands from this directory):
```hcl
provider "softlayer" {
username = ""
api_key = ""
}
resource "softlayer_ssh_key" "test_key_1" {
name = "test_key_1"
public_key = "${file(\"~/.ssh/id_rsa_test_key_1.pub\")}"
# Windows Example:
# public_key = "${file(\"C:\ssh\keys\path\id_rsa_test_key_1.pub\")}"
}
resource "softlayer_virtual_guest" "my_server_1" {
name = "my_server_1"
domain = "example.com"
ssh_keys = ["123456"]
image = "DEBIAN_7_64"
region = "ams01"
public_network_speed = 10
cpu = 1
ram = 1024
}
resource "softlayer_virtual_guest" "my_server_2" {
name = "my_server_2"
domain = "example.com"
ssh_keys = ["123456", "${softlayer_ssh_key.test_key_1.id}"]
image = "CENTOS_6_64"
region = "ams01"
public_network_speed = 10
cpu = 1
ram = 1024
}
```
You'll need to provide your SoftLayer username and API key,
so that Terraform can connect. If you don't want to put
credentials in your configuration file, you can leave them
out:
```
provider "softlayer" {}
```
...and instead set these environment variables:
- **SOFTLAYER_USERNAME**: Your SoftLayer username
- **SOFTLAYER_API_KEY**: Your API key
IPv6 support added.
We support one IPv6 address per interface. The vSphere SDK seems to support more than one, since addresses are provided as a list.
I can change it to support more than one address, but I decided to stick with one for now since that's how the configuration
parameters had been set up by other developers.
The global gateway configuration option has been marked as deprecated. Instead, the user should specify a gateway at the NIC level (ipv4_gateway and ipv6_gateway).
For now, the global gateway will still be used as a fallback for every NIC's ipv4_gateway.
* added update function with support for vcpu and memory
* waiting for VMware tools is redundant with WaitForIP
* proper error handling of PowerOn task
* added test cases for update memory and vcpu
* reboot flag
This implements two new resource types:
* openstack_networking_secgroup_v2 - create a neutron security group
* openstack_networking_secgroup_rule_v2 - create a neutron security
group rule
Unlike their nova counterparts, the neutron security groups allow a user
to specify the target tenant_id, allowing a cloud admin to create
per-tenant resources.
* Adding File Resource for vSphere provider
Allows for file upload to vSphere at a specified location. This also
includes update support for moving or renaming file resources.
* Ensuring required parameters are provided
If any of the entries in `commands` on `docker_container` resources was
empty, the type assertion to string panicked. Since we can't use
ValidateFunc on list elements, we can only really check this at apply
time. If any value is nil (which resolves to an empty string during
conversion), we fail with an error prior to creating the container.
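A hedged sketch of the apply-time guard; the helper name is illustrative rather than the provider's exact code:
```go
package docker

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/schema"
)

// stringListFromSchema rejects empty or nil list entries at apply time,
// since ValidateFunc cannot be attached to list elements, instead of
// panicking on the string type assertion later.
func stringListFromSchema(d *schema.ResourceData, key string) ([]string, error) {
	raw := d.Get(key).([]interface{})
	out := make([]string, 0, len(raw))
	for i, v := range raw {
		s, ok := v.(string)
		if !ok || s == "" {
			return nil, fmt.Errorf("%s[%d] must be a non-empty string", key, i)
		}
		out = append(out, s)
	}
	return out, nil
}
```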
Fixes #6409.
We were passing in a disk path of `[datastore] `, which caused the
CreateDisk call to create a file named `.vmdk` on the datastore; this is
not the expected behavior. This makes sure that if the user did not pass
in a vmdk path, we call CreateDisk with an empty string, as it expects.
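A hedged sketch of the path handling; the helper name is illustrative, not the provider's exact code:
```go
package vsphere

import "fmt"

// diskPathFor returns an empty string when the user supplied no vmdk path,
// so CreateDisk can pick a name itself, instead of the bare "[datastore] "
// prefix that previously produced a file literally named ".vmdk".
func diskPathFor(datastoreName, vmdkPath string) string {
	if vmdkPath == "" {
		return ""
	}
	return fmt.Sprintf("[%s] %s", datastoreName, vmdkPath)
}
```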