s3.GetBucketTagging returns an error if there are no tags associated
with a bucket. Consequently, any configuration with a tagless S3 bucket
would fail with the error "the TagSet does not exist".
Handle that error more appropriately, interpreting it as an empty set of
tags.
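A minimal sketch of the intended behavior, written against the current
aws-sdk-go error interface (the SDK vintage in this commit surfaced errors
as aws.APIError values instead); the "NoSuchTagSet" code and the helper
name are assumptions for illustration:

```go
import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/s3"
)

// readBucketTags treats a missing TagSet as an empty map instead of an error.
func readBucketTags(conn *s3.S3, bucket string) (map[string]string, error) {
	resp, err := conn.GetBucketTagging(&s3.GetBucketTaggingInput{
		Bucket: aws.String(bucket),
	})
	if err != nil {
		// "NoSuchTagSet" is the assumed code behind "The TagSet does not exist".
		if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "NoSuchTagSet" {
			return map[string]string{}, nil
		}
		return nil, err
	}

	tags := make(map[string]string, len(resp.TagSet))
	for _, t := range resp.TagSet {
		tags[aws.StringValue(t.Key)] = aws.StringValue(t.Value)
	}
	return tags, nil
}
```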
* master:
update CHANGELOG
providers/digitalocean: add dot in GET response
providers/digitalocean: force fqdn in dns rr value
update CHANGELOG
Add disk size to google_compute_instance disk blocks.
'project' should be set to the project's ID, not its name.
Don't error when enabling DNS hostnames in a VPC
Correct AWS VPC or route table read functions
Updates to GCE Instances and Instance Templates to allow for false values to be set for the auto_delete setting.
Update GCE Instance Template tests now that existing disk must exist prior to template creation.
Update Google API import to point to the new location.
add network field to the network_interface
I was working on building a validation to check the user-provided
"device_name" for "root_block_device" on AWS Instances, when I realized
that if I can check it, I might as well just derive it automatically!
So that's what we do here - when you customize the details of the root
block device, the device name simply comes from the selected AMI.
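As a rough sketch of the idea (the helper name and the DescribeImages
lookup are illustrative, not the exact code in this change):

```go
import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// fetchRootDeviceName derives the root device name from the AMI, so the user
// never has to guess between e.g. /dev/sda1 and /dev/xvda.
func fetchRootDeviceName(conn *ec2.EC2, ami string) (*string, error) {
	resp, err := conn.DescribeImages(&ec2.DescribeImagesInput{
		ImageIds: []*string{aws.String(ami)},
	})
	if err != nil {
		return nil, err
	}
	if len(resp.Images) == 0 {
		return nil, fmt.Errorf("no image found matching AMI %q", ami)
	}
	return resp.Images[0].RootDeviceName, nil
}
```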
The AWS API call ModifyVpcAttribute will allow only one attribute to be
modified at a time. Modifying both results in the error:
Fields for multiple attribute types specified: enableDnsHostnames, enableDnsSupport
Restructure the provider to honor this restriction.
Also, enable DNS support before attempting to enable DNS hostnames,
since the former is a prerequisite of the latter.
Additionally, fix what must have been a copy-and-paste error that set
enable_dns_support to the value of enable_dns_hostnames.
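Roughly, the restructured update path ends up making two separate calls,
DNS support first. A sketch (field names follow the current aws-sdk-go;
the function itself is illustrative):

```go
import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// enableVpcDns flips the two DNS attributes one at a time, support before
// hostnames, since hostnames requires support to already be enabled.
func enableVpcDns(conn *ec2.EC2, vpcID string) error {
	_, err := conn.ModifyVpcAttribute(&ec2.ModifyVpcAttributeInput{
		VpcId:            aws.String(vpcID),
		EnableDnsSupport: &ec2.AttributeBooleanValue{Value: aws.Bool(true)},
	})
	if err != nil {
		return err
	}
	// A second, separate call: combining both attributes in one request
	// yields "Fields for multiple attribute types specified".
	_, err = conn.ModifyVpcAttribute(&ec2.ModifyVpcAttributeInput{
		VpcId:              aws.String(vpcID),
		EnableDnsHostnames: &ec2.AttributeBooleanValue{Value: aws.Bool(true)},
	})
	return err
}
```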
If the state file contained a VPC or a route table which no longer
exists, Terraform would fail to create the correct plan, which is to
recreate them.
In the case of VPCs, this was due to incorrect error handling. The AWS
SDK returns an aws.APIError, not a *aws.APIError, on error. When the VPC
no longer exists, upon attempting to refresh state Terraform would
simply exit with an error.
For route tables, the provider would recognize that the route table no
longer existed, but would not make the appropriate call to update the
state as such. Thus there'd be no crash, but also no plan to re-create
the route table.
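The shape of the fix, sketched against the current aws-sdk-go error
interface (the commit's SDK vintage exposed aws.APIError by value, which
is why a *aws.APIError assertion never matched); the InvalidVpcID.NotFound
code and the helper are illustrative:

```go
import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/hashicorp/terraform/helper/schema"
)

// refreshVpc clears the resource ID when the VPC is gone, so Terraform plans
// a re-create instead of erroring out. Route tables get the same treatment:
// on a not-found response, call d.SetId("") rather than returning silently.
func refreshVpc(conn *ec2.EC2, d *schema.ResourceData) error {
	resp, err := conn.DescribeVpcs(&ec2.DescribeVpcsInput{
		VpcIds: []*string{aws.String(d.Id())},
	})
	if err != nil {
		if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "InvalidVpcID.NotFound" {
			d.SetId("")
			return nil
		}
		return err
	}
	if len(resp.Vpcs) == 0 {
		d.SetId("")
		return nil
	}
	// ... populate state from resp.Vpcs[0]
	return nil
}
```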
Though not directly connected, deleting a subnet and a security group in
parallel can cause the subnet deletion to fail with a dependency
violation, claiming there are still dependent resources.
This commit fixes that by allowing subnet deletion to tolerate failure with a
retry / refresh function.
Fixes #934
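A sketch of the retry wrapper (the retry API shown matches later
helper/resource versions, and the DependencyViolation code check is an
assumption):

```go
import (
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/hashicorp/terraform/helper/resource"
)

// deleteSubnetWithRetry keeps retrying the delete while dependent resources
// (e.g. a security group being deleted in parallel) are still going away.
func deleteSubnetWithRetry(conn *ec2.EC2, subnetID string) error {
	return resource.Retry(5*time.Minute, func() *resource.RetryError {
		_, err := conn.DeleteSubnet(&ec2.DeleteSubnetInput{
			SubnetId: aws.String(subnetID),
		})
		if err == nil {
			return nil
		}
		if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "DependencyViolation" {
			// Something still references the subnet; wait and try again.
			return resource.RetryableError(err)
		}
		return resource.NonRetryableError(err)
	})
}
```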
Instance block devices are now managed by three distinct sub-resources:
* `root_block_device` - introduced previously
* `ebs_block_device` - all additional ebs-backed volumes
* `ephemeral_block_device` - instance store / ephemeral devices
The AWS API support around BlockDeviceMapping is pretty confusing. It's
a single collection type that supports these three members each of which
has different fields and different behavior.
My biggest hiccup came from the fact that Instance Store volumes do not
show up in the BlockDeviceMapping of any EC2 `Describe*` API response.
They're only visible from the instance metadata service, queried from
inside the node.
This removes `block_device` altogether for a clean break from old
configs. New configs will need to sort their `block_device`
declarations into the three new types. The field has been marked
`Removed` to indicate this to users.
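For orientation, a trimmed sketch of how the three sub-resources might be
declared on the aws_instance schema (the sub-resource names match this
change; the fields and options shown are illustrative only):

```go
import "github.com/hashicorp/terraform/helper/schema"

var instanceBlockDevices = map[string]*schema.Schema{
	// Exactly one root device, customizable but ultimately tied to the AMI.
	"root_block_device": {
		Type:     schema.TypeList,
		Optional: true,
		Computed: true,
		Elem: &schema.Resource{
			Schema: map[string]*schema.Schema{
				"volume_size":           {Type: schema.TypeInt, Optional: true, Computed: true},
				"delete_on_termination": {Type: schema.TypeBool, Optional: true, Default: true},
			},
		},
	},

	// Additional EBS-backed volumes, readable back from Describe* calls.
	"ebs_block_device": {
		Type:     schema.TypeSet,
		Optional: true,
		Computed: true,
		Elem:     &schema.Resource{ /* device_name, snapshot_id, volume_type, ... */ },
	},

	// Instance store volumes never appear in Describe* responses, so this
	// one is driven purely by configuration and cannot be Computed.
	"ephemeral_block_device": {
		Type:     schema.TypeSet,
		Optional: true,
		Elem:     &schema.Resource{ /* device_name, virtual_name */ },
	},
}
```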
With the new block device format being introduced, we need to ensure
Terraform is able to properly read statefiles written in the old format.
So we use the new `helper/schema` facility of "state migrations" to
transform statefiles in the old format to something that the current
version of the schema can use.
Fixes #858
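A sketch of the wiring (the MigrateState hook is the real helper/schema
mechanism; the migration body is heavily simplified, since the real one
must split old `block_device` entries across the three new sub-resources
and recompute set hashes):

```go
import (
	"strings"

	"github.com/hashicorp/terraform/helper/schema"
	"github.com/hashicorp/terraform/terraform"
)

func resourceAwsInstance() *schema.Resource {
	return &schema.Resource{
		SchemaVersion: 1,
		MigrateState:  migrateInstanceStateV0toV1, // hypothetical name

		// ... Schema, Create, Read, Update, Delete ...
	}
}

func migrateInstanceStateV0toV1(v int, is *terraform.InstanceState,
	meta interface{}) (*terraform.InstanceState, error) {
	if v == 0 {
		for k, val := range is.Attributes {
			// Illustrative only: rewrite old block_device.* attributes under
			// ebs_block_device.* so the v1 schema can read them.
			if strings.HasPrefix(k, "block_device.") {
				is.Attributes["ebs_"+k] = val
				delete(is.Attributes, k)
			}
		}
	}
	return is, nil
}
```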