This commit changes the openstack_networking_port_v2 fixed_ip
parameter from a List to a Set. This is because OpenStack does not
preserve the original ordering of the fixed IPs.
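As a rough sketch of what that change means in helper/schema terms (the real `fixed_ip` block carries more fields; this simplified string form is only illustrative):
```
package example

import "github.com/hashicorp/terraform/helper/schema"

// Simplified illustration: a TypeSet compares membership rather than
// position, so the API returning fixed IPs in a different order no
// longer produces a spurious diff the way a TypeList would.
func resourceNetworkingPortV2() *schema.Resource {
	return &schema.Resource{
		Schema: map[string]*schema.Schema{
			"fixed_ip": {
				Type:     schema.TypeSet,
				Optional: true,
				Elem:     &schema.Schema{Type: schema.TypeString},
				Set:      schema.HashString,
			},
		},
	}
}
```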
* provider/openstack: Set Availability Zone in Instances
This commit configures the openstack_compute_instance_v2 resource
to set the Availability Zone of the instance resource.
* vendor: Updating Gophercloud for OpenStack
* Begin stubbing out the Circonus provider.
* Remove all references to `reverse:secret_key`.
This value is dynamically set by the service and unused by Terraform.
* Update the `circonus_check` resource.
Still a WIP.
* Add docs for the `circonus_check` resource.
Commit miss, this should have been included in the last commit.
* "Fix" serializing check tags
I still need to figure out how I can make them order agnostic w/o using
a TypeSet. I'm worried that's what I'm going to have to do.
* Spike a quick circonus_broker data source.
* Convert tags to a Set so the order does not matter.
* Add a `circonus_account` data source.
* Correctly spell account.
Pointed out by: @postwait
* Add the `circonus_contact_group` resource.
* Push descriptions into their own file in order to reduce the busyness of the schema when reviewing code.
* Rename `circonus_broker` and `broker` to `circonus_collector` and `collector`, respectively.
Change made with consent from Circonus to reduce confusion (@postwait, @maier, and several others).
* Use upstream constants where available.
* Import the latest circonus-gometrics.
* Move to using a Set of collectors vs a list attached to a single attribute.
* Rename "cid" to "id" in the circonus_account data source and elsewhere
where possible.
* Inject a tag automatically. Update gometrics.
* Checkpoint `circonus_metric` resource.
* Enable provider-level auto-tagging. This is disabled by default.
* Rearrange metric. This is an experimental "style" of a provider. We'll see.
That moment. When you think you've gone off the rails on a mad scientist
experiment but like the outcome and think you may be onto something but
haven't proven it to yourself or anyone else yet? That. That exact
feeling of semi-confidence while being alone in the wilderness. Please
let this not be the Terraform provider equivalent of DJB's C style of
coding.
We'll know in another resource or two if this was a horrible mistake or
not.
* Begin moving `resource_circonus_check` over to the new world order/structure:
Much of this is WIP and incomplete, but here is the new supported
structure:
```
variable "used_metric_name" {
default = "_usage`0`_used"
}
resource "circonus_check" "usage" {
# collectors = ["${var.collectors}"]
collector {
id = "${var.collectors[0]}"
}
name = "${var.check_name}"
notes = "${var.notes}"
json {
url = "https://${var.target}/account/current"
http_headers = {
"Accept" = "application/json"
"X-Circonus-App-Name" = "TerraformCheck"
"X-Circonus-Auth-Token" = "${var.api_token}"
}
}
stream {
name = "${circonus_metric.used.name}"
tags = "${circonus_metric.used.tags}"
type = "${circonus_metric.used.type}"
}
tags = {
source = "circonus"
}
}
resource "circonus_metric" "used" {
name = "${var.used_metric_name}"
tags = {
source = "circonus"
}
type = "numeric"
}
```
* Document the `circonus_metric` resource.
* Updated `circonus_check` docs.
* If a port was present, automatically set it in the Config.
* Alpha sort the check parameters now that they've been renamed.
* Fix a handful of panics as a result of the schema changing.
* Move back to a `TypeSet` for tags. After a stint with `TypeMap`, move
back to `TypeSet`.
A set of strings seems to match the API the best. The `map` type was
convenient because it reduced the amount of boilerplate, but you lose
out on other things. For instance, tags come in the form of
`category:value`, so naturally it seems like you could use a map, but
you can't without severe loss of functionality because assigning two
values to the same category is common. And you can't normalize map
input or suppress the output correctly (this was eventually what broke
the camel's back). I tried an experiment of normalizing the input to be
`category:value` as the key in the map and a value of `""`, but... see
diff suppress. In this case, simple is good.
While here bring some cleanups to _Metric since that was my initial
testing target.
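A minimal sketch of the resulting shape, using a hypothetical `tagsSchema` helper (the provider's actual hashing and normalization may differ):
```
package example

import (
	"strings"

	"github.com/hashicorp/terraform/helper/schema"
)

// Tags are a set of "category:value" strings, so several tags can share
// one category, which a map keyed by category cannot express.
func tagsSchema() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeSet,
		Optional: true,
		Elem:     &schema.Schema{Type: schema.TypeString},
		// Hash the lower-cased form so case-only differences do not
		// create phantom set members.
		Set: func(v interface{}) int {
			return schema.HashString(strings.ToLower(v.(string)))
		},
	}
}
```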
* Rename `providerConfig` to `_ProviderConfig`
* Checkpoint the `json` check type.
* Fix a few residual issues re: missing descriptions.
* Rename `validateRegexp` to `_ValidateRegexp`
* Use tags as real sets, not just a slice of strings.
* Move the DiffSuppressFunc for tags down to the Elem.
* Fix up unit tests to chase the updated, default hasher function being used.
* Remove `Computed` attribute from `TypeSet` objects.
This fixes a pile of issues re: update that I was having.
* Rename functions.
`GetStringOk` -> `GetStringOK`
`GetSetAsListOk` -> `GetSetAsListOK`
`GetIntOk` -> `GetIntOK`
* Various small cleanups and comments rolled into a single commit.
* Add a `postgresql` check type for the `circonus_check` resource.
* Rename various validator functions to be _CapitalCase vs capitalCase.
* Err... finish the validator renames.
* Add `GetFloat64()` support.
* Add `icmp_ping` check type support.
* Catch up to the _API*Attr renames.
Deliberately left out of the previous commit in order to create a clean
example of what is required to add a new check type to the
`circonus_check` resource.
* Clarify when the `target` attribute is required for the `postgresql`
check type.
* Correctly pull the metric ID attribute from the right location.
* Add a circonus_stream_group resource (a.k.a. a Circonus "metric cluster")
* Add support for the [`caql`](https://login.circonus.com/user/docs/caql_reference) check type.
* Add support for the `http` check type.
* `s/SSL/TLS/g`
* Add support for `tcp` check types.
* Enumerate the available metrics that are supported for each check type.
* Add [`cloudwatch`](https://login.circonus.com/user/docs/Data/CheckTypes/CloudWatch) check type support.
* Add a `circonus_trigger` resource (a.k.a. a Circonus Ruleset).
* Rename a handful of functions to make it clear in the function name the
direction of flow for information moving through the provider.
TL;DR: Replace `parse` and `read` with "foo to bar"-like names.
* Fix the attribute name used in a validator. Absent != After.
* Set the minimum `absent` predicate to 70s per testing.
* Fix the regression tests for circonus_trigger now that absent has a 70s min
* Fix up the `tcp` check to require a `host` attribute.
Fix tests. It's clear I didn't run these before committing/pushing the
`tcp` check last time.
* Fix `circonus_check` for `cloudwatch` checks.
* Rename `parsePerCheckTypeConfig()` to `_CheckConfigToAPI` to be
consistent with other function names.
grep(1)ability of code++
* Slack buttons as an integer are string encoded.
* Fix updates for `circonus_contact`.
* Fix the out parameters for contact groups.
* Move to using `_CastSchemaToTF()` where appropriate.
* Fix circonus_contact_group. Updates work as expected now.
* Use `_StateSet()` in place of `d.Set()` everywhere.
* Make a quick pass over the collector datasource to modernize its style
* Quick pass for items identified by `golint`.
* Fix up collectors
* Fix the `json` check type.
Reconcile possible sources of drift. Update now works as expected.
* Normalize trigger durations to seconds.
* Improve the robustness of the state handling for the `circonus_contact_group` resource.
* I'm torn on this, but sort the contact groups in the notify list.
This does mean that if the first contact group in the list has a higher
lexical sort order the plan won't converge until the offending resource
is tainted and recreated. But there's also some sorting happening
elsewhere, so.... sort and taint for now and this will need to be
revisited in the future.
* Add support for the `httptrap` check type.
* Remove empty units from the state file.
* Metric clusters can return a 404. Detect this accordingly in its
respective Exists handler.
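A hedged sketch of that pattern; the client type and its `FetchMetricCluster` method are hypothetical stand-ins for the real API:
```
package example

import (
	"strings"

	"github.com/hashicorp/terraform/helper/schema"
)

// circonusClient is a placeholder for the real API client.
type circonusClient struct{}

func (c *circonusClient) FetchMetricCluster(id string) (interface{}, error) { return nil, nil }

// When the API reports the object as gone (a 404), Exists returns
// (false, nil) so Terraform drops it from state instead of failing
// the refresh with an error.
func metricClusterExists(d *schema.ResourceData, meta interface{}) (bool, error) {
	client := meta.(*circonusClient)
	if _, err := client.FetchMetricCluster(d.Id()); err != nil {
		if strings.Contains(err.Error(), "404") {
			return false, nil
		}
		return false, err
	}
	return true, nil
}
```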
* Add a `circonus_graph` resource.
* Fix a handful of bugs in the graph provider.
* Re-enable the necessary `ConflictsWith` definitions and normalize attribute names.
* Objects that have been deleted via the UI return a 404. Handle in Exists().
* Teach `circonus_graph`'s Stack set to accept nil values.
* Set `ForceNew: true` for a graph's name.
* Chase various API fixes required to make `circonus_graph` work as expected.
* Fix up the handling of sub-1 zoom resolutions for graphs.
* Add the `check_by_collector` out parameter to the `circonus_check` resource.
* Improve validation of line vs area graphs. Fix graph_style.
* Fix up the `logarithmic` graph axis option.
* Resolve various trivial `go vet` issues.
* Add a stream_group out parameter.
* Remove incorrectly applied `Optional` attributes to the `circonus_account` resource.
* Remove various `Optional` attributes from the `circonus_collector` data source.
* Centralize the common need to suppress leading and trailing whitespace into `suppressWhitespace`.
* Sync up with upstream vendor fixes for circonus_graph.
* Update the checksum value for the http check.
* Chase `circonus_graph`'s underlying `line_style` API object change from `string` to `*string`.
* Clean up tests to use a generic terraform regression testing account.
* Add support for MySQL to the `circonus_check` resource.
* Rename all identifiers that began with a `_` and replace with a corresponding lowercase glyph.
* Remove stale comment in types.
* Move the `ResourceData.SetId()` calls to be first in the list so that no
resources are lost in the event of a `panic()`.
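A minimal sketch of the ordering this describes; the resource and the placeholder API result are illustrative:
```
package example

import "github.com/hashicorp/terraform/helper/schema"

func checkCreate(d *schema.ResourceData, meta interface{}) error {
	newCheck := struct{ CID string }{CID: "/check_bundle/1234"} // placeholder API result

	// SetId() first: once the ID is recorded, a later failure leaves a
	// trackable resource in state rather than a leaked one.
	d.SetId(newCheck.CID)

	if err := d.Set("name", "example"); err != nil {
		return err
	}
	return nil
}
```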
* Remove `stateSet` from the `circonus_trigger` resource.
* Remove `stateSet` from the `circonus_stream_group` resource.
* Remove `schemaSet` from the `circonus_graph` resource.
* Remove `stateSet` from the `circonus_contact` resource.
* Remove `stateSet` from the `circonus_metric` resource.
* Remove `stateSet` from the `circonus_account` data source.
* Remove `stateSet` from the `circonus_collector` data source.
* Remove stray `stateSet` call from the `circonus_contact` resource.
This is an odd artifact to find... I'm completely unsure as to why it
was there to begin with but am mostly certain it's a bug and needs to be
removed.
* Remove `stateSet` from the `circonus_check` resource.
* Remove the `stateSet` helper function.
All call sites have been converted to return errors vs `panic()`'ing at
runtime.
* Remove a pile of unused functions and type definitions.
* Remove the last of the `attrReader` interface.
* Remove an unused `Sprintf` call.
* Update `circonus-gometrics` and remove unused files.
* Document what `convertToHelperSchema()` does.
Rename `castSchemaToTF` to `convertToHelperSchema`.
Change the function parameter ordering so the `map` of attribute
descriptions comes first: this is much easier to maintain when creating
schema inline.
* Move descriptions into their respective source files.
* Remove all instances of `panic()`.
In the case of software bugs, log an error. Never `panic()` and always
return a value.
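A short sketch of the convention, with an illustrative helper name:
```
package example

import (
	"fmt"
	"log"
)

// Instead of panicking on a programming error, log it and return an
// error so Terraform can surface it and keep state intact.
func readStringAttr(attrs map[string]interface{}, name string) (string, error) {
	v, ok := attrs[name]
	if !ok {
		log.Printf("[ERROR] provider bug: missing attribute %q", name)
		return "", fmt.Errorf("provider bug: missing attribute %q", name)
	}
	s, ok := v.(string)
	if !ok {
		log.Printf("[ERROR] provider bug: attribute %q is not a string", name)
		return "", fmt.Errorf("provider bug: attribute %q is not a string", name)
	}
	return s, nil
}
```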
* Rename `stream_group` to `metric_cluster`.
* Rename triggers to rule sets
* Rename `stream` to `metric`.
* Chase the `stream` -> `metric` change into the docs.
* Remove some unused test functions.
* Add the now required `color` attribute for graphing a `metric_cluster`.
* Add a missing description to silence a warning.
* Add `id` as a selector for the account data source.
* Futureproof testing: Randomize all asset names to prevent any possible resource conflicts.
This isn't a necessary change for our current build and regression
testing, but *just in case* we have a radical change to our testing
framework in the future, make all resource names fully random.
* Rename various values to match the Circonus docs.
* s/alarm/alert/g
* Ensure ruleset criteria can not be empty.
helper/schema: Rename Timeout resource block to Timeouts
- Pluralize configuration argument name to better represent that there is
one block for many timeouts
- use a const for the configuration timeouts key
- update docs
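A minimal sketch of a resource using the renamed block, assuming the helper/schema `Timeouts`/`ResourceTimeout` support this change covers; the durations and attribute are illustrative. On the configuration side the block is now the plural `timeouts { create = "10m" }`.
```
package example

import (
	"time"

	"github.com/hashicorp/terraform/helper/schema"
)

func resourceExample() *schema.Resource {
	return &schema.Resource{
		// One block declares the timeouts for all supported operations.
		Timeouts: &schema.ResourceTimeout{
			Create: schema.DefaultTimeout(10 * time.Minute),
			Delete: schema.DefaultTimeout(20 * time.Minute),
		},
		Schema: map[string]*schema.Schema{
			"name": {Type: schema.TypeString, Required: true},
		},
	}
}
```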
Fixes: #12506
When a replication_task cdc_start_time was specified as an int, it was
causing a panic, as the conversion to a Unix timestamp was expecting a
string.
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAwsDmsReplicationTaskBasic'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/03/08 22:55:29 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAwsDmsReplicationTaskBasic -timeout 120m
=== RUN TestAccAwsDmsReplicationTaskBasic
--- PASS: TestAccAwsDmsReplicationTaskBasic (1089.77s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 1089.802s
```
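The exact fix may differ, but a defensive sketch of the idea (accept either representation instead of asserting a string and panicking) looks like this:
```
package example

import (
	"fmt"
	"strconv"
	"time"
)

// Accept cdc_start_time whether the config supplied it as a string or
// an int, rather than blindly asserting a string.
func parseCdcStartTime(v interface{}) (time.Time, error) {
	switch t := v.(type) {
	case string:
		secs, err := strconv.ParseInt(t, 10, 64)
		if err != nil {
			return time.Time{}, fmt.Errorf("invalid cdc_start_time %q: %s", t, err)
		}
		return time.Unix(secs, 0), nil
	case int:
		return time.Unix(int64(t), 0), nil
	default:
		return time.Time{}, fmt.Errorf("cdc_start_time must be a string or int, got %T", v)
	}
}
```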
Fixes: #7492
When we use the same IP Address, BGP ASN and VPN Type as an existing
aws_customer_gateway, Terraform would take control of that gateway (not
import it!) and try to modify it. This could be very bad.
The AWS documentation warns that only one gateway with the same
parameters can be created, so Terraform will now error if a gateway with
the same parameters already exists.
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSCustomerGateway_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/03/07 18:40:39 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSCustomerGateway_ -timeout 120m
=== RUN TestAccAWSCustomerGateway_importBasic
--- PASS: TestAccAWSCustomerGateway_importBasic (31.11s)
=== RUN TestAccAWSCustomerGateway_basic
--- PASS: TestAccAWSCustomerGateway_basic (68.72s)
=== RUN TestAccAWSCustomerGateway_similarAlreadyExists
--- PASS: TestAccAWSCustomerGateway_similarAlreadyExists (35.18s)
=== RUN TestAccAWSCustomerGateway_disappears
--- PASS: TestAccAWSCustomerGateway_disappears (25.13s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 160.172s
```
* WIP: added a new resource type : google_compute_snapshot
* [WIP]: added a test acceptance for google_compute_snapshot
* Cleanup
* Minor correction : "Deleting disk" message in Delete method
* Error in merge action
* Error in merge action
* added support for linux capabilities
Refs #11623
Added capabilities block
Added tests for it
Added documentation for it.
My PC doesn't support memory swap, so it errors there.
```
$ make testacc TEST=./builtin/providers/docker TESTARGS='-run=TestAccDockerContainer_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/17 14:57:08 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/docker -v -run=TestAccDockerContainer_ -timeout 120m
=== RUN TestAccDockerContainer_basic
--- PASS: TestAccDockerContainer_basic (44.50s)
=== RUN TestAccDockerContainer_volume
--- PASS: TestAccDockerContainer_volume (40.73s)
=== RUN TestAccDockerContainer_customized
--- FAIL: TestAccDockerContainer_customized (50.27s)
testing.go:265: Step 0 error: Check failed: Check 2/2 error: Container has wrong memory swap setting: -1
Please check that you machine supports memory swap (you can do that by running 'docker info' command).
=== RUN TestAccDockerContainer_upload
--- PASS: TestAccDockerContainer_upload (38.56s)
FAIL
exit status 1
FAIL github.com/hashicorp/terraform/builtin/providers/docker 174.070s
Makefile:48: recipe for target 'testacc' failed
make: *** [testacc] Error 1
```
* Documentation changes.
* added maxitems and rerun tests
Fixes: #12494
The Create was changed to use the default value rather than d.GetOk; the
Update wasn't, which was causing issues when trying to update a value to false.
```
% make testacc TEST=./builtin/providers/datadog
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/03/07 16:20:54 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/datadog -v -timeout 120m
=== RUN TestDatadogMonitor_import
--- PASS: TestDatadogMonitor_import (4.77s)
=== RUN TestDatadogUser_import
--- PASS: TestDatadogUser_import (6.23s)
=== RUN TestProvider
--- PASS: TestProvider (0.00s)
=== RUN TestProvider_impl
--- PASS: TestProvider_impl (0.00s)
=== RUN TestAccDatadogMonitor_Basic
--- PASS: TestAccDatadogMonitor_Basic (3.83s)
=== RUN TestAccDatadogMonitor_BasicNoTreshold
--- PASS: TestAccDatadogMonitor_BasicNoTreshold (4.92s)
=== RUN TestAccDatadogMonitor_Updated
--- PASS: TestAccDatadogMonitor_Updated (5.88s)
=== RUN TestAccDatadogMonitor_TrimWhitespace
--- PASS: TestAccDatadogMonitor_TrimWhitespace (3.23s)
=== RUN TestAccDatadogMonitor_Basic_float_int
--- PASS: TestAccDatadogMonitor_Basic_float_int (5.73s)
=== RUN TestAccDatadogTimeboard_update
--- PASS: TestAccDatadogTimeboard_update (8.86s)
=== RUN TestValidateAggregatorMethod
--- PASS: TestValidateAggregatorMethod (0.00s)
=== RUN TestAccDatadogUser_Updated
--- PASS: TestAccDatadogUser_Updated (6.05s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/datadog 49.506s
```
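A minimal sketch of the d.GetOk pitfall fixed in the Datadog change above, with an illustrative attribute name:
```
package example

import "github.com/hashicorp/terraform/helper/schema"

// "notify_no_data" stands in for whichever boolean attribute is updated.
func buildNotifyNoData(d *schema.ResourceData) bool {
	// Buggy pattern: the branch never runs when the user sets the value
	// to false, because GetOk reports ok=false for zero values.
	//   if v, ok := d.GetOk("notify_no_data"); ok { ... }

	// Working pattern: read the value directly and rely on the schema
	// default when the user did not set it.
	return d.Get("notify_no_data").(bool)
}
```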
This covers the scenario of an instance created by a spot request. Using
Terraform we only know the spot request is fulfilled but the instance can
still be pending which causes the attachment to fail.
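A rough sketch of the kind of wait this implies, assuming the helper/resource StateChangeConf API; fetchInstanceState is a hypothetical stand-in for a DescribeInstances call:
```
package example

import (
	"time"

	"github.com/hashicorp/terraform/helper/resource"
)

// Wait for the spot instance to leave "pending" before attempting the
// attachment.
func waitForInstanceRunning(instanceID string) error {
	stateConf := &resource.StateChangeConf{
		Pending: []string{"pending"},
		Target:  []string{"running"},
		Refresh: func() (interface{}, string, error) {
			state, err := fetchInstanceState(instanceID)
			return instanceID, state, err
		},
		Timeout:    10 * time.Minute,
		MinTimeout: 3 * time.Second,
	}
	_, err := stateConf.WaitForState()
	return err
}

func fetchInstanceState(id string) (string, error) { return "running", nil } // placeholder
```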
Google Container Engine's cluster API returned instance group manager
URLs when it meant to return instance group URLs. See #4336 for details
about the bug.
While this is undeniably an upstream problem, this PR:
* detects the error, meaning it will work as expected when the API is
fixed.
* corrects the error by requesting the instance group manager, then
retrieving its instance group URL, and using that instead.
* adds a test that exercises the error and the solution, to ensure it is
functioning properly.
* provider/ignition: migration from resources to data resources
* website: provider/ignition documention updated to data resources
* provider/ignition: backwards compatibility support for old resources
* Ensures elb exists before negotiation policy check; Fixes #11260
* Adds acceptance test case for missing elb
* Adds back https properties for test elb
* vendor: Updating cobblerclient for Cobbler
* provider/cobbler: Fix Profile Repos
This commit fixes a bug where adding repos would result in an error.
This was due to the Cobbler API wanting a space-separated list of
repos rather than an array. The Cobbler Service will split the string
by space when used internally, but will always present the repos
as a string.
* provider/cobbler: System Interface Management Test
This commit adds a test to verify that the Management
parameter of System Interfaces works.
* Allow for local development with ns1 provider.
* Adds first implementation of ns1 notification list resource.
* NS1 record.use_client_subnet defaults to true, and added test for field.
* Adds more test cases for monitoring jobs.
* Adds webhook/datafeed notifier types and acctests for notifylists.
* Adds docs for notifylists resource.
* Updates ns1-go rest client via govendor
* Fix typos in record docs
In the event that an unexpected state is returned from
`environmentStateRefreshFunc`, errors in the Elastic Beanstalk console
will not be returned to the user.
To aid in tracking down the error that's causing
TestAccGoogleSqlDatabaseInstance_basic to fail (it's claiming an op
can't be found?) I've added the op name (which is unique) to the error
output for op errors.
Our GCP storage tests are really flaky right now due to rate limiting.
In theory, this could also impact Terraform users that are
deleting/creating large numbers of Google Cloud Storage buckets at once.
To fix, I'm detecting the specific error code that GCP returns when it's
a rate limit error, and using that with resource.Retry to try the
request again.
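A hedged sketch of that retry pattern; deleteBucket is a hypothetical stand-in for the real storage call, and 429 is the rate-limit status code being detected:
```
package example

import (
	"time"

	"github.com/hashicorp/terraform/helper/resource"
	"google.golang.org/api/googleapi"
)

// Only the rate-limit error is treated as retryable; anything else
// fails immediately.
func deleteBucketWithRetry(name string) error {
	return resource.Retry(1*time.Minute, func() *resource.RetryError {
		err := deleteBucket(name)
		if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 429 {
			return resource.RetryableError(err)
		}
		if err != nil {
			return resource.NonRetryableError(err)
		}
		return nil
	})
}

func deleteBucket(name string) error { return nil } // placeholder
```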
When comparing the config and state for google_project_iam_policy,
always merge the bindings down to a common representation, to avoid a
perpetual diff.
Fixes #11763.
* helper/schema: Add custom Timeout block for resources
* refactor DefaultTimeout to support multiple types. Load meta in Refresh from Instance State
* update vpc but it probably won't last anyway
* refactor test into table test for more cases
* rename constant keys
* refactor configdecode
* remove VPC demo
* remove comments
* remove more comments
* refactor some
* rename timeKeys to timeoutKeys
* remove note
* documentation/resources: Document the Timeout block
* document timeouts
* have a test case that covers 'hours'
* restore a System default timeout of 20 minutes, instead of 0
* restore system default timeout of 20 minutes, refactor tests, add test method to handle system default
* rename timeout key constants
* test applying timeout to state
* refactor test
* Add resource Diff test
* clarify docs
* update to use constants
* provider/openstack: Redesign openstack_blockstorage_volume_attach_v2
The current design of openstack_blockstorage_volume_attach_v2 does
not correctly implement the Block Storage API attachment call. It
was only partially implemented, only marking volumes as being
attached, while never actually attaching them.
This redesign is a closer alignment to how creating attachments
to a standalone Block Storage service works.
For creating attachments specifically in the case of OpenStack
Compute instances, the openstack_compute_volume_attach_v2 resource
is required.
* provider/openstack: re-adding instance_id for backwards compatibility
This commit adds the openstack_compute_floatingip_associate_v2
resource which specifically handles associating a floating IP
address to an instance. This can be used instead of the existing
floating_ip options in the openstack_compute_instance_v2 resource.
* Replace DNSimple API client with the official Go client
* Upgrade DNSimple provider to use the new API v2
Acceptance tests pass:
```
=== RUN TestProvider
--- PASS: TestProvider (0.00s)
=== RUN TestProvider_impl
--- PASS: TestProvider_impl (0.00s)
=== RUN TestAccDNSimpleRecord_Basic
--- PASS: TestAccDNSimpleRecord_Basic (2.67s)
=== RUN TestAccDNSimpleRecord_Updated
--- PASS: TestAccDNSimpleRecord_Updated (1.88s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/dnsimple
```
Note that the code still has to be updated to pass the account ID
dynamically in place of "TODO-ACCOUNT".
* Refactor DNSimple provider to expose both client and config
The config is required as the new API wants to know the identifier of
the account you are operating on. The account is not stored in the
client (as the client can talk with different accounts), hence I need
to pass it as part of the config.
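A minimal sketch of that shape; the type and field names are illustrative rather than the provider's actual ones:
```
package example

// The provider meta carries both the API client and the account ID,
// since the client can talk to multiple accounts and every call needs
// to know which one it is operating on.
type Config struct {
	Token   string
	Account string
}

type Client struct {
	account string
	// the dnsimple API client would live here in the real provider
}

func (c *Config) Client() (*Client, error) {
	return &Client{account: c.Account}, nil
}
```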
* Identify Terraform requests to DNSimple via UserAgent
* Upgrade to the latest dnsimple-go version
* Update docs
Provide upgrade instructions and update the docs for API v2.
* Remove redundant type declaration
Fixes: #11750
Before this change, adding a log_subscription_filter and then deleting
it manually would yield this error on terraform plan/apply:
```
% terraform plan ✹ ✭
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
aws_iam_role.iam_for_lambda: Refreshing state... (ID: test_lambdafuntion_iam_role_example123)
aws_cloudwatch_log_group.logs: Refreshing state... (ID: example_lambda_name)
aws_iam_role_policy.test_lambdafunction_iam_policy: Refreshing state... (ID: test_lambdafuntion_iam_role_example123:test_lambdafunction_iam_policy)
aws_lambda_function.test_lambdafunction: Refreshing state... (ID: example_lambda_name_example123)
aws_lambda_permission.allow_cloudwatch_logs: Refreshing state... (ID: AllowExecutionFromCloudWatchLogs)
aws_cloudwatch_log_subscription_filter.test_lambdafunction_logfilter: Refreshing state... (ID: cwlsf-992677504)
Error refreshing state: 1 error(s) occurred:
* aws_cloudwatch_log_subscription_filter.test_lambdafunction_logfilter: aws_cloudwatch_log_subscription_filter.test_lambdafunction_logfilter: Subscription filter for log group example_lambda_name with name prefix test_lambdafunction_logfilter not found!
```
After this patch, we get the following behaviour:
```
% terraform plan ✹ ✭
[WARN] /Users/stacko/Code/go/bin/terraform-provider-aws overrides an internal plugin for aws-provider.
If you did not expect to see this message you will need to remove the old plugin.
See https://www.terraform.io/docs/internals/internal-plugins.html
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
aws_iam_role.iam_for_lambda: Refreshing state... (ID: test_lambdafuntion_iam_role_example123)
aws_cloudwatch_log_group.logs: Refreshing state... (ID: example_lambda_name)
aws_lambda_function.test_lambdafunction: Refreshing state... (ID: example_lambda_name_example123)
aws_iam_role_policy.test_lambdafunction_iam_policy: Refreshing state... (ID: test_lambdafuntion_iam_role_example123:test_lambdafunction_iam_policy)
aws_lambda_permission.allow_cloudwatch_logs: Refreshing state... (ID: AllowExecutionFromCloudWatchLogs)
aws_cloudwatch_log_subscription_filter.test_lambdafunction_logfilter: Refreshing state... (ID: cwlsf-992677504)
The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.
Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.
+ aws_cloudwatch_log_subscription_filter.test_lambdafunction_logfilter
destination_arn: "arn:aws:lambda:us-west-2:187416307283:function:example_lambda_name_example123"
filter_pattern: "logtype test"
log_group_name: "example_lambda_name"
name: "test_lambdafunction_logfilter"
role_arn: "<computed>"
Plan: 1 to add, 0 to change, 0 to destroy.
```
Fixes: #12232
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSEFSFileSystem_pagedTags'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSEFSFileSystem_pagedTags -timeout 120m
=== RUN TestAccAWSEFSFileSystem_pagedTags
--- PASS: TestAccAWSEFSFileSystem_pagedTags (39.51s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 39.537s
```
not found
Fixes: #12279
When manually deleting an autoscaling_group from the console, a
terraform plan would look as follows:
```
% terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
aws_launch_configuration.foobar: Refreshing state... (ID: test-0096cf26c7eebdc9fcb5bd1837)
aws_autoscaling_group.foobar: Refreshing state... (ID: test)
aws_autoscaling_schedule.foobar: Refreshing state... (ID: foobar)
Error refreshing state: 1 error(s) occurred:
* aws_autoscaling_schedule.foobar: aws_autoscaling_schedule.foobar: Error retrieving Autoscaling Scheduled Actions: ValidationError: Group test not found
status code: 400, request id: 093e9ed5-fe01-11e6-b990-1f64334b3a10
```
After this patch:
```
% terraform plan ✹ ✭
[WARN] /Users/stacko/Code/go/bin/terraform-provider-aws overrides an internal plugin for aws-provider.
If you did not expect to see this message you will need to remove the old plugin.
See https://www.terraform.io/docs/internals/internal-plugins.html
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
aws_launch_configuration.foobar: Refreshing state... (ID: test-0096cf26c7eebdc9fcb5bd1837)
aws_autoscaling_group.foobar: Refreshing state... (ID: test)
aws_autoscaling_schedule.foobar: Refreshing state... (ID: foobar)
The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.
Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.
+ aws_autoscaling_group.foobar
arn: "<computed>"
availability_zones.#: "1"
availability_zones.2487133097: "us-west-2a"
default_cooldown: "<computed>"
desired_capacity: "<computed>"
force_delete: "true"
health_check_grace_period: "300"
health_check_type: "ELB"
launch_configuration: "test-0096cf26c7eebdc9fcb5bd1837"
load_balancers.#: "<computed>"
max_size: "1"
metrics_granularity: "1Minute"
min_size: "1"
name: "test"
protect_from_scale_in: "false"
tag.#: "1"
tag.157008572.key: "Foo"
tag.157008572.propagate_at_launch: "true"
tag.157008572.value: "foo-bar"
termination_policies.#: "1"
termination_policies.0: "OldestInstance"
vpc_zone_identifier.#: "<computed>"
wait_for_capacity_timeout: "10m"
+ aws_autoscaling_schedule.foobar
arn: "<computed>"
autoscaling_group_name: "test"
desired_capacity: "0"
end_time: "2017-12-12T06:00:00Z"
max_size: "1"
min_size: "0"
recurrence: "<computed>"
scheduled_action_name: "foobar"
start_time: "2017-12-11T18:00:00Z"
Plan: 2 to add, 0 to change, 0 to destroy.
```
* provider/openstack: Rename provider to loadbalancer_provider
This commit renames provider to loadbalancer_provider in the
openstack_lb_loadbalancer_v2 resource.
It also changes security_group_ids to Computed so default
security groups are added to the state correctly.
* provider/openstack: Switch to a deprecation path for loadbalancer provider
This commit switches to using a deprecation path for removal of the
previous "provider" argument in favor of the new "loadbalancer_provider".
This commit removes the bundled devstack script and updates the
documentation to point to the latest build scripts used for
testing. It also mentions that development should be possible on
any OpenStack environment.
Add tests that ensure that image syntax resolves to API input the way we
want it to.
Add a lot of different input forms for images, to more closely map to
what the API accepts, so anything that's valid input to the API should
also be valid input in a config.
Stop resolving image families to specific image URLs, allowing things
like instance templates to evolve over time as new images are pushed.
Fixes: #12205
You cannot index into an empty slice; therefore, we got a panic as follows:
```
aws_ssm_association.foo: Creating...
instance_id: "" => "i-002f3898dc95350e7"
name: "" => "test_document_association-%s"
parameters.%: "" => "2"
parameters.directoryId: "" => "d-926720980b"
parameters.directoryName: "" => "corp.mydomain.com"
Error applying plan:
1 error(s) occurred:
* aws_ssm_association.foo: 1 error(s) occurred:
* aws_ssm_association.foo: unexpected EOF
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
panic: runtime error: index out of range
2017/02/23 21:41:45 [DEBUG] plugin: terraform-provider-aws:
2017/02/23 21:41:45 [DEBUG] plugin: terraform-provider-aws: goroutine 1419 [running]:
2017/02/23 21:41:45 [DEBUG] plugin: terraform-provider-aws: panic(0x1567480, 0xc42000c110)
2017/02/23 21:41:45 [DEBUG] plugin: terraform-provider-aws: /usr/local/Cellar/go/1.7.4_1/libexec/src/runtime/panic.go:500 +0x1a1
```
Fixes #12183
The fix is in flatmap for this but the entire issue is a bit more
complex. Given a schema with a computed set, if you reference it like
this:
lookup(attr[0], "field")
And "attr" contains a computed set within it, it would panic even though
"field" is available. There were a couple avenues I could've taken to
fix this:
1.) Any complex value containing any unknown value at any point is
entirely unknown.
2.) Only the specific part of the complex value is unknown.
I took route 2 so that the above works without any computed (since
"name" is not computed but something else is). This may actually have an
effect on other parts of Terraform configs, however those similar
configs would've simply crashed previously so it shouldn't break any
pre-existing configs.
* provider/azurerm: Bump AzureRM SDK to v8.0.1-beta
* provider/azurerm: Renaming SDK packages as per MSFT updates
* Bump azurerm sdk 8.0.1 (#12076)
* Updating the constructors to match the updated types
* Updating the Redis Client name
* ObjectID is now a string
* Updating to match the new Storage API specs
This feature allows sending a notification to either an SQS queue or an
SNS topic when an error occurs running an AWS Lambda function.
This fixes #10630.
* providers/spotinst: Add support for Spotinst resources
* providers/spotinst: Fix merge conflict - layouts/docs.erb
* docs/providers/spotinst: Fix the resource description field
* providers/spotinst: Fix the acceptance tests
* providers/spotinst: Mark the device_index as a required field
* providers/spotinst: Change the associate_public_ip_address field to TypeBool
* docs/providers/spotinst: Update the description of the adjustment field
* providers/spotinst: Rename IamRole to IamInstanceProfile to make it more compatible with the AWS provider
* docs/providers/spotinst: Rename iam_role to iam_instance_profile
* providers/spotinst: Deprecate the iam_role attribute
* providers/spotinst: Fix a misspelled var (IamRole)
* providers/spotinst: Fix possible null pointer exception related to "iam_instance_profile"
* docs/providers/spotinst: Add "load_balancer_names" missing description
* providers/spotinst: New resource "spotinst_subscription" added
* providers/spotinst: Eliminate a possible null pointer exception in "spotinst_aws_group"
* providers/spotinst: Eliminate a possible null pointer exception in "spotinst_subscription"
* providers/spotinst: Mark spotinst_subscription as deleted in destroy
* providers/spotinst: Add support for custom event format in spotinst_subscription
* providers/spotinst: Disable the destroy step of spotinst_subscription
* providers/spotinst: Add support for update subscriptions
* providers/spotinst: Merge fixed conflict - layouts/docs.erb
* providers/spotinst: Vendor dependencies
* providers/spotinst: Return a detailed error message
* provider/spotinst: Update the plugin list
* providers/spotinst: Vendor dependencies using govendor
* providers/spotinst: New resource "spotinst_healthcheck" added
* providers/spotinst: Update the Spotinst SDK
* providers/spotinst: Comment out unnecessary log.Printf
* providers/spotinst: Fix the acceptance tests
* providers/spotinst: Gofmt fixes
* providers/spotinst: Use multiple functions to expand each block
* providers/spotinst: Allow ondemand_count to be zero
* providers/spotinst: Change security_group_ids from TypeSet to TypeList
* providers/spotinst: Remove unnecessary `ForceNew` fields
* providers/spotinst: Update the Spotinst SDK
* providers/spotinst: Add support for capacity unit
* providers/spotinst: Add support for EBS volume pool
* providers/spotinst: Delete health check
* providers/spotinst: Allow to set multiple availability zones
* providers/spotinst: Gofmt
* providers/spotinst: Omit empty strings from the load_balancer_names field
* providers/spotinst: Update the Spotinst SDK to v1.1.9
* providers/spotinst: Add support for new strategy parameters
* providers/spotinst: Update the Spotinst SDK to v1.2.0
* providers/spotinst: Add support for Kubernetes integration
* providers/spotinst: Fix merge conflict - vendor/vendor.json
* providers/spotinst: Update the Spotinst SDK to v1.2.1
* providers/spotinst: Add support for Application Load Balancers
* providers/spotinst: Do not allow to set ondemand_count to 0
* providers/spotinst: Update the Spotinst SDK to v1.2.2
* providers/spotinst: Add support for scaling policy operators
* providers/spotinst: Add dimensions to spotinst_aws_group tests
* providers/spotinst: Allow both ARN and name for IAM instance profiles
* providers/spotinst: Allow ondemand_count=0
* providers/spotinst: Split out the set funcs into flatten style funcs
* providers/spotinst: Update the Spotinst SDK to v1.2.3
* providers/spotinst: Add support for EBS optimized flag
* providers/spotinst: Update the Spotinst SDK to v2.0.0
* providers/spotinst: Use stringutil.Stringify for debugging
* providers/spotinst: Update the Spotinst SDK to v2.0.1
* providers/spotinst: Key pair is now optional
* providers/spotinst: Make sure we do not nullify signals on strategy update
* providers/spotinst: Hash both Strategy and EBS Block Device
* providers/spotinst: Hash AWS load balancer
* providers/spotinst: Update the Spotinst SDK to v2.0.2
* providers/spotinst: Verify namespace exists before appending policy
* providers/spotinst: Image ID will be in a separate block from now on, so as to allow ignoring changes only on the image ID. This change is backwards compatible.
* providers/spotinst: User data is decoded when returned from the Spotinst API, so that TF compares the two states properly and does not update without cause.
Fixes: #8055, #10264, #10881
We have swapped from using d.GetOk (as that func returns nil when a
default value is used) to using default values that we can pass directly
to the struct. The fact that we have default values means that we can use
d.Get, which works here.
```
% make testacc TEST=./builtin/providers/datadog
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/22 18:56:03 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/datadog -v -timeout 120m
=== RUN TestDatadogMonitor_import
--- PASS: TestDatadogMonitor_import (8.66s)
=== RUN TestProvider
--- PASS: TestProvider (0.00s)
=== RUN TestProvider_impl
--- PASS: TestProvider_impl (0.00s)
=== RUN TestAccDatadogMonitor_Basic
--- PASS: TestAccDatadogMonitor_Basic (5.68s)
=== RUN TestAccDatadogMonitor_BasicNoTreshold
--- PASS: TestAccDatadogMonitor_BasicNoTreshold (3.13s)
=== RUN TestAccDatadogMonitor_Updated
--- PASS: TestAccDatadogMonitor_Updated (6.41s)
=== RUN TestAccDatadogMonitor_TrimWhitespace
--- PASS: TestAccDatadogMonitor_TrimWhitespace (3.22s)
=== RUN TestAccDatadogMonitor_Basic_float_int
--- PASS: TestAccDatadogMonitor_Basic_float_int (5.50s)
=== RUN TestAccDatadogTimeboard_update
--- PASS: TestAccDatadogTimeboard_update (8.35s)
=== RUN TestValidateAggregatorMethod
--- PASS: TestValidateAggregatorMethod (0.00s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/datadog 40.954s
```
* provider/aws: New resource codepipeline
* Vendor aws/codepipeline
* Add tests
* Add docs
* Bump codepipeline to v1.6.25
* Adjustments based on feedback
* Force new resource on ID change
* Improve tests
* Switch update to read
Since we don't require a second pass, only do a read.
* Skip tests if GITHUB_TOKEN is not set
Reported by @sethvargo: we auto-encode for AWS, and we should follow a
similar pattern for Azure.
In order to avoid double encoding, we check that the value is not already
encoded before encoding it.
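A small sketch of that check; a plain string that happens to be valid base64 would be passed through unchanged, which mirrors the check-before-encode approach described above:
```
package example

import "encoding/base64"

// Only encode the value if it does not already decode as valid base64,
// so user-supplied pre-encoded data is not encoded a second time.
func ensureBase64Encoded(s string) string {
	if _, err := base64.StdEncoding.DecodeString(s); err == nil {
		return s // already encoded
	}
	return base64.StdEncoding.EncodeToString([]byte(s))
}
```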
* Vendor google.golang.org/api/cloudbilling/v1
* providers/google: Add cloudbilling client
* providers/google: google_project supports billing account
This change allows a Terraform user to set and update the billing
account associated with their project.
* providers/google: Testing project billing account
This change adds optional acceptance tests for project billing accounts.
GOOGLE_PROJECT_BILLING_ACCOUNT and GOOGLE_PROJECT_BILLING_ACCOUNT_2
must be set in the environment for the tests to run; otherwise, they
will be skipped.
Also includes a few code cleanups per review.
* providers/google: Improve project billing error message
* vendor: Updating Gophercloud
* provider/openstack: Image Data Source
This commit adds the openstack_images_image_v2 data source which
is able to query the Image Service v2 API for a specific image.
* provider/datadog: Pulls v2 and removes v1 of library go-datadog-api.
See https://github.com/zorkian/go-datadog-api/issues/56 for context.
* Fixes bug in backoff implementation that decreased performance significantly.
* Uses pointers for field types, providing support for distinguishing
between a value being explicitly set and the default value for that type
being in effect.
* provider/datadog: Convert provider to use v2 of go-datadog-api.
* provider/datadog: Update vendored library.
* provider/datadog: Update dashboard resource to reflect API updates.
This commit adds the ability to log all requests and responses
between Terraform and the OpenStack cloud. To enable, set the
OS_DEBUG environment variable to 1.
This commit adds a check to prevent a user from specifying both
a floating IP and a port on a specific network. While this
configuration is currently allowed, the Port will be chosen and
applying the configuration again will show a state mismatch. This
attempts to prevent such a misconfiguration.
This commit has a few more fixes to the recently added
openstack_images_image_v2 resource:
* tags were changed to a Set because the OpenStack Image API does
not seem to respect ordering.
* The visibility argument was fixed.
* Acceptance tests for all updatable fields have been implemented.
* Documentation updates, including a new entry in the sidebar.
* provider/google-cloud: Add maintenance window
Allows specification of the `maintenance_window` within the `settings`
block. This controls when Google will restart a database in order to
apply updates. It is also possible to select an `update_track` to
relatively control updating between instances in the same project.
* Adjustments as suggested in code review.
* Added new resource aws_elastic_beanstalk_application_version.
* Changing bucket and key to required.
* Update to use d.Id() directly in DescribeApplicationVersions.
* Checking err to make sure that the application version is successfully deleted.
* Update `version_label` to `Computed: true`.
* provider/aws: Updating to python solution stack
* provider/aws: Beanstalk App Version delete source
The Elastic Beanstalk API call to delete `application_version` resource
should not delete the s3 bundle, as this object is managed by another
Terraform resource
* provider/aws: Update application version docs
* Fix application version test
* Add `version_label` update test
Adds test that fails after rebasing branch onto v0.8.x. `version_label`
changes do not update the `aws_elastic_beanstalk_environment` resource.
* `version_label` changes to update environment
* Prevent unintended delete of `application_version`
Prevents an `application_version` used by multiple environments from
being deleted.
* Add `force_delete` attribute
* Update documentation
* [datadog] Update go-datadog-api library
Involves one breaking API change. Also some `gofmt`ing.
* [datadog] Add support for new_host_delay to the datadog_monitor resource
New API parameter that Datadog added for monitors to ignore new hosts
for the specified time period in monitor evaluation.
Our delete operation for google_compute_project_metadata didn't check an
error when making the call to delete metadata, which led to a panic in
our tests. This is also probably indicative of why our tests
failed/metadata got left dangling.
Fixes the `TestAccAWSAutoscalingLifecycleHook_omitDefaultResult` acceptance test to run in parallel.
```
$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSAutoscalingLifecycleHook_omitDefaultResult'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/15 22:33:26 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSAutoscalingLifecycleHook_omitDefaultResult -timeout 120m
=== RUN TestAccAWSAutoscalingLifecycleHook_omitDefaultResult
--- PASS: TestAccAWSAutoscalingLifecycleHook_omitDefaultResult (146.91s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 146.917s
```
Previously we only validated that the provided Cloudflare record was a valid record type. However, a record can be of a valid type yet still not support proxying, making it an invalid record type when proxied.
The main downside of checking whether the record type can be proxied during validation is that it relies on two schema keys being populated. This means that we can only catch the improper record type at `apply` time, instead of `plan` time.
```
$ go test -v -run "TestValidateRecordType" ./builtin/providers/cloudflare
=== RUN TestValidateRecordType
--- PASS: TestValidateRecordType (0.00s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/cloudflare 0.004s
```
sensitive
It was pointed out at the HUG in London tonight that we save the
plain-text password in state.
I don't think this will be ported back to 0-8-stable
This allows for updates to size, type and iops
Fixes: #11931
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSEBSVolume_update'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/15 22:35:43 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSEBSVolume_update -timeout 120m
=== RUN TestAccAWSEBSVolume_updateSize
--- PASS: TestAccAWSEBSVolume_updateSize (53.57s)
=== RUN TestAccAWSEBSVolume_updateType
--- PASS: TestAccAWSEBSVolume_updateType (57.53s)
=== RUN TestAccAWSEBSVolume_updateIops
--- PASS: TestAccAWSEBSVolume_updateIops (53.63s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 164.753s
```
This extends the work in #11668 to enable final snapshots by default.
This time it's for Redshift.
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSRedshiftCluster_withFinalSnapshot'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/04 13:53:02 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSRedshiftCluster_withFinalSnapshot -timeout 120m
=== RUN TestAccAWSRedshiftCluster_withFinalSnapshot
--- PASS: TestAccAWSRedshiftCluster_withFinalSnapshot (859.96s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 859.986s
```
If we get `InvalidParameterException` with the message "Could not deliver test
message to specified" then retry as this is often down to some sort of internal
delay in Amazons API. Also increase the timeout from 30 seconds to 3 minutes as
it has been observed to take that long sometimes for the creation to succeed.
This applies to both log destinations and subscription filters.
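A hedged sketch of that retry; putSubscriptionFilter is a hypothetical stand-in for the real API call:
```
package example

import (
	"strings"
	"time"

	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/hashicorp/terraform/helper/resource"
)

// Retry only the specific InvalidParameterException about delivering
// the test message, for up to three minutes.
func putFilterWithRetry() error {
	return resource.Retry(3*time.Minute, func() *resource.RetryError {
		err := putSubscriptionFilter()
		if awsErr, ok := err.(awserr.Error); ok {
			if awsErr.Code() == "InvalidParameterException" &&
				strings.Contains(awsErr.Message(), "Could not deliver test message to specified") {
				return resource.RetryableError(err)
			}
		}
		if err != nil {
			return resource.NonRetryableError(err)
		}
		return nil
	})
}

func putSubscriptionFilter() error { return nil } // placeholder
```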
* Adds response conditions for papertrail in fastly
* Adds cache conditional for gzip in fastly
* Opens up conitionals under fastly headers
* Adds request conditions to s3 logging for fastly
* Creates conditionals properly for testing
* Clarifies conditionals documentation for the website
* Clarifies resource descriptions for conditionals
* Formats papertrail testing properly
* Fizes syntax issues in gzip and s3 fastly testing
* Tests full schemas for gzip basic testing
* Updates header testing to check full schema
* Fixes gzip and headers testing
* Fixes s3 conditional testing
* vendor: Updating Gophercloud
* provider/openstack: Fix upstream AvailabilityZone field
This commit complements an upstream fix where "availability" was being sent instead of
"availability_zone".
We now enable the final_snapshot of aws_rds_cluster by default. This is
a continuation of the work in #11668
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSRDSCluster_takeFinalSnapshot'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/04 13:19:52 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSRDSCluster_takeFinalSnapshot -timeout 120m
=== RUN TestAccAWSRDSCluster_takeFinalSnapshot
--- PASS: TestAccAWSRDSCluster_takeFinalSnapshot (141.59s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 141.609s
```
Validate the policy supplied via `assume_role_policy` in an `aws_iam_role`
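The validation is, roughly, a syntactic JSON check run before the API is called. A minimal sketch, assuming the `ValidateFunc` signature used by `helper/schema` at the time (the real function in the provider may be named and structured differently):
```
package aws

import (
	"encoding/json"
	"fmt"
)

// validateIAMPolicyJSON rejects a policy document that is not syntactically
// valid JSON, so the failure surfaces before the AWS API is ever called.
func validateIAMPolicyJSON(v interface{}, k string) (ws []string, errors []error) {
	value, ok := v.(string)
	if !ok {
		errors = append(errors, fmt.Errorf("%q must be a string", k))
		return
	}
	var out interface{}
	if err := json.Unmarshal([]byte(value), &out); err != nil {
		errors = append(errors, fmt.Errorf("%q contains invalid JSON: %s", k, err))
	}
	return
}
```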
```
$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSRole_badJSON'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/13 14:13:47 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSRole_badJSON -timeout 120m
=== RUN TestAccAWSRole_badJSON
--- PASS: TestAccAWSRole_badJSON (0.00s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 0.019s
```
This fixes an issue, introduced in #11369, with the diff suppress function when creating a new `aws_db_instance` resource while using the default `engine_version`.
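For context, a `DiffSuppressFunc` of this shape is the mechanism involved; the sketch below is illustrative only and shows one way to avoid suppressing the diff on a brand-new resource (the provider's actual logic may differ):
```
package aws

import (
	"strings"

	"github.com/hashicorp/terraform/helper/schema"
)

// suppressEngineVersionDiff only suppresses minor-version drift once the
// instance exists; on a new resource (empty ID) the configured default
// engine_version must produce a real diff.
func suppressEngineVersionDiff(k, old, new string, d *schema.ResourceData) bool {
	if d.Id() == "" {
		return false
	}
	// Treat a configured "5.6" as matching an API-reported "5.6.27".
	return new != "" && strings.HasPrefix(old, new)
}
```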
```
$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSDBInstance_diffSuppressInitialState'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/13 11:52:12 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSDBInstance_diffSuppressInitialState -timeout 120m
=== RUN TestAccAWSDBInstance_diffSuppressInitialState
--- PASS: TestAccAWSDBInstance_diffSuppressInitialState (480.78s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 480.793s
```
* Importing v7.0.1 of the SDK
* Configuring the Container Service Client
* Scaffolding the Container Service resource
* Scaffolding the Website documentation
* Completing the documentation
* Acceptance Tests for Kubernetes Azure Container Service
* DCOS / Swarm tests
* Parsing values back from the API properly
* Fixing the test
* Service Principal can be optional. Because of course it can.
* Validation for the Container Service Counts
* Updating the docs
* Updating the field required values
* Making the documentation more explicit
* Fixing the build
* Examples for DCOS and Swarm
* Removing storage_uri for now
* Making the SSH Key required as per the docs
* Resolving the merge conflicts
* Removing the unused errors
* Adding Hashes to the schemas
* Switching out the provider registration
* Fixing the hash definitions
* Updating keydata to match
* Client Secret is sensitive
* List -> Set
* Using the first item for the diagnostic_profile
* Helps if you actually update the type
* Updating the docs to include the Computed fields
* Fixing comments / removing redundant optional checks
* Removing the FQDNs from the examples
* Moving the Container resources together
Fixes: #11863
Backwards incompatible so will not be pushed to 0.8.x series. This
follows the datadog documentation as mentioned in the issue
```
% make testacc TEST=./builtin/providers/datadog TESTARGS='-run=TestAccDatadogMonitor_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/13 12:30:24 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/datadog -v -run=TestAccDatadogMonitor_ -timeout 120m
=== RUN TestAccDatadogMonitor_Basic
--- PASS: TestAccDatadogMonitor_Basic (84.23s)
=== RUN TestAccDatadogMonitor_BasicNoTreshold
--- PASS: TestAccDatadogMonitor_BasicNoTreshold (81.92s)
=== RUN TestAccDatadogMonitor_Updated
--- PASS: TestAccDatadogMonitor_Updated (82.91s)
=== RUN TestAccDatadogMonitor_TrimWhitespace
--- PASS: TestAccDatadogMonitor_TrimWhitespace (63.34s)
=== RUN TestAccDatadogMonitor_Basic_float_int
--- PASS: TestAccDatadogMonitor_Basic_float_int (75.84s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/datadog 388.257s
```
The volume_Droplet test was failing due to an incorrect fix pushed on Friday. This fixes the failing test.
```
$ make testacc TEST=./builtin/providers/digitalocean TESTARGS='-run=TestAccDigitalOceanVolume_Droplet'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/11 17:55:09 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/digitalocean -v -run=TestAccDigitalOceanVolume_Droplet -timeout 120m
=== RUN TestAccDigitalOceanVolume_Droplet
--- PASS: TestAccDigitalOceanVolume_Droplet (52.78s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/digitalocean 52.797s
```
A security_group_rule can also be created from a `prefix_list_id`.
Introduced in #11809
```
$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSSecurityGroupRule_PrefixListEgress'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/10 12:41:40 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSSecurityGroupRule_PrefixListEgress -timeout 120m
=== RUN TestAccAWSSecurityGroupRule_PrefixListEgress
--- PASS: TestAccAWSSecurityGroupRule_PrefixListEgress (33.94s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 33.970s
```
* provider/aws: output the log group name when create fails
* adjusted formatting to match other error output
* fixup detailed error message for ResourceAlreadyExistsException
* forgot an import
* show the log group name regardless of error type
Previously the AMI creation accepted a static value for the AMI's block device's volume size.
This change allows the user to omit the `volume_size` attribute, in order to mimic the AWS API behavior, which will use the EBS Volume's size.
Also fixes a potential panic case when setting `iops` on the AMI's block device.
The `aws_ami` resource previously didn't have any acceptance tests. This adds two acceptance tests and a full testing suite for the `aws_ami` resource, so further tests can be written, as well as expansion of the other `aws_ami_*` acceptance tests.
```
$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSAMI_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/09 20:18:22 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSAMI_ -timeout 120m
=== RUN TestAccAWSAMI_basic
--- PASS: TestAccAWSAMI_basic (44.21s)
=== RUN TestAccAWSAMI_snapshotSize
--- PASS: TestAccAWSAMI_snapshotSize (45.08s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 89.320s
```
Allows redshift security group tests to better handle being run in parallel.
```
$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSRedshiftSecurityGroup_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/09 10:40:25 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSRedshiftSecurityGroup_ -timeout 120m
=== RUN TestAccAWSRedshiftSecurityGroup_importBasic
--- PASS: TestAccAWSRedshiftSecurityGroup_importBasic (12.98s)
=== RUN TestAccAWSRedshiftSecurityGroup_ingressCidr
--- PASS: TestAccAWSRedshiftSecurityGroup_ingressCidr (11.02s)
=== RUN TestAccAWSRedshiftSecurityGroup_updateIngressCidr
--- PASS: TestAccAWSRedshiftSecurityGroup_updateIngressCidr (32.81s)
=== RUN TestAccAWSRedshiftSecurityGroup_ingressSecurityGroup
--- PASS: TestAccAWSRedshiftSecurityGroup_ingressSecurityGroup (14.82s)
=== RUN TestAccAWSRedshiftSecurityGroup_updateIngressSecurityGroup
--- PASS: TestAccAWSRedshiftSecurityGroup_updateIngressSecurityGroup (37.43s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 109.090s
```
Allows the redshift parameter group acceptance tests to better handle being run in parallel.
```
$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSRedshiftParameterGroup_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/09 10:16:19 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSRedshiftParameterGroup_ -timeout 120m
=== RUN TestAccAWSRedshiftParameterGroup_importBasic
--- PASS: TestAccAWSRedshiftParameterGroup_importBasic (15.17s)
=== RUN TestAccAWSRedshiftParameterGroup_withParameters
--- PASS: TestAccAWSRedshiftParameterGroup_withParameters (13.16s)
=== RUN TestAccAWSRedshiftParameterGroup_withoutParameters
--- PASS: TestAccAWSRedshiftParameterGroup_withoutParameters (12.58s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 40.940s
```
Updates the aws_elb acceptance tests to better handle parallel test runs
```
$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSLoadBalancerPolicy_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/09 10:04:58 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSLoadBalancerPolicy_ -timeout 120m
=== RUN TestAccAWSLoadBalancerPolicy_basic
--- PASS: TestAccAWSLoadBalancerPolicy_basic (24.50s)
=== RUN TestAccAWSLoadBalancerPolicy_updateWhileAssigned
--- PASS: TestAccAWSLoadBalancerPolicy_updateWhileAssigned (42.34s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 66.866s
```
When run in parallel, the tests `TestAccAwsEcsTaskDefinition_withNetwork` and `TestAccAwsEcsTaskDefinition_withTask` will overlap with each other due to the shared naming of the `iam_role` resource.
This fixes these tests to allow running in parallel on TeamCity.
```
$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAwsEcsTaskDefinition_withTask'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/09 09:20:03 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAwsEcsTaskDefinition_withTask -timeout 120m
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 0.022s
```
```
$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAwsEcsTaskDefinition_withNetwork'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/09 09:21:10 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAwsEcsTaskDefinition_withNetwork -timeout 120m
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 0.026s
```
Fixes the `TestAccAwsAPIGatewayMethod_customauthorizer` acceptance test, which previously failed if the iam_role resources leaked
```
$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAwsAPIGatewayMethod_customauthorizer'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/09 09:10:07 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAwsAPIGatewayMethod_customauthorizer -timeout 120m
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 0.022s
```
An AWS Security Group Rule requires at least one of `cidr_blocks`, `self`, or `source_security_group_id` in order to be successfully created.
If the `aws_security_group_rule` doesn't contain one of these attributes, the AWS API will still return a `200` response, and not report any error in the response.
Example response from the API on a malformed submission:
```
2017/02/08 16:04:33 [DEBUG] plugin: terraform: -----------------------------------------------------
2017/02/08 16:04:33 [DEBUG] plugin: terraform: aws-provider (internal) 2017/02/08 16:04:33 [DEBUG] [aws-sdk-go] DEBUG: Response ec2/AuthorizeSecurityGroupIngress Details:
2017/02/08 16:04:33 [DEBUG] plugin: terraform: ---[ RESPONSE ]--------------------------------------
2017/02/08 16:04:33 [DEBUG] plugin: terraform: HTTP/1.1 200 OK
2017/02/08 16:04:33 [DEBUG] plugin: terraform: Connection: close
2017/02/08 16:04:33 [DEBUG] plugin: terraform: Transfer-Encoding: chunked
2017/02/08 16:04:33 [DEBUG] plugin: terraform: Content-Type: text/xml;charset=UTF-8
2017/02/08 16:04:33 [DEBUG] plugin: terraform: Date: Wed, 08 Feb 2017 21:04:33 GMT
2017/02/08 16:04:33 [DEBUG] plugin: terraform: Server: AmazonEC2
2017/02/08 16:04:33 [DEBUG] plugin: terraform: Vary: Accept-Encoding
2017/02/08 16:04:33 [DEBUG] plugin: terraform:
2017/02/08 16:04:33 [DEBUG] plugin: terraform: 102
2017/02/08 16:04:33 [DEBUG] plugin: terraform: <?xml version="1.0" encoding="UTF-8"?>
2017/02/08 16:04:33 [DEBUG] plugin: terraform: <AuthorizeSecurityGroupIngressResponse xmlns="http://ec2.amazonaws.com/doc/2016-11-15/">
2017/02/08 16:04:33 [DEBUG] plugin: terraform: <requestId>ac08c33f-8043-46d4-b637-4c4b2fc7a094</requestId>
2017/02/08 16:04:33 [DEBUG] plugin: terraform: <return>true</return>
2017/02/08 16:04:33 [DEBUG] plugin: terraform: </AuthorizeSecurityGroupIngressResponse>
2017/02/08 16:04:33 [DEBUG] plugin: terraform: 0
2017/02/08 16:04:33 [DEBUG] plugin: terraform:
2017/02/08 16:04:33 [DEBUG] plugin: terraform:
2017/02/08 16:04:33 [DEBUG] plugin: terraform: -----------------------------------------------------
```
This previously caused Terraform to wait until the security_group_rule propagated, which never happened due to the silent failure.
The changeset ensures that one of the required attributes is set prior to creating the aws_security_group_rule.
Also catches the error returned from the retry function. Previously the error was ignored, and only logged at the `DEBUG` level.
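Conceptually the pre-flight check is a simple guard run before the API call, since AWS accepts the malformed request silently. A hedged sketch (field names taken from the resource schema, helper name invented for illustration):
```
package aws

import "fmt"

// validateSecurityGroupRuleSource requires at least one traffic source to be
// configured before AuthorizeSecurityGroup* is called; otherwise the API
// returns 200 with no error and the rule never materialises.
func validateSecurityGroupRuleSource(cidrBlocks []string, self bool, sourceSecurityGroupID string) error {
	if len(cidrBlocks) == 0 && !self && sourceSecurityGroupID == "" {
		return fmt.Errorf("one of cidr_blocks, self, or source_security_group_id must be set")
	}
	return nil
}
```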
Previously, an `aws_rds_cluster` that contains active instance groups would time out on a destroy if the destroy only targeted the rds_cluster and did not include the instance groups.
This would result in a `400` response from AWS, and Terraform would sit in a wait-loop for the 15-minute timeout while waiting for the cluster to be destroyed.
This catches the error returned from the `DeleteDBCluster` function call such that the proper error case can be returned to the user.
`400` from the AWS API:
```
2017/02/08 13:40:47 [DEBUG] plugin: terraform: ---[ RESPONSE ]--------------------------------------
2017/02/08 13:40:47 [DEBUG] plugin: terraform: HTTP/1.1 400 Bad Request
2017/02/08 13:40:47 [DEBUG] plugin: terraform: Connection: close
2017/02/08 13:40:47 [DEBUG] plugin: terraform: Content-Length: 337
2017/02/08 13:40:47 [DEBUG] plugin: terraform: Content-Type: text/xml
2017/02/08 13:40:47 [DEBUG] plugin: terraform: Date: Wed, 08 Feb 2017 18:40:46 GMT
2017/02/08 13:40:47 [DEBUG] plugin: terraform: X-Amzn-Requestid: 1b4a76cc-ee2e-11e6-867d-2311ebaffd3e
2017/02/08 13:40:47 [DEBUG] plugin: terraform:
2017/02/08 13:40:47 [DEBUG] plugin: terraform: <ErrorResponse xmlns="http://rds.amazonaws.com/doc/2014-10-31/">
2017/02/08 13:40:47 [DEBUG] plugin: terraform: <Error>
2017/02/08 13:40:47 [DEBUG] plugin: terraform: <Type>Sender</Type>
2017/02/08 13:40:47 [DEBUG] plugin: terraform: <Code>InvalidDBClusterStateFault</Code>
2017/02/08 13:40:47 [DEBUG] plugin: terraform: <Message>Cluster cannot be deleted, it still contains DB instances in non-deleting state.</Message>
2017/02/08 13:40:47 [DEBUG] plugin: terraform: </Error>
2017/02/08 13:40:47 [DEBUG] plugin: terraform: <RequestId>1b4a76cc-ee2e-11e6-867d-2311ebaffd3e</RequestId>
2017/02/08 13:40:47 [DEBUG] plugin: terraform: </ErrorResponse>
2017/02/08 13:40:47 [DEBUG] plugin: terraform:
2017/02/08 13:40:47 [DEBUG] plugin: terraform: -----------------------------------------------------
```
Error returns now, as expected:
```
Error applying plan:
2017/02/08 13:40:47 [DEBUG] plugin: waiting for all plugin processes to complete...
1 error(s) occurred:
* aws_rds_cluster.jake (destroy): 1 error(s) occurred:
2017/02/08 13:40:47 [DEBUG] plugin: terraform: aws-provider (internal) 2017/02/08 13:40:47 [DEBUG] plugin: waiting for all plugin processes to complete...
* aws_rds_cluster.jake: RDS Cluster cannot be deleted: Cluster cannot be deleted, it still contains DB instances in non-deleting state.
```
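The gist of the fix is to inspect the error returned by `DeleteDBCluster` instead of dropping it and entering the wait loop. A rough sketch, with the wrapper name invented for illustration:
```
package aws

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/awserr"
)

// checkDeleteDBClusterError surfaces an InvalidDBClusterStateFault to the
// user immediately rather than waiting out the 15-minute destroy timeout.
func checkDeleteDBClusterError(err error) error {
	if err == nil {
		return nil
	}
	if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "InvalidDBClusterStateFault" {
		return fmt.Errorf("RDS Cluster cannot be deleted: %s", awsErr.Message())
	}
	return err
}
```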
Our DNS tests were using terraform.test as a DNS name, which GCP was
erroring on, as we haven't proven we own the domain (and can't, as we
don't). To solve this, I updated the tests to use hashicorptest.com,
which we _do_ own, and which we have proven ownership of. The tests now
pass.
Found in testing that a timeout of 30 seconds didn't allow enough time for the error
message that codebuild wasn't supported in eu-west-2 to surface.
Discussed this with @radeksimko and he suggested raising the timeout.
Previously the db_event_subscription import would only work if there was a single db_event_subscription resource. This fixes the import, allowing it to work as expected.
Also fixes the acceptance test for the resource to reflect this.
```
$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSDBEventSubscription_importBasic'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/07 10:38:10 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSDBEventSubscription_importBasic -timeout 120m
=== RUN TestAccAWSDBEventSubscription_importBasic
--- PASS: TestAccAWSDBEventSubscription_importBasic (633.33s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 633.353s
```
* Adds schema for fastly healthcheck
* Handles changes to the fastly healthcheck
* Flattens and refreshed fastly healthchecks
* Adds testing for fastly healthcheck
* Adds website documentation for fastly healthcheck
* Fixes terraform syntax in test examples
Fixes the `TestAccAWSDBEventSubscription_basicUpdate` acceptance test
`TestAccAWSDBEventSubscription_importBasic` is still failing, but has been failing since November.
This change removes the read of the deprecated `policy_data` attribute in
the `google_project` resource.
0.8.5 introduced new behavior that incorrectly read the `policy_data`
field during the read lifecycle event. This caused Terraform to
assume it owned not just policy defined in the data source, but
everything that was associated with the project. Migrating from 0.8.4
to 0.8.5, this would cause the config (partial) to be compared to the
state (complete, as it was read from the API) and assume some
policies had been explicitly deleted. Terraform would then delete them.
Fixes #11556
Allows the user to specify the log format version when setting up `s3logging` on the fastly service resource.
Requires an update to the vendored `go-fastly` dependency.
Also adds an additional acceptance test for the new attribute.
```
$ make testacc TEST=./builtin/providers/fastly TESTARGS='-run=TestAccFastlyServiceV1_s3logging'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/06 14:51:55 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/fastly -v -run=TestAccFastlyServiceV1_s3logging -timeout 120m
=== RUN TestAccFastlyServiceV1_s3logging_basic
--- PASS: TestAccFastlyServiceV1_s3logging_basic (36.11s)
=== RUN TestAccFastlyServiceV1_s3logging_s3_env
--- PASS: TestAccFastlyServiceV1_s3logging_s3_env (15.35s)
=== RUN TestAccFastlyServiceV1_s3logging_formatVersion
--- PASS: TestAccFastlyServiceV1_s3logging_formatVersion (15.71s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/fastly 67.186s
```
- OpenStack provider now supports either a path or the file contents for
`cacert_file`, `cert`, and `key`
- Makes it easier to automate TF by passing in certs as environment
variables
- set `OS_SSL_TESTS=true` to run the acceptance tests
Updates the vendored Nomad jobspec parser such that parameterized nomad job files can now be parsed and used with Terraform.
Also fixes tests to adhere to new jobspec version, and update documentation to reflect such as well.
Discovered after #11619 was fixed, and while fixing acceptance tests for the `aws_spot_instance_request` resource.
Previously the `aws_spot_instance_request` resource wouldn't populate any of the block device attributes from the resulting instance's metadata. This fixes that issue, and also fixes the `aws_spot_instance_request` acceptance tests to be more equipped for running in parallel.
```
make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSSpotInstanceRequest_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/02 18:02:37 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSSpotInstanceRequest_ -timeout 120m
=== RUN TestAccAWSSpotInstanceRequest_basic
--- PASS: TestAccAWSSpotInstanceRequest_basic (111.96s)
=== RUN TestAccAWSSpotInstanceRequest_withBlockDuration
--- PASS: TestAccAWSSpotInstanceRequest_withBlockDuration (105.12s)
=== RUN TestAccAWSSpotInstanceRequest_vpc
--- PASS: TestAccAWSSpotInstanceRequest_vpc (115.81s)
=== RUN TestAccAWSSpotInstanceRequest_SubnetAndSG
--- PASS: TestAccAWSSpotInstanceRequest_SubnetAndSG (130.46s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 463.377s
```
Better handle parallel testing of the `aws_spot_fleet_request` resource. Previously we were catching a lot of errors due to naming the policy_attachment resource the same between tests.
* Terraform ProfitBricks Builder
* make fmt
* Merge remote-tracking branch 'upstream/master' into terraform-provider-profitbricks
# Conflicts:
# command/internal_plugin_list.go
* Addressing PR remarks
* Removed importers
* Added ProfitBricks Data Sources
* Added documentation
* Updated to REST v3:
- nat parameter for Nics
- availabilityZone for Volumes
Minor code clean up
* Minor code clean up
* Fixed typo in volume documentation
* make fmt
* Addressing requested changes
* Added a step in load balancer tests in CheckDestroy where we are making sure that the test doesn't leave dangling resources in ProfitBricks
* Changed expected image name
* Fixed data center test
Code clean up
* Add aws dms vendoring
* Add aws dms endpoint resource
* Add aws dms replication instance resource
* Add aws dms replication subnet group resource
* Add aws dms replication task resource
* Fix aws dms resource go vet errors
* Review fixes: Add id validators for all resources. Add validator for endpoint engine_name.
* Add aws dms resources to importability list
* Review fixes: Add aws dms iam role dependencies to test cases
* Review fixes: Adjustments for handling input values
* Add aws dms replication subnet group tagging
* Fix aws dms subnet group not using the standard error for resource not found
* Missed update of aws dms vendored version
* Add aws dms certificate resource
* Update aws dms resources to force new for immutable attributes
* Fix tests failing on subnet deletion by adding explicit dependencies. Combine import tests with basic tests to cut down runtime.
* provider/aws: Update Application Auto Scaling service model
- Add support for automatically scaling an Amazon EC2 Spot fleet.
* Remove duplicate policy_type check.
* Test creating a scalable target for a spot fleet request.
* Test creating a scaling policy for a spot fleet request.
* Update resource docs to support scaling an Amazon EC2 Spot fleet.
- aws_appautoscaling_policy
- aws_appautoscaling_target
* Remove arn attribute from aws_appautoscaling_target
- No arn is generated or returned for this resource.
* Remove optional name attribute from aws_appautoscaling_target
- ScalableTargets do not have a name
- I think this was copied from aws_appautoscaling_policy
* AWS Application Autoscaling resource documentation tweaks
- include a target resource in the policy example
- sort attributes by alpha
- fixup markdown
- add spaces to test config
Previously the `root_block_device` config map was a `schema.TypeSet` with an empty `Set` function, and a hard-limit of 1 on the attribute block.
This prevented a user from making any real changes inside the attribute block, thus leaving the user with an `Apply complete!` message, and nothing changed.
The schema API has since been updated, and we can now specify the `root_block_device` as a `schema.TypeList` with `MaxItems` set to `1`. This fixes the issue, and allows the user to update the `aws_instance`'s `root_block_device` attribute, and see changes actually propagate.
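The new shape of the attribute is, roughly, a single-element list; a sketch of the schema under that assumption (only a few of the nested attributes are shown):
```
package aws

import "github.com/hashicorp/terraform/helper/schema"

// rootBlockDeviceSchema sketches the TypeList-with-MaxItems-1 form; the old
// TypeSet with an empty Set function could not produce a diff for in-place
// changes to the nested attributes.
func rootBlockDeviceSchema() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeList,
		Optional: true,
		Computed: true,
		MaxItems: 1,
		Elem: &schema.Resource{
			Schema: map[string]*schema.Schema{
				"volume_size": {Type: schema.TypeInt, Optional: true, Computed: true},
				"volume_type": {Type: schema.TypeString, Optional: true, Computed: true},
				"iops":        {Type: schema.TypeInt, Optional: true, Computed: true},
			},
		},
	}
}
```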
Adds tag support to the `aws_dynamodb_table` resource. Also adds a test for the resource, and a test to ensure that the tags are populated correctly from a resource import.
```
$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSDynamoDBTable_tags'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/01 15:35:00 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSDynamoDBTable_tags -timeout 120m
=== RUN TestAccAWSDynamoDBTable_tags
--- PASS: TestAccAWSDynamoDBTable_tags (28.69s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 28.713s
```
```
$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSDynamoDbTable_importTags'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/01 15:39:49 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSDynamoDbTable_importTags -timeout 120m
=== RUN TestAccAWSDynamoDbTable_importTags
--- PASS: TestAccAWSDynamoDbTable_importTags (30.62s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 30.645s
```
I believe that if no VPC Endpoints were returned from the AWS API, we
were not guarding against a panic. We were still trying to inspect the
RouteTableIds. This commit will ensure that no errors are thrown before
trying to use the RouteTableIds
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSVpcEndpointRouteTableAssociation_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/01 18:06:29 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSVpcEndpointRouteTableAssociation_ -timeout 120m
=== RUN TestAccAWSVpcEndpointRouteTableAssociation_basic
--- PASS: TestAccAWSVpcEndpointRouteTableAssociation_basic (42.83s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 42.859s
```
Fixes the beanstalk env tests such that they can better run in parallel. Previously, only the beanstalk application was randomized; now the beanstalk environment is also randomized to better facilitate running our tests in parallel.
```
=== RUN TestAccAWSBeanstalkEnv_outputs
--- PASS: TestAccAWSBeanstalkEnv_outputs (388.74s)
=== RUN TestAccAWSBeanstalkEnv_cname_prefix
--- PASS: TestAccAWSBeanstalkEnv_cname_prefix (386.78s)
=== RUN TestAccAWSBeanstalkEnv_config
--- PASS: TestAccAWSBeanstalkEnv_config (532.56s)
=== RUN TestAccAWSBeanstalkEnv_resource
--- PASS: TestAccAWSBeanstalkEnv_resource (420.47s)
=== RUN TestAccAWSBeanstalkEnv_vpc
--- PASS: TestAccAWSBeanstalkEnv_vpc (516.02s)
=== RUN TestAccAWSBeanstalkEnv_template_change
--- PASS: TestAccAWSBeanstalkEnv_template_change (623.38s)
=== RUN TestAccAWSBeanstalkEnv_basic_settings_update
--- PASS: TestAccAWSBeanstalkEnv_basic_settings_update (705.32s)
```
If an `aws_volume_attachment` is identical to one that already exists in
the API, don't attempt to re-create it (which fails), simply act as
though the creation command had already been run and continue.
This allows Terraform to cleanly recover from a situation where a volume
attachment action hangs indefinitely, possibly due to a bad instance
state, requiring manual intervention such as an instance reboot. In such
a situation, Terraform believes the attachment has failed, when in fact
it succeeded after the timeout had expired. On the subsequent retry run,
attempting to re-create the attachment will fail outright, due to the
AttachVolume API call being non-idempotent. This patch implements the
idempotency client-side by matching the (name, vID, iID) tuple.
Note that volume attachments are not assigned an ID by the API.
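A simplified sketch of the client-side idempotency match on the (device name, volume ID, instance ID) tuple; the struct here is local to the example and not the AWS SDK type:
```
package aws

// volumeAttachment is a stand-in for the attachment details reported by
// DescribeVolumes.
type volumeAttachment struct {
	DeviceName string
	VolumeID   string
	InstanceID string
}

// attachmentExists reports whether an identical attachment is already in
// place, in which case the AttachVolume call can be skipped and creation
// treated as already done.
func attachmentExists(existing []volumeAttachment, want volumeAttachment) bool {
	for _, a := range existing {
		if a.DeviceName == want.DeviceName &&
			a.VolumeID == want.VolumeID &&
			a.InstanceID == want.InstanceID {
			return true
		}
	}
	return false
}
```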
message
Fixes: #11568
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSRDSCluster_missingUserNameCausesError'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/01 12:11:14 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSRDSCluster_missingUserNameCausesError -timeout 120m
=== RUN TestAccAWSRDSCluster_missingUserNameCausesError
--- PASS: TestAccAWSRDSCluster_missingUserNameCausesError (3.22s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 3.243s
```
The error message for a required parameter being missing had a wrong parameter baked into it. Therefore, when the error message tried to fire, it was throwing a panic. Added a test to make sure the condition still fires, and with a correct message.
Fixes: #11549
When a user passed the wrong argument to a route53_record import, they
got a crash. This was because we expected the ID to parse correctly. The
crash looked like this:
```
% terraform import aws_route53_record.import1 mike.westredd.com
aws_route53_record.import1: Importing from ID "mike.westredd.com"...
aws_route53_record.import1: Import complete!
Imported aws_route53_record (ID: mike.westredd.com)
aws_route53_record.import1: Refreshing state... (ID: mike.westredd.com)
Error importing: 1 error(s) occurred:
* aws_route53_record.import1: unexpected EOF
panic: runtime error: index out of range
```
Rather than throwing a panic to the user, we should present them with a more useful message that tells them what the error is:
```
% terraform import aws_route53_record.import mike.westredd.com
aws_route53_record.import: Importing from ID "mike.westredd.com"...
aws_route53_record.import: Import complete!
Imported aws_route53_record (ID: mike.westredd.com)
aws_route53_record.import: Refreshing state... (ID: mike.westredd.com)
Error importing: 1 error(s) occurred:
* aws_route53_record.import: Error Importing aws_route_53 record. Please make sure the record ID is in the form ZONEID_RECORDNAME_TYPE (i.e. Z4KAPRWWNC7JR_dev_A
```
At least they can work out what the problem is in this case
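The fix boils down to validating the import ID before indexing into its parts. A minimal sketch (helper name invented; the real parsing handles more record-name variations):
```
package aws

import (
	"fmt"
	"strings"
)

// parseRoute53RecordImportID rejects an ID that does not split into at least
// ZONEID_RECORDNAME_TYPE, instead of indexing past the end of the slice and
// panicking.
func parseRoute53RecordImportID(id string) (zone, record, recordType string, err error) {
	parts := strings.Split(id, "_")
	if len(parts) < 3 {
		return "", "", "", fmt.Errorf(
			"Error Importing aws_route_53 record. Please make sure the record ID is in the form ZONEID_RECORDNAME_TYPE (i.e. Z4KAPRWWNC7JR_dev_A)")
	}
	return parts[0], parts[1], parts[2], nil
}
```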
Cloud SQL Gen 2 instances come with a default 'root'@'%' user on
creation. This change automatically deletes that user after creation. A
Terraform user must use the google_sql_user resource to create a user with
appropriate host and password.
The `aws_availability_zones` data source test was panicking. This fixes both tests
```
$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSAvailabilityZones'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/01/31 15:47:39 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSAvailabilityZones -timeout 120m
=== RUN TestAccAWSAvailabilityZones_basic
--- PASS: TestAccAWSAvailabilityZones_basic (12.56s)
=== RUN TestAccAWSAvailabilityZones_stateFilter
--- PASS: TestAccAWSAvailabilityZones_stateFilter (13.59s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 26.187s
```
* Added Step Function Activity & Step Function State Machine
* Added SFN State Machine documentation
* Added aws_sfn_activity & documentation
* Allowed import of sfn resources
* Added more checks on tests, fixed documentation
* Handled the update case of a SFN function (might be already deleting)
* Removed the State Machine import test file
* Fixed the eventual consistency of the read after delete for SFN functions
The API asks you to send lower case values, but returns uppercase ones.
Here we lowercase the returned API values.
There is no migration here because the field in question is nested in a
set, so the hash will change regardless. Anyone using this feature now
has it broken anyway.
Fixes 2 acceptance tests for the `aws_instance` data source
```
$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSInstanceDataSource_SecurityGroups'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/01/31 12:12:15 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSInstanceDataSource_SecurityGroups -timeout 120m
=== RUN TestAccAWSInstanceDataSource_SecurityGroups
--- PASS: TestAccAWSInstanceDataSource_SecurityGroups (119.14s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 119.172s
```
```
$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSInstanceDataSource_tags'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/01/31 12:15:42 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSInstanceDataSource_tags -timeout 120m
=== RUN TestAccAWSInstanceDataSource_tags
--- PASS: TestAccAWSInstanceDataSource_tags (118.87s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 118.900s
```
The existing hash function for set items cannot generate consistent hashes when using both `Optional` and `Computed` on a schema field.
I tried to add this use case to the existing code base, but came to the conclusion this would be quite an endeavor.
That, together with the fact that this is the only field in all the sets used in the builtin providers/resources that uses both options at the same time, made me decide to change this single resource instead.
When switching from one Rancher server to another, we want Terraform
to recreate Rancher resources. This currently leads to ugly `EOF` errors.
This patch resets resource Ids when they can't be found in the Rancher API.
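The usual helper/schema pattern for this is to clear the resource ID during read when the API no longer knows the object, so the next plan recreates it; a minimal sketch under that assumption:
```
package rancher

import "github.com/hashicorp/terraform/helper/schema"

// clearIfGone removes the resource from state when the Rancher API reports it
// missing (for example after switching servers), so Terraform plans a clean
// re-creation instead of failing with an EOF error.
func clearIfGone(d *schema.ResourceData, found bool) {
	if !found {
		d.SetId("")
	}
}
```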
* Image and vhdcontainers are mutually exclusive.
* Fix ip configuration handling and update support for load balancer backend pools.
* Fix os disk handling.
* Remove os_type from disk hash.
* Load balancer pools should not be computed.
* Add support for the overprovision property.
* Update documentation.
* Create acceptance test for scale set lb changes.
* Create acceptance test for scale set overprovisioning.
* OS-131 Updated dependencies to use ukcloud/govcloudair instead of hmrc/vmware-govcd
* OS-131 Fixed failing tests by adding package name to imports of ukcloud/govcloudair
* OS-131 Minor change to force Travis to re-build the PR
* added server_side_encryption to s3_bucket_object resource including associated acceptance test and documentation.
* got acceptance tests passing.
* made server_side_encryption a computed attribute and only set kms_key_id attribute if an S3 non-default master key is in use.
* ensured kms api is only interrogated if required.
Fixes `aws_rds_cluster_parameter_group` acceptance tests, which have been broken since aa8c2ac587
```
$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSDBClusterParameterGroupOnly'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/01/30 16:20:38 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSDBClusterParameterGroupOnly -timeout 120m
=== RUN TestAccAWSDBClusterParameterGroupOnly
--- PASS: TestAccAWSDBClusterParameterGroupOnly (15.26s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 15.282s
```
```
$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSDBClusterParameterGroup_basic'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/01/30 16:22:48 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSDBClusterParameterGroup_basic -timeout 120m
=== RUN TestAccAWSDBClusterParameterGroup_basic
--- PASS: TestAccAWSDBClusterParameterGroup_basic (29.48s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 29.510s
```
Fixes `aws_cloudwatch_log_subscription_filter` acceptance tests that had been failing since mid December
```
$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSCloudwatchLogSubscriptionFilter_basic'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/01/30 16:00:05 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSCloudwatchLogSubscriptionFilter_basic -timeout 120m
=== RUN TestAccAWSCloudwatchLogSubscriptionFilter_basic
--- PASS: TestAccAWSCloudwatchLogSubscriptionFilter_basic (26.34s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 26.364s
```
* Creates papertrail logging resource for fastly
* Adds modification support for fastly papertrail
* Flattens and lists papertrail resources
* Adds testing for fastly papertrail
* Adds papertrail documentation for fastly to the website
* Fixes schema assignment name mistake
* Changes testing hostnames to pass fastly API validation
Implementing vpc_peering_connection_accept.
Additions from @ewbankkit:
Rename 'aws_vpc_peering_connection_accept' to 'aws_vpc_peering_connection_accepter'.
Get it working reusing functionality from 'aws_vpc_peering_connection' resource.
* Add a new data provider to decrypt AWS KMS secrets
* Address feedback
* Rename aws_kms_secrets to aws_kms_secret
* Add more examples to the documentation
Fixes: #11461
This will allow the user to pass a policy to further restrict the use
of AssumeRole. It is important to note that it will NOT allow an
expansion of access rights
According to https://github.com/hashicorp/errwrap
'{{err}}' has to be used instead of '%s'
Without this patch, error output from terraform is missing important information:
* aws_cloudwatch_log_group.logs: Error Getting CloudWatch Logs Tag List: %s
With this patch, I get the important information. E.g.:
* aws_cloudwatch_log_group.logs: Error Getting CloudWatch Logs Tag List: AccessDeniedException: User: arn:aws:sts::XYZ:assumed-role/AAA-BBB-CCC/terraform-assuming-role-assume-role-ReadOnly is not authorized to perform: logs:ListTagsLogGroup on resource: arn:aws:logs:us-east-1:XYZ:log-group:logs:log-stream:
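For illustration, the difference between the two placeholders with `errwrap.Wrapf` (the error text here is shortened, not the real API response):
```
package main

import (
	"errors"
	"fmt"

	"github.com/hashicorp/errwrap"
)

func main() {
	apiErr := errors.New("AccessDeniedException: not authorized to perform logs:ListTagsLogGroup")

	// Wrong: errwrap does not interpret fmt verbs, so "%s" is kept literally.
	bad := errwrap.Wrapf("Error Getting CloudWatch Logs Tag List: %s", apiErr)

	// Right: "{{err}}" is replaced with the wrapped error's message.
	good := errwrap.Wrapf("Error Getting CloudWatch Logs Tag List: {{err}}", apiErr)

	fmt.Println(bad.Error())  // ... Tag List: %s
	fmt.Println(good.Error()) // ... Tag List: AccessDeniedException: ...
}
```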
The support for "use_client_subnet" was half finished.
- Field was defined in schema.
- ResourceData-to-struct code was present but incorrect.
- struct-to-ResourceData code was missing.
Made the change and verified with manual testing:
1. In NS1 UI, switched "Use Client Subnet" between checked and
unchecked.
2. In Terraform config file, switched "use_client_subnet" field between
"true", "false", and omitted.
3. The output of "terraform plan" was as expected in all six cases.
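The missing struct-to-ResourceData half mentioned above is essentially a `d.Set` call during read; a sketch under the assumption that the value arrives from the NS1 API as an optional boolean (the pointer and function name are purely illustrative):
```
package ns1

import "github.com/hashicorp/terraform/helper/schema"

// flattenUseClientSubnet writes the value read from the NS1 API back into
// state; without this, the value never reaches state and plans keep showing
// a change for the field.
func flattenUseClientSubnet(d *schema.ResourceData, useClientSubnet *bool) error {
	if useClientSubnet == nil {
		return nil
	}
	return d.Set("use_client_subnet", *useClientSubnet)
}
```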
This commit removes the default security group rules that are automatically
created when a security group is created. These rules are usually
permissive egress rules, which makes it difficult to add stricter egress
security group rules.