* Vendor google.golang.org/api/cloudbilling/v1
* providers/google: Add cloudbilling client
* providers/google: google_project supports billing account
This change allows a Terraform user to set and update the billing
account associated with their project (a configuration sketch follows these commits).
* providers/google: Testing project billing account
This change adds optional acceptance tests for project billing accounts.
GOOGLE_PROJECT_BILLING_ACCOUNT and GOOGLE_PROJECT_BILLING_ACCOUNT_2
must be set in the environment for the tests to run; otherwise, they
will be skipped.
Also includes a few code cleanups per review.
* providers/google: Improve project billing error message
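As a rough illustration of the billing account support described above, a minimal configuration might look like the sketch below; the `billing_account` argument name and the other fields are assumptions, not confirmed by these commits.

```hcl
# Sketch only: assumes billing is attached via a `billing_account` argument
# on google_project; project_id/name/org_id values are placeholders.
resource "google_project" "example" {
  project_id      = "my-example-project"
  name            = "My Example Project"
  org_id          = "1234567"
  billing_account = "012345-6789AB-CDEF01"
}
```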
* vendor: Updating Gophercloud
* provider/openstack: Image Data Source
This commit adds the openstack_images_image_v2 data source which
is able to query the Image Service v2 API for a specific image.
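A minimal sketch of how this data source might be used to look up an image and feed it to an instance; the `name` and `most_recent` arguments, and the compute instance wiring, are assumptions for illustration.

```hcl
# Sketch only: argument names below are assumed, not taken from this change.
data "openstack_images_image_v2" "ubuntu" {
  name        = "Ubuntu 16.04"
  most_recent = true
}

resource "openstack_compute_instance_v2" "example" {
  name        = "example"
  flavor_name = "m1.small"
  image_id    = "${data.openstack_images_image_v2.ubuntu.id}"
}
```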
* provider/datadog: Pulls v2 and removes v1 of library go-datadog-api.
See https://github.com/zorkian/go-datadog-api/issues/56 for context.
* Fixes bug in backoff implementation that decreased performance significantly.
* Uses pointers for field types, making it possible to distinguish between
a value that has been explicitly set and one that is simply the type's
default.
* provider/datadog: Convert provider to use v2 of go-datadog-api.
* provider/datadog: Update vendored library.
* provider/datadog: Update dashboard resource to reflect API updates.
This commit adds the ability to log all requests and responses
between Terraform and the OpenStack cloud. To enable, set the
OS_DEBUG environment variable to 1.
This commit adds a check to prevent a user from specifying both
a floating IP and a port on a specific network. While this
configuration is currently allowed, the port will be chosen and
applying the configuration again will show a state mismatch. This
check attempts to prevent such a misconfiguration.
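For illustration, the kind of configuration this check is meant to reject might look like the sketch below; the `network` block arguments shown (`port`, `floating_ip`) are assumptions about the resource schema of the time.

```hcl
# Sketch only: both a port and a floating IP are given for the same network,
# which the new check treats as a misconfiguration. Field names are assumed.
resource "openstack_compute_instance_v2" "example" {
  name        = "example"
  flavor_name = "m1.small"

  network {
    port        = "${openstack_networking_port_v2.example.id}"
    floating_ip = "${openstack_networking_floatingip_v2.example.address}"
  }
}
```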
This commit has a few more fixes to the recently added
openstack_images_image_v2 resource:
* tags were changed to a Set because the OpenStack Image API does
not seem to respect ordering.
* The visibility argument was fixed.
* Acceptance tests for all updatable fields have been implemented.
* Documentation updates, including a new entry in the sidebar.
* provider/google-cloud: Add maintenance window
Allows specification of the `maintenance_window` within the `settings`
block. This controls when Google will restart a database in order to
apply updates. It is also possible to select an `update_track` to
control the relative timing of updates across instances in the same project.
* Adjustments as suggested in code review.
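A minimal sketch of what this might look like in configuration; the `day`, `hour`, and `update_track` fields inside `maintenance_window` are assumptions based on the description above.

```hcl
# Sketch only: maintenance_window sub-fields are assumed, not confirmed here.
resource "google_sql_database_instance" "example" {
  name             = "example-instance"
  region           = "us-central1"
  database_version = "MYSQL_5_7"

  settings {
    tier = "db-f1-micro"

    maintenance_window {
      day          = 7        # day of week on which restarts may happen
      hour         = 3        # hour of day (UTC)
      update_track = "stable" # relative update timing within the project
    }
  }
}
```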
* Added new resource aws_elastic_beanstalk_application_version.
* Changing `bucket` and `key` to required.
* Update to use `d.Id()` directly in `DescribeApplicationVersions`.
* Checking `err` to make sure that the application version is successfully deleted.
* Update `version_label` to `Computed: true`.
* provider/aws: Updating to python solution stack
* provider/aws: Beanstalk App Version delete source
The Elastic Beanstalk API call to delete the `application_version` resource
should not delete the S3 bundle, as that object is managed by another
Terraform resource.
* provider/aws: Update application version docs
* Fix application version test
* Add `version_label` update test
Adds a test that fails after rebasing the branch onto v0.8.x: `version_label`
changes do not update the `aws_elastic_beanstalk_environment` resource.
* `version_label` changes to update environment
* Prevent unintended delete of `application_version`
Prevents an `application_version` used by multiple environments from
being deleted.
* Add `force_delete` attribute
* Update documentation
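Pulling the application version changes above together, a configuration sketch might look like the following; apart from `bucket`, `key`, `version_label`, and `force_delete`, which are named in these commits, the field names and values are illustrative assumptions.

```hcl
# Sketch only: ties an application version built from an S3 bundle to an
# environment via version_label; force_delete is assumed to allow deletion
# even when the version is still referenced by an environment.
resource "aws_elastic_beanstalk_application_version" "example" {
  name         = "example-v1"
  application  = "example-app"
  bucket       = "${aws_s3_bucket.bundles.id}"
  key          = "${aws_s3_bucket_object.bundle.id}"
  force_delete = true
}

resource "aws_elastic_beanstalk_environment" "example" {
  name          = "example-env"
  application   = "example-app"
  version_label = "${aws_elastic_beanstalk_application_version.example.name}"
  # a solution_stack_name (e.g. a Python stack, per the commit above) would also be required
}
```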
* [datadog] Update go-datadog-api library
Involves one breaking API change. Also some `gofmt`ing.
* [datadog] Add support for new_host_delay to the datadog_monitor resource
A new API parameter that Datadog added, allowing monitors to ignore new hosts
for the specified time period during monitor evaluation.
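A minimal sketch of the new argument in use; the monitor query and threshold are illustrative, and treating `new_host_delay` as a value in seconds is an assumption.

```hcl
# Sketch only: assumes new_host_delay is specified in seconds.
resource "datadog_monitor" "cpu" {
  name    = "High CPU on host"
  type    = "metric alert"
  message = "CPU usage is high."
  query   = "avg(last_5m):avg:system.cpu.user{*} by {host} > 90"

  # Ignore newly provisioned hosts for this period during monitor evaluation.
  new_host_delay = 600
}
```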
Our delete operation for google_compute_project_metadata didn't check the
error returned by the call to delete metadata, which led to a panic in
our tests. This is probably also why our tests failed and metadata got left
dangling.
Fixes the `TestAccAWSAutoscalingLifecycleHook_omitDefaultResult` acceptance test so it can run in parallel.
```
$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSAutoscalingLifecycleHook_omitDefaultResult'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/15 22:33:26 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSAutoscalingLifecycleHook_omitDefaultResult -timeout 120m
=== RUN TestAccAWSAutoscalingLifecycleHook_omitDefaultResult
--- PASS: TestAccAWSAutoscalingLifecycleHook_omitDefaultResult (146.91s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 146.917s
```
Previously we only validated that the Cloudflare record provided was a valid record type. However, a record can be of a valid type and still not support being proxied, which makes the configuration invalid when `proxied` is set.
The main downside of checking whether the record type can be proxied during validation is that it relies on two schema keys being populated. This means we can only catch the improper record type at `apply` time, instead of at `plan` time.
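For illustration, a configuration the stricter validation should now reject might look like the sketch below; the field names follow the 2017-era cloudflare_record schema, and the choice of MX as a non-proxiable type is an assumption.

```hcl
# Sketch only: a valid record type that is assumed not to support proxying.
resource "cloudflare_record" "mail" {
  domain  = "example.com"
  name    = "example.com"
  type    = "MX"                 # valid record type...
  value   = "mail.example.com"
  proxied = true                 # ...but assumed invalid to proxy, so this should now error
}
```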
```
$ go test -v -run "TestValidateRecordType" ./builtin/providers/cloudflare
=== RUN TestValidateRecordType
--- PASS: TestValidateRecordType (0.00s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/cloudflare 0.004s
```
Mark the plain-text password as sensitive.
It was pointed out at the HUG in London tonight that we save the plain-text
password in state.
I don't think this will be ported back to 0-8-stable.
This allows for updates to `size`, `type`, and `iops` on an `aws_ebs_volume`.
Fixes: #11931
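A sketch of a volume whose mutable attributes this change targets; the values are placeholders.

```hcl
# Sketch only: size, type, and iops can now be updated on an existing volume.
resource "aws_ebs_volume" "data" {
  availability_zone = "us-west-2a"
  type              = "io1"
  size              = 100
  iops              = 1000
}
```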
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSEBSVolume_update'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/15 22:35:43 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSEBSVolume_update -timeout 120m
=== RUN TestAccAWSEBSVolume_updateSize
--- PASS: TestAccAWSEBSVolume_updateSize (53.57s)
=== RUN TestAccAWSEBSVolume_updateType
--- PASS: TestAccAWSEBSVolume_updateType (57.53s)
=== RUN TestAccAWSEBSVolume_updateIops
--- PASS: TestAccAWSEBSVolume_updateIops (53.63s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 164.753s
```
This extends the work in #11668 to enable final snapshots by default. This
time it is for Redshift.
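As a rough sketch of what this implies for configuration (argument names are assumptions, mirroring the RDS work referenced above): a cluster now either provides a final snapshot identifier or explicitly opts out.

```hcl
# Sketch only: with final snapshots on by default, either set an identifier
# or opt out with skip_final_snapshot = true before destroying the cluster.
resource "aws_redshift_cluster" "example" {
  cluster_identifier        = "example-cluster"
  node_type                 = "dc1.large"
  database_name             = "analytics"
  master_username           = "exampleuser"
  master_password           = "Example-Password1"   # placeholder only
  final_snapshot_identifier = "example-cluster-final"
  skip_final_snapshot       = false
}
```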
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSRedshiftCluster_withFinalSnapshot'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/04 13:53:02 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSRedshiftCluster_withFinalSnapshot -timeout 120m
=== RUN TestAccAWSRedshiftCluster_withFinalSnapshot
--- PASS: TestAccAWSRedshiftCluster_withFinalSnapshot (859.96s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 859.986s
```
If we get an `InvalidParameterException` with the message "Could not deliver test
message to specified", then retry, as this is often down to some sort of internal
delay in Amazon's API. Also increase the timeout from 30 seconds to 3 minutes, as
creation has sometimes been observed to take that long to succeed.
This applies to both log destinations and subscription filters.
* Adds response conditions for papertrail in fastly
* Adds cache conditional for gzip in fastly
* Opens up conditionals under fastly headers
* Adds request conditions to s3 logging for fastly
* Creates conditionals properly for testing
* Clarifies conditionals documentation for the website
* Clarifies resource descriptions for conditionals
* Formats papertrail testing properly
* Fixes syntax issues in gzip and s3 fastly testing
* Tests full schemas for gzip basic testing
* Updates header testing to check full schema
* Fixes gzip and headers testing
* Fixes s3 conditional testing
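A configuration sketch tying the pieces in this list together; the nested block and argument names (`condition`, `cache_condition`, `response_condition`) are assumptions about the Fastly provider schema.

```hcl
# Sketch only: conditions defined once and referenced from gzip and papertrail.
resource "fastly_service_v1" "example" {
  name = "example-service"

  domain {
    name = "example.com"
  }

  backend {
    name    = "origin"
    address = "origin.example.com"
  }

  condition {
    name      = "static-assets"
    type      = "CACHE"
    statement = "req.url ~ \"^/assets/\""
  }

  condition {
    name      = "errors-only"
    type      = "RESPONSE"
    statement = "resp.status >= 500"
  }

  gzip {
    name            = "gzip-assets"
    cache_condition = "static-assets"   # cache conditional for gzip
  }

  papertrail {
    name               = "papertrail"
    address            = "logs.papertrailapp.com"
    port               = 12345
    response_condition = "errors-only"  # response condition for papertrail
  }
}
```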
* vendor: Updating Gophercloud
* provider/openstack: Fix upstream AvailabilityZone field
This commit complements an upstream fix where "availability" was being sent instead of
"availability_zone".
We now enable the final snapshot of `aws_rds_cluster` by default. This is
a continuation of the work in #11668.
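A minimal sketch under the same assumptions as the Redshift change above:

```hcl
# Sketch only: either name a final snapshot or opt out explicitly.
resource "aws_rds_cluster" "example" {
  cluster_identifier        = "example-aurora"
  master_username           = "exampleuser"
  master_password           = "Example-Password1"   # placeholder only
  final_snapshot_identifier = "example-aurora-final"
  skip_final_snapshot       = false
}
```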
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSRDSCluster_takeFinalSnapshot'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/04 13:19:52 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSRDSCluster_takeFinalSnapshot -timeout 120m
=== RUN TestAccAWSRDSCluster_takeFinalSnapshot
--- PASS: TestAccAWSRDSCluster_takeFinalSnapshot (141.59s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 141.609s
```
Validate the policy supplied via `assume_role_policy` in an `aws_iam_role`.
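A sketch of the argument now being validated; the policy document is a standard illustrative example, and malformed JSON here is assumed to fail before reaching the API.

```hcl
# Sketch only: assume_role_policy must now contain valid IAM policy JSON.
resource "aws_iam_role" "example" {
  name = "example-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}
```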
```
$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSRole_badJSON'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/13 14:13:47 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSRole_badJSON -timeout 120m
=== RUN TestAccAWSRole_badJSON
--- PASS: TestAccAWSRole_badJSON (0.00s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 0.019s
```