* provider/google-cloud: Add maintenance window
Allows specification of the `maintenance_window` within the `settings`
block. This controls when Google will restart a database in order to
apply updates. It is also possible to select an `update_track` to
control the relative timing of updates between instances in the same
project (example below).
* Adjustments as suggested in code review.
* Added new resource `aws_elastic_beanstalk_application_version` (example below).
* Changing `bucket` and `key` to required.
* Update to use `d.Id()` directly in `DescribeApplicationVersions`.
* Checking `err` to make sure that the application version is successfully deleted.
* Update `version_label` to `Computed: true`.
* provider/aws: Updating to Python solution stack
* provider/aws: Beanstalk App Version delete source
The Elastic Beanstalk API call that deletes an `application_version`
resource should not delete the S3 bundle, since that object is managed
by a separate Terraform resource.
* provider/aws: Update application version docs
* Fix application version test
* Add `version_label` update test
Adds a test that fails after rebasing the branch onto v0.8.x:
`version_label` changes do not update the `aws_elastic_beanstalk_environment` resource.
* Allow `version_label` changes to update the environment
* Prevent unintended deletion of `application_version`
Prevents an `application_version` used by multiple environments from
being deleted.
* Add `force_delete` attribute
* Update documentation
* [datadog] Update go-datadog-api library
Involves one breaking API change. Also some `gofmt`ing.
* [datadog] Add support for `new_host_delay` to the `datadog_monitor` resource
Datadog added a new API parameter that tells monitors to ignore new
hosts for a specified time period during evaluation (example below).
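
For reference, a minimal sketch of the resulting Cloud SQL configuration; the instance name, tier, and window values are illustrative rather than taken from the change itself:

```
resource "google_sql_database_instance" "master" {
  name   = "master-instance"   # illustrative name
  region = "us-central1"

  settings {
    tier = "db-f1-micro"

    maintenance_window {
      day          = 7         # day of week (1-7), starting on Monday
      hour         = 3         # hour of day (0-23)
      update_track = "stable"  # "canary" instances receive updates before "stable" ones
    }
  }
}
```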
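
Similarly, a sketch of the new Elastic Beanstalk resource wired to an environment; all names, the S3 bundle, and the solution stack string are illustrative assumptions:

```
resource "aws_elastic_beanstalk_application" "default" {
  name = "tf-test-app"
}

resource "aws_s3_bucket" "default" {
  bucket = "tf-test-app-versions"
}

resource "aws_s3_bucket_object" "default" {
  bucket = "${aws_s3_bucket.default.id}"
  key    = "app-v1.zip"
  source = "app-v1.zip"
}

resource "aws_elastic_beanstalk_application_version" "default" {
  name        = "tf-test-version-label"
  application = "${aws_elastic_beanstalk_application.default.name}"
  bucket      = "${aws_s3_bucket.default.id}"        # required
  key         = "${aws_s3_bucket_object.default.id}" # required

  # Deleting this resource removes the version from Beanstalk but leaves
  # the S3 bundle intact. force_delete permits deletion even while the
  # version is still in use by an environment.
  force_delete = true
}

resource "aws_elastic_beanstalk_environment" "default" {
  name                = "tf-test-env"
  application         = "${aws_elastic_beanstalk_application.default.name}"
  version_label       = "${aws_elastic_beanstalk_application_version.default.name}"
  solution_stack_name = "64bit Amazon Linux running Python"
}
```

With these changes, pointing the environment's `version_label` at a different `aws_elastic_beanstalk_application_version` triggers an environment update rather than being ignored.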
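
And a sketch of `new_host_delay` on a monitor; the query, message, and delay value are made up for illustration:

```
resource "datadog_monitor" "cpu" {
  name    = "High CPU on {{host.name}}"
  type    = "metric alert"
  message = "CPU is high. Notify: @pagerduty"
  query   = "avg(last_5m):avg:system.cpu.user{*} by {host} > 90"

  # Seconds to wait before evaluating a newly-provisioned host
  # (assumed value; the parameter defers evaluation for new hosts).
  new_host_delay = 600
}
```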
Our delete operation for `google_compute_project_metadata` didn't check
the error returned by the call that deletes metadata, which led to a
panic in our tests. This also likely explains why our tests failed and
metadata was left dangling.
Fixes the `TestAccAWSAutoscalingLifecycleHook_omitDefaultResult` acceptance test so that it runs in parallel.
```
$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSAutoscalingLifecycleHook_omitDefaultResult'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/15 22:33:26 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSAutoscalingLifecycleHook_omitDefaultResult -timeout 120m
=== RUN TestAccAWSAutoscalingLifecycleHook_omitDefaultResult
--- PASS: TestAccAWSAutoscalingLifecycleHook_omitDefaultResult (146.91s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 146.917s
```
Previously we only validated that the supplied Cloudflare record was of a valid record type. However, a record can be of a valid type and still not be proxiable, which makes it an invalid type when `proxied` is enabled.
The main downside of having to check during validation whether the record type can be proxied is that it relies on two schema keys being populated. This means we can only catch the improper record type at `apply` time instead of `plan` time.
```
$ go test -v -run "TestValidateRecordType" ./builtin/providers/cloudflare
=== RUN TestValidateRecordType
--- PASS: TestValidateRecordType (0.00s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/cloudflare 0.004s
```