According to the documentation, `location` is a required argument for
`azurerm_lb_nat_pool`. However, configuring an `azurerm_lb_nat_pool`
with a `location` argument leads to the following warning:
```
There are warnings and/or errors related to your configuration. Please
fix these before continuing.
Warnings:
* azurerm_lb_nat_pool.test: "location": [DEPRECATED] location is no longer used
No errors found. Continuing with 1 warning(s).
```
Disclaimer: Terraform v0.8.8, haven't moved to 0.9.x yet :/
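A minimal sketch of a NAT pool without the deprecated argument (attribute names assumed from the azurerm docs of this era; the resource group and load balancer are assumed to exist elsewhere in the config):
```
resource "azurerm_lb_nat_pool" "test" {
  # No "location" argument: the pool effectively inherits the load balancer's location.
  name                           = "ssh-pool"
  resource_group_name            = "${azurerm_resource_group.test.name}"
  loadbalancer_id                = "${azurerm_lb.test.id}"
  protocol                       = "Tcp"
  frontend_port_start            = 50000
  frontend_port_end              = 50119
  backend_port                   = 22
  frontend_ip_configuration_name = "PublicIPAddress"
}
```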
* provider/openstack: Add all_fixed_ips computed attribute to port resource
This commit adds the `all_fixed_ips` attribute to the
openstack_networking_port_v2 resource. This contains all of the port's
Fixed IPs returned by the Networking v2 API.
* provider/openstack: Revert Port fixed_ip back to a List
This commit reverts the changes made in a8c4e81a6e3f2. This
re-enables the ability to reference IP addresses using the
numerical-element notation.
This commit also makes fixed_ip a non-computed field, meaning
Terraform will no longer set fixed_ip with what was returned
by the API. This resolves the original issue about ordering.
The last use-case is for fixed_ips that received an IP address
via DHCP. Because fixed_ip is no longer computed, the DHCP IP
will not be set. The workaround for this use-case is to use the
new all_fixed_ips attribute.
In effect, users should use fixed_ip only as a way of inputting
data into Terraform and use all_fixed_ips as the output returned
by the API. If use-cases exist where fixed_ip can be used as an
output, that's a bonus feature.
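A hedged sketch of that input/output split (names and addresses are placeholders; the referenced network and subnet are assumed to exist elsewhere in the config):
```
resource "openstack_networking_port_v2" "port_1" {
  name       = "port_1"
  network_id = "${openstack_networking_network_v2.network_1.id}"

  # Input only: request a specific address on a subnet.
  fixed_ip {
    subnet_id  = "${openstack_networking_subnet_v2.subnet_1.id}"
    ip_address = "192.168.1.10"
  }
}

# Output: whatever the Networking v2 API reports, including DHCP-assigned addresses.
output "port_ips" {
  value = "${openstack_networking_port_v2.port_1.all_fixed_ips}"
}
```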
This commit adds the all_metadata attribute which contains all
instance metadata returned from the OpenStack Compute API. This
is useful for gaining access to metadata which was applied
out of band of Terraform.
This commit also stops setting the original metadata attribute,
effectively making metadata an input and all_metadata the output
to reference all data from.
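A hedged sketch of the same input/output pattern for metadata (image and flavor names are placeholders):
```
resource "openstack_compute_instance_v2" "app" {
  name        = "app-1"
  image_name  = "ubuntu-16.04"
  flavor_name = "m1.small"

  # Input only: metadata managed by Terraform.
  metadata = {
    managed_by = "terraform"
  }
}

# Output: includes metadata applied out of band of Terraform as well.
output "instance_metadata" {
  value = "${openstack_compute_instance_v2.app.all_metadata}"
}
```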
* Adding import to resource_aws_iam_server_certificate.
* provider/aws: Update tests for import of aws_iam_server_certificate
Builds upon the work of @mrcopper in #12940
Resource:
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSIAMServerCertificate_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/03/25 00:08:48 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSIAMServerCertificate_ -timeout 120m
=== RUN TestAccAWSIAMServerCertificate_importBasic
--- PASS: TestAccAWSIAMServerCertificate_importBasic (22.81s)
=== RUN TestAccAWSIAMServerCertificate_basic
--- PASS: TestAccAWSIAMServerCertificate_basic (19.68s)
=== RUN TestAccAWSIAMServerCertificate_name_prefix
--- PASS: TestAccAWSIAMServerCertificate_name_prefix (19.88s)
=== RUN TestAccAWSIAMServerCertificate_disappears
--- PASS: TestAccAWSIAMServerCertificate_disappears (13.94s)
=== RUN TestAccAWSIAMServerCertificate_file
--- PASS: TestAccAWSIAMServerCertificate_file (32.67s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 109.062s
```
Data Source:
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSDataSourceIAMServerCertificate_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/03/25 13:07:10 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSDataSourceIAMServerCertificate_ -timeout 120m
=== RUN TestAccAWSDataSourceIAMServerCertificate_basic
--- PASS: TestAccAWSDataSourceIAMServerCertificate_basic (43.86s)
=== RUN TestAccAWSDataSourceIAMServerCertificate_matchNamePrefix
--- PASS: TestAccAWSDataSourceIAMServerCertificate_matchNamePrefix (2.68s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 46.569s
```
Updates `aws_caller_identity` data source to actually include the correct attributes from the `GetCallerIdentity` API function.
```
$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSCallerIdentity_basic'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/03/27 09:26:13 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSCallerIdentity_basic -timeout 120m
=== RUN TestAccAWSCallerIdentity_basic
--- PASS: TestAccAWSCallerIdentity_basic (12.74s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 12.767s
```
This commit deprecates the floating_ip attributes from the
openstack_compute_instance_v2 resource. It is recommended to use
either the openstack_compute_floatingip_associate resource or
configure an openstack_networking_port_v2 resource with a floating
IP.
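A minimal sketch of the recommended replacement (pool, image, and flavor names are placeholders):
```
resource "openstack_networking_floatingip_v2" "fip_1" {
  pool = "public"
}

resource "openstack_compute_instance_v2" "app" {
  name        = "app-1"
  image_name  = "ubuntu-16.04"
  flavor_name = "m1.small"
}

resource "openstack_compute_floatingip_associate_v2" "fip_1" {
  floating_ip = "${openstack_networking_floatingip_v2.fip_1.address}"
  instance_id = "${openstack_compute_instance_v2.app.id}"
}
```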
This commit deprecates the volume attribute in the
openstack_compute_instance_v2 resource. It's recommended to either
use the block_device attribute or the openstack_compute_volume_attach_v2
resource.
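And a minimal sketch of the standalone attach resource (it references an existing instance, `openstack_compute_instance_v2.app` here):
```
resource "openstack_blockstorage_volume_v2" "data" {
  name = "data"
  size = 10
}

resource "openstack_compute_volume_attach_v2" "attach" {
  instance_id = "${openstack_compute_instance_v2.app.id}"
  volume_id   = "${openstack_blockstorage_volume_v2.data.id}"
}
```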
* Add the `consul` check type to the `circonus_check` resource.
* Fix a tab-complete fail.
`Parse` != `Path`, but lexically close.
* Dept of 2nd thoughts: `s/service_name/service/g`
* provider/github: add repository_webhook resource
`repository_webhook` can be used to manage webhooks for repositories.
It is currently limited to organization repositories only.
The changeset includes both documentation and tests.
The tests are green:
```
make testacc TEST=./builtin/providers/github
TESTARGS='-run=TestAccGithubRepositoryWebhook_basic'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/03/21 16:20:07 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/github -v
-run=TestAccGithubRepositoryWebhook_basic -timeout 120m
=== RUN TestAccGithubRepositoryWebhook_basic
--- PASS: TestAccGithubRepositoryWebhook_basic (5.10s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/github 5.113s
```
* provider/github: add github_organization_webhook
the `github_organization_webhook` resource is similar to the
`github_repository_webhook` resource, but it manages webhooks for an
organization.
the tests are green:
```
make testacc TEST=./builtin/providers/github
TESTARGS='-run=TestAccGithubOrganizationWebhook'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/03/21 17:23:33 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/github -v
-run=TestAccGithubOrganizationWebhook -timeout 120m
=== RUN TestAccGithubOrganizationWebhook_basic
--- PASS: TestAccGithubOrganizationWebhook_basic (2.09s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/github 2.109s
```
Terraform will automatically search for AWS API credentials or Instance Profile Credentials. I wish I'd known that when I first read these docs.
Saving credentials outside of tf config files is a much better plan for situations where config files end up in source control and/or where multiple people collaborate. Making this information available early will allow new users to set up a much more secure and robust plan for deploying Terraform at scale and in production environments.
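For example (a hedged sketch), a provider block with no hard-coded keys — credentials are picked up from the environment, the shared credentials file, or an instance profile:
```
# No access_key/secret_key here: Terraform falls back to AWS_ACCESS_KEY_ID /
# AWS_SECRET_ACCESS_KEY, ~/.aws/credentials, or EC2 Instance Profile credentials.
provider "aws" {
  region = "us-east-1"
}
```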
Adds support for `name_prefix` to the `aws_autoscaling_group` and `aws_elb` resources. Unfortunately when using `name_prefix` with `aws_elb`, this means that the specified prefix can only be a maximum of 6 characters in length. This is because the maximum length for an ELB name is 32 characters, and `resource.PrefixedUniqueId` generates a 26-character unique identifier. I was considering truncating the unique identifier to allow for a longer `name_prefix`, but I worried that doing so would increase the risk of collisions.
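A hedged sketch of both prefixes (the launch configuration is assumed to exist elsewhere; names and zones are placeholders):
```
resource "aws_autoscaling_group" "web" {
  name_prefix          = "web-asg-"
  min_size             = 1
  max_size             = 2
  availability_zones   = ["us-east-1a"]
  launch_configuration = "${aws_launch_configuration.web.name}"
}

resource "aws_elb" "web" {
  # ELB names are capped at 32 characters and the generated suffix uses 26,
  # so this prefix can be at most 6 characters.
  name_prefix        = "web-"
  availability_zones = ["us-east-1a"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }
}
```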
* Allow priority attribute of dnsimple_record to be set
Some DNS record types (like MX) allow a priority to specified, and the
ability to do so is important in many environments.
This diff will change dnsimple_record.priority from computed to
optional, allowing it to be used in terraform configs like so:
resource "dnsimple_record" "mx1" {
domain = "example.com"
name = ""
value = "mx1.example.com"
type = "MX"
priority = "1"
}
resource "dnsimple_record" "mx2" {
domain = "example.com"
name = ""
value = "mx2.example.com"
type = "MX"
priority = "2"
}
* mention new priority attribute of dnsimple_record
* add acceptance specs for creating/updating MX records at dnsimple
* Vendor update `github.com/circonus-labs/circonus-gometrics`
* Add the `statsd` check type to the `circonus_check` resource.
* Noop change: Alpha-sort members of maps, variables, and docs.
This augments backend-config to also accept key=value pairs.
This should make Terraform easier to script rather than having to
generate a JSON file.
You must still specify the backend type as a minimal configuration
block, for example:
```
terraform { backend "consul" {} }
```
This is required because Terraform needs to be able to detect the
_absence_ of that value for unsetting, if that is necessary at some
point.
* fixed broken link
* update one more link
* explicitly define map and change ami to trusty
* remove map definition
* added note about default storage type for aws_db_instance
* added note about default storage type for aws_db_instance
* revert changes to conform with master
It looks like the copy_options value was fat fingered from the
compression_format parameter - I don't believe that GZIP is a valid value for
copy_options, at least based on the documentation.
Adds a link to the documentation and adds a more realistic (and harmless) value
for the copy_options parameter.
This makes it much more directly obvious what `aws_key_pair` does by saying the user *does* provide the key-pair of some kind and that all `aws_key_pair` does is register that public key with an optional name in AWS.
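A minimal sketch of that relationship — the public key already exists locally and is only registered under a name (the key material here is a placeholder):
```
resource "aws_key_pair" "deployer" {
  key_name = "deployer-key"

  # An existing public key supplied by the user; AWS only stores it.
  public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQplaceholderkey user@example.com"
}
```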
aws_instance documentation is currently available on the site; however, a link is not provided via the navigation under the data sources section. This adds that link.
* Revert "datadog: Fix incorrect schema of monitor for 'silenced' (#12720)"
This reverts commit 8730bf125f.
* Revert "schema: Allow *Resource as Elem of TypeMap in validation (#12722)"
This reverts commit 1df1c21d5b.
* Revert "provider/alicloud: change create ecs postpaid instance API (#12226)"
This reverts commit ffc5a06cb5.
This adds a "lock" config (default true) to allow users to optionally
disable state locking with Consul. This is necessary if the token given
doesn't have session permission and is necessary for backwards
compatibility.
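A hedged sketch of disabling locking (address and path are placeholders):
```
terraform {
  backend "consul" {
    address = "consul.example.com:8500"
    path    = "terraform/state"

    # Disable state locking when the token lacks session permissions.
    lock = false
  }
}
```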
According to the validator, the only acceptable values are 'cpu' and
'memory' (both lower case).
See builtin/providers/aws/validators.go, lines 722-725, at commit 73f4acf041.
Updating the docs so that this example works out of the box.
* Add Fastly SSL validation fields
The ssl_hostname field has been deprecated by Fastly. Instead the new
standard is to use the ssl_cert_hostname and ssl_sni_hostname fields:
- ssl_cert_hostname: Used only for certificate verification.
- ssl_sni_hostname: Used only for SNI in the handshake.
Add these fields to the backend block to better support SSL services (a sketch follows below).
* Add deprecation notice for ssl_hostname
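A hedged sketch of the new fields on a backend block (hostnames are placeholders):
```
resource "fastly_service_v1" "app" {
  name = "app-service"

  domain {
    name = "app.example.com"
  }

  backend {
    name    = "origin"
    address = "origin.example.com"
    port    = 443
    use_ssl = true

    # Replaces the deprecated ssl_hostname:
    ssl_cert_hostname = "origin.example.com" # certificate verification only
    ssl_sni_hostname  = "origin.example.com" # SNI during the handshake only
  }

  force_destroy = true
}
```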
English sentence correction. Corrected following:
"map values are merged and all are values are overridden"
to
"map values are merged and all other values are overridden"
* Begin stubbing out the Circonus provider.
* Remove all references to `reverse:secret_key`.
This value is dynamically set by the service and unused by Terraform.
* Update the `circonus_check` resource.
Still a WIP.
* Add docs for the `circonus_check` resource.
Commit miss, this should have been included in the last commit.
* "Fix" serializing check tags
I still need to figure out how I can make them order agnostic w/o using
a TypeSet. I'm worried that's what I'm going to have to do.
* Spike a quick circonus_broker data source.
* Convert tags to a Set so the order does not matter.
* Add a `circonus_account` data source.
* Correctly spell account.
Pointed out by: @postwait
* Add the `circonus_contact_group` resource.
* Push descriptions into their own file in order to reduce the busyness of the schema when reviewing code.
* Rename `circonus_broker` and `broker` to `circonus_collector` and `collector`, respectively.
Change made with consent by Circonus to reduce confusion (@postwait, @maier, and several others).
* Use upstream constants where available.
* Import the latest circonus-gometrics.
* Move to using a Set of collectors vs a list attached to a single attribute.
* Rename "cid" to "id" in the circonus_account data source and elsewhere
where possible.
* Inject a tag automatically. Update gometrics.
* Checkpoint `circonus_metric` resource.
* Enable provider-level auto-tagging. This is disabled by default.
* Rearrange metric. This is an experimental "style" of a provider. We'll see.
That moment. When you think you've gone off the rails on a mad scientist
experiment but like the outcome and think you may be onto something but
haven't proven it to yourself or anyone else yet? That. That exact
feeling of semi-confidence while being alone in the wilderness. Please
let this not be the Terraform provider equivalent of DJB's C style of
coding.
We'll know in another resource or two if this was a horrible mistake or
not.
* Begin moving `resource_circonus_check` over to the new world order/structure:
Much of this is WIP and incomplete, but here is the new supported
structure:
```
variable "used_metric_name" {
  default = "_usage`0`_used"
}

resource "circonus_check" "usage" {
  # collectors = ["${var.collectors}"]
  collector {
    id = "${var.collectors[0]}"
  }

  name  = "${var.check_name}"
  notes = "${var.notes}"

  json {
    url = "https://${var.target}/account/current"

    http_headers = {
      "Accept"                = "application/json"
      "X-Circonus-App-Name"   = "TerraformCheck"
      "X-Circonus-Auth-Token" = "${var.api_token}"
    }
  }

  stream {
    name = "${circonus_metric.used.name}"
    tags = "${circonus_metric.used.tags}"
    type = "${circonus_metric.used.type}"
  }

  tags = {
    source = "circonus"
  }
}

resource "circonus_metric" "used" {
  name = "${var.used_metric_name}"

  tags = {
    source = "circonus"
  }

  type = "numeric"
}
```
* Document the `circonus_metric` resource.
* Updated `circonus_check` docs.
* If a port was present, automatically set it in the Config.
* Alpha sort the check parameters now that they've been renamed.
* Fix a handful of panics as a result of the schema changing.
* Move back to a `TypeSet` for tags. After a stint with `TypeMap`, move
back to `TypeSet`.
A set of strings seems to match the API the best. The `map` type was
convenient because it reduced the amount of boilerplate, but you lose
out on other things. For instance, tags come in the form of
`category:value`, so naturally it seems like you could use a map, but
you can't without severe loss of functionality because assigning two
values to the same category is common. And you can't normalize map
input or suppress the output correctly (this was eventually what broke
the camel's back). I tried an experiment of normalizing the input to be
`category:value` as the key in the map and a value of `""`, but... see
diff suppress. In this case, simple is good.
While here, bring some cleanups to _Metric since that was my initial
testing target.
* Rename `providerConfig` to `_ProviderConfig`
* Checkpoint the `json` check type.
* Fix a few residual issues re: missing descriptions.
* Rename `validateRegexp` to `_ValidateRegexp`
* Use tags as real sets, not just a slice of strings.
* Move the DiffSuppressFunc for tags down to the Elem.
* Fix up unit tests to chase the updated, default hasher function being used.
* Remove `Computed` attribute from `TypeSet` objects.
This fixes a pile of issues re: update that I was having.
* Rename functions.
`GetStringOk` -> `GetStringOK`
`GetSetAsListOk` -> `GetSetAsListOK`
`GetIntOk` -> `GetIntOK`
* Various small cleanups and comments rolled into a single commit.
* Add a `postgresql` check type for the `circonus_check` resource.
* Rename various validator functions to be _CapitalCase vs capitalCase.
* Err... finish the validator renames.
* Add `GetFloat64()` support.
* Add `icmp_ping` check type support.
* Catch up to the _API*Attr renames.
Deliberately left out of the previous commit in order to create a clean
example of what is required to add a new check type to the
`circonus_check` resource.
* Clarify when the `target` attribute is required for the `postgresql`
check type.
* Correctly pull the metric ID attribute from the right location.
* Add a circonus_stream_group resource (a.k.a. a Circonus "metric cluster")
* Add support for the [`caql`](https://login.circonus.com/user/docs/caql_reference) check type.
* Add support for the `http` check type.
* `s/SSL/TLS/g`
* Add support for `tcp` check types.
* Enumerate the available metrics that are supported for each check type.
* Add [`cloudwatch`](https://login.circonus.com/user/docs/Data/CheckTypes/CloudWatch) check type support.
* Add a `circonus_trigger` resource (a.k.a. a Circonus Ruleset).
* Rename a handful of functions to make it clear in the function name the
direction of flow for information moving through the provider.
TL;DR: Replace `parse` and `read` with "foo to bar"-like names.
* Fix the attribute name used in a validator. Absent != After.
* Set the minimum `absent` predicate to 70s per testing.
* Fix the regression tests for circonus_trigger now that absent has a 70s min
* Fix up the `tcp` check to require a `host` attribute.
Fix tests. It's clear I didn't run these before committing/pushing the
`tcp` check last time.
* Fix `circonus_check` for `cloudwatch` checks.
* Rename `parsePerCheckTypeConfig()` to `_CheckConfigToAPI` to be
consistent with other function names.
grep(1)ability of code++
* Slack buttons as an integer are string encoded.
* Fix updates for `circonus_contact`.
* Fix the out parameters for contact groups.
* Move to using `_CastSchemaToTF()` where appropriate.
* Fix circonus_contact_group. Updates work as expected now.
* Use `_StateSet()` in place of `d.Set()` everywhere.
* Make a quick pass over the collector datasource to modernize its style
* Quick pass for items identified by `golint`.
* Fix up collectors
* Fix the `json` check type.
Reconcile possible sources of drift. Update now works as expected.
* Normalize trigger durations to seconds.
* Improve the robustness of the state handling for the `circonus_contact_group` resource.
* I'm torn on this, but sort the contact groups in the notify list.
This does mean that if the first contact group in the list has a higher
lexical sort order the plan won't converge until the offending resource
is tainted and recreated. But there's also some sorting happening
elsewhere, so.... sort and taint for now and this will need to be
revisited in the future.
* Add support for the `httptrap` check type.
* Remove empty units from the state file.
* Metric clusters can return a 404. Detect this accordingly in its
respective Exists handler.
* Add a `circonus_graph` resource.
* Fix a handful of bugs in the graph provider.
* Re-enable the necessary `ConflictsWith` definitions and normalize attribute names.
* Objects that have been deleted via the UI return a 404. Handle in Exists().
* Teach `circonus_graph`'s Stack set to accept nil values.
* Set `ForceNew: true` for a graph's name.
* Chase various API fixes required to make `circonus_graph` work as expected.
* Fix up the handling of sub-1 zoom resolutions for graphs.
* Add the `check_by_collector` out parameter to the `circonus_check` resource.
* Improve validation of line vs area graphs. Fix graph_style.
* Fix up the `logarithmic` graph axis option.
* Resolve various trivial `go vet` issues.
* Add a stream_group out parameter.
* Remove incorrectly applied `Optional` attributes to the `circonus_account` resource.
* Remove various `Optional` attributes from the `circonus_collector` data source.
* Centralize the common need to suppress leading and trailing whitespace into `suppressWhitespace`.
* Sync up with upstream vendor fixes for circonus_graph.
* Update the checksum value for the http check.
* Chase `circonus_graph`'s underlying `line_style` API object change from `string` to `*string`.
* Clean up tests to use a generic terraform regression testing account.
* Add support for MySQL to the `circonus_check` resource.
* Rename all identifiers that began with a `_` and replace with a corresponding lowercase glyph.
* Remove stale comment in types.
* Move the calls to `ResourceData`'s `SetId()` calls to be first in the
list so that no resources are lost in the event of a `panic()`.
* Remove `stateSet` from the `circonus_trigger` resource.
* Remove `stateSet` from the `circonus_stream_group` resource.
* Remove `schemaSet` from the `circonus_graph` resource.
* Remove `stateSet` from the `circonus_contact` resource.
* Remove `stateSet` from the `circonus_metric` resource.
* Remove `stateSet` from the `circonus_account` data source.
* Remove `stateSet` from the `circonus_collector` data source.
* Remove stray `stateSet` call from the `circonus_contact` resource.
This is an odd artifact to find... I'm completely unsure as to why it
was there to begin with but am mostly certain it's a bug and needs to be
removed.
* Remove `stateSet` from the `circonus_check` resource.
* Remove the `stateSet` helper function.
All call sites have been converted to return errors vs `panic()`'ing at
runtime.
* Remove a pile of unused functions and type definitions.
* Remove the last of the `attrReader` interface.
* Remove an unused `Sprintf` call.
* Update `circonus-gometrics` and remove unused files.
* Document what `convertToHelperSchema()` does.
Rename `castSchemaToTF` to `convertToHelperSchema`.
Change the function parameter ordering so the `map` of attribute
descriptions comes first: this is much easier to maintain when creating
schema inline.
* Move descriptions into their respective source files.
* Remove all instances of `panic()`.
In the case of software bugs, log an error. Never `panic()` and always
return a value.
* Rename `stream_group` to `metric_cluster`.
* Rename triggers to rule sets
* Rename `stream` to `metric`.
* Chase the `stream` -> `metric` change into the docs.
* Remove some unused test functions.
* Add the now required `color` attribute for graphing a `metric_cluster`.
* Add a missing description to silence a warning.
* Add `id` as a selector for the account data source.
* Futureproof testing: Randomize all asset names to prevent any possible resource conflicts.
This isn't a necessary change for our current build and regression
testing, but *just in case* we have a radical change to our testing
framework in the future, make all resource names fully random.
* Rename various values to match the Circonus docs.
* s/alarm/alert/g
* Ensure ruleset criteria cannot be empty.
helper/schema: Rename Timeout resource block to Timeouts
- Pluralize configuration argument name to better represent that there is
one block for many timeouts
- use a const for the configuration timeouts key
- update docs
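A hedged sketch of the renamed block on a resource assumed to support it (aws_db_instance here; values are placeholders):
```
resource "aws_db_instance" "example" {
  allocated_storage = 10
  engine            = "mysql"
  instance_class    = "db.t2.micro"
  name              = "exampledb"
  username          = "admin"
  password          = "changeme123"

  # One pluralized block configures all supported timeouts.
  timeouts {
    create = "60m"
    update = "40m"
    delete = "40m"
  }
}
```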
* added support for linux capabilities
Refs #11623
Added capabilities block (a sketch follows after this commit list)
Added tests for it
Added documentation for it.
My PC doesn't support memory swap so it errors there.
```
$ make testacc TEST=./builtin/providers/docker TESTARGS='-run=TestAccDockerContainer_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/17 14:57:08 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/docker -v -run=TestAccDockerContainer_ -timeout 120m
=== RUN TestAccDockerContainer_basic
--- PASS: TestAccDockerContainer_basic (44.50s)
=== RUN TestAccDockerContainer_volume
--- PASS: TestAccDockerContainer_volume (40.73s)
=== RUN TestAccDockerContainer_customized
--- FAIL: TestAccDockerContainer_customized (50.27s)
testing.go:265: Step 0 error: Check failed: Check 2/2 error: Container has wrong memory swap setting: -1
Please check that you machine supports memory swap (you can do that by running 'docker info' command).
=== RUN TestAccDockerContainer_upload
--- PASS: TestAccDockerContainer_upload (38.56s)
FAIL
exit status 1
FAIL github.com/hashicorp/terraform/builtin/providers/docker 174.070s
Makefile:48: recipe for target 'testacc' failed
make: *** [testacc] Error 1
```
* Documentation changes.
* added maxitems and rerun tests
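A hedged sketch of the capabilities block added above (image and capability names are placeholders):
```
resource "docker_image" "ubuntu" {
  name = "ubuntu:latest"
}

resource "docker_container" "app" {
  name  = "app"
  image = "${docker_image.ubuntu.latest}"

  # Linux capabilities added to / dropped from the container.
  capabilities {
    add  = ["SYS_ADMIN"]
    drop = ["MKNOD"]
  }
}
```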
* provider/ignition: migration from resources to data resources
* website: provider/ignition documentation updated to data resources
* provider/ignition: backwards compatibility support for old resources
* Allow for local development with ns1 provider.
* Adds first implementation of ns1 notification list resource.
* NS1 record.use_client_subnet defaults to true, and added test for field.
* Adds more test cases for monitoring jobs.
* Adds webhook/datafeed notifier types and acctests for notifylists.
* Adds docs for notifylists resource.
* Updates ns1-go rest client via govendor
* Fix typos in record docs
There is no azurerm_container_service.linux_profile.ssh_keys argument, as stated in the examples, but an ssh_key field. The arguments reference was correct.
* helper/schema: Add custom Timeout block for resources
* refactor DefaultTimeout to support multiple types. Load meta in Refresh from Instance State
* update vpc but it probably won't last anyway
* refactor test into table test for more cases
* rename constant keys
* refactor configdecode
* remove VPC demo
* remove comments
* remove more comments
* refactor some
* rename timeKeys to timeoutKeys
* remove note
* documentation/resources: Document the Timeout block
* document timeouts
* have a test case that covers 'hours'
* restore a System default timeout of 20 minutes, instead of 0
* restore system default timeout of 20 minutes, refactor tests, add test method to handle system default
* rename timeout key constants
* test applying timeout to state
* refactor test
* Add resource Diff test
* clarify docs
* update to use constants
* provider/openstack: Redesign openstack_blockstorage_volume_attach_v2
The current design of openstack_blockstorage_volume_attach_v2 does
not correctly implement the Block Storage API attachment call. It
was only partially implemented, only marking volumes as being
attached, while never actually attaching them.
This redesign is a closer alignment to how creating attachments
to a standalone Block Storage service works.
For creating attachments specifically in the case of OpenStack
Compute instances, the openstack_compute_volume_attach_v2 resource
is required.
* provider/openstack: re-adding instance_id for backwards compatibility
This commit adds the openstack_compute_floatingip_associate_v2
resource which specifically handles associating a floating IP
address to an instance. This can be used instead of the existing
floating_ip options in the openstack_compute_instance_v2 resource.
* Replace DNSimple API client with the official Go client
* Upgrade DNSimple provider to use the new API v2
Acceptance tests pass:
```
=== RUN TestProvider
--- PASS: TestProvider (0.00s)
=== RUN TestProvider_impl
--- PASS: TestProvider_impl (0.00s)
=== RUN TestAccDNSimpleRecord_Basic
--- PASS: TestAccDNSimpleRecord_Basic (2.67s)
=== RUN TestAccDNSimpleRecord_Updated
--- PASS: TestAccDNSimpleRecord_Updated (1.88s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/dnsimple
```
Note that the code still has to be updated to pass the account ID
dynamically in place of "TODO-ACCOUNT".
* Refactor DNSimple provider to expose both client and config
The config is required as the new API wants to know the identifier of
the account you are operating on. The account is not stored in the
client (as the client can talk with different accounts), hence I need
to pass it as part of the config (see the provider sketch below).
* Identify Terraform requests to DNSimple via UserAgent
* Upgrade to the latest dnsimple-go version
* Update docs
Provide upgrade instructions and update the docs for API v2.
* Remove redundant type declaration
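A hedged sketch of the v2 provider configuration described above (variable names are placeholders; API v2 authenticates with a token plus the account identifier):
```
variable "dnsimple_token" {}
variable "dnsimple_account" {}

provider "dnsimple" {
  token   = "${var.dnsimple_token}"
  account = "${var.dnsimple_account}"
}

resource "dnsimple_record" "www" {
  domain = "example.com"
  name   = "www"
  value  = "203.0.113.10"
  type   = "A"
}
```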
* provider/openstack: Rename provider to loadbalancer_provider
This commit renames provider to loadbalancer_provider in the
openstack_lb_loadbalancer_v2 resource.
It also changes security_group_ids to Computed so default
security groups are added to the state correctly.
* provider/openstack: Switch to a deprecation path for loadbalancer provider
This commit switches to using a deprecation path for removal of the
previous "provider" argument in favor of the new "loadbalancer_provider".
This commit removes the bundled devstack script and updates the
documentation to point to the latest build scripts used for
testing. It also mentions that development should be possible on
any OpenStack environment.
Update the docs for `google_project` with @zachgersh's suggestions
(#11895) to properly communicate the changes that took place in 0.8.5.
While they don't break any current configs or state, the new behaviour
should be called out for people who were using the old behaviour and are
adding new projects to their configs/state.
I also took this opportunity to update google_project_iam_policy with a
note to users letting them know that there be dragons.
This feature allows sending a notification to either an SQS queue or an
SNS topic when an error occurs running an AWS Lambda function.
This fixes #10630.
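A hedged sketch of wiring a function to an SQS queue for failed invocations (the IAM role and queue are assumed to exist elsewhere in the config; an SNS topic ARN works the same way):
```
resource "aws_lambda_function" "worker" {
  filename      = "worker.zip"
  function_name = "worker"
  role          = "${aws_iam_role.lambda.arn}"
  handler       = "index.handler"
  runtime       = "nodejs4.3"

  # Failed invocations are delivered to this target.
  dead_letter_config {
    target_arn = "${aws_sqs_queue.lambda_dlq.arn}"
  }
}
```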
The adjustment was made after I spent a few minutes scratching my head. I have done the following:
* Updated the 'provider' block in the first example code to be 'token' instead of 'access_key' - Didn't work previously.
* Clarified locations of both 'access_key' and 'token' within the Scaleway panel, and appropriate naming.
* Removed "empty" section in example block at the bottom, as this fails with an error when attempted.
Overall I think this increases readability. I have tested this against my own Scaleway account.
* providers/spotinst: Add support for Spotinst resources
* providers/spotinst: Fix merge conflict - layouts/docs.erb
* docs/providers/spotinst: Fix the resource description field
* providers/spotinst: Fix the acceptance tests
* providers/spotinst: Mark the device_index as a required field
* providers/spotinst: Change the associate_public_ip_address field to TypeBool
* docs/providers/spotinst: Update the description of the adjustment field
* providers/spotinst: Rename IamRole to IamInstanceProfile to make it more compatible with the AWS provider
* docs/providers/spotinst: Rename iam_role to iam_instance_profile
* providers/spotinst: Deprecate the iam_role attribute
* providers/spotinst: Fix a misspelled var (IamRole)
* providers/spotinst: Fix possible null pointer exception related to "iam_instance_profile"
* docs/providers/spotinst: Add "load_balancer_names" missing description
* providers/spotinst: New resource "spotinst_subscription" added
* providers/spotinst: Eliminate a possible null pointer exception in "spotinst_aws_group"
* providers/spotinst: Eliminate a possible null pointer exception in "spotinst_subscription"
* providers/spotinst: Mark spotinst_subscription as deleted in destroy
* providers/spotinst: Add support for custom event format in spotinst_subscription
* providers/spotinst: Disable the destroy step of spotinst_subscription
* providers/spotinst: Add support for update subscriptions
* providers/spotinst: Merge fixed conflict - layouts/docs.erb
* providers/spotinst: Vendor dependencies
* providers/spotinst: Return a detailed error message
* provider/spotinst: Update the plugin list
* providers/spotinst: Vendor dependencies using govendor
* providers/spotinst: New resource "spotinst_healthcheck" added
* providers/spotinst: Update the Spotinst SDK
* providers/spotinst: Comment out unnecessary log.Printf
* providers/spotinst: Fix the acceptance tests
* providers/spotinst: Gofmt fixes
* providers/spotinst: Use multiple functions to expand each block
* providers/spotinst: Allow ondemand_count to be zero
* providers/spotinst: Change security_group_ids from TypeSet to TypeList
* providers/spotinst: Remove unnecessary `ForceNew` fields
* providers/spotinst: Update the Spotinst SDK
* providers/spotinst: Add support for capacity unit
* providers/spotinst: Add support for EBS volume pool
* providers/spotinst: Delete health check
* providers/spotinst: Allow to set multiple availability zones
* providers/spotinst: Gofmt
* providers/spotinst: Omit empty strings from the load_balancer_names field
* providers/spotinst: Update the Spotinst SDK to v1.1.9
* providers/spotinst: Add support for new strategy parameters
* providers/spotinst: Update the Spotinst SDK to v1.2.0
* providers/spotinst: Add support for Kubernetes integration
* providers/spotinst: Fix merge conflict - vendor/vendor.json
* providers/spotinst: Update the Spotinst SDK to v1.2.1
* providers/spotinst: Add support for Application Load Balancers
* providers/spotinst: Do not allow to set ondemand_count to 0
* providers/spotinst: Update the Spotinst SDK to v1.2.2
* providers/spotinst: Add support for scaling policy operators
* providers/spotinst: Add dimensions to spotinst_aws_group tests
* providers/spotinst: Allow both ARN and name for IAM instance profiles
* providers/spotinst: Allow ondemand_count=0
* providers/spotinst: Split out the set funcs into flatten style funcs
* providers/spotinst: Update the Spotinst SDK to v1.2.3
* providers/spotinst: Add support for EBS optimized flag
* providers/spotinst: Update the Spotinst SDK to v2.0.0
* providers/spotinst: Use stringutil.Stringify for debugging
* providers/spotinst: Update the Spotinst SDK to v2.0.1
* providers/spotinst: Key pair is now optional
* providers/spotinst: Make sure we do not nullify signals on strategy update
* providers/spotinst: Hash both Strategy and EBS Block Device
* providers/spotinst: Hash AWS load balancer
* providers/spotinst: Update the Spotinst SDK to v2.0.2
* providers/spotinst: Verify namespace exists before appending policy
* providers/spotinst: Image ID will be in a separate block from now on, so as to allow ignoring changes only on the image ID. This change is backwards compatible.
* providers/spotinst: user data decoded when returned from spotinst api, so that TF compares the two states properly, and does not update without cause.
* provider/aws: New resource codepipeline
* Vendor aws/codepipeline
* Add tests
* Add docs
* Bump codepipeline to v1.6.25
* Adjustments based on feedback
* Force new resource on ID change
* Improve tests
* Switch update to read
Since we don't require a second pass, only do a read.
* Skip tests if GITHUB_TOKEN is not set
Reported by @sethvargo - we auto-encode for AWS; we should follow a
similar pattern for Azure.
To avoid double encoding, we check that the value is not already
encoded before encoding it.
* Vendor google.golang.org/api/cloudbilling/v1
* providers/google: Add cloudbilling client
* providers/google: google_project supports billing account
This change allows a Terraform user to set and update the billing
account associated with their project.
* providers/google: Testing project billing account
This change adds optional acceptance tests for project billing accounts.
GOOGLE_PROJECT_BILLING_ACCOUNT and GOOGLE_PROJECT_BILLING_ACCOUNT_2
must be set in the environment for the tests to run; otherwise, they
will be skipped.
Also includes a few code cleanups per review.
* providers/google: Improve project billing error message
* vendor: Updating Gophercloud
* provider/openstack: Image Data Source
This commit adds the openstack_images_image_v2 data source, which can
query the Image Service v2 API for a specific image (see the sketch
after this list).
This commit adds the ability to log all requests and responses
between Terraform and the OpenStack cloud. To enable, set the
OS_DEBUG environment variable to 1.
This commit has a few more fixes to the recently added
openstack_images_image_v2 resource:
* tags were changed to a Set because the OpenStack Image API does
not seem to respect ordering.
* The visibility argument was fixed.
* Acceptance tests for all updatable fields have been implemented.
* Documentation updates, including a new entry in the sidebar.
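A minimal sketch of the data source feeding an instance (image and flavor names are placeholders):
```
data "openstack_images_image_v2" "ubuntu" {
  name        = "Ubuntu 16.04"
  most_recent = true
}

resource "openstack_compute_instance_v2" "app" {
  name        = "app-1"
  image_id    = "${data.openstack_images_image_v2.ubuntu.id}"
  flavor_name = "m1.small"
}
```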
As stated in the second generation pricing documentation:
> Second Generation pricing is composed of the following charges:
>
> Instance pricing
> Storage pricing
> Network pricing
> There is no choice of pricing plan.
cf. https://cloud.google.com/sql/pricing#2nd-gen-pricing
According to the referenced documentation, it is one week, not two months:
> You cannot reuse an instance name for up to a week after you have deleted an instance.
cf. https://cloud.google.com/sql/docs/mysql/delete-instance
* provider/google-cloud: Add maintenance window
Allows specification of the `maintenance_window` within the `settings`
block. This controls when Google will restart a database in order to
apply updates. It is also possible to select an `update_track` to
control the relative update timing between instances in the same
project (see the sketch below).
* Adjustments as suggested in code review.
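A hedged sketch of the block inside `settings` (instance name, tier, and window values are placeholders):
```
resource "google_sql_database_instance" "master" {
  name             = "master-instance"
  region           = "us-central1"
  database_version = "MYSQL_5_7"

  settings {
    tier = "db-n1-standard-1"

    # Google may restart the instance to apply updates in this window.
    maintenance_window {
      day          = 7 # Sunday
      hour         = 3 # 03:00 UTC
      update_track = "stable"
    }
  }
}
```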
* Added new resource aws_elastic_beanstalk_application_version.
* Changing bucket and key to required.
* Update to use d.Id() directly in DescribeApplicationVersions.
* Checking err to make sure that the application version is successfully deleted.
* Update `version_label` to `Computed: true`.
* provider/aws: Updating to python solution stack
* provider/aws: Beanstalk App Version delete source
The Elastic Beanstalk API call to delete `application_version` resource
should not delete the s3 bundle, as this object is managed by another
Terraform resource
* provider/aws: Update application version docs
* Fix application version test
* Add `version_label` update test
Adds test that fails after rebasing branch onto v0.8.x. `version_label`
changes do not update the `aws_elastic_beanstalk_environment` resource.
* `version_label` changes to update environment
* Prevent unintended delete of `application_version`
Prevents an `application_version` used by multiple environments from
being deleted.
* Add `force_delete` attribute
* Update documentation
* [datadog] Update go-datadog-api library
Involves one breaking API change. Also some `gofmt`ing.
* [datadog] Add support for new_host_delay to the datadog_monitor resource
New API parameter that Datadog added for monitors to ignore new hosts
for the specified time period in monitor evaluation.
* provider/azurerm: Remove location argument from loadbalancer_backend_address_pool
Applying a Terraform execution plan containing a loadbalancer_backend_address_pool
with a location argument throws a warning:
```
Warnings:
* azurerm_lb_backend_address_pool.test: "location": [DEPRECATED] location is no longer used
No errors found. Continuing with 1 warning(s).
```
* provider/azurerm: Remove location argument from azurerm_lb_rule
(Similar to https://github.com/hashicorp/terraform/pull/11963)
Applying a Terraform execution plan containing a azurerm_lb_rule
with a location argument throws a warning:
```
Warnings:
* azurerm_lb_rule.test: "location": [DEPRECATED] location is no longer used
No errors found. Continuing with 1 warning(s).
```
Removing the (required) location argument from the documentation
as it is deprecated as per the warning.
* provider/azurerm: Remove location argument from azurerm_lb_probe
(Similar to https://github.com/hashicorp/terraform/pull/11963)
Applying a Terraform execution plan containing a azurerm_lb_probe
with a location argument throws a warning:
```
Warnings:
* azurerm_lb_probe.test: "location": [DEPRECATED] location is no longer used
No errors found. Continuing with 1 warning(s).
```
Removing the (required) location argument from the documentation
as it is deprecated as per the warning.
* Adds response conditions for papertrail in fastly
* Adds cache conditional for gzip in fastly
* Opens up conditionals under fastly headers
* Adds request conditions to s3 logging for fastly
* Creates conditionals properly for testing
* Clarifies conditionals documentation for the website
* Clarifies resource descriptions for conditionals
* Formats papertrail testing properly
* Fixes syntax issues in gzip and s3 fastly testing
* Tests full schemas for gzip basic testing
* Updates header testing to check full schema
* Fixes gzip and headers testing
* Fixes s3 conditional testing
We now enable the final_snapshot of aws_rds_cluster by default. This is
a continuation of the work in #11668
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSRDSCluster_takeFinalSnapshot'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/04 13:19:52 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSRDSCluster_takeFinalSnapshot -timeout 120m
=== RUN TestAccAWSRDSCluster_takeFinalSnapshot
--- PASS: TestAccAWSRDSCluster_takeFinalSnapshot (141.59s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 141.609s
```
* Importing v7.0.1 of the SDK
* Configuring the Container Service Client
* Scaffolding the Container Service resource
* Scaffolding the Website documentation
* Completing the documentation
* Acceptance Tests for Kubernetes Azure Container Service
* DCOS / Swarm tests
* Parsing values back from the API properly
* Fixing the test
* Service Principal can be optional. Because of course it can.
* Validation for the Container Service Counts
* Updating the docs
* Updating the field required values
* Making the documentation more explicit
* Fixing the build
* Examples for DCOS and Swarm
* Removing storage_uri for now
* Making the SSH Key required as per the docs
* Resolving the merge conflicts
* Removing the unused errors
* Adding Hashes to the schemas
* Switching out the provider registration
* Fixing the hash definitions
* Updating keydata to match
* Client Secret is sensitive
* List -> Set
* Using the first item for the diagnostic_profile
* Helps if you actually update the type
* Updating the docs to include the Computed fields
* Fixing comments / removing redundant optional checks
* Removing the FQDN's from the examples
* Moving the Container resources together
* Enable remote s3 state support for assume role
- provide role_arn in backend config to enable assume role (see the sketch below)
Fixes #8739
* Check for errors after obtaining credentials
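A hedged sketch of the backend config with assume role (bucket, key, and ARN are placeholders):
```
terraform {
  backend "s3" {
    bucket = "example-terraform-state"
    key    = "network/terraform.tfstate"
    region = "us-east-1"

    # Assume this role before reading or writing state.
    role_arn = "arn:aws:iam::123456789012:role/terraform-state"
  }
}
```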
I lost a few hours figuring out the right way to describe an ARN for an API
Gateway resource. Specifically I translated the example poorly since I didn't
realize I had to append the path onto the end of the ARN.
Adds two links to an Amazon documentation page describing the format for API
Gateway ARNs. Adds an additional path component to the ARN example so you can
see that you need to specify paths.
* Adds schema for fastly healthcheck
* Handles changes to the fastly healthcheck
* Flattens and refreshed fastly healthchecks
* Adds testing for fastly healthcheck
* Adds website documentation for fastly healthcheck
* Fixes terraform syntax in test examples
- OpenStack provider now supports either a path or the file contents for
`cacert_file`, `cert`, and `key`
- Makes it easier to automate TF by passing in certs as environment
variables
- set `OS_SSL_TESTS=true` to run the acceptance tests
Updates the vendored Nomad jobspec parser so that parameterized Nomad job files can now be parsed and used with Terraform.
Also fixes tests to adhere to the new jobspec version, and updates the documentation to reflect this as well.
The name can only be made up of lowercase letters 'a'-'z', the numbers 0-9 and the hyphen. The hyphen may not lead or trail in the name.
Changed MySqlServer to mysqlserver
* Terraform ProfitBricks Builder
* make fmt
* Merge remote-tracking branch 'upstream/master' into terraform-provider-profitbricks
# Conflicts:
# command/internal_plugin_list.go
* Addressing PR remarks
* Removed importers
* Added ProfitBricks Data Sources
* Added documentation
* Updated to REST v3:
- nat parameter for Nics
- availabilityZone for Volumes
Minor code clean up
* Minor code clean up
* Fixed typo in volume documentation
* make fmt
* Addressing requested changes
* Added a step in load balancer tests in CheckDestroy where we are making sure that the test doesn't leave dangling resources in ProfitBricks
* Changed expected image name
* Fixed data center test
Code clean up
* Added missing parameter to ProfitBricks provider docs.
* Add aws dms vendoring
* Add aws dms endpoint resource
* Add aws dms replication instance resource
* Add aws dms replication subnet group resource
* Add aws dms replication task resource
* Fix aws dms resource go vet errors
* Review fixes: Add id validators for all resources. Add validator for endpoint engine_name.
* Add aws dms resources to importability list
* Review fixes: Add aws dms iam role dependencies to test cases
* Review fixes: Adjustments for handling input values
* Add aws dms replication subnet group tagging
* Fix aws dms subnet group doesn't use standard error for resource not found
* Missed update of aws dms vendored version
* Add aws dms certificate resource
* Update aws dms resources to force new for immutable attributes
* Fix tests failing on subnet deletion by adding explicit dependencies. Combine import tests with basic tests to cut down runtime.
* provider/aws: Update Application Auto Scaling service model
- Add support for automatically scaling an Amazon EC2 Spot fleet.
* Remove duplicate policy_type check.
* Test creating a scalable target for a spot fleet request.
* Test creating a scaling policy for a spot fleet request.
* Update resource docs to support scaling an Amazon EC2 Spot fleet.
- aws_appautoscaling_policy
- aws_appautoscaling_target
* Remove arn attribute from aws_appautoscaling_target
- No arn is generated or returned for this resource.
* Remove optional name attribute from aws_appautoscaling_target
- ScalableTargets do not have a name
- I think this was copied from aws_appautoscaling_policy
* AWS Application Autoscaling resource documentation tweaks
- include a target resource in the policy example
- sort attributes by alpha
- fixup markdown
- add spaces to test config
Adds tag support to the `aws_dynamodb_table` resource. Also adds a test for the resource, and a test to ensure that the tags are populated correctly from a resource import.
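A minimal sketch of a tagged table (names and capacities are placeholders):
```
resource "aws_dynamodb_table" "basic" {
  name           = "example-table"
  read_capacity  = 5
  write_capacity = 5
  hash_key       = "Id"

  attribute {
    name = "Id"
    type = "S"
  }

  tags {
    Name        = "example-table"
    Environment = "test"
  }
}
```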
```
$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSDynamoDBTable_tags'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/01 15:35:00 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSDynamoDBTable_tags -timeout 120m
=== RUN TestAccAWSDynamoDBTable_tags
--- PASS: TestAccAWSDynamoDBTable_tags (28.69s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 28.713s
```
```
$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSDynamoDbTable_importTags'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/01 15:39:49 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSDynamoDbTable_importTags -timeout 120m
=== RUN TestAccAWSDynamoDbTable_importTags
--- PASS: TestAccAWSDynamoDbTable_importTags (30.62s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 30.645s
```
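A minimal sketch of the new tag support (table name and attribute values are illustrative):
```
resource "aws_dynamodb_table" "example" {
  name           = "example-table"
  read_capacity  = 1
  write_capacity = 1
  hash_key       = "id"

  attribute {
    name = "id"
    type = "S"
  }

  # Tags are now applied to the table and populated on import
  tags {
    Name        = "example-table"
    Environment = "test"
  }
}
```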
Fixes: #11587
Adds a small note to the `initial_lifecycle_hook` to note that this will
only work when creating a new Autoscaling group. For everything else,
you need to use the `aws_autoscaling_lifecycle_hook` resource
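For an existing group, a hedged sketch using the standalone resource (names and values are placeholders):
```
resource "aws_autoscaling_lifecycle_hook" "example" {
  name                   = "example-hook"
  autoscaling_group_name = "${aws_autoscaling_group.example.name}"
  default_result         = "CONTINUE"
  heartbeat_timeout      = 300
  lifecycle_transition   = "autoscaling:EC2_INSTANCE_LAUNCHING"
}
```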
Cloud SQL Gen 2 instances come with a default 'root'@'%' user on
creation. This change automatically deletes that user after creation. A
Terraform user must use the google_sql_user to create a user with
appropriate host and password.
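A minimal sketch of creating a replacement user with google_sql_user (names and password are placeholders):
```
resource "google_sql_user" "app" {
  name     = "app"
  instance = "${google_sql_database_instance.master.name}"
  host     = "%"
  password = "replace-me"
}
```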
* Added Step Function Activity & Step Function State Machine
* Added SFN State Machine documentation
* Added aws_sfn_activity & documentation
* Allowed import of sfn resources
* Added more checks on tests, fixed documentation
* Handled the update case of a SFN function (might be already deleting)
* Removed the State Machine import test file
* Fixed the eventual consistency of the read after delete for SFN functions
* Image and vhdcontainers are mutually exclusive.
* Fix ip configuration handling and update support for load balancer backend pools.
* Fix os disk handling.
* Remove os_type from disk hash.
* Load balancer pools should not be computed.
* Add support for the overprovision property.
* Update documentation.
* Create acceptance test for scale set lb changes.
* Create acceptance test for scale set overprovisioning.
* added server_side_encryption to s3_bucket_object resource including associated acceptance test and documentation.
* got acceptance tests passing.
* made server_side_encryption a computed attribute and only set kms_key_id attribute if an S3 non-default master key is in use.
* ensured kms api is only interrogated if required.
* Creates papertrail logging resource for fastly
* Adds modification support for fastly papertrail
* Flattens and lists papertrail resources
* Adds testing for fastly papertrail
* Adds papertrail documentation for fastly to the website
* Fixes schema assignment name mistake
* Changes testing hostnames to pass fastly API validation
Implementing vpc_peering_connection_accept.
Additions from @ewbankkit:
Rename 'aws_vpc_peering_connection_accept' to 'aws_vpc_peering_connection_accepter'.
Get it working by reusing functionality from the 'aws_vpc_peering_connection' resource.
* Add a new data provider to decrypt AWS KMS secrets
* Address feedback
* Rename aws_kms_secrets to aws_kms_secret
* Add more examples to the documentation
Fixes: #11461
This will allow the user to pass a policy to further restrict the use
of AssumeRole. It is important to note that it will NOT allow an
expansion of access rights
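A hedged sketch of passing the extra policy; the restrict.json file and the role ARN are placeholders:
```
provider "aws" {
  region = "us-east-1"

  assume_role {
    role_arn     = "arn:aws:iam::123456789012:role/terraform"
    session_name = "terraform"

    # Additional policy document that further restricts (never expands)
    # the rights granted by the assumed role.
    policy = "${file("restrict.json")}"
  }
}
```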
Add support for creating, updating, and deleting projects, as well as
their enabled services and their IAM policies.
Various concessions were made for backwards compatibility, and will be
removed in 0.9 or 0.10.
- When creating an `iam_instance_profile` you will receive an error if you have multiple roles defined but have not increased your AWS limit for the number of roles you can assign to an `iam_instance_profile`.
- See more on defaults: http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_iam-limits.html
* Added aws_api_gateway_integration request_templates in the documentation
* Added aws_api_gateway_integration_response response_templates in the documentation
This commit adds a StateRefresh func for volume attachments. Mostly
this is to add a buffer of time between the request and the return
of the attachment to give time for the volume to become attached,
however, in some cases the refresh function could work as specified.
Docs have also been updated to reflect that a device could be specified,
but to use with caution.
* vendor: update gopkg.in/ns1/ns1-go.v2
* provider/ns1: Port the ns1 provider to Terraform core
* docs/ns1: Document the ns1 provider
* ns1: rename remaining nsone -> ns1 (#10805)
* Ns1 provider (#11300)
* provider/ns1: Flesh out support for meta structs.
Following the structure outlined by @pashap.
Using reflection to reduce copy/paste.
Putting metas inside single-item lists. This is clunky, but I couldn't
figure out how else to have a nested struct. Maybe the Terraform people
know a better way?
Inside the meta struct, all fields are always written to the state; I
can't figure out how to omit fields that aren't used. This is not just
verbose, it actually causes issues (because you can't have both "up" and
"up_feed" set).
Also some minor other changes:
- Add "terraform" import support to records and zones.
- Create helper class StringEnum.
* provider/ns1: Make fmt
* provider/ns1: Remove stubbed out RecordRead (used for testing metadata change).
* provider/ns1: Need to get interface that m contains from Ptr Value with Elem()
* provider/ns1: Use empty string to indicate no feed given.
* provider/ns1: Remove old record.regions fields.
* provider/ns1: Removes redundant testAccCheckRecordState
* provider/ns1: Moves account permissions logic to permissions.go
* provider/ns1: Adds tests for team resource.
* provider/ns1: Move remaining permissions logic to permissions.go
* ns1/provider: Adds datasource.config
* provider/ns1: Small clean up of datafeed resource tests
* provider/ns1: removes testAccCheckZoneState in favor of explicit name check
* provider/ns1: More renaming of nsone -> ns1
* provider/ns1: Comment out metadata for the moment.
* Ns1 provider (#11347)
* Fix the removal of empty containers from a flatmap
Removal of empty nested containers from a flatmap would sometimes fail a
sanity check when removed in the wrong order. This would only fail
sometimes due to map iteration. There was also an off-by-one error in
the prefix check which could match the incorrect keys.
* provider/ns1: Adds ns1 go client through govendor.
* provider/ns1: Removes unused debug line
* docs/ns1: Adds docs around apikey/datasource/datafeed/team/user/record.
* provider/ns1: Gets go vet green
An absolute Manta path such as /$MANTA_USER/stor/random/path or ~~/stor/random/path will not work for the path option. /$MANTA_USER/stor/ is automatically prepended to the path.
* provider/aws: Remove hardcoded https from the ecr repository
When the ECR resource was created, we hardcoded the repository URL to
start with https://
This was a mistake, as all interaction with the repository then had to
include a replace function to strip the https:// prefix for the URL to be usable.
We need to note this change in the backward incompatibilities (see the
sketch below for referencing the new value).
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSEcrRepository_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/01/20 14:37:36 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSEcrRepository_ -timeout 120m
=== RUN TestAccAWSEcrRepository_importBasic
--- PASS: TestAccAWSEcrRepository_importBasic (20.46s)
=== RUN TestAccAWSEcrRepository_basic
--- PASS: TestAccAWSEcrRepository_basic (18.77s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 39.251s
```
* Update ecr_repository.html.markdown
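A minimal sketch of referencing the new scheme-less value (names are illustrative):
```
resource "aws_ecr_repository" "example" {
  name = "example"
}

output "repository_url" {
  # Returned without the https:// prefix, so no replace() call is
  # needed before passing it to docker login/push.
  value = "${aws_ecr_repository.example.repository_url}"
}
```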
Updates ECS task_definition documentation, and schema validation functions to match the AWS API documentation.
Updates ECS service documentation, and schema validation functions to match the AWS API documentation.
* provisioner/chef: Support named run-lists for Policyfiles
Add an optional argument for overriding the Chef Client's initial
run with a named run-list specified by the Policyfile. This is useful
for bootstrapping a node with a one-time setup recipe that deviates
from a policy's normal run-list.
* Update chef client cmd building per review feedback.
* removes region param from backend_service
- this param was not being used in this service
- you need a regional_backend_service if you want to pass this
* deprecated region instead of outright removing
* put session affinity formatting back
```
1 error(s) occurred:
* aws_elasticache_replication_group.cache: Error creating Elasticache Replication Group: InvalidParameterCombination: Expected a parameter group of family redis3.2 but found one of family redis2.8
status code: 400, request id: 9e6563a4-dd91-11e6-bc8b-ed011a44f429
```
* providers/google: add support for encrypting a disk
* providers/google: Add docs for encrypting disks
* providers/google: CSEK small fixes: sensitive params and mismatched state files
The example code fails validation:
```
Errors:
* aws_elasticache_replication_group.cache: "replication_group_id" must contain from 1 to 20 alphanumeric characters or hyphens
```
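A hedged sketch of a configuration that passes the validation (values are illustrative):
```
resource "aws_elasticache_replication_group" "cache" {
  # Must be 1 to 20 alphanumeric characters or hyphens
  replication_group_id          = "example-redis"
  replication_group_description = "example replication group"
  node_type                     = "cache.t2.micro"
  number_cache_clusters         = 2
  port                          = 6379
}
```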
* Remove contradiction with Scaleway documentation
The parameters previously termed by Terraform:
1. Organization
2. Access key
Are referred to, respectively, by Scaleway [0] as:
1. Access key
2. Token
which is a confusing contradiction for a user.
Since Scaleway terms (1) both 'access key' [0] and 'organization ID' [1],
@nicolai86 suggested keeping the latter as already used, but changing (2) for
'token'; removing the contradiction.
This commit thus changes the parameters to:
1. Organization
2. Token
Closes #10815.
[0] - https://cloud.scaleway.com/#/credentials
[1] - https://www.scaleway.com/docs/retrieve-my-organization-id-throught-the-api
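A minimal sketch of the renamed provider arguments (both values are placeholders):
```
provider "scaleway" {
  organization = "00000000-0000-0000-0000-000000000000"
  token        = "00000000-0000-0000-0000-000000000000"
}
```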
* Update docs to reflect Scaleway offering x86
Scaleway now provides x86 servers [0] as well as ARM.
This commit removes 'ARM' from various references suggesting that might be the
only option.
[0] - https://blog.online.net/2016/03/08/c2-insanely-affordable-x64-servers/
statistic
Fixes: #11189
This introduces a new parameter and changes an existing parameter from
`required` to `optional`, as the two cannot be specified together
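A hedged sketch of an alarm using the new parameter (metric and threshold are illustrative):
```
resource "aws_cloudwatch_metric_alarm" "p90_latency" {
  alarm_name          = "p90-latency"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 2
  metric_name         = "Latency"
  namespace           = "AWS/ELB"
  period              = 60
  threshold           = 1

  # extended_statistic and statistic are mutually exclusive, which is
  # why statistic is now optional.
  extended_statistic = "p90"
}
```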
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSCloudWatchMetricAlarm_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/01/13 11:25:24 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSCloudWatchMetricAlarm_ -timeout 120m
=== RUN TestAccAWSCloudWatchMetricAlarm_importBasic
--- PASS: TestAccAWSCloudWatchMetricAlarm_importBasic (19.80s)
=== RUN TestAccAWSCloudWatchMetricAlarm_basic
--- PASS: TestAccAWSCloudWatchMetricAlarm_basic (20.42s)
=== RUN TestAccAWSCloudWatchMetricAlarm_extendedStatistic
--- PASS: TestAccAWSCloudWatchMetricAlarm_extendedStatistic (18.92s)
PASS
```
* provider/aws: New DataSource: aws_elb_hosted_zone_id
This datasource is a list of all of the ELB DualStack Hosted Zone IDs.
This will allow us to reference the correct hosted zone id when creating
route53 alias records
There are many bugs for this - this is just the beginning of fixing them
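A minimal sketch of using the data source in a Route 53 alias record (zone and ELB names are placeholders):
```
data "aws_elb_hosted_zone_id" "main" {}

resource "aws_route53_record" "www" {
  zone_id = "${aws_route53_zone.primary.zone_id}"
  name    = "www.example.com"
  type    = "A"

  alias {
    name                   = "${aws_elb.main.dns_name}"
    zone_id                = "${data.aws_elb_hosted_zone_id.main.id}"
    evaluate_target_health = true
  }
}
```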
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSElbHostedZoneId_basic'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/01/04 13:04:32 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSElbHostedZoneId_basic -timeout 120m
=== RUN TestAccAWSElbHostedZoneId_basic
--- PASS: TestAccAWSElbHostedZoneId_basic (20.46s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 20.484s
```
* Update elb_hosted_zone_id.html.markdown
* Add subnetwork_project field to allow for XPN in GCE instance templates
* Missing os import
* Removing unneeded check
* fix formatting
* Add subnetwork_project to read
* provider/openstack: LoadBalancer Security Groups
This commit adds the ability to specify security groups on a loadbalancer
resource.
* provider/openstack: LoadBalancer Security Groups Refactor
Moving common security group code into a dedicated function.
Documentation for the `aws_route_table` data source mentions that it supports a route table `id` as an argument; however, it was missing from the actual provider code.
Adds the missing provider code, adds a test, and updates the documentation to use `rtb_id` as the argument instead of the more ambiguous `id`.
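A minimal sketch of querying by `rtb_id` (the ID is a placeholder):
```
data "aws_route_table" "selected" {
  rtb_id = "rtb-12345678"
}

output "selected_vpc_id" {
  value = "${data.aws_route_table.selected.vpc_id}"
}
```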
* provider/aws: New Resource - aws_codedeploy_deployment_config
* provider/aws: Adding acceptance tests for new
aws_codedeploy_deployment_config resource
* provider/aws: Documentation for the aws_codedeploy_deployment_config resource
* Update codedeploy_deployment_config.html.markdown
aws_api_gateway_integration_response
This continues the work carried out in #10696
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSAPIGatewayIntegrationResponse_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/01/03 14:18:46 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v
-run=TestAccAWSAPIGatewayIntegrationResponse_ -timeout 120m
=== RUN TestAccAWSAPIGatewayIntegrationResponse_basic
--- PASS: TestAccAWSAPIGatewayIntegrationResponse_basic (57.33s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 57.352s
```
* Importing the OpsGenie SDK
* Adding the goreq dependency
* Initial commit of the OpsGenie / User provider
* Refactoring to return a single client
* Adding an import test / fixing a copy/paste error
* Adding support for OpsGenie docs
* Scaffolding the user documentation for OpsGenie
* Adding a TODO
* Adding the User data source
* Documentation for OpsGenie
* Adding OpsGenie to the internal plugin list
* Adding support for Teams
* Documentation for OpsGenie Teams
* Validation for Teams
* Removing Description for now
* Optional fields for a User: Locale/Timezone
* Removing an implemented TODO
* Running make fmt
* Downloading about half the internet
Someone witty might simply sign this commit with "npm install"
* Adding validation to the user object
* Fixing the docs
* Adding a test creating multiple users
* Prompting for the API Key if it's not specified
* Added a test for multiple users / requested changes
* Fixing the linting
Fixes: #10902
AWS introduced a change to the Mount Target DNS Name to remove the
availability_zone from it -
https://aws.amazon.com/about-aws/whats-new/2016/12/simplified-mounting-of-amazon-efs-file-systems/
This was because there used to be a limit of 1 mount target per AZ -
this has been raised.
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSEFSMountTarget_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/01/04 10:45:35 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSEFSMountTarget_ -timeout 120m
=== RUN TestAccAWSEFSMountTarget_importBasic
--- PASS: TestAccAWSEFSMountTarget_importBasic (236.19s)
=== RUN TestAccAWSEFSMountTarget_basic
--- PASS: TestAccAWSEFSMountTarget_basic (445.52s)
=== RUN TestAccAWSEFSMountTarget_disappears
--- PASS: TestAccAWSEFSMountTarget_disappears (228.31s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 910.044s
```
* vendor: update jen20/riviera to pull in endpoints change
* provider/azurerm: support non-public clouds
Ran the tests below with ARM_ENVIRONMENT=german and changed the location to Germany
Central in the test config. The virtual network tests cover both Riviera (resource
groups) and the official SDK client.
TF_ACC=1 go test ./builtin/providers/azurerm -v -run TestAccAzureRMVirtualNetwork_ -timeout 120m
=== RUN TestAccAzureRMVirtualNetwork_importBasic
--- PASS: TestAccAzureRMVirtualNetwork_importBasic (81.84s)
=== RUN TestAccAzureRMVirtualNetwork_basic
--- PASS: TestAccAzureRMVirtualNetwork_basic (78.14s)
=== RUN TestAccAzureRMVirtualNetwork_disappears
--- PASS: TestAccAzureRMVirtualNetwork_disappears (78.45s)
=== RUN TestAccAzureRMVirtualNetwork_withTags
--- PASS: TestAccAzureRMVirtualNetwork_withTags (81.78s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm 320.310s
* Adding a missing property to the Consumer Groups doc
* Support for Event Hub Authorization Rules
* Documentation for Authorization Rules
* Missed a comment
* Fixing the `no authorisation rule` state
* Making the documentation around the Permissions more explicit / updating the import url
* Fixing up the tests
* Clearing up the docs
* Fixing the linting
* Moving the validation inside of the expand
* Fixing the indentation
* Adding the Container Registry SDK
* Implementing the container registry
* Enabling the provider / registering the Resource Provider
* Acceptance Tests
* Documentation for Container Registry
* Fixing the name validation
* Validation for the Container Registry Name
* Added Import support for Container Registry
* Storage Account is no longer optional
* Updating the docs
* Forcing a re-run in Travis
The provider now consults the Providers API to detect which providers are already
registered and uses this call to test the credentials
A new option `skip_provider_registration` / `ARM_SKIP_PROVIDER_REGISTRATION` is
now available to opt out of provider registration entirely
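A hedged sketch of opting out via the provider block (credential values are placeholders):
```
provider "azurerm" {
  subscription_id = "00000000-0000-0000-0000-000000000000"
  client_id       = "00000000-0000-0000-0000-000000000000"
  client_secret   = "replace-me"
  tenant_id       = "00000000-0000-0000-0000-000000000000"

  # Or export ARM_SKIP_PROVIDER_REGISTRATION=true instead
  skip_provider_registration = true
}
```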
* Add 'aws_vpc_peering_connection' data source.
* Changes after code review.
* Add 'accepter' and 'requester' blocks to aws_vpc_peering_connection data source output attributes.
* Documentation
* Scaffolding Consumer Groups
* Linting
* Adding a missing </li>
* User MetaData needs to explicitly be optional
* Typo
* Fixing the test syntax
* Removing the eventHubPath field since it appears to be deprecated
* Added a Complete import test
* WIP
* Updating to use SDK 7.0.1
* Calling the correct method
* Removing eventhubPath and updating the docs
* Fixing a typo
removing a PostgreSQL role.
Add manual overrides if this isn't the desired behavior, but it should
universally be the desired outcome except when a ROLE name is reused
across multiple databases in the same PostgreSQL cluster, in which case
the `skip_drop_role` is necessary for all but the last PostgreSQL
provider.
This commit adds some basic tests for block device functionality. It also
expands the existing block device documentation as well references the
new "volume attach" resources for further block storage functionality.
* Implementing Redis Cache
* Properties should never be nil
* Updating the SDK to 7.0.1
* Redis Cache updated for SDK 7.0.1
* Fixing the max memory validation tests
* Cleaning up
* Adding tests for Standard with Tags
* Int's -> Strings for the moment
* Making the RedisConfiguration object mandatory
* Only parse out redis configuration values if they're set
* Updating the RedisConfiguration object to be required in the documentation
* Adding Tags to the Standard tests / importing excluding the redisConfiguration
* Removing support for import for Redis Cache for now
* Removed a scaling test
* vendor: update github.com/Ensighten/udnssdk to v1.2.1
* ultradns_tcpool: add
* ultradns.baseurl: set default
* ultradns.record: cleanup test
* ultradns_record: extract common, cleanup
* ultradns: extract common
* ultradns_dirpool: add
* ultradns_dirpool: fix rdata.ip_info.ips to be idempotent
* ultradns_tcpool: add doc
* ultradns_dirpool: fix rdata.geo_codes.codes to be idempotent
* ultradns_dirpool: add doc
* ultradns: cleanup testing
* ultradns_record: rename resource
* ultradns: log username from config, not client
udnssdk.Client is being refactored to use x/oauth2, so don't assume we
can access Username from it
* ultradns_probe_ping: add
* ultradns_probe_http: add
* doc: add ultradns_probe_ping
* doc: add ultradns_probe_http
* ultradns_record: remove duplication from error messages
* doc: cleanup typos in ultradns
* ultradns_probe_ping: add test for pool-level probe
* Clean documentation
* ultradns: pull makeSetFromStrings() up to common.go
* ultradns_dirpool: log hashIPInfoIPs
Log the key and generated hashcode used to index ip_info.ips into a set.
* ultradns: simplify hashLimits()
Limits blocks only have the "name" attribute as their primary key, so
hashLimits() needn't use a buffer to concatenate.
Also changes log level to a more appropriate DEBUG.
* ultradns_tcpool: convert rdata to schema.Set
RData blocks have the "host" attribute as their primary key, so it is
used by hashRdatas() to create the hashcode.
Tests are updated to use the new hashcode indexes instead of natural
numbers.
* ultradns_probe_http: convert agents to schema.Set
Also pull the makeSetFromStrings() helper up to common.go
* ultradns: pull hashRdatas() up to common
* ultradns_dirpool: convert rdata to schema.Set
Fixes TF-66
* ultradns_dirpool.conflict_resolve: fix default from response
UltraDNS REST API User Guide claims that "Directional Pool
Profile Fields" have a "conflictResolve" field which "If not
specified, defaults to GEO."
https://portal.ultradns.com/static/docs/REST-API_User_Guide.pdf
But UltraDNS does not actually return a conflictResolve
attribute when it has been updated to "GEO".
We could fix it in udnssdk, but that would require either:
* hide the response by coercing "" to "GEO" for everyone
* use a pointer to allow checking for nil (requires all
users to change if they fix this)
An ideal solution would be to have the UltraDNS API respond
with this attribute for every dirpool's rdata.
So at the risk of foolish consistency in the sdk, we're
going to solve it where it's visible to the user:
by checking and overriding the parsing. I'm sorry.
* ultradns_record: convert rdata to set
UltraDNS does not store the ordering of rdata elements, so we need a way
to identify if changes have been made even if the order changes.
A perfect job for schema.Set.
* ultradns_record: parse double-encoded answers for TXT records
* ultradns: simplify hashLimits()
Limits blocks only have the "name" attribute as their primary key, so
hashLimits() needn't use a buffer to concatenate.
* ultradns_dirpool.description: validate
* ultradns_dirpool.rdata: doc need for set
* ultradns_dirpool.conflict_resolve: validate
* provider/pagerduty: Allow 'team_responder' role for pagerduty_user resource
* Change unit test to exercise 'team_responder' and reformat
* Update the test fixture to use the 'team_responder' role
releases.
When postgresql_schema_policy lands, this attribute should be removed in
order to provide a single way of setting permissions on schema objects.
* provider/aws: data source for AWS Hosted Zone
* add caller_reference, resource_record_set_count fields, manage private zone and trailing dot
* fix fmt
* update documentation, use string function in hostedZoneName
* add vpc_id support
* add tags support
* add documentation for hosted zone data source tags support
* provider/aws: Add the aws_eip data source
* Document the aws_eip data source on the website
* provider/aws: support query by public_ip for aws_eip data source
* Initial checkin for PR request
* Added an argument to provider to allow control over whether or not TLS Certs will skip verification. Controllable via provider or env variable being set
* Initial check-in to use refactored module
* Checkin of very MVP for creating/deleting host test which works and validates basic host creation and deletion
* Check in with support for creating hosts with variables working
* Checking in work to date
* Remove code that causes travis CI to fail while I debug
* Adjust create to accept multivale
* Back on track. Working basic tests. go-icinga2-api needs more tests too
* Squashing
* Back on track. Working basic tests. go-icinga2-api needs more tests too
* Check in refactored hostgroup support
* Check in refactored check_command, hosts, and hsotgroup with a few test
* Checking in service code
* Add in dependency for icinga2 provider
* Add documentation. Refactor, fix and extend based on feedback from Hashicorp
* Added checking and validation around invalid URL and unavailable server
* Add support to import databases. See docs.
* Add support for renaming databases
* Add support for all known PostgreSQL database attributes, including:
* "allow_connections"
* "lc_ctype"
* "lc_collate"
* "connection_limit"
* "encoding"
* "is_template"
* "owner"
* "tablespace_name"
* "template"
Both libpq(3) and github.com/lib/pq use `sslmode`. Prefer this vs
the non-standard `ssl_mode`. `ssl_mode` is supported for compatibility
but should be removed in the future.
Changelog: yes
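A minimal sketch of the preferred spelling (connection details are placeholders):
```
provider "postgresql" {
  host     = "db.example.com"
  port     = 5432
  username = "terraform"
  password = "replace-me"

  # Preferred over the deprecated ssl_mode alias
  sslmode = "require"
}
```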
The value is only multiplied by the API for topics in non-premium namespaces
TF_ACC=1 go test ./builtin/providers/azurerm -v -run TestAccAzureRMServiceBusTopic_enablePartitioning -timeout 120m
=== RUN TestAccAzureRMServiceBusTopic_enablePartitioningStandard
--- PASS: TestAccAzureRMServiceBusTopic_enablePartitioningStandard (378.80s)
=== RUN TestAccAzureRMServiceBusTopic_enablePartitioningPremium
--- PASS: TestAccAzureRMServiceBusTopic_enablePartitioningPremium (655.00s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm 1033.874s
Fixes #8455, #5390
This adds a new `no_device` attribute to the `ephemeral_block_device` block,
which allows users to omit ephemeral devices from the AMI's predefined block
device mappings, which is useful for EBS-only instance types.
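A hedged sketch of suppressing an AMI-defined ephemeral device (AMI and device name are illustrative):
```
resource "aws_instance" "example" {
  ami           = "ami-12345678"
  instance_type = "m4.large"

  # Omit the AMI's predefined ephemeral mapping on an EBS-only instance type
  ephemeral_block_device {
    device_name = "/dev/sdb"
    no_device   = true
  }
}
```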
* provider/datadog #9375: Refactor tags to a list instead of a map.
Tags are allowed to be, but are not restricted to, key/value pairs (e.g. foo:bar);
they are essentially strings. This change allows using and mixing tags of the
form "foo" and "foo:bar". It also allows using duplicate keys like "foo:bar" and "foo:baz".
* provider/datadog update import test.
* "external" provider for gluing in external logic
This provider will become a bit of glue to help people interface external
programs with Terraform without writing a full Terraform provider.
It will be nowhere near as capable as a first-class provider, but is
intended as a light-touch way to integrate some pre-existing or custom
system into Terraform.
* Unit test for the "resourceProvider" utility function
This small function determines the dependable name of a provider for
a given resource name and optional provider alias. It's simple but it's
a key part of how resource nodes get connected to provider nodes so
worth specifying the intended behavior in the form of a test.
* Allow a provider to export a resource with the provider's name
If a provider only implements one resource of each type (managed vs. data)
then it can be reasonable for the resource names to exactly match the
provider name, if the provider name is descriptive enough for the
purpose of the each resource to be obvious.
* provider/external: data source
A data source that executes a child process, expecting it to support a
particular gateway protocol, and exports its result. This can be used as
a straightforward way to retrieve data from sources that Terraform
doesn't natively support (see the sketch after this list).
* website: documentation for the "external" provider
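A minimal sketch of the data source; lookup.py is a hypothetical program that speaks the provider's JSON stdin/stdout protocol and returns a map of strings:
```
data "external" "example" {
  program = ["python", "${path.module}/lookup.py"]

  query = {
    id = "abc123"
  }
}

output "looked_up_value" {
  value = "${data.external.example.result["value"]}"
}
```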
* add rds db for opsworks
* switched to stack in vpc
* implement update method
* add docs
* implement and document force new resource behavior
* implement retry for update and delete
* add test that forces new resource
* Update to latest version of go-datadog-api
* Updates to latest go-datadog-api version, which adds more complete
timeboard support.
* Add more complete timeboard support
* Adds in support for missing timeboard fields, so now we can have nice
things like conditional formats and more.
* Document new fields in datadog_timeboard resource
* Add acceptance test for datadog timeboard changes
* Add new aws_vpc_endpoint_route_table_association resource.
This commit adds a new resource which allows a list of route tables to be
added to and/or removed from an existing VPC Endpoint. This resource is
complementary to the existing `aws_vpc_endpoint` resource for cases where the
route tables are not specified during creation (they are not a requirement for
a VPC Endpoint to be created successfully), especially where the workflow is
such that the route tables are not immediately known. A minimal usage sketch
follows this list.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
Additions by Kit Ewbank <Kit_Ewbank@hotmail.com>:
* Add functionality
* Add documentation
* Add acceptance tests
* Set VPC endpoint route_table_ids attribute to "Computed"
* Changes after review - Set resource ID in create function.
* Changes after code review by @kwilczynski:
* Removed error types and simplified the error handling in 'resourceAwsVPCEndpointRouteTableAssociationRead'
* Simplified logging in 'resourceAwsVPCEndpointRouteTableAssociationDelete'
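The minimal usage sketch referenced above (resource names are illustrative):
```
resource "aws_vpc_endpoint_route_table_association" "private_s3" {
  vpc_endpoint_id = "${aws_vpc_endpoint.s3.id}"
  route_table_id  = "${aws_route_table.private.id}"
}
```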
Update our instance template to include metadata_startup_script, to
match our instance resource. Also, we've resolved the diff errors around
metadata.startup-script, and people want to use that to create startup
scripts that don't force a restart when they're changed, so let's stop
disallowing it.
Also, we had a bunch of calls to `schema.ResourceData.Set` that ignored
the errors, so I added error handling for those calls. It's mostly
bundled with this code because I couldn't be sure whether it was the
root of bugs or not, so I took care of it while addressing the startup
script issue.
This change doesn't make much sense now, as projects are read-only
anyways, so there's not a lot that importing really does for you--you
can already reference pre-existing projects just by defining them in
your config.
But as we discussed #10425, this change made more and more sense. In a
world where projects can be created, we can no longer reference
pre-existing projects just by defining them in config. We get that
ability back by making projects importable.
* provider/aws: Add DeploymentRollback as a valid TriggerEvent type
* provider/aws: Add auto_rollback_configuration to aws_codedeploy_deployment_group
* provider/aws: Document auto_rollback_configuration
- part of aws_codedeploy_deployment_group
* provider/aws: Support removing and disabling auto_rollback_configuration
- part of aws_codedeploy_deployment_group resource
- when removing configuration, ensure events are removed
- when disabling configuration, preserve events in case configuration is re-enabled
* provider/aws: Add alarm_configuration to aws_codedeploy_deployment_group
* provider/aws: Document alarm_configuration
- part of aws_codedeploy_deployment_group
* provider/aws: Support removing alarm_configuration
- part of aws_codedeploy_deployment_group resource
- disabling configuration doesn't appear to work...
* provider/aws: Refactor auto_rollback_configuration tests
- Add create test
- SKIP failing test for now
- Add tests for build & map functions
* provider/aws: Refactor new aws_code_deploy_deployment_group tests
- alarm_configuration and auto_rollback_configuration only
- add assertions to deployment_group basic test
- rename config funcs to be more easy to read
- group public tests together
* provider/aws: A max of 10 alarms can be added to a deployment group.
- aws_code_deploy_deployment_group.alarm_configuration.alarms
- verified this causes test failure with expected exception
* provider/aws: Test disabling alarm_configuration and auto_rollback_configuration
- the tests now pass after rebasing the latest master branch
Google's Backend Services gives users control over the session affinity modes.
Let's allow Terraform users to leverage this option.
We don't change the default value ("NONE", as provided by Google).
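A hedged sketch of setting the new option (backend and health check references are placeholders):
```
resource "google_compute_backend_service" "default" {
  name             = "example-backend"
  protocol         = "HTTP"
  timeout_sec      = 10
  session_affinity = "CLIENT_IP"

  backend {
    group = "${google_compute_instance_group_manager.example.instance_group}"
  }

  health_checks = ["${google_compute_http_health_check.default.self_link}"]
}
```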
* provider/azurerm: support import of route
TF_ACC=1 go test ./builtin/providers/azurerm -v -run TestAccAzureRMRoute_import -timeout 120m
=== RUN TestAccAzureRMRoute_importBasic
--- PASS: TestAccAzureRMRoute_importBasic (166.99s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm 167.066s
* provider/azurerm: fix route_table not setting routes
The resource wasn't actually setting the routes in the create/update method,
this went unnoticed as it also didn't read the routes array back to state.
Fixes #10316
TF_ACC=1 go test ./builtin/providers/azurerm -v -run TestAccAzureRMRouteTable -timeout 120m
=== RUN TestAccAzureRMRouteTable_basic
--- PASS: TestAccAzureRMRouteTable_basic (122.96s)
=== RUN TestAccAzureRMRouteTable_disappears
--- PASS: TestAccAzureRMRouteTable_disappears (121.12s)
=== RUN TestAccAzureRMRouteTable_withTags
--- PASS: TestAccAzureRMRouteTable_withTags (136.01s)
=== RUN TestAccAzureRMRouteTable_multipleRoutes
--- PASS: TestAccAzureRMRouteTable_multipleRoutes (155.44s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm 535.612s
* provider/azurerm: support import of route_table
TF_ACC=1 go test ./builtin/providers/azurerm -v -run TestAccAzureRMRouteTable_import -timeout 120m
=== RUN TestAccAzureRMRouteTable_importBasic
--- PASS: TestAccAzureRMRouteTable_importBasic (121.90s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm 121.978s
The current example using the ELB's account ID will trigger an update for a resource that uses the `.id` instead of the `.arn` syntax.
Once updated to `.arn`, no changes are detected.
[ci skip]
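A hedged sketch of the `.arn` form in an access-log bucket policy (the bucket name is a placeholder):
```
data "aws_elb_service_account" "main" {}

resource "aws_s3_bucket" "elb_logs" {
  bucket = "example-elb-logs"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "${data.aws_elb_service_account.main.arn}"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::example-elb-logs/AWSLogs/*"
    }
  ]
}
POLICY
}
```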
* provider/azurerm: support import of virtual_machine
TF_ACC=1 go test ./builtin/providers/azurerm -v -run "TestAccAzureRMVirtualMachine_(basic|import)" -timeout 120m
=== RUN TestAccAzureRMVirtualMachine_importBasic
--- PASS: TestAccAzureRMVirtualMachine_importBasic (561.08s)
=== RUN TestAccAzureRMVirtualMachine_basicLinuxMachine
--- PASS: TestAccAzureRMVirtualMachine_basicLinuxMachine (677.49s)
=== RUN TestAccAzureRMVirtualMachine_basicLinuxMachine_disappears
--- PASS: TestAccAzureRMVirtualMachine_basicLinuxMachine_disappears (674.21s)
=== RUN TestAccAzureRMVirtualMachine_basicWindowsMachine
--- PASS: TestAccAzureRMVirtualMachine_basicWindowsMachine (1105.18s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm 3017.970s
* provider/azurerm: support import of servicebus_namespace
TF_ACC=1 go test ./builtin/providers/azurerm -v -run TestAccAzureRMServiceBusNamespace_import -timeout 120m
=== RUN TestAccAzureRMServiceBusNamespace_importBasic
--- PASS: TestAccAzureRMServiceBusNamespace_importBasic (345.80s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm 345.879s
* provider/azurerm: document import of servicebus_topic and servicebus_subscription
* provider/azurerm: support import of dns record resources
TF_ACC=1 go test ./builtin/providers/azurerm -v -run TestAccAzureRMDns[A-z]+Record_importBasic -timeout 120m
=== RUN TestAccAzureRMDnsARecord_importBasic
--- PASS: TestAccAzureRMDnsARecord_importBasic (102.84s)
=== RUN TestAccAzureRMDnsAAAARecord_importBasic
--- PASS: TestAccAzureRMDnsAAAARecord_importBasic (100.59s)
=== RUN TestAccAzureRMDnsCNameRecord_importBasic
--- PASS: TestAccAzureRMDnsCNameRecord_importBasic (98.94s)
=== RUN TestAccAzureRMDnsMxRecord_importBasic
--- PASS: TestAccAzureRMDnsMxRecord_importBasic (107.30s)
=== RUN TestAccAzureRMDnsNsRecord_importBasic
--- PASS: TestAccAzureRMDnsNsRecord_importBasic (98.55s)
=== RUN TestAccAzureRMDnsSrvRecord_importBasic
--- PASS: TestAccAzureRMDnsSrvRecord_importBasic (100.19s)
=== RUN TestAccAzureRMDnsTxtRecord_importBasic
--- PASS: TestAccAzureRMDnsTxtRecord_importBasic (97.49s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm 706.000s
* provider/azurerm: support import of cdn_endpoint, document profile import
TF_ACC=1 go test ./builtin/providers/azurerm -v -run TestAccAzureRMCdnEndpoint_import -timeout 120m
=== RUN TestAccAzureRMCdnEndpoint_importWithTags
--- PASS: TestAccAzureRMCdnEndpoint_importWithTags (207.83s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm 207.907s
* provider/azurerm: support import of sql_server, fix sql_firewall import
TF_ACC=1 go test ./builtin/providers/azurerm -v -run TestAccAzureRMSql[A-z]+_importBasic -timeout 120m
=== RUN TestAccAzureRMSqlFirewallRule_importBasic
--- PASS: TestAccAzureRMSqlFirewallRule_importBasic (153.72s)
=== RUN TestAccAzureRMSqlServer_importBasic
--- PASS: TestAccAzureRMSqlServer_importBasic (119.83s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm 273.630s
This commit adds the ability to authenticate with Swauth/Swift. This can
be used in Swift-only environments that do not have a Keystone service
for authentication.
* provider/aws: Add ability to create aws_ebs_snapshot
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSEBSSnapshot_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2016/11/10 14:18:36 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSEBSSnapshot_
-timeout 120m
=== RUN TestAccAWSEBSSnapshot_basic
--- PASS: TestAccAWSEBSSnapshot_basic (31.56s)
=== RUN TestAccAWSEBSSnapshot_withDescription
--- PASS: TestAccAWSEBSSnapshot_withDescription (189.35s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 220.928s
```
* docs/aws: Addition of the docs for aws_ebs_snapshot resource
* provider/aws: Creation of shared schema funcs for common AWS data source
patterns
* provider/aws: Create aws_ebs_snapshot datasource
Fixes #8828
This data source will use a number of filters, owner_ids, snapshot_ids
and restorable_by_user_ids in order to find the correct snapshot. The
data source has no real use case for most_recent and will error when no
snapshots are found or more than one snapshot is found
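A hedged sketch of looking up a single snapshot (the tag filter is illustrative):
```
data "aws_ebs_snapshot" "backup" {
  owner_ids = ["self"]

  filter {
    name   = "tag:Name"
    values = ["example-backup"]
  }
}
```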
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSEbsSnapshotDataSource_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2016/11/10 14:34:33 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v
-run=TestAccAWSEbsSnapshotDataSource_ -timeout 120m
=== RUN TestAccAWSEbsSnapshotDataSource_basic
--- PASS: TestAccAWSEbsSnapshotDataSource_basic (192.66s)
=== RUN TestAccAWSEbsSnapshotDataSource_multipleFilters
--- PASS: TestAccAWSEbsSnapshotDataSource_multipleFilters (33.84s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 226.522s
```
* docs/aws: Addition of docs for the aws_ebs_snapshot data source
Adds the new resource `aws_ebs_snapshot`
* provider/aws: Add aws_alb data source
This adds the aws_alb data source for getting information on an AWS
Application Load Balancer.
The schema is nearly the same as the resource of the same name, with
most of the resource population logic de-coupled into its own function
so that they can be shared between the resource and data source.
* provider/aws: aws_alb data source language revisions
* Multiple/zero result error slightly updated to be a bit more
specific.
* Fixed relic of the copy of the resource docs (resource -> data
source)
When `force_destroy` was specified on an `aws_iam_user` resource, only IAM
access keys and the login profile were destroyed. If a multi-factor auth
device had been activated for that user, deletion would fail as follows:
```
* aws_iam_user.testuser1: Error deleting IAM User testuser1: DeleteConflict: Cannot delete entity, must delete MFA device first.
status code: 409, request id: aa41b1b7-ac4d-11e6-bb3f-3b4c7a310c65
```
This commit iterates over any of the user's MFA devices and deactivates
them before deleting the user. It follows a pattern similar to that used
to remove users' IAM access keys before deletion.
```
$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSUser_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2016/11/20 17:09:00 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSUser_ -timeout 120m
=== RUN TestAccAWSUser_importBasic
--- PASS: TestAccAWSUser_importBasic (5.70s)
=== RUN TestAccAWSUser_basic
--- PASS: TestAccAWSUser_basic (11.12s)
PASS
ok github.com/rhenning/terraform/builtin/providers/aws 20.840s
```
The docs for the CreateDBInstance API call include quite a bit more information about each individual option, (for example `Engine` has each of the possible options listed, whilst the cli reference doesn't).
This commit adds the openstack_compute_volume_attach_v2 resource. This
resource enables a volume to be attached to an instance by using the
OpenStack Compute (Nova) v2 volumeattach API.
This commit adds the openstack_blockstorage_volume_attach_v2 resource. This
resource enables a volume to be attached to an instance by using the OpenStack
Block Storage (Cinder) v2 API.
existing example returns an error like the following should you try to
run `terraform plan` against it:
Error reading config for aws_subnet[example]: data.aws_availability_zone.name_suffix: data variables must be four parts: data.TYPE.NAME.ATTR in:
${cidrsubnet(aws_vpc.example.cidr_block, 4, var.az_number[data.aws_availability_zone.name_suffix])}
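A hedged sketch of the corrected four-part reference; var.az_number is the map from the original example and the zone name is a placeholder:
```
data "aws_availability_zone" "target" {
  name = "us-west-2a"
}

resource "aws_subnet" "example" {
  vpc_id     = "${aws_vpc.example.id}"
  cidr_block = "${cidrsubnet(aws_vpc.example.cidr_block, 4, var.az_number[data.aws_availability_zone.target.name_suffix])}"
}
```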
Also fixed tests failing auth caused by getStorageAccountAccessKey returning the
key name rather than the value
TF_ACC= go test ./state/remote -v -run=TestAz -timeout=10m -parallel=4
=== RUN TestAzureClient_impl
--- PASS: TestAzureClient_impl (0.00s)
=== RUN TestAzureClient
2016/11/18 13:57:34 [DEBUG] New state was assigned lineage "96037426-f95e-45c3-9183-6c39b49f590b"
2016/11/18 13:57:34 [TRACE] Preserving existing state lineage "96037426-f95e-45c3-9183-6c39b49f590b"
--- PASS: TestAzureClient (130.60s)
=== RUN TestAzureClientEmptyLease
2016/11/18 13:59:44 [DEBUG] New state was assigned lineage "d9997445-1ebf-4b2c-b4df-15ae152f6417"
2016/11/18 13:59:44 [TRACE] Preserving existing state lineage "d9997445-1ebf-4b2c-b4df-15ae152f6417"
--- PASS: TestAzureClientEmptyLease (128.15s)
=== RUN TestAzureClientLease
2016/11/18 14:01:55 [DEBUG] New state was assigned lineage "85912a12-2e0e-464c-9886-8add39ea3a87"
2016/11/18 14:01:55 [TRACE] Preserving existing state lineage "85912a12-2e0e-464c-9886-8add39ea3a87"
--- PASS: TestAzureClientLease (138.09s)
PASS
ok github.com/hashicorp/terraform/state/remote 397.111s
* provider/github: add GitHub labels resource
Provides a GitHub issue label resource.
This resource allows easy management of issue labels for an
organisation's repositories. A name and a color can be set.
These attributes can be updated without creating a new resource (see the sketch after this list).
* provider/github: add documentation for GitHub issue labels resource
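The sketch referenced above (repository, name, and color are illustrative):
```
resource "github_issue_label" "needs_triage" {
  repository = "example-repo"
  name       = "needs-triage"
  color      = "d93f0b"
}
```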
* provider/aws: Add aws_alb_listener data source
This adds the aws_alb_listener data source to get information on an AWS
Application Load Balancer listener.
The schema is slightly modified (only option-wise, attributes are the
same) and we use the aws_alb_listener resource read function to get the
data.
Note that the HTTPS test here may fail until
hashicorp/terraform#10180 is merged.
* provider/aws: Add aws_alb_listener data source docs
Now documented.
When using the static NAT resource, you no longer have to specify a `network_id`. This can be inferred from the chosen `virtual_machine_id` and/or the `vm_guest_ip`.
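A hedged sketch without `network_id`; `ip_address_id` and the referenced resources are assumptions for illustration:
```
resource "cloudstack_static_nat" "web" {
  ip_address_id      = "${cloudstack_ipaddress.web.id}"
  virtual_machine_id = "${cloudstack_instance.web.id}"

  # network_id is no longer required; it is inferred from the chosen
  # virtual_machine_id and/or the vm_guest_ip
}
```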