These tests cover the new refresh behaviour and would fail with "index
out of range" if the refresh graph is not expanded to take new resources
into account as well (scale out), or if it does not handle expanded count
orphans in a way that ensures they don't get interpolated when walked
(scale in).
* Added new evaluation_delay field
Added a new evaluation_delay parameter to pass through to the Datadog monitor API (a schema sketch follows this list)
* Changed tests for new evaluation_delay field
* changed documentation
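A minimal sketch of how such a field might be declared with Terraform's helper/schema package; the helper name, Computed flag, and placement in the datadog_monitor schema are illustrative assumptions, not the change's actual code:
```
package datadog

import "github.com/hashicorp/terraform/helper/schema"

// evaluationDelaySchema (name assumed) sketches how the new field might be
// declared; the surrounding monitor schema and option handling are omitted.
func evaluationDelaySchema() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeInt,
		Optional: true,
		Computed: true,
	}
}
```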
* added vmss with managed disk support
* Update vmss docs
* update vmss test
* added vmss managed disk import test
* update vmss tests
* remove unused test resources
* reverting breaking changes on storage_os_disk and storage_image_reference
* updated vmss tests and documentation
* updated vmss flatten osdisk
* updated vmss resource and import test
* update name in vmss osdisk
* update vmss test to include a blank name
* Allowed method on aggregator is `avg`, not `average`
While Datadog will accept the value of `average` when creating the query graph, the resultant graph will be empty. Passing the value of `avg` instead correctly renders the graph.
* Fixed gofmt
* Updated test to match new aggregator method
When testing the behavior of multiple provider instances (either aliases
or child module overrides) it's convenient to be able to label the
individual instances to determine which one is actually being used for
the purpose of making test assertions.
* Randomize names for pagerduty_user
* Randomize names for pagerduty_team
* Randomize names for pagerduty_service
* Randomize names for pagerduty_service_integration
* Randomize names for pagerduty_schedule
* Randomize names for pagerduty_escalation_policy
* Randomize names for pagerduty_addon
* Randomize names for data_pagerduty_user
* Randomize names for data_pagerduty_schedule
* Randomize names for data_pagerduty_escalation_policy
* Run in parallel if $PAGERDUTY_PARALLEL is passed
* Attempt to write a new test for cert update
Trying to surface this bug with a test:
https://github.com/hashicorp/terraform/issues/5930
* Fix the error
* Fix the test for the update operation
* Break apart tests for EU vs US to cleanse test run
* Refactor Update to more closely match create, increase debug logging
* Reflect differences of EU and US regions via separate tests
* Add comment re: why of test breakout
* Removed the `SetId` as it was unnecessary
* Ensure the SSL Addon has been provisioned
* Add resource
* Add tests
* Add documentation
* Fix invalid comment
* Remove MinItems
* Add newline
* Store expected ID and format
* Add import note
* expiration_time can be computed if dataset has an expiration_time set
* Handle 404 using new check function
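A minimal sketch of the kind of 404 check described above, assuming a Google-API-backed resource; the function name, signature, and error type are assumptions rather than the change's actual code:
```
package google

import (
	"github.com/hashicorp/terraform/helper/schema"
	"google.golang.org/api/googleapi"
)

// handleNotFound (name assumed) clears the resource from state on a 404 so
// Terraform plans a re-create instead of failing the refresh.
func handleNotFound(err error, d *schema.ResourceData) error {
	if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 {
		d.SetId("")
		return nil
	}
	return err
}
```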
Fixes: #14032
When an IPv6 address is assigned directly to an instance, ipv6_address_count
was forcing a new resource because it wasn't marked as Computed.
I was able to see this here:
```
-/+ aws_instance.test
ami: "ami-c5eabbf5" => "ami-c5eabbf5"
associate_public_ip_address: "false" => "<computed>"
availability_zone: "us-west-2a" => "<computed>"
ebs_block_device.#: "0" => "<computed>"
ephemeral_block_device.#: "0" => "<computed>"
instance_state: "running" => "<computed>"
instance_type: "t2.micro" => "t2.micro"
ipv6_address_count: "1" => "0" (forces new resource)
ipv6_addresses.#: "1" => "1"
ipv6_addresses.0: "2600:1f14:bb2:e501::10" => "2600:1f14:bb2:e501::10"
key_name: "" => "<computed>"
network_interface.#: "0" => "<computed>"
network_interface_id: "eni-d19115ec" => "<computed>"
placement_group: "" => "<computed>"
primary_network_interface_id: "eni-d19115ec" => "<computed>"
private_dns: "ip-10-20-1-252.us-west-2.compute.internal" => "<computed>"
private_ip: "10.20.1.252" => "<computed>"
public_dns: "" => "<computed>"
public_ip: "" => "<computed>"
root_block_device.#: "1" => "<computed>"
security_groups.#: "0" => "<computed>"
source_dest_check: "true" => "true"
subnet_id: "subnet-3fdfb476" => "subnet-3fdfb476"
tags.%: "1" => "1"
tags.Name: "stack72" => "stack72"
tenancy: "default" => "<computed>"
volume_tags.%: "0" => "<computed>"
vpc_security_group_ids.#: "1" => "<computed>"
```
It now works as expected:
```
% terraform plan ✹ ✭
[WARN] /Users/stacko/Code/go/bin/terraform-provider-aws overrides an internal plugin for aws-provider.
If you did not expect to see this message you will need to remove the old plugin.
See https://www.terraform.io/docs/internals/internal-plugins.html
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
aws_vpc.foo: Refreshing state... (ID: vpc-fa61669d)
aws_subnet.foo: Refreshing state... (ID: subnet-3fdfb476)
aws_internet_gateway.foo: Refreshing state... (ID: igw-70629a17)
aws_route_table.test: Refreshing state... (ID: rtb-0a52e16c)
aws_instance.test: Refreshing state... (ID: i-0971755345296aca5)
aws_route_table_association.a: Refreshing state... (ID: rtbassoc-b12493c8)
No changes. Infrastructure is up-to-date.
This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, Terraform
doesn't need to do anything.
```
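A minimal sketch of the kind of schema change described above; the wrapper helper is hypothetical and only the relevant field is shown:
```
package aws

import "github.com/hashicorp/terraform/helper/schema"

// ipv6AddressCountSchema (hypothetical helper) shows the field marked as
// Computed so that a count derived from directly assigned IPv6 addresses
// no longer diffs against an unset config value and forces a new resource.
func ipv6AddressCountSchema() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeInt,
		Optional: true,
		Computed: true,
		ForceNew: true,
	}
}
```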
We should error-check the conflicting use of num_cache_nodes and
cluster_mode up front. This lets us write a test to make sure everything
works as expected (a validation sketch follows the test output below).
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSElasticacheReplicationGroup_clusteringAndCacheNodesCausesError'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/05/09 19:04:56 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSElasticacheReplicationGroup_clusteringAndCacheNodesCausesError -timeout 120m
=== RUN TestAccAWSElasticacheReplicationGroup_clusteringAndCacheNodesCausesError
--- PASS: TestAccAWSElasticacheReplicationGroup_clusteringAndCacheNodesCausesError (40.58s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 40.603s
```
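A minimal sketch of that up-front check; the function name is hypothetical and the attribute names follow the cluster_mode description below, so the real validation may differ:
```
package aws

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/schema"
)

// validateClusterModeVsCacheClusters (hypothetical) rejects a config that
// sets both an explicit cache cluster count and cluster_mode before any
// ElastiCache API call is made.
func validateClusterModeVsCacheClusters(d *schema.ResourceData) error {
	_, clusterModeSet := d.GetOk("cluster_mode")
	if clusterModeSet && d.Get("number_cache_clusters").(int) > 0 {
		return fmt.Errorf("only one of number_cache_clusters or cluster_mode may be specified")
	}
	return nil
}
```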
Added support for provisioning a native redis cluster elasticache replication group.
A new TypeSet attribute `cluster_mode` has been added. It requires the following
fields:
- `replicas_per_node_group` - The number of replica nodes in each node group
- `num_node_groups` - The number of node groups for this Redis replication group
Notes:
- `automatic_failover_enabled` must be set to true.
- `number_cache_clusters` is now an optional and computed field. If `cluster_mode` is set
its value will be computed as:
```num_node_groups + num_node_groups * replicas_per_node_group```
Below is a sample config:
```
resource "aws_elasticache_replication_group" "bar" {
  replication_group_id          = "tf-redis-cluser"
  replication_group_description = "test description"
  node_type                     = "cache.t2.micro"
  port                          = 6379
  parameter_group_name          = "default.redis3.2.cluster.on"
  automatic_failover_enabled    = true

  cluster_mode {
    replicas_per_node_group = 1
    num_node_groups         = 2
  }
}
```
We were too greedy with the AWS-specific tags ignore function - we were
basically ignoring anything starting with `aws` rather than restricting the
check to the `aws:` prefix.
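A minimal sketch of the tightened check; the function name is hypothetical and the real implementation may use a regular expression:
```
package aws

import "strings"

// ignoreTagKey (hypothetical) only skips tags in the reserved "aws:"
// namespace instead of every key that merely starts with "aws".
func ignoreTagKey(key string) bool {
	return strings.HasPrefix(key, "aws:")
}
```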
Fixes: #14308
Fixes: #14247
With an EC2 instance that only had a single network interface, the primary interface, the Update function would call `ModifyInstanceAttribute()` on the target instance. This would only work if there was a single network interface attached to the EC2 instance. If, however, a secondary network interface was attached to the instance, the `ModifyInstanceAttribute()` API call would fail with the following error message:
> There are multiple interfaces attached to instance 'i-XXXXX'. Please specify an interface ID for the operation instead.
After this changeset, modifying instance security groups now makes the correct call to `ModifyNetworkInterfaceAttribute()` in order to modify the list of security groups on the primary network interface, as initially configured during the instance's creation.
This change is also safe for an instance that has a non-default primary network interface, as the instance attribute `vpc_security_group_ids` conflicts with the new `network_interface` attribute.
Test Output:
```
$ make testacc TEST=./builtin/providers/aws TESTARGS="-run=TestAccAWSInstance_addSecurityGroupNetworkInterface"
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/05/08 17:52:42 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSInstance_addSecurityGroupNetworkInterface -timeout 120m
=== RUN TestAccAWSInstance_addSecurityGroupNetworkInterface
--- PASS: TestAccAWSInstance_addSecurityGroupNetworkInterface (327.75s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 327.756s
```
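A minimal sketch of the call described above; the wrapper function is hypothetical, but `ModifyNetworkInterfaceAttribute` is the EC2 API operation used to change the group list on the primary interface:
```
package aws

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// updatePrimaryInterfaceGroups (hypothetical wrapper) modifies the security
// groups on the primary network interface rather than calling
// ModifyInstanceAttribute on the instance itself.
func updatePrimaryInterfaceGroups(conn *ec2.EC2, eniID string, groupIDs []string) error {
	_, err := conn.ModifyNetworkInterfaceAttribute(&ec2.ModifyNetworkInterfaceAttributeInput{
		NetworkInterfaceId: aws.String(eniID),
		Groups:             aws.StringSlice(groupIDs),
	})
	return err
}
```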
The implementation would return an error if the resource was detected as
removed - this would break Terraform instead of making it re-create the
missing service account.
* provider/aws: Refresh ssm document from state on 404
Originally reported in #13976
When an SSM Document was deleted outside of Terraform, a terraform
refresh would return the following:
```
% terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
aws_ssm_document.foo: Refreshing state... (ID: test_document-stack72)
Error refreshing state: 1 error(s) occurred:
* aws_ssm_document.foo: aws_ssm_document.foo: [ERROR] Error describing SSM document: InvalidDocument:
status code: 400, request id: 70c9bed1-33bb-11e7-99aa-697e9b0914e9
```
On applying this patch, it now looks as follows:
```
% terraform plan
[WARN] /Users/stacko/Code/go/bin/terraform-provider-aws overrides an internal plugin for aws-provider.
If you did not expect to see this message you will need to remove the old plugin.
See https://www.terraform.io/docs/internals/internal-plugins.html
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
aws_ssm_document.foo: Refreshing state... (ID: test_document-stack72)
The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.
Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.
+ aws_ssm_document.foo
arn: "<computed>"
content: " {\n \"schemaVersion\": \"1.2\",\n \"description\": \"Check ip configuration of a Linux instance.\",\n \"parameters\": {\n\n },\n \"runtimeConfig\": {\n \"aws:runShellScript\": {\n \"properties\": [\n {\n \"id\": \"0.aws:runShellScript\",\n \"runCommand\": [\"ifconfig\"]\n }\n ]\n }\n }\n }\n"
created_date: "<computed>"
default_version: "<computed>"
description: "<computed>"
document_type: "Command"
hash: "<computed>"
hash_type: "<computed>"
latest_version: "<computed>"
name: "test_document-stack72"
owner: "<computed>"
parameter.#: "<computed>"
platform_types.#: "<computed>"
schema_version: "<computed>"
status: "<computed>"
Plan: 1 to add, 0 to change, 0 to destroy.
```
* Update resource_aws_ssm_document.go
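A minimal sketch of the refresh behaviour described above; the function name is hypothetical and only the error handling is shown:
```
package aws

import (
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/hashicorp/terraform/helper/schema"
)

// checkSSMDocumentExists (name assumed) drops the resource from state when
// DescribeDocument reports InvalidDocument, so the next plan re-creates the
// document instead of erroring out during refresh.
func checkSSMDocumentExists(err error, d *schema.ResourceData) error {
	if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "InvalidDocument" {
		d.SetId("")
		return nil
	}
	return err
}
```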
Add a data source for listing available versions for Container Engine
clusters or retrieving the latest available version.
This is mostly to support our tests for specifying a version for cluster
creation; the withVersion test has been updated to use the data source,
meaning it will stop failing on us as new versions get released.
Adds support for importing normal and organization apps with the
heroku_app resource.
The resource itself depends on an `organization` value in the
configuration to switch behavior for the different kinds of apps, so
`schema.ImportStatePassthrough` is not sufficient.
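A minimal sketch of why plain passthrough isn't enough here; the importer below is a skeleton with the lookup omitted, and the function name is an assumption:
```
package heroku

import "github.com/hashicorp/terraform/helper/schema"

// resourceHerokuAppImport (name assumed) must decide whether the imported
// app is an organization app and populate the `organization` attribute
// before the regular Read runs; plain passthrough cannot do that.
func resourceHerokuAppImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) {
	// Look up the app by the ID given to `terraform import` using the
	// Heroku client in meta (omitted here), then set `organization`
	// when the API reports an org-owned app.
	return []*schema.ResourceData{d}, nil
}
```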
Adds computed `arn` to security group data source
```
$ make testacc TEST=./builtin/providers/aws TESTARGS="-run=TestAccDataSourceAwsSecurityGroup"
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/05/05 10:17:35 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccDataSourceAwsSecurityGroup -timeout 120m
=== RUN TestAccDataSourceAwsSecurityGroup
--- PASS: TestAccDataSourceAwsSecurityGroup (56.72s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 56.725s
```
* aws_route53_record: More consistent unquoting of TXT/SPF records.
Before, 'flatten' would remove the first two quotes. This resulted in
confusing behavior if the value contained two quoted strings.
Now, 'flatten' only removes the surrounding quotes, which is more
consistent with 'expand'.
Should improve hashicorp/terraform#8423, but more could still be done.
* aws/route53: Cover new bugfix with an acceptance test
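A minimal sketch of the quoting rule described above; the helper is illustrative, not the provider's actual function:
```
package aws

import "strings"

// unquoteTXTValue strips only one pair of surrounding quotes from a TXT/SPF
// value, leaving any interior quoted strings intact.
func unquoteTXTValue(v string) string {
	if len(v) >= 2 && strings.HasPrefix(v, `"`) && strings.HasSuffix(v, `"`) {
		return v[1 : len(v)-1]
	}
	return v
}
```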
Nomad server could have already purged the test job `foo` by the time the test checks to make sure the `foo` job has been deleted, resulting in a `404` response from the Nomad server.
```
$ make testacc TEST=./builtin/providers/nomad
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/05/04 15:51:01 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/nomad -v -timeout 120m
=== RUN TestProvider
--- PASS: TestProvider (0.00s)
=== RUN TestResourceJob_basic
--- PASS: TestResourceJob_basic (1.78s)
=== RUN TestResourceJob_refresh
--- PASS: TestResourceJob_refresh (3.30s)
=== RUN TestResourceJob_disableDestroyDeregister
--- PASS: TestResourceJob_disableDestroyDeregister (3.63s)
=== RUN TestResourceJob_idChange
--- PASS: TestResourceJob_idChange (3.44s)
=== RUN TestResourceJob_parameterizedJob
--- PASS: TestResourceJob_parameterizedJob (1.76s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/nomad 13.924s
```
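A minimal sketch of the more tolerant destroy check described above; the helper name and the exact way the 404 is detected are assumptions:
```
package nomad

import (
	"fmt"
	"strings"

	"github.com/hashicorp/nomad/api"
)

// checkJobDestroyed (name assumed) treats a 404 from the Nomad API as
// "already deregistered", which is exactly what the test wants to see.
func checkJobDestroyed(client *api.Client, jobID string) error {
	job, _, err := client.Jobs().Info(jobID, nil)
	if err != nil && strings.Contains(err.Error(), "404") {
		return nil // already purged by the server
	}
	if err != nil {
		return err
	}
	return fmt.Errorf("job %q still exists: %v", jobID, job)
}
```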
Fixes: #14217
We now check to make sure that we have the correct number of parts after
splitting the resource ID. It isn't an elegant fix but it works as
expected. Also added some more documentation about what is required to
actually construct the ID needed for import.
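A minimal sketch of that guard; the delimiter, expected part count, and function name are illustrative:
```
package aws

import (
	"fmt"
	"strings"
)

// splitImportID validates the shape of the import ID before using its parts.
func splitImportID(id string) (string, string, error) {
	parts := strings.Split(id, "/")
	if len(parts) != 2 {
		return "", "", fmt.Errorf("unexpected import ID format (%q), expected <name>/<id>", id)
	}
	return parts[0], parts[1], nil
}
```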