The acceptance tests for spot_instance_requests were showing failures as
follows:
```
------- Stdout: -------
=== RUN TestAccAWSSpotInstanceRequest_basic
--- FAIL: TestAccAWSSpotInstanceRequest_basic (100.40s)
testing.go:280: Step 0 error: After applying this step, the plan was not empty:
DIFF:
UPDATE: aws_spot_instance_request.foo
volume_tags.%: "" => "<computed>"
```
This was because we were setting volume_tags as computed, which produced the
diff. We needed to override the schema to make sure it was not computed -
only aws_instance needs computed volume tags, because of EBS volumes.
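Roughly, the override amounts to something like the sketch below, assuming the spot request resource builds its schema from resourceAwsInstance() (the helper name and exact wiring are illustrative, not the provider's actual code):
```
// Sketch only - assumes "github.com/hashicorp/terraform/helper/schema" is
// imported and that the spot request resource reuses the aws_instance schema.
func spotInstanceRequestSchema() map[string]*schema.Schema {
	base := resourceAwsInstance().Schema

	// Copy so the aws_instance schema itself is left untouched.
	s := make(map[string]*schema.Schema, len(base))
	for k, v := range base {
		s[k] = v
	}

	// Override volume_tags: a plain optional map, not a computed attribute,
	// so applying without volume tags doesn't leave a perpetual diff.
	s["volume_tags"] = &schema.Schema{
		Type:     schema.TypeMap,
		Optional: true,
	}

	return s
}
```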
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSSpotInstanceRequest_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/05/15 10:41:36 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSSpotInstanceRequest_ -timeout 120m
=== RUN TestAccAWSSpotInstanceRequest_basic
--- PASS: TestAccAWSSpotInstanceRequest_basic (86.93s)
=== RUN TestAccAWSSpotInstanceRequest_withBlockDuration
--- PASS: TestAccAWSSpotInstanceRequest_withBlockDuration (97.47s)
=== RUN TestAccAWSSpotInstanceRequest_vpc
--- PASS: TestAccAWSSpotInstanceRequest_vpc (234.56s)
=== RUN TestAccAWSSpotInstanceRequest_SubnetAndSG
--- PASS: TestAccAWSSpotInstanceRequest_SubnetAndSG (146.16s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 565.131s
```
* Adds ExpressRoute circuit documentation
* Adds tests and doc improvements
* Code for basic Express Route Circuit support
* Use the built-in validation helper
* Added ignoreCaseDiffSuppressFunc to a few fields
* Added more information to docs
* Touchup
* Moving SKU properties into a set.
* Updates doc
* A bit more tweaks
* Switch to Sprintf for test string
* Updating the acceptance test name for consistency
These tests cover the new refresh behaviour and would fail with "index
out of range" if the refresh graph is not expanded to take new resources
into account as well (scale out), or if it does not handle expanded count
orphans in a way that makes sure they don't get interpolated when walked
(scale in).
Currently, the refresh graph uses the resources from state as a base,
with data sources then layered on. Config is not consulted for resources
and hence new resources that are added with count (or any new resource
from config, for that matter) do not get added to the graph during
refresh.
This leads to issues with scale-in and scale-out when the same value for
count is used in both resources and data sources that may depend on those
resources (and possibly vice versa). While the resources exist in config
and can be used, the fact that the ConfigTransformer for resources is
missing means they don't get added to the graph, leading to "index out of
range" errors and the like.
Further to that, if we add these new resources to the graph for scale-out,
scale-in needs to be considered as well, and this is not fully covered by
the current implementation of NodeRefreshableDataResource. Scale-in
resources should be treated as orphans, which, following the instance-form
NodeRefreshableResource node, should become NodeDestroyableDataResource
nodes, but this logic is currently not rolled into
NodeRefreshableDataResource. This again causes race-like "index out of
range" errors on scale-in.
This commit updates the refresh graph so that StateTransformer is no
longer used as the base of the graph. Instead, we add resources from the
state and config in a hybrid fashion:
* First off, resource nodes are added from config, but only if
resources currently exist in state. NodeRefreshableManagedResource
is a new expandable resource node that will expand count and add
orphans from state. Any count-expanded node that has config but no
state is also transformed into a plannable resource, via a new
ResourceRefreshPlannableTransformer.
* The NodeRefreshableDataResource node type will now add count orphans
as NodeDestroyableDataResource nodes. This achieves the same effect
as if the data sources were added by StateTransformer, but ensures
there are no races in the dependency chain, with the added benefit of
directing these nodes straight to the proper
NodeDestroyableDataResource node.
* Finally, config orphans (nodes that no longer exist in config at all)
  are then added, to complete the graph.
This should ensure as much as possible that there is a refresh graph
that best represents both the current state and config with updated
variables and counts.
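As a rough conceptual illustration of that ordering (plain maps standing in for the real transformers and node types; everything below is a toy, not the actual graph code):
```
package main

import "fmt"

// Toy sketch of the hybrid ordering described above; the real code uses
// graph transformers and node types, not simple maps, and these names
// are made up for illustration.
func buildRefreshNodes(config, state map[string]bool) []string {
	var nodes []string

	// Step 1: resource nodes from config, but only where state already has
	// entries (count expansion and the plannable conversion happen later).
	for name := range config {
		if state[name] {
			nodes = append(nodes, "refresh "+name)
		}
	}

	// Step 2 (omitted here): data source count orphans are added as
	// destroyable nodes while NodeRefreshableDataResource expands.

	// Step 3: config orphans - in state but no longer in config at all.
	for name := range state {
		if !config[name] {
			nodes = append(nodes, "orphan "+name)
		}
	}
	return nodes
}

func main() {
	config := map[string]bool{"aws_instance.web": true}
	state := map[string]bool{"aws_instance.web": true, "aws_instance.old": true}
	fmt.Println(buildRefreshNodes(config, state))
}
```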
* Added new evaluation_delay field
Added the new evaluation_delay parameter, passed through to the Datadog monitor API
* Changed tests for new evaluation_delay field
* Changed documentation
* added vmss with managed disk support
* Update vmss docs
* update vmss test
* added vmss managed disk import test
* update vmss tests
* remove unused test resources
* reverting breaking changes on storage_os_disk and storage_image_reference
* updated vmss tests and documentation
* updated vmss flatten osdisk
* updated vmss resource and import test
* update name in vmss osdisk
* update vmss test to include a blank name
* update vmss test to include a blank name
Fix an issue where trying to get a public IPv4 address and a public IPv6
address results in the following error:
Error launching source instance: InvalidParameterCombination:
Network interfaces and an instance-level IPv6 address count may not
be specified on the same request
To fix this, in situations where we want IPv6 addresses AND we need to
manually specify network interfaces on the instance, create the IPv6
addresses on the network interface that we're creating rather than on
the instance itself.
Fixes #13250
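A hedged sketch of that approach using the AWS SDK types (the helper name and parameter choices below are hypothetical, not the provider's actual code):
```
import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// buildRunInput is a hypothetical helper: when an explicit network interface
// is needed, the IPv6 address count is requested on that interface rather
// than on the RunInstances request itself.
func buildRunInput(amiID, subnetID string) *ec2.RunInstancesInput {
	ni := &ec2.InstanceNetworkInterfaceSpecification{
		DeviceIndex:              aws.Int64(0),
		SubnetId:                 aws.String(subnetID),
		AssociatePublicIpAddress: aws.Bool(true),
		// IPv6 addresses are requested on the interface...
		Ipv6AddressCount: aws.Int64(1),
	}

	return &ec2.RunInstancesInput{
		ImageId:      aws.String(amiID),
		InstanceType: aws.String("t2.micro"),
		MinCount:     aws.Int64(1),
		MaxCount:     aws.Int64(1),
		// ...and NOT on the request; setting Ipv6AddressCount here alongside
		// NetworkInterfaces triggers the InvalidParameterCombination error.
		NetworkInterfaces: []*ec2.InstanceNetworkInterfaceSpecification{ni},
	}
}
```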
* Allowed method on aggregator is `avg`, not `average`
While Datadog will accept the value of `average` when creating the query graph, the resultant graph will be empty. Passing the value of `avg` instead correctly renders the graph.
* Fixed gofmt
* Updated test to match new aggregator method
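For illustration of the aggregator difference described above (these query strings are hypothetical examples, not taken from the provider's tests):
```
// Hypothetical graph request queries: Datadog accepts both, but only the
// first renders data; the second comes back as an empty graph.
const workingQuery = "avg:system.cpu.user{*}"
const emptyGraphQuery = "average:system.cpu.user{*}"
```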
The previous behavior of targets was that targeting a particular node
would implicitly target everything it depends on. This makes sense when
the dependencies in question are between resources, since we need to
make sure all of a resource's dependencies are in place before we can
create or update it.
However, it had the undesirable side-effect that targeting a resource
would _exclude_ any outputs referring to it, since the dependency edge
goes from output to resource. This then causes the output to be "stale",
which is problematic when outputs are being consumed by downstream
configs using terraform_remote_state.
GraphNodeTargetDownstream allows nodes to opt-in to a new behavior where
they can be targeted by _inverted_ dependency edges. That is, it allows
outputs to be considered targeted if anything they directly depend on
is targeted.
This is different from the implied targeting behavior in the other
direction because transitive dependencies are not considered unless the
intermediate nodes themselves have TargetDownstream. This means that
an output1→output2→resource chain can implicitly target both outputs, but
an output→resource1→resource2 chain _won't_ target the output if only
resource2 is targeted.
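A toy sketch of that rule, with plain maps standing in for the real graph and dag types (the function and its parameters are hypothetical, purely to illustrate the chains above):
```
package main

import "fmt"

// Toy model of downstream targeting: a node that opts in (e.g. an output)
// becomes targeted when anything it directly depends on is targeted, and the
// expansion only continues through nodes that themselves opt in.
func expandDownstream(targeted map[string]bool, deps map[string][]string, optIn map[string]bool) {
	for changed := true; changed; {
		changed = false
		for node, ds := range deps {
			if targeted[node] || !optIn[node] {
				continue
			}
			for _, d := range ds {
				if targeted[d] {
					targeted[node] = true
					changed = true
					break
				}
			}
		}
	}
}

func main() {
	// output2 depends on output1, which depends on the resource; only the
	// resource is targeted directly.
	deps := map[string][]string{
		"output1": {"resource"},
		"output2": {"output1"},
	}
	optIn := map[string]bool{"output1": true, "output2": true}
	targeted := map[string]bool{"resource": true}

	expandDownstream(targeted, deps, optIn)
	fmt.Println(targeted) // both outputs end up targeted via the chain
}
```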
This behavior creates a scenario where an output can be visited before
all of its dependencies are ready, since it may have a mixture of both
targeted and untargeted dependencies. This is fine for outputs because
they silently ignore any errors encountered during interpolation anyway,
but other hypothetical future implementers of this interface may need to
be more careful.
This fixes #14186.