This switches to the Go "context" package for cancellation and threads
the context all the way through to evaluation, so that stop behavior can
be acted on deep within graph execution.
This also adds the Stop API to provisioners so they can quickly exit
when stop is called.
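As a rough sketch of the shape this enables (the names here are hypothetical, not the real interfaces):
```
package sketch

import "context"

// stoppableProvisioner is illustrative only: provisioners gain a Stop
// entry point so a long-running Apply can bail out quickly.
type stoppableProvisioner interface {
	Apply() error
	// Stop asks an in-flight Apply to return as soon as it safely can.
	Stop() error
}

// evalStep shows the kind of check that becomes possible once the context
// is threaded all the way down into graph evaluation.
func evalStep(ctx context.Context, step func() error) error {
	select {
	case <-ctx.Done():
		// Stop was requested; unwind instead of starting new work.
		return ctx.Err()
	default:
		return step()
	}
}
```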
Fixes #11212
The import graph builder was missing the transform to set up links to
parent providers, so provider inheritance didn't work properly. This
adds that transform.
This also removes the `PruneProviderTransform`, which has no value in
this graph because we never add an unused provider.
This was possible with test fixtures, but it is also conceivably possible
with older or corrupted states. We can also extract the type from the
key, so we do that now to make StateFilter more robust.
Removal of empty nested containers from a flatmap would sometimes fail a
sanity check when they were removed in the wrong order; because of map
iteration order, the failure was intermittent. There was also an
off-by-one error in the prefix check which could match incorrect keys.
When an InstanceState is merged with an InstanceDiff, any maps, arrays,
or sets that no longer exist are shown as empty with a count of 0. If
these are left in the flatmap structure, they will cause errors during
expansion, because their existence in the map affects the counts for
parent structures.
The change in #10787 used flatmap.Expand to fix interpolation of nested
maps, but it broke interpolation of sets such that their elements were
not represented. For example, the expected string representation of a
splatted aws_network_interface.whatever.*.private_ips should be:
```
[{Variable (TypeList): [{Variable (TypeString): 10.41.17.25}]} {Variable (TypeList): [{Variable (TypeString): 10.41.22.236}]}]
```
But instead it became:
```
[{Variable (TypeList): [{Variable (TypeString): }]} {Variable (TypeList): [{Variable (TypeString): }]}]
```
This is because the expandArray function in expand.go treated arrays as
exclusively lists, i.e. not sets. The old code matched on numeric keys,
so it worked for sets, whereas expandArray just assumed keys started at 0
and ascended incrementally. Remember that
sets' keys are numeric, but since they are hashes, they can be any
integer. The result of assuming that the keys start at 0 led to the
recursive call to flatmap.Expand not matching any keys of the set, and
returning nil, which is why the above example has nothing where the IP
addresses used to be.
So we bring back that matching behavior, but we move it to expandArray
instead. We've modified it to not reconstruct the data structures like
it used to when it was in the Interpolator, and to use the standard int
sorter rather than implementing a custom sorter since a custom one is no
longer necessary thanks to the use of flatmap.Expand.
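A rough sketch of the restored matching behavior, assuming flatmap-style keys like `private_ips.12345.0`; the names and details here are illustrative, not the exact expand.go code:
```
package sketch

import (
	"fmt"
	"regexp"
	"sort"
	"strconv"
)

// expandArray collects whichever numeric indices actually exist under the
// prefix (set hashes are numeric but arbitrary), sorts them with the
// standard int sorter, and expands each one, instead of assuming the
// indices are 0, 1, 2, ...
func expandArray(m map[string]string, prefix string) []interface{} {
	indexRe := regexp.MustCompile(`^` + regexp.QuoteMeta(prefix) + `\.([0-9]+)`)

	seen := map[int]struct{}{}
	for k := range m {
		if match := indexRe.FindStringSubmatch(k); match != nil {
			idx, _ := strconv.Atoi(match[1])
			seen[idx] = struct{}{}
		}
	}

	keys := make([]int, 0, len(seen))
	for idx := range seen {
		keys = append(keys, idx)
	}
	sort.Ints(keys)

	result := make([]interface{}, 0, len(keys))
	for _, idx := range keys {
		// The real code recurses via flatmap.Expand here; expand below is
		// only a stand-in for that recursion.
		result = append(result, expand(m, fmt.Sprintf("%s.%d", prefix, idx)))
	}
	return result
}

// expand is a placeholder for the recursive flatmap expansion.
func expand(m map[string]string, key string) interface{} { return m[key] }
```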
Fixes #10908, and restores the viability of the workaround I posted in #8696.
Big thanks to @jszwedko for helping me with this fix. I was able to
diagnose the problem alone, but couldn't fix it without his help.
Fixes #10729
Destruction ordering wasn't taking into account ordering implied through
variables across module boundaries.
This is because, to build the destruction ordering, we create a
non-destruction graph to determine the _creation_ ordering (so we can
properly flip edges). That creation graph wasn't including module
variables. This PR adds that transform to the graph.
Fixes #10711
The `ModuleVariablesTransformer` only adds module variables that are in
use. It was missing module variables used by providers because we ran the
provider transform too late. This moves the transformer and adds a test
for this.
Fixes #10680
This moves TargetsTransformer to run after the transforms that add
module variables are run. This makes targeting work across modules (test
added).
This is a bug that only exists in the new graph, but was caught by a
shadow error in #10680. Tests were added to protect against regressions.
If a data source has explicit dependencies in `depends_on`, we can
assume the user has added those because of a dependency not tracked
directly in the config. If there are any entries in `depends_on`, don't
apply the data source early during Refresh.
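A minimal sketch of the rule, on a hypothetical resource shape rather than Terraform's internal config types:
```
package sketch

// dataResource is a hypothetical stand-in for a configured data source.
type dataResource struct {
	Name      string
	DependsOn []string
}

// canReadDuringRefresh reports whether the data source may be read early,
// during Refresh. Any explicit depends_on entry implies a dependency that
// isn't tracked directly in the config, so the read is deferred to apply.
func canReadDuringRefresh(r dataResource) bool {
	return len(r.DependsOn) == 0
}
```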
Fixes #4645
This is something that never worked (even in legacy graphs), but as we
push forward towards encouraging multi-provider usage especially with
things like the Vault data source, I want to make sure we have this
right for 0.8.
When you have a config like this:
```
resource "foo_type" "name" {}
provider "bar" { attr = "${foo_type.name.value}" }
resource "bar_type" "name" {}
```
Then the destruction ordering MUST be:
1. `bar_type`
2. `foo_type`
Since configuring the client for `bar_type` requires accessing data from
`foo_type`. Prior to this PR, these two would be done in parallel. This
properly pushes forward the dependency.
There are more cases I want to test but this is a basic case that is
fixed.
Fixes #8695
When a list count was computed in a multi-resource access
(foo.bar.*.list), we were returning the value as an empty string. I don't
actually know the historical reasoning for this, but it can't be correct:
we must return unknown.
When changing this to unknown, the new tests passed and none of the old
tests failed. This leads me further to believe that returning an empty
string is probably a holdover from long ago to just avoid crashes or
UUIDs in the plan output, and not actually the correct behavior.
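A hypothetical sketch of the behavior change; the sentinel and names here are stand-ins, not Terraform's actual ones:
```
package sketch

import "strconv"

// unknownValue is a stand-in for Terraform's unknown-value sentinel.
const unknownValue = "<unknown>"

// splatListCount returns the count for a multi-resource list access. If
// the underlying list is still computed, the honest answer is unknown,
// not an empty string.
func splatListCount(elems []string, computed bool) string {
	if computed {
		return unknownValue
	}
	return strconv.Itoa(len(elems))
}
```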
Related to #8036
We have had this behavior for a _long_ time now (since 0.7.0) but it
seems people are still periodically getting bit by it. This adds an
explicit error message that explains that this kind of override isn't
allowed anymore.
* "external" provider for gluing in external logic
This provider will become a bit of glue to help people interface external
programs with Terraform without writing a full Terraform provider.
It will be nowhere near as capable as a first-class provider, but is
intended as a light-touch way to integrate some pre-existing or custom
system into Terraform.
* Unit test for the "resourceProvider" utility function
This small function determines the dependable name of a provider for
a given resource name and optional provider alias. It's simple but it's
a key part of how resource nodes get connected to provider nodes so
worth specifying the intended behavior in the form of a test.
* Allow a provider to export a resource with the provider's name
If a provider only implements one resource of each type (managed vs. data)
then it can be reasonable for the resource names to exactly match the
provider name, if the provider name is descriptive enough for the
purpose of each resource to be obvious.
* provider/external: data source
A data source that executes a child process, expecting it to support a
particular gateway protocol, and exports its result. This can be used as
a straightforward way to retrieve data from sources that Terraform
doesn't natively support (a sketch of such a program appears below).
* website: documentation for the "external" provider
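A sketch of what such an external program might look like, assuming the gateway protocol passes a JSON object on stdin and expects a flat JSON object of strings on stdout (the exact protocol is whatever the provider documents):
```
package main

import (
	"encoding/json"
	"os"
)

func main() {
	// Read the query passed by Terraform on stdin.
	var query map[string]string
	if err := json.NewDecoder(os.Stdin).Decode(&query); err != nil {
		os.Exit(1)
	}

	// Do whatever external lookup the query asks for; here we just echo.
	result := map[string]string{
		"echo": query["value"],
	}

	// Write the result for Terraform to read on stdout.
	if err := json.NewEncoder(os.Stdout).Encode(result); err != nil {
		os.Exit(1)
	}
}
```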
Fixes #10440
This updates the behavior of "apply" resources to depend on the
destroy versions of their dependencies.
We make an exception to this behavior when the "apply" resource is CBD.
This is odd and not 100% correct, but it mimics the behavior of the
legacy graphs and avoids us having to do major core work to support the
100% correct solution.
I'll explain this in examples...
Given the following configuration:
resource "null_resource" "a" {
count = "${var.count}"
}
resource "null_resource" "b" {
triggers { key = "${join(",", null_resource.a.*.id)}" }
}
Assume we've successfully created this configuration with count = 2.
When going from count = 2 to count = 1, `null_resource.b` should wait
for `null_resource.a.1` to destroy.
If it doesn't, then it is a race: depending when we interpolate the
`triggers.key` attribute of `null_resource.b`, we may get 1 value or 2.
If `null_resource.a.1` is destroyed, we'll get 1. Otherwise, we'll get
2. This was the root cause of #10440
In the legacy graphs, `null_resource.b` would depend on the destruction
of any `null_resource.a` (orphans, tainted, anything!). This would
ensure proper ordering. We mimic that behavior here.
The difference is CBD. If `null_resource.b` has CBD enabled, then the
ordering **in the legacy graph** becomes:
1. null_resource.b (create)
2. null_resource.b (destroy)
3. null_resource.a (destroy)
In this case, the update would always have 2 values for `triggers.key`,
even though we were destroying a resource later! This scenario required
two `terraform apply` operations.
This is what the CBD check is for in this PR. We do this to mimic the
behavior of the legacy graph.
The correct solution, to be done one day, is to allow splat references
(`null_resource.a.*.id`) to happen in parallel and only read up to the
`count` amount in the state. This requires some fairly significant work
close to the 0.8 release date, so we can defer it to later and adopt the
0.7.x behavior for now.
Init should only _add_ values, not remove them.
During graph execution, there are steps that expect that a state isn't
being actively pruned out from under it. Namely: writing deposed states.
Writing deposed states has no way to handle a state changing underneath
it, because the only way to uniquely identify a deposed state is its
index in the deposed array. When destroying deposed resources, we
set the value to `<nil>`. If the array is pruned before the next deposed
destroy, then the indexes have changed, and this can cause a crash.
This PR does the following (with more details below):
* `init()` no longer prunes.
* `ReadState()` always prunes before returning. I can't think of a
scenario where this is unsafe, since generally we can always START
from a pruned state; it's just pruning mid-execution that causes
problems.
* Exported State APIs updated to be robust against nil ModuleStates.
Instead, I think we should adopt the following semantics for init/prune
in our structures that support it (Diff, for example). By having
consistent semantics around these functions, we can avoid this in the
future and have clear expectations when working with them.
* `init()` (in anything) will only ever be additive, and won't change
ordering or existing values. It won't remove values.
* `prune()` is destructive, expectedly.
* Functions on a structure must not assume a pruned structure 100% of
the time. They must be robust to handle nils. This is especially
important because in many cases values such as `Modules` in state
are exported so end users can simply modify them outside of the
exported APIs.
This PR may expose us to unknown crashes but I've tried to cover our
cases in exposed APIs by checking for nil.
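As a concrete (and purely hypothetical) illustration of those semantics, not the actual State implementation:
```
package sketch

// moduleList is a hypothetical container with init/prune semantics like
// those described above.
type moduleList struct {
	Modules []*module
}

type module struct {
	Resources map[string]string
}

// init is only ever additive: it fills in nil fields but never removes or
// reorders existing values.
func (m *moduleList) init() {
	if m.Modules == nil {
		m.Modules = []*module{}
	}
	for _, mod := range m.Modules {
		if mod == nil {
			continue // callers must tolerate nils; init never prunes them
		}
		if mod.Resources == nil {
			mod.Resources = map[string]string{}
		}
	}
}

// prune is the destructive half: it drops nil and empty entries, and is
// only called at well-defined points (e.g. right after reading state).
func (m *moduleList) prune() {
	kept := m.Modules[:0]
	for _, mod := range m.Modules {
		if mod == nil || len(mod.Resources) == 0 {
			continue
		}
		kept = append(kept, mod)
	}
	m.Modules = kept
}
```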
Fixes #10439
When a CBD resource depends on a non-CBD resource, the non-CBD resource
is auto-promoted to CBD. This was done in
cf3a259. This PR makes it so that we
also set the config CBD to true. This causes the proper runtime
execution behavior to occur where we depose state and so on.
So in addition to simple graph edge tricks we also treat the non-CBD
resources as CBD resources.
Fixes #10412
The context wasn't properly adding variable values to the Interpolator
instance, which meant the `console` command couldn't access variables
set via tfvars and the CLI.
This also adds better test coverage for this in the command package
itself.
Fixes #10338
The destruction step for a resource was including the deposed resources
for _all_ resources with that name (ignoring the "index"). For example:
`aws_instance.foo.0` was including destroying deposed for
`aws_instance.foo.1`.
This changes the config to the deposed transformer to properly include
that index.
This includes a larger change: `stateId` now incorporates the index.
That affected more parts but was ultimately the issue in question.
When referencing a list of maps variable from within a resource, only
the first list element is included in the plan. This is because GetRaw
can't access the interpolated values. Add some tests to document this
behavior for both Get and GetRaw.
Fixes #10313
The new graph wasn't properly recording resource dependencies to a
specific index of itself. For example: `foo.bar.2` depending on
`foo.bar.0` wasn't shown in the state when it should've been.
This adds a test to verify this and fixes it.
ResourceAddr.Mode wasn't properly set when moving a module, so data
sources would lose the "data." prefix when their module was moved within
the State.
ResourceConfig.Get could previously return (nil, true) when looking up
an interpolated map in a list because of the indexing ambiguity. Make
sure we test that a non-existent value always returns false.
It makes more sense for this to happen in State.prune(). Also move a
redundant pruning from ResourceState.init, and make sure
ResourceState.prune is called from the parent's prune method.
Fixes a case where ResourceConfig.get inadvertently returns a nil value.
Add an integration test where assigning a map to a list via
interpolation would panic.
Setting variables happens before context validation, so it's possible
that the user could be trying to set an incorrect variable type to a
map. Return a useful error rather than panicking.
Ensure that each instance of BasicGraphBuilder gets a name corresponding
to the Builder which created it. This allows us to differentiate the
graphs in the logs.
This doesn't cause any practical issues as far as I'm aware (couldn't
get any test to fail), but caused shadow errors since it wasn't matching
the prior behavior.
Fixes #10122
The simple fix was that we forgot to close `ReadDataApply` for the
provider. But I've always felt that this section of the code was brittle
and I wanted to put in a more robust solution. The `shadow.Close` method
uses reflection to automatically close all values.
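A rough sketch of the reflection idea; the real shadow.Close traversal differs in its details:
```
package sketch

import (
	"io"
	"reflect"
)

// closeAll walks a value with reflection and closes anything it finds
// that implements io.Closer. Illustration of the approach only.
func closeAll(v interface{}) error {
	return closeValue(reflect.ValueOf(v))
}

func closeValue(v reflect.Value) error {
	// If the value itself is closable, close it and stop here.
	if v.CanInterface() {
		if c, ok := v.Interface().(io.Closer); ok {
			return c.Close()
		}
	}

	switch v.Kind() {
	case reflect.Ptr, reflect.Interface:
		if !v.IsNil() {
			return closeValue(v.Elem())
		}
	case reflect.Struct:
		for i := 0; i < v.NumField(); i++ {
			if err := closeValue(v.Field(i)); err != nil {
				return err
			}
		}
	}
	return nil
}
```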
People with `uuid()` usage in their configurations would receive shadow
errors every time on plan because the UUID would change.
This is a hacky fix but I also believe it is correct: if a shadow error
contains uuid() then we ignore the shadow error completely. This feels
wrong, but I'll explain why it is likely right:
The "right" feeling solution is to create deterministic random output
across graph runs. This would require using math/rand and seeding it
with the same value each run. However, this alone probably won't work
due to Terraform's parallelism and potential to call uuid() in different
orders. In addition to this, you can't seed crypto/rand, and it's unlikely
that we'll NEVER use crypto/rand in the future even if we switched
uuid() to use math/rand.
Therefore, the solution is simple: if there is no shadow error, no
problem. If there is a shadow error and it contains uuid(), then ignore
it.
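Roughly, the check amounts to something like this (illustrative only, not the exact shadow code):
```
package sketch

import "strings"

// ignorableShadowError reports whether a shadow graph error can be safely
// discarded because it stems from non-deterministic uuid() output.
func ignorableShadowError(err error) bool {
	return err != nil && strings.Contains(err.Error(), "uuid()")
}
```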
The DebugOperation method marks the start of a set of operations on the
Graph, with extra information optionally provided in the second
parameter. It returns a function with a single End method to mark the
end of the set in the logs.
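A sketch of that shape, with illustrative names rather than the exact ones in the debug code; in Go a named function type can carry the End method directly:
```
package sketch

import "log"

// DebugOperationEnd is a function type with an End method, so callers can
// write `defer g.DebugOperation("OpName", "").End("")`.
type DebugOperationEnd func(string)

// End marks the end of the operation in the logs.
func (e DebugOperationEnd) End(msg string) { e(msg) }

type Graph struct{ Name string }

// DebugOperation marks the start of a set of operations on the Graph; the
// second parameter carries optional extra information.
func (g *Graph) DebugOperation(operation, info string) DebugOperationEnd {
	log.Printf("[DEBUG] %s: begin %s %s", g.Name, operation, info)
	return func(msg string) {
		log.Printf("[DEBUG] %s: end %s %s", g.Name, operation, msg)
	}
}
```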
Refactor the existing graph Begin/End Operation calls to use this single
method. Remove the *string types in the marshal structs; these are
strictly informational and don't need to differentiate empty vs unset
strings.
Add calls to DebugOperation for each step while building the graph.
To maintain the same output, the Graph.Dot implementation needs to be
aware of GraphNodeDotter. Copy the interface into the dag package, and
make the Dot marshaler aware of which nodes implemented the interface.
This way we can remove most of the remaining dot code from terraform.
The dot format generation was done with a mix of code from the terraform
package and the dot package. Unify the dot generation code and move it
into the dag package.
Use an intermediate structure to allow a dag.Graph to marshal itself
directly. This structure will be able to marshal directly to JSON, or be
translated to dot format. This way we can record more information about
the graph in the debug logs, and provide a way to translate those logged
structures to dot, which is convenient for viewing the graphs.
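A rough sketch of the kind of intermediate structure meant here; the field names are illustrative, not the dag package's actual types:
```
package sketch

// marshalGraph is an intermediate representation a graph can produce
// once; it can be encoded to JSON for the debug logs or walked to emit
// dot output.
type marshalGraph struct {
	Name     string           `json:"name,omitempty"`
	Vertices []*marshalVertex `json:"vertices,omitempty"`
	Edges    []*marshalEdge   `json:"edges,omitempty"`
}

type marshalVertex struct {
	ID   string `json:"id"`
	Name string `json:"name,omitempty"`
}

type marshalEdge struct {
	Name   string `json:"name"`
	Source string `json:"source"`
	Target string `json:"target"`
}
```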