Init should only _add_ values, not remove them.
During graph execution, there are steps that expect that a state isn't
being actively pruned out from under it. Namely: writing deposed states.
Writing deposed states has no way to handle a state changing underneath
it, because the only way to uniquely identify a deposed state is by its
index in the deposed array. When destroying deposed resources, we set
the value to `<nil>`. If the array is pruned before the next deposed
destroy, the indexes have shifted, and this can cause a crash.
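A minimal, self-contained sketch of the indexing problem (simplified
types, not the real state structs):
```
package main

import "fmt"

func main() {
	// Deposed instances are identified only by their position in the slice.
	deposed := []string{"id-a", "<nil>", "id-c"}

	// A pending destroy step recorded "destroy deposed[2]" (id-c).
	pendingIndex := 2

	// If another step prunes the slice mid-execution (dropping the "<nil>"
	// placeholder), every later index shifts down by one...
	var pruned []string
	for _, d := range deposed {
		if d != "<nil>" {
			pruned = append(pruned, d)
		}
	}

	// ...and the recorded index now points past the end of the slice (or at
	// the wrong object), which is the crash described above.
	if pendingIndex >= len(pruned) {
		fmt.Println("index", pendingIndex, "no longer exists after pruning")
	}
}
```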
This PR does the following (with more details below):
* `init()` no longer prunes.
* `ReadState()` always prunes before returning. I can't think of a
scenario where this is unsafe since generally we can always START
from a pruned state; it's pruning mid-execution that causes
problems.
* Exported State APIs updated to be robust against nil ModuleStates.
Instead, I think we should adopt the following semantics for init/prune
in our structures that support it (Diff, for example). By having
consistent semantics around these functions, we can avoid this in the
future and have clear expectations when working with them (see the
sketch after the list below).
* `init()` (in anything) will only ever be additive, and won't change
ordering or existing values. It won't remove values.
* `prune()` is destructive, as expected.
* Functions on a structure must not assume a pruned structure 100% of
the time. They must be robust to handle nils. This is especially
important because in many cases values such as `Modules` in state
are exported so end users can simply modify them outside of the
exported APIs.
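A rough sketch of those semantics on a hypothetical container type
(names are illustrative, not the actual State API):
```
package main

import "fmt"

// container is a stand-in for structures like State or Diff that support
// init/prune semantics; names here are illustrative only.
type container struct {
	Modules []string
}

// init is purely additive: it allocates missing fields but never removes
// or reorders existing values.
func (c *container) init() {
	if c.Modules == nil {
		c.Modules = []string{}
	}
}

// prune is the only destructive operation and is called explicitly, never
// as a side effect of init.
func (c *container) prune() {
	kept := c.Modules[:0]
	for _, m := range c.Modules {
		if m != "" {
			kept = append(kept, m)
		}
	}
	c.Modules = kept
}

// Count is robust against a nil receiver and a nil slice, since exported
// fields can be modified by users outside the exported APIs.
func (c *container) Count() int {
	if c == nil {
		return 0
	}
	return len(c.Modules)
}

func main() {
	var c *container
	fmt.Println(c.Count()) // 0, no panic
}
```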
This PR may expose us to unknown crashes but I've tried to cover our
cases in exposed APIs by checking for nil.
Fixes #10439
When a CBD resource depends on a non-CBD resource, the non-CBD resource
is auto-promoted to CBD. This was done in
cf3a259. This PR makes it so that we
also set the config CBD to true. This causes the proper runtime
execution behavior to occur where we depose state and so on.
So in addition to simple graph edge tricks we also treat the non-CBD
resources as CBD resources.
Fixes #10412
The context wasn't properly adding variable values to the Interpolator
instance which made it so that the `console` command couldn't access
variables set via tfvars and the CLI.
This also adds better test coverage for this in the `command` package itself.
Fixes #10338
The destruction step for a resource was including the deposed resources
for _all_ resources with that name (ignoring the "index"). For example,
destroying `aws_instance.foo.0` was also destroying the deposed
resources for `aws_instance.foo.1`.
This changes the config passed to the deposed transformer to properly
include that index.
This includes a larger change: `stateId` now includes the index. This
affected more parts of the code but was ultimately the issue in
question.
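For illustration, the idea is roughly the following (hypothetical
helper, not the exact implementation):
```
package main

import "fmt"

// stateId builds the identifier used to look up deposed instances. Including
// the count index keeps aws_instance.foo.0 and aws_instance.foo.1 distinct.
func stateId(resourceType, name string, index int) string {
	if index < 0 {
		return fmt.Sprintf("%s.%s", resourceType, name)
	}
	return fmt.Sprintf("%s.%s.%d", resourceType, name, index)
}

func main() {
	fmt.Println(stateId("aws_instance", "foo", 0)) // aws_instance.foo.0
	fmt.Println(stateId("aws_instance", "foo", 1)) // aws_instance.foo.1
}
```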
When referencing a list of maps variable from within a resource, only
the first list element is included in the plan. This is because GetRaw
can't access the interpolated values. Add some tests to document this
behavior for both Get and GetRaw.
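Conceptually (a simplified stand-in, not the real ResourceConfig type,
and the variable name is made up), Get consults the interpolated config
while GetRaw only sees the raw, pre-interpolation values:
```
package main

import "fmt"

// config is a stand-in for ResourceConfig: Raw holds the pre-interpolation
// values, Config holds the interpolated result.
type config struct {
	Raw    map[string]interface{}
	Config map[string]interface{}
}

// get prefers the interpolated values.
func (c *config) get(k string) (interface{}, bool) {
	if v, ok := c.Config[k]; ok {
		return v, true
	}
	v, ok := c.Raw[k]
	return v, ok
}

// getRaw only sees the raw values, so it returns the uninterpolated string
// rather than the expanded list of maps.
func (c *config) getRaw(k string) (interface{}, bool) {
	v, ok := c.Raw[k]
	return v, ok
}

func main() {
	c := &config{
		Raw:    map[string]interface{}{"maps": "${var.listofmaps}"},
		Config: map[string]interface{}{"maps": []map[string]interface{}{{"a": "1"}, {"b": "2"}}},
	}
	fmt.Println(c.get("maps"))
	fmt.Println(c.getRaw("maps"))
}
```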
Fixes #10313
The new graph wasn't properly recording resource dependencies to a
specific index of itself. For example: `foo.bar.2` depending on
`foo.bar.0` wasn't shown in the state when it should've been.
This adds a test to verify this and fixes it.
ResourceAddr.Mode wasn't properly set when moving a module, so data
sources would lose the "data." prefix when their module was moved within
the State.
ResourceConfig.Get could previously return (nil, true) when looking up
an interpolated map in a list because of the indexing ambiguity. Make
sure we test that a non-existent value always returns false.
It makes more sense for this to happen in State.prune(). Also move a
redundant pruning out of ResourceState.init, and make sure
ResourceState.prune is called from the parent's prune method.
Fixes a case where ResourceConfig.get inadvertently returns a nil value.
Add an integration test where assigning a map to a list via
interpolation would panic.
Setting variables happens before context validation, so it's possible
for the user to assign a value of the wrong type to a map variable.
Return a useful error rather than panicking.
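A hedged sketch of the kind of check involved (hypothetical names, not
the actual CLI code):
```
package main

import (
	"fmt"
	"os"
)

// setVariable is a simplified stand-in for applying a -var / tfvars value.
// The declared type comes from the configuration; the value from the user.
func setVariable(declaredType string, value interface{}) error {
	if declaredType == "map" {
		if _, ok := value.(map[string]interface{}); !ok {
			// Return a useful error instead of panicking later when the
			// value is used as a map.
			return fmt.Errorf("variable is declared as a map, but %T was given", value)
		}
	}
	return nil
}

func main() {
	if err := setVariable("map", "not-a-map"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```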
Ensure that each instance of BasicGraphBuilder gets a name corresponding
to the Builder which created it. This allows us to differentiate the
graphs in the logs.
This doesn't cause any practical issues as far as I'm aware (couldn't
get any test to fail), but caused shadow errors since it wasn't matching
the prior behavior.
Fixes #10122
The simple fix was that we forgot to close `ReadDataApply` for the
provider. But I've always felt that this section of the code was brittle
and I wanted to put in a more robust solution. The `shadow.Close` method
uses reflection to automatically close all values.
People with `uuid()` usage in their configurations would receive shadow
errors every time on plan because the UUID would change.
This is a hacky fix but I also believe it's correct: if a shadow error
contains uuid() then we ignore the shadow error completely. This feels
wrong, but I'll explain why it is likely right:
The "right" feeling solution is to create deterministic random output
across graph runs. This would require using math/rand and seeding it
with the same value each run. However, this alone probably won't work
due to Terraform's parallelism and potential to call uuid() in different
orders. In addition to this, you can't seed crypto/rand, and it's unlikely
that we'll NEVER use crypto/rand in the future even if we switched
uuid() to use math/rand.
Therefore, the solution is simple: if there is no shadow error, no
problem. If there is a shadow error and it contains uuid(), then ignore
it.
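The check amounts to something like this (a sketch, assuming the shadow
error surfaces as an ordinary error value):
```
package main

import (
	"errors"
	"fmt"
	"strings"
)

// ignorableShadowError reports whether a shadow-graph mismatch should be
// discarded because it stems from a non-deterministic uuid() call.
func ignorableShadowError(err error) bool {
	if err == nil {
		return true // no shadow error, no problem
	}
	return strings.Contains(err.Error(), "uuid()")
}

func main() {
	err := errors.New(`shadow mismatch: "${uuid()}" differed between runs`)
	fmt.Println(ignorableShadowError(err)) // true
}
```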
The method marks the start of a set of operations on the Graph, with
extra information optionally provided in the second parameter. This
returns a function with a single End method to mark the end of the set
in the logs.
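In spirit, the helper looks something like this (illustrative
signatures, not the exact ones):
```
package main

import "log"

// endMarker is returned by DebugOperation; its single End method marks the
// end of the set of operations in the logs.
type endMarker struct{ name string }

func (e endMarker) End(info string) { log.Printf("[DEBUG] end %s: %s", e.name, info) }

// DebugOperation marks the start of a set of operations on the graph, with
// optional extra information.
func DebugOperation(name, info string) endMarker {
	log.Printf("[DEBUG] begin %s: %s", name, info)
	return endMarker{name: name}
}

func main() {
	op := DebugOperation("OrphanTransformer", "adding orphans")
	// ... build that part of the graph ...
	op.End("done")
}
```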
Refactor the existing graph Begin/End Operation calls to use this single
method. Remove the *string types in the marshal structs; these are
strictly informational and don't need to differentiate empty vs unset
strings.
Add calls to DebugOperation for each step while building the graph.
To maintain the same output, the Graph.Dot implementation needs to be
aware of GraphNodeDotter. Copy the interface into the dag package, and
make the Dot marshaler aware of which nodes implemented the interface.
This way we can remove most of the remaining dot code from terraform.
The dot format generation was done with a mix of code from the terraform
package and the dot package. Unify the dot generation code and move it
into the dag package.
Use an intermediate structure to allow a dag.Graph to marshal itself
directly. This structure will be able to marshal directly to JSON, or be
translated to dot format. This way we can record more information about
the graph in the debug logs, and provide a way to translate those logged
structures to dot, which is convenient for viewing the graphs.
This fixes: `TestContext2Apply_moduleDestroyOrder`
The new destroy graph wasn't properly creating edges that happened
_through_ an output; it was only creating the edges for _direct_
dependents.
To fix this, the DestroyEdgeTransformer now creates the full transitive
list of destroy edges by walking all ancestors. This will create more
edges than are necessary but also will no longer miss resources through
an output.
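Roughly, the transformer now does something like the following ancestor
walk (a simplified stand-in for the dag package, not the real API):
```
package main

import "fmt"

// graph is a minimal adjacency-list stand-in for the real dag.AcyclicGraph.
type graph map[string][]string

// ancestors returns every vertex reachable from v by following edges, not
// just v's direct neighbors.
func (g graph) ancestors(v string) []string {
	seen := map[string]bool{}
	var walk func(string)
	walk = func(cur string) {
		for _, next := range g[cur] {
			if !seen[next] {
				seen[next] = true
				walk(next)
			}
		}
	}
	walk(v)
	out := make([]string, 0, len(seen))
	for a := range seen {
		out = append(out, a)
	}
	return out
}

func main() {
	// aws_instance.a -> module output -> aws_instance.b
	g := graph{
		"aws_instance.a":          {"module.child.output.ip"},
		"module.child.output.ip":  {"aws_instance.b"},
	}
	// Walking all ancestors picks up aws_instance.b even though the
	// dependency passes through an output.
	fmt.Println(g.ancestors("aws_instance.a"))
}
```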
This will detect computed counts (which we don't currently support) and
change the error to state more clearly that computed counts aren't
allowed. Prior to this, the error would instead be something like
`strconv.ParseInt: "${var.foo}" cannot be parsed as int`.
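The detection is conceptually simple: before parsing the count as an
integer, check whether it still contains the unknown-value sentinel
(sketch; the constant below stands in for the real sentinel):
```
package main

import (
	"errors"
	"fmt"
	"strconv"
	"strings"
)

// unknownValue stands in for the sentinel Terraform uses for values that
// are not yet known (the real sentinel is a UUID-like constant).
const unknownValue = "74D93920-ED26-11E3-AC10-0800200C9A66"

func parseCount(raw string) (int, error) {
	if strings.Contains(raw, unknownValue) {
		return 0, errors.New("resource count can't reference a computed value")
	}
	return strconv.Atoi(raw)
}

func main() {
	_, err := parseCount(unknownValue)
	fmt.Println(err)
}
```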
This turns the new graphs on by default and puts the old graphs behind a
flag `-Xlegacy-graph`. This effectively inverts the current 0.7.x
behavior with the new graphs.
We've incubated most of these for a few weeks now. We've found issues
and we've fixed them and we've been using these graphs internally for
a while without any major issues. It's time to default them on and make
them part of a beta.
Because we now rely on HIL to do the computed calculation, we must make
sure the type is correct (TypeUnknown). Before, we'd just check for the
UUID in the string.
This changes all variable returns in the interpolater to run it through
`hil.InterfaceToVariable` which handles this lookup for us.
This uses the new NodeApplyableProvider graph nodes. This will just make
it easier for us in the future to adopt new graph transforms by starting
to use the new ones here.
The primary change here is to expect that Config contains computed
values. This introduces `unknownCheckWalker`, which does a really basic
reflectwalk to look for computed values and uses that for IsComputed.
Before, we had a weird mixture of checking whether c.Config was simply
missing values to determine where to look. Now we rely heavily on
IsComputed.
This makes all the computed stuff "just work" since HIL uses the same
computed sentinel value (string UUID) and the type differentiates it
from a regular string.
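A minimal sketch of the idea behind unknownCheckWalker (not the real
implementation, which uses reflectwalk):
```
package main

import "fmt"

// unknownValue stands in for the computed sentinel (a UUID-like string).
const unknownValue = "74D93920-ED26-11E3-AC10-0800200C9A66"

// containsUnknown reports whether any value nested in the config equals the
// computed sentinel. This sketch handles the common shapes by hand.
func containsUnknown(v interface{}) bool {
	switch t := v.(type) {
	case string:
		return t == unknownValue
	case []interface{}:
		for _, e := range t {
			if containsUnknown(e) {
				return true
			}
		}
	case map[string]interface{}:
		for _, e := range t {
			if containsUnknown(e) {
				return true
			}
		}
	}
	return false
}

func main() {
	cfg := map[string]interface{}{
		"ami":       "ami-123456",
		"user_data": unknownValue,
	}
	fmt.Println(containsUnknown(cfg)) // true
}
```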
The map output from the module "mod" loses the computed value from the
template when we validate. If the "extra" field is removed from the map,
the validation fails earlier with map "does not have any elements so
cannot determine type".
Apply will work, because the computed value will exist in the map.
terraform: more specific resource references
terraform: outputs need to know about the new reference format
terraform: resources w/o a config still have a referencable name
This makes the old graph also prune orphan outputs in modules.
This will fix shadow graph errors such as #9905 since the old graph will
also behave correctly in these scenarios.
Luckily, because orphan outputs don't rely on anything, we were able to
simply use the same transformer!
Fixes #9920
This was an issue caught with the shadow graph. Self references in
provisioners were causing a self-edge on destroy apply graphs.
We need to explicitly check that we're not creating an edge to ourselves.
This is also how the reference transformer works.
Fixes a shadow graph error found during usage.
The new apply graph was only adding module variables that referenced
data that existed _in the graph_. This isn't a valid optimization since
the data it is referencing may be in the state with no diff, and
therefore available but not in the graph.
This just removes that optimization logic, which causes no failing
tests. It also adds a test that exposes the bug if we had the pruning
logic.
Found via a shadow graph failure:
Provider aliases weren't being configured by the new apply graph.
This was caused by the transform that attaches configs to provider nodes
not being able to handle aliases and therefore not attaching a config.
Added a test to this and fixed it.
Fixes #9444
This appears to be a regression from 0.7.0, but there were no tests
covering it so we missed it and changed the behavior at some point! Oh
no.
This PR makes the ordering of multi-var access (`resource.name.*.attr`)
consistent: it follows the ordering of the count, not the lexical
ordering of the values. This allows behavior where two lists are indexed
by count
index and can be assumed to be related (for example user data for an aws
instance, as shown in the above referenced issue).
Two new context tests added to cover this case.
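The guarantee is essentially that the splat result is ordered by count
index (illustrative sketch, not the interpolater code):
```
package main

import (
	"fmt"
	"sort"
)

// instance pairs a count index with an attribute value pulled from state.
type instance struct {
	Index int
	Attr  string
}

// splat returns resource.name.*.attr ordered by count index, not by the
// lexical ordering of the values.
func splat(instances []instance) []string {
	sort.Slice(instances, func(i, j int) bool {
		return instances[i].Index < instances[j].Index
	})
	out := make([]string, len(instances))
	for i, inst := range instances {
		out[i] = inst.Attr
	}
	return out
}

func main() {
	// Lexically "a" < "z", but index order must win.
	fmt.Println(splat([]instance{{1, "a"}, {0, "z"}})) // [z a]
}
```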
Implement debugInfo and the DebugGraph
DebugInfo will be a global variable through which graph debug
information can be written to a compressed archive. The DebugInfo
methods are all safe for concurrent use, and noop with a nil receiver.
The API outside of the terraform package will be to call SetDebugInfo
to create the archive, and CloseDebugInfo() to properly close the file.
Each write to the archive will be flushed and sync'ed individually, so
in the event of a crash or a missing call to Close, the archive can
still be recovered.
The DebugGraph is a representation of a terraform Graph to be written to
the debug archive, currently in dot format. The DebugGraph also contains
an internal buffer with Printf and Write methods to add to this buffer.
The buffer will be written to an accompanying file in the debug archive
along with the graph.
This also adds a GraphNodeDebugger interface. Any node implementing
`NodeDebug() string` can output information to annotate the debug graph
node, and add the data to the log. This interface may change or be
removed to provide richer options for debugging graph nodes.
The new graph builders all delegate the build to the BasicGraphBuilder.
Having a Name field lets us differentiate the actual builder
implementation in the debug graphs.
The graph transformation we implement around create_before_destroy
needs to re-order all resources that depend on the create_before_destroy
resource. Up until now, we've required that users mark all of these
resources as create_before_destroy. Data sources, however, don't have a
lifecycle block for create_before_destroy, and could not be marked this
way.
This PR checks each DestroyNode that doesn't implement CreateBeforeDestroy
for any ancestors that do implement CreateBeforeDestroy. If there are
any, we inherit the behavior and re-order the graph as such.
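Sketched out, the inheritance check looks something like this
(simplified node type, not the real graph nodes):
```
package main

import "fmt"

// node is a stand-in for a destroy node in the graph.
type node struct {
	Name      string
	CBD       bool // create_before_destroy
	Ancestors []*node
}

// inheritCBD marks a node as create_before_destroy if any of its ancestors
// already is, so data sources (which have no lifecycle block) are ordered
// correctly too.
func inheritCBD(n *node) {
	if n.CBD {
		return
	}
	for _, a := range n.Ancestors {
		inheritCBD(a)
		if a.CBD {
			n.CBD = true
			return
		}
	}
}

func main() {
	web := &node{Name: "aws_instance.web", CBD: true}
	ds := &node{Name: "data.template_file.conf", Ancestors: []*node{web}}
	inheritCBD(ds)
	fmt.Println(ds.CBD) // true
}
```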
Fixes #9840
The new apply graph wasn't properly nesting provisioners. This resulted
in the provisioners being read as nil on apply in the shadow graph, which
caused the crash in the above issue.
The actual cause of this is that the new graphs we're moving towards do
not have any "flattening" (they are flat to begin with): all modules are
in the root graph from the beginning of construction versus building a
number of different graphs and flattening them. The transform that adds
the provisioners wasn't modified to handle already-flat graphs and so
was only adding provisioners to the root module, not children.
The change modifies the `MissingProvisionerTransformer` (primarily) to
support already-flat graphs and add provisioners for all module levels.
Tests are there to cover this as well.
**NOTE:** This PR focuses on fixing that specific issue. I'm going to follow up
this PR with another PR that is more focused on being robust against
crashing (more nil checks, recover() for shadow graph, etc.). In the
interest of focus and keeping a PR reviewable this focuses only on the
issue itself.
Fixes #7975
This changes the InputMode for the CLI to always be:
InputModeProvider | InputModeVar | InputModeVarUnset
Which means:
* Ask for provider variables
* Ask for user variables _that are not already set_
The change is the latter point. Before, we'd only ask for variables if
zero were given. This forced the user to either have no variables set
via the CLI, env vars, or tfvars, or to have ALL variables set, with
nothing in between. As reported in #7975, this isn't expected behavior.
The new change makes it so that unset variables are always asked for.
Users can retain the previous behavior by setting `-input=false`. This
would ensure that variables set by external sources cover all cases.
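The input modes combine as a bitmask, along these lines (sketch with
illustrative constants):
```
package main

import "fmt"

// InputMode values combine as a bitmask.
type InputMode int

const (
	InputModeProvider InputMode = 1 << iota // ask for provider config
	InputModeVar                            // ask for variables
	InputModeVarUnset                       // ...but only those not already set
)

func main() {
	mode := InputModeProvider | InputModeVar | InputModeVarUnset
	fmt.Println(mode&InputModeVarUnset != 0) // true: only unset vars are asked for
}
```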
For #9618, we added the ability to ignore old diffs that were computed
and removed (because the ultimate value ended up being the same). This
ended up breaking computed list/set logic.
The correct behavior, as is evident from how the other "skip" logic
works, is to set `ok = true` so that the remainder of the logic can run,
which
handles stuff such as computed lists and sets.
Fixes #6447
This ensures that all variables of type string are consistently
converted to a string value upon running Terraform.
The place this is done is in the `Variables()` call within the
`terraform` package. This is the function responsible for loading and
merging the variables from the various sources and seems ideal for
proper conversion to consistent values for various types. We actually
already had tests to this effect.
This also adds docs that talk about the fake-ish boolean variables
Terraform currently has and about how in future versions we'll likely
support them properly, which can cause BC issues so beware.
This was causing flaky behavior in our tests because `TF_VAR_x=""` is
actually a valid env var. For tests, we need to actually unset env vars
that haven't been set before.
Fixes #6327
Deposed instances weren't calling PostApply which was causing the counts
for what happened during `apply` to be wrong. This was a simple fix to
ensure we call that hook.
Fixes #5342
The dynamically expanded subgraph wasn't being validated so cycles
weren't being caught here and Terraform would just hang. This fixes
that.
Note that it may make sense to validate at a higher level when the graph
is expanded, but there are certain cases where we actually expect the
graph to potentially be invalid, so this seems safer for now.
Fixes #5826
The `prevent_destroy` lifecycle configuration was not being checked when
the count was decreased for a resource with a count. It was only
checking when attributes changed on pre-existing resources.
This fixes that.
Fixes #5338 (and I'm sure many others)
There is no use case for "simple" variables in Terraform at all so
anytime one is found it should be an error.
There is a _huge_ backwards incompatibility here that was not supposed
to be by design, but I'm sure a lot of people are relying on it: in the
`template_file` data source, this bug allowed you to not escape your
interpolations and have them work. For example:
```
data "template_file" "foo" {
template = "${a}"
vars { a = 12 }
}
```
The above would work, but it shouldn't. The template should have to be
`"$${a}"` (to escape the interpolation).
Because of this BC break, I recommend holding this until Terraform 0.8.0 and
documenting it carefully. As part of this PR, I've added some special
error message notes.
This creates a standard package and interface for defining, querying,
and setting experiments (`-X` flags).
I expect we'll want to continue to introduce various features behind
experimental flags. I want to make doing this as easy as possible and I
want to make _removing_ experiments as easy as possible as well.
The goal with this package has been to rely on the compiler enforcing our
experiment references as much as possible. This means that every
experiment is a global variable that must be referenced directly, so
when it is removed you'll get compiler errors where the experiment is
referenced.
This also unifies and makes it easy to grab CLI flags to enable/disable
experiments as well as env vars! This way defining an experiment is just
a couple lines of code (documented on the package).
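The shape of the package is roughly as follows (hypothetical
identifiers, not the exact API):
```
package main

import (
	"flag"
	"fmt"
	"os"
)

// Experiment is a feature gate controlled by an -X flag or an env var.
type Experiment struct {
	Name    string
	Enabled bool
}

// X_newApplyGraph is a global (hypothetical name) so that removing the
// experiment produces compile errors at every reference site.
var X_newApplyGraph = &Experiment{Name: "new-apply-graph"}

func init() {
	// Env var and CLI flag both toggle the experiment.
	if os.Getenv("TF_X_NEW_APPLY_GRAPH") != "" {
		X_newApplyGraph.Enabled = true
	}
	flag.BoolVar(&X_newApplyGraph.Enabled, "Xnew-apply-graph",
		X_newApplyGraph.Enabled, "enable the new apply graph")
}

func main() {
	flag.Parse()
	fmt.Println("new apply graph enabled:", X_newApplyGraph.Enabled)
}
```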
Fixes #3309
There are two primary changes, one to how helper/schema creates diffs
and one to how Terraform compares diffs. Both require careful
understanding.
== 1. helper/schema Changes
helper/schema, given any primitive field (string, int, bool, etc.)
_used to_ create a basic diff when given a computed new value (i.e. from
an unknown interpolation). This would put in the plan that the old value
is whatever the old value was, and the new value was the actual
interpolation. For example, from #3309, the diff showed the following:
```
~ module.test.aws_eip.test-instance.0
instance: "<INSTANCE ID>" => "${element(aws_instance.test-instance.*.id, count.index)}"
```
Then, when running `apply`, the diff would be realized and you would get
a diff mismatch error because Terraform would see the final value is the
same and remove it from the diff.
**The change:** `helper/schema` now marks unknown primitive values with
`NewComputed` set to true. Semantically this is correct for the diff to
have this information.
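In diff terms, the attribute change looks roughly like this (simplified
struct mirroring the relevant fields, not the actual diff types):
```
package main

import "fmt"

// attrDiff mirrors the relevant fields of a resource attribute diff.
type attrDiff struct {
	Old         string
	New         string
	NewComputed bool
}

func main() {
	// Before: the raw interpolation string was recorded as the new value.
	before := attrDiff{Old: "i-abc123", New: "${element(aws_instance.test-instance.*.id, count.index)}"}

	// After: unknown primitives are marked NewComputed instead, which is the
	// semantically correct representation.
	after := attrDiff{Old: "i-abc123", NewComputed: true}

	fmt.Printf("%+v\n%+v\n", before, after)
}
```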
== 2. Terraform Diff.Same Changes
Next, the way Terraform compares diffs needed to be updated.
Specifically, the case where the diff from the plan had a NewComputed
primitive and the diff from the apply _no longer has that value_. This
is possible if the computed value ended up being the same as the old
value. This is now allowed to pass through.
Together, these fix #3309.
This reverts commit c3a4cff133, reversing
changes made to 791a02e6e4.
This change requires plugin recompilation and we should hold off until a
minor release for that.
This enables the shadow graph since all tests pass!
We also change the destroy node to check the resource type using the
addr since that is always available and reliable. The configuration can
be nil for orphans.
This is necessary to get the shadow working properly with the destroy
graph since the destroy graph doesn't set this field but the end state
is still the same.
This is something that should be determined and done during an apply. It
doesn't make a lot of sense that the plan is doing it (in its current
form at least).
Since it is still very much possible for this to cause problems, this
can be used to disable the shadow graph. We'll purposely not document
this since the goal is to remove this flag as we become more confident
with it.
This enables the new apply graph's resource node to apply data sources.
Data sources appear to only be tested for "refresh", which is likely
where they're set, but they've also been implemented (not my code, not
trying to edit it) within the "apply" operation as well.
This adds an apply test to ensure data sources work, and then modifies
the new apply node to support data sources.
It appears data sources have always been coded to work during apply, as
can be verified with this test (no impl. changes were necessary to make
it pass).
This test should be added to ensure our apply graph always works with
data sources as well.
This adds the proper logic for "disabling" providers to the new apply
graph: interpolating and storing the config for inheritance but not
actually initializing and configuring the provider.
This is important since parent modules will often contain incomplete
provider configurations for the purpose of inheritance that would error
if they were actually attempted to be configured (since they're
incomplete). If the provider is not used, it should be "disabled".
This removes explicitly setting `rs.Provider` on destroy nodes.
To be honest, I'm not sure why this was done in the first place (git
blame points to 6fda7bb5483a155b8ae1e1e4e4b7b7c4073bc1d9). Tests always
passed without it, and adding it causes other tests to fail. I
should've never changed those other tests.
Removing it now to get tests passing, this also reverts the test changes
made in 8213824962f085279810f04b60b95d1176a3a3f2.