This makes the old graph also prune orphan outputs in modules.
This will fix shadow graph errors such as #9905 since the old graph will
also behave correctly in these scenarios.
Luckily, because orphan outputs don't rely on anything, we were able to
simply use the same transformer!
Fixes #9920
This was an issue caught with the shadow graph. Self references in
provisioners were causing a self-edge on destroy apply graphs.
We need to explicitly check that we're not creating an edge to ourselves.
This is also how the reference transformer works.
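A minimal sketch of that guard (names here are illustrative, not the exact transformer code):

```
package main

import "fmt"

// Vertex is any graph node; Edge links a source vertex to a target.
type Vertex interface{}

type Edge struct{ Source, Target Vertex }

type Graph struct{ edges []Edge }

func (g *Graph) Connect(e Edge) { g.edges = append(g.edges, e) }

// connectReferences draws an edge from v to everything it references,
// explicitly skipping v itself so a self reference (e.g. a provisioner
// referring to its own resource) never becomes a self-edge.
func connectReferences(g *Graph, v Vertex, targets []Vertex) {
	for _, t := range targets {
		if t == v {
			continue // never create an edge to ourselves
		}
		g.Connect(Edge{Source: v, Target: t})
	}
}

func main() {
	g := &Graph{}
	a, b := Vertex("aws_instance.web"), Vertex("aws_instance.db")
	connectReferences(g, a, []Vertex{a, b}) // the self reference is dropped
	fmt.Println(g.edges)
}
```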
Fixes a shadow graph error found during usage.
The new apply graph was only adding module variables that referenced
data that existed _in the graph_. This isn't a valid optimization since
the data it is referencing may be in the state with no diff, and
therefore available but not in the graph.
This just removes that optimization logic; no tests fail as a result. It
also adds a test that would expose the bug if we still had the pruning
logic.
Found via a shadow graph failure:
Provider aliases weren't being configured by the new apply graph.
This was caused by the transform that attaches configs to provider nodes
not being able to handle aliases and therefore not attaching a config.
Added a test for this and fixed it.
Fixes #9444
This appears to be a regression from 0.7.0, but there were no tests
covering it so we missed it and changed the behavior at some point! Oh
no.
This PR makes the ordering of multi-var access (`resource.name.*.attr`)
consistent: it follows the ordering of the count, not the lexical
ordering of the values. This allows behavior where two lists are indexed by count
index and can be assumed to be related (for example user data for an aws
instance, as shown in the above referenced issue).
Two new context tests added to cover this case.
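A hedged sketch of the new ordering rule (the key-parsing helper is hypothetical): splat values are sorted by their count index, not lexically, so indexes 0, 2, 10 stay in that order.

```
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// countIndex extracts the numeric count index from a state key such as
// "aws_instance.web.3"; a key with no index sorts as 0.
func countIndex(key string) int {
	parts := strings.Split(key, ".")
	n, err := strconv.Atoi(parts[len(parts)-1])
	if err != nil {
		return 0
	}
	return n
}

func main() {
	keys := []string{"aws_instance.web.10", "aws_instance.web.2", "aws_instance.web.0"}

	// Lexical ordering would yield 0, 10, 2; ordering by count index
	// yields 0, 2, 10, so element i of one splat list lines up with
	// element i of another (e.g. instances and their user data).
	sort.Slice(keys, func(i, j int) bool {
		return countIndex(keys[i]) < countIndex(keys[j])
	})
	fmt.Println(keys)
}
```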
Implement debugInfo and the DebugGraph
DebugInfo will be a global variable through which graph debug
information can be written to a compressed archive. The DebugInfo
methods are all safe for concurrent use, and noop with a nil receiver.
The API outside of the terraform package will be to call SetDebugInfo
to create the archive, and CloseDebugInfo() to properly close the file.
Each write to the archive will be flushed and synced individually, so
in the event of a crash or a missing call to Close, the archive can
still be recovered.
The DebugGraph is a representation of a terraform Graph to be written to
the debug archive, currently in dot format. The DebugGraph also contains
an internal buffer, with Printf and Write methods for appending to it.
The buffer will be written to an accompanying file in the debug archive
along with the graph.
This also adds a GraphNodeDebugger interface. Any node implementing
`NodeDebug() string` can output information to annotate the debug graph
node, and add the data to the log. This interface may change or be
removed to provide richer options for debugging graph nodes.
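A sketch of the intended call pattern; SetDebugInfo, CloseDebugInfo, and GraphNodeDebugger are the names described above, but the bodies here are stand-in stubs, not the real archive writer:

```
package main

import (
	"fmt"
	"os"
)

// GraphNodeDebugger is the interface described above: any node
// implementing NodeDebug() string can annotate its debug-graph node.
type GraphNodeDebugger interface {
	NodeDebug() string
}

var debugFile *os.File // the real code wraps a compressed archive

// SetDebugInfo creates the debug archive (stubbed here as a plain file).
func SetDebugInfo(path string) error {
	f, err := os.Create(path + ".tar.gz")
	if err != nil {
		return err
	}
	debugFile = f
	return nil
}

// CloseDebugInfo properly closes the archive; the noop on nil mirrors
// the safe nil-receiver behavior when debugging was never enabled.
func CloseDebugInfo() error {
	if debugFile == nil {
		return nil
	}
	return debugFile.Close()
}

type exampleNode struct{ name string }

func (n *exampleNode) NodeDebug() string { return "annotation for " + n.name }

func main() {
	if err := SetDebugInfo("terraform-debug"); err != nil {
		panic(err)
	}
	defer CloseDebugInfo()

	var n GraphNodeDebugger = &exampleNode{name: "aws_instance.web"}
	fmt.Println(n.NodeDebug())
}
```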
The new graph builders all delegate the build to the BasicGraphBuilder.
Having a Name field lets us differentiate the actual builder
implementation in the debug graphs.
The graph transformation we implement around create_before_destroy
needs to re-order all resources that depend on the create_before_destroy
resource. Up until now, we've required that users mark all of these
resources as create_before_destroy. Data sources, however, don't have a
lifecycle block for create_before_destroy, and could not be marked this
way.
This PR checks each DestroyNode that doesn't implement CreateBeforeDestroy
for any ancestors that do implement CreateBeforeDestroy. If there are
any, we inherit the behavior and re-order the graph as such.
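A simplified sketch of that inheritance check (interface and field names are illustrative, and the real ancestors are computed transitively from the graph):

```
package main

import "fmt"

type node struct {
	name      string
	cbd       bool
	ancestors []*node
}

func (n *node) CreateBeforeDestroy() bool        { return n.cbd }
func (n *node) ModifyCreateBeforeDestroy(v bool) { n.cbd = v }

// inheritCBD flips a destroy node to create_before_destroy when any
// ancestor is CBD, so the whole dependency chain is re-ordered
// consistently. This is what lets data sources, which have no
// lifecycle block, pick up the behavior.
func inheritCBD(n *node) {
	if n.CreateBeforeDestroy() {
		return
	}
	for _, a := range n.ancestors {
		if a.CreateBeforeDestroy() {
			n.ModifyCreateBeforeDestroy(true)
			return
		}
	}
}

func main() {
	lc := &node{name: "aws_launch_configuration.lc", cbd: true}
	ud := &node{name: "data.template_file.ud", ancestors: []*node{lc}}
	inheritCBD(ud)
	fmt.Println(ud.cbd) // true: inherited from the CBD ancestor
}
```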
Fixes #9840
The new apply graph wasn't properly nesting provisioners. This resulted
in the provisioners being read as nil on apply in the shadow graph,
which caused the crash in the above issue.
The actual cause of this is that the new graphs we're moving towards do
not have any "flattening" (they are flat to begin with): all modules are
in the root graph from the beginning of construction versus building a
number of different graphs and flattening them. The transform that adds
the provisioners wasn't modified to handle already-flat graphs and so
was only adding provisioners to the root module, not children.
The change modifies the `MissingProvisionerTransformer` (primarily) to
support already-flat graphs and add provisioners for all module levels.
Tests are there to cover this as well.
**NOTE:** This PR focuses on fixing that specific issue. I'm going to follow up
this PR with another PR that is more focused on being robust against
crashing (more nil checks, recover() for shadow graph, etc.). In the
interest of focus and keeping a PR reviewable this focuses only on the
issue itself.
Fixes #7975
This changes the InputMode for the CLI to always be:
InputModeProvider | InputModeVar | InputModeVarUnset
Which means:
* Ask for provider variables
* Ask for user variables _that are not already set_
The change is the latter point. Before, we'd only ask for variables if
zero were given. This forced the user to either have no variables set
via the CLI, env vars, or tfvars, or ALL variables, with nothing in
between. As reported in #7975, this isn't expected behavior.
The new change makes it so that unset variables are always asked for.
Users can retain the previous behavior by setting `-input=false`. This
would ensure that variables set by external sources cover all cases.
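A sketch of the mode combination (the constant values here are illustrative; the real constants live in the terraform package):

```
package main

import "fmt"

type InputMode byte

const (
	InputModeVar InputMode = 1 << iota
	InputModeVarUnset
	InputModeProvider
)

func main() {
	// The CLI now always asks for provider config plus any user
	// variables that are not already set.
	mode := InputModeProvider | InputModeVar | InputModeVarUnset

	fmt.Println(mode&InputModeVarUnset != 0) // true: only unset vars are prompted
}
```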
For #9618, we added the ability to ignore old diffs that were computed
and removed (because the ultimate value ended up being the same). This
ended up breaking computed list/set logic.
The correct behavior, as is evident from how the other "skip" logic works,
is to set `ok = true` so that the remainder of the logic can run which
handles stuff such as computed lists and sets.
Fixes #6447
This ensures that all variables of type string are consistently
converted to a string value upon running Terraform.
The place this is done is in the `Variables()` call within the
`terraform` package. This is the function responsible for loading and
merging the variables from the various sources and seems ideal for
proper conversion to consistent values for various types. We actually
already had tests to this effect.
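A minimal sketch of the coercion, assuming a hypothetical helper (not the actual Variables() code): anything destined for a string-typed variable is stringified as the sources are merged.

```
package main

import "fmt"

// toStringValue coerces a raw variable value to a string, the way
// string-typed variables are normalized when sources are merged.
func toStringValue(v interface{}) string {
	if s, ok := v.(string); ok {
		return s
	}
	return fmt.Sprintf("%v", v)
}

func main() {
	// e.g. a bare `foo = true` from a tfvars file becomes the string "true"
	fmt.Printf("%q\n", toStringValue(true)) // "true"
	fmt.Printf("%q\n", toStringValue(42))   // "42"
}
```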
This also adds docs that talk about the fake-ish boolean variables
Terraform currently has and about how in future versions we'll likely
support them properly, which can cause BC issues so beware.
This was causing flaky behavior in our tests because `TF_VAR_x=""` is
actually a valid env var. For tests, we need to actually unset env vars
that haven't been set before.
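A sketch of the test fixup: since TF_VAR_x="" is a valid setting, the helper below (hypothetical) truly unsets the variable and restores whatever was there before.

```
package main

import (
	"fmt"
	"os"
)

// withUnsetEnv runs fn with key removed from the environment entirely,
// restoring the previous value (set or unset) afterwards.
func withUnsetEnv(key string, fn func()) {
	old, wasSet := os.LookupEnv(key)
	os.Unsetenv(key)
	defer func() {
		if wasSet {
			os.Setenv(key, old)
		}
	}()
	fn()
}

func main() {
	os.Setenv("TF_VAR_x", "") // empty, but still *set*
	withUnsetEnv("TF_VAR_x", func() {
		_, set := os.LookupEnv("TF_VAR_x")
		fmt.Println(set) // false: truly unset, not just empty
	})
}
```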
Fixes #6327
Deposed instances weren't calling PostApply which was causing the counts
for what happened during `apply` to be wrong. This was a simple fix to
ensure we call that hook.
Fixes #5342
The dynamically expanded subgraph wasn't being validated so cycles
weren't being caught here and Terraform would just hang. This fixes
that.
Note that it may make sense to validate higher level when the graph is
expanded but there are certain cases we actually expect the graph to
potentially be invalid, so this seems safer for now.
Fixes #5826
The `prevent_destroy` lifecycle configuration was not being checked when
the count was decreased for a resource with a count. It was only
checking when attributes changed on pre-existing resources.
This fixes that.
Fixes #5338 (and I'm sure many others)
There is no use case for "simple" variables in Terraform at all so
anytime one is found it should be an error.
There is a _huge_ backwards incompatibility here that was never by
design, but that I'm sure a lot of people are relying on: in the
`template_file` datasource, this bug allowed you to not escape your
interpolations and have them work. For example:
```
data "template_file" "foo" {
template = "${a}"
vars { a = 12 }
}
```
The above would work, but it shouldn't. The template should have to be
`"$${a}"` (to escape the interpolation).
Because of this BC break, I recommend holding this until Terraform 0.8.0 and
documenting it carefully. As part of this PR, I've added some special
error message notes.
This creates a standard package and interface for defining, querying,
and setting experiments (`-X` flags).
I expect we'll want to continue to introduce various features behind
experimental flags. I want to make doing this as easy as possible and I
want to make _removing_ experiments as easy as possible as well.
The goal with this package has been to rely on the compiler enforcing our
experiment references as much as possible. This means that every
experiment is a global variable that must be referenced directly, so
when it is removed you'll get compiler errors where the experiment is
referenced.
This also unifies and makes it easy to grab CLI flags to enable/disable
experiments as well as env vars! This way defining an experiment is just
a couple lines of code (documented on the package).
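A sketch of the pattern (the identifiers and the env var scheme here are illustrative): each experiment is a package-level value referenced directly, so deleting it breaks the build at every use site.

```
package main

import (
	"fmt"
	"os"
)

type experimentID string

// xNewApply is one experiment; removing this declaration produces
// compile errors wherever the experiment is still referenced.
const xNewApply experimentID = "new-apply"

var enabledFlags = map[experimentID]bool{} // populated from -X CLI flags

// enabled honors both the CLI flag and an environment variable.
func enabled(id experimentID) bool {
	if os.Getenv("TF_X_"+string(id)) != "" {
		return true
	}
	return enabledFlags[id]
}

func main() {
	enabledFlags[xNewApply] = true // as if -Xnew-apply had been passed
	if enabled(xNewApply) {        // direct, compiler-enforced reference
		fmt.Println("new apply graph enabled")
	}
}
```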
Fixes #3309
There are two primary changes, one to how helper/schema creates diffs
and one to how Terraform compares diffs. Both require careful
understanding.
== 1. helper/schema Changes
helper/schema, given any primitive field (string, int, bool, etc.)
_used to_ create a basic diff when given a computed new value (i.e. from
an unknown interpolation). This would put in the plan that the old value
is whatever the old value was, and the new value was the actual
interpolation. For example, from #3309, the diff showed the following:
```
~ module.test.aws_eip.test-instance.0
instance: "<INSTANCE ID>" => "${element(aws_instance.test-instance.*.id, count.index)}"
```
Then, when running `apply`, the diff would be resolved and you would get
a diff mismatch error, because Terraform would see that the final value
is the same and remove it from the diff.
**The change:** `helper/schema` now marks unknown primitive values with
`NewComputed` set to true. Semantically this is correct for the diff to
have this information.
== 2. Terraform Diff.Same Changes
Next, the way Terraform compares diffs needed to be updated.
Specifically, the case where the diff from the plan had a NewComputed
primitive and the diff from the apply _no longer has that value_. This
is possible if the computed value ended up being the same as the old
value. This is allowed to pass through.
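A paraphrased sketch of the relaxed comparison (types simplified; not the real Diff.Same code):

```
package main

import "fmt"

type attrDiff struct {
	Old, New    string
	NewComputed bool
}

// sameAttr treats a planned NewComputed attribute that vanished from
// the applied diff as a match: the computed value resolved to the old
// value, so there is nothing left to change.
func sameAttr(planned, applied *attrDiff) bool {
	if planned != nil && planned.NewComputed && applied == nil {
		return true
	}
	if planned == nil || applied == nil {
		return planned == applied
	}
	return *planned == *applied
}

func main() {
	planned := &attrDiff{Old: "i-abc123", NewComputed: true}
	fmt.Println(sameAttr(planned, nil)) // true: no diff mismatch error
}
```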
Together, these fix #3309.
This reverts commit c3a4cff133, reversing
changes made to 791a02e6e4.
This change requires plugin recompilation and we should hold off until a
minor release for that.
This enables the shadow graph since all tests pass!
We also change the destroy node to check the resource type using the
addr since that is always available and reliable. The configuration can
be nil for orphans.
This is necessary to get the shadow working properly with the destroy
graph since the destroy graph doesn't set this field but the end state
is still the same.
This is something that should be determined and done during an apply. It
doesn't make a lot of sense that the plan is doing it (in its current
form at least).
Since it is still very much possible for this to cause problems, this
can be used to disable the shadow graph. We'll purposely not document
this since the goal is to remove this flag as we become more confident
with it.
This enables the new apply graph's resource node to apply data sources.
Data sources appear to only be tested under "refresh", which is likely
where they're set, but they've also been implemented (not my code, and
I'm not trying to change that code) within the "apply" operation as well.
This adds an apply test to ensure data sources work, and then modifies
the new apply node to support data sources.
It appears data sources have always been coded to work during apply, as
can be verified with this test (no impl. changes were necessary to make
it pass).
This test should be added to ensure our apply graph always works with
data sources as well.
This adds the proper logic for "disabling" providers to the new apply
graph: interpolating and storing the config for inheritance but not
actually initializing and configuring the provider.
This is important since parent modules will often contain incomplete
provider configurations for the purpose of inheritance that would error
if they were actually attempted to be configured (since they're
incomplete). If the provider is not used, it should be "disabled".
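A hedged sketch of that eval logic (names are illustrative): the config is always interpolated and stored for inheritance, but configuration is skipped for a disabled provider.

```
package main

import "fmt"

type providerConfig map[string]string

type provider struct {
	config     providerConfig
	configured bool
}

// evalProvider stores the (already interpolated) config so child
// modules can inherit it; only a provider actually used in this
// module gets configured, so an incomplete parent config can't error.
func evalProvider(p *provider, cfg providerConfig, usedHere bool) {
	p.config = cfg
	if !usedHere {
		return // "disabled": skip init/configure entirely
	}
	p.configured = true
}

func main() {
	parent := &provider{}
	evalProvider(parent, providerConfig{"region": ""}, false)
	fmt.Println(parent.configured, parent.config) // false map[region:]
}
```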
This stops explicitly setting `rs.Provider` on destroy nodes.
To be honest, I'm not sure why this was done in the first place (git
blame points to 6fda7bb5483a155b8ae1e1e4e4b7b7c4073bc1d9). Tests always
passed without it, and adding it causes other tests to fail. I
should've never changed those other tests.
Removing it now to get tests passing, this also reverts the test changes
made in 8213824962f085279810f04b60b95d1176a3a3f2.
This is a requirement for the parallelism of Terraform to work sanely.
We could deep copy every result but I think this would be unrealistic
and impose a performance cost when it isn't necessary in most cases.
Related to #5254
If the count of a resource is interpolated (i.e. `${var.c}`), then it
must be interpolated before any splat variable using that resource can
be used (i.e. `type.name.*.attr`). The original fix for #5254 is to
always ensure that this is the case.
While working on a new apply builder based on the diff in
`f-apply-builder`, this truth no longer always holds. Rather than always
include such a resource, I believe the correct behavior instead is to
use the state as a source of truth during `walkApply` operations.
This change specifically is scoped to `walkApply` operation
interpolations since we know the state of any multi-variable should be
available. The behavior is less clear for other operations so I left the
logic unchanged from prior versions.
The Deposed slice wasn't being normalized and nil values could be read
in from a state file. Filter out the nils during init. There is
still a bug in copystructure, but that will be addressed separately.
A nil InstanceState within State/Modules/Resources/Deposed will panic
during a deep copy. The panic needs to be fixed in copystructure, but
the nil probably should have been normalized out before we got here too.
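A minimal sketch of that init-time normalization (the type name is illustrative):

```
package main

import "fmt"

type instanceState struct{ ID string }

// normalizeDeposed drops nil entries so nothing downstream (including
// the deep copy) ever sees a nil InstanceState in Deposed.
func normalizeDeposed(deposed []*instanceState) []*instanceState {
	out := deposed[:0]
	for _, s := range deposed {
		if s != nil {
			out = append(out, s)
		}
	}
	return out
}

func main() {
	d := []*instanceState{nil, {ID: "i-abc123"}, nil}
	fmt.Println(len(normalizeDeposed(d))) // 1: nils filtered out
}
```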
There were races with ValidateResource in the provider initializing the
data, which resulted in lost data for the shadow. A new "Init" function
has been added to the shadow structs to support safe concurrent
initialization.
This adds a new function to get a unique identifier scoped to the graph
walk in order to identify operations against the same instance. This is
used by the shadow to namespace provider function calls.
We allow the built-in context to work as expected and shadow just the
components now. This is better since it allows us to use much more of
the REAL structures.
The arguments passed into Apply, Refresh, and Diff could be modified,
which caused the shadow comparison to error later. Also, the result
should be deep copied so that it isn't modified.
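A sketch of the fix using copystructure (which this code already leans on elsewhere); the struct and helper here are illustrative:

```
package main

import (
	"fmt"

	"github.com/mitchellh/copystructure"
)

type instanceInfo struct{ ID string }

// recordForShadow deep-copies what we hand to the shadow, so later
// mutation of the caller's arguments or result can't skew the
// comparison.
func recordForShadow(args *instanceInfo) *instanceInfo {
	copied, err := copystructure.Copy(args)
	if err != nil {
		panic(err) // sketch only; real code would surface the error
	}
	return copied.(*instanceInfo)
}

func main() {
	args := &instanceInfo{ID: "i-abc123"}
	saved := recordForShadow(args)
	args.ID = "mutated"
	fmt.Println(saved.ID) // still "i-abc123"
}
```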