This switches to the Go "context" package for cancellation and threads
the context all the way through to evaluation, so that execution deep
within the graph can react to a stop request.
This also adds the Stop API to provisioners so they can quickly exit
when stop is called.
Fixes #11212
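Roughly, the shape of the change can be sketched like this; a minimal standard-library illustration, not Terraform's actual evaluator (`evalNode` and the walk loop are hypothetical stand-ins):
```
package main

import (
    "context"
    "fmt"
    "time"
)

// evalNode stands in for a single evaluation step deep in graph execution.
// Because the context is threaded all the way down, the step can notice a
// stop request instead of running to completion.
func evalNode(ctx context.Context, name string) error {
    select {
    case <-ctx.Done():
        return ctx.Err() // stop was requested; abort this node
    case <-time.After(100 * time.Millisecond):
        fmt.Println("evaluated", name)
        return nil
    }
}

func main() {
    ctx, stop := context.WithCancel(context.Background())

    // Simulate the Stop API being called partway through a walk.
    go func() {
        time.Sleep(150 * time.Millisecond)
        stop()
    }()

    for _, n := range []string{"a", "b", "c"} {
        if err := evalNode(ctx, n); err != nil {
            fmt.Println("walk stopped:", err)
            return
        }
    }
}
```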
The import graph builder was missing the transform to setup links to
parent providers, so provider inheritance didn't work properly. This
adds that.
This also removes the `PruneProviderTransform`, since that has no value
in this graph: we'll never add an unused provider.
This was possible with test fixtures, but it is also conceivably possible
with older or corrupted states. We can also extract the type from the
key, so we now do that, making StateFilter more robust.
Removal of empty nested containers from a flatmap would sometimes fail a
sanity check when they were removed in the wrong order. This would only
fail sometimes due to map iteration order. There was also an off-by-one
error in the prefix check which could match incorrect keys.
When an InstanceState is merged with an InstanceDiff, any maps, arrays,
or sets that no longer exist are shown as empty with a count of 0. If
these are left in the flatmap structure, they will cause errors during
expansion because their existence in the map affects the counts for
parent structures.
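To illustrate, a hypothetical sketch of the kind of cleanup this requires; the helper and the key layout are illustrative, not the real flatmap code:
```
package main

import (
    "fmt"
    "strings"
)

// pruneEmptyContainers removes any container whose count key reports zero
// elements, along with every key nested under it, so later expansion does
// not see phantom structures that skew the parent counts.
func pruneEmptyContainers(m map[string]string) {
    for k, v := range m {
        isCount := strings.HasSuffix(k, ".#") || strings.HasSuffix(k, ".%")
        if !isCount || v != "0" {
            continue
        }
        prefix := strings.TrimSuffix(strings.TrimSuffix(k, "#"), "%")
        for candidate := range m {
            if strings.HasPrefix(candidate, prefix) {
                delete(m, candidate)
            }
        }
    }
}

func main() {
    state := map[string]string{
        "tags.%":  "0", // container that no longer exists after the merge
        "ports.#": "2",
        "ports.0": "80",
        "ports.1": "443",
    }
    pruneEmptyContainers(state)
    fmt.Println(state) // map[ports.#:2 ports.0:80 ports.1:443]
}
```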
The change in #10787 used flatmap.Expand to fix interpolation of nested
maps, but it broke interpolation of sets such that their elements were
not represented. For example, the expected string representation of a
splatted aws_network_interface.whatever.*.private_ips should be:
```
[{Variable (TypeList): [{Variable (TypeString): 10.41.17.25}]} {Variable (TypeList): [{Variable (TypeString): 10.41.22.236}]}]
```
But instead it became:
```
[{Variable (TypeList): [{Variable (TypeString): }]} {Variable (TypeList): [{Variable (TypeString): }]}]
```
This is because the expandArray function of expand.go treated arrays as
exclusively lists, i.e. not sets. The old code used to match for
numeric keys, so it would work for sets, whereas expandArray just
assumed keys started at 0 and ascended incrementally. Remember that
sets' keys are numeric, but since they are hashes, they can be any
integer. The result of assuming that the keys start at 0 led to the
recursive call to flatmap.Expand not matching any keys of the set, and
returning nil, which is why the above example has nothing where the IP
addresses used to be.
So we bring back that matching behavior, but we move it to expandArray
instead. We've modified it to not reconstruct the data structures like
it used to when it was in the Interpolator, and to use the standard int
sorter rather than implementing a custom sorter since a custom one is no
longer necessary thanks to the use of flatmap.Expand.
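A simplified sketch of that matching behavior (hypothetical helper, standard library only): collect whatever numeric keys exist under the prefix and sort them as ints, which handles set hashes where assuming indexes 0..n-1 does not.
```
package main

import (
    "fmt"
    "sort"
    "strconv"
    "strings"
)

func expandNumericKeys(m map[string]string, prefix string) []string {
    seen := map[int]struct{}{}
    for k := range m {
        if !strings.HasPrefix(k, prefix+".") {
            continue
        }
        // The first dot-separated component after the prefix is the index.
        // For sets it is a hash, so it can be any integer, not 0..n-1.
        rest := strings.TrimPrefix(k, prefix+".")
        idx, err := strconv.Atoi(strings.SplitN(rest, ".", 2)[0])
        if err != nil {
            continue // not a numeric child key (e.g. the count key)
        }
        seen[idx] = struct{}{}
    }

    indexes := make([]int, 0, len(seen))
    for i := range seen {
        indexes = append(indexes, i)
    }
    sort.Ints(indexes)

    out := make([]string, 0, len(indexes))
    for _, i := range indexes {
        out = append(out, m[fmt.Sprintf("%s.%d", prefix, i)])
    }
    return out
}

func main() {
    // A set uses hash codes as keys, so expansion must match whatever
    // integers are present instead of counting up from zero.
    flat := map[string]string{
        "private_ips.#":          "2",
        "private_ips.1136222007": "10.41.17.25",
        "private_ips.204869394":  "10.41.22.236",
    }
    fmt.Println(expandNumericKeys(flat, "private_ips"))
}
```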
Fixes #10908, and restores the viability of the workaround I posted in #8696.
Big thanks to @jszwedko for helping me with this fix. I was able to
diagnose the problem alone, but couldn't fix it without his help.
Fixes #10729
Destruction ordering wasn't taking into account ordering implied through
variables across module boundaries.
This is because, to build the destruction ordering, we create a
non-destruction graph to determine the _creation_ ordering (so we can
properly flip edges). The creation graph we were building wasn't
including module variables. This PR adds that transform to the graph.
Fixes #10711
The `ModuleVariablesTransformer` only adds module variables that are in
use. This was missing module variables used by providers because we ran
the provider transforms too late. This moves the transformer and adds a
test for this.
Fixes #10680
This moves TargetsTransformer to run after the transforms that add
module variables are run. This makes targeting work across modules (test
added).
This is a bug that only exists in the new graph, but was caught by a
shadow error in #10680. Tests were added to protect against regressions.
If a data source has explicit dependencies in `depends_on`, we can
assume the user has added those because of a dependency not tracked
directly in the config. If there are any entries in `depends_on`, don't
apply the data source early during Refresh.
Fixes #4645
This is something that never worked (even in legacy graphs), but as we
push forward towards encouraging multi-provider usage especially with
things like the Vault data source, I want to make sure we have this
right for 0.8.
When you have a config like this:
```
resource "foo_type" "name" {}
provider "bar" { attr = "${foo_type.name.value}" }
resource "bar_type" "name" {}
```
Then the destruction ordering MUST be:
1. `bar_type`
2. `foo_type`
Since configuring the client for `bar_type` requires accessing data from
`foo_type`. Prior to this PR, these two would be done in parallel. This
properly pushes forward the dependency.
There are more cases I want to test but this is a basic case that is
fixed.
Fixes #8695
When a list count was computed in a multi-resource access
(foo.bar.*.list), we were returning the value as an empty string. I
don't actually know the historical reasoning for this, but it can't be
correct: we must return unknown.
When changing this to unknown, the new tests passed and none of the old
tests failed. This leads me to further believe that returning the empty
string is probably a holdover from long ago to avoid crashes or UUIDs in
the plan output, and not actually the correct behavior.
Related to #8036
We have had this behavior for a _long_ time now (since 0.7.0) but it
seems people are still periodically getting bit by it. This adds an
explicit error message that explains that this kind of override isn't
allowed anymore.
* "external" provider for gluing in external logic
This provider will become a bit of glue to help people interface external
programs with Terraform without writing a full Terraform provider.
It will be nowhere near as capable as a first-class provider, but is
intended as a light-touch way to integrate some pre-existing or custom
system into Terraform.
* Unit test for the "resourceProvider" utility function
This small function determines the dependable name of a provider for
a given resource name and optional provider alias. It's simple but it's
a key part of how resource nodes get connected to provider nodes so
worth specifying the intended behavior in the form of a test.
* Allow a provider to export a resource with the provider's name
If a provider only implements one resource of each type (managed vs. data)
then it can be reasonable for the resource names to exactly match the
provider name, if the provider name is descriptive enough for the
purpose of each resource to be obvious.
* provider/external: data source
A data source that executes a child process, expecting it to support a
particular gateway protocol, and exports its result. This can be used as
a straightforward way to retrieve data from sources that Terraform
doesn't natively support.
* website: documentation for the "external" provider
Fixes #10440
This updates the behavior of "apply" resources to depend on the
destroy versions of their dependencies.
We make an exception to this behavior when the "apply" resource is CBD.
This is odd and not 100% correct, but it mimics the behavior of the
legacy graphs and avoids us having to do major core work to support the
100% correct solution.
I'll explain this in examples...
Given the following configuration:
resource "null_resource" "a" {
count = "${var.count}"
}
resource "null_resource" "b" {
triggers { key = "${join(",", null_resource.a.*.id)}" }
}
Assume we've successfully created this configuration with count = 2.
When going from count = 2 to count = 1, `null_resource.b` should wait
for `null_resource.a.1` to destroy.
If it doesn't, then it is a race: depending on when we interpolate the
`triggers.key` attribute of `null_resource.b`, we may get 1 value or 2.
If `null_resource.a.1` is destroyed, we'll get 1. Otherwise, we'll get
2. This was the root cause of #10440.
In the legacy graphs, `null_resource.b` would depend on the destruction
of any `null_resource.a` (orphans, tainted, anything!). This would
ensure proper ordering. We mimic that behavior here.
The difference is CBD. If `null_resource.b` has CBD enabled, then the
ordering **in the legacy graph** becomes:
1. null_resource.b (create)
2. null_resource.b (destroy)
3. null_resource.a (destroy)
In this case, the update would always have 2 values for `triggers.key`,
even though we were destroying a resource later! This scenario required
two `terraform apply` operations.
This is what the CBD check is for in this PR. We do this to mimic the
behavior of the legacy graph.
The correct solution to implement one day is to allow splat references
(`null_resource.a.*.id`) to happen in parallel and only read up to the
`count` amount in the state. This requires some fairly significant work
close to the 0.8 release date, so we can defer this to later and adopt
the 0.7.x behavior for now.
Init should only _add_ values, not remove them.
During graph execution, there are steps that expect that a state isn't
being actively pruned out from under it. Namely: writing deposed states.
Writing deposed states has no way to handle if a state changes
underneath it because the only way to uniquely identify a deposed state
is its index in the deposed array. When destroying deposed resources, we
set the value to `<nil>`. If the array is pruned before the next deposed
destroy, then the indexes have changed, and this can cause a crash.
This PR does the following (with more details below):
* `init()` no longer prunes.
* `ReadState()` always prunes before returning. I can't think of a
scenario where this is unsafe, since generally we can always START
from a pruned state; it's just causing problems to prune
mid-execution.
* Exported State APIs updated to be robust against nil ModuleStates.
Instead, I think we should adopt the following semantics for init/prune
in our structures that support it (Diff, for example). By having
consistent semantics around these functions, we can avoid this in the
future and have set expectations when working with them.
* `init()` (in anything) will only ever be additive, and won't change
ordering or existing values. It won't remove values.
* `prune()` is destructive, expectedly.
* Functions on a structure must not assume a pruned structure 100% of
the time. They must be robust to handle nils. This is especially
important because in many cases values such as `Modules` in state
are exported so end users can simply modify them outside of the
exported APIs.
This PR may expose us to unknown crashes but I've tried to cover our
cases in exposed APIs by checking for nil.
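For illustration, a hypothetical sketch of that convention; the struct and methods are stand-ins, not the real State code:
```
package main

import "fmt"

type moduleState struct {
    Outputs map[string]string
    Deposed []*string
}

// init is purely additive: it fills in nil maps and slices but never removes
// or reorders existing values, so deposed indexes stay stable mid-walk.
func (m *moduleState) init() {
    if m.Outputs == nil {
        m.Outputs = map[string]string{}
    }
    if m.Deposed == nil {
        m.Deposed = []*string{}
    }
}

// prune is explicitly destructive and is only called at safe points, such as
// when a state is first read, never in the middle of a graph walk.
func (m *moduleState) prune() {
    kept := m.Deposed[:0]
    for _, d := range m.Deposed {
        if d != nil {
            kept = append(kept, d)
        }
    }
    m.Deposed = kept
}

func main() {
    id := "deposed-0"
    m := &moduleState{Deposed: []*string{nil, &id}}
    m.init()  // adds the missing Outputs map, leaves Deposed untouched
    m.prune() // now it is safe to drop the nil entry
    fmt.Println(len(m.Deposed), m.Outputs != nil) // 1 true
}
```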
Fixes #10439
When a CBD resource depends on a non-CBD resource, the non-CBD resource
is auto-promoted to CBD. This was done in
cf3a259. This PR makes it so that we
also set the config CBD to true. This causes the proper runtime
execution behavior to occur where we depose state and so on.
So in addition to simple graph edge tricks we also treat the non-CBD
resources as CBD resources.
Fixes #10412
The context wasn't properly adding variable values to the Interpolator
instance which made it so that the `console` command couldn't access
variables set via tfvars and the CLI.
This also adds better test coverage in command itself for this.
Fixes #10338
The destruction step for a resource was including the deposed resources
for _all_ resources with that name (ignoring the "index"). For example:
destroying `aws_instance.foo.0` was also destroying the deposed
resources for `aws_instance.foo.1`.
This changes the config to the deposed transformer to properly include
that index.
This change includes a larger change of changing `stateId` to include
the index. This affected more parts but was ultimately the issue in
question.
When referencing a list of maps variable from within a resource, only
the first list element is included in the plan. This is because GetRaw
can't access the interpolated values. Add some tests to document this
behavior for both Get and GetRaw.
Fixes #10313
The new graph wasn't properly recording resource dependencies to a
specific index of itself. For example: `foo.bar.2` depending on
`foo.bar.0` wasn't shown in the state when it should've been.
This adds a test to verify this and fixes it.
ResourceAddr.Mode wasn't properly set when moving a module, so data
sources would lose the "data." prefix when their module was moved within
the State.
ResourceConfig.Get could previously return (nil, true) when looking up
an interpolated map in a list because of the indexing ambiguity. Make
sure we test that a non-existent value always returns false.
It makes more sense for this to happen in State.prune(). Also move a
redundant pruning from ResourceState.init, and make sure
ResourceState.prune is called from the parent's prune method.
Fixes a case where ResourceConfig.get inadvertently returns a nil value.
Add an integration test where assigning a map to a list via
interpolation would panic.
Setting variables happens before context validation, so it's possible
that the user could be trying to set an incorrect variable type to a
map. Return a useful error rather than panicking.
Ensure that each instance of BasicGraphBuilder gets a name corresponding
to the Builder which created it. This allows us to differentiate the
graphs in the logs.
This doesn't cause any practical issues as far as I'm aware (couldn't
get any test to fail), but caused shadow errors since it wasn't matching
the prior behavior.
Fixes #10122
The simple fix was that we forgot to close `ReadDataApply` for the
provider. But I've always felt that this section of the code was brittle
and I wanted to put in a more robust solution. The `shadow.Close` method
uses reflection to automatically close all values.
People with `uuid()` usage in their configurations would receive shadow
errors every time on plan because the UUID would change.
This is a hacky fix but I also believe it is correct: if a shadow error
contains uuid() then we ignore the shadow error completely. This feels
wrong, but I'll explain why it is likely right:
The "right" feeling solution is to create deterministic random output
across graph runs. This would require using math/rand and seeding it
with the same value each run. However, this alone probably won't work
due to Terraform's parallelism and potential to call uuid() in different
orders. In addition to this, you can't seed crypto/rand, and it's
unlikely that we'll NEVER use crypto/rand in the future even if we
switched uuid() to use math/rand.
Therefore, the solution is simple: if there is no shadow error, no
problem. If there is a shadow error and it contains uuid(), then ignore
it.
The method marks the start of a set of operations on the Graph, with
extra information optionally provided in the second parameter. This
returns a function with a single End method to mark the end of the set
in the logs.
Refactor the existing graph Begin/End Operation calls to use this single
method. Remove the *string types in the marshal structs, these are
strictly informational and don't need to differentiate empty vs unset
strings.
Add calls to DebugOperation for each step while building the graph.
To maintain the same output, the Graph.Dot implementation needs to be
aware of GraphNodeDotter. Copy the interface into the dag package, and
make the Dot marshaler aware of which nodes implemented the interface.
This way we can remove most of the remaining dot code from terraform.
The dot format generation was done with a mix of code from the terraform
package and the dot package. Unify the dot generation code and move it
into the dag package.
Use an intermediate structure to allow a dag.Graph to marshal itself
directly. This structure will be able to marshal directly to JSON, or be
translated to dot format. This way we can record more information about
the graph in the debug logs, and provide a way to translate those logged
structures to dot, which is convenient for viewing the graphs.
This fixes: `TestContext2Apply_moduleDestroyOrder`
The new destroy graph wasn't properly creating edges that happened
_through_ an output; it was only creating the edges for _direct_
dependents.
To fix this, the DestroyEdgeTransformer now creates the full transitive
list of destroy edges by walking all ancestors. This will create more
edges than are necessary but also will no longer miss resources through
an output.
This will detect computed counts (which we don't currently support) and
change the error to be more informative that we don't allow computed
counts. Prior to this, the error would instead be something like
`strconv.ParseInt: "${var.foo}" cannot be parsed as int`.
This turns the new graphs on by default and puts the old graphs behind a
flag `-Xlegacy-graph`. This effectively inverts the current 0.7.x
behavior with the new graphs.
We've incubated most of these for a few weeks now. We've found issues
and we've fixed them, and we've been using these graphs internally for a
while without any major issue. It's time to default them on and make
them part of a beta.
Because we now rely on HIL to do the computed calculation, we must make
sure the type is correct (TypeUnknown). Before, we'd just check for the
UUID in the string.
This changes all variable returns in the interpolater to run it through
`hil.InterfaceToVariable` which handles this lookup for us.
This uses the new NodeApplyableProvider graph nodes. This will just make
it easier for us in the future to adopt new graph transforms by starting
to use the new ones here.
The primary change here is to expect that Config contains computed
values. This introduces `unknownCheckWalker` that does a really basic
reflectwalk to look for computed values and use that for IsComputed.
Before, we had a weird mixture of checks on whether c.Config was simply
missing values to determine where to look. Now we rely on IsComputed
heavily.
This makes all the computed stuff "just work" since HIL uses the same
computed sentinel value (string UUID) and the type differentiates it
from a regular string.
The map output from the module "mod" loses the computed value from the
template when we validate. If the "extra" field is removed from the map,
the validation fails earlier with map "does not have any elements so
cannot determine type".
Apply will work, because the computed value will exist in the map.
terraform: more specific resource references
terraform: outputs need to know about the new reference format
terraform: resources w/o a config still have a referenceable name
This makes the old graph also prune orphan outputs in modules.
This will fix shadow graph errors such as #9905 since the old graph will
also behave correctly in these scenarios.
Luckily, because orphan outputs don't rely on anything, we were able to
simply use the same transformer!
Fixes #9920
This was an issue caught with the shadow graph. Self references in
provisioners were causing a self-edge on destroy apply graphs.
We need to explicitly check that we're not creating an edge to ourself.
This is also how the reference transformer works.
Fixes a shadow graph error found during usage.
The new apply graph was only adding module variables that referenced
data that existed _in the graph_. This isn't a valid optimization since
the data it is referencing may be in the state with no diff, and
therefore available but not in the graph.
This just removes that optimization logic, which causes no failing
tests. It also adds a test that exposes the bug if we had the pruning
logic.
Found via a shadow graph failure:
Provider aliases weren't being configured by the new apply graph.
This was caused by the transform that attaches configs to provider nodes
not being able to handle aliases and therefore not attaching a config.
Added a test to this and fixed it.
Fixes #9444
This appears to be a regression from 0.7.0, but there were no tests
covering it so we missed it and changed the behavior at some point! Oh
no.
This PR makes the ordering of multi-var access (`resource.name.*.attr`)
consistent: it is the ordering of the count, not the lexical ordering of
the value. This allows behavior where two lists are indexed by count
index and can be assumed to be related (for example user data for an aws
instance, as shown in the above referenced issue).
Two new context tests added to cover this case.
Implement debugInfo and the DebugGraph
DebugInfo will be a global variable through which graph debug
information can be written to a compressed archive. The DebugInfo
methods are all safe for concurrent use, and are no-ops with a nil
receiver.
The API outside of the terraform package will be to call SetDebugInfo
to create the archive, and CloseDebugInfo() to properly close the file.
Each write to the archive will be flushed and sync'ed individually, so
in the event of a crash or a missing call to Close, the archive can
still be recovered.
The DebugGraph is a representation of a terraform Graph to be written to
the debug archive, currently in dot format. The DebugGraph also contains
an internal buffer with Printf and Write methods to add to this buffer.
The buffer will be written to an accompanying file in the debug archive
along with the graph.
This also adds a GraphNodeDebugger interface. Any node implementing
`NodeDebug() string` can output information to annotate the debug graph
node, and add the data to the log. This interface may change or be
removed to provide richer options for debugging graph nodes.
The new graph builders all delegate the build to the BasicGraphBuilder.
Having a Name field lets us differentiate the actual builder
implementation in the debug graphs.
The graph transformation we implement around create_before_destroy
needs to re-order all resources that depend on the create_before_destroy
resource. Up until now, we've required that users mark all of these
resources as create_before_destroy. Data sources, however, don't have a
lifecycle block for create_before_destroy, and could not be marked this
way.
This PR checks each DestroyNode that doesn't implement CreateBeforeDestroy
for any ancestors that do implement CreateBeforeDestroy. If there are
any, we inherit the behavior and re-order the graph as such.
Fixes #9840
The new apply graph wasn't properly nesting provisioners. This resulted
in the provisioners being nil when read during apply in the shadow
graph, which caused the crash in the above issue.
The actual cause of this is that the new graphs we're moving towards do
not have any "flattening" (they are flat to begin with): all modules are
in the root graph from the beginning of construction versus building a
number of different graphs and flattening them. The transform that adds
the provisioners wasn't modified to handle already-flat graphs and so
was only adding provisioners to the root module, not children.
The change modifies the `MissingProvisionerTransformer` (primarily) to
support already-flat graphs and add provisioners for all module levels.
Tests are there to cover this as well.
**NOTE:** This PR focuses on fixing that specific issue. I'm going to follow up
this PR with another PR that is more focused on being robust against
crashing (more nil checks, recover() for shadow graph, etc.). In the
interest of focus and keeping a PR reviewable this focuses only on the
issue itself.
Fixes #7975
This changes the InputMode for the CLI to always be:
InputModeProvider | InputModeVar | InputModeVarUnset
Which means:
* Ask for provider variables
* Ask for user variables _that are not already set_
The change is the latter point. Before, we'd only ask for variables if
zero were given. This forced the user to either have no variables set
via the CLI, env vars, or tfvars, or to have ALL variables set, with no
in between. As reported in #7975, this isn't expected behavior.
The new change makes it so that unset variables are always asked for.
Users can retain the previous behavior by setting `-input=false`. This
would ensure that variables set by external sources cover all cases.
For #9618, we added the ability to ignore old diffs that were computed
and removed (because the ultimate value ended up being the same). This
ended up breaking computed list/set logic.
The correct behavior, as is evident by how the other "skip" logics work,
is to set `ok = true` so that the remainder of the logic can run which
handles stuff such as computed lists and sets.
Fixes #6447
This ensures that all variables of type string are consistently
converted to a string value upon running Terraform.
The place this is done is in the `Variables()` call within the
`terraform` package. This is the function responsible for loading and
merging the variables from the various sources and seems ideal for
proper conversion to consistent values for various types. We actually
already had tests to this effect.
This also adds docs that talk about the fake-ish boolean variables
Terraform currently has and about how in future versions we'll likely
support them properly, which can cause BC issues so beware.
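As a rough illustration of the kind of normalization `Variables()` does after merging the sources (the helper is hypothetical and the exact string convention shown is illustrative only):
```
package main

import (
    "fmt"
    "strconv"
)

// toStringValue coerces a merged variable value into a string so that
// string-typed variables behave consistently regardless of their source.
func toStringValue(v interface{}) string {
    switch t := v.(type) {
    case string:
        return t
    case bool:
        return strconv.FormatBool(t)
    case int:
        return strconv.Itoa(t)
    case float64:
        return strconv.FormatFloat(t, 'f', -1, 64)
    default:
        return fmt.Sprintf("%v", t)
    }
}

func main() {
    merged := map[string]interface{}{"enabled": true, "count": 3, "name": "web"}
    for k, v := range merged {
        fmt.Printf("%s = %q\n", k, toStringValue(v))
    }
}
```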
This was causing flaky behavior in our tests because `TF_VAR_x=""` is
actually a valid env var. For tests, we need to actually unset env vars
that haven't been set before.
Fixes #6327
Deposed instances weren't calling PostApply which was causing the counts
for what happened during `apply` to be wrong. This was a simple fix to
ensure we call that hook.
Fixes #5342
The dynamically expanded subgraph wasn't being validated so cycles
weren't being caught here and Terraform would just hang. This fixes
that.
Note that it may make sense to validate at a higher level when the graph
is expanded, but there are certain cases where we actually expect the
graph to potentially be invalid, so this seems safer for now.
Fixes #5826
The `prevent_destroy` lifecycle configuration was not being checked when
the count was decreased for a resource with a count. It was only
checking when attributes changed on pre-existing resources.
This fixes that.
Fixes #5338 (and I'm sure many others)
There is no use case for "simple" variables in Terraform at all so
anytime one is found it should be an error.
There is a _huge_ backwards incompatibility here that was not supposed
to be by design, but I'm sure a lot of people are relying on it: in the
`template_file` data source, this bug allowed you to not escape your
interpolations and have them work. For example:
```
data "template_file" "foo" {
template = "${a}"
vars { a = 12 }
}
```
The above would work, but it shouldn't. The template should have to be
`"$${a}"` (to escape the interpolation).
Because of this BC, I recommend holding this until Terraform 0.8.0 and
documenting it carefully. As part of this PR, I've added some special
error message notes.
This creates a standard package and interface for defining, querying,
and setting experiments (`-X` flags).
I expect we'll want to continue to introduce various features behind
experimental flags. I want to make doing this as easy as possible and I
want to make _removing_ experiments as easy as possible as well.
The goal with this package has been to rely on the compiler enforcing
our experiment references as much as possible. This means that every
experiment is a global variable that must be referenced directly, so
when it is removed you'll get compiler errors wherever the experiment is
referenced.
This also unifies and makes it easy to grab CLI flags to enable/disable
experiments as well as env vars! This way defining an experiment is just
a couple lines of code (documented on the package).
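A hypothetical sketch of that pattern (not the real package): each experiment is a package-level value that must be referenced directly, and one helper wires up both the `-X` flags and env vars.
```
package main

import (
    "flag"
    "fmt"
    "os"
    "strings"
)

type experimentID string

// Removing one of these breaks the build wherever it is still referenced,
// which is exactly the property we want from the compiler.
const (
    XNewApply   experimentID = "new-apply"
    XNewDestroy experimentID = "new-destroy"
)

var all = []experimentID{XNewApply, XNewDestroy}
var enabled = map[experimentID]bool{}

// Enabled reports whether an experiment was turned on via a flag or env var.
func Enabled(id experimentID) bool { return enabled[id] }

// parseExperimentFlags registers a -X<name> flag per experiment and also
// honors a TF_X_<NAME> environment variable for each one.
func parseExperimentFlags(args []string) error {
    fs := flag.NewFlagSet("experiments", flag.ContinueOnError)
    flags := map[experimentID]*bool{}
    for _, id := range all {
        flags[id] = fs.Bool("X"+string(id), false, "enable experiment "+string(id))
    }
    if err := fs.Parse(args); err != nil {
        return err
    }
    for id, on := range flags {
        env := "TF_X_" + strings.ToUpper(strings.ReplaceAll(string(id), "-", "_"))
        enabled[id] = *on || os.Getenv(env) != ""
    }
    return nil
}

func main() {
    parseExperimentFlags([]string{"-Xnew-apply"})
    fmt.Println(Enabled(XNewApply), Enabled(XNewDestroy)) // true false
}
```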
Fixes #3309
There are two primary changes, one to how helper/schema creates diffs
and one to how Terraform compares diffs. Both require careful
understanding.
== 1. helper/schema Changes
helper/schema, given any primitive field (string, int, bool, etc.)
_used to_ create a basic diff when given a computed new value (i.e. from
an unkown interpolation). This would put in the plan that the old value
is whatever the old value was, and the new value was the actual
interpolation. For example, from #3309, the diff showed the following:
```
~ module.test.aws_eip.test-instance.0
instance: "<INSTANCE ID>" => "${element(aws_instance.test-instance.*.id, count.index)}"
```
Then, when running `apply`, the diff would be realized and you would get
a diff mismatch error because it would realize the final value is the
same and remove it from the diff.
**The change:** `helper/schema` now marks unknown primitive values with
`NewComputed` set to true. Semantically this is correct for the diff to
have this information.
== 2. Terraform Diff.Same Changes
Next, the way Terraform compares diffs needed to be updated.
Specifically, the case where the diff from the plan had a NewComputed
primitive and the diff from the apply _no longer has that value_. This
is possible if the computed value ended up being the same as the old
value. This is allowed to pass through.
Together, these fix #3309.
This reverts commit c3a4cff133, reversing
changes made to 791a02e6e4.
This change requires plugin recompilation and we should hold off until a
minor release for that.
This enables the shadow graph since all tests pass!
We also change the destroy node to check the resource type using the
addr since that is always available and reliable. The configuration can
be nil for orphans.
This is necessary to get the shadow working properly with the destroy
graph since the destroy graph doesn't set this field but the end state
is still the same.
This is something that should be determined and done during an apply. It
doesn't make a lot of sense that the plan is doing it (in its current
form at least).
Since it is still very much possible for this to cause problems, this
can be used to disable the shadow graph. We'll purposely not document
this since the goal is to remove this flag as we become more confident
with it.
This enables the new apply graph's resource node to apply data sources.
Data sources appear to only be tested for "refresh", which is likely
where they're set, but they've also been implemented (not my code, and
I'm not trying to edit that code) within the "apply" operation as well.
This adds an apply test to ensure data sources work, and then modifies
the new apply node to support data sources.
It appears data sources have always been coded to work during apply, as
can be verified with this test (no impl. changes were necessary to make
it pass).
This test should be added to ensure our apply graph always works with
data sources as well.
This adds the proper logic for "disabling" providers to the new apply
graph: interpolating and storing the config for inheritance, but not
actually initializing and configuring the provider.
This is important since parent modules will often contain incomplete
provider configurations for the purpose of inheritance that would error
if they were actually attempted to be configured (since they're
incomplete). If the provider is not used, it should be "disabled".
This doesn't explicitly set `rs.Provider` on destroy nodes.
To be honest, I'm not sure why this was done in the first place (git
blame points to 6fda7bb5483a155b8ae1e1e4e4b7b7c4073bc1d9). Tests always
passed without it, and adding it causes other tests to fail. I
should've never changed those other tests.
Removing it now to get tests passing, this also reverts the test changes
made in 8213824962f085279810f04b60b95d1176a3a3f2.
This is a requirement for the parallelism of Terraform to work sanely.
We could deep copy every result but I think this would be unrealistic
and impose a performance cost when it isn't necessary in most cases.
Related to #5254
If the count of a resource is interpolated (i.e. `${var.c}`), then it
must be interpolated before any splat variable using that resource can
be used (i.e. `type.name.*.attr`). The original fix for #5254 is to
always ensure that this is the case.
While working on a new apply builder based on the diff in
`f-apply-builder`, this truth no longer always holds. Rather than always
include such a resource, I believe the correct behavior instead is to
use the state as a source of truth during `walkApply` operations.
This change specifically is scoped to `walkApply` operation
interpolations since we know the state of any multi-variable should be
available. The behavior is less clear for other operations so I left the
logic unchanged from prior versions.
The Deposed slice wasn't being normalized and nil values could be read
in from a state file. Filter out the nils during init. There is
still a bug in copystructure, but that will be addressed separately.
A nil InstanceState within State/Modules/Resources/Deposed will panic
during a deep copy. The panic needs to be fixed in copystructure, but
the nil probably should have been normalized out before we got here too.
There were races with ValidateResource in the provider initializing the
data, which resulted in lost data for the shadow. A new "Init" function
has been added to the shadow structs to support safe concurrent
initialization.
initialization.
This adds a new function to get a unique identifier scoped to the graph
walk in order to identify operations against the same instance. This is
used by the shadow to namespace provider function calls.
We allow the built in context to work as expected and shadow just the
components now. This is better since it allows us to use much more of
the REAL structures.
The arguments passed into Apply, Refresh, Diff could be modified which
caused the shadow comparison later to cause errors. Also, the result
should be deep copied so that it isn't modified.
This is necessary so that the shadow version can actually keep track of
what provider is used for what. Before, providers for different aliases
were just initialized but the factory had no idea. Arguably this is
fine, but when trying to build a shadow graph this presents challenges.
With these changes, we now pass an opaque "uid" through that is used to
keep track of the providers and what real maps to what shadow.
This fixes an issue where orphaned grandchild modules don't properly
inherit their provider configurations from grandparents. I found this
while working on shadow graphs (the shadow graph actually caught an
inconsistency between runs and exposed this bug!), so I'm unsure if this
affects any issue.
To better explain the issue, I'll diagram things.
Here is a hierarchy that _works_ (w/o this PR):
```
root
|-- child1 (orphan)
|-- child2
    |-- grandchild
```
All modules in this case will successfully inherit provider
configurations from "root".
Here is a hierarchy that _doesn't work without this PR_:
```
root
|-- child1 (orphan)
    |-- grandchild (orphan)
```
In this case, `child1` does successfully inherit the provider from root,
but `grandchild` _will not_ unless `child1` had resources. If `child1`
has no resources, it wouldn't inherit anything. This PR fixes that.
A map value read from a config file will be the default
`[]map[string]interface{}` type decoded from HCL. Since this type can't
be applied to a variable, it's likely that it was a simple map. If
there's a single map value, we can pull that out of the slice during
Eval.
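A minimal sketch of that unwrapping; the helper name is illustrative, not the real Eval code:
```
package main

import "fmt"

// normalizeMapValue unwraps the extra slice HCL adds around a block written
// as a map: when there is exactly one element, treat it as the intended map.
func normalizeMapValue(v interface{}) interface{} {
    if s, ok := v.([]map[string]interface{}); ok && len(s) == 1 {
        return s[0]
    }
    return v
}

func main() {
    decoded := []map[string]interface{}{{"somekey": "somevalue"}}
    fmt.Println(normalizeMapValue(decoded)) // map[somekey:somevalue]
}
```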
This commit improves the error logging for "Diffs do not match" errors
by using the go-spew library to ensure that the structures are presented
fully and in a consistent order. This allows use of the command line
diff tool to analyse what is wrong.
This implements DeepCopy, still need to implement Equals to make this
more useful. Coming in the next commit but this still has its own full
functionality + tests.
A JSON object will be decoded as a list with a single map value. This
will be properly coerced later, so let it through the initial config
semantic checks.
A race when accessing Provisioner.RawConfig can cause unexpected output
for provisioners that interpolate variables. Use RawConfig.Copy which
needs to acquire the RawConfig mutex to get the values.
Fixes #8890
In an attempt to always show "id" as computed we were producing a
synthetic diff for it, but this causes problems when the id attribute for
a particular data source is actually settable in configuration, since it
masks the setting from the config and forces it to always be empty.
Instead, we'll set it conditionally so that a value provided in the config
can shine through when available.
We can no longer copy an InstanceState via a simple
dereference+assignment because of the mutex, which can't be copied. This
adds a set method to properly set all fields from another InstanceState,
and take the appropriate locks while doing so.
Add locks to the state structs to handle concurrency during the graph
walks. We can't embed the mutexes due to serialization constraints when
communicating with providers, so expose the Lock/Unlock methods
manually.
Use copystructure.LockedCopy to ensure locks are honored.
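A stripped-down illustration of the approach (a sketch, not the real state structs): the mutex is a named, non-embedded field so it stays out of what gets serialized to providers, and explicit Lock/Unlock methods are exposed for the graph walk.
```
package main

import (
    "fmt"
    "sync"
)

type instanceState struct {
    ID         string
    Attributes map[string]string

    mu sync.Mutex // named, not embedded; never copied by assignment
}

func (s *instanceState) Lock()   { s.mu.Lock() }
func (s *instanceState) Unlock() { s.mu.Unlock() }

// Set copies all fields from another value while holding both locks,
// instead of a plain dereference+assignment that would copy the mutex too.
func (s *instanceState) Set(other *instanceState) {
    s.Lock()
    defer s.Unlock()
    other.Lock()
    defer other.Unlock()

    s.ID = other.ID
    s.Attributes = make(map[string]string, len(other.Attributes))
    for k, v := range other.Attributes {
        s.Attributes[k] = v
    }
}

func main() {
    a := &instanceState{ID: "i-123", Attributes: map[string]string{"ami": "abc"}}
    b := &instanceState{}
    b.Set(a)
    fmt.Println(b.ID, b.Attributes["ami"])
}
```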
Fixes issue where a resource marked as tainted with no other attribute
diffs would never show up in the plan or apply as needing to be
replaced.
One unrelated test needed updating due to a quirk in the testDiffFn
logic - it adds a "type" field diff if the diff is non-Empty. NBD
Fix checksum issue with remote state
If we read a state file with "null" objects in a module and they become
initialized to an empty map the state file may be written out with empty
objects rather than "null", changing the checksum. If we can detect
this, increment the serial number to prevent a conflict in atlas.
Our fakeAtlas test server now needs to decode the state directly rather
than using the ReadState function, so as to be able to read the state
unaltered.
The terraform.State data structures have initialization spread out
throughout the package. More thoroughly initialize State during
ReadState, and add a call to init() during WriteState as another
normalization safeguard.
Expose State.init through an exported Init() method, so that a new State
can be completely realized outside of the terraform package.
Additionally, the internal init now completely walks all internal state
structures ensuring that all maps and slices are initialized. While it
was mentioned before that the `init()` methods are problematic with too
many call sites, expanding this out better exposes the entry points that
will need to be refactored later for improved concurrency handling.
The State structures had a mix of `omitempty` fields. Remove omitempty
for all maps and slices as part of this normalization process. Make
Lineage mandatory, which is now explicitly set in some tests.
Set the default log package output to ioutil.Discard during tests if the
`-v` flag isn't set. If we are verbose, then apply the filter according
to the TF_LOG env variable.
When targeting, only Addressable untargeted nodes were being removed
from the graph. Variable nodes are not directly Addressable, so they
were hanging around. This caused problems with module variables that
referred to Resource nodes. The Resource node would be filtered out of
the graph, but the module Variable node would not, so it would try to
interpolate during the graph walk and be unable to find its referent.
This would present itself as strange "cannot find variable" errors for
variables that were uninvolved with the currently targeted set of
resources.
Here, we introduce a new interface that can be implemented by graph
nodes to indicate they should be filtered out from targeting even though
they are not directly addressable themselves.
The behaviour whereby outputs for a particular nested module can be
output was broken by the changes for lists and maps. This commit
restores the previous behaviour by passing the module path into the
outputsAsString function.
We also add a new test of this since the code path for individual output
vs all outputs for a module has diverged.
This PR fixes #7824, which crashed when applying a plan file. The bug is
that a map coming from the HCL parser reifies as a
[]map[string]interface{}, but the variable saved in the plan file does
not. We now cover both cases.
Fixes #7824.
Terraform 0.7 introduces lists and maps as first-class values for
variables, in addition to string values which were previously available.
However, there was previously no way to override the default value of a
list or map, and the functionality for overriding specific map keys was
broken.
Using the environment variable method for setting variable values, there
was previously no way to give a variable a value of a list or map. These
now support HCL for individual values - specifying:
TF_VAR_test='["Hello", "World"]'
will set the variable `test` to a two-element list containing "Hello"
and "World". Specifying
TF_VAR_test_map='{"Hello = "World", "Foo" = "bar"}'
will set the variable `test_map` to a two-element map with keys "Hello"
and "Foo", and values "World" and "bar" respectively.
The same logic is applied to `-var` flags, and the files parsed by
`-var-file` ("autoVariables").
Note that care must be taken to not run into shell expansion for `-var`
flags and environment variables.
We also merge map keys where appropriate. The override syntax has
changed (to be noted in CHANGELOG as a breaking change), so several
tests needed their syntax updating from the old `amis.us-east-1 =
"newValue"` style to the `amis = { "us-east-1" = "newValue" }` style as
defined in TF-002.
In order to continue supporting the `-var "foo=bar"` type of variable
flag (which is not valid HCL), a special case error is checked after HCL
parsing fails, and the old code path runs instead.
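Roughly, the fallback looks like this. The sketch assumes the hashicorp/hcl package's `Decode` helper; the surrounding function is illustrative, not Terraform's real flag handling:
```
package main

import (
    "fmt"
    "strings"

    "github.com/hashicorp/hcl"
)

// parseVarFlag tries to parse a -var / TF_VAR_ style assignment as HCL so
// that lists and maps work, and falls back to the old plain key=value
// behavior when the value is not valid HCL (e.g. -var "foo=bar").
func parseVarFlag(raw string) (string, interface{}, error) {
    idx := strings.Index(raw, "=")
    if idx == -1 {
        return "", nil, fmt.Errorf("missing '=' in %q", raw)
    }
    key, value := raw[:idx], raw[idx+1:]

    var decoded map[string]interface{}
    if err := hcl.Decode(&decoded, fmt.Sprintf("%s = %s", key, value)); err == nil {
        return key, decoded[key], nil
    }

    // Not valid HCL: keep the legacy behavior and treat it as a raw string.
    return key, value, nil
}

func main() {
    for _, raw := range []string{`test=["Hello", "World"]`, "foo=bar"} {
        k, v, _ := parseVarFlag(raw)
        fmt.Printf("%s => %#v\n", k, v)
    }
}
```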
We conditionally format version with VersionPrerelease in a number of
places. Add a package-level function where we can unify the version
format. Replace most of the version formatting in terraform, but leave
the few instances set from the top-level package to make sure we don't
break anything before release.
This adds some unit tests for config maps with dots in the key values.
We check for maps with keys which have overlapping names. There are
however still issues with nested maps which create overlapping flattened
names, as well as nested lists with dots in the key.
This is the first step in allowing overrides of map and list variables.
We convert Context.variables to map[string]interface{} from
map[string]string and fix up all the call sites.
The report in #7378 led us into a deep rabbit hole that turned out to
expose a bug in the graph walk implementation being used by the
`NoopTransformer`. The problem ended up being when two nodes in a single
dependency chain both reported `Noop() -> true` and needed to be
removed. This was breaking the walk and preventing the second node from
ever being visited.
Fixes #7378
Some of the tests for splat syntax were from the pre-list-and-map world,
and effectively flattened the values if interpolating a resource value
which was itself a list.
We now set the expected values correctly so that an interpolation like
`aws_instance.test.*.security_group_ids` now returns a list of lists.
We also fix the implementation to correctly deal with maps.
This set of changes addresses two bug scenarios:
(1) When an ignored change canceled a resource replacement, any
downstream resources referencing computed attributes on that resource
would get "diffs didn't match" errors. This happened because the
`EvalDiff` implementation was calling `state.MergeDiff(diff)` on the
unfiltered diff. Generally this is what you want, so that downstream
references catch the "incoming" values. When there's a potential for the
diff to change, though, this results in problems with references.
Here we solve this by doing away with the separate `EvalNode` for
`ignore_changes` processing and integrating it into `EvalDiff`. This
allows us to only call `MergeDiff` with the final, filtered diff.
(2) When a resource had an ignored change but was still being replaced
anyways, the diff was being improperly filtered. This would cause
problems during apply when not all attributes were available to perform
the replacement.
We solve that by deferring actual attribute removal until after we've
decided that we do not have to replace the resource.
As part of evaluating a variable block, there is a pass made on unknown
keys, setting them to the config.DefaultVariableValue sentinel value.
Previously this only took into account one level of nesting and assumed
all values were strings.
This commit now traverses the unknown keys via lists and maps and sets
unknown map keys surgically.
Fixes #7241.
The reproduction of issue #7421 involves a list of maps being passed to
a module, where one or more of the maps has a value which is computed
(for example, from another resource). There is a failure at the point of
use (via lookup interpolation) of the computed value of the form:
```
lookup: lookup failed to find 'elb' in:
${lookup(var.services[count.index], "elb")}
```
Where 'elb' is the key of the map.
* Fix nested module "unknown variable" during destroy
During a destroy with nested modules, accessing a variable between them
causes an "unknown variable accessed" during destroy.
Passing a literal map to a module looks like this in HCL:
module "foo" {
source = "./foo"
somemap {
somekey = "somevalue"
}
}
The HCL parser always wraps an extra list around the map, so we need to
remove that extra list wrapper when the parameter is indeed of type "map".
Fixes #7140
In scenarios with a lot of small configs, it's tedious to fan out actual
dir trees in a test-fixtures dir. It also spreads out the context of the
test - requiring the reader fetch a bunch of scattered 3 line files in
order to understand what is being tested.
Our config loading code still only reads from disk, but in
the `helper/resource` acc test framework we work around this by writing
inline config to temp files and loading it from there. This helper is
based on that strategy.
Eventually it'd be great to be able to build up a `module.Tree` from
config directly, but this gets us the functionality today.
Example Usage:
```
testModuleInline(t, map[string]string{
  "top.tf": `
    module "middle" {
      source = "./middle"
    }
  `,
  "middle/mid.tf": `
    module "bottom" {
      source = "./bottom"
      amap {
        foo = "bar"
      }
    }
  `,
  "middle/bottom/bot.tf": `
    variable "amap" {
      type = "map"
    }
  `,
})
```
In #7170 we found two scenarios where the type checking done during the
`context.Validate()` graph walk was circumvented, and the subsequent
assumption of type safety in the provider's `Diff()` implementation
caused panics.
Both scenarios have to do with interpolations that reference Computed
values. The sentinel we use to indicate that a value is Computed does
not carry any type information with it yet.
That means that an incorrect reference to a list or a map in a string
attribute can "sneak through" validation only to crop up...
1. ...during Plan for Data Source References
2. ...during Apply for Resource references
In order to address this, we:
* add high-level tests for each of these two scenarios in `provider/test`
* add context-level tests for the same two scenarios in `terraform`
(these tests proved _really_ tricky to write!)
* place an `EvalValidateResource` just before `EvalDiff` and `EvalApply` to
catch these errors
* add some plumbing to `Plan()` and `Apply()` to return validation
errors, which were previously only generated during `Validate()`
* wrap unit-tests around `EvalValidateResource`
* add an `IgnoreWarnings` option to `EvalValidateResource` to prevent
active warnings from halting execution on the second-pass validation
Eventually, we might be able to attach type information to Computed
values, which would allow for these errors to be caught earlier. For
now, this solution keeps us safe from panics and raises the proper
errors to the user.
Fixes #7170
The Outputs and Resources maps in the state modules are expected to be
non-nil, and initialized that way when a new module is added to the
state. The V1->V2 upgrade was setting the maps to nil if the len == 0.
Always increment the state serial whenever we upgrade the state version.
This prevents possible version conflicts between local and remote state
when one has been upgraded, but the serial numbers match.
Just like computed sets, computed maps may have both different values
and different cardinality after they're computed. Remove the computed
maps and the values from the compared diffs.
This commit fixes the test "TestContext2Input_moduleComputedOutputElement"
by ensuring that we treat a count of zero and non-reified resources
independently rather than returning an empty list for both, which
results in an interpolation failure when using the element function or
indexing.
This test illustrates a failure which occurs during the Input walk, if
an interpolation is used with the input of a splat operation resulting
in a multi-variable.
The bug was found during use of the RC2, but does not correspond to an
open issue at present.
The implementation of Stringer on OutputState previously assumed outputs
may only be strings - we now no longer cast to string, instead using the
built in formatting directives.
The previous mechanism for testing state threw away the mutation made on
the state by calling State() twice - this commit corrects the test to
match the comment.
In addition, we replace the custom copying logic with the copystructure
library to simplify the code.
In cases where we construct state directly rather than reading it via
the usual methods, we need to ensure that the necessary maps are
initialized correctly.
When checking for "same" values in a computed hash, not only might some
of the values differ between versions changing the hash, but there may be
fields not included at all in the original map, and different overall
counts.
Instead of trying to match individual set fields with different hashes,
remove any hashed key longer than the computed key with the same base
name.
Previously, interpolation of multi-variables was returning an empty
variable if the resource count was 0. The empty variable was defined as
TypeString, Value "". This means that empty resource counts fail type
checking for interpolation functions which operate on lists.
Instead, return an empty list if the count is 0. A context test tests
this against further regression. Also add a regression test covering the
case of a single count multi-variable.
In order to make the context testing framework deal with this change it
was necessary to special case empty lists in the test diff function.
Fixes #7002
For `terraform destroy`, we currently build up the same graph we do for
`plan` and `apply` and we do a walk with a special Diff that says
"destroy everything".
We have fought the interpolation subsystem time and again through this
code path. Beginning in #2775 we gained a new feature to selectively
prune out problematic graph nodes. The past chain of destroy fixes I
have been involved with (#6557, #6599, #6753) have attempted to massage
the "noop" definitions to properly handle the edge cases reported.
"Variable is depended on by provider config" is another edge case we add
here and try to fix.
This dive only makes me more convinced that the whole `terraform
destroy` code path needs to be reworked.
For now, I went with a "surgical strike" approach to the problem
expressed in #7047. I found a couple of issues with the existing
Noop and DestroyEdgeInclude logic, especially with regards to
flattening, but I'm explicitly ignoring these for now so we can get this
particular bug fixed ahead of the 0.7 release. My hope is that we can
circle around with a fully specced initiative to refactor `terraform
destroy`'s graph to be more state-derived than config-derived.
Until then, this fixes #7407.
The work integrated in hashicorp/terraform#6322 silently broke the
ability to use remote state correctly. This commit adds a fix for that,
making use of the work integrated in hashicorp/terraform#7124.
In order to deal with outputs which are complex structures, we use a
forked version of the flatmap package - the difference in the version
this commit vs the github.com/hashicorp/terraform/flatmap package is
that we add in an additional key for map counts which state requires.
Because we bypass the normal helper/schema mechanism, this is not set
for us.
Because of the HIL type checking of maps, values must be of a homogeneous
type. This is unfortunate, as it means we can no longer refer to outputs
as:
${terraform_remote_state.foo.output.outputname}
Instead we had to bring them to the top level namespace:
${terraform_remote_state.foo.outputname}
This actually does lead to better overall usability - and the BC
breakage is made better by the fact that indexing would have broken the
original syntax anyway.
We also add a real-world test and assert against specific values. Tests
which were previously acceptance tests are now run as unit tests, so
regression should be identified at a much earlier stage.
This commit makes two changes: map interpolation can now read flatmapped
structures, such as those present in remote state outputs, and lists are
sorted by the index instead of the value.
The lineage of a state is an identifier shared by a set of states whose
serials are meaningfully comparable because they are produced by
progressive Refresh/Apply operations from the same initial empty state.
This is initialized as a type-4 (random) UUID when a new state is
initialized and then preserved on all other changes.
Since states before this change will not have lineage but users may wish
to set a lineage for an existing state in order to get the safety
benefits it will grow to imply, an empty lineage is considered to be
compatible with all lineages.
This commit makes the current Terraform state version 3 (previously 2),
and a migration process as part of reading v2 state. For the most part
this is unnecessary: helper/schema will deal with upgrading state for
providers written with that framework. However, for providers which
implemented the resource model directly, this gives a best-efforts
attempt at lossless upgrade.
The heuristics used to change the count of a map from the .# key to the
.% key are as follows:
- if the flat map contains any non-numeric keys, we treat it as a
map
- if the map is empty it must be computed or optional, so we remove
it from state
There is a known edge condition: maps with all-numeric keys are
indistinguishable from sets without access to the schema. They will need
manual conversion or may result in spurious diffs.
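A hedged sketch of those heuristics (the real logic lives in the state upgrade code; this is only an illustration):
```
package main

import (
    "fmt"
    "strconv"
    "strings"
)

// upgradeMapCounts rewrites "name.#" count keys to "name.%" for attributes
// that are clearly maps (they have non-numeric child keys) and drops empty
// containers. All-numeric keys are left alone: without the schema they are
// indistinguishable from sets.
func upgradeMapCounts(attrs map[string]string) {
    for k, v := range attrs {
        if !strings.HasSuffix(k, ".#") {
            continue
        }
        base := strings.TrimSuffix(k, "#")

        if v == "0" {
            // Empty: must be computed or optional, so remove it from state.
            delete(attrs, k)
            continue
        }

        isMap := false
        for child := range attrs {
            if child == k || !strings.HasPrefix(child, base) {
                continue
            }
            first := strings.SplitN(strings.TrimPrefix(child, base), ".", 2)[0]
            if _, err := strconv.Atoi(first); err != nil {
                isMap = true
                break
            }
        }
        if isMap {
            attrs[base+"%"] = v
            delete(attrs, k)
        }
    }
}

func main() {
    attrs := map[string]string{
        "tags.#":    "2",
        "tags.Name": "web",
        "tags.Env":  "prod",
        "ports.#":   "2",
        "ports.0":   "80",
        "ports.1":   "443",
        "empty.#":   "0",
    }
    upgradeMapCounts(attrs)
    fmt.Println(attrs)
}
```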
The flatmapped representation of state prior to this commit encoded maps
and lists (and therefore by extension, sets) with a key corresponding to
the number of elements, or the unknown variable indicator under a .# key
and then individual items. For example, the list ["a", "b", "c"] would
have been encoded as:
listname.# = 3
listname.0 = "a"
listname.1 = "b"
listname.2 = "c"
And the map {"key1": "value1", "key2", "value2"} would have been encoded
as:
mapname.# = 2
mapname.key1 = "value1"
mapname.key2 = "value2"
Sets use the hash code as the key - for example a set with a (fictional)
hashcode calculation may look like:
setname.# = 2
setname.12312512 = "value1"
setname.56345233 = "value2"
Prior to the work done to extend the type system, this was sufficient
since the internal representation of these was effectively the same.
However, following the separation of maps and lists into distinct
first-class types, this encoding presents a problem: given a state file,
it is impossible to tell the encoding of an empty list and an empty map
apart. This presents problems for the type checker during interpolation,
as many interpolation functions will operate on only one of these two
structures.
This commit therefore changes the representation in state of maps to use
a "%" as the key for the number of elements. Consequently the map above
will now be encoded as:
mapname.% = 2
mapname.key1 = "value1"
mapname.key2 = "value2"
This has the effect of an empty list (or set) now being encoded as:
listname.# = 0
And an empty map now being encoded as:
mapname.% = 0
Therefore we can eliminate some nasty guessing logic from the resource
variable supplier for interpolation, at the cost of having to migrate
state up front (to follow in a subsequent commit).
In order to reduce the number of potential situations in which resources
would be "forced new", we continue to accept "#" as the count key when
reading maps via helper/schema. There is no situation under which we can
allow "#" as an actual map key in any case, as it would not be
distinguishable from a list or set in state.
The mapstructure library has a regrettable backward compatibility
concern whereby a WeakDecode of []interface{}{} into a target of
map[string]interface{} yields an empty map rather than an error. One
possibility is to switch to using Decode instead of WeakDecode, but this
loses the nice handling of type conversion, requiring a large volume of
code to be added to Terraform or HIL in order to retain that behaviour.
Instead we add a DecodeHook to our usage of the mapstructure library
which checks for decoding []interface{}{} or []string{} into a map and
returns an error instead.
This has the effect of defeating the code added to retain backwards
compatibility in mapstructure, giving us the correct (for our
circumstances) behaviour of Decode for empty structures and the type
conversion of WeakDecode.
The code is identical to that in the HIL library, and packaged into a
helper.
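A simplified sketch of such a hook (it rejects any slice-to-map decode rather than only the empty-slice cases, and is not the packaged helper itself):
```
package main

import (
	"fmt"
	"reflect"

	"github.com/mitchellh/mapstructure"
)

// rejectSliceToMap is a simplified decode hook: it errors on any attempt to
// decode a slice into a map, which WeakDecode would otherwise silently turn
// into an empty map.
func rejectSliceToMap(from reflect.Kind, to reflect.Kind, data interface{}) (interface{}, error) {
	if from == reflect.Slice && to == reflect.Map {
		return nil, fmt.Errorf("cannot decode a list into a map")
	}
	return data, nil
}

// weakDecode keeps WeakDecode's type conversions while defeating its
// empty-slice-to-map compatibility behaviour.
func weakDecode(input interface{}, target interface{}) error {
	decoder, err := mapstructure.NewDecoder(&mapstructure.DecoderConfig{
		DecodeHook:       rejectSliceToMap,
		WeaklyTypedInput: true,
		Result:           target,
	})
	if err != nil {
		return err
	}
	return decoder.Decode(input)
}
```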
This removes support for the V0 binary state format which was present in
Terraform prior to 0.3. We still check for the file type and present an
error message explaining to the user that they can upgrade it using a
prior version of Terraform.
This is an effort to address hashicorp/terraform#516.
Adding the Sensitive attribute to the resource schema, opening up the
ability for resource maintainers to mark some fields as sensitive.
Sensitive fields are hidden in the output, and, possibly in the future,
could be encrypted.
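For example, a provider author might mark a field like this (a minimal helper/schema sketch; the field name is made up):
```
package example

import "github.com/hashicorp/terraform/helper/schema"

// exampleSchema is a hypothetical resource schema showing a field marked as
// Sensitive so that its value is hidden in output.
func exampleSchema() map[string]*schema.Schema {
	return map[string]*schema.Schema{
		"password": {
			Type:      schema.TypeString,
			Required:  true,
			Sensitive: true,
		},
	}
}
```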
During acceptance tests of some of the first data sources (see
hashicorp/terraform#6881 and hashicorp/terraform#6911),
"unknown resource type" errors have been coming up. Traced it down to
the ResourceCountTransformer, which transforms destroy nodes to a
graphNodeExpandedResourceDestroy node. This node's EvalTree() was still
indiscriminately using EvalApply for all resource types, versus
EvalReadDataApply. This accounts for both cases via EvalIf.
Previously the plan phase would produce a data diff only if no state was
already present. However, this is a faulty approach because a state will
already be present in the case where the data resource depends on a
managed resource that existed in state during refresh but became
computed during plan, due to a "forces new resource" diff.
Now we will produce a data diff regardless of the presence of the state
when the configuration is computed during the plan phase.
This fixes#6824.
This means it’s shown correctly in a plan and takes into account any
actions that are dependent on the tainted resource and, vice versa, any
actions that the tainted resource depends on.
So this changes the behaviour from saying "this resource is tainted, so
just forget about it and make sure it gets deleted in the background"
to saying "I want that resource to be recreated" (taking into account the
existing resource and its place in the graph).
Earlier we had a bug where data resources were not removed from the
state during a destroy. This was fixed in cd0c452, and this test will
hopefully make sure it stays fixed.
Adding walkValidate to the EvalTree operations, and removing the
walkValidate guard from the Interpolater.valueModuleVar allows the
values to be interpolated for Validate.
Variables weren't being interpolated during the Input phase, causing a
syntax error on the interpolation string. Adding `walkInput` to the
EvalTree operations prevents skipping the interpolation step.
cd0c452 contained a bug where the creation diff for a data resource was
put into a new local variable within the else block rather than into the
diff variable in the parent scope, causing a null diff to always be
produced.
This restores the expected behavior: a computed data resource appears in
the diff, so it can then be fetched during the apply walk.
Apparently there's been a regression in the creation of data resource
diffs: they aren't showing up in the plan at all.
As a first step to fixing this, this is an intentionally-failing test
that proves it's broken.
Previously the "planDestroy" pass would correctly produce a destroy diff,
but the "apply" pass would just ignore it and make a fresh diff, turning
it back into a "create" because data resources are always eager to
refresh.
Now we consider the previous diff when re-diffing during apply and so
we can preserve the plan to destroy and then ultimately actually "destroy"
the data resource (remove from the state) when we get to ReadDataApply.
This ensures that the state is left empty after "terraform destroy";
previously we would leave behind data resource states.
Building on b10564a, this adds tweaks that allow the module var count
search to act recursively, handling the situation where something
like var.top gets passed to module middle as var.middle, and then to
module bottom as var.bottom, which is then used in a resource count.
A new problem was introduced by the prior fixes for destroy
interpolation messages: when resources depend on module variables with
a _count_ attribute, the variable becomes crucial for properly
building the graph - even in destroys. So removing all module variables
from the graph as noops was overzealous.
By borrowing the logic in `DestroyEdgeInclude` we are able to determine
if we need to keep a given module variable relatively easily.
I'd like to overhaul the `Destroy: true` implementation so that it does
not depend on config at all, but I want to continue for now with the
targeted fixes that we can backport into the 0.6 series.
This commit forward ports the changes made for 0.6.17, in order to store
the type and sensitive flag against outputs.
It also refactors the logic of the import for V0 to V1 state, and
fixes up the call sites of the new format for outputs in V2 state.
Finally we fix up tests which did not previously set a state version
where one is required.
Provider nodes interpolate their config during the input walk, but this
is very early and so it's pretty likely that any resources referenced are
entirely absent from the state.
As a special case then, we tolerate the normally-fatal case of having
an entirely missing resource variable so that the input walk can complete,
albeit skipping the providers that have such interpolations.
If these interpolations end up still being unresolved during refresh
(e.g. because the config references a resource that hasn't been created
yet) then we will catch that error on the refresh pass, or indeed on the
plan pass if -refresh=false is used.
The ResourceAddress struct grows a new "Mode" field to match with
Resource, and its parser learns to recognize the "data." prefix so it
can set that field.
Allows -target to be applied to data sources, although that is arguably
not a very useful thing to do. Other future uses of resource addressing,
like the state plumbing commands, may be better uses of this.
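Sketched with a simplified structure (not the actual ResourceAddress parser), recognizing the prefix looks roughly like this:
```
package main

import "strings"

// ResourceMode distinguishes the two resource namespaces; the names here
// mirror the idea only and are not the actual Terraform types.
type ResourceMode int

const (
	ManagedResourceMode ResourceMode = iota
	DataResourceMode
)

// parseMode strips an optional "data." prefix from a resource address and
// reports which namespace the address refers to. Illustrative only.
func parseMode(addr string) (ResourceMode, string) {
	if rest := strings.TrimPrefix(addr, "data."); rest != addr {
		return DataResourceMode, rest
	}
	return ManagedResourceMode, addr
}
```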
Previously they would get left behind in the state because we had no
support for planning their destruction. Now we'll create a "destroy" plan
and act on it by just producing an empty state on apply, thus ensuring
that the data resources don't get left behind in the state after
everything else is gone.
The handling of data "orphans" is simpler than for managed resources
because the only thing we need to deal with is our own state, and the
validation pass guarantees that by the time we get to refresh or apply
the instance state is no longer needed by any other resources and so
we can safely drop it with no fanfare.
This implements the main behavior of data resources, including both the
early read in cases where the configuration is non-computed and the split
plan/apply read for cases where full configuration can't be known until
apply time.
The key difference between data and managed resources is in their
respective lifecycles. Now the expanded resource EvalTree switches on
the resource mode, generating a different lifecycle for each mode.
For this initial change only managed resources are implemented, using the
same implementation as before; data resources are no-ops. The data
resource implementation will follow in a subsequent change.
Data resources are a separate namespace of resources from managed
resources, so we need to call a different provider method depending on
what mode of resource we're visiting.
Managed resources use ValidateResource, while data resources use
ValidateDataSource, since at the provider level of abstraction each
provider has separate sets of resources and data sources respectively.
Once a data resource gets into the state, the state system needs to be
able to parse its id to match it with resources in the configuration.
Since data resources live in a separate namespace from managed resources,
the extra "mode" discriminator is required to specify which namespace
we're talking about, just like we do in the resource configuration.
This is a breaking change to the ResourceProvider interface that adds the
new operations relating to data sources.
DataSources, ValidateDataSource, ReadDataDiff and ReadDataApply are the
data source equivalents of Resources, Validate, Diff and Apply (respectively)
for managed resources.
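Sketched as a standalone interface for illustration (the signatures here are approximate and may not match the real ResourceProvider definitions exactly):
```
package example

import "github.com/hashicorp/terraform/terraform"

// dataSourceOperations sketches the shape of the additions described above;
// it is illustrative only, not the actual interface.
type dataSourceOperations interface {
	// DataSources lists supported data sources, analogous to Resources.
	DataSources() []terraform.DataSource

	// ValidateDataSource is the data source analogue of ValidateResource.
	ValidateDataSource(t string, c *terraform.ResourceConfig) ([]string, []error)

	// ReadDataDiff plans the read, analogous to Diff.
	ReadDataDiff(info *terraform.InstanceInfo, c *terraform.ResourceConfig) (*terraform.InstanceDiff, error)

	// ReadDataApply performs the read, analogous to Apply.
	ReadDataApply(info *terraform.InstanceInfo, d *terraform.InstanceDiff) (*terraform.InstanceState, error)
}
```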
The diff/apply model seems at first glance a rather strange workflow for
read-only resources, but implementing data resources in this way allows them
to fit cleanly into the standard plan/apply lifecycle in cases where the
configuration contains computed arguments and thus the read must be deferred
until apply time.
Along with breaking the interface, we also fix up the plugin client/server
and helper/schema implementations of it, which are all of the callers
used when provider plugins use helper/schema. This would be a breaking
change for any provider plugin that directly implements the provider
interface, but no known plugins do this and it is not recommended.
At the helper/schema layer the implementer sees ReadDataApply as a "Read",
as opposed to "Create" or "Update" as in the managed resource Apply
implementation. The planning mechanics are handled entirely within
helper/schema, so that complexity is hidden from the provider implementation
itself.
The fix that landed in #6557 was unfortunately the wrong subset of the
work I had been doing locally, and users of the attached bugs are still
reporting problems with Terraform v0.6.16.
At the very last step, I attempted to scope down both the failing test
and the implementation to their bare essentials, but ended up with a
test that did not exercise the root of the problem and a subset of the
implementation that was insufficient for a full bugfix.
The key thing I removed from the test was a _referencing output_ for the
module, which is what breaks down the #6557 solution.
I've re-tested the examples in #5440 and #3268 to verify this solution
does indeed solve the problem.
- Fix sensitive outputs for lists and maps
- Fix test prelude which was missed during conflict resolution
- Fix `terraform output` to match old behaviour and not have outputs
header and colouring
- Bump timeout on TestAtlasClient_UnresolvableConflict
This adds a test and the support necessary to read from native maps
passed as variables via interpolation - for example:
```
resource ...... {
  mapValue = "${var.map}"
}
```
We also add support for interpolating maps from the flat-mapped resource
config, which is necessary to support assignment of computed maps, which
is now valid.
Unfortunately there is no good way to distinguish between a list and a
map in the flatmap. In lieu of changing that representation (which is
risky), we assume that if all the keys are numeric, this is intended to
be a list, and if not it is intended to be a map. This does preclude
maps which have purely numeric keys, which should be noted as a
backwards compatibility concern.
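The heuristic amounts to something like this (an illustrative helper, not the real implementation):
```
package main

import "strconv"

// looksLikeList reports whether every sub-key of a flatmapped collection is
// numeric, in which case we assume a list; otherwise we assume a map.
// Illustrative helper only.
func looksLikeList(subKeys []string) bool {
	for _, k := range subKeys {
		if _, err := strconv.Atoi(k); err != nil {
			return false
		}
	}
	return true
}
```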
This commit adds support for native list variables and outputs, building
up on the previous change to state. Interpolation functions now return
native lists in preference to StringList.
List variables are defined like this:
variable "test" {
# This can also be inferred
type = "list"
default = ["Hello", "World"]
}
output "test_out" {
value = "${var.a_list}"
}
This results in the following state:
```
...
"outputs": {
"test_out": [
"hello",
"world"
]
},
...
```
And the result of terraform output is as follows:
```
$ terraform output
test_out = [
hello
world
]
```
Using the output name, an xargs-friendly representation is output:
```
$ terraform output test_out
hello
world
```
The output command also supports indexing into the list (with
appropriate range checking and no wrapping):
```
$ terraform output test_out 1
world
```
Along with maps, list outputs from one module may be passed as variables
into another, removing the need for the `join(",", var.list_as_string)`
and `split(",", var.list_as_string)` which was previously necessary in
Terraform configuration.
This commit also updates the tests and implementations of built-in
interpolation functions to take and return native lists where
appropriate.
A backwards compatibility note: previously the concat interpolation
function was capable of concatenating either strings or lists. The
strings use case was deprecated a long time ago but still remained.
Because we cannot return `ast.TypeAny` from an interpolation function,
this use case is no longer supported for strings - `concat` is only
capable of concatenating lists. This should not be a huge issue - the
type checker picks up incorrect parameters, and the native HIL string
concatenation - or the `join` function - can be used to replicate the
missing behaviour.
This changes the representation of maps in the interpolator from the
dotted flatmap form of a string variable named "var.variablename.key"
per map element to use native HIL maps instead.
This involves porting some of the interpolation functions in order to
keep the tests green, and adding support for map outputs.
There is one backwards incompatibility: as a result of an implementation
detail of maps, one could access an indexed map variable using the
syntax "${var.variablename.key}".
This is no longer possible - instead the HIL native index syntax
"${var.variablename["key"]}" must be used. This was previously
documented, (though not heavily used) so it must be noted as a backward
compatibility issue for Terraform 0.7.
This commit adds the groundwork for supporting module outputs of types
other than string. In order to do so, the state version is increased
from 1 to 2 (internally the code refers to these as V2 and V3, since the
first, binary state format counts as V1).
Tests are added to ensure that V2 (1) state is upgraded to V3 (2) state,
though no separate read path is required since the V2 JSON will
unmarshal correctly into the V3 structure.
Outputs in a ModuleState are now of type map[string]interface{}, and a
test covers round-tripping string, []string and map[string]string, which
should cover all of the types in question.
Type switches have been added where necessary to deal with the
interface{} value, but they currently default to panicking when the input
is not a string.
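Illustratively, those type switches look something like this (a placeholder function, not the actual code):
```
package main

import "fmt"

// formatOutput mirrors the behaviour described above: string outputs are
// handled and any other type currently panics. Placeholder function only.
func formatOutput(v interface{}) string {
	switch typed := v.(type) {
	case string:
		return typed
	default:
		panic(fmt.Sprintf("unknown output type: %T", v))
	}
}
```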
This commit rectifies the fact that the original binary state is
referred to as V1 in the source code, but the first version of the JSON
state uses StateVersion: 1. We instead make the code refer to V0 as the
binary state, and V1 as the first version of JSON state.
This adds a field terraform_version to the state that represents the
Terraform version that wrote that state. If Terraform encounters a state
written by a future version, it will error. You must use at least the
version that wrote that state.
Internally we have fields to override this behavior (StateFutureAllowed),
but I chose not to expose them as CLI flags, since the user can just
modify the state directly. This is tricky, but it should be tricky, to
reflect the horrible disaster that can happen by enabling it.
We didn't have to bump the state format version since the absence of the
field means it was written by version "0.0.0" which will always be
older. In effect though this change will always apply to version 2 of
the state since it appears in 0.7 which bumped the version for other
purposes.
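Illustratively, the comparison amounts to something like this using hashicorp/go-version (a placeholder function, not the actual code path):
```
package main

import (
	"fmt"

	"github.com/hashicorp/go-version"
)

// checkStateVersion errors if the state was written by a newer Terraform
// than the one currently running; an absent version counts as "0.0.0",
// which is always older. Illustrative only.
func checkStateVersion(stateTFVersion, runningTFVersion string) error {
	if stateTFVersion == "" {
		stateTFVersion = "0.0.0"
	}
	written, err := version.NewVersion(stateTFVersion)
	if err != nil {
		return err
	}
	running, err := version.NewVersion(runningTFVersion)
	if err != nil {
		return err
	}
	if written.GreaterThan(running) {
		return fmt.Errorf("state was written by Terraform %s, which is newer than %s",
			written, running)
	}
	return nil
}
```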
I decided to split this up from the terraform state rm command to make the diff easier to see. These changes will also be used for terraform state mv.
This adds a `Remove` method to the `*terraform.State` struct. It takes a list of addresses and removes the items matching that list. This leverages the `StateFilter` committed last week to make the view of the world consistent across address lookups.
There is a lot of test duplication here with StateFilter, but in Terraform style: we like it that way.
This introduces the terraform state list command to list the resources
within a state. This is the first of many state management commands to
come into 0.7.
This is the first command of many to come that is considered a
"plumbing" command within Terraform (see "plumbing vs porcelain":
http://git.661346.n2.nabble.com/what-are-plumbing-and-porcelain-td2190639.html).
As such, this PR also introduces a bunch of groundwork to support
plumbing commands.
The main changes:
- Main command output is changed to split "common" and "uncommon"
commands.
- mitchellh/cli is updated to support nested subcommands, since
terraform state list is a nested subcommand.
- terraform.StateFilter is introduced as a way in core to filter/search
the state files. This is very basic currently but I expect to make it
more advanced as time goes on.
- terraform state list command is introduced to list resources in a
state. This can take a series of arguments to filter this down.
Known issues, or things that aren't done in this PR on purpose:
- Unit tests for terraform state list are on the way. Unit tests for the
core changes are all there.
Wow this one was tricky!
This bug presents itself only when using planfiles, because when doing a
straight `terraform apply` the interpolations are left in place from the
Plan graph walk and paper over the issue. (This detail is what made it
so hard to reproduce initially.)
Basically, graph nodes for module variables are visited during the apply
walk and attempt to interpolate. During a destroy walk, no attributes
are interpolated from resource nodes, so these interpolations fail.
This scenario is supposed to be handled by the `PruneNoopTransformer` -
in fact it's described as the example use case in the comment above it!
So the bug had to do with the actual behavior of the Noop transformer.
The resource nodes were not properly reporting themselves as Noops
during a destroy, so they were being left in the graph.
This in turn triggered the module variable nodes to see that they had
another node depending on them, so they also reported that they could
not be pruned.
Therefore we had two nodes in the graph that were effectively noops but
were being visited anyways. The module variable nodes were already graph
leaves, which is why this error presented itself as just stray messages
instead of actual failure to destroy.
Fixes#5440
Fixes#5708
Fixes#4988
Fixes#3268
A consequence of the work done in #6185 was that variables which were in
a module but not set explicitly (i.e. the default value was relied upon)
were marked as type errors. This was reported in #6230.
This commit adds a test case for this and a patch which fixes the issue.
The flattening process was not properly drawing dependencies between provider
nodes in modules and their parent provider nodes.
Fixes#2832
Fixes#4443
Fixes#4865
These tests demonstrate a problem where the types of a module input are
not checked. For example, if a module - inner - defines a variable
"should_be_a_map" as a map, or with a default value that is a map, we do not
fail if the user sets the variable value in the outer module to a string
value. This is also a problem in nested modules.
The implementation changes add a type checking step into the graph
evaluation process to ensure invalid types are not passed.
The nodes it adds were immediately skipped by flattening and therefore
never had any effect. That makes the transformer effectively dead code
and removable. This was the only usage of FlattenSkip so we can remove
that as well.
The ContextGraphWalker struct includes a lock that's passed down to
BuiltinEvalContext and guards access to interpolation variables as
they're written using SetVariables.
The likely problem being expressed in #5733 is that the same map
reference is also passed down to the Interpolater.Variables field, which
is used for variable lookup.
Here, we plumb the same lock we're using to guard access for writes down
and acquire it before doing variable reads as well. It's not as fine
grained as perhaps it could be, but all the context tests pass and I
believe this should address #5733.
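The general pattern looks like this (placeholder type and method names, not the real ContextGraphWalker/Interpolater code):
```
package main

import "sync"

// variableStore shows the pattern described above: a single lock guards both
// writes (SetVariables) and reads (Value) of the shared variables map.
// Names are placeholders, not the real Terraform types.
type variableStore struct {
	lock      sync.Mutex
	variables map[string]interface{}
}

func (s *variableStore) SetVariables(vars map[string]interface{}) {
	s.lock.Lock()
	defer s.lock.Unlock()
	if s.variables == nil {
		s.variables = make(map[string]interface{})
	}
	for k, v := range vars {
		s.variables[k] = v
	}
}

func (s *variableStore) Value(name string) (interface{}, bool) {
	// Acquire the same lock before reads as well, as the fix describes.
	s.lock.Lock()
	defer s.lock.Unlock()
	v, ok := s.variables[name]
	return v, ok
}
```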
The ignore_changes diff filter was stripping out attributes on Create
but the diff was still making it down to the provider, so Create would
end up missing attributes, causing a full failure if any required
attributes were being ignored.
In addition, any changes that required a replacement of the resource
were causing problems with `ignore_changes`, which didn't properly filter
out the replacement when the triggering attributes were filtered out.
Refs #5627
When a user specifies `-target`s on a `terraform plan` and stores
the resulting diff in a plan file using `-out` - it usually works just
fine since the diff is scoped based on the targets.
When there are tainted resources in the state, however, graph nodes to
destroy them were popping back into the plan when it was being loaded
from a file. This was because Targets weren't being stored in the
Planfile, so Terraform didn't know to filter them out. (In the
non-Planfile scenario, we still had the Targets loaded directly from the
flags.)
By encoding Targets in with the Planfile we can ensure that the same
filters are always applied.
Backwards compatibility should be fine here, since we're just adding a
field. The gob encoder/decoder will just do the right thing (ignore/skip
the field) with planfiles stored w/ versions that don't know about
Targets.
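A small self-contained demonstration of the gob property being relied on here (placeholder struct names, not the real Plan type):
```
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
)

// planV1 stands in for the old Plan without Targets.
type planV1 struct {
	Vars map[string]string
}

// planV2 stands in for the new Plan with the added Targets field.
type planV2 struct {
	Vars    map[string]string
	Targets []string
}

func main() {
	var buf bytes.Buffer

	// Encode the old shape, as an older version would have written it.
	if err := gob.NewEncoder(&buf).Encode(planV1{Vars: map[string]string{"a": "b"}}); err != nil {
		panic(err)
	}

	// Decode into the new shape: gob ignores the missing field and
	// Targets is simply left empty.
	var p planV2
	if err := gob.NewDecoder(&buf).Decode(&p); err != nil {
		panic(err)
	}
	fmt.Printf("vars=%v targets=%v\n", p.Vars, p.Targets)
}
```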
Fixes#5183
Previously these details were relegated to the debug logs, which forces
the user to reproduce the error condition and then go digging for the
error message. Since we're asking users to report this error, let's give
them all the details they need right up front to make it a little easier
on them.
Context:
As part of building up a Plan, Terraform needs to detect "orphaned"
resources--resources which are present in the state but not in the
config. This happens when config for those resources is removed by the
user, making it Terraform's responsibility to destroy them.
Both state and config are organized by Module into a logical tree, so
the process of finding orphans involves checking for orphaned Resources
in the current module and for orphaned Modules, which themselves will
have all their Resources marked as orphans.
Bug:
In #3114 a problem was exposed where, given a module tree that looked
like this:
```
root
 |
 +-- parent (empty, except for sub-modules)
      |
      +-- child1 (1 resource)
      |
      +-- child2 (1 resource)
```
If `parent` was removed, a bunch of error messages would occur during
the plan. The root cause of this was duplicate orphans appearing for the
resources in child1 and child2.
Fix:
This turned out to be a bug in orphaned module detection. When looking
for deeply nested orphaned modules, root.parent was getting added twice
as an orphaned module to the graph.
Here, we add an additional check to prevent a double add, which
addresses this scenario properly.
Fixes#3114 (the Provisioner side of it was fixed in #4877)
Fixes an interpolation race that was occurring when a tainted destroy
node and a primary destroy node both tried to interpolate a computed
count in their config. Since they were sharing a pointer to the _same_
config, depending on how the race played out one of them could catch the
config uninterpolated and would then throw a syntax error.
The `Copy()` tree implemented for this fix can probably be used
elsewhere - basically we should copy the config whenever we drop nodes
into the graph - but for now I'm just applying it to the place that
fixes this bug.
Fixes#4982 - Includes a test covering that race condition.
References to computed list-ish attributes (set, list, map) were being
improperly resolved as an empty list `[]` during the plan phase (when
the value of the reference is not yet known) instead of as an
UnknownValue.
A "diffs didn't match" failure in an AWS DirectoryServices test led to
this discovery (and this commit fixes the failing test):
https://travis-ci.org/hashicorp/terraform/jobs/104812951
Refs #2157 which has the original work to support computed list
attributes at all. This is just a simple tweak to that work.
/cc @radeksimko
This commit adds support for declaring variable types in Terraform
configuration. Historically, the type has been inferred from the default
value, defaulting to string if no default was supplied. This has caused
users to devise workarounds if they wanted to declare a map but provide
values from a .tfvars file (for example).
The new syntax adds the "type" key to variable blocks:
```
variable "i_am_a_string" {
type = "string"
}
variable "i_am_a_map" {
type = "map"
}
```
This commit does _not_ extend the type system to include bools, integers
or floats - the only two types available are maps and strings.
Validation is performed if a default value is provided in order to
ensure that the default value type matches the declared type.
In the case that a type is not declared, the old logic is used for
determining the type. This allows backwards compatibility with previous
Terraform configuration.
Instead of trying to skip non-targeted orphans as they are added to
the graph in OrphanTransformer, remove knowledge of targeting from
OrphanTransformer and instead make the orphan resource nodes properly
addressable.
That allows us to use existing logic in TargetTransformer to filter out
the nodes appropriately. This does require adding TargetTransformer to the
list of transforms that run during DynamicExpand so that targeting can
be applied to nodes with expanded counts.
Fixes#4515
Fixes#2538
Fixes#4462
Building on the work of #3846, deprecate `filename` in favor of a
`template` attribute that accepts file contents instead of a path.
Required a bit of work in the interpolation code to prevent Terraform
from assuming that template interpolations were resource variables that
needed to be resolved. Leaving them as "Unknown Variables" prevents
interpolation from happening early and lets the `template_file` resource
do its thing.
This attempts to reproduce the issue described in #2598 whereby outputs
added after an apply are not reflected in the output. As per the issue
the outputs are described using the JSON syntax.
We were only comparing the last element of the module path, which meant that
deeply nested modules with the same name but different ancestry had an
undefined sort order, which could cause inconsistencies in state
storage and potentially break remote state MD5 checksumming.
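The corrected ordering amounts to comparing full paths, roughly like this (placeholder types, not the real sort code):
```
package main

import (
	"sort"
	"strings"
)

// moduleState is a placeholder for the state's per-module entry.
type moduleState struct {
	Path []string // e.g. ["root", "parent", "child"]
}

// sortModules orders modules by their entire path so that deeply nested
// modules with the same final name still sort deterministically.
func sortModules(modules []moduleState) {
	sort.Slice(modules, func(i, j int) bool {
		return strings.Join(modules[i].Path, ".") < strings.Join(modules[j].Path, ".")
	})
}
```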
Remote state includes MD5-based checksumming to protect against State
conflicts. This can generate improper conflicts with states that differ
only in their Schema version.
We began to see this issue with
https://github.com/hashicorp/terraform/pull/3470 which changes the
"schema_version" of aws_key_pairs.
Now that we support log line filtering (as of 0090c063) it's good to be
a bit more fussy about what log levels are assigned to different things.
Here we make a few things that are implementation details log as DEBUG,
and prevent spurious errors from EvalValidateCount where it was returning
an empty EvalValidateError rather than nil when everything was okay.
In #2884, Terraform would hang on graphs with an orphaned resource
that depended on an orphaned module.
This is because orphan module nodes (which are dependable) were getting
expanded (replaced) with GraphNodeBasicSubgraph nodes (which are _not_
dependable).
The old `graph.Replace()` code resulted in GraphNodeBasicSubgraph being
entered into the lookaside table, even though it is not dependable.
This resulted in an untraversable edge in the graph, so the graph would
hang and wait forever.
Now, we remove entries from the lookaside table when a dependable node
is being replaced with a non-dependable node. This means we lose an
edge, but we can move forward. It's ~probably~ never correct to be
replacing dependable nodes with non-dependable ones, but this tweak
seemed preferable to tossing a panic in there.
When a provider validation only returns a warning, we were cutting the
evaltree short by returning an error. This is fine during a
`walkValidate` but was causing trouble during `walkPlan` and
`walkApply`.
fixes#2870
I was worried about the implications of deeply nested orphaned modules
in the parent commit, so I added a test. It's failing but not quite like
I expected it to. Perhaps I've uncovered an unrelated bug here?
/cc @mitchellh
The `CloseProviderTransformer` relies on the `ProvidedBy()` interface to
look up the proper dependency for the graph nodes it adds. This
interface needs to yield the name of a provider, _AND_ for flattened
nodes it needs to yield the full path to a provider.
Destroy nodes did not implement this second part, which resulted in
"provider X couldn't be found" when both of these were true:
* A module included a resource that depended on a provider
* The root did _NOT_ include a provider config
Implementing a proper ProvidedBy() on the flattened version of
destroy nodes solves the issue.
fixes#2581
The context_test file has gotten pretty unruly. Let's split it up into
a few files so we can be nicer to our editors and our own sanity.
Definitely lots more we can do to clean up, but with changes like this
I'd rather do small, focused, clear steps instead of one big "cleaned up
lots of stuff" PR.
By prefixing them with `cmd /c` it will work with both `winrm` and
`ssh` connection types.
This PR also reverts some bad stringer changes made in PR #2673
In `helper/schema` we already make a distinction between `Default`
which is always applied and `InputDefault` which is displayed to the
user for an empty field.
But for variables we just have `Default` which is treated like
`InputDefault`. This changes it to _not_ prompt the user for a value
when the variable declaration includes a default.
Treating this as a UX bugfix and the "don't prompt for variables w/
defaults set" behavior as the originally expected behavior we were
failing to honor.
Added an already-passing test to verify and cover the `helper/schema`
behavior.
Perhaps down the road we can add a `input_default` attribute to
variables to allow similar behavior to `helper/schema` in variables, but
for now just sticking with the fix.
Fixes#2592
Allows target dependencies to be properly calculated across module
boundaries, fixing scenarios where a target depends on something inside
a module, but the contents of the module are not included in the
targeted resources.
fixes#1858
When targeting prunes out all the resource nodes between a provider and
its close node, there was no dependency to ensure the close happened
after the configure. Needed to add an explicit dependency from the close
to the provider.
This tweak highlighted the fact that CloseProviderTransformer needed to
happen after DisableProviderTransformer, since
DisableProviderTransformer inspects up-edges to decide what to disable,
and CloseProviderTransformer adds an up-edge.
fixes#2495
Had to handle a lot of implicit leaning on a few properties of the old
representation:
* Old representation allowed plain strings to be treated as lists
without problem (i.e. shoved into strings.Split), now strings need to
be checked whether they are a list before they are treated as one
(i.e. shoved into StringList(s).Slice()).
* Tested behavior of 0 and 1 length lists in formatlist() was a side
effect of the representation. Needs to be special cased now to
maintain the behavior.
* Found a pretty old context test failure that was wrong in several
different ways. It's covered by TestContext2Apply_multiVar so I
removed it.
This is the initial pure "all tests passing without a diff" stage. The
plan is to change the internal representation of StringList to include a
suffix delimiter, which will allow us to recognize empty and
single-element lists.
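As a rough sketch of the planned representation (the delimiter here is purely hypothetical; the real one is a magic sequence chosen to avoid clashing with user data):
```
package main

import "strings"

// delim stands in for the StringList delimiter. Hypothetical value.
const delim = "|"

// encode writes a trailing delimiter after every element, so "" is an empty
// list, "a|" is a one-element list and "a|b|" is a two-element list.
func encode(elems []string) string {
	var b strings.Builder
	for _, e := range elems {
		b.WriteString(e)
		b.WriteString(delim)
	}
	return b.String()
}

// length recovers the element count by counting delimiters, which the
// trailing delimiter makes unambiguous for zero- and one-element lists.
func length(encoded string) int {
	return strings.Count(encoded, delim)
}
```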
The provider input before wasn't scoped by path, which caused
non-descendant parts of the graph to grab the configuration of another
sub-tree. The result is that you'd often get copied provider
configurations across the module barriers.
See GH-2024
Without this 12 line function it’s impossible to use any of the
Terraform code without having the configuration files on disk. As more
and more people are using (parts of) Terraform in other software, this
seems to be a very welcome addition. It has no negative impact on
Terraform itself whatsoever (the function is never called), but it
opens up a lot of other use cases.
Next to the single new function, I renamed the existing function (and
related tests) to better reflect what the function does. So now there
is a `LoadDir` function which calls `LoadFile` for each file, which
kind of made sense to me, especially when now adding a `LoadJSON`
function as well.
But of course if the rename is a problem, I can revert that part as
it’s not related to the added `LoadJSON` function.
Thanks!
Currently Terraform is leaking goroutines and with that memory. I know
strictly speaking this may not be a real concern for Terraform as it’s
mostly used as a short running command line executable.
But there are a few of us out there that are using Terraform in some
long running processes and then this starts to become a problem.
Next to that it’s of course good programming practice to clean up
resources when they're not needed anymore. So even for the standard
command line use case, this seems an improvement in resource management.
Personally I see no downsides as the primary connection to the plugin
is kept alive (the plugin is not killed) and only unused connections
that will never be used again are closed to free up any related
goroutines and memory.
Because CBD now runs after a RootTransformer, it's now operating on a
graph that _may_ have had a graphNodeRoot added to it (a noop node whose
only purpose is to be a root).
CBD includes a step that tells the destroy node to depend on any parents
of the create node. When one of those parents was "root", this was
causing the destroy node to depend on "root", making it cease to be an
actual root node.
Because graphNodeRoot is a singleton, the follow-up RootTransformer was
not sufficient to slap another root on top - it wasn't being seen as a
fresh node, so edges were just accumulating, and we ended up in a state
with "no roots".
refs #1903 (not sure if this will fix all the "no root found" cases, or
just the one I bumped into)
fixes#1947
Root cause was a bad edge being made by the CBD transform going from the
flattened destroy node to the unflattened create node, which was no
longer in the graph. The destroy node therefore had a dependency that
could never be satisfied, which locked up the walk.
Got this while playing around in a module:
> * unflattenable node: aws_security_group.internal (orphan)
> *terraform.graphNodeOrphanResource
Basically just copied implementation from
d503cc2d82
Adds the ability to target resources within modules, like:
module.mymod.aws_instance.foo
And the ability to target all resources inside a module, like:
module.mymod
Closes#1434
This reimplements my prior attempt at nipping in the bud issues where a plan
did not yield the same cycle that an apply did. My prior attempt was to have
ctx.Validate generate a "Verbose" worst-case graph. It turns out that
skipping PruneDestroyTransformer to generate this graph misses important
heuristics that prevent cycles by dropping destroy nodes that are
determined to be unused.
This resulted in Validate improperly failing in scenarios where these
heuristics would have broken the cycle.
We detected the problem during the work on #1781 and worked around the
issue by reverting to the non-Verbose graph in Validate.
This commit accomplishes the original goal in a better way - by
generating the full graph and checking it once Plan has calculated the
diff. This guarantees that any graph issue that would be caught by Apply
will be caught by Plan.