This commit fixes the test "TestContext2Input_moduleComputedOutputElement"
by ensuring that we treat a count of zero and non-reified resources
independently rather than returning an empty list for both, which
resulted in an interpolation failure when using the element function or
indexing.
This test illustrates a failure that occurs during the Input walk if
an interpolation is used with the input of a splat operation, resulting
in a multi-variable.
The bug was found while using RC2, but does not correspond to an open
issue at present.
The implementation of Stringer on OutputState previously assumed outputs
could only be strings. We no longer cast to string, and instead use the
built-in formatting directives.
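A minimal sketch of the idea (field names assumed; the real method may
format more than the value):
func (s *OutputState) String() string {
	// %v renders list and map outputs correctly rather than relying
	// on a string type assertion
	return fmt.Sprintf("%v", s.Value)
}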
The previous mechanism for testing state threw away the mutation made
to the state by calling State() twice; this commit corrects the test to
match the comment.
In addition, we replace the custom copying logic with the copystructure
library to simplify the code.
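As a rough sketch, the deep copy now reduces to something like this
(using the copystructure API):
copied, err := copystructure.Copy(state)
if err != nil {
	panic(err)
}
stateCopy := copied.(*State)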
In cases where we construct state directly rather than reading it via
the usual methods, we need to ensure that the necessary maps are
initialized correctly.
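For example, a module state constructed by hand needs its maps made
non-nil up front (field types assumed):
mod := &ModuleState{
	Path:      []string{"root"},
	Outputs:   map[string]*OutputState{},
	Resources: map[string]*ResourceState{},
}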
When checking for "same" values in a computed hash, not only might some
of the values differ between versions (changing the hash), but there may
be fields not included at all in the original map, as well as different
overall counts.
Instead of trying to match individual set fields with different hashes,
we remove any hashed key that is longer than the computed key and shares
its base name.
Previously, interpolation of multi-variables returned an empty variable
if the resource count was 0. The empty variable was defined as
TypeString with Value "". This meant that empty resource counts failed
type checking for interpolation functions which operate on lists.
Instead, return an empty list if the count is 0. A context test guards
against further regression, and we also add a regression test covering
the case of a multi-variable with a count of one.
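A sketch of the new behaviour, assuming HIL's ast package:
if count == 0 {
	// an empty list type-checks against list-oriented interpolation
	// functions, unlike the previous TypeString/"" placeholder
	return ast.Variable{
		Type:  ast.TypeList,
		Value: []ast.Variable{},
	}, nil
}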
In order to make the context testing framework deal with this change it
was necessary to special case empty lists in the test diff function.
Fixes #7002
For `terraform destroy`, we currently build up the same graph we do for
`plan` and `apply` and we do a walk with a special Diff that says
"destroy everything".
We have fought the interpolation subsystem time and again through this
code path. Beginning in #2775 we gained a new feature to selectively
prune out problematic graph nodes. The past chain of destroy fixes I
have been involved with (#6557, #6599, #6753) have attempted to massage
the "noop" definitions to properly handle the edge cases reported.
"Variable is depended on by provider config" is another edge case we add
here and try to fix.
This dive only makes me more convinced that the whole `terraform
destroy` code path needs to be reworked.
For now, I went with a "surgical strike" approach to the problem
expressed in #7047. I found a couple of issues with the existing
Noop and DestroyEdgeInclude logic, especially with regards to
flattening, but I'm explicitly ignoring these for now so we can get this
particular bug fixed ahead of the 0.7 release. My hope is that we can
circle around with a fully specced initiative to refactor `terraform
destroy`'s graph to be more state-derived than config-derived.
Until then, this fixes #7407
The work integrated in hashicorp/terraform#6322 silently broke the
ability to use remote state correctly. This commit adds a fix for that,
making use of the work integrated in hashicorp/terraform#7124.
In order to deal with outputs which are complex structures, we use a
forked version of the flatmap package - the difference between the
version in this commit and the github.com/hashicorp/terraform/flatmap
package is that we add an additional key for map counts, which state
requires. Because we bypass the normal helper/schema mechanism, this is
not set for us.
Because of the HIL type checking of maps, values must be of a
homogeneous type. This is unfortunate, as it means we can no longer
refer to outputs as:
${terraform_remote_state.foo.output.outputname}
Instead we had to bring them to the top level namespace:
${terraform_remote_state.foo.outputname}
This actually does lead to better overall usability - and the
backward-compatibility breakage is softened by the fact that indexing
would have broken the original syntax anyway.
We also add a real-world test and assert against specific values. Tests
which were previously acceptance tests are now run as unit tests, so
regressions should be identified at a much earlier stage.
This commit makes two changes: map interpolation can now read flatmapped
structures, such as those present in remote state outputs, and lists are
sorted by the index instead of the value.
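For example, given the (illustrative) flatmapped list output:
listoutput.# = 3
listoutput.0 = "c"
listoutput.1 = "a"
listoutput.2 = "b"
interpolation now yields ["c", "a", "b"]; sorting by value would have
incorrectly produced ["a", "b", "c"].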
The lineage of a state is an identifier shared by a set of states whose
serials are meaningfully comparable because they are produced by
progressive Refresh/Apply operations from the same initial empty state.
The lineage is assigned as a type-4 (random) UUID when a new state is
initialized, and is then preserved across all subsequent changes.
Since states before this change will not have lineage but users may wish
to set a lineage for an existing state in order to get the safety
benefits it will grow to imply, an empty lineage is considered to be
compatible with all lineages.
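A sketch of the intent (function names are hypothetical, and a UUID
helper such as hashicorp/go-uuid is assumed):
func (s *State) ensureLineage() {
	if s.Lineage != "" {
		return // preserve the existing lineage on all later changes
	}
	lineage, err := uuid.GenerateUUID()
	if err != nil {
		panic(err)
	}
	s.Lineage = lineage
}

// an empty lineage on either side is compatible with anything
func lineageCompatible(a, b string) bool {
	return a == "" || b == "" || a == b
}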
This commit makes the current Terraform state version 3 (previously 2),
and adds a migration process as part of reading v2 state. For the most
part this is unnecessary: helper/schema will deal with upgrading state
for providers written with that framework. However, for providers which
implemented the resource model directly, this gives a best-effort
attempt at a lossless upgrade.
The heuristics used to change the count of a map from the .# key to the
.% key are as follows:
- if the flat map contains any non-numeric keys, we treat it as a
map
- if the map is empty it must be computed or optional, so we remove
it from state
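A hypothetical sketch of the first heuristic (not the actual migration
code), using the standard strings and strconv packages:
// reports whether the keys under prefix look like map keys rather than
// list/set indices
func looksLikeMap(prefix string, attrs map[string]string) bool {
	for k := range attrs {
		if !strings.HasPrefix(k, prefix+".") || k == prefix+".#" {
			continue
		}
		first := strings.SplitN(strings.TrimPrefix(k, prefix+"."), ".", 2)[0]
		if _, err := strconv.Atoi(first); err != nil {
			return true // any non-numeric sub-key means this must be a map
		}
	}
	return false
}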
There is a known edge condition: maps with all-numeric keys are
indistinguishable from sets without access to the schema. They will need
manual conversion or may result in spurious diffs.
The flatmapped representation of state prior to this commit encoded maps
and lists (and therefore, by extension, sets) with a .# key holding the
number of elements (or the unknown variable indicator), followed by the
individual items. For example, the list ["a", "b", "c"] would have been
encoded as:
listname.# = 3
listname.0 = "a"
listname.1 = "b"
listname.2 = "c"
And the map {"key1": "value1", "key2": "value2"} would have been encoded
as:
mapname.# = 2
mapname.key1 = "value1"
mapname.key2 = "value2"
Sets use the hash code as the key - for example, a set with a (fictional)
hash code calculation may look like:
setname.# = 2
setname.12312512 = "value1"
setname.56345233 = "value2"
Prior to the work done to extend the type system, this was sufficient
since the internal representation of these was effectively the same.
However, following the separation of maps and lists into distinct
first-class types, this encoding presents a problem: given a state file,
it is impossible to tell the encoding of an empty list and an empty map
apart. This presents problems for the type checker during interpolation,
as many interpolation functions will operate on only one of these two
structures.
This commit therefore changes the representation of maps in state to use
"%" as the key for the number of elements. Consequently the map above
will now be encoded as:
mapname.% = 2
mapname.key1 = "value1"
mapname.key2 = "value2"
This has the effect of an empty list (or set) now being encoded as:
listname.# = 0
And an empty map now being encoded as:
mapname.% = 0
Therefore we can eliminate some nasty guessing logic from the resource
variable supplier for interpolation, at the cost of having to migrate
state up front (to follow in a subsequent commit).
In order to reduce the number of potential situations in which resources
would be "forced new", we continue to accept "#" as the count key when
reading maps via helper/schema. There is no situation under which we can
allow "#" as an actual map key in any case, as it would not be
distinguishable from a list or set in state.
The mapstructure library has a regrettable backward compatibility
concern whereby a WeakDecode of []interface{}{} into a target of
map[string]interface{} yields an empty map rather than an error. One
possibility is to switch to using Decode instead of WeakDecode, but this
loses the nice handling of type conversion, requiring a large volume of
code to be added to Terraform or HIL in order to retain that behaviour.
Instead we add a DecodeHook to our usage of the mapstructure library
which checks for decoding []interface{}{} or []string{} into a map and
returns an error instead.
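A sketch of such a hook using mapstructure's kind-based hook signature
(the helper name is hypothetical):
func errorOnListToMap(from reflect.Kind, to reflect.Kind, data interface{}) (interface{}, error) {
	if from == reflect.Slice && to == reflect.Map {
		return nil, fmt.Errorf("cannot decode a list into a map")
	}
	return data, nil
}
The hook is then wired in via DecoderConfig.DecodeHook, alongside
WeaklyTypedInput: true, when constructing the decoder.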
This has the effect of defeating the code added to retain backwards
compatibility in mapstructure, giving us the correct (for our
circumstances) behaviour of Decode for empty structures and the type
conversion of WeakDecode.
The code is identical to that in the HIL library, and is packaged into a
helper.
This removes support for the V0 binary state format which was present in
Terraform prior to 0.3. We still check for the file type and present an
error message explaining to the user that they can upgrade it using a
prior version of Terraform.
This is an effort to address hashicorp/terraform#516.
This adds the Sensitive attribute to the resource schema, opening up the
ability for resource maintainers to mark some fields as sensitive.
Sensitive fields are hidden in the output, and, possibly in the future,
could be encrypted.
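For example, a provider schema can now mark a field like this (the
attribute name is hypothetical):
"password": &schema.Schema{
	Type:      schema.TypeString,
	Required:  true,
	Sensitive: true,
},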
During acceptance tests of some of the first data sources (see
hashicorp/terraform#6881 and hashicorp/terraform#6911),
"unknown resource type" errors have been coming up. I traced them down to
the ResourceCountTransformer, which transforms destroy nodes to a
graphNodeExpandedResourceDestroy node. This node's EvalTree() was still
indiscriminately using EvalApply for all resource types, rather than
EvalReadDataApply for data resources. This accounts for both cases via
EvalIf.
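An illustrative sketch of the branch (field and helper names assumed):
&EvalIf{
	If: func(ctx EvalContext) (bool, error) {
		return n.Resource.Mode == config.DataResourceMode, nil
	},
	Then: &EvalReadDataApply{ /* ... */ },
	Else: &EvalApply{ /* ... */ },
},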
Previously the plan phase would produce a data diff only if no state was
already present. However, this is a faulty approach because a state will
already be present in the case where the data resource depends on a
managed resource that existed in state during refresh but became
computed during plan, due to a "forces new resource" diff.
Now we will produce a data diff regardless of the presence of the state
when the configuration is computed during the plan phase.
This fixes #6824.
This means it’s shown correctly in a plan and takes into account any
actions that are dependent on the tainted resource and, vice versa, any
actions that the tainted resource depends on.
So this changes the behaviour from saying "this resource is tainted, so
just forget about it and make sure it gets deleted in the background",
to saying "I want that resource to be recreated (taking into account the
existing resource and its place in the graph)".
Earlier we had a bug where data resources were not removed from the
state during a destroy. This was fixed in cd0c452, and this test will
hopefully make sure it stays fixed.
Adding walkValidate to the EvalTree operations and removing the
walkValidate guard from Interpolater.valueModuleVar allows the values to
be interpolated for Validate.
Variables weren't being interpolated during the Input phase, causing a
syntax error on the interpolation string. Adding `walkInput` to the
EvalTree operations prevents skipping the interpolation step.
cd0c452 contained a bug where the creation diff for a data resource was
put into a new local variable within the else block rather than into the
diff variable in the parent scope, causing a null diff to always be
produced.
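The bug is the classic Go shadowing mistake; illustrated here with
made-up names rather than the actual code:
var diff *InstanceDiff
if stateExists {
	diff = existingDiff
} else {
	// ':=' declares a new 'diff' scoped to this block, shadowing the
	// outer variable, which therefore stays nil
	diff, err := computeDiff()
	if err != nil {
		return err
	}
	_ = diff
}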
This restores the expected behavior: a computed data resource appears in
the diff, so it can then be fetched during the apply walk.
Apparently there's been a regression in the creation of data resource
diffs: they aren't showing up in the plan at all.
As a first step to fixing this, this is an intentionally-failing test
that proves it's broken.
Previously the "planDestroy" pass would correctly produce a destroy diff,
but the "apply" pass would just ignore it and make a fresh diff, turning
it back into a "create" because data resources are always eager to
refresh.
Now we consider the previous diff when re-diffing during apply and so
we can preserve the plan to destroy and then ultimately actually "destroy"
the data resource (remove from the state) when we get to ReadDataApply.
This ensures that the state is left empty after "terraform destroy";
previously we would leave behind data resource states.
Building on b10564a, this adds tweaks that allow the module var count
search to act recursively, handling the situation where something like
var.top gets passed to module middle as var.middle, and then to module
bottom as var.bottom, which is then used in a resource count.
A new problem was introduced by the prior fixes for destroy
interpolation messages: when resources depend on module variables with
a _count_ attribute, the variable is crucial for properly building the
graph - even in destroys. So removing all module variables from the
graph as noops was overzealous.
By borrowing the logic in `DestroyEdgeInclude` we are able to determine
if we need to keep a given module variable relatively easily.
I'd like to overhaul the `Destroy: true` implementation so that it does
not depend on config at all, but I want to continue for now with the
targeted fixes that we can backport into the 0.6 series.
This commit forward ports the changes made for 0.6.17, in order to store
the type and sensitive flag against outputs.
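A sketch of the richer output entry now stored against each module
(field names assumed, value illustrative):
"endpoint": &OutputState{
	Type:      "string",
	Sensitive: false,
	Value:     "https://example.com",
},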
It also refactors the logic of the import for V0 to V1 state, and
fixes up the call sites of the new format for outputs in V2 state.
Finally we fix up tests which did not previously set a state version
where one is required.
Provider nodes interpolate their config during the input walk, but this
is very early and so it's pretty likely that any resources referenced are
entirely absent from the state.
As a special case then, we tolerate the normally-fatal case of having
an entirely missing resource variable so that the input walk can complete,
albeit skipping the providers that have such interpolations.
If these interpolations end up still being unresolved during refresh
(e.g. because the config references a resource that hasn't been created
yet) then we will catch that error on the refresh pass, or indeed on the
plan pass if -refresh=false is used.
The ResourceAddress struct grows a new "Mode" field to match Resource,
and its parser learns to recognize the "data." prefix so that it can set
that field.
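For example (illustrative addresses):
aws_instance.web - a managed resource, with no prefix
data.aws_ami.ubuntu - a data resource, where the "data." prefix sets the
new Mode field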
This allows -target to be applied to data sources, although that is
arguably not a very useful thing to do. Other future uses of resource
addressing, such as the state plumbing commands, may be better
applications of this.