This commit forward ports the changes made for 0.6.17, in order to store
the type and sensitive flag against outputs.
It also refactors the V0 to V1 state import logic, and
fixes up the call sites of the new output format in V2 state.
Finally we fix up tests which did not previously set a state version
where one is required.
Once a data resource gets into the state, the state system needs to be
able to parse its id to match it with resources in the configuration.
Since data resources live in a separate namespace from managed resources,
the extra "mode" discriminator is required to specify which namespace
we're talking about, just like we do in the resource configuration.
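As a rough illustration of the idea (a hypothetical parser, not Terraform's
actual code), a state key for a data resource can carry a "data." prefix that
acts as the mode discriminator:

```
package example

import (
	"fmt"
	"strings"
)

// ResourceMode distinguishes managed resources from data resources.
type ResourceMode int

const (
	ManagedResourceMode ResourceMode = iota
	DataResourceMode
)

// parseStateKey splits a key such as "aws_instance.web" or
// "data.aws_ami.ubuntu" into mode, type and name. Illustrative sketch only.
func parseStateKey(key string) (ResourceMode, string, string, error) {
	parts := strings.Split(key, ".")
	switch {
	case len(parts) == 3 && parts[0] == "data":
		return DataResourceMode, parts[1], parts[2], nil
	case len(parts) == 2:
		return ManagedResourceMode, parts[0], parts[1], nil
	default:
		return 0, "", "", fmt.Errorf("unrecognized state key: %q", key)
	}
}
```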
This changes the representation of maps in the interpolator from the
dotted flatmap form (one string variable named "var.variablename.key"
per map element) to native HIL maps.
This involves porting some of the interpolation functions in order to
keep the tests green, and adding support for map outputs.
There is one backwards incompatibility: as a result of an implementation
detail of maps, one could access an indexed map variable using the
syntax "${var.variablename.key}".
This is no longer possible; the HIL native index syntax
"${var.variablename["key"]}" must be used instead. This was previously
documented (though not heavily used), so it must be noted as a backward
compatibility issue for Terraform 0.7.
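To make the representational difference concrete (a sketch with a
hypothetical "amis" variable, not the interpolator's actual data structures):
the flatmap form stored one synthetic string variable per element, while the
HIL form is a single native map value.

```
package example

// Old flatmap form: one synthetic string variable per map element,
// accessed as ${var.amis.us-east-1}.
var flat = map[string]string{
	"var.amis.us-east-1": "ami-123456",
	"var.amis.us-west-2": "ami-789012",
}

// New form: a single native map value, accessed as ${var.amis["us-east-1"]}.
var native = map[string]interface{}{
	"us-east-1": "ami-123456",
	"us-west-2": "ami-789012",
}
```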
This commit adds the groundwork for supporting module outputs of types
other than string. In order to do so, the state version written to the
state file is increased from 1 to 2; internally the code refers to these
formats as V2 and V3, since the first state file format was binary and
counts as V1.
Tests are added to ensure that V2 (StateVersion 1) state is upgraded to
V3 (StateVersion 2) state, though no separate read path is required since
the V2 JSON will unmarshal correctly into the V3 structure.
Outputs in a ModuleState are now of type map[string]interface{}, and a
test covers round-tripping string, []string and map[string]string, which
should cover all of the types in question.
Type switches have been added where necessary to deal with the
interface{} value, but they currently default to panicking when the input
is not a string.
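A minimal sketch of such a type switch (a hypothetical helper, not the code
in this commit): string, []string and map[string]string are handled
explicitly, and anything else panics, mirroring the current default.

```
package example

import "fmt"

// outputString renders a single output value from the
// map[string]interface{} Outputs field. Illustrative sketch only.
func outputString(v interface{}) string {
	switch typed := v.(type) {
	case string:
		return typed
	case []string:
		return fmt.Sprintf("%v", typed)
	case map[string]string:
		return fmt.Sprintf("%v", typed)
	default:
		// Matches the behaviour described above: unsupported types panic.
		panic(fmt.Sprintf("unsupported output value type: %T", v))
	}
}
```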
This commit rectifies the fact that the original binary state is
referred to as V1 in the source code, but the first version of the JSON
state uses StateVersion: 1. We instead make the code refer to V0 as the
binary state, and V1 as the first version of JSON state.
This adds a field terraform_version to the state that represents the
Terraform version that wrote that state. If Terraform encounters a state
written by a future version, it will error. You must use at least the
version that wrote that state.
Internally we have fields to override this behavior (StateFutureAllowed),
but I chose not to expose them as CLI flags, since the user can just
modify the state directly. That is tricky, but it should be tricky, to
reflect the horrible disaster that enabling this can cause.
We didn't have to bump the state format version, since the absence of the
field means the state was written by version "0.0.0", which will always be
older. In effect, though, this change will always apply to version 2 of
the state, since it first appears in 0.7, which bumped the version for
other purposes.
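A minimal sketch of the version check, assuming the hashicorp/go-version
library (the function name and error text are illustrative):

```
package example

import (
	"fmt"

	version "github.com/hashicorp/go-version"
)

// checkStateVersion errors if the state was written by a newer Terraform
// than the one currently running. A missing field is treated as "0.0.0",
// which is always older. Illustrative sketch only.
func checkStateVersion(running, wroteState string) error {
	if wroteState == "" {
		wroteState = "0.0.0"
	}
	cur, err := version.NewVersion(running)
	if err != nil {
		return err
	}
	state, err := version.NewVersion(wroteState)
	if err != nil {
		return err
	}
	if state.GreaterThan(cur) {
		return fmt.Errorf(
			"state was written by Terraform %s; use at least that version", wroteState)
	}
	return nil
}
```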
I decided to split this up from the `terraform state rm` command to make the diff easier to see. These changes will also be used for `terraform state mv`.
This adds a `Remove` method to the `*terraform.State` struct. It takes a list of addresses and removes the items matching that list. This leverages the `StateFilter` committed last week to make the view of the world consistent across address lookups.
There is a lot of test duplication here with StateFilter, but in Terraform style: we like it that way.
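A rough usage sketch (the exact signature and the address strings here are
assumptions for illustration):

```
package example

import "github.com/hashicorp/terraform/terraform"

// removeFromState removes the given addresses from the state, using the
// same address syntax the StateFilter understands.
func removeFromState(s *terraform.State) error {
	return s.Remove("aws_instance.web", "module.consul.aws_instance.server")
}
```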
Context:
As part of building up a Plan, Terraform needs to detect "orphaned"
resources--resources which are present in the state but not in the
config. This happens when config for those resources is removed by the
user, making it Terraform's responsibility to destroy them.
Both state and config are organized by Module into a logical tree, so
the process of finding orphans involves checking for orphaned Resources
in the current module and for orphaned Modules, which themselves will
have all their Resources marked as orphans.
Bug:
In #3114 a problem was exposed where, given a module tree that looked
like this:
```
root
|
+-- parent (empty, except for sub-modules)
|
+-- child1 (1 resource)
|
+-- child2 (1 resource)
```
If `parent` was removed, a bunch of error messages would occur during
the plan. The root cause of this was duplicate orphans appearing for the
resources in child1 and child2.
Fix:
This turned out to be a bug in orphaned module detection. When looking
for deeply nested orphaned modules, root.parent was getting added twice
as an orphaned module to the graph.
Here, we add an additional check to prevent a double add, which
addresses this scenario properly.
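A sketch of the kind of guard involved (hypothetical names; the real change
operates on graph vertices): remember which module paths have already been
added as orphans and skip any repeat.

```
package example

import "strings"

// orphanIndex tracks module paths already added as orphans so the same
// module is never added twice. Illustrative sketch only.
type orphanIndex map[string]bool

// add reports whether the path was newly recorded; a second call with the
// same path returns false, preventing the double add described above.
func (idx orphanIndex) add(path []string) bool {
	key := strings.Join(path, ".")
	if idx[key] {
		return false
	}
	idx[key] = true
	return true
}
```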
Fixes #3114 (the Provisioner side of it was fixed in #4877)
Instead of trying to skip non-targeted orphans as they are added to
the graph in OrphanTransformer, remove knowledge of targeting from
OrphanTransformer and instead make the orphan resource nodes properly
addressable.
That allows us to use existing logic in TargetTransformer to filter out
the nodes appropriately. This does require adding TargetTransformer to the
list of transforms that run during DynamicExpand so that targeting can
be applied to nodes with expanded counts.
Fixes #4515
Fixes #2538
Fixes #4462
We were only comparing the last element of the module path, which meant
that deeply nested modules with the same name but different ancestry had
an undefined sort order. This could cause inconsistencies in state
storage and potentially break remote state MD5 checksumming.
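One way to get a deterministic order (a sketch, not the actual comparator)
is to compare the full module path rather than only its last element:

```
package example

import (
	"sort"
	"strings"
)

// sortModulePaths orders module paths by their full ancestry, so two
// modules that share a final element (e.g. ...staging.app and
// ...production.app) still sort deterministically. Illustrative sketch only.
func sortModulePaths(paths [][]string) {
	sort.Slice(paths, func(i, j int) bool {
		return strings.Join(paths[i], ".") < strings.Join(paths[j], ".")
	})
}
```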
Remote state includes MD5-based checksumming to protect against state
conflicts. This can generate improper conflicts between states that differ
only in their schema version.
We began to see this issue with
https://github.com/hashicorp/terraform/pull/3470, which changes the
"schema_version" of aws_key_pairs.
Adds an "alias" field to the provider which allows creating multiple instances
of a provider under different names. This provides support for configurations
such as multiple AWS providers for different regions. In each resource, the
provider can be set with the "provider" field.
(thanks to Cisco Cloud for their support)
We were previously only recording the schema version on refresh. This
caused the state to be incorrectly written after a `terraform apply`
causing subsequent commands to run the state through an unnecessary
migration.
Providers get a per-resource SchemaVersion integer that they can bump
when a resource's schema changes format. Each InstanceState with an
older recorded SchemaVersion than the current one is yielded to a
`MigrateState` function to be transformed such that it can be addressed
by the current version of the resource's Schema.
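A minimal sketch of how a provider might wire this up, assuming the
helper/schema hooks (the resource and its attributes are hypothetical):

```
package example

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/schema"
	"github.com/hashicorp/terraform/terraform"
)

// resourceFoo is a hypothetical resource: SchemaVersion marks the current
// format, and the migrate function is called for any InstanceState that
// was recorded with an older version.
func resourceFoo() *schema.Resource {
	return &schema.Resource{
		SchemaVersion: 1,
		MigrateState:  migrateFooState,

		Schema: map[string]*schema.Schema{
			"fqdn": {Type: schema.TypeString, Required: true},
		},
	}
}

// migrateFooState upgrades version 0 state, which stored the value under
// "name", to the current version 1 layout. Illustrative sketch only.
func migrateFooState(v int, is *terraform.InstanceState, meta interface{}) (*terraform.InstanceState, error) {
	switch v {
	case 0:
		if val, ok := is.Attributes["name"]; ok {
			is.Attributes["fqdn"] = val
			delete(is.Attributes, "name")
		}
		return is, nil
	default:
		return is, fmt.Errorf("unexpected schema version: %d", v)
	}
}
```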
Deposed instances need to be stored as a list for certain pathological
cases where destroys fail for some reason (e.g. upstream API failure,
Terraform interrupted mid-run). Terraform needs to be able to remember
all Deposed nodes so that it can clean them up properly in subsequent
runs.
Deposed instances will now never touch the Tainted list - they're fully
managed from within their own list.
Added a "multiDepose" test case that walks through a scenario to
exercise this.
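A sketch of the resulting shape (field names simplified for illustration;
the real structs carry more fields): deposed instances live in their own
list on the resource state, separate from the tainted list.

```
package example

// InstanceState is a trimmed-down stand-in for the real type.
type InstanceState struct {
	ID         string
	Attributes map[string]string
}

// ResourceState sketch: Deposed is a list so that every old primary whose
// destroy failed can be remembered and cleaned up on later runs, and it is
// kept entirely separate from Tainted.
type ResourceState struct {
	Type    string
	Primary *InstanceState

	Tainted []*InstanceState
	Deposed []*InstanceState
}
```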