This changes the representation of maps in the interpolator from the
dotted flatmap form (one string variable named "var.variablename.key"
per map element) to native HIL maps.
This involves porting some of the interpolation functions in order to
keep the tests green, and adding support for map outputs.
There is one backwards incompatibility: as a result of an implementation
detail of maps, one could previously access an indexed map variable using
the syntax "${var.variablename.key}".
This is no longer possible; the native HIL index syntax,
"${var.variablename["key"]}", must be used instead. This was previously
documented (though not heavily used), so it must be noted as a backwards
compatibility issue for Terraform 0.7.
This commit adds the groundwork for supporting module outputs of types
other than string. In order to do so, the state version recorded in the
state file is increased from 1 to 2; in the source this is the V2-to-V3
transition, since the first (binary) state format is counted there as V1.
Tests are added to ensure that V2 (StateVersion 1) state is upgraded to
V3 (StateVersion 2) state, though no separate read path is required since
the V2 JSON will unmarshal correctly into the V3 structure.
Outputs in a ModuleState are now of type map[string]interface{}, and a
test covers round-tripping string, []string and map[string]string, which
should cover all of the types in question.
Type switches have been added where necessary to deal with the
interface{} value, but they currently default to panicking when the input
is not a string.
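To illustrate the kind of type switch described above, a read of the now
map[string]interface{} Outputs field might look like the minimal sketch
below; the helper name and the panic message are illustrative only, not
taken from the source.
```
package main

import "fmt"

// outputAsString pulls a single output value out of a
// map[string]interface{} Outputs map, panicking (as described above)
// whenever the value is not a plain string.
func outputAsString(outputs map[string]interface{}, key string) string {
	switch v := outputs[key].(type) {
	case string:
		return v
	default:
		// []string and map[string]string outputs are not handled yet;
		// this mirrors the current "panic on non-string" default.
		panic(fmt.Sprintf("output %q has unsupported type %T", key, v))
	}
}

func main() {
	outputs := map[string]interface{}{
		"address": "10.0.0.1",
		"tags":    map[string]string{"env": "dev"}, // reading this would panic
	}
	fmt.Println(outputAsString(outputs, "address"))
}
```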
This commit rectifies an inconsistency: the original binary state was
referred to as V1 in the source code, yet the first version of the JSON
state uses StateVersion: 1. The code now refers to the binary state as V0
and to the first version of the JSON state as V1.
This adds a field terraform_version to the state that represents the
Terraform version that wrote that state. If Terraform encounters a state
written by a future version, it will error. You must use at least the
version that wrote that state.
Internally we have fields to override this behavior (StateFutureAllowed),
but I chose not to expose them as CLI flags, since the user can just
modify the state directly. That is tricky, but it should be tricky, to
reflect the horrible disaster that can happen by overriding this check.
We didn't have to bump the state format version, since the absence of the
field means the state was written by version "0.0.0", which will always be
older. In effect, though, this change will always apply to version 2 of
the state, since the field first appears in 0.7, which bumped the state
version for other purposes.
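The comparison itself is simple; the sketch below shows one way the check
could be written with hashicorp/go-version, with the "0.0.0" fallback for
a missing field taken from the description above. The function and
variable names are illustrative, not the actual Terraform implementation.
```
package main

import (
	"fmt"

	version "github.com/hashicorp/go-version"
)

// runningVersion stands in for the version of the running binary.
var runningVersion = version.Must(version.NewVersion("0.7.0"))

// checkStateVersion errors if the state was written by a newer Terraform.
// An absent terraform_version field is treated as "0.0.0", which is
// always older than the running version.
func checkStateVersion(tfVersion string) error {
	if tfVersion == "" {
		tfVersion = "0.0.0"
	}
	written, err := version.NewVersion(tfVersion)
	if err != nil {
		return fmt.Errorf("invalid terraform_version in state: %s", err)
	}
	if written.GreaterThan(runningVersion) {
		return fmt.Errorf(
			"state was written by Terraform %s, which is newer than %s",
			written, runningVersion)
	}
	return nil
}

func main() {
	fmt.Println(checkStateVersion("0.8.1")) // error: written by a future version
	fmt.Println(checkStateVersion(""))      // <nil>: treated as 0.0.0
}
```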
I decided to split this up from the terraform state rm command to make the diff easier to see. These changes will also be used for terraform state mv.
This adds a `Remove` method to the `*terraform.State` struct. It takes a list of addresses and removes the items matching that list. This leverages the `StateFilter` committed last week to make the view of the world consistent across address lookups.
There is a lot of test duplication here with StateFilter, but in Terraform style: we like it that way.
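A short usage sketch for the new method follows; the variadic signature is
assumed from "takes a list of addresses" rather than quoted from the
source, and the addresses themselves are made up.
```
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/terraform"
)

func main() {
	// An empty state for demonstration; in practice this comes from the
	// state file that the command operates on.
	state := terraform.NewState()

	// Remove a resource and a whole module by address (assumed variadic
	// signature). On an empty state there is nothing to remove; any error
	// is simply printed.
	if err := state.Remove("aws_instance.web", "module.networking"); err != nil {
		fmt.Println("remove failed:", err)
	}
}
```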
This introduces the terraform state list command to list the resources
within a state. This is the first of many state management commands to
come into 0.7.
This is the first command of many to come that is considered a
"plumbing" command within Terraform (see "plumbing vs porcelain":
http://git.661346.n2.nabble.com/what-are-plumbing-and-porcelain-td2190639.html).
As such, this PR also introduces a bunch of groundwork to support
plumbing commands.
The main changes:
- Main command output is changed to split "common" and "uncommon"
commands.
- mitchellh/cli is updated to support nested subcommands, since
terraform state list is a nested subcommand.
- terraform.StateFilter is introduced as a way in core to filter/search
  the state files (see the sketch after this list). This is very basic
  currently but I expect to make it more advanced as time goes on.
- terraform state list command is introduced to list resources in a
state. This can take a series of arguments to filter this down.
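As referenced in the StateFilter bullet above, the sketch below shows
roughly how the filter might be driven from a command like terraform
state list. The StateFilter field and method names here are assumed from
the description, not quoted from the source.
```
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/terraform/terraform"
)

// listResources prints the addresses of everything in the state that
// matches the given filter arguments (all of it when args is empty).
func listResources(state *terraform.State, args ...string) {
	filter := &terraform.StateFilter{State: state}

	results, err := filter.Filter(args...)
	if err != nil {
		log.Fatalf("error filtering state: %s", err)
	}
	for _, r := range results {
		fmt.Println(r.Address)
	}
}

func main() {
	// An empty state for demonstration; a real command reads the state file.
	listResources(terraform.NewState(), "aws_instance.web")
}
```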
Known issues, or things that aren't done in this PR on purpose:
- Unit tests for terraform state list are on the way. Unit tests for the
core changes are all there.
This replaces the existing plugin system with the extracted
hashicorp/go-plugin library.
This doesn't introduce any new features such as binary flattening but
opens us up to that a lot more easily and removes a lot of code from TF
in favor of the upstream lib.
This will introduce a protocol change that will cause all existing
plugins to have to be recompiled to work properly. There are no actual
API changes, so they just have to be recompiled, but it is technically
backwards incompatible.
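For a sense of what building against the new library involves, here is a
minimal sketch of a plugin served over hashicorp/go-plugin, using a
made-up Greeter interface rather than Terraform's actual provider
plumbing; the handshake values are placeholders.
```
package main

import (
	"net/rpc"

	plugin "github.com/hashicorp/go-plugin"
)

// Greeter is the interface exposed across the plugin boundary in this sketch.
type Greeter interface {
	Greet() string
}

// GreeterRPCServer serves a Greeter implementation over net/rpc.
type GreeterRPCServer struct{ Impl Greeter }

func (s *GreeterRPCServer) Greet(args interface{}, resp *string) error {
	*resp = s.Impl.Greet()
	return nil
}

// GreeterRPC is the client half that the host process talks to.
type GreeterRPC struct{ client *rpc.Client }

func (c *GreeterRPC) Greet() string {
	var resp string
	if err := c.client.Call("Plugin.Greet", new(interface{}), &resp); err != nil {
		panic(err)
	}
	return resp
}

// GreeterPlugin ties both halves into go-plugin's Plugin interface.
type GreeterPlugin struct{ Impl Greeter }

func (p *GreeterPlugin) Server(*plugin.MuxBroker) (interface{}, error) {
	return &GreeterRPCServer{Impl: p.Impl}, nil
}

func (p *GreeterPlugin) Client(b *plugin.MuxBroker, c *rpc.Client) (interface{}, error) {
	return &GreeterRPC{client: c}, nil
}

type helloGreeter struct{}

func (helloGreeter) Greet() string { return "hello" }

func main() {
	// Serve handles the handshake with the host process and then blocks,
	// dispatching RPC calls to the registered plugin implementations.
	plugin.Serve(&plugin.ServeConfig{
		HandshakeConfig: plugin.HandshakeConfig{
			ProtocolVersion:  1,
			MagicCookieKey:   "EXAMPLE_PLUGIN_COOKIE",
			MagicCookieValue: "example",
		},
		Plugins: map[string]plugin.Plugin{
			"greeter": &GreeterPlugin{Impl: helloGreeter{}},
		},
	})
}
```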
* core: Add support for marking outputs as sensitive
This commit allows an output to be marked "sensitive", in which case the
value is redacted in the post-refresh and post-apply list of outputs.
For example, the configuration:
```
variable "input" {
  default = "Hello world"
}

output "notsensitive" {
  value = "${var.input}"
}

output "sensitive" {
  sensitive = true
  value     = "${var.input}"
}
```
Would result in the output:
```
terraform apply
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
notsensitive = Hello world
sensitive = <sensitive>
```
The `terraform output` command continues to display the value as before.
Limitations: Note that sensitivity is not tracked internally, so if the
output is interpolated in another module into a resource, the value will
be displayed. The value is still present in the state.
* provider/fastly: Add support for Conditions for Fastly Services
Docs here:
- https://docs.fastly.com/guides/conditions/
Also bump the go-fastly version for domain support in S3 Logging
* New top level AWS resource aws_eip_association
* Add documentation for aws_eip_association
* Add tests for aws_eip_association
* provider/aws: Change `aws_eip_association` to have computed parameters
The AWS API was sending more parameters than we had set. Therefore,
Terraform was showing constant changes when plans were being formed.
Wow this one was tricky!
This bug presents itself only when using planfiles, because when doing a
straight `terraform apply` the interpolations are left in place from the
Plan graph walk and paper over the issue. (This detail is what made it
so hard to reproduce initially.)
Basically, graph nodes for module variables are visited during the apply
walk and attempt to interpolate. During a destroy walk, no attributes
are interpolated from resource nodes, so these interpolations fail.
This scenario is supposed to be handled by the `PruneNoopTransformer` -
in fact it's described as the example use case in the comment above it!
So the bug had to do with the actual behavior of the Noop transformer.
The resource nodes were not properly reporting themselves as Noops
during a destroy, so they were being left in the graph.
This in turn triggered the module variable nodes to see that they had
another node depending on them, so they also reported that they could
not be pruned.
Therefore we had two nodes in the graph that were effectively noops but
were being visited anyways. The module variable nodes were already graph
leaves, which is why this error presented itself as just stray messages
instead of actual failure to destroy.
Fixes #5440
Fixes #5708
Fixes #4988
Fixes #3268