This introduces the `terraform state list` command to list the resources
within a state. This is the first of many state management commands coming
in 0.7.
This is also the first of many commands to come that are considered
"plumbing" commands within Terraform (see "plumbing vs porcelain":
http://git.661346.n2.nabble.com/what-are-plumbing-and-porcelain-td2190639.html).
As such, this PR also introduces a fair amount of groundwork to support
plumbing commands.
The main changes:
- The main command help output is changed to split commands into "common"
  and "uncommon" groups.
- mitchellh/cli is updated to support nested subcommands, since
  `terraform state list` is a nested subcommand.
- `terraform.StateFilter` is introduced as a way in core to filter/search
  the state files. This is very basic currently, but I expect to make it
  more advanced as time goes on.
- The `terraform state list` command is introduced to list resources in a
  state. It can take a series of arguments to filter the listing down; see
  the example below.
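A minimal sketch of the intended usage (the resource names and module here are illustrative, not from this PR):
```
$ terraform state list
aws_instance.foo
aws_instance.bar
module.elb.aws_elb.main

$ terraform state list module.elb
module.elb.aws_elb.main
```
With no arguments every resource in the state is listed; passing addresses narrows the output to matching resources.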
Known issues, or things that aren't done in this PR on purpose:
- Unit tests for terraform state list are on the way. Unit tests for the
core changes are all there.
This replaces the internal plugin system with the extracted hashicorp/go-plugin
library.
This doesn't introduce any new features such as binary flattening, but it
opens us up to those much more easily and removes a lot of code from Terraform
in favor of the upstream library.
This introduces a protocol change that will require all existing plugins to
be recompiled to work properly. There are no actual API changes, so they
just have to be recompiled, but it is technically backwards incompatible.
The `name` attribute is always normalized to an FQDN, with a trailing dot,
when returned from the API.
We store the name as it's provided in the configuration, so "www" stays as "www"
and "www.terraformtesting.io." stays as "www.terraformtesting.io.".
The problem here is that if we use a full name as above, and the configuration
does *not* include the trailing dot, the API will return a version that does,
and we'll have a conflict.
This is particularly bad when we have a lifecycle block with
`create_before_destroy`; the record will get an update posted (which ends up
being a no-op on AWS's side), but then we'll delete the same record immediately
after, resulting in no record at all.
This PR addresses that by trimming the trailing dot from the `name` when saving
to state. We migrate existing state to match, to avoid false-positive diffs.
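As an illustrative sketch (the zone reference and record values are made up), both of the following `name` forms now end up in state without the trailing dot:
```
resource "aws_route53_record" "www" {
  zone_id = "${aws_route53_zone.main.zone_id}"
  name    = "www.terraformtesting.io."   # stored in state as "www.terraformtesting.io"
  type    = "A"
  ttl     = "300"
  records = ["127.0.0.1"]
}
```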
* core: Add support for marking outputs as sensitive
This commit allows an output to be marked "sensitive", in which case the
value is redacted in the post-refresh and post-apply list of outputs.
For example, the configuration:
```
variable "input" {
default = "Hello world"
}
output "notsensitive" {
value = "${var.input}"
}
output "sensitive" {
sensitive = true
value = "${var.input}"
}
```
Would result in the output:
```
terraform apply
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
notsensitive = Hello world
sensitive = <sensitive>
```
The `terraform output` command continues to display the value as before.
Limitations: Note that sensitivity is not tracked internally, so if the
output is interpolated into a resource in another module, the value will
be displayed. The value is also still present in the state.
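For example, given the configuration above, querying the output directly would still be expected to show the raw value:
```
$ terraform output sensitive
Hello world
```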
* provider/fastly: Add support for Conditions for Fastly Services
Docs here:
- https://docs.fastly.com/guides/conditions/
Also bump go-fastly version for domain support in S3 Logging
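A rough sketch of what a condition block on a service might look like (the service, backend, and statement values here are illustrative, not taken from this PR):
```
resource "fastly_service_v1" "demo" {
  name = "demo-service"

  domain {
    name = "demo.example.com"
  }

  backend {
    address = "demo-origin.example.com"
    name    = "demo origin"
  }

  condition {
    name      = "restrict-admin"
    type      = "REQUEST"
    statement = "req.url ~ \"^/admin\""
    priority  = 10
  }

  force_destroy = true
}
```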
* New top-level AWS resource aws_eip_association (usage sketch below)
* Add documentation for aws_eip_association
* Add tests for aws_eip_association
* provider/aws: Change `aws_eip_association` to have computed
parameters
The AWS API was sending back more parameters than we had set, so Terraform
was showing constant changes when plans were being formed.
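A minimal usage sketch of the new resource (the instance reference is illustrative):
```
resource "aws_eip" "example" {
  vpc = true
}

resource "aws_eip_association" "eip_assoc" {
  instance_id   = "${aws_instance.web.id}"
  allocation_id = "${aws_eip.example.id}"
}
```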
Wow this one was tricky!
This bug presents itself only when using planfiles, because when doing a
straight `terraform apply` the interpolations are left in place from the
Plan graph walk and paper over the issue. (This detail is what made it
so hard to reproduce initially.)
Basically, graph nodes for module variables are visited during the apply
walk and attempt to interpolate. During a destroy walk, no attributes
are interpolated from resource nodes, so these interpolations fail.
This scenario is supposed to be handled by the `PruneNoopTransformer` -
in fact it's described as the example use case in the comment above it!
So the bug had to do with the actual behavior of the Noop transformer.
The resource nodes were not properly reporting themselves as no-ops
during a destroy, so they were being left in the graph.
This in turn triggered the module variable nodes to see that they had
another node depending on them, so they also reported that they could
not be pruned.
Therefore we had two nodes in the graph that were effectively no-ops but
were being visited anyway. The module variable nodes were already graph
leaves, which is why this error presented itself as just stray messages
instead of an actual failure to destroy.
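For reference, the kind of configuration that can hit this looks roughly like the following (names and values are illustrative): a module variable interpolates a resource attribute, a plan is saved with `terraform plan -destroy -out=plan`, and then that planfile is applied.
```
resource "aws_instance" "web" {
  ami           = "ami-123456"   # illustrative values
  instance_type = "t2.micro"
}

module "child" {
  # ./child declares: variable "instance_id" {}
  source      = "./child"
  instance_id = "${aws_instance.web.id}"
}
```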
Fixes #5440. Fixes #5708. Fixes #4988. Fixes #3268.
* Adding private ip address reference
* Updating the docs.
* Removing optional attribute from private_ip_address
Removing the optional attribute from private_ip_address; this element is only used in the read.
* Selecting the first element instead of using a loop for now.
Change this to a loop when https://github.com/Azure/azure-sdk-for-go/issues/259 is fixed
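These commits don't name the resource, but assuming (purely as an illustration) the exported attribute is `private_ip_address` on a network interface, referencing it might look roughly like this:
```
output "nic_private_ip" {
  # hypothetical reference; the exact resource and attribute path are assumptions
  value = "${azurerm_network_interface.example.private_ip_address}"
}
```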
The `storage_data_disk` block was trying to use `vhd_url` rather than `vhd_uri`.
This was causing an error when creating a new data disk as part of a VM.
Also added validation, as data disks can only be 1-1023 GB in size.
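For reference, a rough sketch of the block this touches (the values are illustrative; note `vhd_uri`, not `vhd_url`):
```
storage_data_disk {
  name          = "datadisk0"
  vhd_uri       = "https://example.blob.core.windows.net/vhds/datadisk0.vhd"
  create_option = "Empty"
  disk_size_gb  = "1023"   # validated to be between 1 and 1023
  lun           = 0
}
```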
Added the hosted_zone_id attribute, which exposes the Route 53 hosted zone
ID that Alias Resource Record Sets can be routed to.
This fixes hashicorp/terraform#6489.
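The resource isn't named here, but as a purely hypothetical sketch (assuming the attribute were exposed on an S3 bucket), an Alias record could consume it like this:
```
resource "aws_route53_record" "alias" {
  zone_id = "${aws_route53_zone.main.zone_id}"
  name    = "assets.example.com"
  type    = "A"

  alias {
    # hypothetical: assumes hosted_zone_id is exported by aws_s3_bucket
    name                   = "${aws_s3_bucket.assets.website_domain}"
    zone_id                = "${aws_s3_bucket.assets.hosted_zone_id}"
    evaluate_target_health = false
  }
}
```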