Fixes #7774
This modifies the `import` command to load configuration files from the
pwd. It also augments the CLI's configuration loading with a new option
(default false, same as the old behavior) that allows directories with
no Terraform configurations.
For import, we allow directories with no Terraform configurations, so
this option is set to true.
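As a rough sketch of the shape this option could take (the names `contextOpts`, `AllowMissingConfig`, and `loadConfigDir` are illustrative assumptions, not the actual internals):
```
package main

import (
	"fmt"
	"path/filepath"
)

// contextOpts is a stand-in for the CLI's configuration-loading options;
// the field names here are assumptions, not the real Terraform internals.
type contextOpts struct {
	// Path is the directory to load configuration from
	// (the pwd in the case of `terraform import`).
	Path string

	// AllowMissingConfig, when true, accepts a directory with no
	// Terraform configuration files instead of returning an error.
	// It defaults to false, matching the old behavior.
	AllowMissingConfig bool
}

// loadConfigDir sketches the loading rule: an empty directory is an
// error unless AllowMissingConfig is set (as it is for import).
func loadConfigDir(opts contextOpts) error {
	matches, err := filepath.Glob(filepath.Join(opts.Path, "*.tf"))
	if err != nil {
		return err
	}
	if len(matches) == 0 && !opts.AllowMissingConfig {
		return fmt.Errorf("no Terraform configuration files in %q", opts.Path)
	}
	return nil
}

func main() {
	// import sets the option to true, so an empty directory is fine.
	fmt.Println(loadConfigDir(contextOpts{Path: ".", AllowMissingConfig: true}))
}
```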
Fixes #7975
This changes the InputMode for the CLI to always be:
`InputModeProvider | InputModeVar | InputModeVarUnset`
Which means:
* Ask for provider variables
* Ask for user variables _that are not already set_
The change is the latter point. Before, we'd only ask for variables if
zero were given. This forced the user to either set no variables via
the CLI, env vars, or tfvars, or to set ALL of them, with no
in-between. As reported in #7975, this isn't the expected behavior.
The new behavior makes it so that unset variables are always asked for.
Users can retain the previous behavior by setting `-input=false`. This
would ensure that variables set by external sources cover all cases.
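For illustration, a minimal sketch of the bit-flag combination described above (the constant names follow the list; the concrete type and values are assumptions):
```
package main

import "fmt"

// InputMode is a bitmask controlling what Terraform asks the user for.
// The constant names mirror the description above; the values here are
// only illustrative.
type InputMode byte

const (
	InputModeVar      InputMode = 1 << iota // ask for user variables
	InputModeVarUnset                       // ...but only those not already set
	InputModeProvider                       // ask for provider configuration
)

func main() {
	// The CLI now always uses this combination, so only unset
	// variables are prompted for.
	mode := InputModeProvider | InputModeVar | InputModeVarUnset

	fmt.Println("asks for providers:", mode&InputModeProvider != 0)
	fmt.Println("asks only for unset vars:", mode&InputModeVarUnset != 0)
}
```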
The handling of remote variables was completely disabled for push.
We still need to fetch variables from Atlas for push, because if a
variable is only set remotely, the Input walk will still prompt the
user for a value. We add the missing remote variables to the context so
that input is not requested for them.
We now handle remote variables only as `atlas.TFVar` and explicitly
pass around that type rather than an `interface{}`.
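A hedged sketch of the merge described above; the `TFVar` shape and the `mergeRemoteVars` helper stand in for the real atlas-go type and the actual push code:
```
package main

import "fmt"

// TFVar stands in for atlas.TFVar here; the real type lives in the
// atlas-go client, so treat this shape as an illustrative assumption.
type TFVar struct {
	Key   string
	Value string
}

// mergeRemoteVars is a hypothetical helper: any variable that is only
// set remotely is copied into the local variable map so the Input walk
// does not prompt for it.
func mergeRemoteVars(local map[string]string, remote []TFVar) {
	for _, v := range remote {
		if _, ok := local[v.Key]; !ok {
			local[v.Key] = v.Value
		}
	}
}

func main() {
	local := map[string]string{"region": "us-east-1"}
	remote := []TFVar{
		{Key: "ami", Value: "ami-abc123"},
		{Key: "region", Value: "us-west-2"},
	}

	mergeRemoteVars(local, remote)
	fmt.Println(local) // map[ami:ami-abc123 region:us-east-1]
}
```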
Shorten the test fixture slightly to make the output a little more
readable on failures.
The strings we have in the variables may contain escaped double-quotes,
which have already been parsed and had the `\`s removed. We need to
re-escape these, but only if we are in the outer string and not inside an
interpolation.
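A sketch of the kind of scanner this rule implies, tracking interpolation depth so that only quotes in the outer string are re-escaped (the function name and exact handling are assumptions):
```
package main

import (
	"fmt"
	"strings"
)

// escapeOuterQuotes re-escapes double quotes in the outer string while
// leaving quotes inside ${...} interpolations untouched. This is an
// illustrative sketch of the rule described above, not the real code.
func escapeOuterQuotes(s string) string {
	var buf strings.Builder
	depth := 0 // nesting level of ${...} interpolations

	for i := 0; i < len(s); i++ {
		switch {
		case s[i] == '$' && i+1 < len(s) && s[i+1] == '{':
			depth++
			buf.WriteString("${")
			i++
		case s[i] == '{' && depth > 0:
			depth++
			buf.WriteByte('{')
		case s[i] == '}' && depth > 0:
			depth--
			buf.WriteByte('}')
		case s[i] == '"' && depth == 0:
			buf.WriteString(`\"`)
		default:
			buf.WriteByte(s[i])
		}
	}
	return buf.String()
}

func main() {
	fmt.Println(escapeOuterQuotes(`say "hi" to ${lookup(var.greetings, "en")}`))
	// say \"hi\" to ${lookup(var.greetings, "en")}
}
```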
Add `tf_vars` to the data structures sent in `terraform push`.
This takes any value of type `[]interface{}` or `map[string]interface{}`
and marshals it as a string representation of the equivalent HCL. This
prevents ambiguity in Atlas between a string that merely looks like a
JSON structure and an actual JSON structure.
For the time being we will need a way to serialize data as HCL, so the
command package has an internal `encodeHCL` function to do so. We can
remove this if we get a complete package for marshaling HCL.
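For illustration, a minimal encoder along these lines; the real `encodeHCL` may differ, and this sketch only handles flat, string-valued lists and maps:
```
package main

import (
	"fmt"
	"strings"
)

// encodeHCL sketches serializing a slice or map as an HCL string, so
// Atlas receives unambiguous HCL rather than something that merely
// looks like JSON. Only flat structures are handled here.
func encodeHCL(name string, v interface{}) (string, error) {
	var buf strings.Builder
	switch val := v.(type) {
	case []interface{}:
		buf.WriteString(name + " = [")
		for i, item := range val {
			if i > 0 {
				buf.WriteString(", ")
			}
			fmt.Fprintf(&buf, "%q", fmt.Sprint(item))
		}
		buf.WriteString("]")
	case map[string]interface{}:
		buf.WriteString(name + " = {\n")
		for k, item := range val {
			fmt.Fprintf(&buf, "  %s = %q\n", k, fmt.Sprint(item))
		}
		buf.WriteString("}")
	default:
		return "", fmt.Errorf("unsupported type %T", v)
	}
	return buf.String(), nil
}

func main() {
	out, _ := encodeHCL("azs", []interface{}{"us-east-1a", "us-east-1b"})
	fmt.Println(out) // azs = ["us-east-1a", "us-east-1b"]
}
```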
* core: Add support for marking outputs as sensitive
This commit allows an output to be marked "sensitive", in which case the
value is redacted in the post-refresh and post-apply list of outputs.
For example, the configuration:
```
variable "input" {
  default = "Hello world"
}

output "notsensitive" {
  value = "${var.input}"
}

output "sensitive" {
  sensitive = true
  value = "${var.input}"
}
```
Would result in the output:
```
terraform apply
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
notsensitive = Hello world
sensitive = <sensitive>
```
The `terraform output` command continues to display the value as before.
Limitations: sensitivity is not tracked internally, so if the output is
interpolated into a resource in another module, the value will be
displayed. The value is still present in the state.
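A sketch of where the redaction could happen when printing outputs; the names here are illustrative, not the actual command package code:
```
package main

import "fmt"

// outputState and formatOutput are illustrative stand-ins for how the
// CLI could redact a sensitive output after refresh/apply while
// `terraform output` keeps showing the real value.
type outputState struct {
	Value     string
	Sensitive bool
}

func formatOutput(name string, o outputState, redact bool) string {
	if redact && o.Sensitive {
		return name + " = <sensitive>"
	}
	return fmt.Sprintf("%s = %s", name, o.Value)
}

func main() {
	o := outputState{Value: "Hello world", Sensitive: true}
	fmt.Println(formatOutput("sensitive", o, true))  // sensitive = <sensitive>
	fmt.Println(formatOutput("sensitive", o, false)) // sensitive = Hello world
}
```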
Add a `-target=resource` flag to core operations, allowing users to
target specific resources in their infrastructure. When `-target` is
used, the operation applies only to that resource and its
dependencies.
The calculated dependencies are different depending on whether we're
running a normal operation or a `terraform destroy`.
Generally, "dependencies" refers to ancestors: resources falling
_before_ the target in the graph, because their changes are required to
accurately act on the target.
For destroys, "dependencies" are descendents: those resources which fall
_after_ the target. These resources depend on our target, which is going
to be destroyed, so they should also be destroyed.
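A rough sketch of selecting the two different dependency sets from a dependency map; the graph representation and traversal here are assumptions for illustration, not Terraform's graph code:
```
package main

import "fmt"

// deps maps each resource to the resources it depends on. collect
// returns the target's ancestors (what it needs) for normal operations,
// or its descendants (what needs it) for destroy, by optionally
// inverting the edges first.
func collect(deps map[string][]string, target string, destroy bool) map[string]bool {
	g := deps
	if destroy {
		// Invert the edges: walk toward resources that depend on the target.
		g = map[string][]string{}
		for from, tos := range deps {
			for _, to := range tos {
				g[to] = append(g[to], from)
			}
		}
	}

	seen := map[string]bool{}
	var visit func(string)
	visit = func(n string) {
		for _, next := range g[n] {
			if !seen[next] {
				seen[next] = true
				visit(next)
			}
		}
	}
	visit(target)
	return seen
}

func main() {
	// aws_instance.web depends on aws_subnet.a, which depends on aws_vpc.main.
	deps := map[string][]string{
		"aws_instance.web": {"aws_subnet.a"},
		"aws_subnet.a":     {"aws_vpc.main"},
	}
	fmt.Println(collect(deps, "aws_subnet.a", false)) // ancestors: aws_vpc.main
	fmt.Println(collect(deps, "aws_subnet.a", true))  // descendants: aws_instance.web
}
```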