As mentioned in #17871, the current example can hide the fact that the module
path plays an important role, so the example's explanation is expanded.
Moreover, the verb "attach" is replaced with "map" to keep the vocabulary
consistent with the wording in the Terraform state documentation.
The fallback type for GetResource from an EachMap is a cty.Object,
because resource schemas may contain dynamically typed attributes.
Check for an Object type in the evaluation of self, so that the proper
GetAttr method is used when extracting the value.
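As a rough sketch of the distinction (using the go-cty API directly; the
helper and variable names here are illustrative, not Terraform's actual
code):

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    // instanceValue is a hypothetical helper: object values must be read
    // with GetAttr, while map values are read with Index.
    func instanceValue(instances cty.Value, key string) cty.Value {
        if instances.Type().IsObjectType() {
            return instances.GetAttr(key)
        }
        return instances.Index(cty.StringVal(key))
    }

    func main() {
        // With dynamically typed attributes, the per-key instances decode
        // as a cty.Object rather than a cty.Map.
        obj := cty.ObjectVal(map[string]cty.Value{
            "a": cty.StringVal("first"),
        })
        fmt.Println(instanceValue(obj, "a"))
    }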
These are often confusing for new contributors, since this looks
suspiciously like the right place to add new functions or change the
behavior of existing ones.
To reduce that confusion, here we remove them entirely from this package
(which is now dead code in Terraform 0.12 anyway) and include in the
documentation comments a pointer to the current function implementations.
If a resource is only destroying instances, there is no reason to
prepare the state and we can remove the Resource (prepare state) nodes.
They normally pose no issue, but if the instances are being
destroyed along with their dependencies, the resource node may fail to
evaluate due to the missing dependencies (since destroy happens in the
reverse order).
These failures were previously prevented by the cycle that arose when the
destroy nodes were directly attached to the resource nodes.
Destroy nodes do not need to be connected to the resource (prepare
state) node when adding them to the graph. Destroy nodes already have a
complete state in the graph (which is being destroyed), any references
will be added in the ReferenceTransformer, and the proper
connection to the create node will be added in the
DestroyEdgeTransformer.
Under normal circumstances this makes no difference, as create and
destroy nodes always have a dependency, so having the prepare state
handled before both only linearizes the operation slightly in the
normal destroy-then-create scenario.
However if there is a dependency on a resource being replaced in another
module, there will be a dependency between the destroy nodes in each
module (to complete the destroy ordering), while the resource node will
depend on the variable->output->resource chain. If both the destroy and
create nodes depend on the resource node, there will be a cycle.
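To make the shape of that cycle concrete, here is a toy rendering in plain
Go (not Terraform's dag package; the node names and the condensed edges are
assumptions for illustration):

    package main

    import "fmt"

    func main() {
        // X -> Y means X depends on Y.
        edges := map[string][]string{
            "create":  {"prepare"}, // create attaches to the prepare-state node
            "destroy": {"prepare"}, // the edge this change removes
            "prepare": {"chain"},   // resource depends on a variable->output chain
            "chain":   {"destroy"}, // condensed cross-module destroy ordering
        }
        // Prints true; dropping the destroy->prepare edge makes it false.
        fmt.Println("cycle:", hasCycle(edges))
    }

    // hasCycle runs a depth-first search with the usual three-color marking.
    func hasCycle(edges map[string][]string) bool {
        const (
            white = iota // unvisited
            gray         // on the current DFS path
            black        // fully explored
        )
        color := map[string]int{}
        var visit func(string) bool
        visit = func(n string) bool {
            color[n] = gray
            for _, m := range edges[n] {
                if color[m] == gray || (color[m] == white && visit(m)) {
                    return true
                }
            }
            color[n] = black
            return false
        }
        for n := range edges {
            if color[n] == white && visit(n) {
                return true
            }
        }
        return false
    }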
The CBDEdgeTransformer tests worked on fake data structures, with a
synthetic graph, and configs that didn't match. Update them to generate
a more complete graph, with real node implementations, from real
configs.
The output graph is filtered down to instances, and the results still
functionally match the original expected test results, with some minor
additions due to using the real implementation.
When looking for dependencies to fix when handling
create_before_destroy, we need to look past more than one edge, as
dependencies may appear transitively through outputs and variables. Use
Descendants rather than UpEdges.
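A toy contrast of the two lookups (simplified Go, not the dag package's
actual API; edge direction and names are assumptions):

    package main

    import "fmt"

    // graph maps a node to the nodes it depends on directly.
    type graph map[string][]string

    // upEdges stands in for a single-edge lookup: direct dependencies only.
    func (g graph) upEdges(n string) []string { return g[n] }

    // descendants walks the whole chain, so dependencies reached through
    // outputs and variables are included too.
    func (g graph) descendants(n string) map[string]bool {
        seen := map[string]bool{}
        var walk func(string)
        walk = func(m string) {
            for _, d := range g[m] {
                if !seen[d] {
                    seen[d] = true
                    walk(d)
                }
            }
        }
        walk(n)
        return seen
    }

    func main() {
        g := graph{
            "resource_b": {"var"},
            "var":        {"output"},
            "output":     {"resource_a"},
        }
        fmt.Println(g.upEdges("resource_b"))     // [var]: misses resource_a
        fmt.Println(g.descendants("resource_b")) // includes output and resource_a
    }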
We have the full graph to use for the CBD transformation, so there's no
longer any need to create a temporary graph, which may differ from the
original.
Some commands don't use variables at all or use them in a way that doesn't
require them to all be fully valid and consistent. For those, we don't
want to fetch variable values from the remote system and try to validate
them because that's wasteful and likely to cause unnecessary error
messages.
Furthermore, the variables endpoint in Terraform Cloud and Enterprise only
works for personal access tokens, so it's important that we don't assume
we can _always_ use it. If we do, then we'll see problems when commands
are run inside Terraform Cloud and Enterprise remote execution contexts,
where the variables map always comes back as empty.
The remote backend uses backend.ParseVariableValues only to decide whether
the user seems to be trying to use -var or -var-file options locally,
since those are not supported for the remote backend.
Other than detecting those, we don't actually have any need to use the
results of backend.ParseVariableValues, and so it's better for us to
ignore any errors it produces itself and prefer to just send a
potentially-invalid request to the remote system and let the remote system
be responsible for validating it.
This then avoids issues caused by the fact that when remote operations are
in use the local system does not have all of the required context: it
can't see which environment variables will be set in the remote execution
context nor which variables the remote system will set using its own
generated -var-file based on the workspace stored variables.
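A self-contained sketch of that detection pattern (the types and names
below are simplified stand-ins, not Terraform's actual API):

    package main

    import "fmt"

    type sourceType int

    const (
        fromCLIArg    sourceType = iota // set with -var
        fromNamedFile                   // set with -var-file
        fromEnv                         // set via environment; passed through
    )

    type variableValue struct {
        name   string
        source sourceType
    }

    // checkRemoteVariables deliberately ignores validation errors from
    // local parsing; it only rejects values set with -var or -var-file,
    // which the remote backend does not support. Everything else is left
    // for the remote system to validate.
    func checkRemoteVariables(vals []variableValue) error {
        for _, v := range vals {
            if v.source == fromCLIArg || v.source == fromNamedFile {
                return fmt.Errorf("variable %q was set with -var or -var-file, which is not supported for remote operations", v.name)
            }
        }
        return nil
    }

    func main() {
        fmt.Println(checkRemoteVariables([]variableValue{
            {name: "region", source: fromCLIArg},
        }))
    }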
This tool was intended for analysis of Terraform's AWS provider, but that
provider is no longer developed in this repository and so this tool is
no longer functional.