Now that outputs are always evaluated, we still need a way to remove
them from state when they are destroyed.
Previously, outputs were removed during destroy from the same
"Applyable" node type that evaluates them. Now that we need to possibly
both evaluate and remove output during an apply, we add a new node -
NodeDestroyableOutput.
This new node is added to the graph by the DestroyOutputTransformer,
which make the new destroy node depend on all descendants of the output
node. This ensures that the output remains in the state as long as
everything which may interpolate the output still exists.
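A rough sketch of the transformer (simplified here; exact fields and
helper names may differ from the real implementation):

    // DestroyOutputTransformer adds a NodeDestroyableOutput for each
    // output node and connects it to everything that can still
    // reference that output.
    type DestroyOutputTransformer struct{}

    func (t *DestroyOutputTransformer) Transform(g *Graph) error {
        for _, v := range g.Vertices() {
            output, ok := v.(*NodeApplyableOutput)
            if !ok {
                continue
            }

            // add the destroy node for this output
            node := &NodeDestroyableOutput{
                PathValue: output.PathValue,
                Config:    output.Config,
            }
            g.Add(node)

            // depend on every descendant of the output, so that the
            // output stays in state until nothing that could
            // interpolate it remains
            deps, err := g.Descendents(v)
            if err != nil {
                return err
            }
            for _, d := range deps.List() {
                g.Connect(dag.BasicEdge(node, d))
            }
        }
        return nil
    }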
Always evaluate outputs during destroy, just like we did for locals.
This breaks existing tests, which we will handle separately.
Don't reverse output/local node evaluation order during destroy, as they
are both being evaluated.
Add a complex destroy provisioner testcase using locals, outputs and
variables.
Add that pesky "id" attribute to the instance states for interpolation.
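The new testcase is shaped roughly like this (the fixture below is a
paraphrase embedded as it might appear in a Go context test; the
resource type and names are illustrative, not the actual fixture):

    const testDestroyProvisionerConfig = `
    variable "input" {
      default = "value"
    }

    locals {
      value = "prefix-${var.input}"
    }

    resource "aws_instance" "foo" {
      provisioner "local-exec" {
        when    = "destroy"
        command = "echo ${local.value} ${self.id}"
      }
    }

    output "value" {
      value = "${local.value}"
    }
    `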
Destroy-time provisioners require us to re-evaluate during destroy.
Rather than destroying local values, which doesn't do much since they
aren't persisted to state, we always evaluate them regardless of the
type of apply. Since the destroy-time local node is no longer a
"destroy" operation, the order of evaluation needs to be reversed. Take
the existing DestroyValueReferenceTransformer and change it to reverse
the outgoing edges, rather than the incoming edges. This makes it so that
any dependencies of a local or output node are destroyed after
evaluation.
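Sketched, the reversal looks something like this (simplified; the real
transformer carries more bookkeeping):

    func (t *DestroyValueReferenceTransformer) Transform(g *Graph) error {
        for _, v := range g.Vertices() {
            // only local and output values are rewired here
            switch v.(type) {
            case *NodeApplyableOutput, *NodeLocal:
            default:
                continue
            }

            // reverse each outgoing edge: anything this value points at
            // must now wait until the value has been evaluated
            for _, e := range g.EdgesFrom(v) {
                g.RemoveEdge(e)
                g.Connect(dag.BasicEdge(e.Target(), v))
            }
        }
        return nil
    }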
Having locals evaluated during destroy failed one other test, but that
was the odd case where we need `id` to exist as an attribute as well as
a field.
Previously we would interpolate the count config (ResourceConfig.RawCount)
only while preparing to dynamic-expand aggregate resource nodes. This is
problematic because we do not dynamic-expand any resource nodes during the
apply walk, and so previously the count value was not available for
interpolation during apply and would result in an error.
Now we interpolate RawCount once for each resource we visit during the
apply walk -- even though that redundantly interpolates the same config
multiple times when count > 1 -- to ensure that it's available by the
time we interpolate any remaining expressions in the config and any
expressions within "connection" and "provisioner" blocks.
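A minimal sketch of the idea, using approximate names from the old
core (not the literal patch):

    // interpolateCount resolves a resource's count expression during
    // the apply walk, so the value is known before the rest of the
    // config and any connection/provisioner blocks are interpolated.
    func interpolateCount(ctx EvalContext, res *config.Resource) (int, error) {
        // RawCount holds the uninterpolated count expression, e.g.
        // "${var.instance_count}". Interpolating it per-visit is
        // redundant when count > 1, but guarantees availability.
        if _, err := ctx.Interpolate(res.RawCount, nil); err != nil {
            return 0, err
        }
        return res.Count()
    }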
This error was masked by us sharing a single RawConfig instance between
the plan and apply walks when "terraform apply" is run with no explicit
plan file argument, but was exposed by the workflow where the plan is
written first to disk, since in that case the interpolation result from
the plan phase is not present in the deflated plan object. For
this reason, the new context test serializes the plan into an in-memory
buffer and reloads it in order to simulate the effect of the two-step
workflow.
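The roundtrip in that test looks roughly like this (an in-package test
sketch; setup is elided and names are approximate):

    func TestApplyWithSerializedPlan(t *testing.T) {
        // ... build the module, create a Context, and call Plan()
        // to obtain `plan` ...

        // serialize the plan into an in-memory buffer and reload it,
        // so the apply walk sees only what a plan file would contain
        var buf bytes.Buffer
        if err := WritePlan(plan, &buf); err != nil {
            t.Fatal(err)
        }
        planFromFile, err := ReadPlan(&buf)
        if err != nil {
            t.Fatal(err)
        }

        // build the apply context from the reloaded plan, as
        // "terraform apply <planfile>" would
        ctx, err := planFromFile.Context(&ContextOpts{})
        if err != nil {
            t.Fatal(err)
        }
        if _, err := ctx.Apply(); err != nil {
            t.Fatal(err)
        }
    }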
Previously we would return the raw error from strconv.ParseInt, whose
message exposes implementation details and is therefore not helpful to
the user.
Instead, we use a locally-defined error message that talks only about
what the caller is expected to know: that count should be parsable as
an integer.
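Something along these lines, where the exact wording is illustrative:

    count, err := strconv.ParseInt(raw, 0, 0)
    if err != nil {
        // don't surface strconv's own message; state only what the
        // user needs to know about their configuration
        return 0, fmt.Errorf(
            "resource count must be an integer, got: %s", raw)
    }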
Containers (maps, lists, sets) in an InstanceDiff need to be handled in
their entirety. Unchanged values cannot be filtered out from diffs, as
providers expect attribute containers to be complete.
If a value in ignore_changes maps to a single key in an attribute
container, and there are other changes present, that ignored value must
be included in the diff as well.
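Conceptually (the helper names below are hypothetical, standing in for
the real ignore_changes processing):

    // Example: ignore_changes = ["tags.Name"] while the diff also
    // changes "tags.Env". Providers need the "tags" map whole, so
    // "tags.Name" (and the "tags.%" count key) must stay in the diff.
    for key := range ignorableAttrKeys { // e.g. "tags.Name"
        prefix := containerKeyPrefix(key) // "tags."
        if prefix != "" && diffHasOtherChangesWithPrefix(d, prefix, key) {
            // un-ignore the key; a partial container diff would be
            // misread by the provider
            delete(ignorableAttrKeys, key)
        }
    }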
In terraform-providers/terraform-provider-aws#2935, we have been removing
code duplication by reusing the "NormalizeJsonString" function from the
"structure" helper.
The tests in the AWS provider cover more use-cases, and those are added
in this work.
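For reference, the helper re-encodes a JSON string into a canonical
form so that semantically equal documents compare as equal strings:

    package main

    import (
        "fmt"

        "github.com/hashicorp/terraform/helper/structure"
    )

    func main() {
        // key order and whitespace differences disappear after
        // normalization
        norm, err := structure.NormalizeJsonString(`{"b": 2,  "a": 1}`)
        if err != nil {
            fmt.Println("not valid JSON:", err)
            return
        }
        fmt.Println(norm) // {"a":1,"b":2}
    }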
When writing an example for a submodule, the example should be placed in
`examples/{example name}` instead of
`modules/{module name}/examples/{example name}`.
Internally, triton-go has changed how it handles errors. We can now get rid of
checking strings for errors, since an errors library has been introduced
that wraps some of the major errors we encounter and test for.
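The shape of the change, with an illustrative alias (tritonerrors) and
a hypothetical helper name standing in for the new errors library:

    // before: brittle matching on the error text
    if err != nil && strings.Contains(err.Error(), "ResourceNotFound") {
        d.SetId("")
        return nil
    }

    // after: ask the wrapping library instead (IsResourceNotFound is
    // an assumed name, not triton-go's confirmed API)
    if err != nil && tritonerrors.IsResourceNotFound(err) {
        d.SetId("")
        return nil
    }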