Earlier we had a bug where data resources were not removed from the
state during a destroy. This was fixed in cd0c452, and this test will
hopefully make sure it stays fixed.
Adding walkValidate to the EvalTree operations and removing the
walkValidate guard from Interpolater.valueModuleVar together allow the
values to be interpolated for Validate.
Variables weren't being interpolated during the Input phase, causing a
syntax error on the interpolation string. Adding `walkInput` to the
EvalTree operations prevents skipping the interpolation step.
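As a rough sketch of what this guard amounts to (only walkValidate and
walkInput come from these commits; the remaining names are
illustrative), interpolation is gated on the walk operation:

```go
// Hypothetical names: only walkValidate and walkInput are from the
// commits above; the rest of this sketch is illustrative.
type walkOperation int

const (
	walkValidate walkOperation = iota
	walkInput
	walkRefresh
	walkPlan
	walkApply
)

// interpolateDuring reports whether module variables should be
// interpolated for a given walk. Before these fixes, walkValidate and
// walkInput were missing from the set, so those walks skipped
// interpolation (and Input choked on the raw interpolation string).
func interpolateDuring(op walkOperation) bool {
	switch op {
	case walkValidate, walkInput, walkRefresh, walkPlan, walkApply:
		return true
	}
	return false
}
```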
cd0c452 contained a bug where the creation diff for a data resource was
put into a new local variable within the else block rather than into the
diff variable in the parent scope, causing a null diff to always be
produced.
This restores the expected behavior: a computed data resource appears in
the diff, so it can then be fetched during the apply walk.
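The mistake is easy to reproduce in isolation. Here's a minimal,
self-contained sketch (invented types and helper; not the actual
cd0c452 code) showing how `:=` inside the else block shadows the outer
variable:

```go
package main

import "fmt"

type InstanceDiff struct{}

func readDataDiff() (*InstanceDiff, error) { return &InstanceDiff{}, nil }

func main() {
	var diff *InstanceDiff
	computed := false
	if computed {
		diff = &InstanceDiff{}
	} else {
		// BUG: ":=" declares a *new* local diff that shadows the one
		// in the parent scope, so the outer diff stays nil.
		diff, err := readDataDiff()
		if err != nil {
			panic(err)
		}
		_ = diff
	}
	fmt.Println(diff) // <nil>: the creation diff was lost

	// FIX: assign to the outer variable instead of redeclaring:
	//     var err error
	//     diff, err = readDataDiff()
}
```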
Apparently there's been a regression in the creation of data resource
diffs: they aren't showing up in the plan at all.
As a first step to fixing this, this is an intentionally-failing test
that proves it's broken.
Previously the "planDestroy" pass would correctly produce a destroy diff,
but the "apply" pass would just ignore it and make a fresh diff, turning
it back into a "create" because data resources are always eager to
refresh.
Now we consider the previous diff when re-diffing during apply, so we
can preserve the plan to destroy and then actually "destroy" the data
resource (remove it from the state) when we get to ReadDataApply.
This ensures that the state is left empty after "terraform destroy";
previously we would leave behind data resource states.
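Sketched in hypothetical form (the helper and its signature are
invented; an InstanceDiff.Destroy flag is assumed), the apply-time
re-diff now looks something like:

```go
// diffForApply prefers a planned destroy over a fresh diff, so the
// eager data-source refresh can't turn a destroy back into a create
// during the apply walk.
func diffForApply(planned *InstanceDiff, rediff func() (*InstanceDiff, error)) (*InstanceDiff, error) {
	if planned != nil && planned.Destroy {
		return planned, nil
	}
	return rediff()
}
```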
Building on b10564a, this adds tweaks that allow the module var count
search to act recursively, handling the situation where something
like var.top gets passed to module middle as var.middle, and then to
module bottom as var.bottom, which is then used in a resource count.
A new problem was introduced by the prior fixes for destroy
interpolation messages: when resources depend on module variables
through a _count_ attribute, the variable is crucial for properly
building the graph - even in destroys. So removing all module
variables from the graph as noops was overzealous.
By borrowing the logic in `DestroyEdgeInclude` we are able to determine
if we need to keep a given module variable relatively easily.
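A rough sketch of that recursive "does this variable feed a count"
check, with invented types (the real implementation walks the config
and graph structures):

```go
// Invented types for illustration only.
type Module struct {
	CountVars  map[string]bool   // vars used directly in a resource count here
	PassedVars map[string]string // this module's var name -> expression passed by the parent
	Children   []*Module
}

// varFeedsCount reports whether the named variable, or any variable
// it is forwarded to in a child module, ends up in a resource count.
// Such variables must be kept in the graph, even for destroys.
func varFeedsCount(mod *Module, name string) bool {
	if mod.CountVars[name] {
		return true
	}
	for _, child := range mod.Children {
		for childVar, expr := range child.PassedVars {
			// e.g. var.top forwarded as module middle's var.middle,
			// then as module bottom's var.bottom, used in a count.
			if expr == "var."+name && varFeedsCount(child, childVar) {
				return true
			}
		}
	}
	return false
}
```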
I'd like to overhaul the `Destroy: true` implementation so that it does
not depend on config at all, but I want to continue for now with the
targeted fixes that we can backport into the 0.6 series.
This commit forward ports the changes made for 0.6.17, in order to store
the type and sensitive flag against outputs.
It also refactors the logic of the import from V0 to V1 state, and
fixes up the call sites to use the new format for outputs in V2 state.
Finally we fix up tests which did not previously set a state version
where one is required.
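In sketch form, the output record now carries the extra metadata; the
field names and JSON tags here are assumptions rather than the exact
V2 schema:

```go
// OutputState sketches an output that stores its type and sensitive
// flag alongside the value, per the commit text above.
type OutputState struct {
	Sensitive bool        `json:"sensitive"`
	Type      string      `json:"type"` // e.g. "string", "list", "map"
	Value     interface{} `json:"value"`
}
```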
Provider nodes interpolate their config during the input walk, but this
is very early and so it's pretty likely that any resources referenced are
entirely absent from the state.
As a special case then, we tolerate the normally-fatal case of having
an entirely missing resource variable so that the input walk can complete,
albeit skipping the providers that have such interpolations.
If these interpolations end up still being unresolved during refresh
(e.g. because the config references a resource that hasn't been created
yet) then we will catch that error on the refresh pass, or indeed on the
plan pass if -refresh=false is used.
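A sketch of the special case (hypothetical helper; assumes `import
"errors"` and the walkOperation sketch from earlier):

```go
var errSkipProviderInput = errors.New("unresolved resource reference; skipping provider input")

// checkResourceVar is fatal for a missing resource variable on every
// walk except Input, where the referenced resource may simply not
// have been created yet.
func checkResourceVar(op walkOperation, foundInState bool) error {
	if foundInState {
		return nil
	}
	if op == walkInput {
		return errSkipProviderInput
	}
	return errors.New("resource variable references a resource missing from the state")
}
```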
The ResourceAddress struct grows a new "Mode" field to match
Resource, and its parser learns to recognize the "data." prefix so it
can set that field.
This allows -target to be applied to data sources, although that is
arguably not a very useful thing to do. Other future uses of resource
addressing, like the state plumbing commands, may make better use of it.
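A simplified sketch of the prefix handling (the real parser also deals
with module paths and indexes; assumes `import "strings"`):

```go
// parseModePrefix splits off a leading "data." marker and reports the
// resource mode, leaving the rest of the address for further parsing.
func parseModePrefix(addr string) (mode, rest string) {
	if strings.HasPrefix(addr, "data.") {
		return "data", strings.TrimPrefix(addr, "data.")
	}
	return "managed", addr
}

// parseModePrefix("data.aws_ami.foo") => ("data", "aws_ami.foo")
// parseModePrefix("aws_instance.bar") => ("managed", "aws_instance.bar")
```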
Previously they would get left behind in the state because we had no
support for planning their destruction. Now we'll create a "destroy" plan
and act on it by just producing an empty state on apply, thus ensuring
that the data resources don't get left behind in the state after
everything else is gone.
The handling of data "orphans" is simpler than for managed resources
because the only thing we need to deal with is our own state, and the
validation pass guarantees that by the time we get to refresh or apply
the instance state is no longer needed by any other resources and so
we can safely drop it with no fanfare.
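In sketch form (a hypothetical helper; the real logic lives in the
eval tree), "destroying" a data resource orphan needs no provider call
at all:

```go
// applyDataDestroy applies a destroy diff for an orphaned data
// resource: validation has already guaranteed nothing depends on this
// state, so we just return an empty (nil) state.
func applyDataDestroy(_ *InstanceState) *InstanceState {
	return nil
}
```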
This implements the main behavior of data resources, including both the
early read in cases where the configuration is non-computed and the split
plan/apply read for cases where full configuration can't be known until
apply time.
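A hedged sketch of the two paths (the wrapper and the configComputed
flag are illustrative; ReadDataDiff and ReadDataApply are the provider
methods described further below):

```go
// readData either reads early, when the config is fully known, or
// defers the read to apply time by returning a diff instead.
func readData(p ResourceProvider, info *InstanceInfo, c *ResourceConfig, configComputed bool) (*InstanceState, *InstanceDiff, error) {
	diff, err := p.ReadDataDiff(info, c)
	if err != nil {
		return nil, nil, err
	}
	if configComputed {
		// Full config isn't known yet: the diff carries the pending
		// read into the apply walk.
		return nil, diff, nil
	}
	// Config fully known: read early so the values are available to
	// other resources during plan.
	state, err := p.ReadDataApply(info, diff)
	return state, nil, err
}
```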
The key difference between data and managed resources is in their
respective lifecycles. Now the expanded resource EvalTree switches on
the resource mode, generating a different lifecycle for each mode.
For this initial change only managed resources are implemented, using the
same implementation as before; data resources are no-ops. The data
resource implementation will follow in a subsequent change.
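In sketch form (the node type and helpers are illustrative; only the
switch on resource mode comes from the text above):

```go
func (n *nodeExpandedResource) EvalTree() EvalNode {
	switch n.Resource.Mode {
	case ManagedResourceMode:
		// Same lifecycle as before this change.
		return n.managedResourceEvalTree()
	case DataResourceMode:
		// Placeholder for now; the real data lifecycle follows in a
		// subsequent change.
		return &EvalNoop{}
	default:
		panic(fmt.Errorf("unsupported resource mode %v", n.Resource.Mode))
	}
}
```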
Data resources are a separate namespace of resources from managed
resources, so we need to call a different provider method depending on
which mode of resource we're visiting.
Managed resources use ValidateResource, while data resources use
ValidateDataSource, since at the provider level of abstraction each
provider has separate sets of resources and data sources respectively.
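Sketched as a dispatch (the wrapper is invented; the two provider
methods are the ones named above):

```go
// validateByMode routes validation to the method matching the
// resource's mode, since resources and data sources are separate
// namespaces at the provider level.
func validateByMode(p ResourceProvider, mode ResourceMode, t string, c *ResourceConfig) ([]string, []error) {
	if mode == DataResourceMode {
		return p.ValidateDataSource(t, c)
	}
	return p.ValidateResource(t, c)
}
```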
Once a data resource gets into the state, the state system needs to be
able to parse its id to match it with resources in the configuration.
Since data resources live in a separate namespace from managed resources,
the extra "mode" discriminator is required to specify which namespace
we're talking about, just like we do in the resource configuration.
This is a breaking change to the ResourceProvider interface that adds the
new operations relating to data sources.
DataSources, ValidateDataSource, ReadDataDiff and ReadDataApply are the
data source equivalents of Resources, Validate, Diff and Apply (respectively)
for managed resources.
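As a sketch of the new interface surface (the four method names come
from the text above; the signatures are assumed by analogy with their
managed-resource counterparts):

```go
type ResourceProvider interface {
	// ... existing managed-resource methods: Resources, Validate,
	// ValidateResource, Diff, Apply, Refresh, ...

	// DataSources returns the data sources this provider supports.
	DataSources() []DataSource

	// ValidateDataSource checks a data source configuration.
	ValidateDataSource(t string, c *ResourceConfig) ([]string, []error)

	// ReadDataDiff plans the read, marking attributes computed when
	// the read must be deferred to apply time.
	ReadDataDiff(info *InstanceInfo, c *ResourceConfig) (*InstanceDiff, error)

	// ReadDataApply performs the read and returns the resulting
	// instance state.
	ReadDataApply(info *InstanceInfo, d *InstanceDiff) (*InstanceState, error)
}
```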
The diff/apply model seems at first glance a rather strange workflow for
read-only resources, but implementing data resources in this way allows them
to fit cleanly into the standard plan/apply lifecycle in cases where the
configuration contains computed arguments and thus the read must be deferred
until apply time.
Along with breaking the interface, we also fix up the plugin
client/server and helper/schema implementations of it, which covers
all of the callers exercised when provider plugins use helper/schema.
This would be a breaking change for any provider plugin that directly
implements the provider interface, but no known plugins do this and it
is not recommended.
At the helper/schema layer the implementer sees ReadDataApply as a "Read",
as opposed to "Create" or "Update" as in the managed resource Apply
implementation. The planning mechanics are handled entirely within
helper/schema, so that complexity is hidden from the provider implementation
itself.
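A minimal sketch of what this looks like to a provider author (the
data source name and attributes are invented):

```go
import "github.com/hashicorp/terraform/helper/schema"

func dataSourceExample() *schema.Resource {
	return &schema.Resource{
		// Only Read is implemented; helper/schema supplies the
		// diff/apply plumbing around it.
		Read: dataSourceExampleRead,
		Schema: map[string]*schema.Schema{
			"name":   {Type: schema.TypeString, Required: true},
			"output": {Type: schema.TypeString, Computed: true},
		},
	}
}

func dataSourceExampleRead(d *schema.ResourceData, meta interface{}) error {
	// Perform the read, then set the ID and computed attributes.
	d.SetId(d.Get("name").(string))
	return d.Set("output", "value for "+d.Get("name").(string))
}
```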