I updated the "Variables" map incorrectly in the earlier commit 10fe50bbdb
while making bulk updates to get the tests compiling again with the
changed underlying APIs.
The original value here was "bar", incorrectly changed to "foo" in that
commit. Here we revert it to "bar".
We only support provider input for the root module. This is already
checked in ProviderInput, but was not checked in SetProviderInput. We
can't actually do anything particularly clever with an invalid call here,
but we will at least generate a WARN log to help with debugging.
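For illustration, here's a minimal sketch of the kind of guard described here; the types, method signature, and log wording below are simplified stand-ins, not the actual Terraform implementation:

    package main

    import "log"

    type moduleAddr []string // module path relative to the root; empty means root

    func (m moduleAddr) IsRoot() bool { return len(m) == 0 }

    type evalContext struct {
        providerInput map[string]map[string]interface{}
    }

    // SetProviderInput records gathered input values, but only for providers
    // in the root module; calls for non-root modules are ignored with a WARN
    // log, since there's nothing clever to do with an invalid call.
    func (c *evalContext) SetProviderInput(path moduleAddr, provider string, vals map[string]interface{}) {
        if !path.IsRoot() {
            log.Printf("[WARN] SetProviderInput: ignoring input for provider %q in non-root module", provider)
            return
        }
        if c.providerInput == nil {
            c.providerInput = make(map[string]map[string]interface{})
        }
        c.providerInput[provider] = vals
    }

    func main() {
        c := &evalContext{}
        c.SetProviderInput(moduleAddr{"child"}, "aws", map[string]interface{}{"region": "us-west-2"}) // ignored, logged
        c.SetProviderInput(moduleAddr{}, "aws", map[string]interface{}{"region": "us-west-2"})        // recorded
    }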
Also need to update TestBuiltinEvalContextProviderInput to expect this
new behavior of ignoring input for non-root modules.
The prior commit changed the schema-access model so that all schemas are
fetched up front during context creation and are then readily available
for use throughout graph building and evaluation.
As a result, we no longer need to create dependency edges to a provider
when one of its resources is referenced by another node, and so the
ProviderTransformer needs only to worry about direct ownership
dependencies.
This also avoids the need for us to run AttachSchemaTransformer twice,
since ProviderTransformer no longer needs schema and we can therefore
defer attaching until just before ReferenceTransformer, when all of the
referenceable and referencing nodes are already present in the graph.
We now fetch all of the necessary schemas during context creation, so we
can just thread that repository of schemas through into EvalContext and
Evaluator and access the schemas as needed without any further fetching.
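As a rough illustration of the shape of this, here's a minimal sketch of a schema repository that is built once and then only read during evaluation; the type names below are hypothetical stand-ins rather than the real EvalContext/Evaluator types:

    package main

    import "fmt"

    type providerSchema struct {
        ResourceTypes map[string][]string // resource type -> attribute names
    }

    // schemas is populated once during context creation and then treated as
    // read-only for the rest of the run.
    type schemas struct {
        providers map[string]*providerSchema
    }

    func (s *schemas) ProviderSchema(name string) *providerSchema {
        return s.providers[name]
    }

    // evaluator carries the shared repository so expression evaluation can
    // look up schemas without any further fetching.
    type evaluator struct {
        Schemas *schemas
    }

    func (e *evaluator) attrsFor(provider, resourceType string) []string {
        ps := e.Schemas.ProviderSchema(provider)
        if ps == nil {
            return nil
        }
        return ps.ResourceTypes[resourceType]
    }

    func main() {
        s := &schemas{providers: map[string]*providerSchema{
            "aws": {ResourceTypes: map[string][]string{"aws_instance": {"id", "ami"}}},
        }}
        ev := &evaluator{Schemas: s}
        fmt.Println(ev.attrsFor("aws", "aws_instance")) // [id ami]
    }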
This requires updating a few tests to have a valid Provider address in
their state objects, because we need that in order to trigger the loading
of the relevant schema.
This test depends on having a correct schema, so we'll specify the minimum
schema for its fixture inline here rather than using the superset schema
returned by testProvider.
Provider input is no longer handled with a graph walk, so the code
related to the input graph and walk is no longer needed.
For now the Input method is retained on the ResourceProvider interface,
but it will never be called. Subsequent work to revamp the provider API
will remove this method.
Add a graphNodeAttachDestroy interface, so destroy nodes can be attached
to their companion create node. The creator can then reference the
CreateBeforeDestroy status of the destroyer, determining if the current
state needs to be replaced or deposed.
This is needed when a node is forced to become CreateBeforeDestroy by a
dependency rather than by the config: because the config is immutable,
only the destroyer is aware that it has been forced CreateBeforeDestroy.
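A minimal sketch of how such an attachment interface lets the creator consult its destroyer; the names and signatures below are illustrative stand-ins, not necessarily the real ones:

    package main

    import "fmt"

    // destroyerNode plans the destruction of the old instance. Its effective
    // CreateBeforeDestroy may have been forced on by a dependency rather than
    // by the (immutable) config.
    type destroyerNode struct {
        createBeforeDestroy bool
    }

    func (d *destroyerNode) CreateBeforeDestroy() bool { return d.createBeforeDestroy }

    // graphNodeAttachDestroyer is implemented by create nodes that want a
    // reference to their companion destroy node.
    type graphNodeAttachDestroyer interface {
        AttachDestroyNode(d *destroyerNode)
    }

    // creatorNode decides whether the current state should be replaced or
    // deposed based on the destroyer's effective CreateBeforeDestroy value.
    type creatorNode struct {
        destroyer *destroyerNode
    }

    func (c *creatorNode) AttachDestroyNode(d *destroyerNode) { c.destroyer = d }

    func (c *creatorNode) deposeCurrentState() bool {
        return c.destroyer != nil && c.destroyer.CreateBeforeDestroy()
    }

    func main() {
        var n graphNodeAttachDestroyer = &creatorNode{}
        n.AttachDestroyNode(&destroyerNode{createBeforeDestroy: true})
        fmt.Println(n.(*creatorNode).deposeCurrentState()) // true: depose rather than replace
    }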
The earlier change 5f07201a made it so that the state is always rewritten
by EvalDiffDestroy, but that was too disruptive to other users of
EvalDiffDestroy.
Now we follow the lead of EvalDiff and have a separate pointer for the
_output_ state, which allows the caller to opt in to having its state
pointer updated to reflect the new (nil) state.
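The opt-in pattern might look roughly like the sketch below, using simplified stand-ins for the real EvalNode types:

    package main

    import "fmt"

    type instanceState struct{ ID string }

    // evalDiffDestroy plans the destruction of an instance. Output is
    // optional: only callers that set it have their state pointer updated to
    // reflect the new (nil) state.
    type evalDiffDestroy struct {
        State  **instanceState // input: pointer to the caller's current state
        Output **instanceState // optional output for the new (nil) state
    }

    func (n *evalDiffDestroy) Eval() {
        // ... build the destroy diff from *n.State here ...
        if n.Output != nil {
            *n.Output = nil
        }
    }

    func main() {
        // A caller that opts in sees its state pointer become nil.
        state := &instanceState{ID: "i-abc123"}
        optIn := &evalDiffDestroy{State: &state, Output: &state}
        optIn.Eval()
        fmt.Println(state == nil) // true

        // A caller that doesn't opt in keeps its existing state untouched.
        other := &instanceState{ID: "i-def456"}
        noOpt := &evalDiffDestroy{State: &other}
        noOpt.Eval()
        fmt.Println(other.ID) // i-def456
    }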
NodePlannableResourceInstanceOrphan is the only caller that currently opts
in to this, since that was the focus of 5f07201a. We may need to make a
similar change to other plannable resource destroy nodes, but we'll wait
to see if that needs to be done in a subsequent commit.
The TestApplyGraphBuilder_doubleCBD fixture was updated incorrectly with
a cycle in the desired output. The test matches once the expected string
is fixed.
Now that core has access to the provider configuration schema, our input
logic can be implemented entirely within Context.Input, removing the need
to execute a full graph walk to gather input.
This commit replaces the graph walk with a direct visit to the provider
configurations (explicit and implied) in the root module, using the
schema to prompt.
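Roughly speaking, the prompting now has the shape of the sketch below; the types and helper names here are simplified stand-ins for the real Context.Input logic, shown only to illustrate asking for required attributes from a provider's schema:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    type attrSchema struct {
        Name     string
        Required bool
    }

    // promptProviderInput asks only for required attributes that the
    // configuration doesn't already set.
    func promptProviderInput(provider string, schema []attrSchema, configured map[string]string) map[string]string {
        in := bufio.NewReader(os.Stdin)
        vals := map[string]string{}
        for k, v := range configured {
            vals[k] = v
        }
        for _, attr := range schema {
            if _, ok := vals[attr.Name]; ok || !attr.Required {
                continue
            }
            fmt.Printf("provider.%s.%s: ", provider, attr.Name)
            line, _ := in.ReadString('\n')
            vals[attr.Name] = strings.TrimSpace(line)
        }
        return vals
    }

    func main() {
        // Example: an "aws" provider whose schema requires only a region.
        schema := []attrSchema{{Name: "region", Required: true}, {Name: "profile"}}
        fmt.Println(promptProviderInput("aws", schema, map[string]string{}))
    }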
The code to manage the input graph walk is not yet removed by this commit,
and will be cleaned up in a subsequent commit once we've made sure there
aren't any other callers/tests depending on parts of it.
It was incorrect to use a type switch to detect the optional schema
attachment interfaces, because they are not mutually-exclusive: resource
nodes implement both GraphNodeAttachResourceSchema and
GraphNodeAttachProvisionerSchema.
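To illustrate why this matters, compare a type switch against independent type assertions; the interfaces and node type below are hypothetical stand-ins:

    package main

    import "fmt"

    type attachResourceSchema interface{ AttachResourceSchema(s string) }
    type attachProvisionerSchema interface{ AttachProvisionerSchema(s string) }

    // resourceNode implements both optional attachment interfaces.
    type resourceNode struct{ resource, provisioner string }

    func (n *resourceNode) AttachResourceSchema(s string)    { n.resource = s }
    func (n *resourceNode) AttachProvisionerSchema(s string) { n.provisioner = s }

    func attachWithTypeSwitch(v interface{}) {
        // Buggy: only the first matching case runs, so a node implementing
        // both interfaces never gets its provisioner schema attached.
        switch tv := v.(type) {
        case attachResourceSchema:
            tv.AttachResourceSchema("resource schema")
        case attachProvisionerSchema:
            tv.AttachProvisionerSchema("provisioner schema")
        }
    }

    func attachWithAssertions(v interface{}) {
        // Correct: each optional interface is checked independently.
        if tv, ok := v.(attachResourceSchema); ok {
            tv.AttachResourceSchema("resource schema")
        }
        if tv, ok := v.(attachProvisionerSchema); ok {
            tv.AttachProvisionerSchema("provisioner schema")
        }
    }

    func main() {
        a := &resourceNode{}
        attachWithTypeSwitch(a)
        fmt.Printf("type switch: %+v\n", *a) // provisioner schema missing

        b := &resourceNode{}
        attachWithAssertions(b)
        fmt.Printf("assertions:  %+v\n", *b) // both schemas attached
    }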
This fixes a number of test regressions around dependency analysis in
"provisioner" blocks.
In #14526 we fixed a sticky edge-case where a resource with count = 0 set
won't create its containing module state on apply, and thus when another
expression refers to it we need to deal with that absence.
The original bug fixed by #14526 was actually a nil dereference panic in
this case. Our new HCL2-oriented expression evaluation codepath was, on
the other hand, correctly checking for the nil, but was not taking the
correct action in response to it, leading to the result being an
unexpected unknown value.
Here we replicate the fix to #14526 by behaving as if there are just no
instances present in this case. We achieve this in a slightly different
way here by just creating an empty ModuleState, but the effect is the
same as #14526.
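A minimal sketch of that substitution, with simplified stand-in types:

    package main

    import "fmt"

    type moduleState struct {
        Resources map[string][]string // resource name -> instance IDs
    }

    func instancesOf(ms *moduleState, name string) []string {
        if ms == nil {
            // A resource with count = 0 may never have created its containing
            // module state; treat that like an empty module state so callers
            // see "no instances" rather than an unexpected unknown value.
            ms = &moduleState{Resources: map[string][]string{}}
        }
        return ms.Resources[name]
    }

    func main() {
        fmt.Println(len(instancesOf(nil, "aws_instance.foo"))) // 0
    }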
This fixes TestContext2Apply_multiVarMissingState.
While we're planning we must always update the state with the proposed new
data resulting from the plan. In this case, we must record that the
orphan instance doesn't exist at all in the proposed new state by storing
its state as nil.
This in turn allows references to the containing resource to evaluate
properly, using the new updated resource count. This fixes
TestContext2Apply_multiVarCountDec.
This also includes a number of changes to the test output of
TestContext2Apply_multiVarCountDec that make it easier to debug failures.
Both ProviderTransformer and ReferenceTransformer need schema information,
and so there's a chicken-and-egg problem here where previously the schemas
were not getting attached to provider nodes created during
ProviderTransformer.
As a stop-gap measure for now we'll just run AttachSchemaTransformer
twice, so we can catch any new nodes created during the provider
transforms.
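Conceptually the stop-gap ordering looks like the sketch below, with the schema-attach step appearing both before and after the provider transforms; the step names are placeholders, not the real transformer types:

    package main

    import "fmt"

    type transformStep func(nodes []string) []string

    func main() {
        attachSchemas := func(nodes []string) []string {
            fmt.Println("attach schemas to:", nodes)
            return nodes
        }
        addProviders := func(nodes []string) []string {
            // The provider transforms may introduce provider nodes that did
            // not exist during the first schema-attach pass.
            return append(nodes, `provider "aws"`)
        }

        steps := []transformStep{
            attachSchemas, // first pass: nodes already in the graph
            addProviders,
            attachSchemas, // second pass: catch the newly created nodes
        }

        nodes := []string{"aws_instance.foo"}
        for _, step := range steps {
            nodes = step(nodes)
        }
    }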
Previously we fetched schemas during the AttachSchemaTransformer,
potentially multiple times as that was re-run for each graph built. Now
we fetch the schemas just once during context construction, passing that
result into each of the graph builders.
This only addresses the schema accesses during graph construction. We're
still separately loading schemas during the main walk for evaluation
purposes. This will be addressed in a later commit.
An aliased provider should not be automatically inherited, nor
implicitly instantiated in a module. This test should not have
previously passed.
Add a proxy provider block to the module and update the provider to
match the schema.
The state after EvalReadDataDiff is no longer nil during plan, which
means that we can't use that as a proxy for requiring the diff.
Rather than exiting early to save the EvalWriteState and EvalWriteDiff
evaluations, continue normally regardless to ensure we have the latest
diff and state after the plan. This also aligns the data source handling
with that of the managed resource.
The Provider field in ResourceState is now required, whereas before it
could be omitted and have Terraform try to discover a fitting provider
configuration automatically.
The automatic behavior was a compatibility shim added in v0.11 to support
states from prior versions without an explicit migration, but for v0.12
we will have a migration to our new state format anyway and so we will
fix this up during that migration pass.
This comprehensive test was covering a few different behaviors that are
intentionally different for v0.12:
- Applying the splat operator to a list of resource instances that haven't
been created yet produces a list of unknown values rather than a single
unknown list as before. This is important because it allows that list
to be passed into length().
- Wrapping a splat expression in another round of brackets now produces
a list of lists, whereas before we had a special case (for compatibility
with versions prior to v0.10) that would flatten this away in the schema layer.
Previously we would just retain an empty InstanceState in this case, but
now that we must enumerate all of the available instances during
expression evaluation it's important that we be able to recognize
instances that have been deleted.
Because we currently rely on the ReferenceTransformer to introduce the
necessary edges between local/output values and resource destroy nodes, we
must include the destroy phase of any resource we depend on in the
references of these.
This works in conjunction with the changes in the prior commit to restore
correct handling of dependencies for local and output values during
destroy.
With the current design, several seemingly-separate parts of the code must
all coincidentally agree with one another for destroy edges to be created
properly, which makes this code very hard to maintain. In future we should
refactor this so that ReferenceTransformer doesn't create edges for
destroy nodes at all, and have _all_ destroy edges (including
create_before_destroy) be dealt with in the single DestroyEdgeTransformer,
where they can be maintained and unit tested together.
Prior to the introduction of our "addrs" package, we represented destroy
nodes as a special kind of address string ending in ".destroy" or
".destroy-cbd".
Using references to resolve these dependencies is a strange idea to begin
with, since these are not user-visible addresses, but rather than refactor
that now we instead have these weird pseudo-address types ResourcePhase
and ResourceInstancePhase that correspond to those weird address suffixes,
thus restoring the prior behavior.
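A minimal sketch of the pseudo-address idea (a simplified stand-in, not the actual ResourcePhase/ResourceInstancePhase implementation):

    package main

    import "fmt"

    type resourcePhase int

    const (
        phaseDestroy resourcePhase = iota
        phaseDestroyCBD
    )

    func (p resourcePhase) String() string {
        if p == phaseDestroyCBD {
            return "destroy-cbd"
        }
        return "destroy"
    }

    // resourceInstancePhase pairs a user-visible resource instance address
    // with an internal-only phase, so reference analysis can still target
    // destroy nodes even though the suffix is not a real user-facing address.
    type resourceInstancePhase struct {
        instanceAddr string // e.g. "aws_instance.foo[0]"
        phase        resourcePhase
    }

    func (r resourceInstancePhase) String() string {
        return r.instanceAddr + "." + r.phase.String()
    }

    func main() {
        ref := resourceInstancePhase{instanceAddr: "aws_instance.foo[0]", phase: phaseDestroyCBD}
        fmt.Println(ref) // aws_instance.foo[0].destroy-cbd
    }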
In future we should rework this so that destroy node edges are not handled
as references at all, and instead handled as part of
DestroyEdgeTransformer where there's better context for implementing this
logic and it can be maintained and tested in a single place.
The old testApplyFn would overwrite ID with "foo" in all cases where there
wasn't a diff, which made the test fixtures harder to reason about. If
there's an ID, keep it the same.
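Roughly, the behavior change is the one sketched below, using heavily simplified stand-in types rather than the real mock provider:

    package main

    import "fmt"

    type instanceState struct{ ID string }

    func testApplyFn(s *instanceState) *instanceState {
        result := &instanceState{ID: s.ID}
        if result.ID == "" {
            // Only invent an ID for instances that don't have one yet, rather
            // than unconditionally overwriting it with "foo".
            result.ID = "foo"
        }
        // ... apply the rest of the diff to the attributes here ...
        return result
    }

    func main() {
        fmt.Println(testApplyFn(&instanceState{ID: "existing"}).ID) // existing
        fmt.Println(testApplyFn(&instanceState{}).ID)               // foo
    }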
The initial destroyer map is constructed using DestroyAddr(), which
returns resource instance addresses, but we were then going on to _use_
that map with resource addresses, which means the keys can't match when
indexed instances are being destroyed.
Now we'll use resource instance addresses in all cases.
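A small sketch of the key mismatch, using illustrative address strings:

    package main

    import "fmt"

    func main() {
        // Map built from DestroyAddr(): keyed by resource *instance* address.
        destroyers := map[string][]string{
            "aws_instance.foo[0]": {"destroy node for aws_instance.foo[0]"},
        }

        // Buggy lookup with a whole-resource address never matches an
        // indexed instance key.
        fmt.Println(destroyers["aws_instance.foo"]) // []

        // Fixed lookup uses the resource instance address consistently.
        fmt.Println(destroyers["aws_instance.foo[0]"])
    }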
This also includes some additional logging that was helpful in debugging
this issue.