The old testDiffFn used the raw config to dynamically set computed values
in the diff. Since the schema now defines what values should be there,
all test diffs end up with unknown computed values. Filter these out by
looking for a value set to "compute".
Previously unknown values were round-tripping through flatmap and coming
out as known strings containing the UnknownVariableValue. (The classic bug
that, ironically, was one of the big reasons to write cty!)
Now we properly handle unknown values in both directions: going into
flatmap we write UnknownVariableValue at the appropriate key (as the count
for sequences or maps) and then coming out of flatmap we turn
UnknownVariableValue back into a cty unknown value of the requested type.
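As a minimal sketch of the convention (using the go-cty API; the sentinel
constant stands in for Terraform's UnknownVariableValue and the helper
names are hypothetical):

    package flatmapsketch

    import (
        "strconv"

        "github.com/zclconf/go-cty/cty"
    )

    // unknownVariableValue stands in for the UnknownVariableValue sentinel.
    const unknownVariableValue = "74D93920-ED26-11E3-AC10-0800200C9A66"

    // writeListCount records the count key for a list attribute, writing
    // the sentinel when the whole list is unknown.
    func writeListCount(m map[string]string, key string, val cty.Value) {
        if !val.IsKnown() {
            m[key+".#"] = unknownVariableValue
            return
        }
        m[key+".#"] = strconv.Itoa(val.LengthInt())
        // ...individual elements would be flattened here...
    }

    // readList turns the sentinel back into a cty unknown of the requested
    // element type.
    func readList(m map[string]string, key string, ety cty.Type) cty.Value {
        if m[key+".#"] == unknownVariableValue {
            return cty.UnknownVal(cty.List(ety))
        }
        // ...a known list would be rebuilt from its elements here...
        return cty.ListValEmpty(ety)
    }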
This was assuming our old practice of a slice starting with the string
"root". We'll normalize here and then stringify the result to ensure that
we get a string consistent with what's used elsewhere.
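A hedged sketch of that normalization (the helper name is hypothetical;
the stringification itself should reuse whatever address formatting is
used elsewhere):

    // normalizeLegacyModulePath drops the legacy leading "root" element,
    // if present, so old-style and new-style paths stringify the same way.
    func normalizeLegacyModulePath(path []string) []string {
        if len(path) > 0 && path[0] == "root" {
            return path[1:]
        }
        return path
    }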
This is primarily aimed at fixing some of the context plan tests.
Most changes here just introduce some custom schema into the test mocks.
In some cases, a fixture is lightly updated to more modern assumptions.
The test for accessing count.index in a resource block without count set
is removed, because that is no longer valid under the new language
implementation.
This problem should now be caught at validate time rather than plan time,
because we can use the schema to detect the problem before the resource
has been resolved.
The evaluation data source was using a provider configuration address
guessed from configuration in this case, but that isn't necessarily
correct, since the resource might actually be associated with a config
inherited from a parent module.
We still need to retain that fallback to config because we are sometimes
asked to evaluate when state is incomplete (like in "terraform console"),
but if possible we'll take the stored provider address from the state
and use that, even if the resource is otherwise "pending".
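A hedged sketch of that preference (all names here are hypothetical):

    // Prefer the provider address recorded with the resource in state, and
    // fall back to the address guessed from configuration only when the
    // state doesn't have one yet (e.g. during "terraform console").
    providerAddr := addrGuessedFromConfig
    if storedAddr, ok := providerAddrFromState(rs); ok {
        providerAddr = storedAddr
    }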
Destroy nodes were being referenced by their regular paths, which was
causing cycles in the graphs. Destroy nodes can't be referenced directly
in any way, so we override the inherited method that reports a
referenceable address.
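A hedged sketch of the override (the method name follows Terraform's
graph-node conventions but is shown only illustratively):

    // ReferenceableAddrs reports nothing for a destroy node, so no other
    // node can form a reference (and thus an edge) to it by address.
    func (n *NodeDestroyResource) ReferenceableAddrs() []addrs.Referenceable {
        return nil
    }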
The id field is always computed, and never directly related to a diff.
Since that field is being converted to a regular schema attribute, we
need to handle the behavior in the mocks too.
These implementations are adaptations of the existing implementations in
config/interpolate_funcs.go, updated to work with the cty API.
The set of functions chosen here was motivated mainly by what Terraform's
existing context tests depend on, so we can get the context tests back
into good shape before fleshing out the rest of these functions.
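For illustration, a function under the cty API has roughly this shape (the
"upper" example here is just a sketch, not necessarily one of the
functions ported in this change):

    package funcs

    import (
        "strings"

        "github.com/zclconf/go-cty/cty"
        "github.com/zclconf/go-cty/cty/function"
    )

    // UpperFunc converts its string argument to uppercase. Unknown
    // arguments are handled by the function machinery itself, which
    // short-circuits to an unknown result before Impl is called.
    var UpperFunc = function.New(&function.Spec{
        Params: []function.Parameter{
            {
                Name: "str",
                Type: cty.String,
            },
        },
        Type: function.StaticReturnType(cty.String),
        Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
            return cty.StringVal(strings.ToUpper(args[0].AsString())), nil
        },
    })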
Mostly this is about updating ctx.Plan callers to expect diags instead of
err, but also includes a few light updates to test fixtures, and a fix to
testModuleInline.
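The typical caller-side change looks roughly like this sketch:

    // Callers now receive diagnostics rather than a single error value.
    plan, diags := ctx.Plan()
    if diags.HasErrors() {
        t.Fatalf("unexpected errors: %s", diags.Err())
    }
    // ...assertions against plan follow...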
There are still some other issues with some of these tests right now, but
all the ones that need to have schema should now have it.
It seems that there is a bug with the evaluation of child module input
variables where they can't find their schema even when a mock is provided.
Will attack this in a subsequent commit.
This should actually have been caught by !val.IsWhollyKnown, since
DynamicVal is always unknown, but something isn't working quite right here
and so for now we'll redundantly check also if it's of the dynamic
pseudo-type, and then revisit cty later to see if there's a real bug
hiding down there.
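A minimal sketch of the combined check, where val is a cty.Value:

    // The type check is redundant in principle, since DynamicVal is always
    // unknown, but it's kept as a belt-and-braces measure for now.
    if !val.IsWhollyKnown() || val.Type().Equals(cty.DynamicPseudoType) {
        // ...treat the value as not yet resolvable...
    }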
The recent changes for v0.12 have moved the responsibilities around here
so that it's the caller's responsibility to specify the provider address
in all cases, with the real UI (in the "command" package) providing a
suitable default if nothing is specified.
Therefore the tests at _this_ level must all include an explicit provider
address, since these tests are acting as if they _are_ the UI code.
The approach here is a little hacky, since this edge case applies only to
validate and all of the other evaluateResourceCountExpression callers
don't care about it: we overload the "count" return value as a flag that
allows NodeValidatableResource to detect this situation and silently
ignore errors in this case.
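A hedged illustration of that overloading (the sentinel and names are
hypothetical):

    count, countDiags := evaluateResourceCountExpression(n.Config.Count, ctx)
    if count < 0 {
        // Sentinel: the count value can't be resolved yet. During validate
        // this is expected, so discard the diagnostics rather than failing.
        countDiags = nil
    }
    diags = diags.Append(countDiags)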
We were previously doing this for all of the reference types except this
one. Now we do it for resources and resource instances too, which both
allows us to produce a proper error message when one is missing (rather
than returning an unknown value) and allows us to properly handle the
case where there are no instances yet present in the state (e.g. because
we're in the validate walk) but "count" isn't set, and so a single
unknown value is expected rather than an empty tuple.
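A minimal sketch of that distinction, with hypothetical variable names:

    if rs == nil { // no instances of this resource in state yet
        if config.Count == nil {
            // No "count" argument: a single object is expected, so return
            // a placeholder unknown value of the resource's object type.
            return cty.UnknownVal(schema.ImpliedType())
        }
        // "count" is set but no instances exist yet: an empty tuple is valid.
        return cty.EmptyTupleVal
    }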
Previously InitProvider was incorrectly using only the relative address,
which (due to the ambiguity in the string representation of absolute vs.
relative addresses) caused it to always initialize providers in the root
module.
Now we use the absolute address as the key, which then agrees with the
Provider method and ensures that each module gets its own separate
instance of each provider if explicit configuration is present.
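A hedged sketch of the keying fix (names are approximate):

    // Key the cache by the absolute address string, which includes the
    // module path, so InitProvider and Provider agree and each module gets
    // its own instance.
    key := absAddr.String() // e.g. "module.child.provider.aws", not just "provider.aws"
    ctx.providerCache[key] = provider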
This should never happen in real code, but it comes up a lot in test code
where incomplete mock schemas are being used to test with very simple
configurations.
Previously we were skipping all of the validation steps if a provider was
being configured implicitly, and thus had no block in configuration.
This is incorrect, since a provider must still get an opportunity to
configure itself with an empty configuration and possibly reject that
empty configuration with errors.
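A hedged sketch of the resulting behavior (validateProviderConfig is a
hypothetical helper; hcl.EmptyBody is the real HCL API):

    // Even when there is no provider block, validate an empty configuration
    // so the provider can reject it if it requires arguments.
    var configBody hcl.Body = hcl.EmptyBody()
    if n.Config != nil {
        configBody = n.Config.Config
    }
    diags = diags.Append(validateProviderConfig(provider, configBody))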
EvalValidateSelfRef needs schema in order to extract references. It was
previously expecting a *configschema.Block directly, but we weren't
actually passing that in from anywhere except the tests because it's not
available directly in that form during the evaltree for
node_resource_validate.
Instead, we now pass in the whole *ProviderSchema for the associated
provider and have this EvalNode find the schema itself based on the
address. This breaks some of the generality of this node (now only really
works for resource addresses) but that's okay since we have no other
use-case right now anyway.
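A minimal sketch of that lookup, assuming the usual shape of
*ProviderSchema:

    var schema *configschema.Block
    switch addr.Mode {
    case addrs.ManagedResourceMode:
        schema = providerSchema.ResourceTypes[addr.Type]
    case addrs.DataResourceMode:
        schema = providerSchema.DataSources[addr.Type]
    }
    if schema == nil {
        // unknown resource type for this provider; report an error
    }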
Our old state format requires all primitive values to be strings. We were
trying to enforce that before, but this didn't work properly because
gocty does not perform automatic type conversions.
Instead, we now convert the value to a cty string and then convert that
result into a native Go string.
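A minimal sketch of the two-step conversion, where convert is the
github.com/zclconf/go-cty/cty/convert package:

    // primitiveValueString produces the string form required by the legacy
    // state format. (Null and unknown values need separate handling before
    // calling AsString.)
    func primitiveValueString(val cty.Value) (string, error) {
        strVal, err := convert.Convert(val, cty.String)
        if err != nil {
            return "", err
        }
        return strVal.AsString(), nil
    }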
At the moment this must be handled as a special case because we're still
using the old representation of output state, but we do still need to
handle this so that unknown values can properly pass between modules
during validate and plan.
Since our ignoring is now implemented in terms of cty objects and HCL
traversals, rather than flatmap keys, it no longer makes sense to test
for the flatmap detail of keeping the map count in a separate key.
It _does_ make sense to ignore an entire block or map, but that's already
covered by another existing test for just []string{"resource"} above.