This problem should now be caught at validate time rather than plan time,
because we can use the schema to detect the problem before the resource
has been resolved.
The evaluate data source was using a provider configuration address guessed
from the configuration in this case, but that isn't necessarily correct
since the resource might actually be associated with a configuration
inherited from a parent module.
We still need to retain that fallback to config because we are sometimes
asked to evaluate when state is incomplete (like in "terraform console"),
but if possible we'll take the stored provider address from the state
and use that, even if the resource is otherwise "pending".
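As a rough sketch of that preference order (illustrative names only, not the
real evaluator code):

    // Sketch only: simplified stand-in types, not Terraform's real ones.
    type resourceState struct {
        ProviderAddr string // absolute provider config address recorded in state
    }

    // providerAddrForResource prefers the address recorded in state and
    // falls back to the config-derived guess only when state has nothing
    // for this resource, e.g. during "terraform console".
    func providerAddrForResource(rs *resourceState, configGuess string) string {
        if rs != nil && rs.ProviderAddr != "" {
            return rs.ProviderAddr
        }
        return configGuess
    }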
Destroy nodes were being referenced by their regular paths, which was
causing cycles in the graphs. Destroy nodes can't be referenced directly
in any way, so override the inherited method for a referenceable address.
The id field is always computed, and never directly related to a diff.
Since that field is being converted to a regular schema attribute, we
need to handle the behavior in the mocks too.
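For illustration, a mock resource type schema along these lines (a
hypothetical example, assuming the configschema and cty packages) now needs
to declare id as an ordinary computed attribute:

    // Hypothetical mock schema: "id" is now just a computed string attribute.
    testResourceSchema := &configschema.Block{
        Attributes: map[string]*configschema.Attribute{
            "id": {
                Type:     cty.String,
                Computed: true, // always computed; never set from config or diffed directly
            },
            "ami": {
                Type:     cty.String,
                Optional: true,
            },
        },
    }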
These implementations are adaptations of the existing implementations in
config/interpolate_funcs.go, updated to work with the cty API.
The set of functions chosen here was motivated mainly by what Terraform's
existing context tests depend on, so we can get the context tests back
into good shape before fleshing out the rest of these functions.
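As a rough illustration of the shape these cty-based implementations take (a
sketch against the go-cty function API, with an illustrative package name,
not the actual Terraform code):

    package funcs

    import (
        "strings"

        "github.com/zclconf/go-cty/cty"
        "github.com/zclconf/go-cty/cty/function"
    )

    // UpperFunc sketches what a cty-based reimplementation of an
    // interpolation function looks like.
    var UpperFunc = function.New(&function.Spec{
        Params: []function.Parameter{
            {
                Name: "str",
                Type: cty.String,
            },
        },
        Type: function.StaticReturnType(cty.String),
        Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
            return cty.StringVal(strings.ToUpper(args[0].AsString())), nil
        },
    })

Calling UpperFunc.Call([]cty.Value{cty.StringVal("hello")}) would then yield
cty.StringVal("HELLO").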
Mostly this is about updating ctx.Plan callers to expect diags instead of
err, but also includes a few light updates to test fixtures, and a fix to
testModuleInline.
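The updated caller pattern in the tests then looks roughly like this
(sketch; exact assertions vary per test):

    // Before: _, err := ctx.Plan(); if err != nil { ... }
    // After (sketch): Plan returns diagnostics rather than a bare error.
    plan, diags := ctx.Plan()
    if diags.HasErrors() {
        t.Fatalf("unexpected errors: %s", diags.Err())
    }
    _ = plan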
There are still some other issues with some of these tests right now, but
all the ones that need to have schema should now have it.
It seems that there is a bug with the evaluation of child module input
variables where they can't find their schema even when a mock is provided.
Will attack this in a subsequent commit.
This should actually have been caught by !val.IsWhollyKnown, since
DynamicVal is always unknown, but something isn't working quite right here,
so for now we'll also redundantly check whether the value is of the dynamic
pseudo-type, and then revisit cty later to see if there's a real bug
hiding down there.
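In effect, the redundant check is (sketch):

    // cty.DynamicVal should already fail IsWhollyKnown, but we also check
    // the type explicitly for now.
    if val.Type() == cty.DynamicPseudoType || !val.IsWhollyKnown() {
        // treat the value as not yet known
    }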
The recent changes for v0.12 have moved the responsibilities around here
so that it's the caller's responsibility to specify the provider address
in all cases, with the real UI (in the "command" package) providing a
suitable default if nothing is specified.
Therefore the tests at _this_ level must all include an explicit provider
address, since these tests are acting as if they _are_ the UI code.
The approach here is a little hacky, since this edge case applies only to
validate and none of the other evaluateResourceCountExpression callers
care about it: we overload the "count" return value as a flag that lets
NodeValidatableResource detect this situation and silently ignore errors
in this case.
We were previously doing this for all of the reference types except this
one. Now we do it for resources and resource instances too, which both
allows us to produce a proper error message when one is missing (rather
than returning an unknown value) and allows us to properly handle the
case where there are no instances yet present in the state (e.g. because
we're in the validate walk) but "count" isn't set, and so a single
unknown value is expected rather than an empty tuple.
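Sketched in simplified form (illustrative names, not the real evaluator
code), the no-instances behavior described above is:

    // instances: instance objects currently present in the state.
    // hasCount:  whether "count" is set for this resource in configuration.
    // schema:    the resource type schema.
    if len(instances) == 0 {
        if hasCount {
            // "count" is set but no instances exist yet; callers see a tuple.
            return cty.EmptyTupleVal, nil
        }
        // No "count" and no instance yet: a single unknown object of the
        // resource type's implied type, rather than an empty tuple.
        return cty.UnknownVal(schema.ImpliedType()), nil
    }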
Previously InitProvider was incorrectly using only the relative address,
which (due to the ambiguity in the string representation of absolute vs.
relative addresses) caused it to always initialize providers in the root
module.
Now we use the absolute address as the key, which then agrees with the
Provider method and ensures that each module gets its own separate
instance of each provider if explicit configuration is present.
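A minimal sketch of the keying change (illustrative names, not the real
EvalContext code):

    // Before: keyed by the relative address, so "provider.aws" in every
    // module collided on a single cache entry.
    //     key := relAddr.String() // "provider.aws"
    // After: keyed by the absolute address, matching the Provider method.
    key := absAddr.String() // e.g. "module.child.provider.aws"
    providerCache[key] = providerInstance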
This should never happen in real code, but it comes up a lot in test code
where incomplete mock schemas are being used to test with very simple
configurations.
Previously we were skipping all of the validation steps if a provider was
being configured implicitly, and thus had no block in configuration.
This is incorrect, since a provider must still get an opportunity to
configure itself with an empty configuration and possibly reject that
empty configuration with errors.
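In other words (a sketch assuming hcl.EmptyBody; validateProviderConfigBody
here is a hypothetical helper), a provider with no block still gets
validated against an empty configuration:

    configBody := providerConfigBody
    if configBody == nil {
        // Implicit provider with no block in configuration: still validate
        // and configure it, against an empty configuration body.
        configBody = hcl.EmptyBody()
    }
    // validateProviderConfigBody stands in for the real validation steps.
    diags := validateProviderConfigBody(provider, configBody)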
EvalValidateSelfRef needs schema in order to extract references. It was
previously expecting a *configschema.Block directly, but we weren't
actually passing that in from anywhere except the tests because it's not
available directly in that form during the evaltree for
node_resource_validate.
Instead, we now pass in the whole *ProviderSchema for the associated
provider and have this EvalNode find the schema itself based on the
address. This breaks some of the generality of this node (now only really
works for resource addresses) but that's okay since we have no other
use-case right now anyway.
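The lookup amounts to roughly the following (a sketch; the real
EvalValidateSelfRef carries more context than this):

    // Pick the right schema out of the provider schema based on the
    // resource mode and type in the address.
    var schema *configschema.Block
    switch addr.Mode {
    case addrs.ManagedResourceMode:
        schema = providerSchema.ResourceTypes[addr.Type]
    case addrs.DataResourceMode:
        schema = providerSchema.DataSources[addr.Type]
    }
    if schema == nil {
        // No schema for this resource type, so nothing to extract
        // references from.
        return nil, nil
    }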
Our old state format requires all primitive values to be strings. We were
trying to enforce that before, but this didn't work properly because
gocty does not perform automatic type conversions.
Instead, we now convert the value to a cty string first and then convert
the result to a native Go string.
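The conversion now goes through cty's conversion layer first, roughly like
this (a sketch assuming the cty convert package):

    // Let cty's conversion layer do the work that gocty won't do implicitly,
    // then take the resulting value as a native Go string.
    strVal, err := convert.Convert(val, cty.String)
    if err != nil {
        return "", err
    }
    return strVal.AsString(), nil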
At the moment this must be handled as a special case because we're still
using the old representation of output state, but we do still need to
handle this so that unknown values can properly pass between modules
during validate and plan.
Since our ignore handling is now implemented in terms of cty objects and HCL
traversals, rather than flatmap keys, it no longer makes sense to test
for the flatmap detail of keeping the map count in a separate key.
It _does_ make sense to ignore an entire block or map, but that's already
covered by an existing test for just []string{"resource"} above.
This was incorrectly comparing a cty.Value to an hcl.Body. Now we decode
the body first so that we're comparing two cty.Value values.
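A sketch of the corrected comparison, assuming an hcldec spec for the block
(names here are illustrative):

    // Decode the body into a cty.Value first, then compare values.
    got, diags := hcldec.Decode(body, blockSpec, evalCtx)
    if diags.HasErrors() {
        return diags
    }
    if !got.RawEquals(want) {
        // the decoded configuration doesn't match the expected value
    }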
Also includes a fix to a stale comment in buildProviderConfig that was no
longer accurate.
The interface of this eval node has changed for v0.12, now requiring both
a provider address and the actual provider object.
We also need to give it a working ctx.EvalBlock implementation on the
mock EvalContext, so we just use installSimpleEval here to get our simple
implementation that just knows how to evaluate constant expressions.
An instance like aws_instance.foo[0] is not permitted to refer to
aws_instance.foo, since that result contains the individual instance along
with all other instances.
A schema is now required for any validation, so these tests now use the
simpleMockProvider function to produce a provider with a simple schema
already configured, and test against that schema.
EvaluateBlock and EvaluateExpr are often called multiple times in a single
operation, and our usual approach of mocking with static values is a poor
fit for those cases.
To accommodate more complex tests, we allow the test to optionally provide
a callback function to use instead of the static return values.
Since a pretty common need is to just evaluate the given block or
expression in a simple way, we also now have a helper method
installSimpleEval that installs reasonable implementations of these
two methods that can (assuming no other customization) just evaluate
constant expressions, which is sufficient for many tests.
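The resulting mock shape is roughly as follows (a sketch with illustrative
names and a simplified signature; the real MockEvalContext differs in
detail):

    // Static results by default, with an optional callback for tests that
    // need a different result on each call.
    type mockEvalContext struct {
        EvaluateExprResult cty.Value
        EvaluateExprFn     func(expr hcl.Expression) (cty.Value, tfdiags.Diagnostics)
    }

    func (c *mockEvalContext) EvaluateExpr(expr hcl.Expression) (cty.Value, tfdiags.Diagnostics) {
        if c.EvaluateExprFn != nil {
            // Callback provided by the test: use it for complex cases.
            return c.EvaluateExprFn(expr)
        }
        // Default: a single static result, as before.
        return c.EvaluateExprResult, nil
    }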
This is a variant of diagnosticsAsError that we use to signal to informed
callers that there might just be warnings inside, but which still does
the right thing if a caller just appends it to an existing set of
diagnostics without checking first.
The config loader already extracts this during its initial pass and saves
it as a separate field in a configs.Connection value, so requiring it
again here causes confusing errors about this attribute being provided but
not set.
These are some remnants of the shadow graph functionality that was added
to support the graph builder changes in v0.8, but that has since been
removed and so there are no remaining callers for these types and
functions.
The contract for this function has changed as part of the 0.12
reorganization so that, while before it would accept a partial variables
map if all missing variables were optional, it now expects that the caller
has already collected values for all variables and is just checking them
for type-correctness here.
Although this rarely matters, making it always be nil when empty makes
deep assertions simpler in tests.
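Concretely, the normalization is just (sketch):

    // Normalize an empty map to nil so deep-equality assertions in tests
    // don't need to distinguish nil from empty.
    if len(m) == 0 {
        m = nil
    }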
This also includes a minor update to the test in the core package that
first encountered this problem, to improve the quality of its output
on failure.