This should never happen in real code, but it comes up a lot in test code
where incomplete mock schemas are being used to test with very simple
configurations.
Previously we were skipping all of the validation steps if a provider was
being configured implicitly, and thus had no block in configuration.
This is incorrect, since a provider must still get an opportunity to
configure itself with an empty configuration and possibly reject that
empty configuration with errors.
EvalValidateSelfRef needs schema in order to extract references. It was
previously expecting a *configschema.Block directly, but we weren't
actually passing that in from anywhere except the tests because it's not
available directly in that form during the evaltree for
node_resource_validate.
Instead, we now pass in the whole *ProviderSchema for the associated
provider and have this EvalNode find the schema itself based on the
address. This breaks some of the generality of this node (now only really
works for resource addresses) but that's okay since we have no other
use-case right now anyway.
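For illustration, the lookup this node now performs is roughly the
following, assuming ProviderSchema keeps per-mode maps keyed by resource
type name (the field and function names in this sketch are assumptions,
not a verbatim copy):

    // Roughly the schema lookup now done inside EvalValidateSelfRef,
    // sketched with assumed ProviderSchema field names.
    func schemaForResourceAddr(ps *ProviderSchema, addr addrs.Resource) *configschema.Block {
        switch addr.Mode {
        case addrs.ManagedResourceMode:
            return ps.ResourceTypes[addr.Type]
        case addrs.DataResourceMode:
            return ps.DataSources[addr.Type]
        default:
            // Only resource addresses are supported by this node now.
            return nil
        }
    }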
Our old state format requires all primitive values to be strings. We were
trying to enforce that before, but this didn't work properly because
gocty does not perform automatic type conversions.
Instead, we now convert the value to a cty string first and then convert
that result into a native Go string.
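As a standalone sketch of the two-step conversion (using only the public
cty packages rather than the exact code here):

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
        "github.com/zclconf/go-cty/cty/convert"
    )

    // primitiveToString converts a known, non-null primitive value to a
    // native Go string by first converting within the cty type system,
    // since gocty alone will not convert e.g. cty.Number to a string.
    func primitiveToString(v cty.Value) (string, error) {
        sv, err := convert.Convert(v, cty.String)
        if err != nil {
            return "", err
        }
        return sv.AsString(), nil
    }

    func main() {
        s, _ := primitiveToString(cty.NumberIntVal(5))
        fmt.Println(s) // "5"
    }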
At the moment this must be handled as a special case because we're still
using the old representation of output state, but we do still need to
handle this so that unknown values can properly pass between modules
during validate and plan.
Since our ignoring is now implemented in terms of cty objects and HCL
traversals, rather than flatmap keys, it no longer makes sense to test
for the flatmap detail of keeping the map count in a separate key.
It _does_ make sense to ignore an entire block or map, but that's already
covered by an existing test for just []string{"resource"} above.
This was incorrectly comparing a cty.Value to an hcl.Body. Now we decode
the body first so we can compare two cty.Value values.
Also includes a fix to a stale comment in buildProviderConfig that was no
longer accurate.
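The shape of the fix is roughly as follows, assuming the block's schema is
available as a configschema.Block so we can build a decoder spec from it
(this is a sketch, not the exact code):

    // Decode the hcl.Body into a cty.Value using the block's schema,
    // then compare like with like.
    got, diags := hcldec.Decode(configBody, schema.DecoderSpec(), nil)
    if diags.HasErrors() {
        return diags
    }
    if !got.RawEquals(want) {
        // the two configurations differ
    }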
The interface of this eval node has changed for v0.12, now requiring both
a provider address and the actual provider object.
We also need to give it a working ctx.EvalBlock implementation on the
mock EvalContext, so we just use installSimpleEval here to get our simple
implementation that just knows how to evaluate constant expressions.
An instance like aws_instance.foo[0] is not permitted to refer to
aws_instance.foo, since that result contains the individual instance along
with all other instances.
A schema is now required for any validation, so these tests now use the
simpleMockProvider function to produce a provider with a simple schema
already configured, and test against that schema.
EvaluateBlock and EvaluateExpr are often called multiple times in a single
operation, and our usual approach of mocking with static values is a poor
fit for those cases.
To accommodate more complex tests, we allow the test to optionally provide
a callback function to use instead of the static return values.
Since a pretty common need is to just evaluate the given block or
expression in a simple way, we also now have a helper method
installSimpleEval that installs reasonable implementations of these
two methods that can (assuming no other customization) just evaluate
constant expressions, which is sufficient for many tests.
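The pattern is roughly this; the field names and simplified signatures
here are illustrative rather than the exact mock API:

    // The static result is used unless a callback is installed.
    type MockEvalContext struct {
        EvaluateExprResult cty.Value
        EvaluateExprFn     func(expr hcl.Expression) (cty.Value, tfdiags.Diagnostics)
    }

    func (c *MockEvalContext) EvaluateExpr(expr hcl.Expression) (cty.Value, tfdiags.Diagnostics) {
        if c.EvaluateExprFn != nil {
            return c.EvaluateExprFn(expr) // per-call behavior for complex tests
        }
        return c.EvaluateExprResult, nil // simple static mocking
    }

    // installSimpleEval then just installs a callback that evaluates
    // constant expressions with no variables or functions in scope.
    func (c *MockEvalContext) installSimpleEval() {
        c.EvaluateExprFn = func(expr hcl.Expression) (cty.Value, tfdiags.Diagnostics) {
            v, hclDiags := expr.Value(nil)
            var diags tfdiags.Diagnostics
            diags = diags.Append(hclDiags)
            return v, diags
        }
    }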
This is a variant of diagnosticsAsError that we use to signal to informed
callers that there might just be warnings inside, but we should also do
the right thing if a caller just appends it to an existing diagnostics
without checking first.
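A hypothetical sketch of the shape of this (the type name and the append
helper are illustrative, not necessarily the real API):

    // NonFatalError wraps diagnostics that may contain only warnings.
    type NonFatalError struct {
        Diagnostics tfdiags.Diagnostics
    }

    func (e NonFatalError) Error() string {
        if len(e.Diagnostics) == 0 {
            return "no errors or warnings"
        }
        return fmt.Sprintf("%d problems (possibly just warnings)", len(e.Diagnostics))
    }

    // An informed append path can recognize the wrapper and merge the
    // individual diagnostics instead of flattening them into one error.
    func appendMaybeNonFatal(diags tfdiags.Diagnostics, err error) tfdiags.Diagnostics {
        if nf, ok := err.(NonFatalError); ok {
            return diags.Append(nf.Diagnostics)
        }
        return diags.Append(err)
    }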
The config loader already extracts this during its initial pass and saves
it as a separate field in a configs.Connection value, so requiring it
again here causes confusing errors about this attribute being provided but
not set.
These are some remnants of the shadow graph functionality that was added
to support the graph builder changes in v0.8, but that has since been
removed and so there are no remaining callers for these types and
functions.
The contract for this function has changed as part of the 0.12
reorganization so that, while before it would accept a partial variables
map if all missing variables were optional, it now expects that the caller
has already collected values for all variables and is just checking them
for type-correctness here.
Although this rarely matters, making it always be nil when empty makes
deep assertions simpler in tests.
This also includes a minor update to the test in the core package that
first encountered this problem, to improve the quality of its output
on failure.
These tests now need schema available to run properly, so rather than
crafting a separate schema for each of these we'll just use the one
created by simpleMockProvider, which contains a resource type called
test_object that we'll now use as the basis for all of the tests here.
In this codepath we're still using the old "internal" ResourceAddress
parser to parse resource addresses from the diff, but the string being
parsed does not include a module path and so the result is interpreted
as a resource in the root module.
We need to rewrite this to be in the correct module path as part of
converting to the new address representation, in order to get a correct
absolute address.
A trivial mistake in the rework of this function meant that it was just
discarding its result rather than returning it. It will now return its
result as expected, allowing reference analysis to work for this node
type.
The final "<resource> provided by <provider>" message was showing the
original selection made by the resource itself rather than the provider
address finally resolved after inheritance, which caused the log to
misrepresent what was actually being created in the graph.
Previously this was just stubbing out provider types, but we now need to
include schema for each of the providers and resource types the tests
use in order for the references to be properly detected.
The test fixtures are adjusted slightly here so we can use the
simpleTestSchema as the schema for all of the different blocks in these
tests. The relationships between the resources are still preserved, but
the attributes are renamed to comply with this schema.
Since expression evaluation is now done via an EvalContext method, and
since these tests use the MockEvalContext, we need to provide a value for
the mock call to return. Previously expression evaluation happened at a
different layer that was not mocked here.
These transformers both construct temporary graphs using many of the same
transformers used in the apply graph, and properly doing this now requires
access to the providers and provisioners in order to obtain their schemas.
Along with this, we also update the tests here to use the
simpleMockComponentFactory helper to get a mock provider with a schema
already configured, which means we also need to update the test fixtures
and assertions to use the resource type and attributes defined in that
mock factory.
Having a missing schema is a programming error, but at the time of this
commit we're in the midst of introducing schema all over Terraform and so
there are inevitably some places -- particularly in older unit tests --
where schema isn't yet being provided.
This error allows us to catch those cases and fail gracefully, rather
than panicking further down here when we access t.Components methods.
Also includes some additional logging to aid debugging of this
transformer.
Previously we were attempting to construct a default manually here, and
actually getting it wrong by using the resource type name as a whole
rather than the expected inference by prefix.
Now we use the method provided in the addrs package for this purpose,
which implements the standard behavior of taking the first
underscore-separated word of the resource type name as the provider type.
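Since the behavior is small, here it is standalone as a sketch (the
function name is just for illustration; the real helper lives in the
addrs package):

    package main

    import (
        "fmt"
        "strings"
    )

    // impliedProviderType returns everything before the first underscore
    // in the resource type name, or the whole name if there is none.
    func impliedProviderType(resourceType string) string {
        if under := strings.Index(resourceType, "_"); under != -1 {
            return resourceType[:under]
        }
        return resourceType
    }

    func main() {
        fmt.Println(impliedProviderType("aws_instance")) // "aws", not "aws_instance"
    }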
Now that any configuration processing requires schema, we need either a
standalone schema or a provider/provisioner configured with one a lot more
often in tests.
To avoid adding loads of extra boilerplate to the tests, these new helper
functions produce objects pre-configured with a schema that should be
useful for a number of different cases, and can be customized further for
more interesting situations.
A lot of our tests can then just use these pre-defined object and
attribute names in their fixtures in situations where the canned schemas
here are good enough.
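For illustration, the canned resource type schema is along these lines;
the attribute set shown is illustrative rather than a verbatim copy:

    // A resource type schema with a few generically-named attributes
    // that fixtures can use when the details don't matter.
    func simpleTestSchema() *configschema.Block {
        return &configschema.Block{
            Attributes: map[string]*configschema.Attribute{
                "test_string": {Type: cty.String, Optional: true},
                "test_number": {Type: cty.Number, Optional: true},
            },
        }
    }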
Our "Parse..." functions all take hcl.Traversal objects rather than strings,
assuming that in most cases we've already parsed a traversal out of some
larger construct (usually a config file) before interpreting it as an
address.
However, there are some situations -- particularly tests -- where being
able to easily parse a string directly is helpful. These new "Parse...Str"
functions all wrap the function of the same name with no "Str" suffix and
first parse the string with the HCL native syntax traversal parser.
As noted in the function doc comments, this should not be used in "real"
code except in exceptional circumstances, since the addresses and
diagnostics it produces do not carry useful source location information.
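The wrapping pattern is the same for each of these functions; here's a
sketch using ParseAbsResourceInstance as a representative wrapped
function, with the diagnostics handling simplified:

    func ParseAbsResourceInstanceStr(str string) (AbsResourceInstance, tfdiags.Diagnostics) {
        var diags tfdiags.Diagnostics

        // Parse the string as an HCL native syntax traversal first; any
        // source ranges will be relative to the string itself.
        traversal, parseDiags := hclsyntax.ParseTraversalAbs([]byte(str), "", hcl.Pos{Line: 1, Column: 1})
        diags = diags.Append(parseDiags)
        if parseDiags.HasErrors() {
            return AbsResourceInstance{}, diags
        }

        // Then delegate to the traversal-based function of the same name.
        addr, addrDiags := ParseAbsResourceInstance(traversal)
        diags = diags.Append(addrDiags)
        return addr, diags
    }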
The logic under test has intentionally changed here so that setting count
to any value -- even 1 -- causes Terraform to prefer to keep indexed
instances if both non-indexed and indexed are present.
Previously Terraform treated count = 1 as equivalent to count not set at
all, but we now recognize these as two different situations and treat
count = 1 as the same as any other non-zero count, to ensure that if
count is set conditionally it'll always produce indexed instances, even
if the dynamic expression ends up evaluating to 1.
This function was previously checking for a path length greater than one
because the older path format included an always present "root" element
at the start.
We now need to check for a totally-empty list, because otherwise we fail
to add the expected prefix to the front of a path with only one element.
This also includes some adjustments to the related tests and transforms
that do not change behavior but do make the test results easier to
understand and debug.
Due to some disagreement about what representation of provider addresses
we were using, the inherited providers map wasn't matching. Now we'll
consistently use the "compact" form (just the provider name and optional
alias).
Also includes some other tweaks to make this test better-behaved.
Although the AddModule method now takes a new-style module address as an
argument, internally we're still shimming it to a []string. Therefore this
test needs to still expect [][]string as a result, rather than
[]addrs.ModuleInstance.
Prior to the refactoring to move provider address parsing/selection up
into the frontend, there was some logic here to just-in-time default a
provider config based on the given resource type.
This is now expected to happen at a higher layer, with ImportTarget
expecting an already-valid provider configuration address.
The normal import codepath was already updated with this in mind, but some
of the provider transform tests are using ImportStateTransformer as a
shortcut for getting some resource nodes added to the graph, and so those
tests now need to include a valid provider address in their ImportTarget
values.
Also includes some adjustments to test output to make the tests easier
to debug.
The zero value of ProviderConfig is not a valid provider config address,
so we'll generate a special string for that in order to make that clear
in case one sneaks in somewhere. This can happen, for example, in the
core flow of resolving provider inheritance during the ProviderTransformer
if a caller attempts to access the resolved provider before that
transformer has run.
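For example, the String method can single out the zero value like this;
the exact placeholder text here is illustrative:

    func (pc ProviderConfig) String() string {
        if pc.Type == "" {
            // The zero value is not a valid address, so make that obvious
            // if one leaks into logs or error messages.
            return "provider.<invalid>"
        }
        if pc.Alias != "" {
            return fmt.Sprintf("provider.%s.%s", pc.Type, pc.Alias)
        }
        return "provider." + pc.Type
    }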
The prior implementation of this function (before the rewrite to use
addrs.AbsProviderConfig values) was relying on some implicit behavior of
our provider address normalization to generate an address in the root
module. Since that wasn't explicit, I introduced a bug here when
introducing the new address type, where I generated an address in the
node's own module, rather than in the root as expected.
This new implementation is functionally equivalent to the prior one, but
is
written to make the intent more obvious: take _just_ the type from the
node's provider address and create an implicit configuration for it
_in the root module_.
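In rough terms it now does the following; treat the exact helper and
field names as illustrative rather than exact:

    // Take just the provider type from the node's own provider address...
    providerType := n.ProviderAddr.ProviderConfig.Type

    // ...and construct a default (no-alias) configuration for that type
    // in the root module, regardless of which module the node lives in.
    implied := addrs.RootModuleInstance.ProviderConfigDefault(providerType)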
Along with the change in approach, this new implementation also has an
updated documentation comment that better describes its current intent
(previously it was outdated) and hopefully clearer trace logging to
better communicate what it's doing.
Previously we were defaulting the provider configuration selection to a
provider in the root module inferred from the resource type name.
This is close, but not quite right: we need to _start_ with a provider
configuration in the same module as we're importing into, and then our
provider resolution steps during import graph construction will use that
as a starting point for a walk up the tree to find the nearest matching
configuration (which might eventually still be in the root, but not
necessarily).
In the change to using addrs types rather than string keys directly here
I incorrectly made this use the _relative_ provider config instead of
the absolute one, causing MissingProviderTransformer to only match
providers defined in the root module (due to ambiguity in the string
representations of these address types).
The rest of this change is improved logging and test output that helped
with debugging this issue.
Some of these tests were using an outdated form to reference a local
directory as a module. While here, I also updated the provider references
to the new unquoted form, which is preferred from v0.12 onwards.