We only support provider input for the root module. This is already
checked in ProviderInput, but was not checked in SetProviderInput. We
can't actually do anything particularly clever with an invalid call here,
but we will at least generate a WARN log to help with debugging.
Also need to update TestBuiltinEvalContextProviderInput to expect this
new behavior of ignoring input for non-root modules.
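For illustration, a minimal sketch of the guard this describes, with the
type and field names assumed rather than copied from the real
EvalContext implementation:

    package example

    import (
        "log"
        "sync"
    )

    // evalContext stands in for the real BuiltinEvalContext.
    type evalContext struct {
        path                []string // module path; empty means root
        providerLock        sync.Mutex
        providerInputConfig map[string]map[string]interface{}
    }

    // SetProviderInput ignores input for non-root modules, emitting a
    // WARN log so the invalid call is still visible while debugging.
    func (ctx *evalContext) SetProviderInput(name string, c map[string]interface{}) {
        if len(ctx.path) != 0 {
            log.Printf("[WARN] attempt to set provider input for non-root module: %s", name)
            return
        }
        ctx.providerLock.Lock()
        defer ctx.providerLock.Unlock()
        if ctx.providerInputConfig == nil {
            ctx.providerInputConfig = map[string]map[string]interface{}{}
        }
        ctx.providerInputConfig[name] = c
    }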
The prior commit changed the schema-access model so that all schemas are
fetched up front during context creation and are then readily available
for use throughout graph building and evaluation.
As a result, we no longer need to create dependency edges to a provider
when one of its resources is referenced by another node, and so the
ProviderTransformer needs only to worry about direct ownership
dependencies.
This also avoids the need for us to run AttachSchemaTransformer twice,
since ProviderTransformer no longer needs schema and we can therefore
defer attaching until just before ReferenceTransformer, when all of the
referencable and referencing nodes are already present in the graph.
We now fetch all of the necessary schemas during context creation, so we
can just thread that repository of schemas through into EvalContext and
Evaluator and access the schemas as needed without any further fetching.
This requires updating a few tests to have a valid Provider address in
their state objects, because we need that in order to trigger the loading
of the relevant schema.
This test depends on having a correct schema, so we'll specify the minimum
schema for its fixture inline here rather than using the superset schema
returned by testProvider.
Provider input is no longer handled with a graph walk, so the code
related to the input graph and walk is no longer needed.
For now the Input method is retained on the ResourceProvider interface,
but it will never be called. Subsequent work to revamp the provider API
will remove this method.
Add a graphNodeAttachDestroy interface, so destroy nodes can be attached
to their companion create node. The creator can then reference the
CreateBeforeDestroy status of the destroyer, determining if the current
state needs to be replaced or deposed.
This is needed when a node is forced to become CreateBeforeDestroy by a
dependency rather than the config: since the config is immutable, only
the destroyer is aware that it has been forced CreateBeforeDestroy.
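A sketch of the attachment pattern, with interface and method names
assumed from the description above rather than copied from the source:

    package example

    // GraphNodeDestroyerCBD is implemented by destroy nodes that know
    // whether they have been forced into create-before-destroy ordering.
    type GraphNodeDestroyerCBD interface {
        CreateBeforeDestroy() bool
    }

    // GraphNodeAttachDestroyer lets a create node receive a reference to
    // its companion destroy node during graph construction.
    type GraphNodeAttachDestroyer interface {
        AttachDestroyNode(n GraphNodeDestroyerCBD)
    }

    type applyableResource struct {
        destroyer GraphNodeDestroyerCBD
    }

    func (r *applyableResource) AttachDestroyNode(n GraphNodeDestroyerCBD) {
        r.destroyer = n
    }

    // createBeforeDestroy decides whether to depose the current object
    // rather than replace it; only the destroyer knows when CBD was
    // forced by a dependency instead of the (immutable) config.
    func (r *applyableResource) createBeforeDestroy() bool {
        return r.destroyer != nil && r.destroyer.CreateBeforeDestroy()
    }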
The earlier change 5f07201a made it so that the state is always rewritten
by EvalDiffDestroy, but that was too disruptive to other users of
EvalDiffDestroy.
Now we follow the lead of EvalDiff and have a separate pointer for the
_output_ state, which allows the caller to opt in to having its state
pointer updated to reflect the new (nil) state.
NodePlannableResourceInstanceOrphan is the only caller that currently opts
in to this, since that was the focus of 5f07201a. We may need to make a
similar change to other plannable resource destroy nodes, but we'll wait
to see if that needs to be done in a subsequent commit.
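In outline, with simplified types, the opt-in output pointer looks
something like this (field names are assumptions):

    package example

    type InstanceState struct{ ID string }

    type EvalDiffDestroy struct {
        State  **InstanceState // input: pointer to the current state
        Output **InstanceState // optional: receives the new (nil) state
    }

    func (n *EvalDiffDestroy) Eval() error {
        // ... build the destroy diff from *n.State ...
        if n.Output != nil {
            // Opt-in only: reflect the post-destroy (nil) state back to
            // the caller, following the pattern used by EvalDiff.
            *n.Output = nil
        }
        return nil
    }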
The TestApplyGraphBuilder_doubleCBD fixture was updated incorrectly with
a cycle in the desired output. The test matches once the expected string
is fixed.
Now that core has access to the provider configuration schema, our input
logic can be implemented entirely within Context.Input, removing the need
to execute a full graph walk to gather input.
This commit replaces the graph walk call, instead just visiting the
provider configurations (explicit and implied) in the root module and
using the schema to prompt.
The code to manage the input graph walk is not yet removed by this commit,
and will be cleaned up in a subsequent commit once we've made sure there
aren't any other callers/tests depending on parts of it.
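A simplified sketch of schema-driven prompting, with all of the types
invented here for illustration (the real code works in terms of the
provider's configschema and the UIInput machinery):

    package example

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    type attribute struct {
        Optional bool
        Required bool
    }

    // providerSchema maps attribute names to their (simplified) schema.
    type providerSchema map[string]attribute

    // inputProvider prompts for any settable attribute not already
    // present in the given configuration, mirroring the new
    // root-module-only input pass.
    func inputProvider(name string, schema providerSchema, given map[string]string) {
        r := bufio.NewReader(os.Stdin)
        for attrName, attr := range schema {
            if _, ok := given[attrName]; ok || !(attr.Optional || attr.Required) {
                continue
            }
            fmt.Printf("provider.%s.%s: ", name, attrName)
            line, _ := r.ReadString('\n')
            if v := strings.TrimSpace(line); v != "" {
                given[attrName] = v
            }
        }
    }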
It was incorrect to use a type switch to detect the optional schema
attachment interfaces, because they are not mutually-exclusive: resource
nodes implement both GraphNodeAttachResourceSchema and
GraphNodeAttachProvisionerSchema.
This fixes a number of test regressions around dependency analysis in
"provisioner" blocks.
In #14526 we fixed a sticky edge-case where a resource with count = 0 set
won't create its containing module state on apply, and thus when another
expression refers to it we need to deal with that absence.
The original bug fixed by #14526 was actually a nil dereference panic in
this case. Our new HCL2-oriented expression evaluation codepath was, on
the other hand, correctly checking for the nil, but was not taking the
correct action in response to it, leading to the result being an
unexpected unknown value.
Here we replicate the fix to #14526 by behaving as if there are just no
instances present in this case. We achieve this in a slightly different
way here by just creating an empty ModuleState, but the effect is the
same as #14526.
This fixes TestContext2Apply_multiVarMissingState.
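Schematically, with a stand-in state type, the fallback is just:

    package example

    // ModuleState stands in for the real state type; the nil handling is
    // the point here.
    type ModuleState struct {
        Resources map[string]*struct{ ID string }
    }

    func moduleInstanceCount(ms *ModuleState) int {
        if ms == nil {
            // count = 0 may never have created the module state at all;
            // treat that the same as an empty module (the #14526 fix).
            ms = &ModuleState{}
        }
        return len(ms.Resources)
    }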
While we're planning we must always update the state with the proposed new
data resulting from the plan. In this case, we must record that the
orphan instance doesn't exist at all in the proposed new state by storing
its state as nil.
This in turn allows references to the containing resource to evaluate
properly, using the new updated resource count. This fixes
TestContext2Apply_multiVarCountDec.
This also includes a number of changes to the test output of
TestContext2Apply_multiVarCountDec that make it easier to debug failures.
Both ProviderTransformer and ReferenceTransformer need schema information,
and so there's a chicken-and-egg problem here where previously the schemas
were not getting attached to provider nodes created during
ProviderTransformer.
As a stop-gap measure for now we'll just run AttachSchemaTransformer
twice, so we can catch any new nodes created during the provider
transforms.
Previously we fetched schemas during the AttachSchemaTransformer,
potentially multiple times as that was re-run for each graph built. Now
we fetch the schemas just once during context construction, passing that
result into each of the graph builders.
This only addresses the schema accesses during graph construction. We're
still separately loading schemas during the main walk for evaluation
purposes. This will be addressed in a later commit.
An aliased provider should not be automatically inherited, nor
implicitly instantiated in a module. This test should not have
previously passed.
Add a proxy provider block to the module and update the provider to
match the schema.
The state after EvalReadDataDiff is no longer nil during plan, which
means that we can't use that as a proxy for requiring the diff.
Rather than exiting early to save the EvalWriteState and EvalWriteDiff
evaluations, continue normally regardless to ensure we have the latest
diff and state after the plan. This also aligns the data source
handling with that of the managed resource.
The Provider field in ResourceState is now required, whereas before it
could be omitted and have Terraform try to discover a fitting provider
configuration automatically.
The automatic behavior was a compatibility shim added in v0.11 to support
states from prior versions without an explicit migration, but for v0.12
we will have a migration to our new state format anyway and so we will
fix this up during that migration pass.
This comprehensive test was covering a few different behaviors that are
intentionally different for v0.12:
- Applying the splat operator to a list of resource instances that haven't
been created yet produces a list of unknown values rather than a single
unknown list as before. This is important because it allows that list
to be passed into length().
- Wrapping a splat expression in another round of brackets now produces
a list of lists, whereas before we had a special case (for compatibility
with prior to v0.10) that would flatten this away in the schema layer.
Previously we would just retain an empty InstanceState in this case, but
now that we must enumerate all of the available instances during
expression evaluation it's important that we be able to recognize
instances that have been deleted.
Because we currently rely on the ReferenceTransformer to introduce the
necessary edges between local/output values and resource destroy nodes, we
must include the destroy phase of any resource we depend on in the
references of these.
This works in conjunction with the changes in the prior commit to restore
correct handling of dependencies for local and output values during
destroy.
With the current design, several seemingly-separate parts of the code must
all coincidentally agree with one another for destroy edges to be created
properly, which makes this code very hard to maintain. In future we should
refactor this so that ReferenceTransformer doesn't create edges for
destroy nodes at all, and have _all_ destroy edges (including
create_before_destroy) be dealt with in the single DestroyEdgeTransformer,
where they can be maintained and unit tested together.
Prior to the introduction of our "addrs" package, we represented destroy
nodes as a special kind of address string ending in ".destroy" or
".destroy-cbd".
Using references to resolve these dependencies is a strange idea to begin
with, since these are not user-visible addresses, but rather than refactor
that now we instead have these weird pseudo-address types ResourcePhase
and ResourceInstancePhase that correspond to those weird address suffixes,
thus restoring the prior behavior.
In future we should rework this so that destroy node edges are not handled
as references at all, and instead handled as part of
DestroyEdgeTransformer where there's better context for implementing this
logic and it can be maintained and tested in a single place.
The old testApplyFn would overwrite ID with "foo" in all cases where there
wasn't a diff, which made the test fixtures harder to reason about. If
there's an ID, keep it the same.
The initial destroyer map is constructed using DestroyAddr(), which
returns resource instance addresses, but we were then going on to _use_
that map with resource addresses, which means the keys can't match when
indexed instances are being destroyed.
Now we'll use resource instance addresses in all cases.
This also includes some additional logging that was helpful in debugging
this issue.
The adaptation of ModuleState.RemovedOutputs for the new config types
was incorrect because it took the absence of any output map as "nothing to
do", rather than "everything has been removed" as expected.
Now it treats a nil map like an empty map, detecting _all_ of the outputs
as having been removed if the output map is nil.
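The corrected behavior in miniature (simplified types): a nil map
participates exactly like an empty one, so every old output is detected
as removed:

    package example

    func removedOutputs(old, next map[string]string) []string {
        // A nil `next` map simply yields no lookups, so all of `old` is
        // reported as removed -- no special case required.
        var removed []string
        for name := range old {
            if _, ok := next[name]; !ok {
                removed = append(removed, name)
            }
        }
        return removed
    }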
This is temporarily broken until we implement the new plan file format,
since terraform.Plan is no longer serializable with gob. Rather than have
an error that seems like it needs immediate fixing, we'll be explicit
about it in the error message and focus our efforts on other test failures
for now, and return to implement the new file format later.
An earlier commit today reworked this to handle non-fatal errors, which
are returned "smuggled" as a special type of error to avoid changing the
EvalNode interface.
Unfortunately, that change then broke the _other_ special thing we smuggle
through the error return path: early exit.
Now we'll handle them both. This is not perfect because the early-exit
path causes us to discard any warnings we've already collected, but it's
more important that we bail early than retain warnings.
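A sketch of handling both smuggled returns; the sentinel error and the
warning wrapper here are assumptions standing in for the internal types:

    package example

    import (
        "errors"
        "fmt"
    )

    var errEarlyExit = errors.New("early exit")

    type nonFatalError struct {
        Warnings []string
    }

    func (e nonFatalError) Error() string { return "non-fatal error" }

    // handleEvalError distinguishes the two special cases from a true
    // fatal error after each node is evaluated.
    func handleEvalError(err error) (warnings []string, fatal error) {
        switch {
        case err == nil:
            return nil, nil
        case errors.Is(err, errEarlyExit):
            // Bail out immediately; any previously-collected warnings
            // are discarded, which is the trade-off described above.
            return nil, errEarlyExit
        default:
            var nf nonFatalError
            if errors.As(err, &nf) {
                return nf.Warnings, nil // warnings only: keep walking
            }
            return nil, fmt.Errorf("fatal: %w", err)
        }
    }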
We previously added a special case for dealing with references to
instances in the plan graph where there are only resource nodes. However,
this was too general a fix and so it upset the handling of graphs where
instances _are_ present.
Now we'll do that fallback behavior only if there is no instance node in
the graph already, so the exact matching behavior will be used in graphs
where the instances are present.
The provider transforms now depend on analyzing references in order to
properly create provider edges, and so we need to now insert all of the
nodes that can have references and attach schemas before we run
TransformProviders.
This was done for the main graph builders in a previous commit, but as
usual we missed this surprising hidden graph builder that lives inside
a graph transformer. 🙄
Due to the need for schema in order to resolve references in expressions,
we now create additional provider dependency edges when a node refers to
an attribute from a resource.
During import we constrain provider configuration to allow only references
to variables, but since provider configurations in child modules might
refer to variables from the parent, we still need to include the module
variables, outputs and locals in the graph here and attach the provider
schemas.
In future a better check would be that the provider configuration doesn't
refer to anything that is currently unknown, but we'll save that for
another day.
The previous wording of this message was a little awkward, and a little
confusing due to the mention of it being a non-existing "resource", when
elsewhere in our output we use that noun to refer to the configuration
construct rather than the remote object.
Here we rework it as a diagnostic message, and while here also include an
extra note about a common problem of using an id from a different region
than the provider is configured for, to help the user realize what is
wrong in that case.
The previous commit rewrote this incorrectly because the fatal message
made it seem like it was failing when an error occurs, but an error is
actually expected here.
Also includes a more detailed error message for this case, consistent with
our new diagnostics style.
ctx.Import now returns tfdiags.Diagnostics rather than "error", so these
tests need to now expect that API for proper behavior.
Several of these tests are still failing for other reasons. That will be
addressed in subsequent commits.
To avoid a massively-disruptive change to how EvalNode works, we're now
"smuggling" warnings through the error return value for these, but this
depends on all of the Eval machinery correctly handling this special case
and continuing evaluation when only warnings are returned.
Previous changes missed EvalSequence as a place where execution halts on
error. Now it will accumulate diagnostics itself, aborting if any of
them are error diagnostics, and then wrap its own result up in an error
to be returned by the main Eval function, which already treats non-fatal
errors as a special case, though now produces an explicit log message
about that situation to make it easier to spot in trace logs.
This also includes a more detailed warning message for the warning about
provider input being disabled. While this warning should be removed before
we release anyway, having this additional detail is helpful in debugging
tests where it's being returned.
Since outputs now rely on providers in order to ensure that a schema is
available for evaluation, we need to exclude providers from checking
TargetDownstream.
A provider's schema is the same regardless of its address in the
config. Key them by type so that an evaluation referencing a provider
from an address not included in the graph can still find the schema.
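Schematically, with address handling reduced to plain strings for
illustration:

    package example

    import "strings"

    type ProviderSchema struct{ /* ... */ }

    // providerType extracts the type name from a (possibly aliased)
    // provider configuration address like "provider.aws.west".
    func providerType(addr string) string {
        parts := strings.Split(addr, ".")
        if len(parts) >= 2 && parts[0] == "provider" {
            return parts[1]
        }
        return addr
    }

    type schemaCache struct {
        byType map[string]*ProviderSchema
    }

    // schemaFor resolves a schema even for a provider address that has
    // no corresponding node in the graph, since the type alone
    // determines the schema.
    func (c *schemaCache) schemaFor(addr string) *ProviderSchema {
        return c.byType[providerType(addr)]
    }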
We no longer have this merge behavior, because it is inconsistent with how
variables behave in all other contexts and similar behavior can now be
achieved by merging the user's input with a predefined map in a local
value expression.
Previously we'd create the stub provider in any case where we didn't need
a configured provider, but we also need to skip creating it if there's
already a provider node present, or else we can end up with multiple
stub nodes in the graph.
Since ProviderTransformer now needs the schema in order to infer indirect
references to providers, we must run AttachSchemaTransformer before the
provider transformers in order to calculate the correct ordering of
operations.
The provider schema cache is keyed by provider configuration address
rather than provider type, so we need to do the same inheritance logic
to resolve providers needed because of reference as we do for providers
needed for direct use.
This allows resources that override "provider" or resources in child
modules that have their own provider configurations to be associated
with the provider config they will eventually get schema from, rather
than (as before) always the default configuration for the provider in
the root module.
Eventually it'd probably be better to switch to using a provider cache
that is keyed by provider _type_ rather than provider config, but since
it's currently fetched by visiting the individual provider graph nodes
we currently visit each provider configuration separately and fetch a
schema for each.
Any non-resource node (outputs, variables, locals) that references a
resource type must also be connected to that resource's provider. This
is required during apply, because the graph built from the diff may not
include the referenced resources, since they are being evaluated from
the state.
If the provider isn't present already, add a NodeEvalableProvider to
fetch the provider schema.
The provider transformers now need to happen after the outputs, locals,
and variables are transformed.
This test seems to have been buggy before our current work, with the test
fixture containing a reference to a resource that doesn't exist.
This both fixes the fixture and adds a mock schema for it, though this
just revealed another error which isn't fixed here, where the a_ids value
seems to come through as unknown after apply. That will be fixed in a
subsequent commit.
We no longer support using "self.count" in a provisioner to access the
resolved count meta-argument value of the associated resource.
This was only possible before because of a special exception in how
Terraform resolved variables, and in new HCL that exception isn't possible
because resource instances are real values in the scope and we don't want
to add this implied "count" attribute to all of them.
"count" is a property of the resource config rather than of the resource
instances, and since "self" is a resource _instance_ it doesn't make sense
to expose it there.
There is no replacement for this feature. In the rare case where it is
needed, the user must factor the count out into a named local value and
refer to that both in the count meta-argument and in the provisioner.
Most of these changes are just adding schema to describe the expectations
of the existing test fixtures. However, some of them require the fixtures
themselves to be changed due to changing assumptions in the language.
This was previously an apply-time failure due to our inability to
type-check unknowns in 0.11, but we now retain type information for
unknown values and so this check now fails during plan instead.
The fixtures for this test assume some atypical arguments to the resources
and also need a provisioner schema.
This doesn't actually fix the test, but by fixing the schema/fixture this
exposes a problem that seems to exist in the main code, which will be
fixed in a subsequent commit.
On the initial pass here I reached a faulty conclusion about which value
from the new world should be shimmed into a NewInstanceInfo, based on a
poor read of the existing code.
It actually _should've_ been based on an absolute instance after all,
as evidenced by the expected result of TestContext2Refresh_targetedCount.
Therefore the signature is changed here, and all of the callers (which,
in retrospect, were all holding a full instance address anyway!) are
updated to that new signature.
References can't be connected directly to the instances, because the
resources haven't yet been expanded when ReferenceTransformer is run.
Look up references by the resource type instead.
Although there isn't really a good reason why there should be no schema
in practice, it's better for us not to crash right now while we're still
updating all of the callers (mostly tests) to make schema available.
This was trying to test gathering input from a default and an alias
provider configuration, but it was incorrectly setting the provider ref
on the instance that was supposed to belong to the alias. This was working
before because Terraform would gather input from the aliased provider
anyway, but now the invalid "alias" argument in the resource is producing
a validation error.
This doesn't actually make the test work again yet, because we still have
provider input disabled at this time, pending a forthcoming change to
how provider input is handled.
This test was previously not setting InputModeVarUnset, causing us to
overwrite the "amis" map that _is_ already set. This worked before because
we used to treat the empty result as an empty map and then merge it with
the given value, but since we no longer do that merging behavior we were
ending up with an empty map after input.
Since the intent of this test is to see that the "foo" variable gets
populated by input, here we add InputModeVarUnset which then matches how
the input walk is triggered by the "real" codepath in the local backend.
This also includes some updates to make the test fixture v0.12-idiomatic
(applied after it was seen to work with the old fixture) and to properly
handle the "diags" return value from the various context methods.
This attribute is referenced in order to include a computed value into
another resource, and so it must be present in the schema so that it can
be properly resolved.
This requires making the "components" object available to the resource
node so it can be used during DynamicExpand. It also involved splitting
the provisioner schema attachment into a separate interface from
GraphNodeProvisionerConsumer so that it can now be handled within
AttachSchemaTransformer, along with all of the other schema attachment
steps.
The initial rework of this function to support traversals didn't correctly
handle the "all" case, due to a logic error where the ignoreAll branch
could be visited only if ignoreChanges were non-empty, but yet the two
are mutually exclusive in practice.
Now we process ignoreAll separately from ignoreChanges, and invert the
two loops so that we will visit all attributes regardless of what is
in the ignoreChanges slice.
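The corrected control flow in outline (names assumed):

    package example

    // processIgnoreChanges handles ignore_changes = ["*"] on its own,
    // and otherwise iterates the attributes in the outer loop so that
    // every attribute is visited no matter what the ignore list holds.
    func processIgnoreChanges(attrs []string, ignoreAll bool, ignored map[string]bool) []string {
        if ignoreAll {
            // Mutually exclusive with an explicit ignore_changes list.
            return nil
        }
        var kept []string
        for _, attr := range attrs {
            if ignored[attr] {
                continue
            }
            kept = append(kept, attr)
        }
        return kept
    }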
Prior to our v0.12 changes this test was confusingly using an attribute
named "set", but assigning a map to it. The expected test result suggested
that it was actually expecting legacy HCL's weird interpretation of a
single map as a list of maps, and so to retain the intent of the test here
(in spite of the contrary name) we type "set" as list of map of string,
update the fixture to _actually_ be a list of maps, and then we get the
expected test result.
Update test fixtures to work in our new world.
This is mostly changing out attribute names for those in the schema,
adding Providers to states, and updating the test-fixture
configurations.
Previously this test's fixture was depending on the fact that attribute
access of an unknown value would always succeed and return another unknown
value, but under the new language interpreter an unknown value still
retains type information and so accessing this "bar" attribute would fail
the semantic check.
We also have to fuss a bit here to work around the limitations of the
testDiffFn implementation, which doesn't have enough context to understand
that "list" in the ResourceConfig is the same as "list.#" in its result.
Since this part of the provider API will change soon to use cty values
directly, this change just accepts a slightly-odd-looking diff in the mean
time, with both "list" and "list.#" populated.
Previously we only handled the "count cannot be computed" check during
validate, leaving other walks to just report "a number is required"
(because "unknown" was represented as a special string) but now we have
unknown as first-class we handle it during all walks, and so this error
message is now the more appropriate one saying that the value is not
yet known.
The behavior of the "count" meta-argument has changed so that its presence
(rather than its value) chooses whether the associated resource has
indexes. As a consequence, these tests which set count = 1 now produce
a single indexed instance with index 0.
Previously Terraform Core was unaware of the structure of a resource type
schema and so the strange behavior of our testDiffFn caused some
attributes to not appear at all in the result. With core now more aware,
it "fills in" these missing items before calling, and as a result they
now appear in the diff even with the testDiffFn.
In real code, where helper/schema is constructing diffs, this situation
doesn't arise because the framework always produces schema-compatible
diffs.
The diff stringer now uses the standard serialization of a module address,
so we need to update the golden representations to restore their
associated tests to passing.
The old testDiffFn used the raw config to dynamically set computed values
in the diff. Since the schema now defines what values should be there,
all test diffs end up with unknown computed values. Filter these out by
looking for a value set to "compute".
This was assuming our old practice of a slice starting with the string
"root". We'll normalize here and then stringify the result to ensure that
we get a string consistent with what's used elsewhere.
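For example, a helper along these lines (illustrative, not the exact
shim):

    package main

    import (
        "fmt"
        "strings"
    )

    // normalizeLegacyPath strips the old leading "root" element and then
    // renders the module address the same way it is rendered elsewhere
    // ("module.a.module.b", or "" for the root module).
    func normalizeLegacyPath(path []string) string {
        if len(path) > 0 && path[0] == "root" {
            path = path[1:]
        }
        parts := make([]string, 0, len(path)*2)
        for _, name := range path {
            parts = append(parts, "module", name)
        }
        return strings.Join(parts, ".")
    }

    func main() {
        fmt.Println(normalizeLegacyPath([]string{"root", "child"})) // module.child
    }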
This is primarily aimed at fixing some of the context plan tests.
Most changes here just introduce some custom schema into the test mocks.
In some cases, a fixture is lightly updated to more modern assumptions.
The test for accessing count.index in a resource block without count set
is removed, because that is no longer valid under the new language
implementation.
This problem should now be caught at validate time rather than plan time,
because we can use the schema to detect the problem before the resource
has been resolved.
The evaluate data source was using a guessed provider configuration
address from configuration in this case, but that isn't necessarily
correct since the resource might actually be associated with a config
inherited from a parent module.
We still need to retain that fallback to config because we are sometimes
asked to evaluate when state is incomplete (like in "terraform console"),
but if possible we'll take the stored provider address from the state
and use that, even if the resource is otherwise "pending".
Destroy nodes were being referenced by their regular paths, which was
causing cycles in the graphs. Destroy nodes can't be referenced directly
in any way, so override the inherited method for a referenceable address.
The id field is always computed, and never directly related to a diff.
Since that field is being converted to a regular schema attribute, we
need to handle the behavior in the mocks too.
Mostly this is about updating ctx.Plan callers to expect diags instead of
err, but also includes a few light updates to test fixtures, and a fix to
testModuleInline.
There are still some other issues with some of these tests right now, but
all the ones that need to have schema should now have it.
It seems that there is a bug with the evaluation of child module input
variables where they can't find their schema even when a mock is provided.
Will attack this in a subsequent commit.
This should actually have been caught by !val.IsWhollyKnown, since
DynamicVal is always unknown, but something isn't working quite right here
and so for now we'll redundantly check also if it's of the dynamic
pseudo-type, and then revisit cty later to see if there's a real bug
hiding down there.
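The redundant check, expressed against the real cty API:

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    // isDecided reports whether a value is fully known and typed.
    // cty.DynamicVal is always unknown, so the type check should be
    // unreachable, but it stays until the suspected cty bug is found.
    func isDecided(val cty.Value) bool {
        if !val.IsWhollyKnown() {
            return false
        }
        if val.Type().Equals(cty.DynamicPseudoType) {
            return false // belt-and-braces, per the note above
        }
        return true
    }

    func main() {
        fmt.Println(isDecided(cty.DynamicVal))        // false
        fmt.Println(isDecided(cty.StringVal("okay"))) // true
    }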
The recent changes for v0.12 have moved the responsibilities around here
so that it's the caller's responsibility to specify the provider address
in all cases, with the real UI (in the "command" package) providing a
suitable default if nothing is specified.
Therefore the tests at _this_ level must all include an explicit provider
address, since these tests are acting as if they _are_ the UI code.
The approach here is a little hacky, since this edge case applies only to
validate and all of the other evaluateResourceCountExpression callers
don't care about it: we overload the "count" return value as a flag that
allows NodeValidatableResource to detect this situation and silently
ignore errors in this case.
We were previously doing this for all of the reference types except this
one. Now we do it for resources and resource instances too, which both
allows us to produce a proper error message when one is missing (rather
than returning an unknown value) and allows us to properly handle the
case where there are no instances yet present in the state (e.g. because
we're in the validate walk) but "count" isn't set, and so a single
unknown value is expected rather than an empty tuple.
Previously InitProvider was incorrectly using only the relative address,
which (due to the ambiguity in the string representation of absolute vs.
relative addresses) caused it to always initialize providers in the root
module.
Now we use the absolute address as the key, which then agrees with the
Provider method and ensures that each module gets its own separate
instance of each provider if explicit configuration is present.
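Schematically, with addresses reduced to strings for illustration:

    package example

    import "strings"

    // absKey builds a module-qualified key so that "provider.aws" in a
    // child module does not collide with "provider.aws" in the root.
    func absKey(modulePath []string, relAddr string) string {
        if len(modulePath) == 0 {
            return relAddr
        }
        return "module." + strings.Join(modulePath, ".module.") + "." + relAddr
    }

    type contextProviders struct {
        providers map[string]bool
    }

    func (c *contextProviders) initProvider(modulePath []string, relAddr string) {
        if c.providers == nil {
            c.providers = map[string]bool{}
        }
        c.providers[absKey(modulePath, relAddr)] = true
    }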
This should never happen in real code, but it comes up a lot in test code
where incomplete mock schemas are being used to test with very simple
configurations.
Previously we were skipping all of the validation steps if a provider was
being configured implicitly, and thus had no block in configuration.
This is incorrect, since a provider must still get an opportunity to
configure itself with an empty configuration and possibly reject that
empty configuration with errors.
EvalValidateSelfRef needs schema in order to extract references. It was
previously expecting a *configschema.Block directly, but we weren't
actually passing that in from anywhere except the tests because it's not
available directly in that form during the evaltree for
node_resource_validate.
Instead, we now pass in the whole *ProviderSchema for the associated
provider and have this EvalNode find the schema itself based on the
address. This breaks some of the generality of this node (now only really
works for resource addresses) but that's okay since we have no other
use-case right now anyway.
Our old state format requires all primitive values to be strings. We were
trying to enforce that before, but this didn't work properly because
gocty does not perform automatic type conversions.
Instead, we now convert to string first and then convert the result into
a native Go string afterwards.
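Using the go-cty API, the two-step conversion reads roughly as:

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
        "github.com/zclconf/go-cty/cty/convert"
    )

    // primitiveToFlatmapString converts within cty first (which knows
    // how to render numbers and bools as strings) and only then unwraps
    // to a native Go string, since gocty alone won't coerce types.
    func primitiveToFlatmapString(v cty.Value) (string, error) {
        sv, err := convert.Convert(v, cty.String)
        if err != nil {
            return "", fmt.Errorf("cannot serialize %#v as string: %s", v, err)
        }
        return sv.AsString(), nil
    }

    func main() {
        s, _ := primitiveToFlatmapString(cty.NumberIntVal(42))
        fmt.Println(s) // "42"
    }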
At the moment this must be handled as a special case because we're still
using the old representation of output state, but we do still need to
handle this so that unknown values can properly pass between modules
during validate and plan.
Since our ignoring is now implemented in terms of cty objects and HCL
traversals, rather than flatmap keys, it no longer makes sense to test
for the flatmap detail of keeping the map count in a separate key.
It _does_ make sense to ignore an entire block or map, but that's already
covered by another existing test for just []string{"resource"} above.
This was incorrectly comparing a cty.Value to an hcl.Body. Now we decode
the body first so we can compare two cty.Value values.
Also includes a fix to a stale comment in buildProviderConfig that was no
longer accurate.
The interface of this eval node has changed for v0.12, now requiring both
a provider address and the actual provider object.
We also need to give it a working ctx.EvalBlock implementation on the
mock EvalContext, so we just use installSimpleEval here to get our simple
implementation that just knows how to evaluate constant expressions.
An instance like aws_instance.foo[0] is not permitted to refer to
aws_instance.foo, since that result contains the individual instance along
with all other instances.
A schema is now required for any validation, so these tests now use the
simpleMockProvider function to produce a provider with a simple schema
already configured, and test against that schema.
EvaluateBlock and EvaluateExpr are often called multiple times in a single
operation, and our usual approach of mocking with static values is a poor
fit for those cases.
To accommodate more complex tests, we allow the test to optionally provide
a callback function to use instead of the static return values.
Since a pretty common need is to just evaluate the given block or
expression in a simple way, we also now have a helper method
installSimpleEval that installs reasonable implementations of these
two methods that can (assuming no other customization) just evaluate
constant expressions, which is sufficient for many tests.
The config loader already extracts this during its initial pass and saves
it as a separate field in a configs.Connection value, so requiring it
again here causes confusing errors about this attribute being provided but
not set.
These are some remnants of the shadow graph functionality that was added
to support the graph builder changes in v0.8, but that has since been
removed and so there are no remaining callers for these types and
functions.
The contract for this function has changed as part of the 0.12
reorganization so that, while before it would accept a partial variables
map if all missing variables were optional, it now expects that the caller
has already collected values for all variables and is just checking them
for type-correctness here.
Although this rarely matters, making it always be nil when empty makes
deep assertions simpler in tests.
This also includes a minor update to the test in the core package that
first encountered this problem, to improve the quality of its output
on failure.
These tests now need schema available to run properly, so rather than
crafting a separate schema for each of these we'll just use the one
created by simpleMockProvider, which contains a resource type called
test_object that we'll now use as the basis for all of the tests here.
In this codepath we're still using the old "internal" ResourceAddress
parser to parse resource addresses from the diff, but the string being
parsed does not include a module path and so the result is interpreted
as a resource in the root module.
We need to rewrite this to be in the correct module path as part of
converting to the new address representation, in order to get a correct
absolute address.
A trivial mistake in the rework of this function meant that it was just
discarding its result rather than returning it. It will now return its
result as expected, allowing reference analysis to work for this node
type.
The final "<resource> provided by <provider>" message was showing the
original selection made by the resource itself rather than the provider
address finally resolved after inheritance, which caused the log to
misrepresent what was actually being created in the graph.
Previously this was just stubbing out provider types, but we now need to
include schema for each of the providers and resource types the tests
use in order for the references to be properly detected.
The test fixtures are adjusted slightly here so we can use the
simpleTestSchema as the schema for all of the different blocks in these
tests. The relationships between the resources are still preserved, but
the attributes are renamed to comply with this schema.
Since expression evaluation is now done via an EvalContext method, and
since these tests use the MockEvalContext, we need to provide a value for
the mock call to return. Previously expression evaluation happened at a
different layer that was not mocked here.
These transformers both construct temporary graphs using many of the same
transformers used in the apply graph, and properly doing this now requires
access to the providers and provisioners in order to obtain their schemas.
Along with this, we also update the tests here to use the
simpleMockComponentFactory helper to get a mock provider with a schema
already configured, which means we also need to update the test fixtures
and assertions to use the resource type and attributes defined in that
mock factory.
Having a missing schema is a programming error, but at the time of this
commit we're in the midst of introducing schema all over Terraform and so
there are inevitably some places -- particularly in older unit tests --
where schema isn't yet being provided.
This error allows us to catch those cases and fail gracefully, rather
than panicking further down here when we access t.Components methods.
Also includes some additional logging to aid debugging of this
transformer.
Previously we were attempting to construct a default manually here, and
actually getting it wrong by using the resource type name as a whole
rather than the expected inference by prefix.
Now we use the method provided in the addrs package for this purpose,
which implements the standard behavior of shaving off the first
underscore-separated word from the resource type name.
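The standard inference in miniature (the real logic lives in the addrs
package; this is an illustrative copy):

    package main

    import (
        "fmt"
        "strings"
    )

    // impliedProviderType shaves off the first underscore-separated word
    // of the resource type name.
    func impliedProviderType(resourceType string) string {
        if i := strings.IndexByte(resourceType, '_'); i != -1 {
            return resourceType[:i]
        }
        return resourceType
    }

    func main() {
        fmt.Println(impliedProviderType("aws_instance")) // "aws", not "aws_instance"
    }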
Now that any configuration processing requires schema, we need either a
standalone schema or a provider/provisioner configured with one a lot more
often in tests.
To avoid adding loads of extra boilerplate to the tests, these new helper
functions produce objects pre-configured with a schema that should be
useful for a number of different cases, and can be customized further for
more interesting situations.
A lot of our tests can then just use these pre-defined object and
attribute names in their fixtures in situations where the canned schemas
here are good enough.
The logic under test has intentionally changed here so that setting count
to any value -- even 1 -- causes Terraform to prefer to keep indexed
instances if both non-indexed and indexed are present.
Previously Terraform treated count = 1 as equivalent to count not set at
all, but we now recognize these as two different situations and treat
count = 1 as the same as any other non-zero count, to ensure that if
count is set conditionally it'll always produce indexed instances, even
if the dynamic expression ends up evaluating to 1.
This function was previously checking for a path length greater than one
because the older path format included an always present "root" element
at the start.
We now need to check for a totally-empty list, because otherwise we fail
to add the expected prefix to the front of a path with only one element.
This also includes some adjustments to the related tests and transforms
that do not change behavior but do make the test results easier to
understand and debug.
Due to some disagreement about what representation of provider addresses
we were using, the inherited providers map wasn't matching. Now we'll
consistently use the "compact" form (just the provider name and optional
alias).
Also includes some other tweaks to make this test better-behaved.
Although the AddModule method now takes a new-style module address as an
argument, internally we're still shimming it to a []string.
Therefore this test needs to still expect [][]string as a result, rather
than []addrs.ModuleInstance.
Prior to the refactoring to move provider address parsing/selection up
into the frontend, there was some logic here to just-in-time default a
provider config based on the given resource type.
This is now expected to happen at a higher layer, with ImportTarget
expecting an already-valid provider configuration address.
The normal import codepath was already updated with this in mind, but some
of the provider transform tests are using ImportStateTransformer as a
shortcut for getting some resource nodes added to the graph, and so those
tests now need to include a valid provider address in their ImportTarget
values.
Also includes some adjustments to test output to make the tests easier
to debug.
The prior implementation of this function (before the rewrite to use
addrs.AbsProviderConfig values) was relying on some implicit behavior of
our provider address normalization to generate an address in the root
module. Since that wasn't explicit, I introduced a bug here when
introducing the new address type, where I generated an address in the
node's own module, rather than in the root as expected.
This new implementation is functionally equivalent to the prior, but is
written to make the intent more obvious: take _just_ the type from the
node's provider address and create an implicit configuration for it
_in the root module_.
Along with the change in approach, this new implementation also has an
updated documentation comment that better describes its current intent
(previously it was outdated) and hopefully clearer trace logging to
better communicate what it's doing.
In the change to using addrs types rather than string keys directly here
I incorrectly made this use the _relative_ provider config instead of
the absolute one, causing MissingProviderTransformer to only match
providers defined in the root module (due to ambiguity in the string
representations of these address types).
The rest of this change is improved logging and test output that helped
with debugging this issue.
Some of these tests were using an outdated form to reference a local
directory as a module. While here, I also updated the provider references
to the new unquoted form, which is preferred from v0.12 onwards.
The changes to our GraphNodeReferencable and GraphNodeReferencer
interfaces were not also reflected in our testing structs here, and so
these tests were no longer working.
This updates these implementations to the new required signatures,
adapting the simplified model used in the structs to generate local
variable references, since the reference model no longer uses the
arbitrary strings that this test originally depended on.
We also remove some of the tests here since the functionality they were
testing no longer applies: inter-module dependencies are now handled by
the graph nodes producing extra ReferencableAddrs, and the "backup" form
is no longer used because our new address model is able to distinguish
resources from resource instances without the need for the magical
backup reference forms we previously used.
Resolution of dependencies automatically from expressions now requires
a schema to be available. To avoid the need to provide mock schemas for
all of these different resource types, we instead use the depends_on
argument here to mark the dependencies explicitly.
Previously we had a bug where we'd use the resource type name instead of
the provider name. Now we use the DefaultProviderConfig helper function
to extract the provider name from the resource type name.
The ReferenceTransformer can't do its work here unless our resources have
schema attached, since otherwise it doesn't know which attributes to
search to find references.
Now that a schema is required to decode resource configuration, a lot more
tests than before need access to mock providers in order to get a schema.
These new helpers allow a mock provider with a schema to be constructed
concisely.
This test required some adaptation for the new variable handling model in
the HCL2-oriented codepaths, where more of the handling is now done in the
"command" package.
The initial reorganization of this test was not correct, so this is a
further revision to adapt the original assertions to the new model as much
as possible, removing a few tests that no longer make sense in the new
world.
A number of our test fixtures were previously using the non-idiomatic form
of including a single child attribute all on one line with the block
header and bounding braces.
This non-idiomatic form is an error in HCL2, and hclfmt has always "fixed"
it to the expected form of each attribute being on a line of its own, and
so here we just update all the affected test fixtures to canonical form
(using hclfmt), allowing them to be parsed as intended.
Since these entire files were processed with hclfmt, there are some
other unrelated style changes included in situations where the file
layouts were non-idiomatic in other ways.
These are all things that ought to be present in normal use but can end up
being nil in incorrect tests. Test debugging is simpler if these things
return errors gracefully, rather than crashing.
While there's no good reason for this to happen in practice, it can arise
in tests if mocks aren't set up quite right, and so we'll catch it and
report it nicely to make test debugging a little easier.
There's actually no good reason for this to happen in the normal case, but
it can happen reasonably easily if a test doesn't properly configure the
MockEvalContext, and so having this check here makes test debugging a
little easier.
Since this test case is using t.Run, it must be sure to pass the nested
*testing.T over to the testDiffs function or else calls to t.Fatal will
crash the whole test process, since they would otherwise be called on the
wrong instance of *testing.T.
The only reason these cases are arising right now is because we have tests
that haven't yet been updated to properly support schema, but it can't
hurt to add some robustness here to reduce the risk of real crashes.
After the refactoring to integrate HCL2 many of the tests were no longer
using correct types, attribute names, etc.
This is a bulk update of all of the tests to make them compile again, with
minimal changes otherwise. Although the tests now compile, many of them
do not yet pass. The tests will be gradually repaired in subsequent
commits, as we continue to complete the refactoring and retrofit work.
Some of the objects that are referencable from expressions are transient
values computed only during a graph walk, and not persisted in state. In
order to support arbitrary evaluation of expressions, such as in the
"terraform console" CLI command, it's necessary to be able to evaluate
these values before we start evaluating the expressions themselves.
This new Eval method achieves this by performing a special graph walk that
ignores resources (except for dependency resolution) and just focuses on
evaluating all of these transient values, before returning an evaluation
scope that can then resolve expressions in terms of that result.
This replaces the Context.Interpolator method, which was fraught with
various issues due to it not properly priming the state before evaluating.
Here we replace the stub implementations in evaluationStateData with real
implementations that are based on their equivalents in the old
Interpolator type.
The behavior here is a little different due to the different semantics
expected under HCL2, but the principle remains the same: the main
references are resolved from the state, using config for validation
in order to produce some helpful error messages.
I took some missteps here while doing the initial refactor for HCL2 types.
This restores the map of maps that retains all of the variable values, and
then makes it available to the evaluator.
Previously our evaluationStateData object was constructed inside
Evaluator.Scope, but this was awkward because all of the fields inside it
need to be populated from BuiltinEvalContext fields, and so the signature
of Evaluator.Scope kept growing new arguments over time.
Instead, we reassign the responsibilities here so that Evaluator.Scope
takes an already-constructed lang.Data, and then teach BuiltinEvalContext
to build this object itself from its own internal values.
Due to a logic error here we were trying to find our module's parent
as a descendant of itself, rather than as a descendant of the root. It
turns out that we can do this even more simply by just accessing the
Parent field on the given config, avoiding the need to traverse the tree
down from the root at all.
While here, this also switches to using the path.Call helper method rather
than manually slicing the path array, since this better communicates our
intent.
This is a little awkward since we need to instantiate the providers much
earlier than before. To avoid a lot of reshuffling here we just spin each
one up and then immediately shut it down again, letting our existing init
functionality during the graph walk still do the main initialization.
We've not yet adjusted any of the state structs to reflect our new address
types because they are used with encoding/json to produce our state file
format, but the shimming here previously was incorrect because it failed
to include the special "root" string that's always required at element
zero of a module path in the state.
Due to how deeply the configuration types go into Terraform Core, there
isn't a great way to switch out to HCL2 gradually. As a consequence, this
huge commit gets us from the old state to a _compilable_ new state, but
does not yet attempt to fix any tests and has a number of known missing
parts and bugs. We will continue to iterate on this in forthcoming
commits, heading back towards passing tests and making Terraform
fully-functional again.
The three main goals here are:
- Use the configuration models from the "configs" package instead of the
older models in the "config" package, which is now deprecated and
preserved only to help us write our migration tool.
- Do expression inspection and evaluation using the functionality of the
new "lang" package, instead of the Interpolator type and related
functionality in the main "terraform" package.
- Represent addresses of various objects using types in the addrs package,
rather than hand-constructed strings. This is not critical to support
the above, but was a big help during the implementation of these other
points since it made it much more explicit what kind of address is
expected in each context.
Since our new packages are built to accommodate some future planned
features that are not yet implemented (e.g. the "for_each" argument on
resources, "count"/"for_each" on modules), and since there's still a fair
amount of functionality still using old-style APIs, there is a moderate
amount of shimming here to connect new assumptions with old, hopefully in
a way that makes it easier to find and eliminate these shims later.
I apologize in advance to the person who inevitably just found this huge
commit while spelunking through the commit history.
This is currently not very ergonomic due to the API exposed by providers.
We'll smooth this out in a later change to improve the provider API, since
we know we always want the entire schema.
Initially the intent here was to tease these apart a little more since
they don't really share much behavior in common in core, but in practice
it'll take a lot of refactoring to tease apart these assumptions in core
right now and so we'll keep these things unified at the configuration
layer in the interests of minimizing disruption at the core layer.
The two types are still kept in separate maps to help reinforce the fact
that they are separate concepts with some behaviors in common, rather than
the same concept.
For the moment this is just a lightly-adapted copy of
ModuleTreeDependencies named ConfigTreeDependencies, with the goal that
the two can live concurrently for the moment while not all callers are yet
updated and then we can drop ModuleTreeDependencies and its helper
functions altogether in a later commit.
This can then be used to make "terraform init" and "terraform providers"
work properly with the HCL2-powered configuration loader.
This is a rather-messy, complex change to get the "command" package
building again against the new backend API that was updated for
the new configuration loader.
A lot of this is mechanical rewriting to the new API, but
meta_config.go and meta_backend.go in particular saw some major
changes to interface with the new loader APIs and to deal with
the change in order of steps in the backend API.
When computing the count value, make sure to include walkDestroy with
walkApply, as the former is only a special case of the latter.
When applying a saved plan, the computed count values are lost and we
can no longer query the state for those values. The apply walk was
already considered in the `resourceCountMax` function, but the destroy
walk was not. This worked when destroying in a single operation
("terraform destroy"), since the state would still be updated with the
latest counts from the plan.
During a full destroy when outputs are removed, the
NodeDestroyableOutput was preventing its sibling output from being
destroyed. Prune the output node if it only has its destroy node as a
dependent.
The destroy output test is simply run a second time with no state, which
would cause the output interpolation to fail if it remained in the
graph.
If an existing resource is scaled back to 0, locals and outputs will
still have a multi-variable reference to evaluate, which should return
an empty list. Due to how the resource is removed, the resource will
still exist in the state but with no primary instance, which needs to be
ignored in the instance count.
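The counting rule in miniature, with simplified state types:

    package example

    type instanceState struct{ ID string }

    type resourceState struct {
        Primary *instanceState
    }

    // instanceCount ignores entries that linger in state with no primary
    // instance, as happens when a resource is scaled back to 0.
    func instanceCount(resources map[string]*resourceState) int {
        n := 0
        for _, rs := range resources {
            if rs == nil || rs.Primary == nil {
                continue
            }
            n++
        }
        return n
    }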
filterPartialOutputs was not taking into account that some dependent
resources might yet be removed from the graph. Check that they are not
in the targeted set before declaring that the output must remain.
ReadState would hide any errors, assuming that it was an empty state.
This can mask errors on Windows, where the OS enforces read locks on the
state file.
The provisionerFail_createBeforeDestroy test was verifying the incorrect
output. The create_before_destroy instance in the state has an ID of
"bar" with require_new="abc", and a new instance would get an ID of
"foo" with require_new="xyz". The existing test was expecting the
following state:
    aws_instance.bar: (1 deposed)
      ID = bar
      provider = provider.aws
      require_new = abc
      Deposed ID 1 = foo (tainted)
Which showed "bar" still the primary instance in the state, with the new
instance "foo" as being the deposed instance, though properly tainted.
The new output is:
    aws_instance.bar: (tainted) (1 deposed)
      ID = foo
      provider = provider.aws
      require_new = xyz
      type = aws_instance
      Deposed ID 1 = bar
Showing the new "foo instance as being the primary instance in the
state, with "bar" as the deposed instance.
If a count field references another count field which is interpolated
but is attached to a resource already in the state, the result of that
first interpolation will be lost when a plan is serialized. This is
because the result of the first interpolation is stored directly in the
module config, in an unexported config field.
This is not a general fix for the above situation, which would require
refactoring how counts are handled throughout the config. Ignoring the
error works, because in most cases the count will be properly
handled during the resource's interpolation.
An interpolated count value that is determined during plan is lost
during plan serialization, causing apply to fail when the interpolation
string can't be evaluated.
While not normally possible, manual manipulation of the state and config
can cause us to end up with a nil config in
evalTreeManagedResourceNoState.
Regardless of how it got here, we can't ever assume the Config field is
not nil, and EvalInterpolate happily accepts a nil RawConfig.
Add `host_key` and `bastion_host_key` fields to the ssh communicator
config for strict host key checking.
Both fields expect the contents of an OpenSSH-formatted public key. This
key can either be the remote host's public key, or the public key of the
CA which signed the remote host certificate.
Support for signed certificates is limited, because the provisioner
usually connects to a remote host by ip address rather than hostname, so
the certificate would need to be signed appropriately. Connecting via
a hostname needs to currently be done through a secondary provisioner,
like one attached to a null_resource.
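For reference, the equivalent strict check in terms of the standard
x/crypto/ssh API (this is not the provisioner's own code):

    package example

    import (
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    // hostKeyCallback turns an OpenSSH-format public key into a callback
    // that rejects any host presenting a different key. CA-signed host
    // certificates need ssh.CertChecker instead of ssh.FixedHostKey.
    func hostKeyCallback(hostKey string) (ssh.HostKeyCallback, error) {
        pub, _, _, _, err := ssh.ParseAuthorizedKey([]byte(hostKey))
        if err != nil {
            return nil, fmt.Errorf("parsing host_key: %s", err)
        }
        return ssh.FixedHostKey(pub), nil
    }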
Similar to NodeApplyableOutput, NodeDestroyableOutputs also need to stay
in the graph if any ancestor nodes remain.
Use the same GraphNodeTargetDownstream method to keep them from being
pruned, since they are dependent on the output node and all its
descendants.
The id attribute can be missing during the destroy operation.
While the new destroy-time ordering of outputs and locals should prevent
resources from having their id attributes set to an empty string,
there's no reason to error out if we have the canonical ID field
available.
This still interrogates the attributes map first to retain any previous
behavior, but in the future we should settle on a single ID location.
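The lookup order, with simplified state types:

    package example

    type instanceState struct {
        ID         string
        Attributes map[string]string
    }

    // instanceID prefers the "id" attribute to retain prior behavior,
    // but falls back to the canonical ID field rather than failing when
    // the attribute is missing during destroy.
    func instanceID(is *instanceState) (string, bool) {
        if id, ok := is.Attributes["id"]; ok && id != "" {
            return id, true
        }
        if is.ID != "" {
            return is.ID, true
        }
        return "", false
    }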
Since outputs and local nodes are always evaluated, if they reference a
resource from the configuration that isn't in the state, the
interpolation could fail.
Prune any local or output values that have no references in the graph.
Now that outputs are always evaluated, we still need a way to remove
them from state when they are destroyed.
Previously, outputs were removed during destroy from the same
"Applyable" node type that evaluates them. Now that we need to possibly
both evaluate and remove an output during an apply, we add a new node -
NodeDestroyableOutput.
This new node is added to the graph by the DestroyOutputTransformer,
which makes the new destroy node depend on all descendants of the output
node. This ensures that the output remains in the state as long as
everything which may interpolate the output still exists.
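A sketch of the transformer's core idea, assuming the Descendents and
Connect helpers of the dag package (node construction is elided):

    // For each output node, add a destroy node that depends on every
    // descendant of the output, so the output is only removed from
    // state once everything that might read it has been visited.
    for _, v := range g.Vertices() {
        output, ok := v.(*NodeApplyableOutput)
        if !ok {
            continue
        }
        destroy := &NodeDestroyableOutput{ /* same addr as output */ }
        g.Add(destroy)
        deps, _ := g.Descendents(output)
        for _, d := range deps.List() {
            g.Connect(dag.BasicEdge(destroy, d))
        }
    }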
Always evaluate outputs during destroy, just like we did for locals.
This breaks existing tests, which we will handle separately.
Don't reverse output/local node evaluation order during destroy, as they
are both being evaluated.
Add a complex destroy provisioner testcase using locals, outputs and
variables.
Add that pesky "id" attribute to the instance states for interpolation.
Destroy-time provisioners require us to re-evaluate during destroy.
Rather than destroying local values, which doesn't do much since they
aren't persisted to state, we always evaluate them regardless of the
type of apply. Since the destroy-time local node is no longer a
"destroy" operation, the order of evaluation need to be reversed. Take
the existing DestroyValueReferenceTransformer and change it to reverse
the outgoing edges, rather than in incoming edges. This makes it so that
any dependencies of a local or output node are destroyed after
evaluation.
Having locals evaluated during destroy failed one other test, but that
was the odd case where we need `id` to exist as an attribute as well as
a field.
Previously we would interpolate the count config (ResourceConfig.RawCount)
only while preparing to dynamic-expand aggregate resource nodes. This is
problematic because we do not dynamic-expand any resource nodes during the
apply walk, and so previously the count value was not available for
interpolation during apply and would result in an error.
Now we interpolate RawCount once for each resource we visit during the
apply walk -- even though that redundantly interpolates the same config
multiple times when count > 1 -- to ensure that it's available by the
time we interpolate any remaining expressions in the config and any
expressions within "connection" and "provisioner" blocks.
This error was masked by us sharing a single RawConfig instance between
the plan and apply walks when "terraform apply" is run with no explicit
plan file argument, but was exposed by the workflow where the plan is
written first to disk since in that case the interpolation result from
during the plan phase is not present in the deflated plan object. For
this reason, the new context test serializes the plan into an in-memory
buffer and reloads it in order to simulate the effect of the two-step
workflow.
Containers (maps, lists, sets) in an InstanceDiff need to be handled in
their entirety. Unchanged values cannot be filtered out from diffs, as
providers expect attribute containers to be complete.
If a value in ignore_changes maps to a single key in an attribute
container, and there are other changes present, that ignored value must
be included in the diff as well.
There was no test checking that Close was called on the mock provider.
This fails now since the CloseProviderTransformer isn't using the fully
resolved provider name.
The interrupt tests for providers no longer check for the condition
during the diff operation. Defer the lock so other tests' DiffFns don't
need to be as careful locking themselves.
The init command needs to parse the state to resolve providers, but
changes to the state format can cause that to fail with
difficult-to-understand errors. Check the terraform version during init
and provide the same error that would be returned by plan or apply.
The bounds checking in ResourceConfig.get() was insufficient: it
detected when the index was greater than or equal to cv.Len() but not
when the index was less than zero. If the user provided an (invalid)
configuration that referenced "foo.-1.bar", the provider would panic.
Now it behaves the same way as if the index were too high.
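A minimal sketch of the corrected check (the helper is hypothetical;
the real code indexes into a reflect.Value):

    // indexInBounds reports whether i is a valid index into a
    // container of length n. Previously only the upper bound was
    // checked, so a reference like "foo.-1.bar" could panic.
    func indexInBounds(i, n int) bool {
        return i >= 0 && i < n
    }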
Validation is the best time to return detailed diagnostics
to the user since we're much more likely to have source
location information, etc than we are in later operations.
This change doesn't actually add any detail to the messages
yet, but it changes the interface so that we can gradually
introduce more detailed diagnostics over time.
While here there are some minor adjustments to some of the
messages to improve their consistency with terminology we
use elsewhere.
The provider name coming from ProvidedBy may be resolved if it only
exists in the state. Make sure to strip the module and provider
prefixes for the provider name when adding missing providers.
It's become apparent that passing in a provider config for an implicitly
used provider would be very useful. While the ProviderConfigTransformer
efficiently added providers to the graph, the algorithm was reversed
from what would be needed to allow overriding implicit providers.
Change the ProviderConfigTransformer to first add all configured
providers, even if they are empty stubs. Then run through all providers
being passed in from the parent, and replace the provider nodes we
created with proxies, and add implicit proxies where none existed. The
extra nodes will then be pruned later.
There was a bug where all references would be discarded in the case when
a self-reference was encountered. Since a module references all
descendants by its own path, it returns a self-reference by definition.
Our new resource-to-provider matching is stricter about explicitly
matching aliases when config is present (no longer automatically
inherited) and with locating providers to destroy removed resources.
With this in mind, this is an attempt to expand slightly on this error
message now that users are more likely to see it.
In the future it would be nice to do some explicit validation of this a bit
closer to the UI, so we can have room for more explanatory text, but this
additional messaging is intended to help users understand why they might
be seeing this message after removing a provider configuration block from
configuration, whether directly or as a side-effect of removing a module.
Remove the module entry from the state if a module is no longer in the
configuration. Modules are not removed if there are any existing
resources with the module path as a prefix. The only time this should be
the case is if a module was removed in the config, but the apply didn't
target that module.
Create a NodeModuleRemoved and an associated EvalDeleteModule to track
the module in the graph then remove it from the state. The
NodeModuleRemoved dependencies are simply any other node which contains
the module path as a prefix in its path.
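A sketch of that dependency rule, assuming module paths are plain
[]string addresses (the helper name is hypothetical):

    // dependsOnModule reports whether a node's path has the removed
    // module's path as a prefix, meaning the node must be visited
    // before the module entry can be deleted from the state.
    func dependsOnModule(nodePath, modulePath []string) bool {
        if len(nodePath) < len(modulePath) {
            return false
        }
        for i := range modulePath {
            if nodePath[i] != modulePath[i] {
                return false
            }
        }
        return true
    }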
This could probably have been done more easily as a step in pruning the
state, but modules are going to have to be promoted to full graph nodes
anyway in order to support count.
You can't find orphans by walking the config, because by definition
orphans aren't in the config.
Leaving the broken test for when empty modules are removed from the
state as well.
Now that the resolved provider is always stored in state, we need to
update all the test data to match. There will probably be some more
breakage once the provider field is properly diffed.
Use the ResourceState.Provider field to store the full name of the
provider used during apply. This field is only used when a resource is
removed from the config, and will allow that resource to be removed by
the exact same provider with which it was created.
Modify the locations which might accept the value of the
ResourceState.Provider field to detect that the name is resolved.
Here we complete the passing of providers between modules via the
module/providers configuration, add another test and update broken test
outputs.
The DisableProviderTransformer is being removed, since it was really
only for provider configuration inheritance. Since configuration is no
longer inherited, there's no need to keep around unused providers. There
actually shouldn't be any unused providers going into the graph any
longer, but we'll put off verifying that condition for later. Replace its
usage with the PruneProviderTransformer, and use that to also remove the
unneeded proxy provider nodes.
Implement the adding of providers through the module/providers map in the
configuration.
The way this works is that we start walking the module tree from the
top, and for any instance of a provider that can accept a configuration
through the parent's module/provider map, we add a proxy node that
provides the real name and a pointer to the actual parent provider node.
Multiple proxies can be chained back to the original provider. When
connecting resources to providers, if that provider is a proxy, we can
then connect the resource directly to the proxied node. The proxies are
later removed by the DisableProviderTransformer.
This should re-instate the 0.11 beta inheritance behavior, but will
allow us to later store the actual concrete provider used by a resource,
so that it can be re-connected if it's orphaned by removing its module
configuration.
A missing provider alias should not be implicitly added to the graph.
Run the AttachProviderConfigTransformer immediately after adding the
providers, since the ProviderConfigTransformer should have just added
these nodes.
We have a few pesky functions that don't act like proper functions and
instead return different values on each call. These are tricky because
we need to make sure we don't trip over ourselves by re-generating these
between plan and apply.
Here we add a context test to verify correct behavior in the presence
of such functions.
There's actually a pre-existing bug which this test caught as originally
written: we re-evaluate the interpolation expressions during apply,
causing these unstable functions to produce new values, and so the
applied value ends up not exactly matching the plan. This is a bug that
needs fixing, but it's been around at least since v0.7.6 (random old
version I tried this with to see) so we'll put it on the list and address
it separately. For now, this part of the test is commented out with a
TODO attached.
We previously didn't compare values but had a TODO to start doing so,
which we then recently did. Unfortunately it turns out that we _depend_
on not comparing values here, because when we use EvalCompareDiff (a key
user of Diff.Same) we pass in a diff made from a fresh re-interpolation
of the configuration and so any non-pure function results (timestamp,
uuid) have produced different values.
We can remove the AllowAny option which is no longer used, and providers
don't need to be connected to their resources at this stage, since that
will happen in the ProviderTransformer.
Simplify the MissingProviderTransformer so that it only adds missing
providers at the root level. There's no need for the multiple providers
added at every level of the path.
ParentProviderTransformer then only needs to connect providers with the
equivalent type at the root level.
The grandChild missing test has a provider declared in a child
module which is missing in a grandchild module. Verify that the
grandchild gets connected to the child provider, and they all are
connected to the root providers.
Update some test outputs to match the expected behavior of only adding
missing providers at the root level.
The CloseProviderTransformer requires that the resources be connected
to their provider first, so that it can get the correct dependencies,
and adding the ProviderTransformer changed the test output slightly.
This turned out to be a big messy commit, since the way providers are
referenced is tightly coupled throughout the code. This starts to unify
how providers are referenced, using the format output by the node Name
method.
Add a new field to the internal resource data types called
ResolvedProvider. This is set by a new setter method SetProvider when a
resource is connected to a provider during graph creation. This allows
us to later look up the provider instance a resource is connected to,
without requiring it to have the same module path.
The InitProvider context method now takes two arguments: the first is
the provider type and the second is the full name of the provider. While
the provider type could still be parsed from the full name, this makes
it more explicit, and changes to the name format won't affect this code.
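Roughly, the method changes shape like this (a sketch; other context
methods and the exact name format are elided):

    type EvalContext interface {
        // InitProvider initializes the named provider. Before this
        // change the signature was:
        //   InitProvider(name string) (ResourceProvider, error)
        InitProvider(typ string, name string) (ResourceProvider, error)
        // ... other methods elided ...
    }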
The first step in only using the required provider nodes in a graph is
to be able to specifically add them from the configuration.
The MissingProviderTransformer was previously responsible for adding
all providers. Now it is really just adding any that are missing from
the config.
Needed to add more cases to support value comparison exceptions that the
rest of TF expects to work (this fixes tests in various places).
Also moved things to a switch block so that it's a little more compact.
A diff now needs to pass basic value checks to be considered the
"same". Several provisions have been added to ensure that the list, set,
and RequiresNew behaviours that have needed some exceptions in the past
are preserved in this new logic.
This ensures that we are checking for value equality as much as
possible, which will be more important when we transition to the
possibility of diffs being sourced from external data.
While merging the cached Input configs in the correct order prevents
overwriting existing config values, it doesn't prevent an earlier
provider from inserting unwanted values into later provider
configurations.
Diff the key-values returned by Input with the pre-input config, and
store only the "answers" that were added during the Input call.
Always call Input, even if we already have some values, since a
previously cached config may not be complete.
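A minimal sketch of that diffing step, assuming flat
map[string]interface{} configs (the helper name is hypothetical):

    // inputAnswers keeps only the keys that Input added. Values that
    // were already present before Input ran are not cached, so one
    // provider's config no longer leaks into another's saved input.
    func inputAnswers(pre, post map[string]interface{}) map[string]interface{} {
        answers := make(map[string]interface{})
        for k, v := range post {
            if _, existed := pre[k]; !existed {
                answers[k] = v
            }
        }
        return answers
    }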
Previously when looking up cached provider input, the Input was taken in
its entirety, and only provider configuration fields that weren't in the
saved input were added. This would cause providers in modules to use the
entire configuration from parent modules, even if they themselves had
entirely different configs.
Note: this is only marginally better than the old behavior. It may be
slightly more correct, but still can't account for the user's intent, and
may be adding configured values from one provider into another.
Change the PathCacheKey to just join the path on a non-path character
(|), which makes for easier debugging.
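The result is a one-liner along these lines (assuming the module path
is a []string):

    import "strings"

    // PathCacheKey joins the module path on "|", which cannot occur
    // in a path element, yielding keys like "root|child|grandchild"
    // that are easy to read in debug output.
    func PathCacheKey(path []string) string {
        return strings.Join(path, "|")
    }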
Use the configured providers directly, rather than looking for inherited
provider configuration during graph evaluation.
First remove the provider config cache, and the associated
SetProviderConfig and ParentProviderConfig methods on the eval context.
Every provider must be configured, so there's no need to look for
configuration from other provider instances.
The config.ProviderConfig struct now has a Scope field which stores the
proper path for the interpolation scope. To get this metadata to the
interpolator, we add an EvalInterpolateProvider node which can carry the
ProviderConfig, and an InterpolateProvider context method to carry the
ProviderConfig.Scope into the InterpolationScope.
Some of the tests could be adjusted to account for the new inheritance
behavior, and some were simply no longer valid and will be removed.
The remaining tests have questions on how they should work in practice.
This mostly concerns orphaned modules where there is no longer a way to
obtain a provider. In some cases we may require that a minimal provider
config be present to handle the destroy process, but we need further
testing.
All disabled code was commented out in this commit to record any
additional comments. The following commit will be a cleanup pass.
Update all references to the version values to use the new package.
The VersionString function was left in the terraform package
specifically for the aws provider, which is vendored. We can remove that
last call once the provider is updated.
In order to parse provider, resource and data source configuration from
HCL2 config files, we need to know the relevant configuration schema.
This new method allows Terraform Core to request these from a provider.
This is a breaking change to this interface, so all of its implementers
in this package are updated too. This includes concrete implementations
of the new method in helper/schema that use the schema conversion code
added in an earlier commit to produce a configschema.Block automatically.
Plugins compiled against prior versions of helper/schema will not have
support for this method, and so calls to them will fail. Callers of
this new method will therefore need to sniff for support using the
SchemaAvailable field added to both ResourceType and DataSource.
This careful handling will need to persist until next time we increment
the plugin protocol version, at which point we can make the breaking
change of requiring this information to be available.
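A sketch of how a caller might sniff for support before using the new
method (the method and request type names here are assumptions based on
this description; error handling is abbreviated):

    // Only plugins compiled against the new helper/schema report
    // SchemaAvailable, so check it before requesting the schema.
    if !resourceType.SchemaAvailable {
        return fmt.Errorf("provider plugin does not support schema export")
    }
    schema, err := provider.GetSchema(&terraform.ProviderSchemaRequest{
        ResourceTypes: []string{resourceType.Name},
    })
    if err != nil {
        return err
    }
    // schema now carries the configschema.Block for the resource type.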
DestroyValueReferenceTransformer is used during destroy to reverse the
edges for output and local values. Because destruction is going to
remove these from the state, nodes that depend on their value need to be
visited first.
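The reversal itself is small (a sketch, assuming the dag package's
EdgesFrom, RemoveEdge and Connect):

    // Flip each outgoing edge of the value node v so that nodes
    // depending on the output or local value are visited before it
    // is removed from the state.
    for _, e := range g.EdgesFrom(v) {
        g.RemoveEdge(e)
        g.Connect(dag.BasicEdge(e.Target(), e.Source()))
    }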
When working on an existing plan, the context always used walkApply,
even if the plan was for a full destroy. Mark in the plan if it was
created for a destroy, and transfer that to the context when reading
the plan.
A Targeted graph may include outputs that were transitively included,
but if they are missing any dependencies they will fail to interpolate
later on.
Prune any outputs in the TargetsTransformer that have missing
dependencies, and are not depended on by any resource. This will
maintain the existing behavior of outputs failing silently in most
cases, but allow errors to be surfaced where the output value is
required.
Module outputs may not have complete information during Input, because
it happens before refresh. Continue processing on output interpolation
errors during the Input walk.
Remove the Input flag threaded through the input graph creation process
to prevent interpolation failures on module variables.
Use an EvalOpFilter instead to insert the correct EvalNode during
walkInput. Remove the EvalTryInterpolate type, and use the same
ContinueOnErr flag as the output node for consistency and to help keep
the number of possible eval node types down.
Locals don't need to be evaluated during destroy. Rather than simply
skipping them, remove them from the state as they are encountered. Even
though they are not persisted in the state, it keeps the state up to
date as the destroy happens, and we reduce the chance of other
inconsistencies later on.
The fact that we clean up data source state by applying a "destroy" action
for them is an implementation detail, and so should not be visible to
outside callers or to the user.
Signalling these as real destroys creates confusion for users because
they see Terraform say things like:
  data.template_file.foo: Refreshing state...
...which, to an understandably-nervous sysadmin, might make them suspect
that the underlying object was deleted, rather than just Terraform's
record of it.
Previously the rendered plan output was constructed directly from the
core plan and then annotated with counts derived from the count hook.
At various places we applied little adjustments to deal with the fact that
the user-facing diff model is not identical to the internal diff model,
including the special handling of data source reads and destroys. Since
this logic was just muddled into the rendering code, it behaved
inconsistently with the tally of adds, updates and deletes.
This change reworks the plan formatter so that it happens in two stages:
- First, we produce a specialized Plan object that is tailored for use
in the UI. This applies all the relevant logic to transform the
physical model into the user model.
- Second, we do a straightforward visual rendering of the display-oriented
plan object.
For the moment this is slightly overkill since there's only one rendering
path, but it does give us the benefit of letting the counts be derived
from the same data as the full detailed diff, ensuring that they'll stay
consistent.
Later we may choose to have other UIs for plans, such as a
machine-readable output intended to drive a web UI. In that case, we'd
want the web UI to consume a serialization of the _display-oriented_ plan
so that it doesn't need to re-implement all of these UI special cases.
This introduces to core a new diff action type for "refresh". Currently
this is used _only_ in the UI layer, to represent data source reads.
Later it would be good to use this type for the core diff as well, to
improve consistency, but that is left for another day to keep this change
focused on the UI.