These need their output strings updated for the new behavior that all
resource instances recorded in state have a provider configuration
associated, whereas before we only did it for non-default ones.
Only the count and for_each expressions are evaluated by this node type,
so it doesn't need to declare dependencies for any other refs in the
configuration body. Using this more refined set of dependencies means
we can avoid graph cycles in the case where one instance of a resource
refers to another instance of the same resource.
We'll still get cycles if count or for_each self-reference, but that's
forbidden anyway (caught during validate) and makes sense because those
two are whole-resource-level config rather than per-instance config.
The underlying References function includes duplicates and returns refs
in the order they appeared in source (approximately), but after we reduce
to just the raw addresses it's better to dedupe and return in a
predictable order.
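A minimal sketch of that dedupe-and-sort step, using plain strings in
place of the real address types and assuming the standard sort package:

    // uniqueSorted drops duplicate references and returns the rest in
    // a predictable (lexical) order.
    func uniqueSorted(refs []string) []string {
        seen := make(map[string]struct{}, len(refs))
        out := make([]string, 0, len(refs))
        for _, r := range refs {
            if _, ok := seen[r]; ok {
                continue
            }
            seen[r] = struct{}{}
            out = append(out, r)
        }
        sort.Strings(out)
        return out
    }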
An earlier update to make this not use info.HumanId selected the wrong
fake "ami" name in the branch here.
Also, the error message for this failure was terrible. :(
This is computed in the special case where compute = "unknown" in order
to force inclusion of an unknown value into the ultimate result, which
is invalid.
This fixes TestContext2Apply_unknownAttribute, which is intending to test
this error handling behavior.
Previously we kept the dependencies one level higher on the resource
instance itself, which meant that updating it was handled in a different
EvalNode, but now we consider these to be dependencies of the object
itself (derived from the configuration that was current at the time it
was created), so we must handle this during EvalApply.
The subtle difference here is that if an object is moved to "deposed"
during a create_before_destroy replace then it will retain the
dependencies it had on its last apply, rather than them being replaced
by the dependencies of the newly-created object.
We now treat states.ResourceInstanceObject values as immutable once
constructed, preferring to replace them completely rather than update them
in-place to avoid weird race conditions.
Therefore EvalRefresh must copy the state it is given before mutating the
Value field of it to reflect the updated value from the provider.
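A sketch of the copy-then-mutate discipline, assuming a DeepCopy helper
on the object type:

    // Never mutate the object we were given; replace it instead.
    newState := state.DeepCopy()
    newState.Value = refreshedVal // value returned by the provider
    *n.Output = newState          // publish the replacement object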
Some earlier updates to it changed some things in our expected state
string. This doesn't fully fix it since there seems to still be a bug
related to recording dependencies.
This method is now removed, because our shims to the old provider API
(which used InstanceInfo) now populate only the Type attribute and so
HumanId would just generate garbage results anyway.
Our shims from new provider API to old can't populate the InstanceInfo
fully since the new API only includes the type name, and so anyone
depending on this method is now broken anyway.
In practice only our own tests depend on this, and so we'll drop it to
make it explicit that it no longer works (rather than having it return
nonsense) and then fix up the remaining tests that were depending on it
to use a different strategy.
This test was relying on the fact that we used to expose the full resource
instance address to providers via the InstanceInfo value, but we no longer
do that (since in practice no "real" providers depended on it, nor should
depend on it) so we need to instead include in the config itself a key
to use for tracking each resource instance for later test assertions.
InstanceInfo.HumanId() is no longer functional, since our shim from the
new to the old provider API doesn't populate it. Therefore we must use
other means to distinguish the two instances here, and we'll use the "ami"
attribute value to do so.
This test was depending on InstanceInfo.HumanId, which is not something
any real providers use and therefore not something our shims from new to
old provider API support.
Instead, we'll give each of the instances a different id and use that
to distinguish them for tracking apply order.
In the old protocol, returning a nil InstanceState was a way to indicate
that the object had been deleted. In the new world we signal that with
an actual object that contains a null value, which Terraform Core itself
will then recognize and turn into a nil state, eventually removing the
entry from state altogether.
If the plan called for us to delete but the result isn't null then that's
suspect, because it suggests the object wasn't deleted after all.
Likewise, no other apply action should cause the result to be missing.
In order to avoid the confusing user experience that results in this case
(since it often looks like Terraform did nothing at all) we'll produce
some errors about it, but still update the state to reflect what the
provider returned anyway to allow for debugging and recovery.
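A sketch of those checks, assuming cty values for the provider result:

    // After a Delete action the new value must be null.
    if change.Action == plans.Delete && !newVal.IsNull() {
        diags = diags.Append(fmt.Errorf(
            "provider returned a non-null object after deleting %s", addr))
    }
    // Conversely, no other action may leave the result missing.
    if change.Action != plans.Delete && newVal.IsNull() {
        diags = diags.Append(fmt.Errorf(
            "provider returned no object for %s after applying changes", addr))
    }
    // Either way, newVal is still written to state for debugging/recovery.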
Incorrect pointer discipline here was causing the error to be lost rather
than returned as expected.
Additionally we'll include a log line in this case because otherwise an
apply error is reported so far from the actual apply operation that it
can be difficult to understand what happened.
Previously we had a bug where we would fail to populate resource-level
metadata in the state during apply when count = 0, because the apply
graph would contain only instance nodes, not whole-resource nodes.
To address this, we add to the apply graph a node for each resource in
the configuration alongside the separate resource instance nodes. This
node's job is just to populate the state metadata for the resource, which
ensures it gets updated correctly even when count = 0.
When count is not zero this ends up doing some redundant work that
would've happened as a side-effect of applying individual resource
instances anyway, but it's harmless and makes the updating of our
resource-level metadata more explicit.
Our state models cannot store unknown values (since state only deals with
knowns) and so following the lead of recent similar changes for resource
instances we'll treat the planned changeset as a sort of overlay on the
state, preferring values stored there if present, and then write in basic
planned output changes to the plan when we evaluate them.
We're abusing the plan model a little here: its current design is intended
to lay the groundwork for a future release where output values have a
full lifecycle similar to resource instances where we can properly track
changes during the plan phase, but the rest of Terraform isn't yet ready
for that and so we'll just retain an approximation of the planned action
by only using Create and Destroy actions.
A future release should change this so that output changes can be tracked
accurately using an approach similar to that of resource instances.
We've intentionally changed the behavior for "count = 1" so that it'll
assign an index to the created instance even though there's only one. The
un-indexed behavior now applies only if count isn't set _at all_, thus
avoiding weird behavior if a count is _dynamically_ set to 1 via an
expression but is assumed to be a list elsewhere in configuration.
We previously tried to take a shortcut for an empty diff, just returning
the given value directly. This is incorrect in the weird case where we're
creating a new instance but it has no attributes (and thus an empty diff)
because in that case we'd return the given null value, turning the result
into a no-op or destroy change.
To fix this, we just always do the work to construct a new value, even
if we might end up doing all this just to reconstruct the same value we
started with in some cases.
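A sketch of the corrected control flow, with applyDiff standing in as a
hypothetical helper:

    // No shortcut for an empty diff: a create with zero attributes must
    // still turn the null prior value into a non-null (empty) object.
    newVal, err := applyDiff(priorVal, diff, schema) // hypothetical helper
    if err != nil {
        return cty.NilVal, err
    }
    return newVal, nil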
This allows the provider to distinguish whether a particular value is set
in configuration or whether it's coming from prior state. It has no
particular purpose other than that.
We often want to bail out of a test if diagnostics are present, and it's
easiest to debug that when the diagnostics are printed in a compact but
complete manner that is non-trivial to produce.
Rather than duplicating that diagnostic formatting in every test, these
helpers allow us to succinctly print diagnostics and bail out when they
are present.
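A sketch of one such helper, assuming the tfdiags API:

    func assertNoDiagnostics(t *testing.T, diags tfdiags.Diagnostics) {
        t.Helper()
        if len(diags) == 0 {
            return
        }
        for _, diag := range diags {
            desc := diag.Description()
            t.Errorf("unexpected diagnostic: %s: %s", desc.Summary, desc.Detail)
        }
        t.FailNow()
    }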
This is a pretty basic attempt to turn a pair of values into an old-school
diff. It probably won't work correctly for all tests, but hopefully works
well enough that we can just update the remaining tests in-place to use
the new API directly.
We now handle impure functions by having them return an unknown value
during plan, since we can't predict what the value will be during apply.
This test was assuming the old behavior.
The provider is allowed to return a partial result if it also includes
error diagnostics. Real providers still return at least a null value in
that case due to the RPC format, but test mocks are often more sloppy.
Since the refresh walk creates a partial plan to account for objects that
are yet to be created, we need to provide at least a basic mock of the
PlanProviderChange provider method.
For now we're using the old-style "DiffFn" shim interface since that's
already available for use in other tests.
It's not possible for a normal RPC-based provider to get into this
situation because a nil value can't go over the wire, but it's easy to
cause this by not correctly configuring a provider mock during tests.
By panicking early here we produce a more helpful error message and stack
trace than we'd otherwise produce if we let this nil value escape out
into the rest of Terraform.
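A sketch of the guard, assuming the new value arrives as a cty.Value
from the shim:

    // A real RPC-based provider can never send a Go nil over the wire,
    // so this always indicates a misconfigured test mock.
    if newVal == cty.NilVal {
        panic(fmt.Sprintf(
            "provider returned nil new value for %s; mock is not configured correctly",
            addr))
    }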
Significant changes to the provider interface left a lot of the
tests in a non-buildable state. This set of changes gets the
tests building again but does not attempt to make them run to
completion or pass.
After this commit, it is possible to build a test program for
the ./terraform package but it will panic during its run. That
will be addressed in subsequent commits.
Since we do our deletes using a separate graph node from all of the other
actions, and a "Replace" change implies both a delete _and_ a create, we
need to pretend at apply time that a single replace change was actually
two separate changes.
This will also early-exit eval if a destroy node finds a non-Delete change
or if an apply node finds a Delete change. These should not happen in
practice because we leave these nodes out of the graph when they are not
needed for the given action, but we do this here for robustness so as not
to have an invisible dependency between the graph builder and the eval
phase.
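A sketch of those guards, with isDestroyNode as a hypothetical flag
distinguishing the two node types:

    // Destroy nodes handle only Delete; apply nodes handle everything else.
    if isDestroyNode && change.Action != plans.Delete {
        return nil, nil // early exit: nothing for this node to do
    }
    if !isDestroyNode && change.Action == plans.Delete {
        return nil, nil
    }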
When we're working on a create or destroy change it's expected for one of
the values to be null. Here we mimic the pre-0.12 behavior of producing
just an empty map in that case, which the helper/schema code (now the only
caller of this shim) then ignores completely.
Prior to our refactoring here, we were relying on a lucky coincidence for
correct behavior of the plan walk following a refresh in the same run:
- The refresh phase created placeholder objects in the state to represent
any resource instance pending creation, to allow the interpolator to
read attributes from them when evaluating "provider" and "data" blocks.
In effect, the refresh walk was creating a partial plan that covered
only creation actions, but it immediately discarded the actual diff
entries and stored only the planned new state.
- It happened that objects pending creation showed up in state with an
empty ID value, since that only gets assigned by the provider during
apply.
- The Refresh function concluded by calling terraform.State.Prune, which
deletes from the state any objects that have an empty ID value, which
therefore prevented these temporary objects from surviving into the
plan phase.
After refactoring, we no longer have this special ID field on instance
object state, and we instead rely on the Status field for tracking such
things. We also no longer have an explicit "prune" step on state, since
the state mutation methods themselves keep the structure pruned.
To address this, here we introduce a new instance object status "planned",
which is equivalent to having an empty ID value in the old world. We also
introduce a new method on states.SyncState that deletes from the state
any planned objects, which therefore replaces that portion of the old
State.prune operation just for this refresh use-case.
Finally, we are now expecting the expression evaluator to pull pending
objects from the planned changeset rather than from the state directly,
and so for correct results these placeholder resource creation changes
must also be reported in a throwaway changeset during the refresh walk.
The addition of states.ObjectPlanned also permits a previously-missing
safety check in the expression evaluator to prevent us from relying on the
incomplete value stored in state for a pending object, in the event that
some bug prevents the real pending object from being written into the
planned changeset.
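A sketch of that evaluator safety check, using the new status constant:

    if obj.Status == states.ObjectPlanned {
        // The pending object's real value lives in the planned changeset;
        // finding only this placeholder in state indicates a bug.
        diags = diags.Append(fmt.Errorf(
            "missing planned change for %s; this is a bug in Terraform", addr))
    }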
We no longer use strings to represent addresses, so this method was a
leftover outlier from previous refactoring efforts.
At this time the result is not actually being used due to the state type
refactoring, which is a bug we'll address in a subsequent commit.
This is a light adaptation of our earlier prototype of structural diff
rendering, as a starting point for what we'll actually ship. This is not
consistent with the latest mocks, so will need some additional work before
it is ready, but integrating this allows us to at least see the plan
contents while fixing up remaining issues elsewhere.
We were previously tracking this as a []cty.Path, but having it turned
into a pathset on creation makes downstream use of it more convenient and
ensures that it'll obey expected invariants like not containing the same
path twice.
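With cty's path set the conversion is a one-liner, and membership tests
become straightforward:

    requiresReplace := cty.NewPathSet(paths...) // paths is []cty.Path
    if requiresReplace.Has(path) {
        // this attribute change forces replacement
    }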
We're now writing the "planned new value" to OutputValue, but the data
resource nodes during refresh need to see the verbatim config value in
order to decide whether read must be deferred to the apply phase, so we'll
optionally export that here too.
Our state representation is not able to preserve unknown values, so it's
not suitable for retaining the transient incomplete values we produce
during planning.
Instead, we'll discard the unknown values when writing to state and have
the expression evaluator prefer an object from the plan where possible.
We still use the shape of the transient state to inform things like the
resource's "each mode", so the plan only masks the object values
themselves.
This is no longer a call into the provider, since all of the data diff
logic is standard for all data sources anyway. Instead, we just compute
the planned new value and construct a planned change from that as-is.
Previously the provider could, in principle, customize the read diff. In
practice there is no real reason to do that and the existing SDK didn't
pass that possibility through to provider code, so we can safely change
this without impacting provider compatibility.
Previously we just left these out of the plan altogether, but in the new
plan types we intentionally include change information for every resource
instance, even if no changes are actually planned, to allow alternative
plan file viewers to show what isn't changing as well as what is.
This also includes passing in the provider schema to a few more EvalNodes
that were expecting it but not getting it, in order to be able to
successfully test the implementation of EvalReadDiff here.
Change ResourceProvider to providers.Interface starting from the
context, and fix all type errors.
This only replaced some of the method calls directly applicable to the
providers themselves. The resource methods will follow.
MockProvider and MockProvisioner implement the new plugin interfaces,
and are built following the patterns used by the legacy
MockResourceProvider and MockResourceProvisioner.
Due to how often the state and plan types are referenced throughout
Terraform, there isn't a great way to switch them out gradually. As a
consequence, this huge commit gets us from the old world to a _compilable_
new world, but still has a large number of known test failures due to
key functionality being stubbed out.
The stubs here are for anything that interacts with providers, since we
now need to do the follow-up work to similarly replace the old
terraform.ResourceProvider interface with its replacement in the new
"providers" package. That work, along with work to fix the remaining
failing tests, will follow in subsequent commits.
The aim here was to replace all references to terraform.State and its
downstream types with states.State, terraform.Plan with plans.Plan,
state.State with statemgr.State, and switch to the new implementations of
the state and plan file formats. However, due to the number of times those
types are used, this also ended up affecting numerous other parts of core
such as terraform.Hook, the backend.Backend interface, and most of the CLI
commands.
Just as with 5861dbf3fc49b19587a31816eb06f511ab861bb4 before, I apologize
in advance to the person who inevitably just found this huge commit while
spelunking through the commit history.
The "config" package is no longer used and will be removed as part
of the 0.12 release cleanup. Since configschema is part of the
"new world" of configuration modelling, it makes more sense for
it to live as a subdirectory of the newer "configs" package.
Since the "References" function on graph nodes can't return errors, we
need to catch invalid depends_on references during the validation pass.
In this case, we're checking that the address is exact, rather than being
part of a traversal into an attribute of the object. In other words,
aws_instance.example is valid but aws_instance.example.id is not.
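A sketch of the exactness check, using the addrs reference parser:

    ref, diags := addrs.ParseRef(traversal)
    if diags.HasErrors() {
        return diags
    }
    if len(ref.Remaining) != 0 {
        // e.g. aws_instance.example.id: steps remain after the address
        diags = diags.Append(invalidDependsOnDiag(ref)) // hypothetical
    }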
Previously we would attempt to DynamicExpand during the validate walk and
then validate each expanded instance separately. However, this meant that
we would not be able to validate the contents of a block where count = 0
or if count is not yet known.
Here we instead do a more static validation pass against the resource
configuration itself, setting count.index to cty.UnknownVal(cty.Number) so
we can type-check everything inside the block as being correct regardless
of the final count.
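A sketch of the evaluation data used for that static pass:

    // Type-check the body for every possible count by making the index
    // itself unknown.
    keyData := InstanceKeyEvalData{
        CountIndex: cty.UnknownVal(cty.Number),
    }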
This is another step towards repairing the "validate" command for our
changed assumptions in a world where we have a more sophisticated type
checker.
This doesn't yet address the remaining problem that the expression
evaluator can't, with the current state structures, distinguish between
a completed resource with count = 0 and a resource that doesn't exist
at all (during validate), and so we'll still get errors if an expression
elsewhere in configuration refers to a dynamic index of a resource with
"count" set. That's a pre-existing condition that's no longer being masked
by _this_ problem, but can't be addressed until we've introduced the new
state types (states.State, etc) and thus we _can_ distinguish these two
situations. That will therefore be addressed in a later commit.
Previously we had the evaluate methods accept directly an
addrs.InstanceKey and had our evaluator infer a suitable value for
count.index for it, but that prevents us from setting the index to be
unknown in the validation scenario where we may not be able to predict
the number of instances yet but we still want to be able to check that
the configuration block is type-safe for all possible count values.
To achieve this, we separate the concern of deciding on a value for
count.index from the concern of evaluating it, which then allows for
other implementations of this in future. For the purpose of this commit
there is no change in behavior, with the count.index value being populated
whenever the instance key is a number.
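A sketch of the decision function, now separate from evaluation:

    // EvalDataForInstanceKey decides what count.index should be for a
    // given instance key; other implementations can substitute e.g. an
    // unknown index during validate.
    func EvalDataForInstanceKey(key addrs.InstanceKey) InstanceKeyEvalData {
        countIdx := cty.NilVal
        if intKey, ok := key.(addrs.IntKey); ok {
            countIdx = cty.NumberIntVal(int64(intKey))
        }
        return InstanceKeyEvalData{CountIndex: countIdx}
    }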
This commit does a little more groundwork for the future implementation
of the for_each feature (which'll support each.key and each.value) but
still doesn't yet implement it, leaving it just stubbed out for the
moment.
Since schemas are required to interpret provider, resource, and
provisioner attributes in configs, states, and plans, these helpers intend
to make it easier to gather up the necessary provider types in order
to preload all of the needed schemas before beginning further processing.
Config.ProviderTypes returns directly the list of provider types, since
at this level further detail is not useful: we've not yet run the
provider allocation algorithm, and so the only thing we can reliably
extract here is provider types themselves.
State.ProviderAddrs and Plan.ProviderAddrs each return a list of
absolute provider addresses, which can then be turned into a list of
provider types using the new helper providers.AddressedTypesAbs.
Since we're already using configs.Config throughout core, this also
updates the terraform.LoadSchemas helper to use Config.ProviderTypes
to find the necessary providers, rather than implementing its own
discovery logic. states.State is not yet plumbed in, so we cannot yet
use State.ProviderAddrs to deal with the state but there's a TODO comment
to remind us to update that in a later commit when we swap out
terraform.State for states.State.
A later commit will probably refactor this further so that we can easily
obtain schema for the providers needed to interpret a plan too, but that
is deferred here because further work is required to make core work with
the new plan types first. At that point, terraform.LoadSchemas may become
providers.LoadSchemas with a different interface that just accepts lists
of provider and provisioner names that have been gathered by the caller
using these new helpers.
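A sketch of how these helpers are meant to combine once states.State is
plumbed in:

    // Gather every provider type we might need schema for...
    types := config.ProviderTypes()
    types = append(types, providers.AddressedTypesAbs(state.ProviderAddrs())...)
    // ...then dedupe and fetch one schema per distinct type before any
    // graph building begins.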
I updated the "Variables" map incorrectly in earlier commit 10fe50bbdb
while making bulk updates to get the tests compiling again with the
changed underlying APIs.
The original value here was "bar", incorrectly changed to "foo" in that
commit. Here we return it back to "bar".
We only support provider input for the root module. This is already
checked in ProviderInput, but was not checked in SetProviderInput. We
can't actually do anything particularly clever with an invalid call here,
but we will at least generate a WARN log to help with debugging.
Also need to update TestBuiltinEvalContextProviderInput to expect this
new behavior of ignoring input for non-root modules.
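A sketch of the guard, assuming the eval context exposes its module
path:

    func (ctx *BuiltinEvalContext) SetProviderInput(pc addrs.ProviderConfig, c map[string]cty.Value) {
        if !ctx.Path().IsRoot() {
            // Nothing clever we can do with an invalid call; just log it.
            log.Printf("[WARN] attempt to set provider input in non-root module %s", ctx.Path())
            return
        }
        // ...record the input values for the root module as before...
    }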
The prior commit changed the schema-access model so that all schemas are
fetched up front during context creation and are then readily available
for use throughout graph building and evaluation.
As a result, we no longer need to create dependency edges to a provider
when one of its resources is referenced by another node, and so the
ProviderTransformer needs only to worry about direct ownership
dependencies.
This also avoids the need for us to run AttachSchemaTransformer twice,
since ProviderTransformer no longer needs schema and we can therefore
defer attaching until just before ReferenceTransformer, when all of the
referenceable and referencing nodes are already present in the graph.
We now fetch all of the necessary schemas during context creation, so we
can just thread that repository of schemas through into EvalContext and
Evaluator and access the schemas as needed without any further fetching.
This requires updating a few tests to have a valid Provider address in
their state objects, because we need that in order to trigger the loading
of the relevant schema.
This test depends on having a correct schema, so we'll specify the minimum
schema for its fixture inline here rather than using the superset schema
returned by testProvider.
Provider input is no longer handled with a graph walk, so the code
related to the input graph and walk is no longer needed.
For now the Input method is retained on the ResourceProvider interface,
but it will never be called. Subsequent work to revamp the provider API
will remove this method.
Add a graphNodeAttachDestroy interface, so destroy nodes can be attached
to their companion create node. The creator can then reference the
CreateBeforeDestroy status of the destroyer, determining if the current
state needs to be replaced or deposed.
This is needed when a node is forced to become CreateBeforeDestroy by a
dependency rather than the config: because the config is immutable,
only the destroyer is aware that it has been forced
CreateBeforeDestroy.
The earlier change 5f07201a made it so that the state is always rewritten
by EvalDiffDestroy, but that was too disruptive to other users of
EvalDiffDestroy.
Now we follow the lead of EvalDiff and have a separate pointer for the
_output_ state, which allows the caller to opt in to having its state
pointer updated to reflect the new (nil) state.
NodePlannableResourceInstanceOrphan is the only caller that currently opts
in to this, since that was the focus of 5f07201a. We may need to make a
similar change to other plannable resource destroy nodes, but we'll wait
to see if that needs to be done in a subsequent commit.
The TestApplyGraphBuilder_doubleCBD fixture was updated incorrectly with
a cycle in the desired output. The test passes once the expected string
is fixed.
Now that core has access to the provider configuration schema, our input
logic can be implemented entirely within Context.Input, removing the need
to execute a full graph walk to gather input.
This commit replaces the graph walk call with instead just visiting the
provider configurations (explicit and implied) in the root module, using
the schema to prompt.
The code to manage the input graph walk is not yet removed by this commit,
and will be cleaned up in a subsequent commit once we've made sure there
aren't any other callers/tests depending on parts of it.
It was incorrect to use a type switch to detect the optional schema
attachment interfaces, because they are not mutually-exclusive: resource
nodes implement both GraphNodeAttachResourceSchema and
GraphNodeAttachProvisionerSchema.
This fixes a number of test regressions around dependency analysis in
"provisioner" blocks.
In #14526 we fixed a sticky edge-case where a resource with count = 0 set
won't create its containing module state on apply, and thus when another
expression refers to it we need to deal with that absence.
The original bug fixed by #14526 was actually a nil dereference panic in
this case. Our new HCL2-oriented expression evaluation codepath was, on
the other hand, correctly checking for the nil, but was not taking the
correct action in response to it, leading to the result being an
unexpected unknown value.
Here we replicate the fix to #14526 by behaving as if there are just no
instances present in this case. We achieve this in a slightly different
way here by just creating an empty ModuleState, but the effect is the
same as #14526.
This fixes TestContext2Apply_multiVarMissingState.
While we're planning we must always update the state with the proposed new
data resulting from the plan. In this case, we must record that the
orphan instance doesn't exist at all in the proposed new state by storing
its state as nil.
This in turn allows references to the containing resource to evaluate
properly, using the new updated resource count. This fixes
TestContext2Apply_multiVarCountDec.
This also includes a number of changes to the test output of
TestContext2Apply_multiVarCountDec that make it easier to debug failures.
Both ProviderTransformer and ReferenceTransformer need schema information,
and so there's a chicken-and-egg problem here where previously the schemas
were not getting attached to provider nodes created during
ProviderTransformer.
As a stop-gap measure for now we'll just run AttachSchemaTransformer
twice, so we can catch any new nodes created during the provider
transforms.
Previously we fetched schemas during the AttachSchemaTransformer,
potentially multiple times as that was re-run for each graph built. Now
we fetch the schemas just once during context construction, passing that
result into each of the graph builders.
This only addresses the schema accesses during graph construction. We're
still separately loading schemas during the main walk for evaluation
purposes. This will be addressed in a later commit.
An aliased provider should not be automatically inherited, nor
implicitly instantiated in a module. This test should not have
previously passed.
Add a proxy provider block to the module and update the provider to
match the schema.
The state after EvalReadDataDiff is no longer nil during plan, which
means that we can't use that as a proxy for requiring the diff.
Rather than exiting early to save the EvalWriteState and EvalWriteDiff
evaluations, continue normally regardless to ensure we have the latest
diff and state after the plan. This also aligns the data source
handling with that of managed resources.
The Provider field in ResourceState is now required, whereas before it
could be omitted and have Terraform try to discover a fitting provider
configuration automatically.
The automatic behavior was a compatibility shim added in v0.11 to support
states from prior versions without an explicit migration, but for v0.12
we will have a migration to our new state format anyway and so we will
fix this up during that migration pass.
This comprehensive test was covering a few different behaviors that are
intentionally different for v0.12:
- Applying the splat operator to a list of resource instances that haven't
been created yet produces a list of unknown values rather than a single
unknown list as before. This is important because it allows that list
to be passed into length().
- Wrapping a splat expression in another round of brackets now produces
a list of lists, whereas before we had a special case (for compatibility
with prior to v0.10) that would flatten this away in the schema layer.
Previously we would just retain an empty InstanceState in this case, but
now that we must enumerate all of the available instances during
expression evaluation it's important that we be able to recognize
instances that have been deleted.
Because we currently rely on the ReferenceTransformer to introduce the
necessary edges between local/output values and resource destroy nodes, we
must include the destroy phase of any resource we depend on in the
references of these.
This works in conjunction with the changes in the prior commit to restore
correct handling of dependencies for local and output values during
destroy.
With the current design, several seemingly-separate parts of the code must
all coincidentally agree with one another for destroy edges to be created
properly, which makes this code very hard to maintain. In future we should
refactor this so that ReferenceTransformer doesn't create edges for
destroy nodes at all, and have _all_ destroy edges (including
create_before_destroy) be dealt with in the single DestroyEdgeTransformer,
where they can be maintained and unit tested together.
Prior to the introduction of our "addrs" package, we represented destroy
nodes as a special kind of address string ending in ".destroy" or
".destroy-cbd".
Using references to resolve these dependencies is a strange idea to begin
with, since these are not user-visible addresses, but rather than refactor
that now we instead have these weird pseudo-address types ResourcePhase
and ResourceInstancePhase that correspond to those weird address suffixes,
thus restoring the prior behavior.
In future we should rework this so that destroy node edges are not handled
as references at all, and instead handled as part of
DestroyEdgeTransformer where there's better context for implementing this
logic and it can be maintained and tested in a single place.
The old testApplyFn would overwrite ID with "foo" in all cases where
there wasn't a diff, which made the test fixtures harder to reason
about. If there's an ID, keep it the same.
The initial destroyer map is constructed using DestroyAddr(), which
returns resource instance addresses, but we were then going on to _use_
that map with resource addresses, which means the keys can't match when
indexed instances are being destroyed.
Now we'll use resource instance addresses in all cases.
This also includes some additional logging that was helpful in debugging
this issue.
The adaptation of ModuleState.RemovedOutputs for the new config types
was incorrect because it took the absence of any output map as "nothing to
do", rather than "everything has been removed" as expected.
Now it treats a nil map like an empty map, detecting _all_ of the outputs
as having been removed if the output map is nil.
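A sketch of the fix, with the declared outputs map possibly nil:

    declared := declaredOutputs // nil when every output was removed
    if declared == nil {
        declared = map[string]*configs.Output{}
    }
    for name := range ms.Outputs {
        if _, ok := declared[name]; !ok {
            removed = append(removed, name) // all of them, if nil above
        }
    }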
This is temporarily broken until we implement the new plan file format,
since terraform.Plan is no longer serializable with gob. Rather than have
an error that seems like it needs immediate fixing, we'll be explicit
about it in the error message and focus our efforts on other test failures
for now, and return to implement the new file format later.
An earlier commit today reworked this to handle non-fatal errors, which
are returned "smuggled" as a special type of error to avoid changing the
EvalNode interface.
Unfortunately, that change then broke the _other_ special thing we smuggle
through the error return path: early exit.
Now we'll handle them both. This is not perfect because the early-exit
path causes us to discard any warnings we've already collected, but it's
more important that we bail early than retain warnings.
We previously added a special case for dealing with references to
instances in the plan graph where there are only resource nodes. However,
this was too general a fix and so it upset the handling of graphs where
instances _are_ present.
Now we'll do that fallback behavior only if there is no instance node in
the graph already, so the exact matching behavior will be used in graphs
where the instances are present.
The provider transforms now depend on analyzing references in order to
properly create provider edges, and so we need to now insert all of the
nodes that can have references and attach schemas before we run
TransformProviders.
This was done for the main graph builders in a previous commit, but as
usual we missed this surprising hidden graph builder that lives inside
a graph transformer. 🙄
Due to the need for schema in order to resolve references in expressions,
we now create additional provider dependency edges when a node refers to
an attribute from a resource.
During import we constrain provider configuration to allow only references
to variables, but since provider configurations in child modules might
refer to variables from the parent, we still need to include the module
variables, outputs and locals in the graph here and attach the provider
schemas.
In future a better check would be that the provider configuration doesn't
refer to anything that is currently unknown, but we'll save that for
another day.
The previous wording of this message was a little awkward, and a little
confusing due to the mention of it being a non-existing "resource", when
elsewhere in our output we use that noun to refer to the configuration
construct rather than the remote object.
Here we rework it as a diagnostic message, and while here also include an
extra note about a common problem of using an id from a different region
than the provider is configured for, to help the user realize what is
wrong in that case.
The previous commit rewrote this incorrectly because the fatal message
made it seem like it was failing when an error occurs, but an error is
actually expected here.
Also includes a more detailed error message for this case, consistent with
our new diagnostics style.
ctx.Import now returns tfdiags.Diagnostics rather than "error", so these
tests need to now expect that API for proper behavior.
Several of these tests are still failing for other reasons. That will be
addressed in subsequent commits.
To avoid a massively-disruptive change to how EvalNode works, we're now
"smuggling" warnings through the error return value for these, but this
depends on all of the Eval machinery correctly handling this special case
and continuing evaluation when only warnings are returned.
Previous changes missed EvalSequence as a place where execution halts on
error. Now it will accumulate diagnostics itself, aborting if any of
them are error diagnostics, and then wrap its own result up in an error
to be returned by the main Eval function, which already treats non-fatal
errors as a special case, though now produces an explicit log message
about that situation to make it easier to spot in trace logs.
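A sketch of that accumulate-and-continue behavior in EvalSequence:

    var diags tfdiags.Diagnostics
    for _, child := range n.Nodes {
        if child == nil {
            continue
        }
        _, err := EvalRaw(child, ctx)
        diags = diags.Append(err) // might hold only warnings
        if diags.HasErrors() {
            break // abort only on true errors
        }
    }
    // Non-nil only if there's something to report; the main Eval function
    // unpacks any warnings from this special error value.
    return nil, diags.ErrWithWarnings()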
This also includes a more detailed warning message for the warning about
provider input being disabled. While this warning should be removed before
we release anyway, having this additional detail is helpful in debugging
tests where it's being returned.
Since outputs now rely on providers in order to ensure that a schema is
available for evaluation, we need to exclude providers from checking
TargetDownstream.
A provider's schema is the same regardless of its address in the
config. Key them by type so that an evaluation referencing a provider
from an address not included in the graph can still find the schema.
We no longer have this merge behavior, because it is inconsistent with how
variables behave in all other contexts and similar behavior can now be
achieved by merging the user's input with a predefined map in a local
value expression.
Previously we'd create the stub provider in any case where we didn't need
a configured provider, but we also need to skip creating it if there's
already a provider node present, or else we can end up with multiple
stub nodes in the graph.
Since ProviderTransformer now needs the schema in order to infer indirect
references to providers, we must run AttachSchemaTransformer before the
provider transformers in order to calculate the correct ordering of
operations.
The provider schema cache is keyed by provider configuration address
rather than provider type, so we need to do the same inheritance logic
to resolve providers needed because of reference as we do for providers
needed for direct use.
This allows resources that override "provider" or resources in child
modules that have their own provider configurations to be associated
with the provider config they will eventually get schema from, rather
than (as before) always the default configuration for the provider in
the root module.
Eventually it'd probably be better to switch to using a provider cache
that is keyed by provider _type_ rather than provider config, but since
it's currently fetched by visiting the individual provider graph nodes
we currently visit each provider configuration separately and fetch a
schema for each.
Any non-resource (outputs, variables, locals) that references a resource
type must also be connected to that resource's provider. This is required
during apply, because the graph built from the diff may not include the
referenced resources because they are being evaluated from the state.
If the provider isn't present already, add a NodeEvalableProvider to
fetch the provider schema.
The provider transformers now need to happen after the outputs, locals,
and variables are transformed.
This test seems to have been buggy before our current work, with the test
fixture containing a reference to a resource that doesn't exist.
This both fixes the fixture and adds a mock schema for it, though this
just revealed another error which isn't fixed here, where the a_ids value
seems to come through as unknown after apply. That will be fixed in a
subsequent commit.
We no longer support using "self.count" in a provisioner to access the
resolved count meta-argument value of the associated resource.
This was only possible before because of a special exception in how
Terraform resolved variables, and in new HCL that exception isn't possible
because resource instances are real values in the scope and we don't want
to add this implied "count" attribute to all of them.
"count" is a property of the resource config rather than of the resource
instances, and since "self" is a resource _instance_ it doesn't make sense
to expose it there.
There is no replacement for this feature. In the rare case where it is
needed, the user must factor the count out into a named local value and
refer to that both in the count meta-argument and in the provisioner.
Most of these changes are just adding schema to describe the expectations
of the existing test fixtures. However, some of them require the fixtures
themselves to be changed due to changing assumptions in the language.
This was previously an apply-time failure due to our inability to
type-check unknowns in 0.11, but we now retain type information for
unknown values and so this check now fails during plan instead.
The fixtures for this test assume some atypical arguments to the resources
and also need a provisioner schema.
This doesn't actually fix the test, but by fixing the schema/fixture this
exposes a problem that seems to exist in the main code, which will be
fixed in a subsequent commit.
On the initial pass here I reached a faulty conclusion, based on a poor
read of the existing code, about what from the new world should shim
into a NewInstanceInfo.
It actually _should've_ been based on an absolute instance after all,
as evidenced by the expected result of TestContext2Refresh_targetedCount.
Therefore the signature is changed here, and all of the callers (which,
in retrospect, were all holding a full instance address anyway!) are
updated to that new signature.
References can't be connected directly to the instances, because the
resources are not yet expanded when ReferenceTransformer is run. Look up
references by the resource type instead.
Although there isn't really a good reason why there should be no schema
in practice, it's better for us not to crash right now while we're still
updating all of the callers (mostly tests) to make schema available.
This was trying to test gathering input from a default and an alias
provider configuration, but it was incorrectly setting the provider ref
on the instance that was supposed to belong to the alias. This was working
before because Terraform would gather input from the aliased provider
anyway, but now the invalid "alias" argument in the resource is producing
a validation error.
This doesn't actually make the test work again yet, because we still have
provider input disabled at this time, pending a forthcoming change to
how provider input is handled.
This test was previously not setting InputModeVarUnset, causing us to
overwrite the "amis" map that _is_ already set. This worked before because
we used to treat the empty result as an empty map and then merge it with
the given value, but since we no longer do that merging behavior we were
ending up with an empty map after input.
Since the intent of this test is to see that the "foo" variable gets
populated by input, here we add InputModeVarUnset which then matches how
the input walk is triggered by the "real" codepath in the local backend.
This also includes some updates to make the test fixture v0.12-idiomatic
(applied after it was seen to work with the old fixture) and to properly
handle the "diags" return value from the various context methods.
This attribute is referenced in order to include a computed value into
another resource, and so it must be present in the schema so that it can
be properly resolved.
This requires making the "components" object available to the resource
node so it can be used during DynamicExpand. It also involved splitting
the provisioner schema attachment into a separate interface from
GraphNodeProvisionerConsumer so that it can now be handled within
AttachSchemaTransformer, along with all of the other schema attachment
steps.
The initial rework of this function to support traversals didn't correctly
handle the "all" case, due to a logic error where the ignoreAll branch
could be visited only if ignoreChanges were non-empty, but yet the two
are mutually exclusive in practice.
Now we process ignoreAll separately from ignoreChanges, and invert the
two loops so that we will visit all attributes regardless of what is
in the ignoreChanges slice.
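A sketch of the corrected structure, with the two cases fully separate
(the path helpers here are hypothetical):

    if ignoreAll {
        // ignore_changes = all: keep the prior value wholesale.
        return priorVal, nil
    }
    // Otherwise visit every changed attribute, reverting only listed ones.
    for _, changed := range changedPaths {
        for _, ignored := range ignoreChanges {
            if pathHasPrefix(changed, ignored) { // hypothetical prefix test
                revertToPrior(changed) // hypothetical
            }
        }
    }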
Prior to our v0.12 changes this test was confusingly using an attribute
named "set", but assigning a map to it. The expected test result suggested
that it was actually expecting legacy HCL's weird interpretation of a
single map as a list of maps, and so to retain the intent of the test here
(in spite of the contrary name) we type "set" as list of map of string,
update the fixture to _actually_ be a list of maps, and then we get the
expected test result.
Update test fixtures to work in our new world.
This is mostly changing out attribute names for those in the schema,
adding Providers to states, and updating the test-fixture
configurations.
Previously this test's fixture was depending on the fact that attribute
access of an unknown value would always succeed and return another unknown
value, but under the new language interpreter an unknown value still
retains type information and so accessing this "bar" attribute would fail
the semantic check.
We also have to fuss a bit here to work around the limitations of the
testDiffFn implementation, which doesn't have enough context to understand
that "list" in the ResourceConfig is the same as "list.#" in its result.
Since this part of the provider API will change soon to use cty values
directly, this change just accepts a slightly-odd-looking diff in the mean
time, with both "list" and "list.#" populated.
Previously we only handled the "count cannot be computed" check during
validate, leaving other walks to just report "a number is required"
(because "unknown" was represented as a special string) but now we have
unknown as first-class we handle it during all walks, and so this error
message is now the more appropriate one saying that the value is not
yet known.
The behavior of the "count" meta-argument has changed so that its presence
(rather than its value) chooses whether the associated resource has
indexes. As a consequence, these tests which set count = 1 now produce
a single indexed instance with index 0.
Previously Terraform Core was unaware of the structure of a resource type
schema and so the strange behavior of our testDiffFn caused some
attributes to not appear at all in the result. With core now more aware,
it "fills in" these missing items before calling, and as a result they
now appear in the diff even with the testDiffFn.
In real code, where helper/schema is constructing diffs, this situation
doesn't arise because the framework always produces schema-compatible
diffs.
The diff stringer now uses the standard serialization of a module address,
so we need to update the golden representations to restore their
associated tests to passing.
The old testDiffFn used the raw config to dynamically set computed values
in the diff. Since the schema now defines what values should be there,
all test diffs end up with unknown computed values. Filter these out by
looking for a value set to "compute".
This was assuming our old practice of a slice starting with the string
"root". We'll normalize here and then stringify the result to ensure that
we get a string consistent with what's used elsewhere.
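A sketch of the normalization, assuming the legacy []string form with
its leading "root" element:

    legacy := []string{"root", "child", "grandchild"}
    addr := addrs.RootModuleInstance
    for _, name := range legacy[1:] { // skip the "root" marker
        addr = addr.Child(name, addrs.NoKey)
    }
    key := addr.String() // "module.child.module.grandchild"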
This is primarily aimed at fixing some of the context plan tests.
Most changes here just introduce some custom schema into the test mocks.
In some cases, a fixture is lightly updated to more modern assumptions.
The test for accessing count.index in a resource block without count set
is removed, because that is no longer valid under the new language
implementation.
This problem should now be caught at validate time rather than plan time,
because we can use the schema to detect the problem before the resource
has been resolved.
The evaluate data source was using a guessed provider configuration
address from configuration in this case, but that isn't necessarily
correct since the resource might actually be associated with a config
inherited from a parent module.
We still need to retain that fallback to config because we are sometimes
asked to evaluate when state is incomplete (like in "terraform console"),
but if possible we'll take the stored provider address from the state
and use that, even if the resource is otherwise "pending".
Destroy nodes were being referenced by their regular paths, which was
causing cycles in the graphs. Destroy nodes can't be referenced directly
in any way, so override the inherited method for a referenceable address.
The id field is always computed, and never directly related to a diff.
Since that field is being converted to a regular schema attribute, we
need to handle the behavior in the mocks too.
Mostly this is about updating ctx.Plan callers to expect diags instead of
err, but also includes a few light updates to test fixtures, and a fix to
testModuleInline.
There are still some other issues with some of these tests right now, but
all the ones that need to have schema should now have it.
It seems that there is a bug with the evaluation of child module input
variables where they can't find their schema even when a mock is provided.
Will attack this in a subsequent commit.
This should actually have been caught by !val.IsWhollyKnown, since
DynamicVal is always unknown, but something isn't working quite right here
and so for now we'll redundantly check also if it's of the dynamic
pseudo-type, and then revisit cty later to see if there's a real bug
hiding down there.
The recent changes for v0.12 have moved the responsibilities around here
so that it's the caller's responsibility to specify the provider address
in all cases, with the real UI (in the "command" package) providing a
suitable default if nothing is specified.
Therefore the tests at _this_ level must all include an explicit provider
address, since these tests are acting as if they _are_ the UI code.
The approach here is a little hacky, since this edge case applies only to
validate and all of the other evaluateResourceCountExpression callers
don't care about it: we overload the "count" return value as a flag that
allows NodeValidatableResource to detect this situation and
silently ignore errors in this case.
We were previously doing this for all of the reference types except this
one. Now we do it for resources and resource instances too, which both
allows us to produce a proper error message when one is missing (rather
than returning an unknown value) and allows us to properly handle the
case where there are no instances yet present in the state (e.g. because
we're in the validate walk) but "count" isn't set, and so a single
unknown value is expected rather than an empty tuple.
Previously InitProvider was incorrectly using only the relative address,
which (due to the ambiguity in the string representation of absolute vs.
relative addresses) caused it to always initialize providers in the root
module.
Now we use the absolute address as the key, which then agrees with the
Provider method and ensures that each module gets its own separate
instance of each provider if explicit configuration is present.
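A sketch of the fix inside InitProvider, assuming a cache map keyed by
string:

    // Relative addresses are ambiguous across modules; key the cache by
    // the absolute address so it agrees with the Provider method.
    absAddr := addr.Absolute(ctx.Path())
    key := absAddr.String()
    ctx.ProviderCache[key] = provider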
This should never happen in real code, but it comes up a lot in test code
where incomplete mock schemas are being used to test with very simple
configurations.
Previously we were skipping all of the validation steps if a provider was
being configured implicitly, and thus had no block in configuration.
This is incorrect, since a provider must still get an opportunity to
configure itself with an empty configuration and possibly reject that
empty configuration with errors.
EvalValidateSelfRef needs schema in order to extract references. It was
previously expecting a *configschema.Block directly, but we weren't
actually passing that in from anywhere except the tests because it's not
available directly in that form during the evaltree for
node_resource_validate.
Instead, we now pass in the whole *ProviderSchema for the associated
provider and have this EvalNode find the schema itself based on the
address. This breaks some of the generality of this node (now only really
works for resource addresses) but that's okay since we have no other
use-case right now anyway.
Our old state format requires all primitive values to be strings. We were
trying to enforce that before, but this didn't work properly because
gocty does not perform automatic type conversions.
Instead, we now convert to string first and then convert the result into
a native Go string afterwards.
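A sketch of the two-step conversion using cty's convert package
(null/unknown handling elided):

    sv, err := convert.Convert(val, cty.String) // e.g. number/bool -> string
    if err != nil {
        return fmt.Errorf("unsupported primitive value: %s", err)
    }
    s := sv.AsString() // native Go string for the old state format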
At the moment this must be handled as a special case because we're still
using the old representation of output state, but we do still need to
handle this so that unknown values can properly pass between modules
during validate and plan.
Since our ignoring is now implemented in terms of cty objects and HCL
traversals, rather than flatmap keys, it no longer makes sense to test
for the flatmap detail of keeping the map count in a separate key.
It _does_ make sense to ignore an entire block or map, but that's already
covered by an existing test for just []string{"resource"} above.
This was incorrectly comparing a cty.Value to an hcl.Body. Now we decode
the body first so we can compare two cty.Values.
Also includes a fix to a stale comment in buildProviderConfig that was no
longer accurate.
The interface of this eval node has changed for v0.12, now requiring both
a provider address and the actual provider object.
We also need to give it a working ctx.EvalBlock implementation on the
mock EvalContext, so we just use installSimpleEval here to get our simple
implementation that just knows how to evaluate constant expressions.
An instance like aws_instance.foo[0] is not permitted to refer to
aws_instance.foo, since that result contains the individual instance along
with all other instances.
A schema is now required for any validation, so these tests now use the
simpleMockProvider function to produce a provider with a simple schema
already configured, and test against that schema.
EvaluateBlock and EvaluateExpr are often called multiple times in a single
operation, and our usual approach of mocking with static values is a poor
fit for those cases.
To accommodate more complex tests, we allow the test to optionally provide
a callback function to use instead of the static return values.
Since a pretty common need is to just evaluate the given block or
expression in a simple way, we also now have a helper method
installSimpleEval that installs reasonable implementations of these
two methods that can (assuming no other customization) just evaluate
constant expressions, which is sufficient for many tests.
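A sketch of the mock shape, with the callback taking precedence over
the static result (signature simplified):

    type MockEvalContext struct {
        EvaluateExprResult cty.Value
        EvaluateExprFunc   func(expr hcl.Expression, wantType cty.Type) (cty.Value, tfdiags.Diagnostics)
        // ...other mock fields...
    }

    func (c *MockEvalContext) EvaluateExpr(expr hcl.Expression, wantType cty.Type) (cty.Value, tfdiags.Diagnostics) {
        if c.EvaluateExprFunc != nil {
            return c.EvaluateExprFunc(expr, wantType)
        }
        return c.EvaluateExprResult, nil
    }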