The error handling here is a bit tricky because users can opt out of
aborting on error. It's important to keep the distinction between
applyDiags and diags straight so we can tell the difference between the
errors from _this_ provisioner and the errors from the entire run so
far.
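As a minimal sketch of that distinction (the surrounding names here are
illustrative, not the actual implementation):

    applyDiags := applyProvisioner(ctx, provisionerConfig)
    diags = diags.Append(applyDiags)
    if applyDiags.HasErrors() && !ignoreErrors {
        // Abort based only on _this_ provisioner's errors; diags may
        // already contain errors from earlier in the run that the user
        // chose to tolerate.
        return diags
    }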
In our old world we always used 1-based indices into a slice of deposed
objects. The new models instead use a map keyed by pseudorandom strings,
so that deposed objects will have a consistent identity across multiple
operations.
However, having that pseudorandom string in our test comparison output
is not helpful, since such strings can never match hard-coded expectation
strings. Therefore, for the purposes of generating this test comparison
output, we'll revert to using 1-based indices.
This should avoid problems for tests that only create one deposed object
per instance, but those which create more than one will need to do some
more work since the _ordering_ of these objects in the output is still
pseudorandom as a result of it coming from a map rather than a slice.
The expected output string for this test assumes a couple of computed
attributes that were not declared in schema. This didn't matter before
because the provider output was not previously subject to schema-based
interpretation, but now our shims to the old provider API rely on the
schema to convert the returned data, and so any unexpected attributes
are filtered out.
The diffs created by testDiffFn use the flatmap package directly, rather
than running the diff-generation logic in helper/schema. It turns out that
flatmap itself can generate the k+".#" keys used to indicate the length
of a list, but only helper/schema itself knew how to generate the
corresponding k+".%" keys used for maps.
Rather than modifying the now-deprecated flatmap code directly (and risk
breaking assumptions in shims elsewhere), here we just synthesize the
extra required map element within the testDiffFn implementation, which
then in turn allows the MockProvider diffFn shim to correctly recognize
it as a map and convert it into a real cty.Map value to return.
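As a rough sketch of that workaround (the attribute name and surrounding
code are illustrative), testDiffFn now records the map's element count
itself via the legacy ResourceAttrDiff type:

    // Synthesize the k+".%" count entry that flatmap alone won't
    // produce, so the shim can recognize this attribute as a map.
    diff.Attributes["tags.%"] = &ResourceAttrDiff{
        Old: "",
        New: strconv.Itoa(len(tags)),
    }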
As previously mentioned in a comment here, versions of Terraform prior
to 0.12 would store references to module outputs in the state as
references to the module as a whole. It's not really clear why, but we
wanted to preserve this behavior for 0.12.
The previous implementation actually failed to do so, in spite of the
comment, so this commit fixes it to actually do what the comment
originally claimed.
In a later release we might remove this special case and just depend
directly on outputs where possible, since that'd allow us to produce a
more precise dependency graph for destroy actions, but when we do that
we'll first need to confirm that there isn't a good reason for the
original exception here.
The state only deals in wholly-known values, so here we null out any
unknowns for storage in state. This is okay because we subsequently write
the original, possibly-unknown value into the plan and the expression
evaluator will prefer to use this if present, allowing the unknown values
to properly propagate into other expressions in the calling module.
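A sketch of that nulling step, assuming the cty package's helper for it:

    // Unknown values cannot be stored in state, so replace them with
    // nulls; the plan retains the original, possibly-unknown value.
    stateVal := cty.UnknownAsNull(plannedVal)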
These need their output strings updated for the new behavior that all
resource instances recorded in state have an associated provider
configuration, whereas before we recorded one only for non-default
configurations.
Only the count and for_each expressions are evaluated by this node type,
so it doesn't need to declare dependencies for any other refs in the
configuration body. Using this more refined set of dependencies means
we can avoid graph cycles in the case where one instance of a resource
refers to another instance of the same resource.
We'll still get cycles if count or for_each self-references, but that's
forbidden anyway (and caught during validate), which makes sense because
those two are whole-resource-level config rather than per-instance
config.
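A sketch of gathering that narrower dependency set, assuming the
ReferencesInExpr helper from the lang package (diagnostics elided):

    // Only the count and for_each expressions contribute dependencies;
    // refs elsewhere in the configuration body are deliberately ignored.
    countRefs, _ := lang.ReferencesInExpr(n.Config.Count)
    forEachRefs, _ := lang.ReferencesInExpr(n.Config.ForEach)
    refs := append(countRefs, forEachRefs...)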
The underlying References function includes duplicates and returns refs
in the order they appeared in source (approximately), but after we reduce
to just the raw addresses it's better to dedupe and return in a
predictable order.
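A minimal sketch of that reduction, operating on the already-reduced
address strings (names are illustrative):

    func dedupeAndSort(addrs []string) []string {
        seen := make(map[string]struct{}, len(addrs))
        ret := make([]string, 0, len(addrs))
        for _, addr := range addrs {
            if _, ok := seen[addr]; ok {
                continue // drop duplicates from the raw reference list
            }
            seen[addr] = struct{}{}
            ret = append(ret, addr)
        }
        sort.Strings(ret) // predictable order regardless of source order
        return ret
    }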
An earlier update to make this not use info.HumanId selected the wrong
fake "ami" name in the branch here.
Also, the error message for this failure was terrible. :(
This is computed in the special case where compute = "unknown" in order
to force inclusion of an unknown value into the ultimate result, which
is invalid.
This fixes TestContext2Apply_unknownAttribute, which intends to test
this error handling behavior.
Previously we kept the dependencies one level higher on the resource
instance itself, which meant that updating it was handled in a different
EvalNode, but now we consider these to be dependencies of the object
itself (derived from the configuration that was current at the time it
was created), so we must handle this during EvalApply.
The subtle difference here is that if an object is moved to "deposed"
during a create_before_destroy replace then it will retain the
dependencies it had on its last apply, rather than them being replaced
by the dependencies of the newly-created object.
We now treat states.ResourceInstanceObject values as immutable once
constructed, preferring to replace them completely rather than update them
in-place to avoid weird race conditions.
Therefore EvalRefresh must copy the state it is given before mutating
its Value field to reflect the updated value from the provider.
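A sketch of the copy-before-mutate rule, assuming the DeepCopy helper
on states.ResourceInstanceObject:

    // Never mutate the object we were given; other code may hold it.
    newState := state.DeepCopy()
    newState.Value = refreshedVal // the provider's updated value
    return newState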
Some earlier updates to it changed some things in our expected state
string. This doesn't fully fix it since there seems to still be a bug
related to recording dependencies.
This method is now removed, because our shims to the old provider API
(which used InstanceInfo) now populate only the Type attribute and so
HumanId would just generate garbage results anyway.
Our shims from the new provider API to the old can't populate the
InstanceInfo fully, since the new API includes only the type name, and
so anyone depending on this method is now broken anyway.
In practice only our own tests depend on this, and so we'll drop it to
make it explicit that it no longer works (rather than having it return
nonsense) and then fix up the remaining tests that were depending on it
to use a different strategy.
This test was relying on the fact that we used to expose the full resource
instance address to providers via the InstanceInfo value, but we no longer
do that (since in practice no "real" providers depended on it, nor should
depend on it) so we need to instead include in the config itself a key
to use for tracking each resource instance for later test assertions.
InstanceInfo.HumanId() is no longer functional, since our shim from the
new to the old provider API doesn't populate it. Therefore we must use
other means to distinguish the two instances here, and we'll use the
"ami" attribute value to do so.
This test was depending on InstanceInfo.HumanId, which is not something
any real providers use and therefore not something our shims from the
new to the old provider API support.
Instead, we'll give each of the instances a different id and use that
to distinguish them for tracking apply order.
In the old protocol, returning a nil InstanceState was a way to indicate
that the object had been deleted. In the new world we signal that with
an actual object that contains a null value, which Terraform Core itself
will then recognize and turn into a nil state, eventually removing the
entry from state altogether.
If the plan called for us to delete but the result isn't null then that's
suspect, because it suggests the object wasn't deleted after all.
Likewise, no other apply action should cause the result to be missing.
In order to avoid the confusing user experience that results in this case
(since it often looks like Terraform did nothing at all) we'll produce
some errors about it, but still update the state to reflect what the
provider returned anyway to allow for debugging and recovery.
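A hedged sketch of those checks (names are illustrative and the error
wording is not the real implementation's):

    if change.Action == plans.Delete && !newVal.IsNull() {
        diags = diags.Append(fmt.Errorf(
            "provider returned a non-null object after deleting %s", addr,
        ))
    }
    if change.Action != plans.Delete && newVal.IsNull() {
        diags = diags.Append(fmt.Errorf(
            "provider returned no object after applying %s", addr,
        ))
    }
    // Record newVal in state regardless, for debugging and recovery.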
Incorrect pointer discipline here was causing the error to be lost rather
than returned as expected.
Additionally we'll include a log line in this case because otherwise an
apply error is reported so far from the actual apply operation that it
can be difficult to understand what happened.
Previously we had a bug where we would fail to populate resource-level
metadata in the state during apply when count = 0, because the apply
graph would contain only instance nodes, not whole-resource nodes.
To address this, we add to the apply graph a node for each resource in
the configuration alongside the separate resource instance nodes. This
node's job is just to populate the state metadata for the resource, which
ensures it gets updated correctly even when count = 0.
When count is not zero this ends up doing some redundant work that
would've happened as a side-effect of applying individual resource
instances anyway, but it's harmless and makes the updating of our
resource-level metadata more explicit.
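A rough sketch of the new node's job (the method and field names here
are assumptions rather than the exact implementation):

    // Runs once per resource, even when count = 0 yields no instances.
    func (n *nodeApplyableResource) writeResourceState(ctx EvalContext) {
        state := ctx.State()
        state.SetResourceMeta(n.Addr, n.EachMode, n.ProviderAddr)
    }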
We're not yet showing outputs in the rendered diff, so it doesn't make
sense to count them for the purpose of deciding which change action
symbols to include in the legend.
Our state models cannot store unknown values (since state only deals with
knowns) and so following the lead of recent similar changes for resource
instances we'll treat the planned changeset as a sort of overlay on the
state, preferring values stored there if present, and then write in basic
planned output changes to the plan when we evaluate them.
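A sketch of that overlay lookup during evaluation (the accessor name is
illustrative):

    // Prefer the planned change's value, which may legally be unknown;
    // fall back to the state's value only if no change is recorded.
    if change := changes.GetOutputChange(addr); change != nil {
        return change.After
    }
    return stateVal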
We're abusing the plan model a little here: its current design is intended
to lay the groundwork for a future release where output values have a
full lifecycle similar to resource instances where we can properly track
changes during the plan phase, but the rest of Terraform isn't yet ready
for that and so we'll just retain an approximation of the planned action
by only using Create and Destroy actions.
A future release should change this so that output changes can be tracked
accurately using an approach similar to that of resource instances.
During the plan operation we need to retain _somewhere_ the planned
changes for all outputs so we can refer to them during expression
evaluation. For consistency with how we handle resource instance changes,
we'll keep them in the plan so we can properly retain unknown values,
which cannot be written to state.
As with output values in the state, only root output plans are retained
in a round-trip through the on-disk plan file format, but that's okay
because we can trivially re-calculate all of these during apply. We
include the _root_ outputs in the plan file only because they are
externally-visible side effects that ought to be included in any rendering
of the plan made from the plan file for user inspection.
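As a sketch, a planned output change might be recorded like this (the
field names reflect our reading of the plans package and should be
treated as assumptions):

    change := &plans.OutputChange{
        Addr:      addr, // addrs.AbsOutputValue for this output
        Sensitive: sensitive,
        Change: plans.Change{
            Action: plans.Create, // approximated, per the note above
            Before: cty.NullVal(cty.DynamicPseudoType),
            After:  plannedVal, // may be unknown, unlike state values
        },
    }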
We've intentionally changed the behavior for "count = 1" so that it'll
assign an index to the created instance even though there's only one. The
un-indexed behavior now applies only if count isn't set _at all_, thus
avoiding weird behavior if a count is _dynamically_ set to 1 via an
expression but is assumed to be a list elsewhere in configuration.
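A sketch of the resulting keying rule (the function shape is
illustrative):

    func instanceKeys(countExpr hcl.Expression, n int) []addrs.InstanceKey {
        if countExpr == nil {
            // count not set at all: a single un-indexed instance
            return []addrs.InstanceKey{addrs.NoKey}
        }
        keys := make([]addrs.InstanceKey, n)
        for i := range keys {
            keys[i] = addrs.IntKey(i) // indexed even when n == 1
        }
        return keys
    }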
We previously tried to take a shortcut for an empty diff, just returning
the given value directly. This is incorrect in the weird case where we're
creating a new instance but it has no attributes (and thus an empty diff)
because in that case we'd return the given null value, turning the result
into a no-op or destroy change.
To fix this, we just always do the work to construct a new value, even
if we might end up doing all this just to reconstruct the same value we
started with in some cases.
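A sketch of the fixed shape (simplified; the real logic also applies
the individual diff entries):

    func applyDiff(prior cty.Value, ty cty.Type) cty.Value {
        // Always build a fresh object: for a create, prior is null, and
        // returning it unchanged would read as a no-op or destroy.
        attrs := make(map[string]cty.Value, len(ty.AttributeTypes()))
        for name, aty := range ty.AttributeTypes() {
            if prior.IsNull() {
                attrs[name] = cty.NullVal(aty)
            } else {
                attrs[name] = prior.GetAttr(name)
            }
        }
        return cty.ObjectVal(attrs) // non-null even with zero attributes
    }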
This allows the provider to distinguish whether a particular value is set
in configuration or whether it's coming from prior state. It has no
particular purpose other than that.
We often want to bail out of a test if diagnostics are present, and
that's easiest to debug when the diagnostics are printed in a compact
but complete form, which is non-trivial to produce.
Rather than duplicating that diagnostic formatting in every test, these
helpers allow us to succinctly print diagnostics and bail out when they
are present.
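A minimal sketch of such a helper, assuming the tfdiags package (the
real helper names may differ):

    func assertNoDiagnostics(t *testing.T, diags tfdiags.Diagnostics) {
        t.Helper()
        if len(diags) == 0 {
            return
        }
        for _, diag := range diags {
            desc := diag.Description()
            t.Errorf("unexpected diagnostic: %s: %s", desc.Summary, desc.Detail)
        }
        t.FailNow()
    }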