incoming values
Addresses an odd state where the priorV of an object to be changed is
known but null.
While this situation should not happen, it seemed prudent to ensure that
core is resilient to providers sending incorrect values (which might
also occur with manually edited state).
We need to check the known-ness of the prior value before we check the
null-ness of the actual value, because it's valid for an unknown value
to become null.
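
As a minimal sketch of that ordering using cty's value API (the
function name here is hypothetical; the real check lives inline in
core):

    // import "github.com/zclconf/go-cty/cty"
    //
    // normalizePrior tests known-ness before null-ness: an unknown
    // prior may validly resolve to null, so only a *known* null prior
    // is the odd case we need to be resilient to.
    func normalizePrior(priorV cty.Value, ty cty.Type) cty.Value {
        if !priorV.IsKnown() {
            // Unknown prior: any outcome, including null, is valid.
            return priorV
        }
        if priorV.IsNull() {
            // Known-but-null shouldn't happen, but tolerate it as if
            // there were no prior object at all.
            return cty.NullVal(ty)
        }
        return priorV
    }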
This test was occasionally failing due to a missing graph edge causing it
to be non-deterministic.
The graph edge was missing because our standard schema doesn't quite match
the config fixture and so the reference checker was not finding the "blah"
argument on aws_instance.a.
This change also includes a 100ms pause for the b node just to make this
potential race more likely to hit the "wrong" ordering when the graph is
not complete.
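
The pause itself is just a delay inside the mock apply hook for that
one node, along these lines (a sketch; the surrounding mock signature
is elided and the variable name is illustrative):

    // import ("strings"; "time")
    //
    // Slow down only the "b" node so that, if the graph edge is
    // missing, the race reliably hits the wrong ordering instead of
    // failing only occasionally.
    if strings.HasSuffix(resourceName, ".b") {
        time.Sleep(100 * time.Millisecond)
    }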
The weird special-case behaviors of testDiffFn were interfering with the
outcome of this test, but we don't actually need any of those special
behaviors here so we'll use a very simple PlanResourceChangeFn
implementation instead, just letting the built-in merge behavior in core
take care of it.
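
A minimal implementation of that shape, assuming the mock provider's
PlanResourceChangeFn hook from the providers package:

    // import "github.com/hashicorp/terraform/providers"
    p.PlanResourceChangeFn = func(req providers.PlanResourceChangeRequest) providers.PlanResourceChangeResponse {
        // Core has already merged prior state and config into
        // ProposedNewState, so just accept that merge verbatim.
        return providers.PlanResourceChangeResponse{
            PlannedState: req.ProposedNewState,
        }
    }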
Since we started using experimental Go modules, our editor tooling
hasn't been fully functional, apparently including format-on-save
support. This is a catch-up to get everything straightened out again.
One of the assumptions this test was checking no longer holds: we don't
retain outputs for non-root modules in persistent state, because we can
always re-populate these on a future run by evaluating the configuration.
The old-fashioned formatting of "Provider" on the shimmed state here was
causing the state upgrade code to treat it like a module-relative address,
rather than an absolute address as we now expect.
This error message appears in a situation that is often confusing for
users, since the connection between resources and their providers in the
state is not something we draw attention to in the user experience of
Terraform.
This new error message tries to be a bit clearer about what the user must
do to resolve it. It's still not perfect since it doesn't cover the
variant of this problem where an entire module containing a provider block
and resources has been removed at the same time, but since there isn't
an easily-summarizable way to continue in that state, this will have
to do for the moment, until we find a way to file off that rough edge in
the workflow.
Configuration blocks can contain sensitive information, so better to just
talk about them by reference (in this case, source location) rather than
embedding them directly, to reduce the risk of accidental information
leakage through sharing logs for debug purposes.
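
Concretely, that just means logging the block's source range rather
than its body, something like this (the field name is illustrative):

    // import "log"
    //
    // Refer to the block by location so that debug logs can be shared
    // without leaking sensitive argument values.
    log.Printf("[WARN] invalid provider configuration block at %s", block.DeclRange)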
This was already updated for the new state types earlier, but since then
we adjusted how deposed instances are written out in the old string
representation of state, and so this regressed.
These tests show that we're still not fully pruning modules from the
state in all cases: after a destroy, we can't fully prune modules that
contain resources with count set. This is no worse than before, so
we'll accept it for now and address it separately later.
A module heading without "<no state>" but also without any instances
listed is the rendering for a module containing a resource that has no
instances, since our old string rendering of state doesn't represent
resources themselves.
We previously had mechanisms to clean up only individual instance states,
leaving behind empty resource husks in the state after they were all
destroyed.
This takes care of it in the "orphan" case. It does not yet do it in the
"terraform destroy" or "terraform plan -destroy" cases because we don't
have anywhere to record in the plan that we're actually destroying and so
the resource configurations should be ignored and _everything_ should be
cleaned. We'll let the state be not-quite-empty in that case for now,
since it doesn't really hurt; cleaning up orphans is the main case because
the state will live on afterwards and so leftover cruft will accumulate
over the course of many changes.
Given a module foo and a module foo/bar, the previous code might
incorrectly treat "bar" as a file within "foo" rather than as a module
directory in its own right.
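
A sketch of the corrected distinction using the standard library
(assuming a directory walk; the real code's details differ):

    // import ("os"; "path/filepath")
    walkErr := filepath.Walk(moduleDir, func(path string, info os.FileInfo, err error) error {
        if err != nil {
            return err
        }
        if info.IsDir() && path != moduleDir {
            // A subdirectory like "foo/bar" may be a module in its
            // own right, not a file belonging to "foo": don't descend.
            return filepath.SkipDir
        }
        // ...treat the remaining regular files as part of moduleDir...
        return nil
    })
    if walkErr != nil {
        // handle the walk error
    }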
Since this one has a situation where there are two deposed objects for
the same instance at once, we can't rely on comparing state strings: they
are not deterministic when multiple deposed objects are present.
Instead, we do more surgical comparisons directly on the state model
objects, which is not quite as robust but still gets us the main stuff we
care about here, to be followed up by another checkStateString further
down for the final state.
Tainted objects now also remember which provider they belong to (via the
resource state they are attached to) and so the stringified state output
here is slightly different.
This was relying on a no-longer-valid mechanism for accessing the "count"
value from a resource block. The original issue this test was written to
cover is not really such a sharp edge anymore, since the length is taken
from the state during apply rather than from configuration, but it's still
a good case to cover.
This test was incorrectly amended on the first pass to create a
configuration snapshot from the step zero configuration, rather than the
step one configuration that the saved plan is built from.
Along with that, it needed various other minor updates to match with
details that have shifted:
- "id" and "type" attributes must be explicitly declared in schema
- template_file.parent has count = 1, which now causes it to get an index
and be a list where before it did not.
If we don't set it, we end up creating an invalid plan where the destroy
changes don't have a provider address set, which then later fails
decoding when round-tripped through a planfile.
This also includes some extra safety checks in EvalDiff and
EvalDiffDestroy so that we can catch this bug sooner in future.
This change is verified by
TestContext2Apply_plannedDestroyInterpolatedCount, which is now passing.
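
The extra checks are of this general shape (names here are
illustrative of the idea rather than the exact code):

    // Catch a missing provider address at plan time, rather than
    // letting it surface later as a plan-file decoding failure.
    if change.ProviderAddr.ProviderConfig.Type == "" {
        return nil, fmt.Errorf("plan for %s has no provider address set", absAddr)
    }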
A plan file without a backend set is not valid, but at the level we're
testing in this package we don't really care about backends, so we'll
just set a default one if the caller doesn't set something more
specific, and then ignore it completely when reading back.
It doesn't make sense to ignore_changes when the prior value is null,
since we have to create something before we can ignore changes to it.
This change is verified by TestContext2Apply_ignoreChangesWildcard.
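
The guard is conceptually just this fragment (names illustrative):

    // ignore_changes needs a prior object to preserve values from; on
    // create there is nothing to ignore, so plan the config as given.
    if priorVal.IsNull() {
        return configVal, nil
    }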
This test was relying on the feature of the old provisioner API that gave
provisioners full access to the instance state of what they were
provisioning.
We no longer do this, and so instead the ApplyFn must distinguish the
instances using a value from the provisioner's own configuration.
This can happen if unknown values in the plan actually end up being
identical to the prior values once resolved. In that case, we'll just make
no change at all.
This is verified by TestContext2Apply_ignoreChangesWithDep.
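
In cty terms, the downgrade to a no-op looks roughly like this (the
action handling is simplified):

    // import "github.com/hashicorp/terraform/plans"
    if eq := plannedV.Equals(priorV); eq.IsKnown() && eq.True() {
        // The unknowns resolved to exactly the prior values, so there
        // is no change to apply after all.
        action = plans.NoOp
    }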
Due to a quirk in how testDiffFn constructs its diff, compute doesn't
actually get included in the final result anymore under the new shims.
This result is still correct, nonetheless.
The prior behavior being asserted by this test was incorrect, since the
configuration calls for there to be two instances of the resource at the
end.
We also now assert on the generated plan since it's important to verify
that we are indeed planning to replace the zeroth instance but not the
first instance (which doesn't yet exist).
We now include attribute changes in destroy diffs, so the expected output
of this test includes these changes.
Also includes a fix to legacyDiffComparisonString to actually sort the
attribute changes by name in the rendered diff.
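
The sort is the usual Go pattern for deterministic map iteration (a
fragment showing the shape of the fix, not the exact code):

    // import "sort"
    keys := make([]string, 0, len(attrDiffs))
    for k := range attrDiffs {
        keys = append(keys, k)
    }
    sort.Strings(keys)
    for _, k := range keys {
        // ...render the change for attribute k in stable order...
    }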
The testDiffFn doesn't include "compute" in the diff it produces and so
it no longer appears in the shimmed output.
This is just a quirk of this weird mock implementation; real providers
always copy all of the values from the config into the diff before adding
in any other changes.
The only reasonable usage of these methods is for them to run concurrently
with other methods, so we mustn't hold a lock to do this work. For tests
that deal with stopping, it's the test's own responsibility to deal with
any concurrency issues that arise from their StopFns running concurrently
with other mock functions.
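
The resulting shape, sketched against a mock with a StopFn hook
(simplified):

    // Stop deliberately takes no lock: it must be callable while an
    // Apply running on another goroutine holds the mock's mutex.
    func (p *MockProvider) Stop() error {
        if p.StopFn != nil {
            return p.StopFn()
        }
        return nil
    }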
Since stopping is a rather complex mechanism that relies on correct
handling of concurrency, it's handy to have these logs here to debug when
things don't happen in quite the right order.