The old-fashioned formatting of "Provider" on the shimmed state here was
causing the state upgrade code to treat it like a module-relative address,
rather than an absolute address as we now expect.
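As a rough illustration of the distinction (this helper is hypothetical, not
the actual upgrade code), a module-relative provider string must be rooted
against its module path to become absolute, rather than taken as-is:

    import "strings"

    // upgradeProviderAddr is a hypothetical sketch: a legacy-formatted
    // provider string like "provider.aws" found in module "foo" must be
    // upgraded to an address rooted at the configuration, not interpreted
    // relative to the module it appears in.
    func upgradeProviderAddr(modulePath []string, legacy string) string {
        if len(modulePath) == 0 {
            return legacy // in the root module the two forms coincide
        }
        // e.g. ["foo"] and "provider.aws" become "module.foo.provider.aws"
        return "module." + strings.Join(modulePath, ".module.") + "." + legacy
    }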
This error message appears in a situation that is often confusing for
users, since the connection between resources and their providers in the
state is not something we draw attention to in the user experience of
Terraform.
This new error message tries to be a bit clearer about what the user must
do to resolve it. It's still not perfect since it doesn't cover the
variant of this problem where an entire module containing a provider block
and resources has been removed at the same time, but since there isn't
an easily summarizable way to continue in that state, this will have to
do for the moment, until we find a way to file off that rough edge in
the workflow.
Configuration blocks can contain sensitive information, so better to just
talk about them by reference (in this case, source location) rather than
embedding them directly, to reduce the risk of accidental information
leakage through sharing logs for debug purposes.
This was already updated for the new state types earlier, but since then
we adjusted how deposed instances are written out in the old string
representation of state, and so this regressed.
These tests show that we're still not pruning modules from the state in
all cases: after a destroy, we cannot yet fully prune a module that
contains resources with count set. This is no worse than before, so
we'll accept it for now and address it separately later.
A module heading without "<no state>" but also without any instances
listed is the rendering for a module containing a resource that has no
instances, since our old string rendering of state doesn't represent
resources themselves.
We previously had mechanisms to clean up only individual instance states,
leaving behind empty resource husks in the state after they were all
destroyed.
This takes care of it in the "orphan" case. It does not yet do it in the
"terraform destroy" or "terraform plan -destroy" cases because we don't
have anywhere to record in the plan that we're actually destroying and so
the resource configurations should be ignored and _everything_ should be
cleaned. We'll let the state be not-quite-empty in that case for now,
since it doesn't really hurt; cleaning up orphans is the main case because
the state will live on afterwards and so leftover cruft will accumulate
over the course of many changes.
Given a module foo and a module foo/bar, the previous code might
incorrectly treat "bar" as a file within "foo" rather than as a module
directory in its own right.
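A minimal sketch of the corrected check (the helper name is hypothetical):
comparing path elements rather than raw string prefixes is what lets
"foo/bar" register as a directory of its own:

    import (
        "path/filepath"
        "strings"
    )

    // isChildModuleDir is illustrative only: it reports whether candidate
    // lies under parent by comparing path elements, so that "foo/bar" is
    // recognized as a nested module directory rather than a file in "foo".
    func isChildModuleDir(parent, candidate string) bool {
        rel, err := filepath.Rel(parent, candidate)
        if err != nil || rel == "." {
            return false
        }
        return rel != ".." && !strings.HasPrefix(rel, ".."+string(filepath.Separator))
    }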
Since this test produces two deposed objects for the same instance at
once, we can't rely on comparing state strings: they are not
deterministic when multiple deposed objects are present.
Instead, we do more surgical comparisons directly on the state model
objects, which is not quite as robust but still gets us the main stuff we
care about here, to be followed up by another checkStateString further
down for the final state.
Tainted objects now also remember which provider they belong to (via the
resource state they are attached to) and so the stringified state output
here is slightly different.
This was relying on a no-longer-valid mechanism for accessing the "count"
value from a resource block. The original issue this test was written to
cover is not really such a sharp edge anymore, since the length is taken
from the state during apply rather than from configuration, but it's still
a good case to cover.
This test was incorrectly amended on the first pass to create a
configuration snapshot from the step-zero configuration, rather than the
step-one configuration that the saved plan is built from.
Along with that, it needed various other minor updates to match
details that have shifted:
- "id" and "type" attributes must be explicitly declared in schema
- template_file.parent has count = 1, which now causes it to get an index
and be a list where before it did not.
If we don't set it, we end up creating an invalid plan where the destroy
changes don't have a provider address set, which then later fails
decoding when round-tripped through a planfile.
This also includes some extra safety checks in EvalDiff and
EvalDiffDestroy so that we can catch this bug sooner in the future.
This change is verified by
TestContext2Apply_plannedDestroyInterpolatedCount, which is now passing.
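The added checks amount to failing fast when the address is missing; a
simplified sketch (types and names approximate; the real checks live in
EvalDiff and EvalDiffDestroy):

    import "fmt"

    // checkProviderAddrSet is a simplified stand-in for the new checks:
    // reject a planned change with no provider address up front, instead
    // of letting it fail later when the planfile is decoded.
    func checkProviderAddrSet(changeAddr, providerAddr string) error {
        if providerAddr == "" {
            return fmt.Errorf("plan for %s has no provider address", changeAddr)
        }
        return nil
    }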
A plan file without a backend set is not valid, but at the level we're
testing in this package we don't really care about backends, so we set
a default one if the caller doesn't provide something more specific and
then ignore it completely when reading back.
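Sketched as a fragment (field names hypothetical; the real helper is in
this package's tests):

    // If the caller didn't configure a backend, write a placeholder so
    // the plan file is structurally valid; the reader here then ignores
    // whatever backend it finds.
    if plan.Backend.Type == "" {
        plan.Backend.Type = "local" // placeholder only; ignored on read
    }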
It doesn't make sense to ignore_changes when the prior value is null,
since we have to create something before we can ignore changes to it.
This change is verified by TestContext2Apply_ignoreChangesWildcard.
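In cty terms the guard looks roughly like this (an approximation, not the
exact implementation):

    import "github.com/zclconf/go-cty/cty"

    // processIgnoreChanges, approximately: with no prior object there is
    // nothing to preserve, so ignore_changes becomes a no-op and the
    // proposed new value is used as-is.
    func processIgnoreChanges(prior, proposed cty.Value) cty.Value {
        if prior.IsNull() {
            return proposed
        }
        // ...otherwise, copy the ignored attribute paths from prior into
        // proposed before diffing...
        return proposed
    }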
This test was relying on the feature of the old provisioner API that gave
provisioners full access to the instance state of what they were
provisioning.
We no longer do this, and so instead the ApplyFn must distinguish the
instances using a value from the provisioner's own configuration.
This can happen if unknown values in the plan actually end up being
identical to the prior values once resolved. In that case, we'll just make
no change at all.
This is verified by TestContext2Apply_ignoreChangesWithDep.
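In terms of plan actions, the collapse is roughly as follows (a sketch
using the action names from the plans package):

    import (
        "github.com/hashicorp/terraform/plans"
        "github.com/zclconf/go-cty/cty"
    )

    // actionForResolved, simplified: once the unknowns are resolved, a
    // planned value equal to the prior value collapses to no change.
    func actionForResolved(prior, planned cty.Value) plans.Action {
        if planned.RawEquals(prior) {
            return plans.NoOp
        }
        return plans.Update
    }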
Due to a quirk in how testDiffFn constructs its diff, compute doesn't
actually get included in the final result anymore under the new shims.
This result is still correct, nonetheless.
The prior behavior being asserted by this test was incorrect, since the
configuration calls for there to be two instances of the resource at the
end.
We also now assert on the generated plan since it's important to verify
that we are indeed planning to replace the zeroth instance but not the
first instance (which doesn't yet exist).
We now include attribute changes in destroy diffs, so the expected output
of this test includes these changes.
Also includes a fix to legacyDiffComparisonString to actually sort the
attribute changes by name in the rendered diff.
The testDiffFn doesn't include "compute" in the diff it produces and so
it no longer appears in the shimmed output.
This is just a quirk of this weird mock implementation; real providers
always copy all of the values from the config into the diff before adding
in any other changes.
The only reasonable usage of these methods is for them to run concurrently
with other methods, so we mustn't hold a lock while doing this work. For
tests that deal with stopping, it's each test's own responsibility to
handle any concurrency issues that arise from its StopFn running
concurrently with other mock functions.
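For illustration, a minimal mock in the same spirit (the type and fields
here are hypothetical):

    import "sync"

    // MockProvider is illustrative only: Stop deliberately does not take
    // the mutex that the other mock methods hold, since its whole purpose
    // is to run concurrently with them.
    type MockProvider struct {
        sync.Mutex
        StopFn func() error
    }

    func (p *MockProvider) Stop() error {
        // No p.Lock() here: holding it would deadlock a Stop issued while
        // another method, such as an in-flight apply, holds the lock.
        if p.StopFn != nil {
            return p.StopFn()
        }
        return nil
    }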
Since stopping is a rather complex mechanism that relies on correct
handling of concurrency, it's handy to have these logs here to debug when
things don't happen in quite the right order.
Comment out an out-of-date CBD test. The test no longer works because
the CBD status is now checked during plan, but the test case may still
have some value, which we can review later.
Update the text in the CBD transformer to reflect the
s/ancestor/descendent/ change.
This brings in a bugfix for analyzing variables inside relative traversal
expressions in HCL, and a cosmetic bugfix in cty for GoString of
cty.NullVal(cty.DynamicPseudoType).
This also updates some other packages, as a result of running "go get -u".
There does not appear to be any real reason that these Schemas fields are
not exported, and exporting them makes it possible to directly construct
Schemas for tests without pulling in an entire context.
Now that we pre-emptively populate a resource-level state for a resource
before we apply its instances, it is no longer true that this produces
only one resource state. The test below (comparing the string version of
the state) already captures the expected behavior, so this additional
check was redundant anyway.
This test was incorrectly updated to the new hook API; it should've been
recording the individual resource state values passed to the hook, not
the original state as a whole (which is not yet updated at the point
when the hook is called).
Reduce the graphs, as they are too ungainly to compare without
reduction. We also depend on reduction in all use cases, so we should be
testing the graph as we use it anyway.
We can't catch invalid attributes in validate at the moment, because the
lack of count information causes the references to return unknown. Make
sure they fail in plan, and mark the validate tests to fix later.
Previously we used a single plan action "Replace" to represent both the
destroy-before-create and the create-before-destroy variants of replacing.
However, this forced the apply graph builder to jump through a lot of
hoops to figure out which nodes needed create-before-destroy forced on,
and to rebuild parts of the graph to represent that.
If we instead decide between these two cases at plan time, the actual
determination of it is more straightforward because each resource is
represented by only one node in the plan graph, and then we can ensure
we put the right nodes in the graph during DiffTransformer and thus avoid
the logic for dealing with deposed instances being spread across various
different transformers and node types.
As a nice side-effect, this also allows us to show the difference between
destroy-then-create and create-then-destroy in the rendered diff in the
CLI, although this change doesn't fully implement that yet.
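Concretely, the decision reduces to picking one of two actions while
building the plan; a sketch using the plans package action names:

    import "github.com/hashicorp/terraform/plans"

    // replaceAction, simplified: each resource instance is a single node
    // at plan time, so the replace variant can be chosen here directly
    // rather than by rewriting the apply graph afterwards.
    func replaceAction(createBeforeDestroy bool) plans.Action {
        if createBeforeDestroy {
            return plans.CreateThenDelete
        }
        return plans.DeleteThenCreate
    }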
Remove a test that is no longer needed: providers must now be explicitly
defined for orphaned modules, and that case is covered by other context
tests.
Update a test fixture to better represent the original missing-map
issue, since the ability to detect nil has made the old test invalid.
This test was relying on an odd definition of "invalid" from prior
versions of Terraform, where it would be an error to access an attribute
that exists as part of the resource type schema but the provider
implementation neglected to set it.
This was an implementation detail though, caused by the flatmap
representation and the fact that core itself didn't have access to the
schema to do static validation. Now the original usage returns a null
value because the "value" attribute is defined, and so we need a new
test fixture that accesses an attribute that is not defined in the schema
at all.