Given a module foo and a module foo/bar, the previous code might
incorrectly treat "bar" as a file within "foo" rather than as a module
directory in its own right.
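A minimal sketch of the segment-boundary idea (the helper name here is
hypothetical, not the loader's actual function):

    // isChildModule reports whether sub names a module directory nested
    // under parent. Comparing whole path segments avoids mistaking a
    // nested module directory like "foo/bar" for a file inside "foo".
    func isChildModule(parent, sub string) bool {
        return strings.HasPrefix(sub, parent+"/")
    }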
Since this test creates a situation where there are two deposed objects for
the same instance at once, we can't rely on comparing state strings: they
are not deterministic when multiple deposed objects are present.
Instead, we make more surgical comparisons directly against the state model
objects. That's not quite as robust, but it still covers the main behavior
we care about here, and is followed up by another checkStateString further
down for the final state.
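As a hedged illustration (the state model accessor names may have shifted
on this branch), the surgical checks look more like this than like a full
string comparison:

    is := state.ResourceInstance(addr)
    if is == nil || is.Current == nil {
        t.Fatalf("no current object for %s", addr)
    }
    // Two deposed objects at once is exactly the case where the
    // stringified state is nondeterministic, so count them directly.
    if got, want := len(is.Deposed), 2; got != want {
        t.Fatalf("wrong number of deposed objects %d; want %d", got, want)
    }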
Tainted objects now also remember which provider they belong to (via the
resource state they are attached to) and so the stringified state output
here is slightly different.
This was relying on a no-longer-valid mechanism for accessing the "count"
value from a resource block. The original issue this test was written to
cover is not really such a sharp edge anymore, since the length is taken
from the state during apply rather than from configuration, but it's still
a good case to cover.
This test was incorrectly amended on the first pass to create a
configuration snapshot from the step zero configuration, rather than the
step one configuration that the saved plan is built from.
Along with that, it needed various other minor updates to match with
details that have shifted:
- "id" and "type" attributes must be explicitly declared in schema
- template_file.parent has count = 1, which now causes it to get an index
and be treated as a list, where before it was not.
If we don't set it, we end up creating an invalid plan where the destroy
changes don't have a provider address set, which then later fails
decoding when round-tripped through a planfile.
This also includes some extra safety checks in EvalDiff and
EvalDiffDestroy so that we can catch this bug sooner in future.
This change is verified by
TestContext2Apply_plannedDestroyInterpolatedCount, which is now passing.
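The safety checks are conceptually along these lines (a sketch only; the
field paths are illustrative and the real checks live in EvalDiff and
EvalDiffDestroy):

    // Catch a missing provider address when the change is constructed,
    // rather than failing later when the plan round-trips through a
    // planfile.
    if change.ProviderAddr.ProviderConfig.Type == "" {
        panic(fmt.Sprintf("EvalDiff for %s has no provider address set", absAddr))
    }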
A plan file without a backend set is not valid, but at the level we're
testing in this package we don't really care about backends, so we'll just
set a default one if the caller doesn't set something more specific, and
then ignore it completely when reading back.
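The defaulting is simple (sketch; the plans.Backend field names may differ
slightly on this branch):

    // Tests in this package don't care which backend is used, so fill in
    // a placeholder when the caller leaves it unset; it's discarded
    // again on read.
    if plan.Backend.Type == "" {
        plan.Backend = plans.Backend{Type: "local"}
    }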
It doesn't make sense to ignore_changes when the prior value is null,
since we have to create something before we can ignore changes to it.
This change is verified by TestContext2Apply_ignoreChangesWildcard.
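In effect the check is (a sketch; the real logic lives in the
ignore_changes processing during plan):

    // ignore_changes is only meaningful against an existing object; if
    // the prior value is null we're creating, so use the configuration
    // value as-is.
    if priorVal.IsNull() {
        return configVal, nil
    }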
This test was relying on the feature of the old provisioner API that gave
provisioners full access to the instance state of what they were
provisioning.
We no longer do this, and so instead the ApplyFn must distinguish the
instances using a value from the provisioner's own configuration.
This can happen if unknown values in the plan actually end up being
identical to the prior values once resolved. In that case, we'll just make
no change at all.
This is verified by TestContext2Apply_ignoreChangesWithDep.
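Conceptually (a sketch using cty; the real comparison happens once the
final value is known during apply):

    // If the final value turns out to be identical to the prior value,
    // there is no change to make after all.
    if plannedVal.RawEquals(priorVal) {
        action = plans.NoOp
    }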
Due to a quirk in how testDiffFn constructs its diff, compute doesn't
actually get included in the final result anymore under the new shims.
The result is nonetheless still correct.
The prior behavior being asserted by this test was incorrect, since the
configuration calls for there to be two instances of the resource at the
end.
We also now assert on the generated plan since it's important to verify
that we are indeed planning to replace the zeroth instance but not the
first instance (which doesn't yet exist).
We now include attribute changes in destroy diffs, so the expected output
of this test includes these changes.
Also includes a fix to legacyDiffComparisonString to actually sort the
attribute changes by name in the rendered diff.
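The sorting fix is the usual Go pattern for deterministic map iteration
(sketch, with attrs standing in for the map of attribute changes):

    // Sort attribute names so the rendered diff is deterministic.
    names := make([]string, 0, len(attrs))
    for name := range attrs {
        names = append(names, name)
    }
    sort.Strings(names)
    for _, name := range names {
        // render the change for attrs[name]
    }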
The testDiffFn doesn't include "compute" in the diff it produces and so
it no longer appears in the shimmed output.
This is just a quirk of this weird mock implementation; real providers
always copy all of the values from the config into the diff before adding
in any other changes.
The only reasonable usage of these methods is for them to run concurrently
with other methods, so we mustn't hold a lock to do this work. For tests
that deal with stopping, it's the test's own responsibility to deal with
any concurrency issues that arise from their StopFns running concurrently
with other mock functions.
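For example, a mock Stop should be shaped roughly like this (a sketch):

    func (p *MockProvider) Stop() error {
        // Deliberately no p.Lock() here: Stop must be able to run while
        // another mock method is blocked holding the lock.
        if p.StopFn != nil {
            return p.StopFn()
        }
        return nil
    }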
Since stopping is a rather complex mechanism that relies on correct
handling of concurrency, it's handy to have these logs here to debug when
things don't happen in quite the right order.
Comment out an out-of-date CBD test. The test no longer works due to
the CBD status being checked during plan, but the test case may still
have some value which we can review later.
Update the text in the CBD transformer to reflect the
s/ancestor/descendent/ change.
This brings in a bugfix for analyzing variables inside relative traversal
expressions in HCL, and a cosmetic bugfix in cty for GoString of
cty.NullVal(cty.DynamicPseudoType).
This also updates some other packages, as a result of running "go get -u".
There does not appear to be any real reason that these Schemas fields are
not exported, and exporting them makes it possible to directly construct
Schemas for tests without pulling in an entire context.
Now that we pre-emptively populate a resource-level state before we apply
the instances of that resource, it is no longer true that this produces
only one resource state. However, the test below (comparing the string
version of the state) already captures the expected behavior, so this
additional check was redundant anyway.
This test was incorrectly updated to the new hook API; it should've been
recording the individual resource state values passed to the hook, not
the original state as a whole (which is not yet updated at the point
when the hook is called).
Reduce the graphs, as they are too ungainly to compare without
reduction. We also depend on reduction in all use cases, so we should be
testing the graph as we use it anyway.
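Concretely that just means reducing before rendering (a sketch):

    g.TransitiveReduction()
    actual := strings.TrimSpace(g.String())
    if actual != expected {
        t.Fatalf("wrong reduced graph\ngot:\n%s\nwant:\n%s", actual, expected)
    }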
We can't catch invalid attributes in validate at the moment, because the
lack of count information causes the references to return unknown. Make
sure they fail in plan, and mark the validate tests to be fixed later.
Previously we used a single plan action "Replace" to represent both the
destroy-before-create and the create-before-destroy variants of replacing.
However, this forces the apply graph builder to jump through a lot of
hoops to figure out which nodes need it forced on and rebuild parts of
the graph to represent that.
If we instead decide between these two cases at plan time, the actual
determination of it is more straightforward because each resource is
represented by only one node in the plan graph, and then we can ensure
we put the right nodes in the graph during DiffTransformer and thus avoid
the logic for dealing with deposed instances being spread across various
different transformers and node types.
As a nice side-effect, this also allows us to show the difference between
destroy-then-create and create-then-destroy in the rendered diff in the
CLI, although this change doesn't fully implement that yet.
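With this change the decision is made while the planned change is being
built (a sketch; the action names follow the plans package):

    // Decide the replace variant up front during plan, instead of
    // rewriting the graph during apply.
    action := plans.DeleteThenCreate
    if createBeforeDestroy {
        action = plans.CreateThenDelete
    }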
Remove a test that is no longer needed, since a provider must be
explicitly defined for orphaned modules, and this case is covered by other
context tests.
Update a test fixture to better represent the original missing map
issue, since the ability to detect nil has made the old test invalid.
This test was relying on an odd definition of "invalid" from prior
versions of Terraform, where it would be an error to access an attribute
that exists as part of the resource type schema but the provider
implementation neglected to set it.
This was an implementation detail though, caused by the flatmap
representation and the fact that core itself didn't have access to the
schema to do static validation. Now the original usage returns a null
value because the "value" attribute is defined, and so we need a new
test fixture that accesses an attribute that is not defined in the schema
at all.
I misunderstood the logic here on the first pass of porting to the new
provider and state types: EvalUndeposeState is supposed to return the
deposed object back to being current again, so we can undo the deposing
in the case where the create leg fails.
If we don't do this, we end up leaving the instance with no current object
at all and with its prior object deposed, and then the later destroy
node deletes that deposed object, leaving the user with no object at all.
For safety we skip this restoration if there _is_ a new current object,
since a failed create can still produce a partial result which we need
to keep to avoid losing track of any remote objects that were successfully
created.
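The restoration is roughly as follows (a sketch against the state model;
the real logic lives in EvalUndeposeState):

    // Only restore if the failed create left no new current object; a
    // partial new object must be kept so we don't lose track of remote
    // objects that were successfully created.
    if is.Current == nil {
        is.Current = is.Deposed[key]
        delete(is.Deposed, key)
    }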
We now correctly prune out empty modules after destroying everything
inside them, so we need to update this expectation string to match the new
behavior, rather than before when it was actually describing a buggy
result.
This was assuming a previous buggy behavior of failing to prune out an
empty module in the state. The new state code doesn't have this bug, so
we must update the expected result to reflect that.
This test regressed after the recent changes to include deposed object
changes explicitly in the plan, since it was previously asserting
(correctly, at the time) that only the current object was covered by the
plan.
Now we assert that both are present, and check that they have the correct
actions associated so that we are sure we're going to only delete the
deposed object and not the current object that sits alongside it.
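The new assertions are shaped roughly like this (a sketch):

    // Make sure the plan deletes only the deposed object.
    for _, rc := range plan.Changes.Resources {
        if rc.DeposedKey == states.NotDeposed {
            if rc.Action == plans.Delete {
                t.Errorf("plan would delete the current object for %s", rc.Addr)
            }
        } else if rc.Action != plans.Delete {
            t.Errorf("plan has non-delete action for deposed object %s (%s)", rc.Addr, rc.DeposedKey)
        }
    }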
This shim, which exists for the benefit of our old tests, was not handling
the situation where an InstanceState contains only deposed instances and
no current instance. This is a rare situation, but one that does come up
in practice in some odd cases.
Previously our handling of create_before_destroy -- and of deposed objects
in particular -- was rather "implicit" and spread over various different
subsystems. We'd quietly just destroy every deposed object during a
destroy operation, without any user-visible plan to do so.
Here we make things more explicit by tracking each deposed object
individually by its pseudorandomly-allocated key. There are two different
mechanisms at play here, building on the same concepts:
- During a replace operation with create_before_destroy, we *pre-allocate*
a DeposedKey to use for the prior object in the "apply" node and then
pass that exact id to the destroy node, ensuring that we only destroy
the single object we planned to destroy. In the happy path here the
user never actually sees the allocated deposed key because we use it and
then immediately destroy it within the same operation. However, that
destroy may fail, which brings us to the second mechanism:
- If any deposed objects are already present in state during _plan_, we
insert a destroy change for them into the plan so that it's explicit to
the user that we are going to destroy these additional objects, and then
create an individual graph node for each one in DiffTransformer.
The main motivation here is to be more careful in how we handle these
destroys so that from a user's standpoint we never destroy something
without the user knowing about it ahead of time.
However, this new organization also hopefully makes the code itself a
little easier to follow because the connection between the create and
destroy steps of a Replace is represented in a single place (in
DiffTransformer) and deposed instances each have their own explicit graph
node rather than being secretly handled as part of the main instance-level
graph node.
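In code terms the pre-allocation mechanism looks roughly like this (a
sketch; states.NewDeposedKey is the pseudorandom allocator mentioned
above, while the node field names are illustrative):

    // Allocate the key during graph construction so the create and
    // destroy nodes agree on exactly which object gets deposed and
    // destroyed.
    key := states.NewDeposedKey()
    applyNode.PreallocatedDeposedKey = key
    destroyNode.DeposedKey = key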
This test was re-using the same context to run three consecutive
plan/apply operations, which is not safe because we accumulate more
planned changes with each run, creating duplicate entries in the diff.
Instead, to properly simulate a sequence of consecutive runs of Terraform
we must start with a fresh context each time, though still pass forward
the previous state which would in the real world be persisted via a state
manager between these runs.
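The shape of the test is now (a sketch; the step list and its fields are
hypothetical):

    state := states.NewState()
    for _, step := range steps {
        // A fresh context per run, carrying only the state forward, just
        // as a state manager would between real Terraform runs.
        ctx := testContext2(t, &ContextOpts{
            Config: step.config,
            State:  state,
        })
        if _, diags := ctx.Plan(); diags.HasErrors() {
            t.Fatal(diags.Err())
        }
        newState, diags := ctx.Apply()
        if diags.HasErrors() {
            t.Fatal(diags.Err())
        }
        state = newState
    }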
This inverts the previous logic so that it's the status of an object in
the state that decides whether we'll use its value from the plan. This
fixes the problem that otherwise after we've actually applied the change
the partial planned object will continue to shadow the final object in
state.
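That is, roughly (a sketch; ObjectPlanned is the status recorded when we
write the partial planned object into the state):

    // Prefer the value from the plan only while the object in the state
    // is still the partial one written during plan; once apply has
    // written the final object, the state wins.
    if obj != nil && obj.Status == states.ObjectPlanned {
        val = plannedChange.After
    }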
These two being distinct is an old-world concept, but we need to make sure
that they match properly here so that we don't leave dangling, incorrect
values for "id".