While studying existing providers we found a pattern we weren't previously
accounting for: using a nested block type to represent a group of
arguments that relate to a particular feature that is always enabled, but
where it improves configuration readability to group all of its settings
together in a nested block.
The existing NestingSingle was not a good fit for this because it is
designed under the assumption that the presence or absence of the block
has some significance in enabling or disabling the relevant feature, and
so for these always-active cases we'd generate a misleading plan where
the settings for the feature appear totally absent, rather than showing
the default values that will be selected.
NestingGroup is, therefore, a slight variation of NestingSingle where
presence vs. absence of the block is not distinguishable (it's never null)
and instead its contents are treated as unset when the block is absent.
This then in turn causes any default values associated with the nested
arguments to be honored and displayed in the plan whenever the block is
not explicitly configured.
The current SDK cannot activate this mode, but that's okay because its
"legacy type system" opt-out flag allows it to force a block to be
processed in this way anyway. We're adding this now so that we can
introduce the feature in a future SDK without causing a breaking change
to the protocol, since the set of possible block nesting modes is not
extensible.
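As a rough illustration (not taken from any real provider), a schema using
the new mode might look something like this in Go; the "settings" block,
its attributes, and the import paths are assumptions for the example:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/configs/configschema"
	"github.com/zclconf/go-cty/cty"
)

func main() {
	schema := &configschema.Block{
		BlockTypes: map[string]*configschema.NestedBlock{
			"settings": {
				// NestingGroup: the block value is never null; when the
				// block is omitted its attributes are simply unset, so any
				// provider defaults show up in the plan instead of the
				// whole feature appearing absent.
				Nesting: configschema.NestingGroup,
				Block: configschema.Block{
					Attributes: map[string]*configschema.Attribute{
						"level":     {Type: cty.String, Optional: true},
						"retention": {Type: cty.Number, Optional: true},
					},
				},
			},
		},
	}
	fmt.Println(schema.ImpliedType().FriendlyName())
}
```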
We were using the wrong cty operation to access map members, causing a
panic whenever a prior value was present for a resource type with a nested
block backed by a map value.
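For illustration, the distinction at the heart of this fix, using made-up
values: GetAttr is only valid on object-typed values and panics on maps,
while Index is the correct operation for map elements:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	prior := cty.MapVal(map[string]cty.Value{
		"rule": cty.ObjectVal(map[string]cty.Value{
			"id": cty.StringVal("abc123"),
		}),
	})

	// prior.GetAttr("rule") would panic here: the value is a map, not an object.
	elem := prior.Index(cty.StringVal("rule")) // correct for map-typed values
	fmt.Println(elem.GetAttr("id").AsString())
}
```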
When dynamically-typed attributes are in the schema, we use different
conventions for representing nested blocks containing them (using tuples
and objects instead of lists and maps).
The normalization code here doesn't deal with those because the legacy
SDK never generates them, but we must still pass them through properly or
else other SDKs will be blocked from using dynamic attributes.
Previously this function would panic in that situation. Now it will just
pass through nested blocks containing dynamic attribute values entirely
as-is, with no normalization whatsoever. That's okay, because the scope
of this function is only to normalize inconsistencies that the legacy
SDK is known to produce, and the legacy SDK never produces dynamic-typed
attributes.
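One plausible shape for that pass-through check, sketched with a
hypothetical helper name rather than the real function:

```go
package normalize

import (
	"github.com/hashicorp/terraform/configs/configschema"
	"github.com/zclconf/go-cty/cty"
)

// normalizeNestedBlocks is a hypothetical sketch, not the real function:
// it skips normalization entirely whenever the schema admits
// dynamically-typed attributes, since the legacy SDK never produces them.
func normalizeNestedBlocks(schema *configschema.Block, v cty.Value) cty.Value {
	if schema.ImpliedType().HasDynamicTypes() {
		// Tuples and objects stand in for lists and maps here, so the
		// usual normalization rules don't apply; pass the value through.
		return v
	}
	// ... normalization of known legacy SDK quirks would go here ...
	return v
}
```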
Now that we have an opt-out to let the legacy SDK return values that are
inconsistent with the new conventions for representing configuration,
various parts of Terraform must now be prepared to deal with
inconsistencies.
This function normalizes the most egregious inconsistencies relating to
the representation of nested blocks, freeing any recipient of its result
from worrying about these inconsistencies itself.
Our usual "ground rules" for mapping configschema to cty call for the
collection values representing nested block types to always be known and
non-null, using an empty collection to represent the absence of any blocks
of that type so that users can always safely use length(...) etc on them
without worrying about them sometimes being null.
However, due to some different behaviors in the legacy SDK we've allowed
it an exception to this rule which means that we can see unknown and null
collections in these positions in object values returned from provider
operations like PlanResourceChange and ApplyResourceChange when the legacy
SDK opt-out is activated.
As a consequence of this, we need to be mindful in our safety check
functions, like AssertObjectCompatible here, of tolerating these non-ideal
situations to allow the safety checks to complete. We run these checks
even when the provider requests an opt-out, because we want to note any
inconsistencies as WARNING level log lines to aid in debugging.
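A sketch of that tolerance, with hypothetical names rather than the real
AssertObjectCompatible internals:

```go
package check

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// checkNestedBlockCollection sketches the tolerance described above: null
// or unknown collections are accepted so the remaining checks can still
// run and be reported as warnings when the legacy SDK opt-out is active.
func checkNestedBlockCollection(planned, actual cty.Value) []error {
	var errs []error
	if planned.IsNull() || actual.IsNull() || !planned.IsKnown() || !actual.IsKnown() {
		// Normally invalid per our ground rules, but tolerated for the
		// legacy SDK so the overall check can complete.
		return errs
	}
	if planned.LengthInt() != actual.LengthInt() {
		errs = append(errs, fmt.Errorf(
			"block count changed from %d to %d",
			planned.LengthInt(), actual.LengthInt(),
		))
	}
	return errs
}
```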
Since these error messages get printed in Terraform's output and we
encourage users to share them as part of bug reports, we should avoid
including sensitive information in them to reduce the risk of accidental
exposure.
Completing our set of provider result safety-check functions,
AssertPlanValid checks a result from a provider's PlanResourceChange to
make sure it doesn't propose a change that is not valid within the user
model of Terraform.
Specifically, it forbids the provider from planning a value that
contradicts what the user gave in configuration, which is important to
ensure that making a reference to an attribute elsewhere in the
configuration will produce exactly the given result, as users reasonably
expect.
Providers _are_ allowed (and, in fact, required) to make changes to
any Computed attribute values declared in the schema in order to fill in
the default values that the provider has generated. Later checks during
the apply phase will ensure that the provider remains true to these
planned values, to ensure that Terraform can keep its promise of doing
exactly what was planned or presenting an error explaining why not.
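The core rule can be sketched per-attribute like this, using hypothetical
names rather than the real AssertPlanValid internals:

```go
package check

import (
	"github.com/hashicorp/terraform/configs/configschema"
	"github.com/zclconf/go-cty/cty"
)

// plannedAttrAllowed sketches the rule: a planned value must echo what
// the user configured, unless the user left the attribute unset and the
// schema marks it Computed, in which case the provider may choose a value.
func plannedAttrAllowed(attrS *configschema.Attribute, configV, plannedV cty.Value) bool {
	if configV.IsNull() {
		// Unset in config: only a Computed attribute may be filled in.
		return attrS.Computed || plannedV.IsNull()
	}
	// Set in config: the plan must agree exactly, so references elsewhere
	// in configuration produce the value the user wrote.
	return plannedV.RawEquals(configV)
}
```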
In an earlier commit we changed objchange.ProposedNewObject so that the
task of populating unknown values for attributes whose final values won't
be decided until apply is the responsibility of the provider's
PlanResourceChange method, rather than being handled automatically.
However, we were also using objchange.ProposedNewObject to construct the
placeholder new object for a deferred data resource read, and so we
inadvertently broke that deferral behavior. Here we restore the old
behavior by introducing a new function objchange.PlannedDataResourceObject
which is a specialized version of objchange.ProposedNewObject that
includes the forced behavior of populating unknown values, because the
provider gets no opportunity to customize a deferred read.
TestContext2Plan_createBeforeDestroy_depends_datasource required some
updates here because its implementation of PlanResourceChange was not
handling the insertion of the unknown value for attribute "computed".
The other changes here are just in an attempt to make the flow of this
test more obvious, by clarifying that it is simulating a -refresh=false
run, which effectively forces a deferred read since we skip the eager
read that would normally happen in the refresh step.
Previously we would construct a proposed new state with unknown values in
place of any not-set-in-config computed attributes, trying to save the
provider a little work in specifying that itself.
Unfortunately that turned out to be problematic because it conflated two
distinct situations: an attribute explicitly set to an unknown value in
configuration, where the eventual result of that unknown must override any
default the provider might normally populate, and an attribute not set in
configuration at all, where the provider is free to choose a value.
Leaving the unset attributes as null in the proposed new state lets the
provider distinguish an Optional+Computed attribute that is set to unknown
in the config from one that is not set in the config at all.
The provider now has the responsibility to replace these proposed null
values with unknown values during PlanResourceChange if it expects to
select a value during the apply step. It may also populate a known value
if the final result can be predicted at plan time, as is the case for
constant defaults specified in the provider code.
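A sketch of that provider-side responsibility, with hypothetical helper
names:

```go
package provider

import (
	"github.com/hashicorp/terraform/configs/configschema"
	"github.com/zclconf/go-cty/cty"
)

// planComputedAttr sketches what a provider's PlanResourceChange is now
// expected to do for each attribute of the proposed new state.
func planComputedAttr(attrS *configschema.Attribute, proposedV cty.Value) cty.Value {
	if !attrS.Computed || !proposedV.IsNull() {
		return proposedV // either user-decided or already has a value
	}
	if def, ok := constantDefaultFor(attrS); ok {
		return def // default known at plan time, so show it in the plan
	}
	return cty.UnknownVal(attrS.Type) // value will be selected during apply
}

// constantDefaultFor is a purely illustrative stand-in for however a real
// provider would look up a constant default.
func constantDefaultFor(attrS *configschema.Attribute) (cty.Value, bool) {
	return cty.NilVal, false
}
```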
This change comes from a realization that, from core's perspective, the
helper/schema ideas of zero values, explicit default values, and
CustomizeDiff tweaks are all just examples of "defaults", and that by
letting the provider see during plan whether these attributes are
explicitly set in configuration, it can decide whether each default will
be provided immediately during plan or deferred until apply.
Previously we were just asserting that the number of elements didn't grow
between planned and actual. We still can't precisely correlate elements in
sets with unknown values, but here we adapt some logic we added earlier
to config/hcl2shim to ensure that we can find a plausible correlation for
each element in each set to at least one element in the other set, and
thus catch more cases where set elements might vanish or appear between
plan and apply, for improved safety.
This will still produce false negatives in some cases where unknown values
are present, because we must assume correlation is intended wherever it is
possible, but we will catch situations where the actual value is obviously
contrary to what was planned.
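The correlation idea can be sketched roughly like this, with hypothetical
helpers standing in for the adapted hcl2shim logic:

```go
package check

import "github.com/zclconf/go-cty/cty"

// setElementsCorrelate sketches the check: each planned set element must
// plausibly match at least one actual element, treating unknown planned
// values as wildcards. This accepts some false negatives but catches
// obviously contradictory results.
func setElementsCorrelate(planned, actual []cty.Value) bool {
	for _, p := range planned {
		matched := false
		for _, a := range actual {
			if elementsCouldMatch(p, a) {
				matched = true
				break
			}
		}
		if !matched {
			return false
		}
	}
	return true
}

// elementsCouldMatch assumes an unknown planned element correlates with
// anything; known elements must match exactly.
func elementsCouldMatch(planned, actual cty.Value) bool {
	if !planned.IsKnown() {
		return true
	}
	return planned.RawEquals(actual)
}
```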
This was broken by an earlier change to verify the Terraform version
number when reading a state file. To fix it, we'll use our current version
in our constructed file which should then match when it's read back in.
If plan and apply are both run against the same context then we still have
the planned output values in memory while we're doing the apply walk, so
we need to make sure to update them along with the state as we learn the
final known values of each output.
There were actually two different bugs here:
- We weren't removing any existing planned change for an output when
setting a new one. In retrospect a map would've been a better data
structure for the output changes, rather than a slice to mimic what we
do for resource instance objects, but for now we'll leave the structures
alone and clean up as needed. (The set of outputs should be small for
any reasonable configuration, so the main impact of this is some ugly
code in RemoveOutputChange.)
- RemoveOutputChange itself had a bug where it was iterating over the
resource changes rather than the output changes. This didn't matter
before because we weren't actually using that function, but now we are.
This fix is confirmed by restoring various existing context apply tests
back to passing again.
Since protoc is not go-gettable, and most development tasks in Terraform
won't involve recompiling protoc files anyway, we'll use a separate
mechanism for these.
This way "go generate" only depends on things we can "go get" in the
"make tools" target.
In a later commit we should also in some way specify a particular version
of protoc to use so that we don't get "flapping" regenerations as
developers work with different versions, but the priority here is just to
make "make generate" minimally usable again to restore the dev workflow
documented in the README.
This also includes some updates that resulted from running "make generate"
and "make protobuf" after those Makefile changes were in place.
ProposedNewObject intentionally replaces a null prior with an unknown
prior in order to easily fill in unknown values where they "show through"
under values not set explicitly in config, but it was failing to handle
that situation when dealing with nested blocks that are backed by sets.
We can't generate a valid plan file without a backend configuration to
write into it, but it's the responsibility of the caller (the command
package) to manage the backend configuration mechanism, so we require it
to tell us what to write here.
This feels a little strange because the backend in principle knows its
own config, but in practice the backend only knows the _processed_ version
of the config, not the raw configuration value that was used to configure
it.
Converted the existing testPlanState() from terraform.State to
states.State to fix various plan tests.
Reverted the "bandaid" in plans/planfile/tfplan.go: at this moment the
backend tests do not include backend configuration, and so the planfile
package can write the plan file but not read it back in. That will be
revisited in a separate track of work.
I have no confidence in the change to plans/planfile/tfplan.go. The
tests were passing an empty backend config, which planfile was able to
write to a file but not read from the same file. This change let me move
past that and it did not break any tests in the planfile package, but I
am concerned that it introduces undesired behavior.
Addresses an odd state where the priorV of an object to be changed is
known but null.
While this situation should not happen, it seemed prudent to ensure that
core is resilient to providers sending incorrect values (which might
also occur with manually edited state).
We need to check for the known-ness of the prior value before we check for
the null-ness of actual, because it's valid for an unknown value to become
a null.
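A sketch of that ordering, with hypothetical names:

```go
package check

import "github.com/zclconf/go-cty/cty"

// resultContradictsPrior sketches why the known-ness check must come
// first: an unknown prior value may legitimately resolve to anything at
// apply time, including null, so it can never contradict the result.
func resultContradictsPrior(priorV, actualV cty.Value) bool {
	if !priorV.IsKnown() {
		return false
	}
	if priorV.IsNull() {
		return !actualV.IsNull()
	}
	return !actualV.RawEquals(priorV)
}
```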
Previously we used a single plan action "Replace" to represent both the
destroy-before-create and the create-before-destroy variants of replacing.
However, this forces the apply graph builder to jump through a lot of
hoops to figure out which nodes need it forced on and rebuild parts of
the graph to represent that.
If we instead decide between these two cases at plan time, the actual
determination of it is more straightforward because each resource is
represented by only one node in the plan graph, and then we can ensure
we put the right nodes in the graph during DiffTransformer and thus avoid
the logic for dealing with deposed instances being spread across various
different transformers and node types.
As a nice side-effect, this also allows us to show the difference between
destroy-then-create and create-then-destroy in the rendered diff in the
CLI, although this change doesn't fully implement that yet.
During the plan operation we need to retain _somewhere_ the planned
changes for all outputs so we can refer to them during expression
evaluation. For consistency with how we handle resource instance changes,
we'll keep them in the plan so we can properly retain unknown values,
which cannot be written to state.
As with output values in the state, only the planned changes for root
module outputs are retained in a round-trip through the on-disk plan file
format, but that's okay because we can trivially re-calculate all of them
during apply. We
include the _root_ outputs in the plan file only because they are
externally-visible side effects that ought to be included in any rendering
of the plan made from the plan file for user inspection.
Since we do our deletes using a separate graph node from all of the other
actions, and a "Replace" change implies both a delete _and_ a create, we
need to pretend at apply time that a single replace change was actually
two separate changes.
This will also early-exit eval if a destroy node finds a non-Delete change
or if an apply node finds a Delete change. These should not happen in
practice because we leave these nodes out of the graph when they are not
needed for the given action, but we do this here for robustness so as not
to have an invisible dependency between the graph builder and the eval
phase.
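A sketch of that pretense for the destroy-side node, assuming the action
constants from the plans package:

```go
package apply

import "github.com/hashicorp/terraform/plans"

// actionForDestroyNode sketches the decomposition: the destroy node sees
// only the delete half of a replace, and bails out early on any action it
// was never meant to handle.
func actionForDestroyNode(action plans.Action) (plans.Action, bool) {
	switch action {
	case plans.Delete:
		return plans.Delete, true
	case plans.DeleteThenCreate, plans.CreateThenDelete:
		// Pretend the single replace change is just its delete half here;
		// the corresponding apply node handles the create half.
		return plans.Delete, true
	default:
		// Early exit: the graph builder should not have created this node
		// for any other action, but we guard against it anyway.
		return plans.NoOp, false
	}
}
```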
This algorithm is the usual first step when generating diffs. This package
is a bit of a strange home for it, but since it works with changes to
cty.Value this feels more natural than any other place it could be.
We were previously tracking this as a []cty.Path, but having it turned
into a pathset on creation makes downstream use of it more convenient and
ensures that it'll obey expected invariants like not containing the same
path twice.
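For illustration, assuming cty's PathSet type, with a made-up attribute
path:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// Duplicates are absorbed on insertion and membership tests are
	// direct, neither of which a raw []cty.Path gives us.
	requiresReplace := cty.NewPathSet(
		cty.GetAttrPath("ami"),
		cty.GetAttrPath("ami"), // duplicate path is absorbed by the set
	)
	fmt.Println(requiresReplace.Has(cty.GetAttrPath("ami"))) // true
	fmt.Println(len(requiresReplace.List()))                 // 1
}
```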
This also includes passing in the provider schema to a few more EvalNodes
that were expecting it but not getting it, in order to be able to
successfully test the implementation of EvalReadDiff here.
We need to make the collection itself be a tuple or object rather than
list or map in this case, since otherwise all of the elements of the
collection are constrained to be of the same type and that isn't the
intent of a provider indicating that it accepts any type.
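An illustrative value (with made-up attribute contents) showing why a list
can't hold these blocks:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// With a dynamically-typed attribute, each nested block may decode to
	// a different object type, which only a tuple can hold.
	blocks := cty.TupleVal([]cty.Value{
		cty.ObjectVal(map[string]cty.Value{
			"value": cty.StringVal("ok"), // decoded as a string in this block
		}),
		cty.ObjectVal(map[string]cty.Value{
			"value": cty.NumberIntVal(5), // ...and as a number in this one
		}),
	})
	// cty.ListVal would panic on these elements: their types differ.
	fmt.Println(blocks.Type().FriendlyName())
}
```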
This function's goal is to ensure that the "final" plan value produced
by a provider during the apply step is always consistent with the known
parts of the planned value produced during the plan step.
Any error produced here indicates a bug in the provider.
This produces a "proposed new state", which already has prior computed
values propagated into it (since that behavior is standard for all
resource types) but could be customized further by the provider to make
the "_planned_ new state".
In the process of implementing this it became clear that our configschema
DecoderSpec behavior is incorrect, since it's producing list values for
NestingList and map values for NestingMap. While that seems like it should
be right, we should actually be using tuple and object types respectively
to allow each block to have a different runtime type in situations where
an attribute is given the type cty.DynamicPseudoType. That's not fixed
here, and so without a further fix list and map blocks will panic here.
The DecoderSpec implementation will be fixed in a subsequent commit.
This allows a provider to retain arbitrary extra data in the plan and
make use of it during apply. The contents are not used by Terraform and
never shown to the user.