This involved some refactoring of how block bodies are migrated, which
still needs some additional work to deal with meta-arguments but is now
at least partially generalized to support both resource and provider
blocks.
Any state-modifying functions can only be run once during the plan-apply
cycle. When regenerating the Diff during ApplyResourceChange, strip out
all StateFunc and CustomizeDiff functions from the schema.
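As a rough illustration, the stripping could look like this (a minimal
sketch against the SDK's helper/schema types; the function names are
hypothetical, and it assumes it's handed a throwaway copy of the schema):

    package shims

    import "github.com/hashicorp/terraform/helper/schema"

    // stripStateModifyingFuncs removes the functions that must only run
    // once per plan-apply cycle, before the Diff is regenerated at apply.
    func stripStateModifyingFuncs(res *schema.Resource) {
        res.CustomizeDiff = nil // already ran during plan
        for _, s := range res.Schema {
            stripStateFunc(s)
        }
    }

    func stripStateFunc(s *schema.Schema) {
        s.StateFunc = nil
        // Recurse into nested resources so nested StateFuncs go too.
        if nested, ok := s.Elem.(*schema.Resource); ok {
            for _, ns := range nested.Schema {
                stripStateFunc(ns)
            }
        }
    }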
The NewExtra diff field is where config data modified by a StateFunc is
stored, and that data needs to be maintained between plan and apply.
During PlanResourceChange, store any NewExtra data from the Diff in the
PlannedPrivate data, and re-insert the NewExtra data into the Diff
generated during ApplyResourceChange.
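A hedged sketch of that round-trip (the encoding and surrounding names
are illustrative, not the exact shim code):

    // PlanResourceChange: collect NewExtra values from the diff and
    // stash them in the opaque private blob returned to Terraform Core.
    extra := map[string]interface{}{}
    for key, attr := range diff.Attributes {
        if attr.NewExtra != nil {
            extra[key] = attr.NewExtra
        }
    }
    resp.PlannedPrivate, _ = json.Marshal(extra) // error handling elided

    // ApplyResourceChange: decode the blob and re-attach the values to
    // the regenerated diff, since StateFuncs will not run again.
    var saved map[string]interface{}
    if err := json.Unmarshal(req.PlannedPrivate, &saved); err == nil {
        for key, val := range saved {
            if attr, ok := diff.Attributes[key]; ok {
                attr.NewExtra = val
            }
        }
    }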
Variable values are marshalled with an explicit type of
cty.DynamicPseudoType, but were being decoded using `ImpliedType` to
try to guess the type. This was causing errors because `ImpliedType`
does not expect to find a late-bound value.
Errors were being ignored with the intention that they would be caught
later in validation, but it turns out we need to catch those earlier.
The legacy schemas also allowed providers to set an empty string for a
bool value, which we need to handle here, since it's not being handled
from user input like a normal config value.
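A sketch of the decoding fix (decodeVariableValue is a hypothetical
helper; it assumes the value was produced by ctyjson.Marshal with
cty.DynamicPseudoType):

    import (
        "github.com/zclconf/go-cty/cty"
        ctyjson "github.com/zclconf/go-cty/cty/json"
    )

    // Decoding with the same explicit cty.DynamicPseudoType the value
    // was marshalled with tells the decoder to expect the
    // {"value": ..., "type": ...} wrapper used for late-bound values,
    // rather than guessing a concrete type with ctyjson.ImpliedType.
    func decodeVariableValue(raw []byte) (cty.Value, error) {
        return ctyjson.Unmarshal(raw, cty.DynamicPseudoType)
    }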
If an instance object in state has an earlier schema version number then
it is likely that the schema we're holding won't be able to decode the
raw data that is stored. Instead, we must ask the provider to upgrade it
for us first, which might also include translating it from flatmap form
if it was last updated with a Terraform version earlier than v0.12.
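Roughly, the upgrade call looks like this (a sketch using the request
shape from the providers package; the wrapper function is hypothetical):

    func upgradeState(p providers.Interface, typeName string,
        obj *states.ResourceInstanceObjectSrc) (cty.Value, error) {
        req := providers.UpgradeResourceStateRequest{
            TypeName: typeName,
            // The seam: uint64 in the state package, int64 on the wire.
            Version: int64(obj.SchemaVersion),
        }
        if obj.AttrsJSON != nil {
            req.RawStateJSON = obj.AttrsJSON
        } else {
            // Last written before v0.12, so still in flatmap form.
            req.RawStateFlatmap = obj.AttrsFlat
        }
        resp := p.UpgradeResourceState(req)
        if resp.Diagnostics.HasErrors() {
            return cty.NilVal, resp.Diagnostics.Err()
        }
        // UpgradedState conforms to the provider's current schema.
        return resp.UpgradedState, nil
    }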
This ends up being a "seam" between our use of int64 for schema versions
in the providers package and uint64 everywhere else. We intend to
standardize on int64 everywhere eventually, but for now this remains
consistent with existing usage in each layer to keep the type conversion
noise contained here and avoid mass-updates to other Terraform components
at this time.
This also includes a minor change to the test helpers for the
backend/local package, which were inexplicably setting a SchemaVersion of
1 on the basic test state but setting the mock schema version to zero,
creating an invalid situation where the state would need to be downgraded.
The rest of Terraform is still using uint64 for this in various spots, but
we'll update that gradually later. We use int64 here because that matches
what's used in our protobuf definition, and unsigned integers are not
portable across all of the protobuf target languages anyway.
There's no reason for a negative version, so by blocking it now we'll
ensure that none creep in.
The more practical short-term motivation for this is that we're still
using uint64 for these internally in some cases and so this restriction
ensures that we won't run into rough edges when converting from int64 to
uint64 at those boundaries until we later fix everything to use int64
consistently.
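The guard itself is tiny (sketch; the surrounding names are
illustrative):

    // checkSchemaVersion rejects negative versions at the decoding
    // boundary so later int64 -> uint64 conversions can never wrap.
    func checkSchemaVersion(v int64) (uint64, error) {
        if v < 0 {
            return 0, fmt.Errorf("invalid negative schema version %d", v)
        }
        return uint64(v), nil
    }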
Add support for the new `force-unlock` API and at the same time improve
performance a bit by reducing the number of API calls made when using
the remote backend for state storage only.
Previously we were making an invalid assumption in evaluating module call
references (like module.foo) that the module must exist, which is
incorrect for that particular case because it's a reference to a child
module, not to an object within the current module.
However, now that we have the mechanism for static validation of
references, we'll deal with this one there so it can be caught sooner.
That then makes the original assumption valid, though for a different
reason.
This is verified by two new context tests for validation:
- TestContext2Validate_invalidModuleRef
- TestContext2Validate_invalidModuleOutputRef
Now that our language supports tuple/object types in addition to list/map
types, it's convenient for zipmap to be able to produce an object type
given a tuple, since this makes it symmetrical with "keys" and "values"
such that the following identity holds for any map or object value "a":
a == zipmap(keys(a), values(a))
When the values sequence is a tuple, the result has an object type whose
attribute types correspond to the given tuple types.
Since an object type has attribute names as part of its definition, there
is an additional constraint here: the result has an unknown type
(represented by the dynamic pseudo-type) if the given values sequence is
a tuple and the given keys sequence contains any unknown values. This
isn't true when values is a list, because then we can predict the
resulting map element type from the list element type alone.
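A sketch of that return-type rule using go-cty directly (the helper is
hypothetical, and the real function also checks that keys and values
have matching lengths, which this sketch assumes):

    func zipmapReturnType(keys cty.Value, valuesType cty.Type) cty.Type {
        if !valuesType.IsTupleType() {
            // values is a list: always a map of its element type.
            return cty.Map(valuesType.ElementType())
        }
        if !keys.IsWhollyKnown() {
            // Attribute names are part of an object type's identity,
            // so any unknown key makes the result type unknown too.
            return cty.DynamicPseudoType
        }
        etys := valuesType.TupleElementTypes()
        atys := make(map[string]cty.Type, len(etys))
        i := 0
        for it := keys.ElementIterator(); it.Next(); {
            _, key := it.Element()
            atys[key.AsString()] = etys[i]
            i++
        }
        return cty.Object(atys)
    }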
Previously we were fetching these from the provider but then immediately
discarding the version numbers because the schema API had nowhere to put
them.
To avoid a late-breaking change to the internal structure of
terraform.ProviderSchema (which is constructed directly all over the
tests) we're retaining the resource type schemas in a new map alongside
the existing one with the same keys, rather than just switching to
using the providers.Schema struct directly there.
The methods that return resource type schemas now return two values,
intentionally creating a little API friction here so each new caller can
be reminded to think about whether they need to do something with the
schema version, though it can be ignored by many callers.
Since this was a breaking change to the Schemas API anyway, this also
fixes another API wart where there was a separate method for fetching
managed vs. data resource types and thus every caller ended up having a
switch statement on "mode". Now we just accept mode as an argument and
do the switch statement within the single SchemaForResourceType method.
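In sketch form, the resulting shape is roughly as follows (abridged;
see the terraform package for the real definitions):

    type ProviderSchema struct {
        Provider                   *configschema.Block
        ResourceTypes              map[string]*configschema.Block
        ResourceTypeSchemaVersions map[string]uint64 // same keys as above
        DataSources                map[string]*configschema.Block
    }

    // SchemaForResourceType returns the schema and its version together,
    // nudging each caller to decide whether the version matters to it.
    func (ps *ProviderSchema) SchemaForResourceType(mode addrs.ResourceMode,
        typeName string) (*configschema.Block, uint64) {
        switch mode {
        case addrs.ManagedResourceMode:
            return ps.ResourceTypes[typeName],
                ps.ResourceTypeSchemaVersions[typeName]
        case addrs.DataResourceMode:
            return ps.DataSources[typeName], 0 // no versions here
        default:
            return nil, 0
        }
    }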
When normalizing flatmapped containers, compare the attributes to the
prior state and preserve pre-existing zero-length or unknown values. A
zero-length value that was previously unknown is preserved as a
zero-length value, as that may have been computed as such by the
provider.
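A minimal sketch of that rule for a single container's count entry
("attr.#" or "attr.%"), assuming the hcl2shim unknown-value sentinel;
the helper and its signature are hypothetical:

    const unknown = hcl2shim.UnknownVariableValue

    func normalizeCount(prior, planned string, plannedExists bool) (string, bool) {
        switch {
        case !plannedExists && (prior == "0" || prior == unknown):
            // Preserve a pre-existing zero-length or unknown container.
            return prior, true
        case planned == "0" && prior == unknown:
            // Previously unknown, now empty: the provider may have
            // computed an empty container, so keep the zero length.
            return "0", true
        default:
            return planned, plannedExists
        }
    }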
Since the SDK's schema system conflates attributes and nested blocks, it's
possible to state some nonsensical schema situations such as:
- A nested block is both optional but has MinItems > 0
- A nested block is entirely computed but has MinItems or MaxItems set
Both of these weird situations are handled here in the same way that the
existing helper/schema validation code would've handled them: by
effectively disabling the MinItems/MaxItems checks where they would've
been ignored before.
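Concretely, that just means relaxing the limits in those cases (sketch;
assumes this runs while lowering an SDK schema into a Terraform Core
configschema block):

    minItems, maxItems := s.MinItems, s.MaxItems
    if s.Optional && minItems > 0 {
        // Legacy validation never enforced MinItems on an optional
        // block, since the whole block may be omitted.
        minItems = 0
    }
    if s.Computed && !s.Optional {
        // Purely computed blocks are filled in by the provider, so
        // item count constraints were never checked for them either.
        minItems = 0
        maxItems = 0
    }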
The SDK has a mechanism that effectively makes it possible to declare an
attribute as being _conditionally_ required, which is not a concept that
Terraform Core is aware of.
Since this mechanism is in practice only used for a small UX improvement
in prompting for these values interactively when the environment variable
is not set, we avoid here introducing all of this complexity into the
plugin protocol by just having the provider selectively modify its schema
if it detects that such an attribute might be set dynamically.
This then prevents Terraform Core from validating the presence of the
argument or prompting for a new value for it, allowing the null value to
pass through into the provider so that the default value can be generated
again dynamically.
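A hedged sketch of that provider-side adjustment (the real SDK code may
differ in detail):

    required := s.Required
    optional := s.Optional
    if required && s.DefaultFunc != nil {
        // If a default might be produced dynamically (e.g. from an
        // environment variable), advertise the attribute as optional
        // so Terraform Core lets a null through instead of prompting
        // for or validating the missing value.
        if v, err := s.DefaultFunc(); err != nil || v != nil {
            required = false
            optional = true
        }
    }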
This is a kinda-kludgey solution which we're accepting here because the
alternative would be a much-more-complex two-pass decode operation within
Core itself, and that doesn't seem worth it.
This fixes #19139.