First check the ComputedValues field in the config when reading a config
field, so that we can detect whether there is an unknown value in a
container. Since maps, lists and sets are verified to exist by looking
for a "length" first, an unknown config value inside a container would
otherwise be ignored.
removeConfigUnknowns needs to remove the value completely from the config
map. Removing this value allows GetOk and GetOkExists to indicate if the
value was set in the config in the case of an Optional+Computed
attribute.
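A minimal sketch of the idea, assuming a flatmap-shimmed
map[string]interface{} config and the hcl2shim unknown placeholder (the
real SDK code may differ in detail):

    // removeConfigUnknowns deletes unknown placeholders outright instead
    // of leaving zero values behind, so GetOk/GetOkExists report "not set".
    func removeConfigUnknowns(cfg map[string]interface{}) {
        for k, v := range cfg {
            switch val := v.(type) {
            case string:
                if val == hcl2shim.UnknownVariableValue {
                    delete(cfg, k)
                }
            case []interface{}:
                for _, item := range val {
                    if m, ok := item.(map[string]interface{}); ok {
                        removeConfigUnknowns(m)
                    }
                }
            case map[string]interface{}:
                removeConfigUnknowns(val)
            }
        }
    }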
We previously attempted to make the special diff apply behavior for nested
sets of objects work with attribute mode by totally discarding attribute
mode for all shims.
In practice, that is too broad a solution: there are lots of other shimming
behaviors that we _don't_ want when attribute mode is enabled. In
particular, we need to make sure that the difference between null and
empty can be seen in configuration.
As a compromise then, we will give all of the shims access to the real
ConfigMode and then do a more specialized fixup within the diff-apply
logic: we'll construct a synthetic nested block schema and then use that
to run our existing logic to deal with nested sets of objects, while
using the previous behavior in all other cases.
In effect, this means that the special new behavior only applies when the
provider uses the opt-in ConfigMode setting on a particular attribute,
and thus this change has much less risk of causing broad, unintended
regressions elsewhere.
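A hedged sketch of that fixup, assuming the configschema types from
Terraform Core; schemaFromObjectType is a hypothetical helper that
expands an object type into a block:

    // Wrap each set-of-objects attribute in a synthetic nested block so
    // the existing nested-set diff-apply logic can be reused; all other
    // parts of the schema keep the previous shim behavior.
    synthetic := &configschema.Block{
        BlockTypes: map[string]*configschema.NestedBlock{},
    }
    for name, attrS := range blockS.Attributes {
        if attrS.Type.IsSetType() && attrS.Type.ElementType().IsObjectType() {
            synthetic.BlockTypes[name] = &configschema.NestedBlock{
                Nesting: configschema.NestingSet,
                Block:   *schemaFromObjectType(attrS.Type.ElementType()), // hypothetical
            }
        }
    }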
Due to the lossiness of our legacy models for diff and state, shimming a
diff and then creating a state from it produces a different result than
shimming a state directly. That means that ImportStateVerify no longer
works as expected if there are any Computed attributes in the schema where
d.Set isn't called during Read.
Fixing that for every case would require some risky changes to the shim
behavior, so we're instead going to ask provider developers to address it
by adding `d.Set` calls where needed, since that is the contract for
"Computed" anyway -- a default value should be produced during Create, and
thus by extension during Import.
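For example, a hypothetical Read that honors the contract (resource,
client, and attribute names invented here):

    func resourceExampleRead(d *schema.ResourceData, meta interface{}) error {
        obj, err := meta.(*exampleClient).Get(d.Id()) // hypothetical API client
        if err != nil {
            return err
        }
        // Computed attributes must be written back explicitly so that an
        // imported-then-read state matches a created-then-read state.
        if err := d.Set("computed_default", obj.Default); err != nil {
            return err
        }
        return nil
    }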
However, a common situation where this occurs is with attributes marked
as "Removed", since all of the code that deals with them has generally
been deleted. We'll avoid problems in that case here by treating Removed
attributes as ignored for the purposes of ImportStateVerify.
This required exporting some functionality that was formerly unexported
in helper/schema, but it's a relatively harmless schema introspection
function so shouldn't be a big deal to export it.
The synthetic config value used to create the Apply diff should contain
no unknown config values. Any remaining UnknownConfigValues are due to
the placeholder being used for values yet to be computed, and these
should be marked NewComputed in the diff.
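A sketch of that cleanup, assuming the diff is a legacy
terraform.InstanceDiff and the placeholder is
hcl2shim.UnknownVariableValue:

    for _, attr := range diff.Attributes {
        if attr == nil {
            continue
        }
        if attr.New == hcl2shim.UnknownVariableValue {
            // A leftover placeholder is not a real planned value; record
            // the attribute as yet-to-be-computed instead.
            attr.New = ""
            attr.NewComputed = true
        }
    }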
The previous commit added this flag but did not implement it. Here we
implement it by adjusting the shape of the schema we return to Terraform
Core to mark the attribute as untyped, and then ensuring that this gets
handled correctly on the SDK side.
When running in v0.12-and-higher mode, this will cause the SDK to report
the type of the attribute as "any", effectively skipping type checking
on the Core side altogether and checking only in the SDK and provider
code.
The practical impact of this is to restore the v0.11-style checking
behavior of allowing object values to be missing certain attributes as
long as they are marked as optional in the schema. The SDK can do this
because it uses a unified schema model for both object values and nested
blocks, while Terraform Core only supports the idea of "optional" when
talking about attributes in nested blocks.
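As a hedged illustration, assuming the flag added by the previous commit
is named SkipCoreTypeCheck and is used alongside attribute mode (names
other than the schema package are invented):

    "endpoint": {
        Type:       schema.TypeList,
        Optional:   true,
        ConfigMode: schema.SchemaConfigModeAttr,
        // Reported to Terraform Core as "any", so Core performs no type
        // checking and object values may omit optional attributes, as in
        // v0.11; the SDK and provider still validate.
        SkipCoreTypeCheck: true,
        Elem: &schema.Resource{
            Schema: map[string]*schema.Schema{
                "host": {Type: schema.TypeString, Required: true},
                "port": {Type: schema.TypeInt, Optional: true},
            },
        },
    },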
This is a continuation of the pile of workarounds that also includes
the ConfigMode and AsSingle fields, allowing providers to selectively opt
out of new v0.12 behaviors in situations where they conflict with
decisions made in the design of the providers in our old world where
Terraform Core delegated _all_ validation to providers.
This is designed as an opt-in so that we can limit its impact only to
specific cases where it's needed and minimize the risk of regressions
elsewhere. Providers should use this sparingly only in situations where
prevailing usage disagrees with the new expectations of Terraform Core in
v0.12.
This commit only adds the flag, and does not implement any behavior for it
yet. That means this commit can exist in both the v0.11 and v0.12
codebases, allowing for API compatibility. A subsequent commit for v0.12
(not included in v0.11) will then implement this behavior.
The previous commit added a new flag to schema.Schema which is documented
to make a list with MaxItems: 1 be presented to Terraform Core as a single
value instead, giving a way to switch to non-list nested resources without
it being a breaking change for Terraform v0.11 users as long as it's done
prior to a provider's first v0.12-compatible release.
This is the implementation of that mechanism. It's intentionally
implemented as a suite of extra fixups rather than direct modifications to
existing shim code because we want to ensure that this has no effect
whatsoever on the result of a resource type that _isn't_ using AsSingle.
Although there is some small unit test coverage of the fixup steps here,
the primary testing for this is in the test provider, since the correct
integration of all of these fixup steps matters more than any individual
fixup step in isolation.
This setting indicates that an attribute defined as TypeList or TypeSet
should be presented to Terraform Core as a single value instead when
running in Terraform v0.12 or later. It has no effect for Terraform v0.10
or v0.11.
This commit just introduces the setting without any associated behavior,
so it can be included in both the v0.12 and v0.11 branches. A subsequent
commit only to the v0.12 branch will introduce the behavior as part of
the protocol version 5 shims.
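For example (attribute name invented), a provider might opt in like this:

    "identity": {
        Type:     schema.TypeList,
        Optional: true,
        MaxItems: 1,
        // Presented to Terraform v0.12+ as a single object value rather
        // than a list of one; ignored by v0.10 and v0.11.
        AsSingle: true,
        Elem: &schema.Resource{
            Schema: map[string]*schema.Schema{
                "type": {Type: schema.TypeString, Required: true},
            },
        },
    },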
This makes some slight adjustments to the shape of the schema we
present to Terraform Core without affecting how it is consumed by the
SDK and thus the provider. This mechanism is designed specifically to
avoid changing how the schema is interpreted by the SDK itself or by the
provider, so that prior behavior can be preserved in Terraform v0.11 mode.
This also includes a new rule that Computed-only (i.e. not also Optional)
schemas _always_ map to attributes, because that is a better mapping of
the intent: they are object values to be used in expressions. Nested
blocks conceptually represent nested objects that are in some sense
independent of what they are embedded in, and so they cannot themselves be
computed.
This allows a provider developer slightly more control over how an SDK
schema is mapped into the Terraform configuration language, overriding
some default assumptions.
ConfigMode overrides the default assumption that a schema with
an Elem of type *Resource is to be mapped to configuration as a nested
block, allowing mapping as an attribute containing an object type instead.
These behaviors only apply when a provider is being used with Terraform
v0.12 or later. They are ignored altogether in Terraform v0.11 mode, to
preserve compatibility. We are adding these primarily to allow the v0.12
version of a resource type schema to be specified to match the prevailing
usage of it in existing configurations, in situations where the default
mapping to v0.12 concepts is not appropriate.
This commit adds only the fields themselves and the InternalValidate rules
for them. A subsequent commit for Terraform v0.12 will add the behavior
as part of the protocol version 5 shim layer.
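A hedged usage sketch, assuming the constants are named
SchemaConfigModeAttr and SchemaConfigModeBlock (attribute names
invented):

    "rule": {
        Type:     schema.TypeSet,
        Optional: true,
        Computed: true,
        // Despite the *Resource Elem, present this to Terraform v0.12 as
        // an attribute of an object type rather than a nested block.
        ConfigMode: schema.SchemaConfigModeAttr,
        Elem: &schema.Resource{
            Schema: map[string]*schema.Schema{
                "protocol": {Type: schema.TypeString, Required: true},
                "ports": {
                    Type:     schema.TypeList,
                    Optional: true,
                    Elem:     &schema.Schema{Type: schema.TypeInt},
                },
            },
        },
    },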
When the user aborts input, it may end up as an unknown value, which
needs to be converted to null for PrepareConfig.
Allow PrepareConfig to accept null config values in order to fill in
missing defaults.
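A minimal sketch of that conversion, using cty directly:

    // An aborted prompt can leave an unknown; PrepareConfig treats null
    // as "not set", which is what allows defaults to be filled in.
    if !v.IsKnown() {
        v = cty.NullVal(v.Type())
    }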
This mirrors the change made for providers, so that default values can
be inserted into the config by the backend implementation. This is only
the interface and method name changes, it does not yet add any default
values.
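Assuming the backend interface mirrors the provider-side change, the
relevant methods would look roughly like this sketch:

    type Backend interface {
        // ConfigSchema returns the schema for the backend's configuration.
        ConfigSchema() *configschema.Block

        // PrepareConfig validates the given configuration and may return
        // a modified value with defaults inserted (none are inserted yet).
        PrepareConfig(cty.Value) (cty.Value, tfdiags.Diagnostics)

        // ... remaining methods unchanged ...
    }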
If set elements are computed, we can't be certain that they are actually
equal. Catch identical computed set hashes when they are added to the
set, and alter the set key slightly to keep the set counts correct.
In previous versions the interpolation string would be included in the
set, and different string values would cause the set to hash
differently, so this change is only activated for the new protocol.
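A sketch of the collision handling inside the set's add path, loosely
following the approach described above; isProto5 is the protocol gate:

    code := s.hash(item)
    if computed {
        code = "~" + code // computed elements are prefixed as before
        if isProto5() {
            // Identical computed hashes would collapse into one element;
            // append a counter so each keeps a distinct key and the set
            // count stays correct.
            tmpCode := code
            count := 0
            for _, exists := s.m[tmpCode]; exists; _, exists = s.m[tmpCode] {
                count++
                tmpCode = fmt.Sprintf("%s%d", code, count)
            }
            code = tmpCode
        }
    }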
Sets rely on diffs being complete for all elements, even when they are
unchanged. When encountering a DiffSuppressFunc inside a set, the diffs
were being dropped entirely, possibly causing set elements to be lost.
The new decoder is more precise, and unpacks the timeout block into a
single map, which ResourceTimeout.ConfigDecode was updated to handle.
However, we still need to work with legacy versions of Terraform that
use the old decoder.
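A hedged sketch of accepting both shapes in ConfigDecode (decodeMap is a
hypothetical shared helper):

    switch raw := c.Config[TimeoutsConfigKey].(type) {
    case map[string]interface{}:
        // New decoder: the timeouts block arrives as a single map.
        return t.decodeMap(raw)
    case []interface{}:
        // Legacy decoder: a list wrapping the same map.
        for _, item := range raw {
            if m, ok := item.(map[string]interface{}); ok {
                if err := t.decodeMap(m); err != nil {
                    return err
                }
            }
        }
    }
    return nil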
Historically helper/schema did not support non-primitive map attributes
because they cannot be represented unambiguously in flatmap. When we
initially implemented CoreConfigSchema here we mapped that situation to
a nested block of mode NestingMap, even though that'd never worked until
now, assuming that it'd be harmless because providers wouldn't be using
it.
It turns out that some providers are, in fact, incorrectly populating
a TypeMap schema with Elem: &schema.Resource, apparently under the false
assumption that it would constrain the keys allowed in the map. In
practice, helper/schema has just been ignoring this and treating such
attributes as map of string. (#20076)
In order to preserve the behavior of these existing incorrectly-specified
attribute definitions, here we mimic the helper/schema behavior by
presenting as an attribute of type map(string).
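The incorrectly-specified pattern being preserved looks like this
(attribute names invented):

    "metadata": {
        Type:     schema.TypeMap,
        Optional: true,
        // This Elem never constrained the map's keys: helper/schema has
        // always treated the attribute as a map of strings, and the shim
        // now presents it to Terraform Core as map(string) to match.
        Elem: &schema.Resource{
            Schema: map[string]*schema.Schema{
                "name": {Type: schema.TypeString, Optional: true},
            },
        },
    },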
These attributes have also been shown in some documentation as nested
blocks (with no equals sign), so that'll need to be fixed in user
configurations as they upgrade to Terraform 0.12. However, the existing
upgrade tool rules will take care of that as a natural consequence of the
name being indicated as an attribute in the schema, rather than as a block
type.
This fixes #20076.
With the introduction of explicit "null" in 0.12 it's possible for a value
that is unknown during plan to become a known null during apply, so we
need to slightly weaken our validation rules to accommodate that, in
particular skipping the validation of conflicting attributes if the result
could potentially be valid after the unknown values become known.
This change is in the codepath that is common to both 0.12 and 0.11
callers, but that's safe because 0.11 re-runs validation during the apply
step and so will still catch problems here, albeit in the apply step
rather than in the plan step, thus matching the 0.12 behavior. This new
behavior is a superset of the old in the sense that everything that was
valid before is still valid.
The implementation here also causes us to skip all other validation for
an attribute whose value is unknown. Most of the downstream validation
functions handle this directly anyway, but again this doesn't add any new
failure cases, and should clean up some of the rough edges we've seen with
unknown values in 0.11 once people upgrade to 0.12-compatible providers.
Any issues we now short-circuit during planning will still be caught
during apply.
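A sketch of the weakened conflict check (not the exact SDK code):

    func validateConflictingAttributes(k string, s *Schema, c *terraform.ResourceConfig) error {
        if len(s.ConflictsWith) == 0 {
            return nil
        }
        for _, conflictingKey := range s.ConflictsWith {
            if raw, ok := c.Get(conflictingKey); ok {
                if raw == hcl2shim.UnknownVariableValue {
                    // The conflicting value may still become null once it
                    // is known, so don't fail the plan; apply re-checks.
                    continue
                }
                return fmt.Errorf("%q: conflicts with %s", k, conflictingKey)
            }
        }
        return nil
    }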
While working on this I found that the existing "Not a list" test was not
actually testing the correct behavior, so this also includes a tweak to
that to ensure that it really is checking the "should be a list" path
rather than the "cannot be set" codepath it was inadvertently testing
before.
The added test in this commit, without the fix, will make d.Set return
the following error:
`Invalid address to set: []string{"ports", "0", "set"}`
This was due to the fact that setSet in field_writer_map tried to
convert a slice into a set by creating a temporary set schema and
calling writeField on it with the full address (`[]string{"ports", "0", "set"}`
in this case). However, the temporary schema covered only the set, not
the whole schema implied by the address, so the address should have been
`[]string{"set"}` to align with that schema.
This commit adds another variable (tempAddr) which contains only the
last entry of the address, i.e. the set key, so that it matches the
created schema.
This commit potentially fixes the problem described in #16331.
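A reproduction in the shape of the added test (schema details invented
to match the address in the error):

    r := &schema.Resource{
        Schema: map[string]*schema.Schema{
            "ports": {
                Type:     schema.TypeList,
                Optional: true,
                Elem: &schema.Resource{
                    Schema: map[string]*schema.Schema{
                        "set": {
                            Type:     schema.TypeSet,
                            Optional: true,
                            Elem:     &schema.Schema{Type: schema.TypeInt},
                        },
                    },
                },
            },
        },
    }
    d := schema.TestResourceDataRaw(t, r.Schema, map[string]string{})

    // Without the fix this returned:
    //   Invalid address to set: []string{"ports", "0", "set"}
    err := d.Set("ports", []interface{}{
        map[string]interface{}{"set": []interface{}{80, 443}},
    })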
The rest of Terraform is still using uint64 for this in various spots, but
we'll update that gradually later. We use int64 here because that matches
what's used in our protobuf definition, and unsigned integers are not
portable across all of the protobuf target languages anyway.
Since the SDK's schema system conflates attributes and nested blocks, it's
possible to state some nonsensical schema situations such as:
- A nested block is both optional but has MinItems > 0
- A nested block is entirely computed but has MinItems or MaxItems set
Both of these weird situations are handled here in the same way that the
existing helper/schema validation code would've handled them: by
effectively disabling the MinItems/MaxItems checks where they would've
been ignored before.
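A sketch of the rule as applied while building the core schema:

    minItems, maxItems := s.MinItems, s.MaxItems
    if s.Optional && minItems > 0 {
        // helper/schema never enforced MinItems on optional blocks.
        minItems = 0
    }
    if s.Computed && !s.Optional {
        // Wholly-computed blocks were never checked against either limit.
        minItems = 0
        maxItems = 0
    }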
The SDK has a mechanism that effectively makes it possible to declare an
attribute as being _conditionally_ required, which is not a concept that
Terraform Core is aware of.
Since this mechanism is in practice only used for a small UX improvement
in prompting for these values interactively when the environment variable
is not set, we avoid here introducing all of this complexity into the
plugin protocol by just having the provider selectively modify its schema
if it detects that such an attribute might be set dynamically.
This then prevents Terraform Core from validating the presence of the
argument or prompting for a new value for it, allowing the null value to
pass through into the provider so that the default value can be generated
again dynamically.
This is a kinda-kludgey solution which we're accepting here because the
alternative would be a much-more-complex two-pass decode operation within
Core itself, and that doesn't seem worth it.
This fixes #19139.
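A sketch of the schema adjustment (not the verbatim SDK code):

    // If a required attribute can generate its default dynamically (e.g.
    // an environment variable read by a DefaultFunc), report it to Core
    // as optional so a null can pass through to the provider.
    if s.Required && s.DefaultFunc != nil {
        coreAttr.Required = false
        coreAttr.Optional = true
    }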
This makes sure the diff is generated with the matching set ids from
helper/schema.
Update the tests to add ID fields to the state, which will exist in
practice, since any state passing through the shims will have the ID
inserted.