If a set element is nil in validateConfigNulls, we don't want to
append that element to the diagnostic path, since it doesn't offer any
useful info to the user.
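A minimal sketch of that rule in Go with go-cty; the helper name and return shape are illustrative, not the SDK's actual signature:

```go
package sketch

import "github.com/zclconf/go-cty/cty"

// nullElementPaths is a sketch: collect a diagnostic path for each null
// element of a known, non-null collection. In a set the iterator's "key"
// is the element itself, so for a null set element the key adds nothing
// useful to the path and is skipped.
func nullElementPaths(v cty.Value, path cty.Path) []cty.Path {
	var paths []cty.Path
	for it := v.ElementIterator(); it.Next(); {
		kv, ev := it.Element()
		if !ev.IsNull() {
			continue
		}
		p := path
		if !kv.IsNull() {
			// lists and maps have a meaningful index to point at
			p = path.Index(kv)
		}
		paths = append(paths, p)
	}
	return paths
}
```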
Fix two specific panics in the SDK when reading nil or computed maps from
various configurations. The legacy SDK code is too dependent on undefined
behavior to attempt to find and fix the root cause at this point.
Since the code is essentially frozen for future development, these
changes are specifically targeted to only prevent panics from within
providers. Because any code affected by these changes would have
panicked, there cannot be anything depending on the behavior, and these
changes should be safe to fix.
Load private data for read, so the resource can get its configured
timeouts if they exist.
Ensure PlanResourceChange returns the saved private data when there is
an empty diff.
Handle the timeout decoding into Meta in the PlanResourceChange, so that
it's always there for later operations.
The helper/schema diff process loses empty strings, causing them to show
up as unset (null) during apply. Besides failing to show as set by
GetOk, the absence of the value also triggers the schema to insert a
default value again during apply.
It would also be preferable if the defaults weren't re-evaluated during
ApplyResourceChange, but that would require a more invasive
patch to the field readers, and ensuring the empty string is stored in
the plan should block the default.
When upgrading from a flatmap state, unset blocks would not exist in the
state, while they will be represented as empty in the new cty.Value. This
will cause an unexpected diff in the first plan after upgrade. This
situation may normally be applied with no impact, but some providers may
have unexpected behavior, and if the attributes force replacement it may
require manual alteration of the state to complete the upgrade.
When a Diff contains a NewRemoved attribute (which would have been null
in the planned state), the final value is often the "zero" value string
for the type, which the provider itself still applies to the state.
Rather than risking a change of behavior in helper/schema by fixing the
inconsistency, we'll remove the NewRemoved attributes after apply to
prevent further issues resulting from the change in planned value.
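A hedged sketch of that cleanup against the legacy terraform types (the function name is illustrative; `ResourceAttrDiff.NewRemoved` and the flatmap `InstanceState.Attributes` are the real fields):

```go
package sketch

import "github.com/hashicorp/terraform/terraform"

// stripNewRemoved is a sketch: after apply, delete from the state any
// attribute the diff marked NewRemoved, since the provider will often
// have written the type's zero value ("" for strings) instead of
// omitting it.
func stripNewRemoved(diff *terraform.InstanceDiff, state *terraform.InstanceState) {
	if diff == nil || state == nil {
		return
	}
	for k, attrDiff := range diff.Attributes {
		if attrDiff != nil && attrDiff.NewRemoved {
			delete(state.Attributes, k)
		}
	}
}
```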
Re-seeding the PRNG every time only serves to make the output an
obfuscated timestamp. On Windows, with a low clock resolution, this
manifests itself by outputting the same value on calls within the
minimum time delta of the clock.
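The usual Go fix is to seed a single source once and reuse it; a sketch (the helper name is illustrative):

```go
package sketch

import (
	"math/rand"
	"time"
)

// Seed once at package initialization and reuse the source. Re-seeding on
// every call makes the output a pure function of the timestamp, so two
// calls within one clock tick return the same "random" value.
var seededRand = rand.New(rand.NewSource(time.Now().UnixNano()))

func randomString(n int) string {
	const chars = "abcdefghijklmnopqrstuvwxyz0123456789"
	b := make([]byte, n)
	for i := range b {
		b[i] = chars[seededRand.Intn(len(chars))]
	}
	return string(b)
}
```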
Reference: https://github.com/terraform-providers/terraform-provider-aws/issues/8828
Prior to Terraform 0.12, providers were not required to implement the `MigrateState` function when increasing the `SchemaVersion` above `0`. When `MigrateState` is undefined, there is effectively no flatmap state difference between version 0 and the defined `SchemaVersion` or the lowest `StateUpgrader` `Version` (whichever is lower), so here we remove the error and increase the schema version automatically.
Send Private data blob through ReadResource as well. This will allow for
extra flexibility for future providers that may want to pass data out of
band through to their resource Read functions.
Having removed the methods, it is straightforward to mechanically update
this file to get rid of all references to the "legacy schema". There is
now only one config schema type to deal with in the SDK.
This experiment is no longer needed for handling computed blocks, and
since the legacy SDK can't reasonably handle Dynamic types, we need to
remove it before the final release.
Remove the LegacySchema functions as well, since handling
SkipCoreTypeCheck was the only thing left for them to do.
When handling the json state in UpgradeResourceState, the schema
must be what core uses, because that is the schema used for
encoding/decoding the json state.
When converting from flatmap to json state, the legacy schema will be
used to decode the flatmap to a cty value, but the resulting json will
be encoded using the CoreConfigSchema to match what core expects.
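A hedged sketch of that two-schema dance; `legacyTy`/`coreTy` stand in for the implied types of the SDK's two derived schemas, and the hcl2shim import path is from this era of the codebase:

```go
package sketch

import (
	"github.com/hashicorp/terraform/config/hcl2shim"
	"github.com/zclconf/go-cty/cty"
	ctyjson "github.com/zclconf/go-cty/cty/json"
)

// upgradeFlatmapState is a sketch: decode the flatmap attributes with the
// legacy (SDK-shaped) implied type, then re-encode as JSON with the
// CoreConfigSchema's implied type, since core decodes JSON state with its
// own schema.
func upgradeFlatmapState(attrs map[string]string, legacyTy, coreTy cty.Type) ([]byte, error) {
	val, err := hcl2shim.HCL2ValueFromFlatmap(attrs, legacyTy)
	if err != nil {
		return nil, err
	}
	return ctyjson.Marshal(val, coreTy)
}
```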
When normalizing the state during read, if the resource was previously
imported, most nil-able values will be nil, and we need to prefer the
values returned by the latest Read operation. This didn't come up
before, because Read is usually working with a state created by plan and
Apply, which has already shaped the state with the expected empty values.
Having the src value preferred only during Apply better follows the
intent of this function, which should allow Read to return whatever
values it deems necessary. Since Read and Plan use the same
normalization logic, the implied Read before plan should take care of any
perpetual diffs.
The new type system only has a Number type, but helper schema
differentiates between Int and Float values. Verify that a new config
value is an integer during Validate, because the existing WeakDecode
validation will decode a float value into an integer while the config
FieldReader will attempt to parse the float exactly.
Since we're limiting this to protoV5, we can be certain that any valid
config value will be converted to an `int` type by the shims. The only
case where an integral float value will appear is if the integer is out
of range for the system's `int` type, but we also need to prevent that
anyway since it would fail to read in the same manner.
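A sketch of the Validate-time check using cty's Number; the function name is illustrative:

```go
package sketch

import (
	"fmt"
	"math/big"

	"github.com/zclconf/go-cty/cty"
)

// validateIntValue is a sketch: a cty.Number must be a whole number, and
// must fit in an integer type, since an out-of-range integral float would
// fail to read later anyway.
func validateIntValue(v cty.Value) error {
	bf := v.AsBigFloat()
	if !bf.IsInt() {
		return fmt.Errorf("%s is not an integer", bf.String())
	}
	if _, acc := bf.Int64(); acc != big.Exact {
		return fmt.Errorf("%s is out of range for int64", bf.String())
	}
	return nil
}
```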
Terraform core would previously ignore unexpected attributes found in
the state, but since we now need to encode/decode the state according
to the schema, the attributes must match the schema.
On any state upgrade, remove attributes no longer present in the schema
from the state. The only change this requires from providers is that
going forward, removal of an attribute is considered a schema change, and
requires an increment of the SchemaVersion in order to trigger the
removal of the attributes from state.
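In effect the upgrade path does something like this sketch over the flatmap form (names illustrative):

```go
package sketch

import "strings"

// pruneRemovedAttributes is a sketch: during a state upgrade, drop any
// flatmap entries whose top-level attribute is no longer declared in the
// schema, so the state can round-trip through the schema's type.
func pruneRemovedAttributes(attrs map[string]string, declared map[string]bool) {
	for k := range attrs {
		root := k
		if i := strings.Index(k, "."); i >= 0 {
			root = k[:i]
		}
		if !declared[root] {
			delete(attrs, k)
		}
	}
}
```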
Computed primitive values must see the UnknownConfigValue or they are
assumed to be unchanged. Restrict the usage of the protov5 ComputedKeys
to containers.
First check the ComputedValues field in the config when reading a config
field, so that we can detect whether there is an unknown value in a
container. Since maps, lists, and sets are verified to exist by first
looking for a "length", an unknown config value in a container would
otherwise be ignored.
The grpc server does not shutdown when the listener is closed. Since
tests aren't run through go-plugin, which has a separate RPC Shutdown
channel to stop the server, we need to track and stop the server
directly.
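A sketch of the harness fix: keep the *grpc.Server handle so the test can stop it itself (the function shape is illustrative; grpc.NewServer, Serve, and Stop are the real API):

```go
package sketch

import (
	"net"

	"google.golang.org/grpc"
)

// startTestServer is a sketch: hand the *grpc.Server back to the test so
// it can call Stop directly, because closing the listener alone does not
// shut the server down.
func startTestServer(register func(*grpc.Server)) (*grpc.Server, net.Addr, error) {
	lis, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return nil, nil, err
	}
	srv := grpc.NewServer()
	register(srv)
	go srv.Serve(lis) // Serve returns once srv.Stop is called
	return srv, lis.Addr(), nil
}
```

A test can then defer srv.Stop() immediately after startup.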
Make sure values removed from a map during apply are not copied into the
new map. The broken test is no longer valid in this case, and the
updated diff.Apply should prevent the case it used to cover.
removeConfigUnknowns needs to remove the value completely from the config
map. Removing this value allows GetOk and GetOkExists to indicate if the
value was set in the config in the case of an Optional+Computed
attribute.
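A sketch of the removal over the shimmed flat config map; `unknownValue` mirrors the hcl2shim unknown placeholder string:

```go
package sketch

// unknownValue stands in for the magic placeholder the shims use for
// unknown values in a flat config map (hcl2shim.UnknownVariableValue).
const unknownValue = "74D93920-ED26-11E3-AC10-0800200C9A66"

// removeConfigUnknowns is a sketch: delete unknown values outright instead
// of zeroing them, so GetOk/GetOkExists report the attribute as unset.
func removeConfigUnknowns(cfg map[string]interface{}) {
	for k, v := range cfg {
		switch v := v.(type) {
		case string:
			if v == unknownValue {
				delete(cfg, k)
			}
		case map[string]interface{}:
			removeConfigUnknowns(v)
		case []interface{}:
			for _, e := range v {
				if m, ok := e.(map[string]interface{}); ok {
					removeConfigUnknowns(m)
				}
			}
		}
	}
}
```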
We previously attempted to make the special diff apply behavior for nested
sets of objects work with attribute mode by totally discarding attribute
mode for all shims.
In practice, that is too broad a solution: there are lots of other shimming
behaviors that we _don't_ want when attribute mode is enabled. In
particular, we need to make sure that the difference between null and
empty can be seen in configuration.
As a compromise then, we will give all of the shims access to the real
ConfigMode and then do a more specialized fixup within the diff-apply
logic: we'll construct a synthetic nested block schema and then use that
to run our existing logic to deal with nested sets of objects, while
using the previous behavior in all other cases.
In effect, this means that the special new behavior only applies when the
provider uses the opt-in ConfigMode setting on a particular attribute,
and thus this change has much less risk of causing broad, unintended
regressions elsewhere.
Due to the lossiness of our legacy models for diff and state, shimming a
diff and then creating a state from it produces a different result than
shimming a state directly. That means that ImportStateVerify no longer
works as expected if there are any Computed attributes in the schema where
d.Set isn't called during Read.
Fixing that for every case would require some risky changes to the shim
behavior, so we're instead going to ask provider developers to address it
by adding `d.Set` calls where needed, since that is the contract for
"Computed" anyway -- a default value should be produced during Create, and
thus by extension during Import.
However, a common situation where this occurs is attributes marked as
"Removed": since all of the code that deals with them has generally
been deleted, we'll avoid problems in that case here by treating Removed
attributes as ignored for the purposes of ImportStateVerify.
This required exporting some functionality that was formerly unexported
in helper/schema, but it's a relatively harmless schema introspection
function so shouldn't be a big deal to export it.
In studying existing providers, we've found a pattern we weren't
previously accounting for: using a nested block type to represent a group of
arguments that relate to a particular feature that is always enabled but
where it improves configuration readability to group all of its settings
together in a nested block.
The existing NestingSingle was not a good fit for this because it is
designed under the assumption that the presence or absence of the block
has some significance in enabling or disabling the relevant feature, and
so for these always-active cases we'd generate a misleading plan where
the settings for the feature appear totally absent, rather than showing
the default values that will be selected.
NestingGroup is, therefore, a slight variation of NestingSingle where
presence vs. absence of the block is not distinguishable (it's never null)
and instead its contents are treated as unset when the block is absent.
This then in turn causes any default values associated with the nested
arguments to be honored and displayed in the plan whenever the block is
not explicitly configured.
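A sketch of the decoding rule with go-cty (the function shape is illustrative):

```go
package sketch

import "github.com/zclconf/go-cty/cty"

// groupBlockValue is a sketch of NestingGroup decoding: an absent block
// still yields a known object whose attributes are all null (never a null
// object), so defaults for the nested arguments can apply and appear in
// the plan.
func groupBlockValue(present bool, configured map[string]cty.Value, atys map[string]cty.Type) cty.Value {
	if present {
		return cty.ObjectVal(configured)
	}
	attrs := make(map[string]cty.Value, len(atys))
	for name, aty := range atys {
		attrs[name] = cty.NullVal(aty)
	}
	return cty.ObjectVal(attrs)
}
```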
The current SDK cannot activate this mode, but that's okay because its
"legacy type system" opt-out flag allows it to force a block to be
processed in this way anyway. We're adding this now so that we can
introduce the feature in a future SDK without causing a breaking change
to the protocol, since the set of possible block nesting modes is not
extensible.
The synthetic config value used to create the Apply diff should contain
no unknown config values. Any remaining UnknownConfigValues were due to
that being used as a placeholder for values yet to be computed, and
these should be marked NewComputed in the diff.
Stripping these was a patch for some provider behavior which was fixed
in other ways, and is no longer needed.
Removing this allows us to implement correct CustomizeDiffFuncs in
providers so that they can mark fields with empty values as computed
during a plan.
A list-like attribute containing null values will present a list to
helper/schema with nils, which can cause panics. Since null values were
not possible in configuration before HCL2 and not supported by the
legacy SDK, return an error to the user.
It turns out that collections containing only unknowns could be lost,
meaning there wasn't a direct correlation between the unknown and null
value which would have otherwise been restored.
The legacy diff process inserts unknown values into an optional+computed
map. Fix these up in post-plan normalization process, by looking for
known strings that were changed to unknown.
Because schema.ResourceDiff can't differentiate between unknown
values and new computed values, unknowns can be lost during an update.
If a planned value converted an unknown to a null, restore the unknown
so that it can be correctly replaced in the final plan.
Add the (forces new resource) annotation to the diff output for provider
test failures when we can. This helps providers narrow down what might
be triggering changes when encountering test failures with the new SDK.
The previous commit added this flag but did not implement it. Here we
implement it by adjusting the shape of schema we return to Terraform Core
to mark the attribute as untyped and then ensure that gets handled
correctly on the SDK side.
When running in v0.12-and-higher mode, this will cause the SDK to report
the type of the attribute as "any", effectively skipping type checking
on the Core side altogether and checking only in the SDK and provider
code.
The practical impact of this is to restore the v0.11-style checking
behavior of allowing object values to be missing certain attributes as
long as they are marked as optional in the schema. The SDK can do this
because it uses a unified schema model for both object values and nested
blocks, while Terraform Core only supports the idea of "optional" when
talking about attributes in nested blocks.
This is a continuation of the pile of workarounds that also includes
the ConfigMode and AsSingle fields, allowing providers to selectively opt
out of new v0.12 behaviors in situations where they conflict with
decisions made in the design of the providers in our old world where
Terraform Core delegated _all_ validation to providers.
This is designed as an opt-in so that we can limit its impact only to
specific cases where it's needed and minimize the risk of regressions
elsewhere. Providers should use this sparingly only in situations where
prevailing usage disagrees with the new expectations of Terraform Core in
v0.12.
This commit only adds the flag, and does not implement any behavior for it
yet. That means this commit can exist in both the v0.11 and v0.12
codebases, allowing for API compatibility. A subsequent commit for v0.12
(not included in v0.11) will then implement this behavior.
It's important to preserve the provider address because during the destroy
phase of provider tests we'll use the references in the state to determine
which providers are required, and so without this attempts to override
the provider using the "provider" meta-argument can cause failures at
destroy time when the wrong provider gets selected.
(This is particularly acute in the google-beta provider tests because that
provider is _always_ used with provider = "google-beta" to override the
default behavior of using the normal "google" provider.)
The previous commit added a new flag to schema.Schema which is documented
to make a list with MaxItems: 1 be presented to Terraform Core as a single
value instead, giving a way to switch to non-list nested resources without
it being a breaking change for Terraform v0.11 users as long as it's done
prior to a provider's first v0.12-compatible release.
This is the implementation of that mechanism. It's intentionally
implemented as a suite of extra fixups rather than direct modifications to
existing shim code because we want to ensure that this has no effect
whatsoever on the result of a resource type that _isn't_ using AsSingle.
Although there is some small unit test coverage of the fixup steps here,
the primary testing for this is in the test provider since the integration
of all of these fixup steps in the correct order is the more important
result than any of the intermediate fixup steps.
This setting indicates that an attribute defined as TypeList or TypeSet
should be presented to Terraform Core as a single value instead when
running in Terraform v0.12 or later. It has no effect for Terraform v0.10
or v0.11.
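Illustratively, a provider opt-in might look like this; the resource attribute is made up, and `AsSingle` is the setting just described:

```go
package sketch

import "github.com/hashicorp/terraform/helper/schema"

// exampleSchema is hypothetical: "listener" remains a TypeList with
// MaxItems: 1 for Terraform v0.10/v0.11 users, while AsSingle tells the
// v0.12 protocol shims to present it as a single object value instead.
var exampleSchema = map[string]*schema.Schema{
	"listener": {
		Type:     schema.TypeList,
		Optional: true,
		MaxItems: 1,
		AsSingle: true,
		Elem: &schema.Resource{
			Schema: map[string]*schema.Schema{
				"port": {Type: schema.TypeInt, Required: true},
			},
		},
	},
}
```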
This commit just introduces the setting without any associated behavior,
so it can be included in both the v0.12 and v0.11 branches. A subsequent
commit only to the v0.12 branch will introduce the behavior as part of
the protocol version 5 shims.
This will allow resources to return an unexpected change to set blocks
and attributes; otherwise we could mask these changes during
normalization.
Change the "plan" argument in normalizeNullValues to "preferDst" to more
accurately describe what the option is doing, since it no longer applies
only to PlanResourceChange.
This should be the final change from removing the flatmap normalization.
Since we're no longer trying to maintain a consistent zero or null value
in the flatmap config, but rather to preserve the previously applied
value, ReadResource also needs to apply the normalizeNullValues step in
order to prevent unexpected diffs.
This method was added early on when the diff was being applied as the
legacy code would have done, which is no longer the case. Everything
that normalizeFlatmapContainers does should be covered by the
combination of the initial diff.Apply and the normalizeNullValues on the
final cty.Value.
This makes some slight adjustments to the shape of the schema we
present to Terraform Core without affecting how it is consumed by the
SDK and thus the provider. This mechanism is designed specifically to
avoid changing how the schema is interpreted by the SDK itself or by the
provider, so that prior behavior can be preserved in Terraform v0.11 mode.
This also includes a new rule that Computed-only (i.e. not also Optional)
schemas _always_ map to attributes, because that is a better mapping of
the intent: they are object values to be used in expressions. Nested
blocks conceptually represent nested objects that are in some sense
independent of what they are embedded in, and so they cannot themselves be
computed.
This allows a provider developer slightly more control over how an SDK
schema is mapped into the Terraform configuration language, overriding
some default assumptions.
ConfigMode overrides the default assumption that a schema with
an Elem of type *Resource is to be mapped to configuration as a nested
block, allowing mapping as an attribute containing an object type instead.
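For example (attribute names made up), a schema that would otherwise map to a nested block can opt into attribute mode:

```go
package sketch

import "github.com/hashicorp/terraform/helper/schema"

// exampleSchema is hypothetical: ConfigMode overrides the default block
// mapping for a schema with an Elem of type *Resource, so "rule" is
// presented to Terraform v0.12 as an attribute with an object element type.
var exampleSchema = map[string]*schema.Schema{
	"rule": {
		Type:       schema.TypeList,
		Optional:   true,
		Computed:   true,
		ConfigMode: schema.SchemaConfigModeAttr,
		Elem: &schema.Resource{
			Schema: map[string]*schema.Schema{
				"action": {Type: schema.TypeString, Optional: true},
			},
		},
	},
}
```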
These behaviors only apply when a provider is being used with Terraform
v0.12 or later. They are ignored altogether in Terraform v0.11 mode, to
preserve compatibility. We are adding these primarily to allow the v0.12
version of a resource type schema to be specified to match the prevailing
usage of it in existing configurations, in situations where the default
mapping to v0.12 concepts is not appropriate.
This commit adds only the fields themselves and the InternalValidate rules
for them. A subsequent commit for Terraform v0.12 will add the behavior
as part of the protocol version 5 shim layer.
As we've improved the cty.Value normalization, we need to remove
normalization procedures from the flatmap handling. Keeping the empty
containers in the flatmap will prevent unexpected nils from being added
to some schema configurations.
Use objchange.NormalizeObjectFromLegacySDK to ensure that all objects
returned from the provider match what is expected based on the
configuration according to the schemas.
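Usage is a thin wrapper at each point where a provider response crosses back into core; a sketch (import paths from this era of the codebase):

```go
package sketch

import (
	"github.com/hashicorp/terraform/configs/configschema"
	"github.com/hashicorp/terraform/plans/objchange"
	"github.com/zclconf/go-cty/cty"
)

// normalizeReturn is a sketch: reshape whatever the provider returned so
// that empty and null containers match what core expects from the schema.
func normalizeReturn(returned cty.Value, block *configschema.Block) cty.Value {
	return objchange.NormalizeObjectFromLegacySDK(returned, block)
}
```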
Providers were not strict (and were not forced to be) about customizing
the diff when a computed attribute needed to be updated during apply.
The fix we have in place to prevent loss of information during the
helper/schema apply process would add a single missing value back in.
The first place this was caught was when we attempt to fix up the
flatmapped attributes. The 1->0 count error is now better handled by our
cty.Value normalization step, so we can remove the special apply case
here altogether.
The next place is in normalizeNullValues, and since the intent was to
re-insert missing zero-value lists and sets, adding a check for a length
of 0 protects us from adding in extra elements.
The new test fixture emulates common provider behavior of re-computing
values without customizing the diff. Since we can work around it, and
core will provide appropriate warnings, the shims should try to
maintain the legacy behavior.
The NewExtra values are stored outside the diff from plan, and the
original keys may not contain the ~ prefix. Adding the NewExtra back
into the diff with the mismatched key was causing an entire new set
element to be populated. Since this symbol isn't used to apply the diff
in helper/schema, we can simply strip them out.
The hcl2shims will always add in the timeouts block, because there's no
way to differentiate a null single block from an empty one in the
flatmapped state. Since we are only concerned with keeping the prior
timeouts value, always set the new value to null, and then copy over the
prior value if it exists.
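A sketch of that copy-forward with go-cty, assuming a top-level "timeouts" attribute on the object values (names illustrative):

```go
package sketch

import "github.com/zclconf/go-cty/cty"

// copyPriorTimeouts is a sketch: null out the planned timeouts block (the
// shims always insert one, since flatmap can't tell a null single block
// from an empty one), then restore the prior value when it exists.
func copyPriorTimeouts(prior, planned cty.Value) cty.Value {
	ty := planned.Type()
	if !ty.IsObjectType() || !ty.HasAttribute("timeouts") {
		return planned
	}
	attrs := planned.AsValueMap()
	attrs["timeouts"] = cty.NullVal(ty.AttributeType("timeouts"))
	if prior.Type().IsObjectType() && prior.Type().HasAttribute("timeouts") {
		if pt := prior.GetAttr("timeouts"); !pt.IsNull() {
			attrs["timeouts"] = pt
		}
	}
	return cty.ObjectVal(attrs)
}
```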
When the user aborts input, it may end up as an unknown value, which
needs to be converted to null for PrepareConfig.
Allow PrepareConfig to accept null config values in order to fill in
missing defaults.
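go-cty already provides the conversion; a sketch of the call in front of PrepareConfig (function name illustrative):

```go
package sketch

import "github.com/zclconf/go-cty/cty"

// prepareInputConfig is a sketch: aborted input can leave unknown values
// in the config, so convert them to nulls that PrepareConfig can fill
// with defaults.
func prepareInputConfig(configVal cty.Value) cty.Value {
	return cty.UnknownAsNull(configVal)
}
```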
The new normalization should make preventing those changes unnecessary,
and will also prevent extra empty elements from being added when
resources are refreshed.
This mirrors the change made for providers, so that default values can
be inserted into the config by the backend implementation. This is only
the interface and method name changes, it does not yet add any default
values.
cty.Value.AsValueMap can return nil if called on an empty map or object.
The logic above was dealing with that case for maps, but object types
were falling through into this codepath and panicking when trying to
assign a new key into the nil dstMap.
This also includes a bonus fix where we were calling ty.ElementType in
a switch case that accepts object types. Object types don't have a single
element type, so we can't call ElementType on those (that also panics)
but we _can_ use the type of the value we selected from src to construct
our placeholder null value.
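Both fixes in one small demonstration with go-cty:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// AsValueMap on an empty map or object can return nil; allocate before
	// assigning, or the write below would panic.
	dstMap := cty.EmptyObjectVal.AsValueMap()
	if dstMap == nil {
		dstMap = make(map[string]cty.Value)
	}

	// Object types have no single ElementType (calling it panics), so take
	// the type from the value selected out of src when building a null
	// placeholder.
	src := cty.ObjectVal(map[string]cty.Value{"name": cty.StringVal("a")})
	v := src.GetAttr("name")
	dstMap["name"] = cty.NullVal(v.Type())

	fmt.Printf("%#v\n", cty.ObjectVal(dstMap))
}
```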
This comment seems to imply that you can put CRUD functions on nested schema.Resource objects.
The comment goes back to the first commit to this file, but AFAICT this functionality has never been implemented.
Due to the imprecision of our shimming from the legacy SDK type system to
the new Terraform Core type system, the legacy SDK produces a number of
inconsistencies that produce only minor quirky behavior or broken
edge-cases. To retain compatibility with those existing weird behaviors,
the legacy SDK opts out of our safety checks.
The intent here is to allow existing providers to continue to do their
previous unsafe behaviors for now, accepting that this will allow certain
quirky bugs from previous releases to persist, and then gradually migrate
away from the legacy SDK and remove this opt-out on a per-resource basis
over time.
As with the apply-time safety check opt-out, this is reserved only for
the legacy SDK and must not be used in any new SDK implementations. We
still include any inconsistencies as warnings in the logs as an aid to
anyone debugging weird behavior, so that they can see situations where
blame may be misplaced in the user-visible error messages.
Terraform core expects a sane state even when the provider returns an
error. Make sure the prior state is always the default value to return,
and then always attempt to process any state returned by provider.Apply.
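The rule as a sketch, with the state type reduced to a stand-in:

```go
package sketch

// State stands in for the SDK's instance state type.
type State struct{ Attributes map[string]string }

// applyWithSaneState is a sketch: default to returning the prior state so
// core never sees a nil state on error, but still prefer any state the
// provider's Apply managed to produce alongside that error.
func applyWithSaneState(prior *State, apply func() (*State, error)) (*State, error) {
	newState := prior
	applied, err := apply()
	if applied != nil {
		newState = applied
	}
	return newState, err
}
```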
This was changed in the single attribute test cases, but the AttrPair
test is used a lot for data sources. As far as tests are concerned, 0 and
unset should be treated equally for flatmapped collections.
Check attributes on null objects, and fill in unknowns. If we're
evaluating the object, it either means we are at the top level, or a
NestingSingle block was present, and in either case we need to treat the
attributes as null rather than the entire object.
Switch on the block types rather than Nesting, so we don't need to add
any logic to change between List/Tuple or Map/Object when DynamicPseudoType
is involved.
The shim layer for the legacy SDK type system is not precise enough to
guarantee it will produce identical results between plan and apply. In
particular, values that are null during plan will often become zero-valued
during apply.
To avoid breaking those existing providers while still allowing us to
introduce this check in the future, we'll introduce a rather-hacky new
flag that allows the legacy SDK to signal that it is the legacy SDK and
thus disable the check.
Once we start phasing out the legacy SDK in favor of one that natively
understands our new type system, we can stop setting this flag and thus
get the additional safety of this check without breaking any
previously-released providers.
No other SDK is permitted to set this flag, and we will remove it if we
ever introduce protocol version 6 in future, assuming that any provider
supporting that protocol will always produce consistent results.