If an attribute was not wholly known, helper/schema was skipping the
`validateType` function, which (among other things) returned deprecation
messages. This PR checks for deprecation before returning when skipping
validateType.
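A rough sketch of the shape of the fix (the surrounding function and helper names are illustrative; only `Deprecated` mirrors the real `Schema` field):

```go
import "fmt"

// Schema is a simplified stand-in for the helper/schema type.
type Schema struct {
	Deprecated string
}

func validateType(k string, raw interface{}, s *Schema) ([]string, []error) {
	// ... full type validation, which also reports deprecations ...
	return nil, nil
}

// validate now emits the deprecation warning even when the value is not
// wholly known and validateType is skipped.
func validate(k string, raw interface{}, s *Schema, whollyKnown bool) ([]string, []error) {
	if !whollyKnown {
		var ws []string
		if s.Deprecated != "" {
			ws = append(ws, fmt.Sprintf("%q: [DEPRECATED] %s", k, s.Deprecated))
		}
		return ws, nil
	}
	return validateType(k, raw, s)
}
```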
You can signal the same information in `Read` by setting an empty ID if the object does not exist. Implementing `Exists` is not the only way to do so, and in some providers it is not even the preferred way.
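For example, a `Read` that clears the ID when the remote object is gone (the client and object types here are hypothetical; `d.SetId("")` is the real mechanism):

```go
import "github.com/hashicorp/terraform/helper/schema"

// exampleClient and exampleObject are hypothetical stand-ins for a provider's API.
type exampleClient struct{}
type exampleObject struct{ Name string }

func (c *exampleClient) GetObject(id string) (*exampleObject, error) { return nil, nil }

func resourceExampleRead(d *schema.ResourceData, meta interface{}) error {
	obj, err := meta.(*exampleClient).GetObject(d.Id())
	if err != nil {
		return err
	}
	if obj == nil {
		// The remote object no longer exists: clearing the ID removes the
		// resource from state, without needing an Exists implementation.
		d.SetId("")
		return nil
	}
	return d.Set("name", obj.Name)
}
```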
Nil values were not previously expected during validation, but they can
appear in some situations with the new protocol. Add checks to prevent
using zero reflect.Values.
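The guard is essentially this (the function name is illustrative):

```go
import "reflect"

// A zero reflect.Value (e.g. reflect.ValueOf(nil)) has no Kind or Type, and
// most of its methods panic, so bail out before inspecting it.
func validateValue(v reflect.Value) error {
	if !v.IsValid() {
		return nil // nil input: nothing to validate
	}
	switch v.Kind() {
	case reflect.Map, reflect.Slice:
		// ... kind-specific validation continues here ...
	}
	return nil
}
```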
If there are unknowns, the block may have come from a dynamic
declaration, and we can't validate MinItems. Once the blocks are
expanded, we will get the full config for validation without any unknown
values.
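A minimal sketch of the check, assuming a cty block value (the function name and error text are illustrative):

```go
import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// Skip the MinItems check when the block value is null or contains unknowns;
// a dynamic declaration will be expanded into known values before apply.
func validateMinItems(blockVal cty.Value, min int) []error {
	if blockVal.IsNull() || !blockVal.IsWhollyKnown() {
		return nil
	}
	if blockVal.LengthInt() < min {
		return []error{fmt.Errorf("expected at least %d blocks, got %d", min, blockVal.LengthInt())}
	}
	return nil
}
```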
If a set element is nil in validateConfigNulls, we don't want to
append that element to the diagnostic path, since it doesn't offer any
useful info to the user.
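Roughly, the element walk looks like this (names are illustrative, not the SDK's):

```go
import "github.com/zclconf/go-cty/cty"

// collectNulls records the paths of null elements. For a null set element we
// report the parent path as-is: appending the element's own index adds
// nothing useful for the user.
func collectNulls(val cty.Value, path cty.Path) []cty.Path {
	var found []cty.Path
	for it := val.ElementIterator(); it.Next(); {
		kv, ev := it.Element()
		if ev.IsNull() {
			found = append(found, path)
			continue
		}
		if ev.CanIterateElements() {
			found = append(found, collectNulls(ev, path.Index(kv))...)
		}
	}
	return found
}
```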
Fix 2 specific panics in the sdk when reading nil or computed maps from
various configurations. The legacy sdk code is too dependent on undefined
behavior to attempt to find and fix the root cause at this point.
Since the code is essentially frozen for future development, these
changes are specifically targeted to only prevent panics from within
providers. Because any code affected by these changes would have
panicked, there cannot be anything depending on the behavior, and these
should be safe to fix.
Load private data for read, so the resource can get its configured
timeouts if they exist.
Ensure PlanResourceChange returns the saved private data when there is
an empty diff.
Handle the timeout decoding into Meta in the PlanResourceChange, so that
it's always there for later operations.
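A sketch of the empty-diff case (the request/response shapes are simplified stand-ins for the protocol types):

```go
// planReq and planResp are simplified stand-ins for the plugin protocol's
// PlanResourceChange request/response.
type planReq struct {
	PriorPrivate []byte
}

type planResp struct {
	PlannedPrivate []byte
}

// With no attribute changes, pass the prior private blob straight through so
// Apply and Read still see the stored timeouts.
func planResourceChange(req planReq, attrChanges map[string]string) planResp {
	var resp planResp
	if len(attrChanges) == 0 {
		resp.PlannedPrivate = req.PriorPrivate
	}
	return resp
}
```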
The helper/schema diff process loses empty strings, causing them to show
up as unset (null) during apply. Besides failing to show as set by
GetOk, the absence of the value also triggers the schema to insert a
default value again during apply.
It would also be preferable if the defaults weren't re-evaluated
again during ApplyResourceChange, but that would require a more invasive
patch to the field readers, and ensuring the empty string is stored in
the plan should block the default.
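To illustrate the `GetOk` side of this (real `GetOk` behavior; the resource and attribute are hypothetical):

```go
import "github.com/hashicorp/terraform/helper/schema"

func resourceExampleCreate(d *schema.ResourceData, meta interface{}) error {
	// GetOk treats zero values as unset, so once the diff loses the empty
	// string the attribute looks unconfigured and the schema default gets
	// inserted again during apply.
	if desc, ok := d.GetOk("description"); ok {
		_ = desc // only reached when description is a non-empty string
	}
	return nil
}
```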
When upgrading from a flatmap state, unset blocks would not exist in the
state, while they will be represented as empty in the new cty.Value. This
will cause an unexpected diff in the first plan after upgrade. That diff
can normally be applied with no impact, but some providers may
have unexpected behavior, and if the attributes force replacement it may
require manual alteration of the state to complete the upgrade.
When a Diff contains a NewRemoved attribute (which would have been null
in the planned state), the final value is often the "zero" value string
for the type, which the provider itself still applies to the state.
Rather than risking a change of behavior in helper/schema by fixing the
inconsistency, we'll remove the NewRemoved attributes after apply to
prevent further issues resulting from the change in planned value.
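The cleanup amounts to something like this (simplified shapes; `NewRemoved` mirrors the real `ResourceAttrDiff` field):

```go
// attrDiff is a stand-in for the legacy ResourceAttrDiff.
type attrDiff struct {
	NewRemoved bool
}

// After apply, drop any attributes the diff marked as NewRemoved so the
// stored flatmap state matches the planned (null) value.
func stripNewRemoved(stateAttrs map[string]string, diffAttrs map[string]*attrDiff) {
	for k, d := range diffAttrs {
		if d != nil && d.NewRemoved {
			delete(stateAttrs, k)
		}
	}
}
```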
Re-seeding the PRNG every time only serves to make the output an
obfuscated timestamp. On Windows, with its low clock resolution, this
manifests as the same value being returned for calls that fall within
the clock's minimum time delta.
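The fix is simply to seed once and reuse the source, e.g.:

```go
import (
	"math/rand"
	"sync"
	"time"
)

var (
	seedOnce sync.Once
	rng      *rand.Rand
)

// randomInt seeds the PRNG exactly once; re-seeding on every call only
// produces an obfuscated timestamp, which repeats on coarse clocks.
// (A real implementation would also guard rng for concurrent callers.)
func randomInt() int {
	seedOnce.Do(func() {
		rng = rand.New(rand.NewSource(time.Now().UnixNano()))
	})
	return rng.Int()
}
```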
Reference: https://github.com/terraform-providers/terraform-provider-aws/issues/8828
Prior to Terraform 0.12, providers were not required to implement the `MigrateState` function when increasing the `SchemaVersion` above `0`. When `MigrateState` is undefined, there is effectively no flatmap state difference between version 0 and the defined `SchemaVersion` or the lowest `StateUpgrader` `Version` (whichever is lower), so here we remove the error and increase the schema version automatically.
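So a resource like the following, which bumps the version without a `MigrateState` function, now upgrades its version-0 flatmap state automatically instead of returning an error (the resource itself is illustrative):

```go
import "github.com/hashicorp/terraform/helper/schema"

func resourceExample() *schema.Resource {
	return &schema.Resource{
		// No MigrateState defined: version 0 state is treated as the lowest
		// known version and upgraded automatically.
		SchemaVersion: 1,
		Schema: map[string]*schema.Schema{
			"name": {Type: schema.TypeString, Required: true},
		},
	}
}
```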
Send Private data blob through ReadResource as well. This will allow for
extra flexibility for future providers that may want to pass data out of
band through to their resource Read functions.
Having removed the methods, it is straightforward to mechanically update
this file to get rid of all references to the "legacy schema". There is
now only one config schema type to deal with in the sdk.
This experiment is no longer needed for handling computed blocks. Since
the legacy SDK can't reasonably handle Dynamic types, we need to remove
this before the final release.
Remove the LegacySchema functions as well, since handling SkipCoreTypeCheck
was the only thing left for them to do.
When handling the json state in UpgradeResourceState, the schema
must be what core uses, because that is the schema used for
encoding/decoding the json state.
When converting from flatmap to json state, the legacy schema will be
used to decode the flatmap to a cty value, but the resulting json will
be encoded using the CoreConfigSchema to match what core expects.
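A sketch of that flow, where `decodeFlatmap` stands in for the legacy-schema shim and the JSON encoding always uses the type implied by core's schema:

```go
import (
	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/convert"
	ctyjson "github.com/zclconf/go-cty/cty/json"
)

// decodeFlatmap is a placeholder for the real flatmap-to-cty decoding, which
// interprets the legacy flatmap keys according to the legacy schema's type.
func decodeFlatmap(attrs map[string]string, legacyTy cty.Type) (cty.Value, error) {
	// ... real implementation walks the flatmap keys ...
	return cty.NullVal(legacyTy), nil
}

// upgradeFlatmapState decodes with the legacy type, then encodes JSON with
// the core schema's type, since that is what core will use to decode it.
func upgradeFlatmapState(attrs map[string]string, legacyTy, coreTy cty.Type) ([]byte, error) {
	val, err := decodeFlatmap(attrs, legacyTy)
	if err != nil {
		return nil, err
	}
	if val, err = convert.Convert(val, coreTy); err != nil {
		return nil, err
	}
	return ctyjson.Marshal(val, coreTy)
}
```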
When normalizing the state during read, if the resource was previously
imported, most nil-able values will be nil, and we need to prefer the
values returned by the latest Read operation. This didn't come up
before, because Read is usually working with a state created by plan and
Apply, which has already shaped the state with the expected empty values.
Having the src value preferred only during Apply better follows the
intent of this function, which should allow Read to return whatever
values it deems necessary. Since Read and Plan use the same
normalization logic, the implied Read before plan should take care of any
perpetual diffs.
The new type system only has a Number type, but helper schema
differentiates between Int and Float values. Verify that a new config
value is an integer during Validate, because the existing WeakDecode
validation will decode a float value into an integer while the config
FieldReader will attempt to parse the float exactly.
Since we're limiting this to protoV5, we can be certain that any valid
config value will be converted to an `int` type by the shims. The only
case where an integral float value will appear is if the integer is out
of range for the system's `int` type, but we also need to prevent that
anyway since it would fail to read in the same manner.
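A sketch of the check, following that reasoning (the function name and messages are illustrative):

```go
import (
	"fmt"
	"math"
)

// validateInt relies on the protoV5 shims converting any in-range whole
// number to int: a float64 here is either fractional or out of int range,
// and both cases would fail later reads, so reject them during Validate.
func validateInt(k string, raw interface{}) error {
	switch n := raw.(type) {
	case int:
		return nil
	case float64:
		if n != math.Trunc(n) {
			return fmt.Errorf("%s: %v is not a whole number", k, n)
		}
		return fmt.Errorf("%s: %v is out of range for int", k, n)
	default:
		return fmt.Errorf("%s: expected an integer, got %T", k, raw)
	}
}
```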
Terraform core would previously ignore unexpected attributes found in
the state, but since we now need to encode/decode the state according
to the schema, the attributes must match the schema.
On any state upgrade, remove attributes no longer present in the schema
from the state. The only change this requires from providers is that
going forward the removal of an attribute is considered a schema change and
requires an increment of the SchemaVersion in order to trigger the
removal of the attributes from state.
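In flatmap terms the upgrade step amounts to something like this (the helper name and shapes are illustrative):

```go
import "strings"

// removeRemovedAttrs deletes flatmap entries whose top-level attribute is no
// longer declared in the schema. Nested keys look like "block.0.attr", so we
// match on the first segment; "id" is always kept.
func removeRemovedAttrs(attrs map[string]string, declared map[string]bool) {
	for k := range attrs {
		if k == "id" {
			continue
		}
		base := strings.SplitN(k, ".", 2)[0]
		if !declared[base] {
			delete(attrs, k)
		}
	}
}
```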