Remove reflect.DeepEqual from path comparisons to get reliable results.
The equality issues were only noticed when going through the gRPC
interface, so add a corresponding test to the test provider.
The helper/schema diff process loses empty strings, causing them to show
up as unset (null) during apply. Besides failing to show as set by
GetOk, the absence of the value also triggers the schema to insert a
default value again during apply.
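As a minimal sketch of the failure mode (hypothetical resource; import path as in the 0.12-era tree):

```go
package main

import (
	"github.com/hashicorp/terraform/helper/schema"
)

// With the lossy diff, an explicit "" in config reaches Create as if the
// attribute were unset, so GetOk reports false and Default is inserted
// again during apply.
func exampleResource() *schema.Resource {
	return &schema.Resource{
		Schema: map[string]*schema.Schema{
			"description": {
				Type:     schema.TypeString,
				Optional: true,
				Default:  "managed by terraform",
			},
		},
		Create: func(d *schema.ResourceData, meta interface{}) error {
			if _, ok := d.GetOk("description"); !ok {
				// Bug path: description = "" lands here even though the
				// user set it explicitly in config.
			}
			d.SetId("example")
			return nil
		},
	}
}
```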
It would also be preferable if the defaults weren't re-evaluated
during ApplyResourceChange, but that would require a more invasive
patch to the field readers, and ensuring the empty string is stored in
the plan should block the default.
When a Diff contains a NewRemoved attribute (which would have been null
in the planned state), the final value is often the "zero" value string
for the type, which the provider itself still applies to the state.
Rather than risking a change of behavior in helper/schema by fixing the
inconsistency, we'll remove the NewRemoved attributes after apply to
prevent further issues resulting from the change in planned value.
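One way the post-apply cleanup could look, as a hedged sketch (the function name is illustrative):

```go
package main

import (
	"github.com/hashicorp/terraform/terraform"
)

// stripNewRemoved drops NewRemoved attribute diffs after apply so that
// the "zero" value strings they produce don't leak into the state.
func stripNewRemoved(d *terraform.InstanceDiff) {
	for k, attr := range d.Attributes {
		if attr.NewRemoved {
			delete(d.Attributes, k)
		}
	}
}
```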
The new type system only has a Number type, but helper/schema
differentiates between Int and Float values. Verify that a new config
value is an integer during Validate, because the existing WeakDecode
validation will decode a float value into an integer while the config
FieldReader will attempt to parse the float exactly.
Since we're limiting this to protoV5, we can be certain that any valid
config value will be converted to an `int` type by the shims. The only
case where an integral float value will appear is if the integer is out
of range for the system's `int` type, but we need to prevent that anyway
since it would fail to read in the same manner.
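A sketch of the integer check described above (not the actual helper/schema code):

```go
package main

import (
	"fmt"
	"math"
)

// validateInt accepts the float64 handed to us by the decoder only if
// it is exactly integral and within range of a 64-bit int.
func validateInt(v float64) error {
	if v != math.Trunc(v) {
		return fmt.Errorf("expected an integer, got %g", v)
	}
	// float64(math.MaxInt64) rounds up to 2^63, so >= catches overflow.
	if v < math.MinInt64 || v >= math.MaxInt64 {
		return fmt.Errorf("integer %g is out of range", v)
	}
	return nil
}
```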
Computed primitive values must see the UnknownConfigValue or they are
assumed to be unchanged. Restrict the usage of the protoV5 ComputedKeys
to containers.
We were previously catching some errors at read time, but some type
errors were panicking: cty.DynamicPseudoType arguments get no automatic
type checking before they reach us, yet this code was assuming they
would be objects.
Here we add an explicit validation step that includes both the backend
validation we were previously doing during read and some additional
type checking to ensure the two dynamic arguments are suitably-typed.
Having the separate validation step means that these problems can be
detected by "terraform validate", rather than only in "terraform plan"
or "terraform apply".
If a dynamic block (in the HCL dynamic block extension sense) has an
unknown value for its for_each argument, it gets expanded to a single
placeholder block with all of its attributes set to unknown values.
We can use this as part of a heuristic to relax our object compatibility
checks for situations where the plan included an object that appears to
be (but isn't necessarily) such a placeholder, allowing for the fact that
the one placeholder block could be replaced with zero or more real blocks
once the for_each value is known.
Previously our heuristic was too strict: it would match only if the only
block present was a dynamic placeholder. In practice, users may mix
dynamic blocks with static blocks of the same type, so we need to be more
liberal to avoid generating incorrect incompatibility errors in such
cases.
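A hedged sketch of the relaxed heuristic (name and exact rule are illustrative):

```go
package main

import (
	"github.com/zclconf/go-cty/cty"
)

// couldBeDynamicPlaceholder reports whether a planned block object might
// be the single placeholder produced for an unknown for_each: either the
// object itself is unknown, or every attribute within it is.
func couldBeDynamicPlaceholder(obj cty.Value) bool {
	if !obj.IsKnown() {
		return true
	}
	if obj.IsNull() {
		return false
	}
	for _, v := range obj.AsValueMap() {
		if v.IsKnown() {
			return false
		}
	}
	return true
}
```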
removeConfigUnknowns needs to remove the value completely from the config
map. Removing this value allows GetOk and GetOkExists to indicate if the
value was set in the config in the case of an Optional+Computed
attribute.
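A sketch of the corrected behavior (assuming the 0.12-era hcl2shim sentinel and import path):

```go
package main

import (
	"github.com/hashicorp/terraform/config/hcl2shim"
)

// removeConfigUnknowns deletes unknown values from the config map rather
// than replacing them with zero values, so GetOk and GetOkExists can
// tell "unknown" apart from "set to an empty value".
func removeConfigUnknowns(cfg map[string]interface{}) {
	for k, v := range cfg {
		switch v := v.(type) {
		case string:
			if v == hcl2shim.UnknownVariableValue {
				delete(cfg, k)
			}
		case []interface{}:
			for _, e := range v {
				if m, ok := e.(map[string]interface{}); ok {
					removeConfigUnknowns(m)
				}
			}
		case map[string]interface{}:
			removeConfigUnknowns(v)
		}
	}
}
```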
We previously attempted to make the special diff apply behavior for nested
sets of objects work with attribute mode by totally discarding attribute
mode for all shims.
In practice, that is too broad a solution: there are lots of other shimming
behaviors that we _don't_ want when attribute mode is enabled. In
particular, we need to make sure that the difference between null and
empty can be seen in configuration.
As a compromise then, we will give all of the shims access to the real
ConfigMode and then do a more specialized fixup within the diff-apply
logic: we'll construct a synthetic nested block schema and then use that
to run our existing logic to deal with nested sets of objects, while
using the previous behavior in all other cases.
In effect, this means that the special new behavior only applies when the
provider uses the opt-in ConfigMode setting on a particular attribute,
and thus this change has much less risk of causing broad, unintended
regressions elsewhere.
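For illustration, the specialized fixup only engages for a schema shaped like this hypothetical opt-in attribute:

```go
package main

import (
	"github.com/hashicorp/terraform/helper/schema"
)

// A set of objects declared in attribute mode: the provider opts in via
// ConfigMode, which is what triggers the synthetic nested block schema
// in the diff-apply logic.
func exampleConfigModeAttr() map[string]*schema.Schema {
	return map[string]*schema.Schema{
		"rule": {
			Type:       schema.TypeSet,
			Optional:   true,
			Computed:   true,
			ConfigMode: schema.SchemaConfigModeAttr,
			Elem: &schema.Resource{
				Schema: map[string]*schema.Schema{
					"name": {Type: schema.TypeString, Optional: true},
				},
			},
		},
	}
}
```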
When an operation fails, providers may return a null new value rather than
returning a partial state. In that case, we'd prefer to keep the old value
so that we stand the best chance of being able to retry on a subsequent
run.
Previously we were making an exception for the delete action, allowing
the result of that to be null even when an error is returned. In practice
that was a bad idea because it would cause Terraform to lose track of the
object even though it might not actually have been deleted.
Now we'll retain the old object even in the delete case. Providers can
still return partial new objects if they were able to partially complete
a delete operation, in which case we'll discard what we had before, but
if the result is null with errors then we'll assume the delete failed
entirely and so just keep the old state as-is, giving us the opportunity
to refresh it on the next run to see if anything actually happened after
all.
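The decision reduces to something like this sketch (names illustrative):

```go
package main

import (
	"github.com/zclconf/go-cty/cty"
)

// objectAfterFailedApply: on error, a null result from the provider no
// longer erases the old object, while a non-null partial object from a
// partially completed operation still wins.
func objectAfterFailedApply(prior, result cty.Value, errored bool) cty.Value {
	if errored && result.IsNull() {
		return prior
	}
	return result
}
```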
(This also includes a new resource in the test provider which isn't used
by the patch but was useful for some manual UX testing here, so I thought
I'd include it in case it's similarly useful in future, given how simple
its implementation is.)
Due to the lossiness of our legacy models for diff and state, shimming a
diff and then creating a state from it produces a different result than
shimming a state directly. That means that ImportStateVerify no longer
works as expected if there are any Computed attributes in the schema where
d.Set isn't called during Read.
Fixing that for every case would require some risky changes to the shim
behavior, so we're instead going to ask provider developers to address it
by adding `d.Set` calls where needed, since that is the contract for
"Computed" anyway -- a default value should be produced during Create, and
thus by extension during Import.
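For illustration, the kind of fix being asked of providers is a hypothetical Read like this:

```go
package main

import (
	"github.com/hashicorp/terraform/helper/schema"
)

// Every Computed attribute gets a d.Set call during Read, so a state
// produced via Import matches one produced via Create.
func resourceExampleRead(d *schema.ResourceData, meta interface{}) error {
	// ...fetch the remote object, then set all Computed attributes...
	if err := d.Set("fingerprint", "example-value"); err != nil {
		return err
	}
	return nil
}
```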
However, a common situation where this occurs is attributes marked as
"Removed", because all of the code that deals with them has generally
been deleted. We'll avoid problems in that case here by treating Removed
attributes as ignored for the purposes of ImportStateVerify.
This required exporting some functionality that was formerly unexported
in helper/schema, but it's a relatively harmless schema introspection
function so shouldn't be a big deal to export it.
This is not a recommended method, but it does serve to verify that the
set values in the ResourceData internal state are correctly computed,
which indicates that the expected configuration was passed in.
Add a diff test using a schema with ConfigModeAttr.
It's in the test provider, because that is what mostly exercises
diff.Apply, and it's where the other such tests live.
These are the largest source of the old "diffs didn't match after apply"
errors. It's almost always an upstream dependency that caused the final
error.
For any block content we evaluate dynamically via this API, we'll make a
special allowance for users to optionally write members of a list
attribute instead as a sequence of nested blocks, thus allowing some
existing provider features that were assuming this capability to continue
to support it after v0.12.
This should not be used for any new provider features, and should ideally
be eventually phased out so that there aren't two
similar-but-slightly-different syntaxes for saying the same thing.
The previous commit added this flag but did not implement it. Here we
implement it by adjusting the shape of schema we return to Terraform Core
to mark the attribute as untyped and then ensure that gets handled
correctly on the SDK side.
The previous commit added a new flag to schema.Schema which is documented
to make a list with MaxItems: 1 be presented to Terraform Core as a single
value instead, giving a way to switch to non-list nested resources without
it being a breaking change for Terraform v0.11 users as long as it's done
prior to a provider's first v0.12-compatible release.
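Sketching the flag as documented above (the schema shape here is assumed from that description, not taken from the implementation):

```go
package main

import (
	"github.com/hashicorp/terraform/helper/schema"
)

// The SDK and the provider continue to see a single-element list, while
// Terraform Core is shown a single nested object.
func exampleAsSingle() map[string]*schema.Schema {
	return map[string]*schema.Schema{
		"settings": {
			Type:     schema.TypeList,
			Optional: true,
			MaxItems: 1,
			AsSingle: true,
			Elem: &schema.Resource{
				Schema: map[string]*schema.Schema{
					"enabled": {Type: schema.TypeBool, Optional: true},
				},
			},
		},
	}
}
```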
This is the implementation of that mechanism. It's intentionally
implemented as a suite of extra fixups rather than direct modifications to
existing shim code because we want to ensure that this has no effect
whatsoever on the result of a resource type that _isn't_ using AsSingle.
Although there is some small unit test coverage of the fixup steps here,
the primary testing for this is in the test provider since the integration
of all of these fixup steps in the correct order is the more important
result than any of the intermediate fixup steps.
This should be the final change from removing the flatmap normalization.
Since we're no longer trying to maintain a consistent zero or null value
in the flatmap config, but rather to preserve the previously applied
value, ReadResource also needs to apply the normalizeNullValues step in
order to prevent unexpected diffs.
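A much-simplified sketch of what that normalization preserves (the real normalizeNullValues handles nesting and more cases):

```go
package main

import (
	"github.com/zclconf/go-cty/cty"
)

// preserveApplied: where the freshly read object reports null but the
// prior state holds a "zero" value that flatmap can't distinguish from
// unset, keep the prior value to avoid a spurious diff.
func preserveApplied(prior, read cty.Value) cty.Value {
	if read.IsNull() || !read.IsKnown() || !read.Type().IsObjectType() ||
		prior.IsNull() || !prior.IsKnown() || !prior.Type().IsObjectType() {
		return read
	}
	readAttrs := read.AsValueMap()
	for k, pv := range prior.AsValueMap() {
		if rv, ok := readAttrs[k]; ok && rv.IsNull() && isZeroish(pv) {
			readAttrs[k] = pv
		}
	}
	if len(readAttrs) == 0 {
		return read
	}
	return cty.ObjectVal(readAttrs)
}

// isZeroish reports whether a value is one flatmap can't tell from unset:
// an empty string or an empty collection.
func isZeroish(v cty.Value) bool {
	if v.IsNull() || !v.IsKnown() {
		return false
	}
	t := v.Type()
	switch {
	case t.Equals(cty.String):
		return v.AsString() == ""
	case t.IsListType() || t.IsSetType() || t.IsMapType():
		return v.LengthInt() == 0
	}
	return false
}
```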
This makes some slight adjustments to the shape of the schema we
present to Terraform Core without affecting how it is consumed by the
SDK and thus the provider. This mechanism is designed specifically to
avoid changing how the schema is interpreted by the SDK itself or by the
provider, so that prior behavior can be preserved in Terraform v0.11 mode.
This also includes a new rule that Computed-only (i.e. not also Optional)
schemas _always_ map to attributes, because that is a better mapping of
the intent: they are object values to be used in expressions. Nested
blocks conceptually represent nested objects that are in some sense
independent of what they are embedded in, and so they cannot themselves be
computed.
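That rule reduces to something like this sketch (illustrative, not the actual schema-conversion code):

```go
package main

// presentAsAttribute: Computed-only nested schemas always map to
// attributes; otherwise the provider's opt-in ConfigMode decides.
func presentAsAttribute(optional, computed, configModeAttr bool) bool {
	if computed && !optional {
		// Computed-only: an object value for use in expressions, so it
		// cannot be a block.
		return true
	}
	return configModeAttr
}
```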