The backend gets to "prepare" the configuration before Configure is
called, in order to validate the values and insert defaults. We don't
want to store this value in the "config state", because it will often
not match the raw config after it is prepared, forcing unnecessary
backend migrations during init.
Since PrepareConfig is always called before Configure, we can store the
config value directly, and assume that it will be prepared in the same
manner each time.
If the backend config hashes match during init, and there are no new
backend override options, then we assume the existing config is OK.
Since init should be idempotent, we should be able to run init with no
options or config changes, and not affect the backends at all.
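As a rough illustration, here's a minimal Go sketch of that flow; the
Backend interface is simplified (the real methods return diagnostics
rather than plain errors) and initBackend, savedHash, and hasOverrides
are illustrative names, not Terraform's actual code:

```go
package backendinit

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// Backend is a simplified stand-in for Terraform's backend interface;
// the real methods return diagnostics rather than plain errors.
type Backend interface {
	PrepareConfig(cty.Value) (cty.Value, error)
	Configure(cty.Value) error
}

// initBackend sketches the idempotency check described above: hash the
// raw config value (which PrepareConfig will transform the same way
// every time) and skip reconfiguration when nothing has changed.
func initBackend(b Backend, raw cty.Value, savedHash int, hasOverrides bool) error {
	if raw.Hash() == savedHash && !hasOverrides {
		// Same raw config as the previous init and no new overrides:
		// leave the backend untouched.
		fmt.Println("backend config unchanged; skipping configuration")
		return nil
	}
	prepared, err := b.PrepareConfig(raw)
	if err != nil {
		return err
	}
	return b.Configure(prepared)
}
```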
A longer-form guide will follow in the Sentinel section of the Terraform
Enterprise documentation, once it's ready. For now, this section isn't
saying anything useful since it was always just a stub for a guide we
planned to write later.
Previously the type-selection codepath for an input tuple referred
unconditionally to the start and end index values. In the Type callback,
only the types of the arguments are guaranteed to be known, so any access
to the values must be guarded with an .IsKnown check, or else the function
will not short-circuit properly when an unknown value is passed.
Now we will check that the start and end indices are in range when we have
enough information to do so, but we'll return an approximate result if
either is unknown.
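For illustration, here's a minimal sketch of the guarded pattern using
go-cty's function package; this isn't Terraform's actual function, just
the shape of the fix:

```go
package funcs

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function"
	"github.com/zclconf/go-cty/cty/gocty"
)

// elementFunc sketches the pattern described above: the Type callback
// runs during type checking, when argument *values* may be unknown, so
// it must guard every value access with IsKnown.
var elementFunc = function.New(&function.Spec{
	Params: []function.Parameter{
		{Name: "tuple", Type: cty.DynamicPseudoType},
		{Name: "index", Type: cty.Number},
	},
	Type: func(args []cty.Value) (cty.Type, error) {
		ty := args[0].Type()
		if !ty.IsTupleType() {
			return cty.NilType, fmt.Errorf("first argument must be a tuple")
		}
		if !args[1].IsKnown() {
			// The index isn't known yet, so we can't select a specific
			// element type; return an approximate (dynamic) result
			// instead of touching the unknown value.
			return cty.DynamicPseudoType, nil
		}
		var idx int
		if err := gocty.FromCtyValue(args[1], &idx); err != nil {
			return cty.NilType, err
		}
		etys := ty.TupleElementTypes()
		if idx < 0 || idx >= len(etys) {
			return cty.NilType, fmt.Errorf("index %d is out of range", idx)
		}
		return etys[idx], nil
	},
	Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
		return args[0].Index(args[1]), nil
	},
})
```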
The upgrade tool is assuming that a type of "list" means list(string) and
a type of "map" means map(string), because that was what we documented
those as meaning.
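For example, the assumed mapping amounts to something like this sketch
(not the upgrade tool's actual code):

```go
package upgrade

// upgradeLegacyVariableType sketches the assumption described above:
// the bare 0.11 type keywords map to their documented meanings.
func upgradeLegacyVariableType(legacy string) string {
	switch legacy {
	case "list":
		return "list(string)" // documented meaning of type = "list"
	case "map":
		return "map(string)" // documented meaning of type = "map"
	default:
		return "string" // type = "string", or the implicit default
	}
}
```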
In practice, Terraform 0.11 lacked some validation, which allowed more
complex nested structures in some cases, even though those were pretty
inconvenient to use due to other language limitations.
The upgrade tool doesn't have enough context to make a reliable decision
on this, so instead we'll rely on the upgrade guide for this. We don't
need a TF-UPGRADE-TODO comment in this case because we reserve those for
things where a subsequent operation might cause the configuration to be
misinterpreted, rather than just causing an error. Instead, we'll show an
example of the comment in the upgrade guide so the reader can easily
match it, and give some advice in the guide on how to address it.
This is a "should never happen" case, because we shouldn't ever have
resources in the plan that aren't in the configuration, but since we've
got a report of a crash here (which went away before we got a chance to
debug it), here's just an extra guard to ensure that we'll still exit
gracefully in that case.
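The guard itself is just a graceful-exit check along these lines (a
sketch; the node and field names are illustrative):

```go
package terraform

import "fmt"

// ResourceConfig stands in for the configuration that a graph
// transform is expected to attach to each resource node.
type ResourceConfig struct{}

type nodeResource struct {
	Addr   string
	Config *ResourceConfig
}

// checkConfig sketches the "should never happen" guard described
// above: rather than assuming GraphNodeAttachResourceConfig attached a
// configuration, verify it and fail gracefully with a bug-report hint.
func (n *nodeResource) checkConfig() error {
	if n.Config == nil {
		return fmt.Errorf(
			"%s has no configuration attached, which is a bug in Terraform; please report it",
			n.Addr,
		)
	}
	return nil
}
```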
If we see this error crop up again in future, it'd be nice to gather a
full trace log so we can see what GraphNodeAttachResourceConfig did and
why it did not attach a configuration.
This includes a small fix to ensure the parser doesn't produce an invalid
body for block parsing syntax errors, and instead produces an incomplete
result that calling applications like Terraform can still analyze.
The problem here was affecting our version-constraint-sniffing code, which
intentionally tries to find a core version constraint even if there's a
syntax error, so that it can report that a new version of Terraform is a
likely cause of that error. It was working in most cases, unless
it was the "terraform" block itself that contained the error, because then
we'd try to analyze a broken hcl.Block with a nil body.
This includes a new test for "terraform init" that exercises this
recovery codepath.
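To illustrate the recovery codepath, here's a sketch of the general
technique (not Terraform's actual sniffing code):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclsyntax"
	"github.com/zclconf/go-cty/cty"
)

func main() {
	// A "terraform" block containing a syntax error; with the parser
	// fix the block still gets a usable (incomplete) body, not nil.
	src := []byte("terraform {\n  required_version = \">= 0.12.0\"\n  broken =\n}\n")

	f, diags := hclsyntax.ParseConfig(src, "main.tf", hcl.InitialPos)
	fmt.Println("parse errors:", diags.HasErrors())

	// Analyze the partial result anyway, looking for a core version
	// constraint so we could suggest a newer Terraform as the likely
	// cause of the syntax error.
	content, _, _ := f.Body.PartialContent(&hcl.BodySchema{
		Blocks: []hcl.BlockHeaderSchema{{Type: "terraform"}},
	})
	for _, block := range content.Blocks {
		attrs, _ := block.Body.JustAttributes() // body must never be nil
		if attr, ok := attrs["required_version"]; ok {
			if v, _ := attr.Expr.Value(nil); v.Type() == cty.String {
				fmt.Println("required_version:", v.AsString())
			}
		}
	}
}
```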
Having removed the methods, it is straightforward to mechanically update
this file to get rid of all references to the "legacy schema". There is
now only one config schema type to deal with in the SDK.
This experiment is no longer needed for handling computed blocks. Since
the legacy SDK can't reasonably handle Dynamic types, we need to remove
it before the final release.
Remove the LegacySchema functions as well, since handling
SkipCoreTypeCheck was the only thing they were still doing.
When handling the json state in UpgradeResourceState, the schema
must be what core uses, because that is the schema used for
encoding/decoding the json state.
When converting from flatmap to json state, the legacy schema will be
used to decode the flatmap to a cty value, but the resulting json will
be encoded using the CoreConfigSchema to match what core expects.
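A rough sketch of that two-schema arrangement, assuming the hcl2shim
flatmap helper that existed at the time (names and import paths here
are illustrative):

```go
package schema

import (
	"github.com/hashicorp/terraform/configs/hcl2shim"
	"github.com/zclconf/go-cty/cty"
	ctyjson "github.com/zclconf/go-cty/cty/json"
)

// upgradeFlatmapState sketches the conversion described above: decode
// the flatmap using the legacy schema's implied type, then encode the
// result with the core schema's type to match what core expects.
func upgradeFlatmapState(attrs map[string]string, legacyType, coreType cty.Type) ([]byte, error) {
	val, err := hcl2shim.HCL2ValueFromFlatmap(attrs, legacyType)
	if err != nil {
		return nil, err
	}
	return ctyjson.Marshal(val, coreType)
}
```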
Previously we tried to short-circuit this if the schema version hadn't
changed and we were already using JSON serialization. However, if we
instead call into UpgradeResourceState every time we can let the provider
or the SDK do some general, systematic normalization and cleanup steps
without always requiring a schema version increase.
What exactly would be fixed up this way is for the SDK to decide, but for
example the SDK might choose to automatically delete from the state
anything that is no longer present in the schema, avoiding the need to
write explicit state migration functions for that common case where the
remedy is always the same.
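For instance, that particular cleanup might look something like this
sketch; what the SDK really does here is up to the SDK:

```go
package schema

// removeRemovedAttributes sketches the cleanup described above: drop
// any top-level state attribute that is no longer present in the
// schema, so no explicit migration function is needed for that case.
func removeRemovedAttributes(rawState map[string]interface{}, schemaAttrs map[string]struct{}) {
	for key := range rawState {
		if _, ok := schemaAttrs[key]; !ok {
			delete(rawState, key) // attribute removed from schema
		}
	}
}
```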
The actual update logic is delegated to the provider/SDK intentionally so
that it can evolve over time and potentially differ depending on how
each SDK thinks about schema.