Previously we tried to short-circuit this if the schema version hadn't
changed and we were already using JSON serialization. However, if we
instead call into UpgradeResourceState every time, we can let the provider
or the SDK do some general, systematic normalization and cleanup steps
without always requiring a schema version increase.
What exactly would be fixed up this way is for the SDK to decide, but for
example the SDK might choose to automatically delete from the state
anything that is no longer present in the schema, avoiding the need to
write explicit state migration functions for that common case where the
remedy is always the same.
The actual update logic is delegated to the provider/SDK intentionally so
that it can evolve over time and potentially differ depending on how
each SDK thinks about schema.
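As a rough sketch of the control-flow side of this change (using a
simplified, hypothetical provider interface rather than the real protocol
types), the idea is simply to drop the version/format check and call the
upgrade function unconditionally:

```go
package main

import "fmt"

// upgrader is a stand-in for the provider's UpgradeResourceState call; the
// real request/response types in the provider protocol are richer.
type upgrader interface {
	UpgradeResourceState(typeName string, version int, rawJSON []byte) ([]byte, error)
}

// upgradeStoredState used to short-circuit when the stored version already
// matched the current schema version and the state was already JSON.
// Calling through unconditionally gives the provider/SDK a chance to
// normalize the state even when the version is unchanged.
func upgradeStoredState(p upgrader, typeName string, storedVersion int, rawJSON []byte) ([]byte, error) {
	// Old behavior, for contrast:
	//   if storedVersion == currentVersion && alreadyJSON {
	//       return rawJSON, nil // skipped: no chance for cleanup
	//   }
	return p.UpgradeResourceState(typeName, storedVersion, rawJSON)
}

// noopProvider returns the state unchanged; a real SDK could prune removed
// attributes or apply other systematic cleanups here.
type noopProvider struct{}

func (noopProvider) UpgradeResourceState(typeName string, version int, rawJSON []byte) ([]byte, error) {
	return rawJSON, nil
}

func main() {
	out, err := upgradeStoredState(noopProvider{}, "example_thing", 1, []byte(`{"id":"a"}`))
	fmt.Println(string(out), err)
}
```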
We've seen in the past that some users try to use this form with the
ssh:// URL prefix, so we'll mention explicitly that this is invalid and
show a working example of how to use it without the URL scheme prefix.
The "err" variable declared in the MaybeRelativePathErr condition was
shadowing the original err, leaving it nil in the "else" case of this
branch and causing the error message to be incomplete.
While here, also tweaked the wording to say "Could not download" rather
than "Error attempting to download", both to say the same thing in fewer
words and because the summary line above already starts with "Error:"
when we print out this message, so it looks weird to have both lines
start with the same word.
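The shadowing looked roughly like the commented-out version below.
MaybeRelativePathErr here is a simplified stand-in for the real go-getter
error type, and the errors.As-based rewrite is just one way to avoid the
shadowing, not necessarily the exact fix that was applied:

```go
package main

import (
	"errors"
	"fmt"
)

// MaybeRelativePathErr stands in for the module-getter error type that the
// real code checks for; details are simplified here.
type MaybeRelativePathErr struct{ Path string }

func (e *MaybeRelativePathErr) Error() string { return "relative path: " + e.Path }

func report(err error) string {
	// Buggy shape: the short variable declaration in the "if" shadows the
	// original err. When the assertion fails, the shadowed err is nil, so
	// the else branch formats an empty error:
	//
	//   if err, ok := err.(*MaybeRelativePathErr); ok {
	//       return "did you mean ./" + err.Path + "?"
	//   } else {
	//       return fmt.Sprintf("Could not download module: %s", err) // err is nil here
	//   }

	// One fix: bind the asserted value to a new name so the original err
	// stays visible in the fallback path.
	var relErr *MaybeRelativePathErr
	if errors.As(err, &relErr) {
		return "did you mean ./" + relErr.Path + "?"
	}
	return fmt.Sprintf("Could not download module: %s", err)
}

func main() {
	fmt.Println(report(fmt.Errorf("host unreachable")))
	fmt.Println(report(&MaybeRelativePathErr{Path: "modules/vpc"}))
}
```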
When normalizing the state during read, if the resource was previously
imported, most nil-able values will be nil, and we need to prefer the
values returned by the latest Read operation. This didn't come up
before, because Read is usually working with a state created by Plan and
Apply, which has already shaped the state with the expected empty values.
Having the src value preferred only during Apply better follows the
intent of this function, which should allow Read to return whatever
values it deems necessary. Since Read and Plan use the same
normalization logic, the implied Read before plan should take care of any
perpetual diffs.
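A loose illustration of that precedence rule, using plain maps; the real
function operates on cty values and its copy rules are more detailed than
this:

```go
package main

import "fmt"

// normalize merges prior-state values (src) into the values returned by
// the latest Read (dst). The gate on apply is the point of the change:
// only during apply does src take precedence; during read/plan, whatever
// Read returned is left alone, so imported resources are not flooded with
// nils from the mostly-empty prior state.
func normalize(dst, src map[string]interface{}, apply bool) map[string]interface{} {
	out := make(map[string]interface{}, len(dst))
	for k, v := range dst {
		out[k] = v
	}
	for k, prior := range src {
		if apply && prior != nil {
			out[k] = prior // apply: prefer the src value
			continue
		}
		if _, ok := out[k]; !ok {
			out[k] = prior // otherwise only fill keys Read didn't set
		}
	}
	return out
}

func main() {
	read := map[string]interface{}{"id": "i-123", "name": "web"}
	prior := map[string]interface{}{"id": "i-123", "name": nil}
	fmt.Println(normalize(read, prior, false)) // keeps name from Read
}
```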
The new type system only has a Number type, but helper schema
differentiates between Int and Float values. Verify that a new config
value is an integer during Validate, because the existing WeakDecode
validation will decode a float value into an integer while the config
FieldReader will attempt to parse the float exactly.
Since we're limiting this to protoV5, we can be certain that any valid
config value will be converted to an `int` type by the shims. The only
case where an integral float value will appear is if the integer is out
of range for the system's `int` type, but we also need to prevent that
anyway since it would fail to read in the same manner.
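A sketch of the kind of check involved (a hypothetical helper; the real
validation lives alongside the existing helper/schema validation and works
with the shimmed config values):

```go
package main

import (
	"fmt"
	"math"
)

// validateIntValue checks that a raw config value is usable as an int.
// With protocol v5 the shims hand us an int for any in-range whole number,
// so a float64 showing up at all means the value either has a fractional
// part or is too large to fit in the platform's int type.
func validateIntValue(v interface{}) error {
	switch n := v.(type) {
	case int, int8, int16, int32, int64:
		return nil
	case float64:
		if n != math.Trunc(n) {
			return fmt.Errorf("expected an integer, got %v", n)
		}
		return fmt.Errorf("%v is out of range for int", n)
	default:
		return fmt.Errorf("expected an integer, got %T", v)
	}
}

func main() {
	fmt.Println(validateIntValue(42))   // ok
	fmt.Println(validateIntValue(1.5))  // fractional value
	fmt.Println(validateIntValue(1e19)) // integral but out of range
}
```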
Terraform core would previously ignore unexpected attributes found in
the state, but since we now need to encode/decode the state according
to the schema, the attributes must match the schema.
On any state upgrade, remove attributes no longer present in the schema
from the state. The only change this requires from providers is that
going forward the removal of an attribute is considered a schema change, and
requires an increment of the SchemaVersion in order to trigger the
removal of the attributes from state.
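A simplified sketch of that pruning step for a flatmap-style state. This is
a hypothetical helper: it only matches on the first key segment, while real
state keys also encode counts and nested block indices.

```go
package main

import (
	"fmt"
	"strings"
)

// pruneRemovedAttributes deletes flatmap state entries whose top-level
// attribute no longer exists in the schema.
func pruneRemovedAttributes(attrs map[string]string, schema map[string]struct{}) {
	for key := range attrs {
		root := key
		if i := strings.IndexByte(key, '.'); i >= 0 {
			root = key[:i]
		}
		if root == "id" {
			continue // always kept
		}
		if _, ok := schema[root]; !ok {
			delete(attrs, key)
		}
	}
}

func main() {
	attrs := map[string]string{
		"id":        "abc123",
		"name":      "example",
		"old_field": "stale",
		"tags.%":    "1",
		"tags.env":  "dev",
	}
	schema := map[string]struct{}{"name": {}, "tags": {}}
	pruneRemovedAttributes(attrs, schema)
	fmt.Println(attrs) // old_field removed; name and tags kept
}
```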
Our original upgrade guide was drafted while some things were still in
flux and not all of our upgrade tooling was in place yet.
This redraft now attempts to be more specific and direct, showing exact
commands to run and (where relevant) exact error messages that Terraform
might return.
I also took this opportunity for some general copy-editing, though we'll
probably want to do one more pass of that alone (without changing any
content at the same time) before final release.
This new content presumes the existence of a Terraform v0.11.14 release,
which isn't published yet at the time of writing but should be published
before v0.12.0 final, once we've done final verification and review of
the upgrade path including it.
There are a number of use cases that can require a user to select a
workspace after initializing Terraform. To make sure we cover all these
use cases, we will always call the selectWorkspace method to verify a
valid workspace is already selected or (if needed) offer to select one
before moving on.
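A rough sketch of the shape of that check; the names and signature here
are illustrative rather than the real backend code, and the prompting is
stubbed out where the real implementation asks via the UI:

```go
package main

import (
	"errors"
	"fmt"
)

// selectWorkspace verifies that the currently selected workspace exists in
// the backend and, if it doesn't, asks the user to pick one.
func selectWorkspace(current string, available []string, prompt func([]string) (string, error)) (string, error) {
	if len(available) == 0 {
		return "", errors.New("no workspaces found")
	}
	for _, ws := range available {
		if ws == current {
			return current, nil // already valid, nothing to do
		}
	}
	return prompt(available)
}

func main() {
	pick := func(opts []string) (string, error) { return opts[0], nil }
	ws, err := selectWorkspace("staging", []string{"default", "prod"}, pick)
	fmt.Println(ws, err)
}
```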
Computed primitive values must see the UnknownConfigValue or they are
assumed to be unchanged. Restrict the usage of the protov5 ComputedKeys
to containers.
- Note that we intentionally omitted it from the sidebar, to reduce confusion.
- Write a summary up top so you can stop reading sooner if you don't actually need this.
With the new ConfigModeAttr, we can now have complex structures come in
as attributes rather than blocks. Previously attributes were either
known or unknown, and there was no reason to descend into them. We now
need to record the complete path to unknown values within complex
attributes to create a proper diff after shimming the config.
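A sketch of collecting those paths with go-cty's Walk helper, assuming the
attribute value has already been shimmed into a cty.Value; error handling
is minimal:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// unknownPaths returns the full path to every unknown value nested inside
// val, so the shimmed diff can mark exactly those attributes as computed.
func unknownPaths(val cty.Value) []cty.Path {
	var paths []cty.Path
	cty.Walk(val, func(p cty.Path, v cty.Value) (bool, error) {
		if !v.IsKnown() {
			paths = append(paths, p.Copy())
			return false, nil // no need to descend further
		}
		return true, nil
	})
	return paths
}

func main() {
	v := cty.ObjectVal(map[string]cty.Value{
		"name": cty.StringVal("example"),
		"settings": cty.ObjectVal(map[string]cty.Value{
			"endpoint": cty.UnknownVal(cty.String),
		}),
	})
	for _, p := range unknownPaths(v) {
		fmt.Printf("%#v\n", p)
	}
}
```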
We were previously catching some errors at read time, but some type
errors were causing panics because the cty.DynamicPseudoType arguments
get no automatic type checking up front, while this code was assuming
they would be objects.
Here we add an explicit validation step that includes both the backend
validation we were previously doing during read and some additional
type checking to ensure the two dynamic arguments are suitably-typed.
Having the separate validation step means that these problems can be
detected by "terraform validate", rather than only in "terraform plan"
or "terraform apply".
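The added type check amounts to something like the following sketch; the
real validation returns diagnostics with source ranges rather than a plain
error, and the exact acceptance rule may differ:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// validateObjectArg checks that a DynamicPseudoType argument, when set, is
// actually an object, since later code indexes into it as one. Because the
// argument has no concrete declared type, cty performs no such check for
// us automatically.
func validateObjectArg(name string, v cty.Value) error {
	if v.IsNull() {
		return nil // optional argument not set
	}
	if !v.Type().IsObjectType() {
		return fmt.Errorf("%q must be an object, not %s", name, v.Type().FriendlyName())
	}
	return nil
}

func main() {
	fmt.Println(validateObjectArg("config", cty.StringVal("oops")))
	fmt.Println(validateObjectArg("defaults", cty.ObjectVal(map[string]cty.Value{
		"bucket": cty.StringVal("my-bucket"),
	})))
}
```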
This corrects a bug in the HCL 2 scanner where a `$` or `%` symbol would
cause incorrect tokenization when it appeared immediately before a `"`.
This also includes some updates to Go extension libraries that the HCL
update brings in. Some of these changes add support for Unicode 11, but
only when compiling with Go 1.13, so we won't see the effect of these
changes until we start building Terraform with Go 1.13.
If a data source's dependencies have planned changes, then we need to
plan a read for the data source, because the config may change once the
dependencies have been applied.
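In essence the planner asks a question like this before deciding whether
to read during plan; the helper below is illustrative and not the real
graph logic:

```go
package main

import "fmt"

// mustDeferRead reports whether a data source read has to become a planned
// read (executed at apply time) because at least one of its dependencies
// has a change in the current plan, which could alter the data source's
// config.
func mustDeferRead(dependencies []string, plannedChanges map[string]bool) bool {
	for _, dep := range dependencies {
		if plannedChanges[dep] {
			return true
		}
	}
	return false
}

func main() {
	changes := map[string]bool{"aws_instance.web": true}
	fmt.Println(mustDeferRead([]string{"aws_instance.web"}, changes)) // true
	fmt.Println(mustDeferRead([]string{"aws_vpc.main"}, changes))     // false
}
```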
First check the ComputedValues field in the config when reading a config
field, so that we can detect if there is an unknown value in a
container. Since maps, lists, and sets are verified to exist by looking
for a "length" entry first, an unknown config value for the container
would otherwise be ignored.
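Conceptually, the reader now asks something like the following before
falling back to the length-based existence check. This is simplified and
the helper name is hypothetical; the real FieldReader works with address
parts and schema types:

```go
package main

import (
	"fmt"
	"strings"
)

// containerIsComputed reports whether the container at the given key is
// itself unknown in the config, by consulting the recorded computed keys.
// If it is, the reader can return an unknown result instead of looking for
// "<key>.#" (or ".%"), which would not exist and would make the unknown
// container look empty.
func containerIsComputed(key string, computedKeys map[string]bool) bool {
	for ck := range computedKeys {
		if ck == key || strings.HasPrefix(key, ck+".") {
			return true
		}
	}
	return false
}

func main() {
	computed := map[string]bool{"tags": true}
	fmt.Println(containerIsComputed("tags", computed))     // true
	fmt.Println(containerIsComputed("tags.env", computed)) // true
	fmt.Println(containerIsComputed("name", computed))     // false
}
```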