The shim layer for the legacy SDK type system is not precise enough to
guarantee it will produce identical results between plan and apply. In
particular, values that are null during plan will often become zero-valued
during apply.
To avoid breaking those existing providers while still allowing us to
introduce this check in the future, we'll introduce a rather-hacky new
flag that allows the legacy SDK to signal that it is the legacy SDK and
thus disable the check.
Once we start phasing out the legacy SDK in favor of one that natively
understands our new type system, we can stop setting this flag and thus
get the additional safety of this check without breaking any
previously-released providers.
No other SDK is permitted to set this flag, and we will remove it if we
ever introduce protocol version 6 in future, assuming that any provider
supporting that protocol will always produce consistent results.
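As a rough sketch of the shape of this escape hatch (the type and field
names below are illustrative stand-ins, not the real protocol types), the
idea is just a boolean on the provider's plan and apply responses that
Core consults before enforcing the new check:

    package main

    import "fmt"

    // PlanResponse is a hypothetical stand-in for the real wire response.
    type PlanResponse struct {
        // LegacyTypeSystem signals that the response came from the legacy
        // SDK, whose shims cannot guarantee plan/apply consistency.
        LegacyTypeSystem bool
    }

    func checkConsistency(planned, applied string, resp PlanResponse) error {
        if resp.LegacyTypeSystem {
            // Don't break existing providers; just leave a trace in the logs.
            fmt.Println("[WARN] legacy SDK response; skipping consistency check")
            return nil
        }
        if planned != applied {
            return fmt.Errorf("provider returned %q, but planned %q", applied, planned)
        }
        return nil
    }

    func main() {
        resp := PlanResponse{LegacyTypeSystem: true}
        fmt.Println(checkConsistency("a", "b", resp)) // <nil>: check disabled
    }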
Previously we would allow providers to change anything about the planned
object value during apply, possibly returning an entirely-unrelated object
of the same type. In practice this led to some subtle bugs where a single
planned attribute value would change during apply and cause a downstream
failure due to a dependent resource now seeing input other than what
_it_ expected during plan.
Now we'll produce an explicit error message for this case that places the
blame on the correct party: the upstream resource that changed. Without
this, unexpected changes would often lead to the downstream resource
implementation being blamed in error messages even though it was just
reacting to the change from upstream.
As with most errors during apply, we'll still save the updated value in
the state but we'll halt the walk to ensure that the unexpected value
cannot propagate further and cause the result to potentially diverge
greatly from the changeset shown in the plan.
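The rule being enforced is roughly the following (a simplified sketch
using cty directly, not Terraform's actual implementation, which also
recurses into nested structures): a value planned as known must come back
from apply unchanged, while a planned unknown may resolve to anything of
the right type.

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    // assertCompatible sketches the top-level rule only; the real check
    // also walks nested object attributes, lists, maps, and sets.
    func assertCompatible(planned, applied cty.Value) error {
        if !planned.IsKnown() {
            return nil // unknown at plan time may legally become any value
        }
        if planned.RawEquals(applied) {
            return nil
        }
        return fmt.Errorf("provider returned %#v, but planned %#v", applied, planned)
    }

    func main() {
        fmt.Println(assertCompatible(cty.UnknownVal(cty.String), cty.StringVal("ok")))
        fmt.Println(assertCompatible(cty.StringVal("a"), cty.StringVal("b")))
    }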
We expect to see this error in many of the same cases where Terraform
0.11 and earlier reported the "diffs didn't match during apply" error,
since it is likely that many errors of that sort were the result of
unexpected upstream changes being incorrectly blamed on the downstream
resource that then used the result.
Because Terraform Core has traditionally not checked that the final apply
result is consistent with what was planned, some of our apply tests were
producing inconsistent results.
Here we fix all of that so that they produce something compatible with
what they planned. This doesn't actually achieve anything in isolation,
but we're about to start enforcing this consistency in a subsequent
commit.
It seems that all of the tools we run here are now sufficiently
modules-aware to run without problems in modules mode, and indeed running
_not_ in modules mode was causing problems with locating packages in
mockgen.
In 0.12, the outputs of a terraform_remote_state data source are nested
under the 'outputs' attribute [1], so a root module output named `foo` is
now read as `data.terraform_remote_state.example.outputs.foo`. This
updates the docs to make that change clearer.
Worked on this with @radeksimko at a Terraform hackday; @radeksimko has
also submitted a related upgrade guide [2].
[1] 1f4d2f4c50/builtin/providers/terraform/data_source_state.go (L16-L43)
[2] d8e00191b7
If set elements are computed, we can't be certain that they are actually
equal. Catch identical computed set hashes when they are added to the
set, and alter the set key slightly to keep the set counts correct.
In previous versions the interpolation string would be included in the
set, and different string values would cause the set to hash
differently, so this change is only activated for the new protocol.
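The mechanism is roughly as follows (a sketch with a plain map standing
in for helper/schema's internal set representation; the key format is
illustrative): when a newly added element's computed hash collides with
an existing one, perturb the key so both elements survive and the count
stays correct.

    package main

    import "fmt"

    func addToSet(set map[string]interface{}, hash string, v interface{}) {
        key := hash
        for i := 1; ; i++ {
            if _, exists := set[key]; !exists {
                break
            }
            // Identical computed hashes don't prove the elements are
            // equal, so alter the key slightly rather than overwrite.
            key = fmt.Sprintf("%s~%d", hash, i)
        }
        set[key] = v
    }

    func main() {
        set := map[string]interface{}{}
        addToSet(set, "~123", "first computed element")
        addToSet(set, "~123", "second computed element") // same hash
        fmt.Println(len(set))                            // 2, not 1
    }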
This turns it on at the last moment, and in one place for all uses of
helper/schema. There's no way to use the new protocol without calling
GetSchema, so we can be sure that any subsequent API calls have this set
when required.
Sets rely on diffs being complete for all elements, even when they are
unchanged. When encountering a DiffSuppressFunc inside a set, the diffs
were being dropped entirely, possibly causing set elements to be lost.
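For context, this is the kind of schema involved (the attribute itself is
illustrative, though the DiffSuppressFunc signature and import path are
helper/schema's real ones at the time of this change). When a func like
this fires on an attribute nested inside a set element, the set diff must
still carry the element forward instead of dropping it:

    package example

    import (
        "strings"

        "github.com/hashicorp/terraform/helper/schema"
    )

    // exampleAttr suppresses case-only differences. When attached to an
    // attribute inside a set element, the suppressed diff must not erase
    // the element itself.
    var exampleAttr = &schema.Schema{
        Type:     schema.TypeString,
        Optional: true,
        DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool {
            return strings.EqualFold(old, new)
        },
    }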
Previously we were just asserting that the number of elements didn't grow
between planned and actual. We still can't precisely correlate elements in
sets with unknown values, but here we adapt some logic we added earlier
to config/hcl2shim to ensure that we can find a plausible correlation for
each element in each set to at least one element in the other set, and
thus catch more cases where set elements might vanish or appear between
plan and apply, for improved safety.
This will still generate false negatives in some cases where unknown
values are present, because we must assume a correlation is intended
wherever one is possible, but we'll catch situations where the actual
value is obviously contrary to what was planned.
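The correlation rule amounts to something like the following sketch
(using cty directly; Terraform's real logic also recurses into nested
values): two elements could correspond if either side is unknown or they
are equal, and every element on each side must have at least one
plausible partner on the other.

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    func couldCorrelate(a, b cty.Value) bool {
        return !a.IsKnown() || !b.IsKnown() || a.RawEquals(b)
    }

    // eachHasCandidate reports whether every element of "from" has at
    // least one plausible partner in "to"; run it in both directions to
    // catch elements that vanish or appear between plan and apply.
    func eachHasCandidate(from, to []cty.Value) bool {
        for _, f := range from {
            found := false
            for _, t := range to {
                if couldCorrelate(f, t) {
                    found = true
                    break
                }
            }
            if !found {
                return false
            }
        }
        return true
    }

    func main() {
        planned := []cty.Value{cty.UnknownVal(cty.String), cty.StringVal("a")}
        applied := []cty.Value{cty.StringVal("b"), cty.StringVal("a")}
        ok := eachHasCandidate(planned, applied) && eachHasCandidate(applied, planned)
        fmt.Println(ok) // true: a plausible correlation exists
    }

An unknown will correlate with anything, which is exactly why some false
negatives remain.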
Because of the different possibilities for arranging the nav sidebars, we want
to make sure:
- IDs for the 0.11 and 0.12 language docs have a common prefix.
- That prefix is not the exact string `docs-config`.
Have I mentioned before that I really dislike this prefix matching
behavior?
This is a non-working commit, because a bunch of links (including the sidebar
nav) are broken. Using a transition commit like this makes it easier to see the
changes necessary to get this content woven into the site.
Just as with 336b352d6f, this refreshes our rate limit bypass cookie for
go.googlesource.com since the old one has apparently expired.
I did this by visiting https://go.googlesource.com/ and selecting
"Generate Password" from the top navigation. This cookie belongs to a
test account used by the Terraform team and should not be used by
non-Terraform codebases; please generate your own!
Previously we were using the type name requested in the import to select
the schema, but a provider is free to return additional objects of other
types as part of an import result, and so it's important that we perform
schema selection separately for each returned object.
If we don't do this, we get confusing downstream errors where the
resulting object decodes to the wrong type and breaks various invariants
expected by Terraform Core.
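In outline, the fix is a per-object lookup like the following sketch (the
types and helpers here are hypothetical stand-ins, not Terraform's actual
ones):

    package example

    import "fmt"

    // ImportedObject is a hypothetical stand-in: each object in an import
    // response carries the type name the provider assigned to it.
    type ImportedObject struct {
        TypeName string
        StateRaw []byte
    }

    // Schema is likewise a stand-in for a resource type schema.
    type Schema struct{}

    func (s Schema) Decode(raw []byte) error { return nil } // placeholder

    // decodeAll selects a schema per returned object, rather than once
    // from the type name the user originally asked to import.
    func decodeAll(objs []ImportedObject, schemas map[string]Schema) error {
        for _, obj := range objs {
            s, ok := schemas[obj.TypeName]
            if !ok {
                return fmt.Errorf("import returned object of unsupported type %q", obj.TypeName)
            }
            if err := s.Decode(obj.StateRaw); err != nil {
                return err
            }
        }
        return nil
    }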
The testResourceImportOther test in the test provider didn't catch this
previously because it happened to have a schema identical to the other
resource type being imported. Now the schema is changed, and there's also
a computed attribute we can set as part of the refresh phase to make sure
we're completing the Read call properly during import. Refresh was
working correctly, but we didn't have any tests for it as part of the
import flow.
This fixes a bug in the TestConformance function that was generating false
positives when given two object types with the same number of attributes
but not identical attribute names.
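With the fix, the mismatch is reported in cases like this minimal
reproduction (using cty's real TestConformance API):

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    func main() {
        // Same number of attributes, different names: not conformant.
        got := cty.Object(map[string]cty.Type{"name": cty.String})
        want := cty.Object(map[string]cty.Type{"id": cty.String})

        errs := got.TestConformance(want)
        fmt.Println(len(errs) > 0) // true after the fix; false positive before
    }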
* command/jsonstate: do not hide SchemaVersion of '0'
* command/jsonconfig: module_calls should be a map
* command/jsonplan: include current terraform version in output
* command/jsonconfig: properly marshal expressions from a module call
Previously this was looking at the root module's variables, instead of
the child module variables, to build the module schema. This fixes that
bug.
This checking helper is frequently used in provider tests for data
sources, as a shorthand to verify that an attribute of the data source
matches with the corresponding attribute on a managed resource.
Since we now leave empty collections null in more cases, this function is
sometimes effectively asked to verify that a given attribute is _unset_
in both the data source and the resource. Here we slightly adjust the
definition of the check to consider two nulls equal to one another, which
at this layer manifests as the keys not being present in the state
attributes map at all.
This check function didn't previously have tests, so this commit also adds
a basic suite of tests, including coverage for the new behavior.
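A typical use looks like this (the resource addresses and attribute names
are illustrative; the helpers are helper/resource's real ones). With the
adjusted definition, the check also passes when the attribute is unset on
both sides:

    package example

    import (
        "github.com/hashicorp/terraform/helper/resource"
    )

    // check asserts that the data source reports the same tags as the
    // managed resource it reads; two nulls now count as a match too.
    var check = resource.ComposeTestCheckFunc(
        resource.TestCheckResourceAttrPair(
            "data.example_thing.by_name", "tags.%",
            "example_thing.target", "tags.%",
        ),
    )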
While copyMissingValues was meant to re-insert empty values that were
null after apply, it turns out plan is sometimes not predictable either.
normalizeNullValue is meant to fix up any null/empty transitions between
two values, and to be useful during plan as well. For plan the function
only concerns itself with individual, known values, and skips sets
entirely.
The result of running with plan == true is that only changes between
empty and null collections should be fixed.
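The plan-mode behavior amounts to roughly this (a simplified sketch of
the idea using cty, not the real normalizeNullValue, which handles more
cases): when one side is a null collection and the other is the
corresponding empty collection, prefer the planned form so the transition
doesn't register as a change.

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    func normalizeEmpty(planned, applied cty.Value) cty.Value {
        ty := applied.Type()
        if !ty.IsListType() && !ty.IsMapType() {
            return applied
        }
        // Only individual, known values are considered, as in plan mode.
        plannedEmpty := planned.IsKnown() && !planned.IsNull() && planned.LengthInt() == 0
        appliedEmpty := applied.IsKnown() && !applied.IsNull() && applied.LengthInt() == 0
        if (planned.IsNull() && appliedEmpty) || (plannedEmpty && applied.IsNull()) {
            return planned // null<->empty is not a real change
        }
        return applied
    }

    func main() {
        planned := cty.NullVal(cty.List(cty.String))
        applied := cty.ListValEmpty(cty.String)
        fmt.Printf("%#v\n", normalizeEmpty(planned, applied)) // the null list
    }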
We were previously using cty.Path.Apply, which serves a similar purpose
but implements the more restrictive traversal behaviors down at the cty
layer. hcl.ApplyPath uses the same rules as HCL expressions and so ensures
consistent behavior with normal user expressions.
cty.Path.Apply also previously had a crashing bug (discussed in #20084)
that was causing a panic here. That has now been fixed in cty, but since
we're no longer using it here that's a moot point. The HCL traversing
implementation has been fuzz-tested and unit tested a lot more thoroughly
so should not run into the same crashers we saw with cty before.
The cty change here fixes a panic situation when cty.Path.Apply is given
a null value, making it now correctly return an error.
However, the HCL2 change includes an alternative to cty.Path.Apply that
uses HCL-level rules rather than cty-level rules, so the result behaves
like an HCL expression would. Most uses of cty.Path.Apply ought to use
hcl.ApplyPath instead, to ensure that the behavior is consistent with what
users expect in the main language.
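A minimal example of the replacement call (using the hcl2 import path as
vendored at the time; the object and path here are illustrative):

    package main

    import (
        "fmt"

        "github.com/hashicorp/hcl2/hcl"
        "github.com/zclconf/go-cty/cty"
    )

    func main() {
        obj := cty.ObjectVal(map[string]cty.Value{
            "items": cty.ListVal([]cty.Value{
                cty.StringVal("a"),
                cty.StringVal("b"),
            }),
        })
        path := cty.GetAttrPath("items").Index(cty.NumberIntVal(1))

        // Behaves like evaluating obj.items[1] in the main language, so
        // lookups here stay consistent with user expressions.
        v, diags := hcl.ApplyPath(obj, path, nil)
        if diags.HasErrors() {
            fmt.Println(diags.Error())
            return
        }
        fmt.Printf("%#v\n", v) // cty.StringVal("b")
    }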
An earlier commit incorrectly updated some versions in go.mod without also
updating the vendor tree, so this also rolls those back to where they used
to be so that we can roll them forward carefully and make sure the tests
actually pass. (If we just accept these new versions as specified the
tests do not pass, so some work is required to fix those regressions.)