The previous commit added this flag but did not implement it. Here we
implement it by adjusting the shape of the schema we return to Terraform
Core to mark the attribute as untyped, and then ensuring that gets
handled correctly on the SDK side.
When running in v0.12-and-higher mode, this will cause the SDK to report
the type of the attribute as "any", effectively skipping type checking
on the Core side altogether and checking only in the SDK and provider
code.
The practical impact of this is to restore the v0.11-style checking
behavior of allowing object values to be missing certain attributes as
long as they are marked as optional in the schema. The SDK can do this
because it uses a unified schema model for both object values and nested
blocks, while Terraform Core only supports the idea of "optional" when
talking about attributes in nested blocks.
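To illustrate the mechanism only (this is not the real SDK code; the
attrSchema type, the SkipCoreTypeCheck field name, and the coreType
helper below are invented for this sketch), the conversion to the
Core-facing schema can simply substitute cty.DynamicPseudoType for the
declared type whenever the opt-out flag is set:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// attrSchema stands in for the SDK's per-attribute schema; SkipCoreTypeCheck
// is a hypothetical name for the opt-out flag described above.
type attrSchema struct {
	Type              cty.Type
	SkipCoreTypeCheck bool
}

// coreType returns the type this attribute reports to Terraform Core. With
// the flag set we report cty.DynamicPseudoType ("any"), so Core accepts
// whatever value it is given and all real checking happens in the SDK.
func coreType(s attrSchema) cty.Type {
	if s.SkipCoreTypeCheck {
		return cty.DynamicPseudoType
	}
	return s.Type
}

func main() {
	strict := attrSchema{
		Type: cty.List(cty.Object(map[string]cty.Type{"name": cty.String})),
	}
	relaxed := strict
	relaxed.SkipCoreTypeCheck = true

	fmt.Println(coreType(strict).FriendlyName())  // the declared object type
	fmt.Println(coreType(relaxed).FriendlyName()) // dynamic pseudo-type: Core skips checking
}
```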
This is a continuation of the pile of workarounds that also includes the
ConfigMode and AsSingle fields, allowing providers to selectively opt out
of new v0.12 behaviors in situations where they conflict with design
decisions made back when Terraform Core delegated _all_ validation to
providers.
This is designed as an opt-in so that we can limit its impact only to
specific cases where it's needed and minimize the risk of regressions
elsewhere. Providers should use this sparingly, only in situations where
prevailing usage disagrees with the new expectations of Terraform Core in
v0.12.
This commit only adds the flag, and does not implement any behavior for it
yet. That means this commit can exist in both the v0.11 and v0.12
codebases, allowing for API compatibility. A subsequent commit for v0.12
(not included in v0.11) will then implement this behavior.
This includes a fix to prevent unintentional infinite recursion when
trying to unify multiple object types to a single type for conversion to
list(any).
Sadly I wasn't able to reproduce the problem as reported (in #20728), so
I wasn't able to write a Terraform test for it, but I have confirmed that
the cty behavior here was incorrect anyway (recursively calling the same
function we're already in with the same arguments is clearly not
productive), and so this change will allow whatever situation that was to
terminate with a type conversion error rather than causing a stack
overflow.
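For context, this is roughly the kind of conversion that exercises that
code path (a hedged sketch, not a reproduction of the reported case):
cty must unify the element object types before it can build a list, and
with this fix an impossible unification now returns a conversion error
instead of recursing forever.

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/convert"
)

func main() {
	// A tuple whose elements have two different object types.
	val := cty.TupleVal([]cty.Value{
		cty.ObjectVal(map[string]cty.Value{
			"name": cty.StringVal("a"),
		}),
		cty.ObjectVal(map[string]cty.Value{
			"name": cty.StringVal("b"),
			"port": cty.NumberIntVal(80),
		}),
	})

	// Converting to list(any) forces cty to unify the element types into a
	// single element type; if it can't, we get an error back rather than a
	// stack overflow.
	got, err := convert.Convert(val, cty.List(cty.DynamicPseudoType))
	fmt.Printf("%#v\n%v\n", got, err)
}
```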
It's likely that there is another bug lurking under this, since the
problematic code here was supposed to be unreachable, but avoiding the
crash is the priority for now. If the problem re-surfaces then it should
at least present as an error message with some additional context about
what the caller was trying to achieve.
This also includes an unrelated fix for the gocty package, which doesn't
affect Terraform because Terraform makes very little use of that package.
The templatefile function only has two arguments, so ArgErrorf may be
called only with zero or one as the argument index. If the index is out
of bounds then HCL itself will panic while building the error message
when the function is called as an HCL function.
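As a sketch of the constraint, using the underlying cty function package
rather than the real templatefile implementation (the function spec and
error message below are invented), the index passed to NewArgErrorf must
identify one of the declared parameters:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function"
)

// A two-parameter function comparable in shape to templatefile: the only
// valid argument indices for argument errors are therefore 0 and 1.
var demoFunc = function.New(&function.Spec{
	Params: []function.Parameter{
		{Name: "path", Type: cty.String},
		{Name: "vars", Type: cty.DynamicPseudoType},
	},
	Type: function.StaticReturnType(cty.String),
	Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
		// Blaming index 1 (the "vars" argument) is fine; blaming index 2 or
		// higher is what makes HCL panic while rendering the diagnostic.
		return cty.NilVal, function.NewArgErrorf(1, "invalid vars value")
	},
})

func main() {
	_, err := demoFunc.Call([]cty.Value{
		cty.StringVal("greeting.tmpl"),
		cty.EmptyObjectVal,
	})
	fmt.Println(err)
}
```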
Unfortunately there isn't really a great layer in Terraform to test for
this class of bug systematically, because we are currently testing these
functions directly rather than going through HCL to do it. For the moment
we'll just live with that, but if we see this class of error arise again
we might consider either reworking the tests in this package to work with
HCL expression source code instead of direct calls or adding some
additional tests elsewhere that do so.
Due to these tests happening in the wrong order, removing an object from
the end of a sequence of objects would previously cause a bounds-check
panic.
Rather than a more severe rework of the logic here, for now we'll just
introduce an extra precondition to prevent the panic. The code that
follows already handles the case where there _is_ no new object (i.e. the
"old" object is being deleted) as long as we're able to pass through this
type-checking logic.
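A minimal sketch of the shape of the fix (the renderListDiff helper and
its output format are invented, not the actual diff renderer): the bounds
check has to happen before any logic that assumes a corresponding new
object exists.

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// renderListDiff walks the old objects; when an item was removed from the
// end of the list there is no new object at that index, so we must check
// the bounds before doing anything that assumes one exists.
func renderListDiff(oldObjs, newObjs []cty.Value) {
	for i, oldObj := range oldObjs {
		if i >= len(newObjs) {
			fmt.Printf("- %#v\n", oldObj) // old object deleted
			continue
		}
		fmt.Printf("~ %#v -> %#v\n", oldObj, newObjs[i])
	}
}

func main() {
	old := []cty.Value{
		cty.ObjectVal(map[string]cty.Value{"name": cty.StringVal("a")}),
		cty.ObjectVal(map[string]cty.Value{"name": cty.StringVal("b")}),
	}
	updated := old[:1] // the last object was removed
	renderListDiff(old, updated)
}
```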
The new "JSON list of objects - removing item" test covers this problem
by rendering a diff for an object being removed from the end of a list
of objects within a JSON value.
It's important to preserve the provider address because during the destroy
phase of provider tests we'll use the references in the state to determine
which providers are required, and so without this, attempts to override
the provider using the "provider" meta-argument can cause failures at
destroy time when the wrong provider gets selected.
(This is particularly acute in the google-beta provider tests because that
provider is _always_ used with provider = "google-beta" to override the
default behavior of using the normal "google" provider.)
* docs: elaborate on supported remote backend versions
This PR adds a few lines to the docs to indicate which commands are
supported by which version of the remote backend, and it makes a
recommendation about which version to use.
* Clarify remote state storage w/ TFE [skip ci]
Specifically, that this is the backend to use for remote state storage
(all tiers), and that the Free tier and Enterprise tiers differ in their
support for remote operations.
* website: Arrange remote backend info differently
The hcldec package has no awareness of the dynamic block extension, so
the hcldec.Variables function misses any variables used inside dynamic
blocks.
dynblock.VariablesHCLDec is a drop-in replacement for hcldec.Variables
that _is_ aware of dynamic blocks, returning all of the same variables
that hcldec would find naturally plus also any variables used inside
the dynamic block "for_each" and "labels" arguments and inside the
nested "content" block.
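A sketch of the difference (the import paths assume the hcl2-era package
layout Terraform used at the time, and the configuration and spec are
invented for illustration):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/ext/dynblock"
	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hcl/hclsyntax"
	"github.com/hashicorp/hcl2/hcldec"
	"github.com/zclconf/go-cty/cty"
)

func main() {
	src := `
setting {
  name = var.static_name
}
dynamic "setting" {
  for_each = var.extra_settings
  content {
    name = var.name_prefix
  }
}
`
	f, diags := hclsyntax.ParseConfig([]byte(src), "example.tf", hcl.Pos{Line: 1, Column: 1})
	if diags.HasErrors() {
		panic(diags.Error())
	}

	// Spec for zero or more "setting" blocks, each with a "name" attribute.
	spec := &hcldec.BlockListSpec{
		TypeName: "setting",
		Nested: &hcldec.AttrSpec{
			Name:     "name",
			Type:     cty.String,
			Required: true,
		},
	}

	// hcldec alone knows nothing about the dynamic block extension, so it
	// only finds var.static_name here.
	fmt.Println(len(hcldec.Variables(f.Body, spec)))

	// The dynamic-block-aware variant also finds var.extra_settings (from
	// for_each) and var.name_prefix (from the nested content block).
	fmt.Println(len(dynblock.VariablesHCLDec(f.Body, spec)))
}
```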
This includes improved functionality for HCL's "dynamic block extension",
which will allow us (in a subsequent commit) to properly detect
dependencies inside nested "dynamic" blocks, where currently they get
missed.
For this commit though, we just upgrade HCL to a version that includes it
and make a small change to our "lang" package to align with an upstream
renaming.
We have released the v0.12-oriented content to the website early in order
to support the beta process, but in some places we neglected to explicitly
mark features or content as being v0.12-only.
Here we add explicit markers to the main cases we've seen where readers
have reported confusion, along with some other tweaks in a similar vein.
Terraform Registry (and other registry implementations) can now return
an array of warnings with the versions response. These warnings are now
displayed to the user during a `terraform init`.
In earlier refactoring we updated these commands to support the new
address and state types, but attempted to partially retain the old-style
"StateFilter" abstraction that originally lived in the Terraform package,
even though that was no longer being used for any other functionality.
Unfortunately the adaptation of the existing filtering to the new types
wasn't exact and so these commands ended up having a few bugs that were
not covered by the existing tests.
Since the old StateFilter behavior was the source of various misbehavior
anyway, here it's removed altogether and replaced with some simpler
functions in the state_meta.go file that are tailored to the use-cases of
these sub-commands.
As well as just generally behaving more consistently with the other
parts of Terraform that use the new resource address types, this commit
fixes the following bugs:
- A resource address of aws_instance.foo would previously match a
resource of that type and name in any module, which disagreed with the
expected interpretation elsewhere of meaning a single resource in the
root module.
- The "terraform state mv" command did not support moves from a single
resource address to an indexed address and vice-versa, because the old
logic didn't need to make that distinction while they are two separate
address types in the new logic. Now we allow resources that do not have
count/for_each to be treated as if they are instances for the purposes
of this command (see the sketch after this list), which is a better match
for likely user intent and for the old behavior.
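As a purely illustrative sketch of the matching rules described above
(the resourceAddr type and matches helper are invented; the real
implementation lives in state_meta.go and uses the new resource address
types):

```go
package main

import "fmt"

// resourceAddr is a toy stand-in for Terraform's resource instance addresses.
type resourceAddr struct {
	Module string // "" means the root module
	Type   string
	Name   string
	Key    string // instance key; "" when the resource has no count/for_each
}

// matches reports whether the state entry 'got' is selected by the address
// 'want' that the user gave on the command line.
func matches(want, got resourceAddr) bool {
	// aws_instance.foo now means the resource in the root module only, so a
	// same-named resource in a child module no longer matches.
	if want.Module != got.Module {
		return false
	}
	if want.Type != got.Type || want.Name != got.Name {
		return false
	}
	// A resource without count/for_each is treated as if it were its single
	// instance, so whole-resource and instance addresses can still line up.
	return want.Key == "" || got.Key == "" || want.Key == got.Key
}

func main() {
	rootFoo := resourceAddr{Type: "aws_instance", Name: "foo"}
	childFoo := resourceAddr{Module: "module.app", Type: "aws_instance", Name: "foo"}

	fmt.Println(matches(rootFoo, rootFoo))  // true: the root module resource
	fmt.Println(matches(rootFoo, childFoo)) // false: different module
}
```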
Finally, we also clean up a little some of the usage output from these
commands, which hasn't been updated for some time and so had both some
stale information and some inaccurate terminology.
Our post-refresh safety check had the constraint and real type inverted,
so previously any refresh of a resource type with a dynamically-typed
attribute would fail this type check.
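The shape of the check, sketched directly against cty (this is not the
real safety check, which compares a refreshed value's type against the
schema's implied type):

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// Constraint implied by a schema with a dynamically-typed attribute.
	constraint := cty.Object(map[string]cty.Type{"value": cty.DynamicPseudoType})
	// Concrete type of a refreshed value.
	actual := cty.Object(map[string]cty.Type{"value": cty.Number})

	// Correct order: test the actual type against the constraint, where
	// cty.DynamicPseudoType means "any type is allowed here".
	fmt.Println(actual.TestConformance(constraint)) // nil: conforms

	// Inverted order (the bug): the "any" marker ends up on the wrong side,
	// which is what made refreshes of dynamically-typed attributes fail.
	fmt.Println(constraint.TestConformance(actual))
}
```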
Also includes a small tweak to the error message from this check since the
old one was a little awkward to read in practice when the error is a
cty.PathError rendered with an attribute path prefix.
We were using the wrong cty operation to access map members, causing a
panic whenever a prior value was present for a resource type with a nested
block backed by a map value.
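A sketch of the distinction involved (the values are invented; the real
code walks a prior state value for a nested block backed by a map):

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	m := cty.MapVal(map[string]cty.Value{
		"primary": cty.ObjectVal(map[string]cty.Value{
			"cidr": cty.StringVal("10.0.0.0/16"),
		}),
	})

	// Correct: elements of a map value are read with Index.
	fmt.Printf("%#v\n", m.Index(cty.StringVal("primary")))

	// Incorrect: GetAttr is only valid for object values, so calling it on a
	// map value panics, which is the class of crash fixed here.
	// m.GetAttr("primary")
}
```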
This includes two upstream fixes:
- Handle explicit JSON "null" consistently during decode of JSON syntax.
- Properly detect the end of a "heredoc" when formatting to avoid messing
up indentation of other lines following the heredoc.