While copyMissingValues was meant to re-insert empty values that were
null after apply, it turns out that plan is sometimes not predictable
either.
normalizeNullValue is meant to fix up any null/empty transitions between
two values, and to be useful during plan as well. During plan the function
only concerns itself with individual, known values, and skips sets
entirely. The result of running with plan == true is that only changes
between empty and null collections are fixed.
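For illustration, here is a minimal sketch of the leaf-level idea. This is a simplified stand-in, not the real normalizeNullValue (which also walks nested values); only the null/empty collection case and the plan-time set skip are shown:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// normalizeNullValue (sketch): when one side of a transition is a null
// collection and the other is the corresponding empty collection, prefer
// the source's form so the change reads as a no-op.
func normalizeNullValue(dst, src cty.Value, plan bool) cty.Value {
	ty := dst.Type()
	if !ty.Equals(src.Type()) || !ty.IsCollectionType() {
		return dst
	}
	if plan && ty.IsSetType() {
		// Sets are skipped entirely during plan.
		return dst
	}
	srcEmpty := src.IsKnown() && !src.IsNull() && src.LengthInt() == 0
	dstEmpty := dst.IsKnown() && !dst.IsNull() && dst.LengthInt() == 0
	if (src.IsNull() && dstEmpty) || (srcEmpty && dst.IsNull()) {
		return src
	}
	return dst
}

func main() {
	src := cty.ListValEmpty(cty.String)
	dst := cty.NullVal(cty.List(cty.String))
	fmt.Printf("%#v\n", normalizeNullValue(dst, src, true)) // cty.ListValEmpty(cty.String)
}
```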
We were previously using cty.Path.Apply, which serves a similar purpose
but implements the more restrictive traversal behaviors down at the cty
layer. hcl.ApplyPath uses the same rules as HCL expressions and so ensures
consistent behavior with normal user expressions.
cty.Path.Apply also previously had a crashing bug (discussed in #20084)
that was causing a panic here. That has now been fixed in cty, but since
we're no longer using it here that's a moot point. The HCL traversing
implementation has been fuzz-tested and unit tested a lot more thoroughly
so should not run into the same crashers we saw with cty before.
The cty change here fixes a panic when cty.Path.Apply is given a null
value, making it correctly return an error instead.
However, the HCL2 change includes an alternative to cty.Path.Apply that
uses HCL-level rules rather than cty-level rules, so the result behaves
like an HCL expression would. Most uses of cty.Path.Apply ought to use
hcl.ApplyPath instead, to ensure that the behavior is consistent with what
users expect in the main language.
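For example, traversing a value with hcl.ApplyPath looks roughly like this (the data and path are made up for illustration; the import path may differ by version, as at the time of this change the package lived under hcl2):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/zclconf/go-cty/cty"
)

func main() {
	obj := cty.ObjectVal(map[string]cty.Value{
		"items": cty.TupleVal([]cty.Value{
			cty.StringVal("a"),
			cty.StringVal("b"),
		}),
	})

	// Equivalent to evaluating obj.items[1] in an HCL expression.
	path := cty.Path{
		cty.GetAttrStep{Name: "items"},
		cty.IndexStep{Key: cty.NumberIntVal(1)},
	}

	// hcl.ApplyPath traverses using the same rules as HCL expressions,
	// rather than cty's stricter traversal rules.
	v, diags := hcl.ApplyPath(obj, path, nil)
	if diags.HasErrors() {
		fmt.Println(diags.Error())
		return
	}
	fmt.Printf("%#v\n", v) // cty.StringVal("b")
}
```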
An earlier commit incorrectly updated some versions in go.mod without also
updating the vendor tree, so this also rolls those back to where they used
to be so that we can roll them forward carefully and make sure the tests
actually pass. (If we just accept these new versions as specified the
tests do not pass, so some work is required to fix those regressions.)
The new decoder is more precise and unpacks the timeout block into a
single map, which ResourceTimeout.ConfigDecode was updated to handle.
However, we still need to work with legacy versions of Terraform that use
the old decoder.
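As a rough illustration of the difference in shape (these literals are invented examples, not actual decoder output):

```go
package main

import "fmt"

func main() {
	// New decoder: the timeouts block arrives as one flat map of
	// timeout names to duration strings.
	newForm := map[string]interface{}{"create": "10m", "delete": "30m"}

	// Legacy decoder: the same block arrived wrapped in a list, which
	// ResourceTimeout.ConfigDecode must continue to accept.
	legacyForm := []map[string]interface{}{{"create": "10m", "delete": "30m"}}

	fmt.Println(newForm, legacyForm)
}
```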
When elements are removed from a list, not all attributes may be present
in the diff. Once the individual attribute diffs are applied, use the
length to truncate the flatmapped list to the correct number of elements.
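A sketch of the truncation idea in flatmap terms (truncateFlatmapList is an invented helper for illustration, not the actual code):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// truncateFlatmapList drops any leftover indexed keys at or beyond the
// recorded count, after individual attribute diffs have been applied.
func truncateFlatmapList(m map[string]string, prefix string) {
	count, err := strconv.Atoi(m[prefix+".#"])
	if err != nil {
		return
	}
	for k := range m {
		if !strings.HasPrefix(k, prefix+".") {
			continue
		}
		idxStr := strings.SplitN(strings.TrimPrefix(k, prefix+"."), ".", 2)[0]
		if idx, err := strconv.Atoi(idxStr); err == nil && idx >= count {
			delete(m, k)
		}
	}
}

func main() {
	m := map[string]string{
		"tags.#": "1",
		"tags.0": "keep",
		"tags.1": "stale", // left over after elements were removed
	}
	truncateFlatmapList(m, "tags")
	fmt.Println(m) // map[tags.#:1 tags.0:keep]
}
```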
With the new diff.Apply we can keep the diff mostly intact, but we need
to turn off all RequiresNew flags so that the prior state is not removed
during apply.
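A sketch of what clearing those flags amounts to, using the legacy terraform.InstanceDiff types (the loop is illustrative, not the exact code):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/terraform"
)

// clearRequiresNew keeps the diff intact but turns off every RequiresNew
// flag, so that applying the diff updates in place rather than replacing
// the resource and losing the prior state.
func clearRequiresNew(d *terraform.InstanceDiff) {
	for _, attr := range d.Attributes {
		if attr != nil {
			attr.RequiresNew = false
		}
	}
}

func main() {
	d := &terraform.InstanceDiff{
		Attributes: map[string]*terraform.ResourceAttrDiff{
			"name": {Old: "a", New: "b", RequiresNew: true},
		},
	}
	clearRequiresNew(d)
	fmt.Println(d.Attributes["name"].RequiresNew) // false
}
```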
One quirky aspect of our import feature is that we allow the importer to
produce additional resources alongside the one that was imported, such as
creating a separate rule resource for each rule of an imported security
group.
Providers need to be able to set the types of these other resources since
they may not match the "main" resource type. They do this by calling
ResourceData.SetType, which in turn sets InstanceState.Ephemeral.Type.
In our shims here we therefore need to copy that out into our new TypeName
field so that the new core import code can see it and create the right
type in the state.
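A sketch of that copy, using the legacy InstanceState type and the providers.ImportedResource struct that the new core code consumes (the helper and fallback logic here are assumptions for illustration):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/providers"
	"github.com/hashicorp/terraform/terraform"
)

// importedTypeName prefers the type the provider set via
// ResourceData.SetType (stored in InstanceState.Ephemeral.Type),
// falling back to the "main" resource type otherwise.
func importedTypeName(is *terraform.InstanceState, fallback string) string {
	if t := is.Ephemeral.Type; t != "" {
		return t
	}
	return fallback
}

func main() {
	is := &terraform.InstanceState{}
	is.Ephemeral.Type = "aws_security_group_rule"

	imported := providers.ImportedResource{
		TypeName: importedTypeName(is, "aws_security_group"),
		// ... the remaining state would be shimmed here ...
	}
	fmt.Println(imported.TypeName) // aws_security_group_rule
}
```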
Testing this required a minor change to the test harness to allow the
ImportStateCheck function to see the resource type.
* command/show: add support for -json output for state
* command/jsonconfig: do not marshal empty count/for each expressions
* command/jsonstate: continue gracefully if the terraform version is somehow missing from state
Historically helper/schema did not support non-primitive map attributes
because they cannot be represented unambiguously in flatmap. When we
initially implemented CoreConfigSchema here we mapped that situation to
a nested block of mode NestingMap, even though that'd never worked until
now, assuming that it'd be harmless because providers wouldn't be using
it.
It turns out that some providers are, in fact, incorrectly populating
a TypeMap schema with Elem: &schema.Resource, apparently under the false
assumption that it would constrain the keys allowed in the map. In
practice, helper/schema has just been ignoring this and treating such
attributes as map of string. (#20076)
In order to preserve the behavior of these existing incorrectly-specified
attribute definitions, here we mimic the helper/schema behavior by
presenting as an attribute of type map(string).
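For example, a provider schema of the incorrect kind described here looks like the following, and is treated exactly as if Elem were a plain string type:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/schema"
)

// An incorrectly-specified attribute: a TypeMap whose Elem is a
// *schema.Resource. The nested schema has never been enforced;
// helper/schema treats the attribute as a plain map of strings, and the
// shim now presents it to core as map(string) too.
var exampleSchema = map[string]*schema.Schema{
	"metadata": {
		Type:     schema.TypeMap,
		Optional: true,
		// Has no effect: does not constrain the map's keys or values.
		Elem: &schema.Resource{
			Schema: map[string]*schema.Schema{
				"name": {Type: schema.TypeString, Optional: true},
			},
		},
	},
}

func main() {
	fmt.Println(len(exampleSchema))
}
```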
These attributes have also been shown in some documentation as nested
blocks (with no equals sign), so that'll need to be fixed in user
configurations as they upgrade to Terraform 0.12. However, the existing
upgrade tool rules will take care of that as a natural consequence of the
name being indicated as an attribute in the schema, rather than as a block
type.
This fixes #20076.
In prior versions, we recommended using hash functions in conjunction with
the file function as an idiom for detecting changes to upstream blobs
without fetching and comparing the whole blob.
That approach relied on us being able to return raw binary data from
file(...). Since Terraform strings pass through intermediate
representations that are not binary-safe (e.g. the JSON state), there was
a risk of string corruption in prior versions which we have avoided for
0.12 by requiring that file(...) be used only with UTF-8 text files.
The specific case of returning a string and immediately passing it into
another function was not actually subject to that corruption risk, since
the HIL interpreter would just pass the string through verbatim, but this
is still now forbidden as a result of the stricter handling of file(...).
To avoid breaking these use-cases, here we introduce variants of the hash
functions with a "file" prefix that take the name of a file on disk to
hash rather than hashing the given string directly. The configuration
upgrade tool also now includes a rule to detect the documented idiom and
rewrite it into a single function call for one of these new functions.
This does cause a bit of function sprawl, but that seems preferable to
introducing more complex rules for when file(...) can and cannot read
binary files, making the behavior of these various functions easier to
understand in isolation.
These all follow the pattern of creating a hash and converting it to a
string using some encoding function, so we can write this implementation
only once and parameterize it with a hash factory function and an encoding
function.
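A sketch of that pattern (makeFileHashFunc and its signature are invented for illustration, not Terraform's internals):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"hash"
	"io"
	"os"
)

// makeFileHashFunc builds a "file..." hash function from a hash
// constructor and an encoding function, so the shared file-reading and
// hashing logic is written only once.
func makeFileHashFunc(hashFactory func() hash.Hash, encode func([]byte) string) func(string) (string, error) {
	return func(filename string) (string, error) {
		f, err := os.Open(filename)
		if err != nil {
			return "", err
		}
		defer f.Close()

		h := hashFactory()
		// Stream the file through the hash so large blobs are never
		// held in memory as a string.
		if _, err := io.Copy(h, f); err != nil {
			return "", err
		}
		return encode(h.Sum(nil)), nil
	}
}

func main() {
	filesha256 := makeFileHashFunc(sha256.New, hex.EncodeToString)
	sum, err := filesha256("example.txt")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(sum)
}
```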
This also includes a new test for the sha512 function, which was
previously missing a test and, it turns out, actually computing sha256
instead.
* command/jsonplan: sort resources by address
* command/show: extend test case to include resources with count
* command/json*: document resource ordering as consistent but undefined
This fixes some consistency problems with how number strings were parsed
in the msgpack decoder vs other situations.
This commit also includes an upgrade of HCL2 to use this new cty function,
though there's no change in behavior here since the new function is
functionally equivalent to what it replaced.
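Assuming the new function is cty.ParseNumberVal (the message does not name it here), usage looks like this:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// Parsing a number from its string representation through one shared
	// implementation means the msgpack decoder and other callers agree
	// on how number strings become cty numbers.
	v, err := cty.ParseNumberVal("3.14159")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("%#v\n", v)
}
```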
If there were no matching keys and there was no diff at all, don't set
a zero count for the container. Normally providers can't reliably detect
empty vs unset here, but there are some cases that worked.
Since terraform-bundle is just a different frontend to Terraform's module
installer, it is subject to the same installation constraints as Terraform
itself.
Terraform 0.12 cannot install providers targeting Terraform 0.11 and
earlier, and so therefore terraform-bundle built with Terraform 0.12
cannot either. A build of terraform-bundle from the v0.11 line must be
used instead.
Without this change, the latest revisions of terraform-bundle would
install plugins for Terraform 0.12 to bundle along with Terraform 0.10 or
0.11, which will not work at runtime due to the plugin protocol mismatch.
Until now, terraform-bundle was incorrectly labelled with its own version
number even though in practice it has no version identity separate from
Terraform itself. Part of this change, then, is to make the
terraform-bundle version match the Terraform version it was built against,
though any prior builds will of course continue to refer to themselves
as 0.0.1.
If asked to create a bundle for a version of Terraform v0.12 or greater,
an error will be returned instructing the user to use a build from the
v0.11 branch or one of the v0.11.x tags in order to bundle those versions.
This also includes a small fix for a bug where the tool would not fail
properly when the requested Terraform version is not available for
installation, instead just producing a zip file with no "terraform"
executable inside at all. Now it will fail, allowing automated build
processes to detect it and not produce a broken archive for distribution.
Our new diff handling no longer requires stripping the empty diffs out,
and providers may be relying on some of the empty-value quirks in
helper/schema.
Fix a missing prefix in the map recount. This generally passed tests,
since the actual count should already be present and correct; the extra
key is then ignored by the shims.
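For context, a map's element count lives in flatmap under a "%" key that must carry the full attribute prefix, roughly as illustrated here (the values are an invented example):

```go
package main

import "fmt"

func main() {
	// Correct form: the count key carries the attribute prefix.
	good := map[string]string{"tags.%": "2", "tags.a": "1", "tags.b": "2"}

	// Buggy form: the recount wrote a bare "%" key. Since the prefixed
	// count was usually already present and correct, the stray key was
	// simply ignored by the shims, which is why tests generally passed.
	bad := map[string]string{"%": "2", "tags.%": "2", "tags.a": "1", "tags.b": "2"}

	fmt.Println(good, bad)
}
```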