Historically helper/schema did not support non-primitive map attributes
because they cannot be represented unambiguously in flatmap. When we
initially implemented CoreConfigSchema here, we mapped that situation to a
nested block of mode NestingMap, even though that had never actually worked,
assuming it would be harmless because providers wouldn't be using it.
It turns out that some providers are, in fact, incorrectly populating
a TypeMap schema with Elem: &schema.Resource, apparently under the false
assumption that it would constrain the keys allowed in the map. In
practice, helper/schema has just been ignoring this and treating such
attributes as map of string. (#20076)
In order to preserve the behavior of these existing incorrectly-specified
attribute definitions, here we mimic the helper/schema behavior by
presenting them as attributes of type map(string).
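For illustration, a minimal sketch of the incorrect pattern described above, assuming a 0.12-era helper/schema import path and hypothetical attribute names:

```go
package example

import "github.com/hashicorp/terraform/helper/schema"

// exampleResource declares a TypeMap attribute with Elem: &schema.Resource{},
// which never constrained the allowed map keys: helper/schema ignores the
// nested schema and treats "settings" as a plain map of string, and
// CoreConfigSchema now presents it as an attribute of type map(string).
func exampleResource() *schema.Resource {
	return &schema.Resource{
		Schema: map[string]*schema.Schema{
			"settings": {
				Type:     schema.TypeMap,
				Optional: true,
				// Incorrect: this nested schema has no effect on allowed keys.
				Elem: &schema.Resource{
					Schema: map[string]*schema.Schema{
						"enabled": {Type: schema.TypeBool, Optional: true},
					},
				},
			},
		},
	}
}
```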
These attributes have also been shown in some documentation as nested
blocks (with no equals sign), so that'll need to be fixed in user
configurations as they upgrade to Terraform 0.12. However, the existing
upgrade tool rules will take care of that as a natural consequence of the
name being indicated as an attribute in the schema, rather than as a block
type.
This fixes #20076.
In prior versions, we recommended using hash functions in conjunction with
the file function as an idiom for detecting changes to upstream blobs
without fetching and comparing the whole blob.
That approach relied on us being able to return raw binary data from
file(...). Since Terraform strings pass through intermediate
representations that are not binary-safe (e.g. the JSON state), there was
a risk of string corruption in prior versions which we have avoided for
0.12 by requiring that file(...) be used only with UTF-8 text files.
The specific case of returning a string and immediately passing it into
another function was not actually subject to that corruption risk, since
the HIL interpreter would just pass the string through verbatim, but this
is still now forbidden as a result of the stricter handling of file(...).
To avoid breaking these use-cases, here we introduce variants of the hash
functions with a "file" prefix that take the name of a file on disk to
hash rather than hashing the given string directly. The configuration
upgrade tool also now includes a rule to detect the documented idiom and
rewrite it into a single function call for one of these new functions.
This does cause a bit of function sprawl, but that seems preferable to
introducing more complex rules for when file(...) can and cannot read
binary files, making the behavior of these various functions easier to
understand in isolation.
These all follow the pattern of creating a hash and converting it to a
string using some encoding function, so we can write this implementation
only once and parameterize it with a hash factory function and an encoding
function.
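As a rough sketch of that pattern (illustrative names, not the actual functions package code), a single file-hashing helper parameterized with a hash factory and an encoding function, using only the standard library:

```go
package funcs

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"hash"
	"io"
	"os"
)

// makeFileHashFunc builds a file-hashing function from a hash factory and an
// encoding function, so the same implementation can back filesha256,
// filesha512, filemd5, and so on.
func makeFileHashFunc(factory func() hash.Hash, encode func([]byte) string) func(string) (string, error) {
	return func(path string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", fmt.Errorf("failed to open %s: %s", path, err)
		}
		defer f.Close()

		h := factory()
		if _, err := io.Copy(h, f); err != nil {
			return "", fmt.Errorf("failed to read %s: %s", path, err)
		}
		return encode(h.Sum(nil)), nil
	}
}

// For example, a filesha256-like function:
var fileSHA256 = makeFileHashFunc(sha256.New, hex.EncodeToString)
```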
This also includes a new test for the sha512 function, which was
previously missing a test and, it turns out, actually computing sha256
instead.
* command/jsonplan: sort resources by address
* command/show: extend test case to include resources with count
* command/json*: document resource ordering as consistent but undefined
This fixes some consistency problems with how number strings were parsed
in the msgpack decoder vs other situations.
This commit also includes an upgrade of HCL2 to use this new cty function,
though there's no change in behavior here since the new function is
functionally equivalent to what it replaced.
If there were no matching keys, and there was no diff at all, don't set
a zero count for the container. Normally providers can't reliably distinguish
empty from unset here, but there are some cases that worked.
Since terraform-bundle is just a different frontend to Terraform's module
installer, it is subject to the same installation constraints as Terraform
itself.
Terraform 0.12 cannot install providers targeting Terraform 0.11 and
earlier, and therefore terraform-bundle built with Terraform 0.12
cannot either. A build of terraform-bundle from the v0.11 line must be
used instead.
Without this change, the latest revisions of terraform-bundle would
install plugins for Terraform 0.12 to bundle along with Terraform 0.10 or
0.11, which will not work at runtime due to the plugin protocol mismatch.
Until now, terraform-bundle was incorrectly labelled with its own version
number even though in practice it has no version identity separate from
Terraform itself. Part of this change, then, is to make the
terraform-bundle version match the Terraform version it was built against,
though any prior builds will of course continue to refer to themselves
as 0.0.1.
If asked to create a bundle for a version of Terraform v0.12 or greater,
an error will be returned instructing the user to use a build from the
v0.11 branch or one of the v0.11.x tags in order to bundle those versions.
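A minimal sketch of that guard, assuming the hashicorp/go-version library and a hypothetical function name:

```go
package main

import (
	"fmt"

	version "github.com/hashicorp/go-version"
)

// checkCoreVersion refuses to bundle Terraform v0.12.0 or greater, directing
// the user to a terraform-bundle build from the v0.11 line instead.
func checkCoreVersion(coreVersion string) error {
	v, err := version.NewVersion(coreVersion)
	if err != nil {
		return fmt.Errorf("invalid Terraform version %q: %s", coreVersion, err)
	}
	if v.Compare(version.Must(version.NewVersion("0.12.0"))) >= 0 {
		return fmt.Errorf("this terraform-bundle can only bundle Terraform v0.11 and earlier; use a build from the v0.11 branch to bundle Terraform %s", coreVersion)
	}
	return nil
}
```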
This also includes a small fix for a bug where the tool would not fail
properly when the requested Terraform version is not available for
installation, instead just producing a zip file with no "terraform"
executable inside at all. Now it will fail, allowing automated build
processes to detect it and not produce a broken archive for distribution.
Our new diff handling no longer requires stripping the empty diffs out,
and providers may be relying on some of the empty-value quirks in
helper/schema.
Fix a missing prefix in the map recount. This generally passes tests since the
actual count should already be there and be correct, and the extra key
is ignored by the shims.
* command/show: properly marshal attribute values to json
marshalAttributeValues in the jsonstate and jsonplan packages was returning
a cty.Value, which encoding/json could not marshal. These functions now
convert those cty.Values into json.RawMessages (see the sketch below).
* command/jsonplan: planned values should include resources that are not changing
* command/jsonplan: return a filtered list of proposed 'after' attributes
Previously, proposed 'after' attributes were not being shown if the
attributes were not WhollyKnown. jsonplan now iterates through all the
`after` attributes, omitting those which are not wholly known.
The same was roughly true for after_unknown, and that structure is now
correctly populated. In the future we may choose to filter the
after_unknown structure to _only_ display unknown attributes, instead of
all attributes.
* command/jsonconfig: use a unique key for providers so that aliased
providers don't get munged together
This now uses the same "provider" key from configs.Module, e.g.
`providername.provideralias`.
* command/jsonplan: unknownAsBool needs to iterate through objects that are not wholly known
* command/jsonplan: properly display actions as strings according to the RFC,
instead of as plans.Action strings.
For example, the plans.Action value DeleteThenCreate should be displayed as
["delete", "create"].
Tests have been updated to reflect this.
* command/jsonplan: return "null" for unknown list items.
The length of a list could be meaningful on its own, so we will turn
unknowns into "null". The same is less likely true for maps and objects,
so we will continue to omit unknown values from those.
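As a sketch of the cty-to-JSON conversion mentioned in the marshaling item above (hypothetical function body; the real implementations live in the jsonstate and jsonplan packages and also filter unknowns as described):

```go
package example

import (
	"encoding/json"

	"github.com/zclconf/go-cty/cty"
	ctyjson "github.com/zclconf/go-cty/cty/json"
)

// marshalAttributeValues converts an object value's attributes into
// json.RawMessage values, which encoding/json can embed directly, rather
// than returning cty.Value itself.
func marshalAttributeValues(value cty.Value) (map[string]json.RawMessage, error) {
	if value.IsNull() || !value.IsKnown() {
		return nil, nil
	}
	ret := make(map[string]json.RawMessage)
	for k, v := range value.AsValueMap() {
		raw, err := ctyjson.Marshal(v, v.Type())
		if err != nil {
			return nil, err
		}
		ret[k] = json.RawMessage(raw)
	}
	return ret, nil
}
```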
The SDK uses only the native int and float64 types internally for values
that are specified as being "number" in schema, so for SDK purposes only
a float64 level of precision is significant.
To avoid any weirdness introduced as we shim and un-shim numbers, we'll
reduce floating point numbers to float64 precision before comparing them
to try to mimic the result the SDK itself would've gotten from comparing
its own float64 versions of these values using the Go "==" operator.
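A hypothetical sketch of that reduced-precision comparison for two cty number values:

```go
package example

import "github.com/zclconf/go-cty/cty"

// numbersEqualAtFloat64Precision reduces both numbers to float64 before
// comparing, mimicking the result the SDK would get from comparing its own
// float64 copies of these values with Go's == operator.
func numbersEqualAtFloat64Precision(a, b cty.Value) bool {
	if a.IsNull() || b.IsNull() || !a.IsKnown() || !b.IsKnown() {
		return a.RawEquals(b)
	}
	af, _ := a.AsBigFloat().Float64()
	bf, _ := b.AsBigFloat().Float64()
	return af == bf
}
```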
Due to various imprecisions in the old SDK implementation, applying the
generated diff can potentially make changes to the data structure that
have no real effect, such as replacing an empty list with a null list or
vice-versa.
Although we can't totally eliminate such diff noise, here we attempt to
avoid it in situations where there are _only_ meaningless changes -- where
the prior state and planned state are equivalent -- by just echoing back
the prior state verbatim to ensure that Terraform will treat it as a noop
change.
If there _are_ some legitimate changes then the result may still contain
meaningless changes alongside it, but that is just a cosmetic problem for
the diff renderer, because the meaningless changes will be ignored
altogether during a subsequent apply anyway. The primary goal here is just
to ensure we can converge on a fixpoint when there are no explicit changes
in the configuration.
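A minimal sketch of that fallback, with the "approximately equal" helper (described next) passed in as a parameter; names are illustrative:

```go
package example

import "github.com/zclconf/go-cty/cty"

// normalizeNoopResult echoes the prior state back verbatim when the planned
// new state differs from it only in meaningless ways, so Terraform treats
// the change as a no-op.
func normalizeNoopResult(prior, planned cty.Value, approxEqual func(a, b cty.Value) bool) cty.Value {
	if approxEqual(prior, planned) {
		return prior
	}
	return planned
}
```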
This is a first pass of an "approximately equal" function that tries to
mimic the reduced precision caused by the field reader abstraction in
helper/schema so that we can distinguish between meaningful changes to
the proposed new state and incidental ones that just result from the loss
of precision in the SDK implementation.
The previous version assumed the diff could be applied verbatim, and
only used the schema at the top level since diffs are "flat". This
turned out to not work reliably with nested blocks. The new Apply method
is driven completely by the schema, and handles nested blocks separately
from other collections.
In Terraform 0.11 and earlier we just silently ignored undeclared
variables in -var-file and the automatically-loaded .tfvars files. This
was a bad user experience for anyone who made a typo in a variable name
and got no feedback about it, so we made this an error for 0.12.
However, several users are now relying on the silent-ignore behavior for
automation scenarios where they pass the same .tfvars file to all
configurations in their organization and expect Terraform to ignore any
settings that are not relevant to a specific configuration. We never
intentionally supported that, but we don't want to immediately break that
workflow during 0.12 upgrade.
As a compromise, then, we'll make this a warning for v0.12.0 that contains
a deprecation notice suggesting a move to environment variables
for this "cross-configuration variables" use-case. We don't produce errors
for undeclared variables in environment variables, even though that
potentially causes the same UX annoyance as ignoring them in vars files,
because environment variables are assumed to live in the user's session
and thus it would be very inconvenient to have to unset such variables
when moving between directories. Their "ambientness" makes them a better
fit for these automatically-assigned general variable values that may or
may not be used by a particular configuration.
This can revert to being an error in a future major release, after users
have had the opportunity to migrate their automation solutions over to
use environment variables.
We don't seem to have any tests covering this specific situation right
now. That isn't ideal, but this change is so straightforward that it would
be relatively expensive to build new targeted test cases for it and so
I instead just hand-tested that it is indeed now producing a warning where
we were previously producing an error. Hopefully if there is any more
substantial work done on this codepath in future that will be our prompt
to add some unit tests for this.
In an ideal world, providers are supposed to respond to errors during
apply by returning a partial new state alongside the error diagnostics.
In practice though, our SDK leaves the new value set to nil for certain
errors, which was causing Terraform to "forget" the object altogether by
assuming that the provider intended to say "null".
We now adjust that assumption to apply only in the delete case. In all
other cases (including updates) we retain the prior state if the new
state is given as nil. Although we could potentially fix this in the SDK
itself, I expect this is a likely bug in other future SDKs for other
languages too, so this new assumption is a safer one to make to be
resilient to data loss when providers don't behave perfectly.
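A hypothetical sketch of the adjusted assumption, expressed in terms of cty values (names are illustrative):

```go
package example

import "github.com/zclconf/go-cty/cty"

// resolveNewStateAfterApplyError decides what to record when a provider
// returns a nil (null) new object alongside error diagnostics: believe the
// null only for deletes (planned state null); otherwise retain the prior
// object so it isn't forgotten and the user can retry.
func resolveNewStateAfterApplyError(prior, planned, returned cty.Value) cty.Value {
	if returned.IsNull() && !planned.IsNull() {
		return prior
	}
	return returned
}
```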
Providers that return both a nil new value and no errors are considered
buggy, but unfortunately that applies to the mocks in many of our tests,
so for pragmatic reasons we can't generate an error for that case as we do
for other "should never happen" situations. Instead, we'll just retain the
prior value in the state so the user can retry.