HEREDOC tokens are a little more fussy than normal string sequences
because we need to preserve the whitespace within them along with the
start and end markers while we upgrade any interpolated expressions inside.
We need to do some work locally here because the HCL heredoc processing
"does too much" and throws away information we need to do a faithful
upgrade.
We also need to contend with the fact that Terraform <=0.11 had an older
version of HCL that accidentally permitted a degenerate form of heredoc
where the marker was at the end of the final line, like this:
    degenerate = <<EOT
    this should never have workedEOT
When we migrate this, we'll introduce the newline that is now required
before the end marker. Unfortunately that slightly changes the result:
the string gains a trailing newline when parsed by 0.12, so we'll need
to call this out as a caveat in the upgrade guide.
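As a rough illustration of that caveat (the variable names here are ours,
not the upgrade tool's), the same attribute yields slightly different
strings before and after the upgrade:

```go
package main

import "fmt"

func main() {
	// Value of the degenerate heredoc under Terraform <=0.11, where the
	// end marker sat at the end of the final content line.
	before := "this should never have worked"

	// Value once the upgrade tool moves the marker onto its own line and
	// Terraform 0.12 parses the heredoc: a trailing newline appears.
	after := "this should never have worked\n"

	fmt.Println(before == after) // false: comparisons against the old value change
}
```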
Previously the init error was produced deep in the backend by detecting a
special ResourceProviderError and was formatted directly to the CLI.
Create Diagnostics closer to where the problem is detected instead, and
pass them back through the normal diagnostic flow. While the output
isn't as nice yet, this restores the helpful error message and makes the
code easier to maintain. Better formatting can be handled later.
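A minimal sketch of that shape, assuming the tfdiags package; the function
name and message wording below are illustrative, not the actual ones used:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/tfdiags"
)

// initRequiredDiag builds the diagnostic at the point where the problem is
// detected so it can travel back through the normal diagnostics path,
// rather than being formatted directly to the CLI deep inside the backend.
func initRequiredDiag(providerName string) tfdiags.Diagnostics {
	var diags tfdiags.Diagnostics
	diags = diags.Append(tfdiags.Sourceless(
		tfdiags.Error,
		"Provider plugin not installed",
		fmt.Sprintf("Provider %q is required by this configuration but is not installed; run \"terraform init\" to install all required providers.", providerName),
	))
	return diags
}

func main() {
	fmt.Println(initRequiredDiag("aws").Err())
}
```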
* command/show: add `module_version` to `module_calls` in the config portion
of `terraform show`.
Also extended the `terraform show -json` test to run `init` so we could
add examples with modules. This does _not_ test `module_version`
yet, but it _did_ help expose a bug in jsonplan where modules were
duplicated; that bug is also fixed in this PR.
* command/jsonconfig: rename `version` to `version_constraint` and
`resolved_source` to `source`.
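A minimal sketch of how a module call might be encoded after the rename;
the field set is abbreviated and the example values are made up, only the
JSON key names follow the rename above:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// moduleCall is a simplified stand-in for an entry under "module_calls";
// the real struct in command/jsonconfig carries more fields.
type moduleCall struct {
	Source            string `json:"source"`
	VersionConstraint string `json:"version_constraint,omitempty"`
}

func main() {
	out, _ := json.Marshal(map[string]moduleCall{
		"vpc": {Source: "terraform-aws-modules/vpc/aws", VersionConstraint: "1.2.0"},
	})
	fmt.Println(string(out))
	// {"vpc":{"source":"terraform-aws-modules/vpc/aws","version_constraint":"1.2.0"}}
}
```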
The API surface area is much smaller when we use the remote backend for remote state only.
So, to try to prevent any backwards incompatibilities when Terraform runs inside of TFE, we've split up the discovery services into `state.v2` (which can be used for remote-state-only configurations, i.e. when running inside TFE) and `tfe.v2.1` (which can be used for all remote configurations).
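A minimal sketch, assuming the svchost/disco API; the selection flag and
error wording are illustrative, not the backend's actual logic:

```go
package main

import (
	"fmt"
	"net/url"

	"github.com/hashicorp/terraform/svchost/disco"
)

// discoverService picks the discovery service that matches the features we
// need: "tfe.v2.1" for full remote operations, "state.v2" when the
// configuration only uses remote state (e.g. when running inside TFE).
func discoverService(host *disco.Host, stateOnly bool) (*url.URL, error) {
	serviceID := "tfe.v2.1"
	if stateOnly {
		serviceID = "state.v2"
	}
	u, err := host.ServiceURL(serviceID)
	if err != nil {
		return nil, fmt.Errorf("host does not support %s: %s", serviceID, err)
	}
	return u, nil
}

func main() {
	// Illustrative only; a real *disco.Host comes from the discovery client
	// that the CLI configures with credentials and host overrides.
}
```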
The go-getter library used by the module loader validates S3 URLs in its parseURL function. That function assumes path-style URLs and fails on virtual-hosted-style URLs, where the bucket name is part of the hostname rather than the path.
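To make the difference concrete (the bucket and key below are placeholders),
here is the same object in the two styles; note how the bucket moves from
the path into the hostname:

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Path-style: the bucket appears as the first path segment.
	pathStyle, _ := url.Parse("https://s3.amazonaws.com/example-bucket/vpc.zip")
	// Virtual-hosted-style: the bucket appears as a subdomain of the host.
	virtualHosted, _ := url.Parse("https://example-bucket.s3.amazonaws.com/vpc.zip")

	fmt.Println(pathStyle.Host, pathStyle.Path)         // s3.amazonaws.com /example-bucket/vpc.zip
	fmt.Println(virtualHosted.Host, virtualHosted.Path) // example-bucket.s3.amazonaws.com /vpc.zip
}
```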
Our usual "ground rules" for mapping configschema to cty call for the
collection values representing nested block types to always be known and
non-null, using an empty collection to represent the absence of any blocks
of that type so that users can always safely use length(...) etc on them
without worrying about them sometimes being null.
However, due to some different behaviors in the legacy SDK, we've allowed
it an exception to this rule, which means that when the legacy SDK opt-out
is activated we can see unknown and null collections in these positions in
object values returned from provider operations like PlanResourceChange
and ApplyResourceChange.
As a consequence, our safety check functions, like AssertObjectCompatible
here, need to tolerate these non-ideal situations so that the checks can
still complete. We run these checks even when the provider requests an
opt-out, because we want to note any inconsistencies as WARNING-level log
lines to aid in debugging.
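A minimal sketch with the cty API contrasting the two regimes; the element
type here is just a stand-in for a nested block type:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// Element type standing in for a nested block type in a schema.
	elemTy := cty.Object(map[string]cty.Type{"port": cty.Number})

	// Normal ground rule: no blocks present => a known, non-null empty
	// collection, so length(...) and friends are always safe.
	normal := cty.ListValEmpty(elemTy)

	// Legacy SDK opt-out: the same position may come back null or unknown
	// from PlanResourceChange/ApplyResourceChange, and the safety checks
	// must tolerate it (noting a warning) rather than fail outright.
	legacyNull := cty.NullVal(cty.List(elemTy))
	legacyUnknown := cty.UnknownVal(cty.List(elemTy))

	fmt.Println(normal.LengthInt())      // 0
	fmt.Println(legacyNull.IsNull())     // true
	fmt.Println(legacyUnknown.IsKnown()) // false
}
```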
cty.Value.AsValueMap can return nil if called on an empty map or object.
The logic above was dealing with that case for maps, but object types
were falling through into this codepath and panicking when trying to
assign a new key into the nil dstMap.
This also includes a bonus fix where we were calling ty.ElementType in
a switch case that accepts object types. Object types don't have a single
element type, so we can't call ElementType on those (that also panics)
but we _can_ use the type of the value we selected from src to construct
our placeholder null value.
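Both fixes can be sketched with the cty API directly; the variable names
here are illustrative, not the ones in the real function:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// AsValueMap can return nil for an empty map or object, so allocate the
	// destination map before assigning into it (assigning into a nil map
	// panics).
	src := cty.EmptyObjectVal
	dstMap := src.AsValueMap()
	if dstMap == nil {
		dstMap = make(map[string]cty.Value)
	}
	dstMap["example"] = cty.True

	// Object types have no single element type, so build the placeholder
	// null from the type of the value selected from src rather than
	// calling ty.ElementType (which panics for object types).
	attrVal := cty.StringVal("a")
	placeholder := cty.NullVal(attrVal.Type())

	fmt.Println(len(dstMap), placeholder.Type().Equals(cty.String)) // 1 true
}
```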
A provider may react to a create or update failing by returning error
diagnostics and a partially-updated or nil new value, in which case we
do not expect our AssertObjectCompatible consistency check to succeed: the
provider is just assumed to be doing the best it can to preserve whatever
partial outcome it was able to achieve.
However, if errors are accompanied with a nil new value after an update,
we'll assume that the provider is telling us it wasn't able to get far
enough to make any change at all, and so we'll retain the prior value in
state. This ensures that a provider can't cause an object to be forgotten
from the state just because an update failed.
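A minimal sketch of that rule; the function and variable names are
illustrative, not the ones used during apply:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// keepPriorOnFailedUpdate: if an update returned errors alongside a null new
// value, assume the provider couldn't make any change at all and retain the
// prior value so the object isn't forgotten from state.
func keepPriorOnFailedUpdate(priorVal, newVal cty.Value, hasErrors bool) cty.Value {
	if hasErrors && newVal.IsNull() && !priorVal.IsNull() {
		return priorVal
	}
	return newVal
}

func main() {
	prior := cty.ObjectVal(map[string]cty.Value{"id": cty.StringVal("i-abc123")})
	failedNew := cty.NullVal(prior.Type())
	fmt.Println(keepPriorOnFailedUpdate(prior, failedNew, true).RawEquals(prior)) // true: prior retained
}
```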