* Introduce "Local" terminology for non-absolute provider config addresses
In a future change AbsProviderConfig and LocalProviderConfig are going to
become two entirely distinct types, rather than Abs embedding Local as
written here. This naming change is in preparation for that subsequent
work, which will also include introducing a new "ProviderConfig" type
that is an interface that AbsProviderConfig and LocalProviderConfig both
implement.
This is intended to be largely just a naming change to get started, so
we can deal with all of the messy renaming. However, this did also require
a slight change in modeling where the Resource.DefaultProviderConfig
method has become Resource.DefaultProvider returning a Provider address
directly, because this method doesn't have enough information to construct
a true and accurate LocalProviderConfig -- it would need to refer to the
configuration to know what local name this module uses for the provider it
has selected.
In order to leave a trail to follow for subsequent work, all of the
changes here are intended to ensure that remaining work will become
obvious via compile-time errors when all of the following changes happen:
- The concept of "legacy" provider addresses is removed from the addrs
package, including removing addrs.NewLegacyProvider and
addrs.Provider.LegacyString.
- addrs.AbsProviderConfig stops having addrs.LocalProviderConfig embedded
in it and has an addrs.Provider and a string alias directly instead.
- The provider-schema-handling parts of Terraform core are updated to
work with addrs.Provider to identify providers, rather than legacy
strings.
In particular, there are still several codepaths here making legacy
provider address assumptions (in order to limit the scope of this change)
but I've made sure each one is doing something that relies on at least
one of the above changes not having been made yet.
* addrs: ProviderConfig interface
In a (very) few special situations in the main "terraform" package we need
to make runtime decisions about whether a provider config is absolute
or local.
We currently do that by exploiting the fact that AbsProviderConfig has
LocalProviderConfig nested inside of it and so in the local case we can
just ignore the wrapping AbsProviderConfig and use the embedded value.
In a future change we'll be moving away from that embedding and making
these two types distinct in order to represent that mapping between them
requires consulting a lookup table in the configuration, and so here we
introduce a new interface type ProviderConfig that can represent either
AbsProviderConfig or LocalProviderConfig decided dynamically at runtime.
This also includes the Config.ResolveAbsProviderAddr method that will
eventually be responsible for that local-to-absolute translation, so
that callers with access to the configuration can normalize to an
addrs.AbsProviderConfig given a non-nil addrs.ProviderConfig. That's
currently unused because existing callers are still relying on the
simplistic structural transform, but we'll switch them over in a later
commit.
* rename LocalType to LocalName
Co-authored-by: Kristin Laemmert <mildwonkey@users.noreply.github.com>
Renamed file.ProviderRequirements to file.RequiredProviders to match the
name of the block in the configuration. file.RequiredProviders contains
the contents of the file(s); module.ProviderRequirements contains the
parsed and merged provider requirements.
Extended decodeRequiredProvidersBlock to parse the new provider source
syntax (version only, it will ignore any other attributes).
Added some tests; swapped deep.Equal with cmp.Equal in
terraform/module_dependencies_test.go because deep was not catching
incorrect constraints.
The existing "type" argument allows specifying a type constraint that
allows for some basic validation, but often there are more constraints on
a variable value than just its type.
This new feature (requiring an experiment opt-in for now, while we refine
it) allows specifying arbitrary validation rules for any variable which
can then cause custom error messages to be returned when a caller provides
an inappropriate value.
variable "example" {
validation {
condition = var.example != "nope"
error_message = "Example value must not be \"nope\"."
}
}
The core parts of this are designed to do as little new work as possible
when no validations are specified, and the main new checking codepath can
only run when the experiment is enabled, since the experiment is what
permits having validations in the first place.
* deps: bump terraform-config-inspect library
* configs: parse `version` in new required_providers block
With the latest version of `terraform-config-inspect`, the
required_providers attribute can now be a string or an object with
attributes "source" and "version". This change allows parsing the
version constraint from the new object while ignoring any given source attribute.
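For illustration, the new object form looks something like this (the
provider name and constraint here are placeholders); only the version
attribute is decoded by this change, and any given source attribute is
ignored for now:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 2.7"
    }
  }
}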
In an earlier change we switched to defining our own sets of detectors,
getters, etc for go-getter in order to insulate us from upstream changes
to those sets that might otherwise change the user-visible behavior of
Terraform's module installer.
However, we apparently neglected to actually refer to our local set of
detectors, and continued to refer to the upstream set. Here we catch up
with the latest detectors from upstream (taken from the version of
go-getter we currently have vendored) and start using that fixed set.
Currently we are maintaining these custom go-getter sets in two places
due to the configload vs. initwd distinction. That was already true for
goGetterGetters and goGetterDecompressors, and so I've preserved that for
now just to keep this change relatively simple; in a later change it would
be nice to factor these "get with go getter" functions out into a shared
location which we can call from both configload and initwd.
* configs: move ProviderConfigCompact[Str] from addrs to configs
The configs package is aware of provider name and type (which are the
same thing today, but expected to be two different things in a future
release), and should be the source of truth for a provider config
address. This is an intermediate step; the next step will change the returned types to something based in the configs package.
* command: rename choosePlugins to chooseProviders to clarify scope of function
* use `Provider.LegacyString()` (instead of `Provider.Type`) consistently
* explicitly create legacy-style provider (continuing from above change)
Traditionally we've preferred to release new language features in major
releases only, because we can then use the beta cycle to gather feedback
on the feature and learn about any usability challenges or other
situations we didn't consider during our design in time to make those
changes before inclusion in a stable release.
This "experiments" feature is intended to decouple the feedback cycle for
new features from the major release rhythm, and thus allow us to release
new features in minor releases by first releasing them as experimental for
a minor release or two, adjust for any feedback gathered during that
period, and then finally remove the experiment gate and enable the feature
for everyone.
The intended model here is that anything behind an experiment gate is
subject to breaking changes even in patch releases, and so any module
using these experimental features will be broken by a future Terraform
upgrade.
The behavior implemented here is:
- Recognize a new "experiments" setting in the "terraform" block which
allows module authors to explicitly opt in to experimental features.
  terraform {
    experiments = [resource_for_each]
  }
- Generate a warning whenever loading a module that has experiments
enabled, to avoid accidentally depending on experimental features and
thus risking unexpected breakage on next Terraform upgrade.
- We check the enabled experiments against the configuration at module
load time, which means that experiments are scoped to a particular
module. Enabling an experiment in one module does not automatically
enable it in any other module.
This experiments mechanism is itself an experiment, and so I'd like to
use the resource for_each feature to trial it. Because any configuration
using experiments is subject to breaking changes, we are free to adjust
this experiments feature in future releases as we see fit, but once
for_each is shipped without an experiment gate we'll be blocked from
making significant changes to it until the next major release at least.
The configs package is aware of provider name and type (which are the
same thing today, but expected to be two different things in a future
release), and should be the source of truth for a provider config
address.
* huge change to weave new addrs.Provider into addrs.ProviderConfig
* terraform: do not include an empty string in the returned Providers /
Provisioners
- Fixed a minor bug where results included an extra empty string
* terraform/context: use new addrs.Provider as map key in provider factories
* added NewLegacyProviderType and LegacyString funcs to make it explicit that these are temporary placeholders
This PR introduces a new concept, provider fully-qualified name (FQN), encapsulated by the `addrs.Provider` struct.
Add deprecation warning for references from destroy provisioners or
their connections to external resources or values. In order to ensure
resource destruction can be completed correctly, destroy nodes must be
able to evaluate with only their instance state.
We have sufficient information to validate destroy-time provisioners
early on during the config loading process. Later on these can be
converted to hard errors, and only allow self, count.index, and each.key
in destroy provisioners. Limiting the provisioner and block evaluation
scope later on is tricky, but if the references can never be loaded,
then they will never be encountered during evaluation.
`terraform 0.12upgrade` assumes that the configuration has passed 0.11
init, but did not explicitly check that the configuration was valid.
Certain issues would not get caught because the configuration was
syntactically valid. In this case, int or float values out of range
resulted in a panic from `Value()`.
Since running a 0.11 validate command is a breaking change, this PR
merely moves the `Value()` logic for ints and floats into `configupgrade` so
the error can be returned to the user, instead of causing a panic.
Following on from de652e22a26b, this introduces deprecation warnings for
when an attribute value expression is a template with only a single
interpolation sequence, and for variable type constraints given in quotes.
As with the previous commit, we allowed these deprecated forms with no
warning for a few releases after v0.12.0 to ensure that folks who need to
write cross-compatible modules for a while during upgrading would be able
to do so, but we're now marking these as explicitly deprecated to guide
users towards the new idiomatic forms.
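As a rough illustration (the attribute and variable names here are just
placeholders), the deprecated and idiomatic forms look like this:

# deprecated: quoted type constraint and interpolation-only template
type  = "string"
value = "${var.example}"

# idiomatic 0.12 equivalents
type  = string
value = var.example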
The "terraform 0.12upgrade" tool would've already updated configurations
to not hit these warnings for those who had pre-existing configurations
written for Terraform 0.11.
The main target audience for these warnings are newcomers to Terraform who
are learning from existing examples already published in various spots on
the wider internet that may be showing older Terraform syntax, since those
folks will not be running their configurations through the upgrade tool.
These warnings will hopefully guide them towards modern Terraform usage
during their initial experimentation, and thus reduce the chances of
inadvertently adopting the less-readable legacy usage patterns in
greenfield projects.
Terraform 0.12.0 removed the need for putting references and keywords
in quotes, but we disabled the deprecation warnings for the initial
release in order to avoid creating noise for folks who were intentionally
attempting to maintain modules that were cross-compatible with both
Terraform 0.11 and Terraform 0.12.
However, with Terraform 0.12 now more widely used, the lack of these
warnings seems to be causing newcomers to copy the quoted versions from
existing examples on the internet, which is perpetuating the old and
confusing quoted form in newer configurations.
In preparation for phasing out these deprecated forms altogether in a
future major release, and for the shorter-term benefit of giving better
feedback to newcomers when they are learning from outdated examples, we'll
now re-enable those deprecation warnings, and be explicit that the old
forms are intended for removal in a future release.
In order to properly test this, we establish a new set of test
configurations that explicitly mark which warnings they are expecting and
verify that they do indeed produce those expected warnings. We also
verify that the "success" tests do _not_ produce warnings, while removing
the ones that were previously written to succeed but have their warnings
ignored.
During the 0.12 work we intended to move all of the variable value
collection logic into the UI layer (command package and backend packages)
and present them all together as a unified data structure to Terraform
Core. However, we didn't quite succeed because the interactive prompts
for unset required variables were still being handled _after_ calling
into Terraform Core.
Here we complete that earlier work by moving the interactive prompts for
variables out into the UI layer too, thus allowing us to handle final
validation of the variables all together in one place and do so in the UI
layer where we have the most context still available about where all of
these values are coming from.
This allows us to fix a problem where previously disabling input with
-input=false on the command line could cause Terraform Core to receive an
incomplete set of variable values, and fail with a bad error message.
As a consequence of this refactoring, the scope of terraform.Context.Input
is now reduced to only gathering provider configuration arguments. Ideally
that too would move into the UI layer somehow in a future commit, but
that's a problem for another day.
Previously we were using the experimental HCL 2 repository, but now we'll
shift over to the v2 import path within the main HCL repository as part of
actually releasing HCL 2.0 as stable.
This is a mechanical search/replace to the new import paths. It also
switches to the v2.0.0 release of HCL, which includes some new code that
Terraform didn't previously have but should not change any behavior that
matters for Terraform's purposes.
The experimental HCL2 repository is still an indirect dependency via
terraform-config-inspect, so it remains in our go.sum and
vendor directories for the moment. Because terraform-config-inspect uses
a much smaller subset of the HCL2 functionality, this does still manage
to prune the vendor directory a little. A subsequent release of
terraform-config-inspect should allow us to completely remove that old
repository in a future commit.
copyDir is used in configload/getter.go to copy previously downloaded modules instead of using the go-getter client every time. The go-getter client downloads dotfiles, but copyDir did not copy dotfiles, leading to inconsistent behaviour when reusing the same module source.
In order to allow lazy evaluation of resource indexes, we can't index
resources immediately via GetResourceInstance. Change the evaluation to
always return whole Resources via GetResource, and index individual
instances during expression evaluation.
This will allow us to always check for invalid index errors rather than
returning an unknown value and ignoring it during apply.
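For example (using a placeholder resource type), the whole resource is now
returned and indexed during expression evaluation, so an out-of-range index
can be reported as an error instead of an unknown value:

resource "test_instance" "foo" {
  count = 2
}

output "example" {
  # an out-of-range index such as test_instance.foo[3] can now be
  # caught here as an invalid index error
  value = test_instance.foo[0].id
}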
We can only validate MinItems >= 1 (equivalent to "Required") during
decoding, as dynamic blocks each only decode as a single block. MaxItems
cannot be validated at all, also because of dynamic blocks, which may
have any number of blocks in the config.
Due to both the nature of dynamic blocks, and the need for resources to
sometimes communicate incomplete values, we cannot validate MinItems and
MaxItems in CoerceValue.
A provider may not have the data to fill in required block values in all
cases during the resource Read operation. This is more common in import,
because there is no initial configuration or state, and it's possible
some values are only provided in the configuration.
The original intent of MinItems and MaxItems in the schema was to
enforce configuration constraints, not to enforce what the resource
could save in the state. Since the configuration is already statically
validated, and the Schema is validated against the configuration in a
separate step, we can drop these extra validation constraints in
CoerceValue and relax it to only ensure the types conform to what is
expected.
If a block was defined via "dynamic", there will be only one block value
until the expansion is known. Since we can't detect dynamic blocks at
this point, don't verify MinItems while there are unknown values in the
config.
The decoder spec can also only check for existence of a block, so limit
the check to 0 or 1.
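For example (with placeholder names), a dynamic block decodes as a single
block no matter how many blocks it will eventually expand into, which is
why MaxItems cannot be checked at decode time:

resource "test_instance" "foo" {
  dynamic "setting" {
    for_each = var.settings
    content {
      name  = setting.value.name
      value = setting.value.value
    }
  }
}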
This also fixes a few things with resource for_each:
- It makes validation more like validation for count.
- It makes sure the index is stored in the state properly.
In some cases (see #22020 for a specific example), the parsed hilNode
can be nil. This causes a series of panics. Instead, return an error and
move on.
When loading nested modules, the child module diagnostics were dropped
in the recursive function. This meant that the config from the submodules
wasn't fully loaded, even though no errors were reported to the user.
This caused further problems if the plan was stored in a plan file, which
meant only the partial configuration was stored for the subsequent apply
operation, resulting in unexplained "Resource node has no configuration
attached" errors later on.
Also due to the child module diagnostics being lost, any newly added
nested modules would be silently ignored until `init` was run again
manually.
Previously, adding a version constraint to a module that was previously
recorded without a version in the module manifest would cause a panic.
Instead, we now use a slight variant of the "dependencies have changed"
error that doesn't try to print out a specific version number.
Previously we were trying to access a field of the analysis object before
checking if analysis produced errors. The analysis function usually
returns a nil analysis on error, so this would result in a panic whenever
that happened.
Now we'll dereference the analysis object pointer only after checking for
errors, so we'll get a chance to report the analysis error to the user.
The expression upgrade functionality mostly ignores comments because in
the old language the syntax prevented comments from appearing in the
middle of expressions, but there was one exception: object expressions.
Because HCL 1 used ObjectType both for blocks and for object expressions,
that is the one situation where something we consider to be an expression
could have inline attached comments in the old language.
We migrate these here so we don't lose these comments that don't appear
anywhere else. Other comments get gathered up into a general comments
set maintained inside the analysis object and so will be printed out as
required _between_ expressions, just as they did before.
Previously we were treating "dynamic" blocks in configuration the same as
any other block type when merging config bodies, so that dynamic blocks
in the override would override any dynamic blocks present in the base,
without considering the dynamic block type.
It's more useful and intuitive for us to treat dynamic blocks as if they
are instances of their given block type for the purpose of overriding.
That means a foo block can be overridden by a dynamic "foo" block and
vice-versa, and dynamic blocks of different types do not interact at all
during overriding.
This requires us to recognize dynamic blocks and treat them specially
during decoding of a merged body. We leave them unexpanded here because
this package is not responsible for dynamic block expansion (that happens
in the sibling "lang" package) but we do decode them enough to recognize
their labels so we can treat them as if they were blocks of the labelled
type.
In studying existing providers we've found a pattern we weren't previously
accounting for: using a nested block type to represent a group of
arguments that relate to a particular feature that is always enabled but
where it improves configuration readability to group all of its settings
together in a nested block.
The existing NestingSingle was not a good fit for this because it is
designed under the assumption that the presence or absence of the block
has some significance in enabling or disabling the relevant feature, and
so for these always-active cases we'd generate a misleading plan where
the settings for the feature appear totally absent, rather than showing
the default values that will be selected.
NestingGroup is, therefore, a slight variation of NestingSingle where
presence vs. absence of the block is not distinguishable (it's never null)
and instead its contents are treated as unset when the block is absent.
This then in turn causes any default values associated with the nested
arguments to be honored and displayed in the plan whenever the block is
not explicitly configured.
The current SDK cannot activate this mode, but that's okay because its
"legacy type system" opt-out flag allows it to force a block to be
processed in this way anyway. We're adding this now so that we can
introduce the feature in a future SDK without causing a breaking change
to the protocol, since the set of possible block nesting modes is not
extensible.
These helpers determine the value that would be used for a particular
schema construct if the configuration construct it represents is not
present (or, in the case of *Block, empty) in the configuration.
This is different than cty.NullVal on the implied type because it might
return non-null "empty" values for certain constructs if their absence
would be reported as such during a decode with no required attributes or
blocks.
The v0.12 language supports numeric constants only in decimal notation, as
a simplification. For rare situations where a different base is more
appropriate, such as unix-style file modes, we've found it better for
providers to accept a string containing a representation in the
appropriate base, since that way the interpretation can be validated and
it will be displayed in the same way in the rendered plan diff, in
outputs, etc.
We use tv.Value() here to mimic how HCL 1 itself would have interpreted
these, and then format them back out in the canonical form, which
implicitly converts any non-decimal constants to decimal on the way
through.
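For example, the recommended pattern for a unix-style file mode is to pass
the octal representation as a string (the attribute name is illustrative),
so the provider can validate and interpret it:

mode = "0644"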
In order to preserve pre-v0.12 idiom for list-of-object attributes, we'll
prefer to use block syntax for them except for the special situation where
the user explicitly assigns an empty list, where attribute syntax is
required in order to allow existing provider logic to differentiate from
an implicit lack of blocks.
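For example, with a hypothetical list-of-object attribute:

# preferred block syntax when items are present
ebs_block_device {
  device_name = "/dev/sdb"
}

# attribute syntax retained for an explicitly empty list
ebs_block_device = []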
* configs/configupgrade: detect invalid resource names and print a TODO
message
In terraform 0.11 and prior it was possible to start a resource name
with a number. This is no longer valid, as the resource name would
be ambiguous with number values in HCL expressions.
Fixes #19919
* Update configs/configupgrade/test-fixtures/valid/invalid-resource-name/want/resource.tf
Co-Authored-By: mildwonkey <mildwonkey@users.noreply.github.com>
This includes two upstream fixes:
- Handle explicit JSON "null" consistently during decode of JSON syntax.
- Properly detect the end of a "heredoc" when formatting to avoid messing
up indentation of other lines following the heredoc.
* configs/configupgrade: detect possible relative module sources
If a module source appears to be a relative local path but does not have
a preceding ./, print a #TODO message for the user.
* internal/initwd: limit go-getter detectors to those supported by terraform
* internal/initwd: move isMaybeRelativeLocalPath check into getWithGoGetter
To avoid making two calls to getter.Detect, which potentially makes
non-trivial API calls, the "isMaybeRelativeLocalPath" check was moved to
a later step and a custom error type was added so user-friendly
diagnostics could be displayed in the event that a possible relative local
path was detected.
configs/configload and internal/initwd both had a copyDir function that
would fail if the source directory contained a symlinked directory,
because os.FileMode.IsDir() returns false for symlinks.
This PR adds a check for a symlink and copies that symlink in the
target directory. It handles symlinks for both files and directories
(with included tests).
Fixes #20539
Terraform 0.11 and prior had an odd special case where a resource
attribute access for "count" would be resolved as the count for the
whole resource, rather than as an attribute of an individual instance as
for all other attributes.
Because Terraform 0.12 makes test_instance.foo appear as a list when count
is set (so it can be used in other expressions), it's no longer possible
to have an attribute in that position: lists don't have attributes.
Fortunately we don't really need that special case anymore since it
doesn't do anything we can't now do with the length(...) function.
This upgrade rule, then, detects references like test_instance.foo.count
and rewrites to length(test_instance.foo). As a special case, if
test_instance.foo doesn't have "count" set then it just rewrites as the
constant 1, which mimics what would've happened in that case in Terraform
0.11.
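Concretely, with count set on test_instance.foo (the attribute name on the
left is just a placeholder):

# before
instances = "${test_instance.foo.count}"

# after
instances = length(test_instance.foo)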
Prior to Terraform v0.12 it was possible for a provider to secretly set
some default arguments for the "connection" block, which most commonly
included a hard-coded type of "ssh" and a value from "host".
In the interests of "explicit is better than implicit", Terraform 0.12 no
longer has this feature and instead requires connection settings to be
written explicitly in terms of the resource's exported attributes. For
compatibility though, the upgrade tool will insert expressions that are
as close as possible to the logic the provider formerly implemented, or
in a few rare cases a TF-UPGRADE-TODO comment to fix it up manually.
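For example, the upgraded configuration spells the former defaults out
explicitly; the exact host expression depends on the resource type's
exported attributes, so this is only illustrative:

connection {
  type = "ssh"
  host = self.public_ip
}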
Some of the existing resource type implementations have incredibly
complicated implementations of selecting a single host IP address to use
and don't expose the result of that as an attribute, so for now we handle
those via a complicated Terraform language expression achieving the same
result. Ideally these providers would introduce a new attribute that
exports the same address formerly exported as the hostname before their
initial v0.12-compatible release, in which case we can simplify these to
just reference the attribute in question. That would be preferable also
because it would allow use of that exported attribute in other contexts,
such as in a null_resource provisioner somewhere else or in an output
to allow a caller to deal with the SSH part itself.
This uses the fixed "superset" schema from the main terraform package to
apply our standard expression mapping, with the exception of "type" where
interpolation sequences are not supported due to the type being evaluated
early to retrieve the schema for decoding the rest.
Aside from the two special meta-arguments "connection" and "provisioner"
this is just our standard mapping from schema to conversion rules, using
the provisioner's configuration schema.
Due to a copy-paste error, this was using the message from the providers
map in a "module" block.
This new message is not particularly helpful, but we should only see it
for a configuration that wouldn't have been valid in 0.11 either, and so
it's unlikely to be displayed.
Although sets do not have indexed elements, in Terraform 0.11 and earlier
element(...) would work with sets because we'd automatically convert them
to lists on entry to HIL -- with an arbitrary-but-consistent ordering --
and thus return an arbitrary-but-consistent element from the list.
The element(...) function in Terraform 0.12 does not allow this because it
is not safe in general, but there was an existing pattern relying on this
in Terraform 0.11 configs which this upgrade rule is intended to preserve:
resource "example" "example" {
count = "${length(any_set_attribute)}"
foo = "${element(any_set_attribute, count.index}"
}
The above works because the exact indices assigned in the conversion are
irrelevant: we're just asking Terraform to create one resource for each
distinct element in the set.
This upgrade rule therefore inserts an explicit conversion to list if it
is able to successfully prove that the given expression will return a
set type:
foo = "${element(tolist(any_set_attribute), count.index}"
This makes the conversion explicit, allowing users to decide if it is
safe and rework the configuration if not. Since our static type analysis
functionality focuses mainly on resource type attributes, in practice this
rule will only apply when the given expression is a statically-checkable
resource reference. Since sets are an SDK-only concept in Terraform 0.11
and earlier, in practice that works out just right: it's not
possible for sets to appear anywhere else in older versions anyway.
The comma-separated syntax is now reserved only for object constructor
expressions in attribute values, so the upgrade tool rewrites block
arguments to be newline-separated instead.
This was already working but we didn't have an explicit test for it until
now.
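For example (with placeholder names), arguments inside a block body are
rewritten one per line:

# before
locals { a = 1, b = 2 }

# after
locals {
  a = 1
  b = 2
}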
Prior to Terraform 0.12, ignore_changes was implemented in a
flatmap-oriented fashion and so users found that they could (and in fact,
were often forced to) use the internal .% and .# suffixes flatmap uses to
ignore changes to the number of elements in a list or map.
Terraform 0.12 no longer uses that representation, so we'll interpret
ignoring changes to the length as ignoring changes to the entire
collection. While this is not a totally-equivalent change, in practice
this pattern was most often used in conjunction with specific keys from a
map in order to _effectively_ ignore the entire map, even though Terraform
didn't really support that.
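For example, with a placeholder map attribute named tags:

# before
lifecycle {
  ignore_changes = ["tags.%"]
}

# after: ignore changes to the entire collection
lifecycle {
  ignore_changes = [tags]
}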
HIL implemented its type conversions by rewriting its AST to include calls
to some undocumented builtin functions. Unfortunately those functions were
still explicitly callable if you could figure out the name for them, and
so they may have been used in the wild.
In particular, __builtin_StringToFloat was used as part of a workaround
for a HIL design flaw where it would prefer to convert strings to integers
rather than floats when performing arithmetic operations. This issue was,
indeed, the main reason for unifying int and float into a single number
type in HCL. Since we published that as a suggested workaround, the
upgrade tool ought to fix it up.
The other cases have never been documented as a workaround, so they are
less likely to appear in the wild, but we might as well fix them up anyway
since we already have the conversion functions required to get the same
result in the new language.
To be safe/conservative, most of these convert to _two_ function calls
rather than just one, which ensures that these new expressions retain the
behavior of implicitly converting to the source type before running the
conversion. The new conversion functions only specify target type, and so
cannot guarantee identical results if the argument type does not exactly
match what was previously given as the parameter type in HIL.
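As a rough sketch of the shape of that rewrite (the attribute name is a
placeholder, and the exact conversion functions chosen depend on which
builtin was used):

# before: documented 0.11 workaround
bandwidth = "${__builtin_StringToFloat(var.bandwidth)}"

# after: convert to the old source type first, then to the target type
bandwidth = tonumber(tostring(var.bandwidth))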
HEREDOC tokens are a little more fussy than normal string sequences
because we need to preserve the whitespace within them along with the
start and end markers while we upgrade any interpolated expressions inside.
We need to do some work locally here because the HCL heredoc processing
"does too much" and throws away information we need to do a faithful
upgrade.
We also need to contend with the fact that Terraform <=0.11 had an older
version of HCL that accidentally permitted a degenerate form of heredoc
where the marker was at the end of the final line, like this:
degenerate = <<EOT
this should never have workedEOT
When we migrate this, we'll introduce the additional newline that is now
required, which will unfortunately slightly change the result string to
include a newline when parsed by 0.12, and so we'll need to call this out
as a caveat in the upgrade guide.
Since these error messages get printed in Terraform's output and we
encourage users to share them as part of bug reports, we should avoid
including sensitive information in them to reduce the risk of accidental
exposure.
Previously, configupgrade would panic if it encountered a HEREDOC. For
the time being, we will simply print out the HEREDOC as-is.
Unfortunately, we discovered that terraform 0.11's version of HCL
allowed for HEREDOCs with the termination delimiter inline (instead of
on a newline, which is technically correct). Since the 0.12 configupgrade
tool needs to be bug-compatible with terraform 0.11, we must roll back to the
same version of HCL used in terraform 0.11.
Objects with DynamicPseudoType attributes can't be coerced within a map
if a concrete type is set. Change the Value type used to an Object when
there is a type mismatch.
In prior versions, we recommended using hash functions in conjunction with
the file function as an idiom for detecting changes to upstream blobs
without fetching and comparing the whole blob.
That approach relied on us being able to return raw binary data from
file(...). Since Terraform strings pass through intermediate
representations that are not binary-safe (e.g. the JSON state), there was
a risk of string corruption in prior versions which we have avoided for
0.12 by requiring that file(...) be used only with UTF-8 text files.
The specific case of returning a string and immediately passing it into
another function was not actually subject to that corruption risk, since
the HIL interpreter would just pass the string through verbatim, but this
is still now forbidden as a result of the stricter handling of file(...).
To avoid breaking these use-cases, here we introduce variants of the hash
functions a with "file" prefix that take a filename for a disk file to
hash rather than hashing the given string directly. The configuration
upgrade tool also now includes a rule to detect the documented idiom and
rewrite it into a single function call for one of these new functions.
This does cause a bit of function sprawl, but that seems preferable to
introducing more complex rules for when file(...) can and cannot read
binary files, making the behavior of these various functions easier to
understand in isolation.
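For example (the path here is a placeholder):

# before: previously-documented idiom
etag = "${md5(file("${path.module}/files/app.zip"))}"

# after: single call to the new file-hashing variant
etag = filemd5("${path.module}/files/app.zip")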
There are a few constructs from 0.11 and prior that cause 0.12 parsing to
fail altogether, which previously created a chicken/egg problem because
we need to install the providers in order to run "terraform 0.12upgrade"
and thus fix the problem.
This changes "terraform init" to use the new "early configuration" loader
for module and provider installation. This is built on the more permissive
parser in the terraform-config-inspect package, and so it allows us to
read out the top-level blocks from the configuration while accepting
legacy HCL syntax.
In the long run this will let us do version compatibility detection before
attempting a "real" config load, giving us better error messages for any
future syntax additions, but in the short term the key thing is that it
allows us to install the dependencies even if the configuration isn't
fully valid.
Because backend init still requires full configuration, this introduces a
new mode of terraform init where it detects heuristically if it seems like
we need to do a configuration upgrade and does a partial init if so,
before finally directing the user to run "terraform 0.12upgrade" before
running any other commands.
The heuristic here is based on two assumptions:
- If the "early" loader finds no errors but the normal loader does, the
configuration is likely to be valid for Terraform 0.11 but not 0.12.
- If there's already a version constraint in the configuration that
excludes Terraform versions prior to v0.12 then the configuration is
probably _already_ upgraded and so it's just a normal syntax error,
even if the early loader didn't detect it.
Once the upgrade process is removed in 0.13.0 (users will be required to
go stepwise 0.11 -> 0.12 -> 0.13 to upgrade after that), some of this can
be simplified to remove that special mode, but the idea of doing the
dependency version checks against the liberal parser will remain valuable
to increase our chances of reporting version-based incompatibilities
rather than syntax errors as we add new features in future.
The parent commit fixes an issue where this would previously have led to
a crash. These new test cases verify that parsing is now able to complete
without crashing, though the result is still invalid.
In early versions of Terraform where the interpolation language didn't
have any real list support, list brackets around a single string was the
signal to split the string on a special uuid separator to produce a list
just in time for processing, giving expressions like this:
foo = ["${test_instance.foo.*.id}"]
Logically this is weird because it looks like it should produce a list
of lists of strings. When we added real list support in Terraform 0.7 we
retained support for this behavior by trimming off extra levels of list
during evaluation, and inadvertently continued relying on this notation
for correct type checking.
During the Terraform 0.10 line we fixed the type checker bugs (a few
remaining issues notwithstanding) so that it was finally possible to
use the more intuitive form:
foo = "${test_instance.foo.*.id}"
...but we continued trimming off extra levels of list for backward
compatibility.
Terraform 0.12 finally removes that compatibility shim, causing redundant
list brackets to be interpreted as a list of lists.
This upgrade rule attempts to identify situations that are relying on the
old compatibility behavior and trim off the redundant extra brackets. It's
not possible to do this fully-generally using only static analysis, but
we can gather enough information through our partial type inference
mechanism here to deal with the most common situations automatically and
produce a TF-UPGRADE-TODO comment for more complex scenarios where the
user intent isn't decidable with only static analysis.
In particular, this handles by far the most common situation of wrapping
list brackets around a splat expression like the first example above.
After this and the other upgrade rules are applied, the first example
above will become:
foo = test_instance.foo.*.id
By collecting information about the input variables during analysis, we
can return approximate type information for any references to those
variables in expressions.
Since Terraform 0.11 allowed maps of maps and lists of lists in certain
circumstances even though this was documented as forbidden, we
conservatively return collection types whose element types are unknown
here, which allows us to do shallow inference on them but will cause
us to get an incomplete result if any operations are performed on
elements of the list or map value.
Although we can't do fully-precise type inference with access only to a
single module's configuration, we can do some approximate inference using
some clues within the module along with our resource type schemas.
This depends on HCL's ability to pass through type information even if the
input values are unknown, mapping our partial input type information into
partial output type information by evaluating the same expressions.
This will allow us to do some upgrades that require dynamic analysis to
fully decide, by giving us three outcomes: needed, not needed, or unknown.
If it's unknown then that'll be our prompt to emit a warning for the user
to make a decision.
This is a temporary implementation of these rules just so that these can
be passed through verbatim (rather than generating an error) while we
do testing of other features.
A subsequent commit will finish these with their own custom rulesets.
The main tricky thing here is ignore_changes, which contains strings that
are better given as naked traversals in 0.12. We also handle here mapping
the old special case ["*"] value to the new "all" keyword.
Both resource blocks and module blocks contain references to providers
that are expressed as short-form provider addresses ("aws.foo" rather than
"provider.aws.foo").
These rules call for those to be unwrapped as naked identifiers during
upgrade, rather than appearing as quoted strings. This also introduces
some further rules for other simpler meta-arguments that are required
for the test fixtures for this feature.
Some further rules are required here to deal with the meta-arguments we
accept inside these blocks, but this is good enough to pass through most
module blocks using the standard attribute-expression-based mapping.
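For example (the "west" alias is a placeholder):

# before
provider  = "aws.west"
providers = { "aws" = "aws.west" }

# after
provider  = aws.west
providers = { aws = aws.west }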
Previously we were handling this one as a special case, effectively
duplicating most of the logic from upgradeBlockBody.
By doing some prior analysis of the block we can produce a set of rules
that just passes through all of the attributes as-is, allowing us to reuse
upgradeBlockBody. This is a little weird for the locals block since
everything in it is user-selected names, but this facility will also be
useful in a future commit for dealing with module blocks, which contain
a mixture of user-chosen and reserved argument names.
We don't change JSON files at all and instead just emit a warning about
them since JSON files are usually generated rather than hand-written and
so any updates need to happen in the generator program rather than in its
output.
However, we do still need to copy them verbatim into the output map so
that we can keep track of them through any subsequent steps.
Prior to v0.12 Terraform was liberal about whether a reference to a
resource included an index matching the resource's count, and allowed
them to mismatch, but now it's important to get this right so that resources
and resource instances can be used directly as object values, and so
we'll fix up any sloppy existing references so things keep working as
expected.
This is particularly important for the pattern of using count to create
conditional resources, since previously the "true" case would create one
instance and Terraform would accept an unindexed reference to that.
The reference syntax is not significantly changed, but there are some
minor additional restrictions on identifiers in HCL2 and as a special case
we need to rewrite references to data.terraform_remote_state.
Along with those mandatory upgrades, we will also switch references to
using normal index syntax where it's safe to do so, as part of
de-emphasizing the old strange integer attribute syntax (like foo.0.bar).
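For example (resource and data source names are placeholders; note the
extra .outputs step now required for terraform_remote_state):

# before
foo = "${aws_instance.example.0.id}"
bar = "${data.terraform_remote_state.vpc.vpc_id}"

# after
foo = aws_instance.example[0].id
bar = data.terraform_remote_state.vpc.outputs.vpc_id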
Previously we were erroneously moving these out of their original block
into the surrounding body. Now we'll make sure to collect up any remaining
ad-hoc comments inside a nested block body before closing it.
Early on it looked like we wouldn't need to distinguish these since we
were only analyzing for provider types, but we're now leaning directly
on the resource addresses later on and so we need to make sure we produce
valid ones when data resources are present.
Users discovered that they could exploit some missing validation in
Terraform v0.11 and prior to treat block types as if they were attributes
and assign dynamic expressions to them, with some significant caveats and
gotchas resulting from the fact that this was never intended to work.
However, since such patterns are in use in the wild we'll convert them
to a dynamic block during upgrade. With only static analysis we must
unfortunately generate a very conservative, ugly dynamic block with
every possible argument set. Users ought to then clean up the generated
configuration after confirming which arguments are actually required.
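Sketching the shape of that conservative output with placeholder names, a
block type that was being assigned an expression becomes a dynamic block
whose content lists the block's arguments for the user to prune afterwards:

# before (0.11: assigning an expression to a block type)
ebs_block_device = "${var.extra_devices}"

# after (conservative generated dynamic block)
dynamic "ebs_block_device" {
  for_each = var.extra_devices
  content {
    device_name = ebs_block_device.value.device_name
    volume_size = ebs_block_device.value.volume_size
    # ...and so on for every argument the block type accepts
  }
}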
We're using break elsewhere in here so it was weird to have a small set
of situations that return instead, which could then cause confusion for
future maintenance if a reader doesn't notice that control doesn't always
leave the outer switch statement.
If lookup is being used with only two arguments then it is equivalent to
index syntax and more readable that way, so we'll replace it.
Ideally we'd do similarly for element(...) here but sadly we cannot
because we can't prove in static analysis that the user is not relying
on the modulo wraparound behavior of that function.
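For example (the variable names are placeholders):

# before
region = "${lookup(var.region_map, var.env)}"

# after
region = var.region_map[var.env]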
We now have native language features for declaring tuples and objects,
which are the idiomatic way to construct sequence and mapping values that
can then be converted to list, set, and map types as needed.
In the old world, lists and maps could be created either using functions
in HIL or list/object constructs in HCL. Here we ensure that in the HCL
case we'll apply any required expression transformations to the individual
items within HCL's compound constructs.
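For example (with placeholder names), items inside an HCL-level list
constructor receive the same expression upgrades as standalone attribute
values:

# before
cidr_blocks = ["${var.a}", "${var.b}"]

# after
cidr_blocks = [var.a, var.b]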
Previously we were using the line count difference between the start of
one item and the next to decide whether to insert a blank line between
two items, but that is incorrect for multi-line items.
Instead, we'll now count the difference from the final line of the
previous item to the first line of the next, as best we can with the
limited position info recorded by the HCL1 parser.
The old parser was forgiving in allowing the use of block syntax where a
map attribute was expected, but the new parser is not (in order to allow
for dynamic map keys, for expressions, etc) and so the upgrade tool must
fix these to use attribute syntax.
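For example, with a map-typed attribute such as tags (a placeholder here):

# before: block syntax accepted by the old parser
tags {
  Name = "example"
}

# after: attribute syntax required by the new parser
tags = {
  Name = "example"
}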
The main area of interest in upgrading is dealing with special cases for
individual block items, so this generalization allows us to use the same
overall body-processing logic for everything but to specialize just how
individual items are dealt with, which we match by their names as given
in the original input source code.
This involved some refactoring of how block bodies are migrated, which
still needs some additional work to deal with meta-arguments but is now
at least partially generalized to support both resource and provider
blocks.
This allows basic static validation of a traversal against a schema, to
verify that it represents a valid path through the structural parts of
the schema.
The main purpose of this is to produce better error messages (using our
knowledge of the schema) than we'd be able to achieve by just relying
on HCL expression evaluation errors. This is particularly important for
nested blocks because it may not be obvious whether one is represented
internally by a set or a list, and incorrect usage would otherwise produce
a confusing HCL-oriented error message.
We want the forthcoming v0.12.0 release to be the last significant
breaking change to our main configuration constructs for a long time, but
not everything could be implemented in that release.
As a compromise then, we reserve various names we have some intent of
using in a future release so that such future uses will not be a further
breaking change later.
Some of these names are associated with specific short-term plans, while
others are reserved conservatively for possible later work and may be
"un-reserved" in a later release if we don't end up using them. The ones
that we expect to use in the near future were already being handled, so
we'll continue to decode them at the config layer but also produce an
error so that we don't get weird behavior downstream where the
corresponding features don't work yet.
Since our new approach here works by installing with a synthetic module
configuration block, we need to treat relative paths as a special case
for two reasons:
- Relative paths in module addresses are relative to the file containing
the call rather than the working directory, but -from-module uses the
working directory (and the call is in a synthetic "file" anyway)
- We need to force Terraform to pass the path through to go-getter rather
than just treating it as a relative reference, since we really do want
a copy of the directory in this case, even if it is local.
To address both of these things, we'll detect a relative path and turn it
into an absolute path before beginning installation. This is a bit hacky,
but this is consistent with the general philosophy of the -from-module
implementation where it does hacky things so that the rest of the
installer code can be spared of dealing with its special cases.
This is covered by a couple of existing tests that run init -from-module,
including TestInit_fromModule_dstInSrc which now passes.
The tests in here are illustrating that this package is not yet finished,
but we plan to run a release before we finish this and so we'll skip those
tests for now with the intent of reinstating this again once we return
to finish this up.
The test provider comes with a lot of baggage since it's designed to be
used as a plugin, so instead we'll just use the mock provider
implementation directly, and so we can (in a later commit) configure it
appropriately for what our tests need here.
This is still not compileable because the test provider needs to be
updated to the new provider interface, but all the rest of the types are
now correct so we can update the test provider in a later commit to make
this work again.
Given a module foo and a module foo/bar, the previous code might
incorrectly treat "bar" as a file within "foo" rather than as a module
directory in its own right.
We need to make the collection itself be a tuple or object rather than
list or map in this case, since otherwise all of the elements of the
collection are constrained to be of the same type and that isn't the
intent of a provider indicating that it accepts any type.
This produces a "proposed new state", which already has prior computed
values propagated into it (since that behavior is standard for all
resource types) but could be customized further by the provider to make
the "_planned_ new state".
In the process of implementing this it became clear that our configschema
DecoderSpec behavior is incorrect, since it's producing list values for
NestingList and map values for NestingMap. While that seems like it should
be right, we should actually be using tuple and object types respectively
to allow each block to have a different runtime type in situations where
an attribute is given the type cty.DynamicPseudoType. That's not fixed
here, and so without a further fix list and map blocks will panic here.
The DecoderSpec implementation will be fixed in a subsequent commit.
Due to how often the state and plan types are referenced throughout
Terraform, there isn't a great way to switch them out gradually. As a
consequence, this huge commit gets us from the old world to a _compilable_
new world, but still has a large number of known test failures due to
key functionality being stubbed out.
The stubs here are for anything that interacts with providers, since we
now need to do the follow-up work to similarly replace the old
terraform.ResourceProvider interface with its replacement in the new
"providers" package. That work, along with work to fix the remaining
failing tests, will follow in subsequent commits.
The aim here was to replace all references to terraform.State and its
downstream types with states.State, terraform.Plan with plans.Plan,
state.State with statemgr.State, and switch to the new implementations of
the state and plan file formats. However, due to the number of times those
types are used, this also ended up affecting numerous other parts of core
such as terraform.Hook, the backend.Backend interface, and most of the CLI
commands.
Just as with 5861dbf3fc49b19587a31816eb06f511ab861bb4 before, I apologize
in advance to the person who inevitably just found this huge commit while
spelunking through the commit history.
Any value that is not set in the configuration should decode as a Null
value. Unknown should still be returned if a computed value expression
is unknown.
The "config" package is no longer used and will be removed as part
of the 0.12 release cleanup. Since configschema is part of the
"new world" of configuration modelling, it makes more sense for
it to live as a subdirectory of the newer "configs" package.
In order to properly migrate the contents of resource, data, provider and
provisioner blocks we will need the provider's schema in order to
understand what is expected, so we can resolve some ambiguities inherent
in the legacy HCL AST.
This includes an initial prototype of migrating the content of resource
blocks just to verify that the information is being gathered correctly.
As with the rest of the upgrade_native.go file, this will be reorganized
significantly once the basic end-to-end flow is established and we can
see how to organize this code better.
Since schemas are required to interpret provider, resource, and
provisioner attributes in configs, states, and plans, these helpers intend
to make it easier to gather up the necessary provider types in order
to preload all of the needed schemas before beginning further processing.
Config.ProviderTypes returns directly the list of provider types, since
at this level further detail is not useful: we've not yet run the
provider allocation algorithm, and so the only thing we can reliably
extract here is provider types themselves.
State.ProviderAddrs and Plan.ProviderAddrs each return a list of
absolute provider addresses, which can then be turned into a list of
provider types using the new helper providers.AddressedTypesAbs.
Since we're already using configs.Config throughout core, this also
updates the terraform.LoadSchemas helper to use Config.ProviderTypes
to find the necessary providers, rather than implementing its own
discovery logic. states.State is not yet plumbed in, so we cannot yet
use State.ProviderAddrs to deal with the state but there's a TODO comment
to remind us to update that in a later commit when we swap out
terraform.State for states.State.
A later commit will probably refactor this further so that we can easily
obtain schema for the providers needed to interpret a plan too, but that
is deferred here because further work is required to make core work with
the new plan types first. At that point, terraform.LoadSchemas may become
providers.LoadSchemas with a different interface that just accepts lists
of provider and provisioner names that have been gathered by the caller
using these new helpers.
This is the frontend to the work-in-progress codepath for upgrading the
source code for a module written for Terraform v0.11 or earlier to use
the new syntax and idiom of v0.12.
The underlying upgrade code is not yet complete as of this commit, and
so the command is not yet very useful. We will continue to iterate on
the upgrade code in subsequent commits.
This covers all of the expression node types in HIL's AST, and also
includes initial support for some of our top-level blocks so that we can
easily test that.
The initial implementations of the "variable" and "output" blocks are
pretty redundant and messy, so we can hopefully improve on these in a
later pass.
This function is the main functionality of this package. So far it just
deals with detecting and renaming JSON files that are mislabeled as
native syntax files. Other functionality will follow in later commits.
This package will do all of its work in-memory so that it can avoid making
partial updates and then failing, so we need to be able to load the
sources files from a particular directory into memory.
The upgrade process isn't idempotent, so we also attempt to detect
heuristically whether an upgrade has already been performed (can parse
with the new parser and has a version constraint that prevents versions
earlier than 0.12) so that the CLI tool that will eventually wrap this
will be able to produce a warning and prompt for confirmation in that
case.
Although the new HCL implementation and configuration loader is broadly
compatible with the prior implementation, it has a number of new idiomatic
forms and it also cannot parse some more extreme non-idiomatic usages
that were possible under the old parser.
To help users migrate to the new implementation, this package will rewrite
configuration to comply with the new idiom and fix as many cases as
possible where the legacy parser was too liberal or exposed implementation
details.
It is common for the same module source package to be referenced multiple
times in the same configuration, either because there are literally
multiple instances of the same module source or because a single package
(or repository) contains multiple modules in sub-directories and many
of them are referenced.
To optimize this, here we introduce a simple caching behavior where the
module installer will detect if it's asked to install multiple times from
the same source and produce the second and subsequent directories by
copying the first, rather than by downloading again over the network.
This optimization is applied once all of the go-getter detection has
completed and sub-directory portions have been trimmed, so it is also
able to normalize differently-specified source addresses that all
ultimately detect to the same resolved address. When installing, we
always extract the entire specified package (or repository) and then
reference the specified sub-directory, so we can safely re-use existing
directories when the base package is the same, even if the sub-directory
is different.
However, as a result we do not yet address the fact that the same package
will be stored multiple times _on disk_, which may still be problematic
when referencing large repositories multiple times in
disk-storage-constrained environments. We could address this in a
subsequent change by investigating the use of symlinks where possible.
Since the Registry installer is implemented just as an extra resolution
step in front of go-getter, this optimization applies to registry
modules too. This does not apply to local relative references, which will
continue to just resolve into the already-prepared directory of their
parent module.
The cache of previously installed paths lives only for the duration of
one call to InstallModules, so we will never re-use directories that
were created by previous runs of "terraform init" and there is no risk
that older versions will pollute the cache when attempting an upgrade
from a source address that doesn't explicitly specify a version.
No additional tests are added here because the existing module installer
tests (when TF_ACC=1) already cover the case of installing multiple
modules from the same source.