* configs: added map of ProviderLocalNames to configs.Module
We will need to look up any user-supplied local names for a given FQN.
This PR adds a map of ProviderLocalNames to the Module, along with
adding tests for this and for decodeRequiredProvidersBlock.
This also introduces the appearance of support for a required_providers
"source" attribute, but ignores any user-supplied source and instead
continues to construct provider addresses with addrs.NewLegacyProvider.
This is a stepping-stone PR for the provider source project. In this PR
"legacy-style" FQNs are created from the provider name string. Future
work involves encoding the FQN directly in the AbsProviderConfig and
removing the calls to addrs.NewLegacyProvider().
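As a rough illustration of the lookup this map enables (the field and helper
names below are a sketch, not the final API):

package configs

// Provider stands in for addrs.Provider in this sketch; only the shape of
// the lookup matters here.
type Provider struct {
	Type      string
	Namespace string
	Hostname  string
}

// Module carries the new map: provider FQN -> the local name this module
// uses for that provider.
type Module struct {
	ProviderLocalNames map[Provider]string
}

// LocalNameForProvider is an illustrative helper: return the user-supplied
// local name if one was declared, otherwise fall back to the legacy-style
// assumption that the local name is simply the provider type.
func (m *Module) LocalNameForProvider(p Provider) string {
	if name, ok := m.ProviderLocalNames[p]; ok {
		return name
	}
	return p.Type
}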
* Introduce "Local" terminology for non-absolute provider config addresses
In a future change AbsProviderConfig and LocalProviderConfig are going to
become two entirely distinct types, rather than Abs embedding Local as
written here. This naming change is in preparation for that subsequent
work, which will also include introducing a new "ProviderConfig" type
that is an interface that AbsProviderConfig and LocalProviderConfig both
implement.
This is intended to be largely just a naming change to get started, so
that we can deal with all of the messy renaming up front. However, it did
also require a slight change in modeling: the Resource.DefaultProviderConfig
method has become Resource.DefaultProvider, returning a Provider address
directly, because this method doesn't have enough information to construct
a true and accurate LocalProviderConfig -- it would need to consult the
configuration to learn the local name this module uses for the provider it
has selected.
In order to leave a trail to follow for subsequent work, all of the
changes here are intended to ensure that remaining work will become
obvious via compile-time errors when all of the following changes happen:
- The concept of "legacy" provider addresses is removed from the addrs
package, including removing addrs.NewLegacyProvider and
addrs.Provider.LegacyString.
- addrs.AbsProviderConfig stops having addrs.LocalProviderConfig embedded
in it and has an addrs.Provider and a string alias directly instead.
- The provider-schema-handling parts of Terraform core are updated to
work with addrs.Provider to identify providers, rather than legacy
strings.
In particular, there are still several codepaths here making legacy
provider address assumptions (in order to limit the scope of this change)
but I've made sure each one is doing something that relies on at least
one of the above changes not having been made yet.
* addrs: ProviderConfig interface
In a (very) few special situations in the main "terraform" package we need
to make runtime decisions about whether a provider config is absolute
or local.
We currently do that by exploiting the fact that AbsProviderConfig has
LocalProviderConfig nested inside of it and so in the local case we can
just ignore the wrapping AbsProviderConfig and use the embedded value.
In a future change we'll be moving away from that embedding and making
these two types distinct in order to represent that mapping between them
requires consulting a lookup table in the configuration, and so here we
introduce a new interface type ProviderConfig that can represent either
AbsProviderConfig or LocalProviderConfig decided dynamically at runtime.
This also includes the Config.ResolveAbsProviderAddr method that will
eventually be responsible for that local-to-absolute translation, so
that callers with access to the configuration can normalize to an
addrs.AbsProviderConfig given a non-nil addrs.ProviderConfig. That's
currently unused because existing callers are still relying on the
simplistic structural transform, but we'll switch them over in a later
commit.
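For illustration, the interface and the kind of runtime decision it enables
might look roughly like this (the marker method and the placeholder fields
are assumptions, not the exact code):

package addrs

// ProviderConfig is the new interface: a marker implemented by both address
// kinds so callers can hold either and decide the concrete kind at runtime.
type ProviderConfig interface {
	providerConfig()
}

// LocalProviderConfig is a provider configuration address relative to one
// module, using that module's local name for the provider.
type LocalProviderConfig struct {
	LocalName string
	Alias     string
}

func (c LocalProviderConfig) providerConfig() {}

// AbsProviderConfig is the absolute form. Today it still embeds the local
// address (and so satisfies the interface via the promoted method); the
// planned follow-up replaces the embedding with a Provider FQN and an alias.
type AbsProviderConfig struct {
	Module []string // stand-in for a module instance path type
	LocalProviderConfig
}

// isAbsolute shows the kind of runtime decision described above.
func isAbsolute(pc ProviderConfig) bool {
	switch pc.(type) {
	case AbsProviderConfig:
		return true
	default:
		return false
	}
}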
* rename LocalType to LocalName
Co-authored-by: Kristin Laemmert <mildwonkey@users.noreply.github.com>
Renamed file.ProviderRequirements to file.RequiredProviders to match the
name of the block in the configuration. file.RequiredProviders contains
the contents of the file(s); module.ProviderRequirements contains the
parsed and merged provider requirements.
Extended decodeRequiredProvidersBlock to parse the new provider source
syntax (version only; any other attributes are ignored).
Added some tests; replaced deep.Equal with cmp.Equal in
terraform/module_dependencies_test.go because deep was not catching
incorrect constraints.
The existing "type" argument allows specifying a type constraint that
allows for some basic validation, but often there are more constraints on
a variable value than just its type.
This new feature (requiring an experiment opt-in for now, while we refine
it) allows specifying arbitrary validation rules for any variable which
can then cause custom error messages to be returned when a caller provides
an inappropriate value.
variable "example" {
  validation {
    condition     = var.example != "nope"
    error_message = "Example value must not be \"nope\"."
  }
}
The core parts of this are designed to do as little new work as possible
when no validations are specified, and thus the main new checking codepath
here can only run when the experiment is enabled in order to permit having
validations.
* deps: bump terraform-config-inspect library
* configs: parse `version` in new required_providers block
With the latest version of `terraform-config-inspect`, the
required_providers attribute can now be a string or an object with
attributes "source" and "version". This change allows parsing the
version constraint from the new object while ignoring any given source attribute.
In an earlier change we switched to defining our own sets of detectors,
getters, etc for go-getter in order to insulate us from upstream changes
to those sets that might otherwise change the user-visible behavior of
Terraform's module installer.
However, we apparently neglected to actually refer to our local set of
detectors, and continued to refer to the upstream set. Here we catch up
with the latest detectors from upstream (taken from the version of
go-getter we currently have vendored) and start using that fixed set.
Currently we are maintaining these custom go-getter sets in two places
due to the configload vs. initwd distinction. That was already true for
goGetterGetters and goGetterDecompressors, and so I've preserved that for
now just to keep this change relatively simple; in a later change it would
be nice to factor these "get with go-getter" functions out into a shared
location which we can call from both configload and initwd.
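Concretely, the point of the fix is to build and pass our own detector set
instead of go-getter's package-level default; a sketch (the exact membership
is whatever the vendored go-getter version provides):

package initwd

import getter "github.com/hashicorp/go-getter"

// goGetterDetectors is our own fixed set, so upstream changes to go-getter's
// defaults can't silently change how module source addresses are detected.
// (The entries listed here are illustrative.)
var goGetterDetectors = []getter.Detector{
	new(getter.GitHubDetector),
	new(getter.BitBucketDetector),
	new(getter.S3Detector),
	new(getter.FileDetector),
}

// detectSource shows the important half of the fix: pass the local set to
// getter.Detect rather than relying on getter.Detectors, the upstream default.
func detectSource(src, pwd string) (string, error) {
	return getter.Detect(src, pwd, goGetterDetectors)
}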
* configs: move ProviderConfigCompact[Str] from addrs to configs
The configs package is aware of provider name and type (which are the
same thing today, but expected to be two different things in a future
release), and should be the source of truth for a provider config
address. This is an intermediate step; the next step will change the returned types to something based in the configs package.
* command: rename choosePlugins to chooseProviders to clarify scope of function
* use `Provider.LegacyString()` (instead of `Provider.Type`) consistently
* explicitly create legacy-style provider (continuing from above change)
Traditionally we've preferred to release new language features in major
releases only, because we can then use the beta cycle to gather feedback
on the feature and learn about any usability challenges or other
situations we didn't consider during our design in time to make those
changes before inclusion in a stable release.
This "experiments" feature is intended to decouple the feedback cycle for
new features from the major release rhythm, and thus allow us to release
new features in minor releases by first releasing them as experimental for
a minor release or two, adjusting for any feedback gathered during that
period, and then finally removing the experiment gate and enabling the
feature for everyone.
The intended model here is that anything behind an experiment gate is
subject to breaking changes even in patch releases, and so any module
using these experimental features is likely to be broken by a future
Terraform upgrade.
The behavior implemented here is:
- Recognize a new "experiments" setting in the "terraform" block which
allows module authors to explicitly opt in to experimental features.
  terraform {
    experiments = [resource_for_each]
  }
- Generate a warning whenever loading a module that has experiments
enabled, to avoid accidentally depending on experimental features and
thus risking unexpected breakage on the next Terraform upgrade.
- We check the enabled experiments against the configuration at module
load time, which means that experiments are scoped to a particular
module. Enabling an experiment in one module does not automatically
enable it in any other module.
This experiments mechanism is itself an experiment, and so I'd like to
use the resource for_each feature to trial it. Because any configuration
using experiments is subject to breaking changes, we are free to adjust
this experiments feature in future releases as we see fit, but once
for_each is shipped without an experiment gate we'll be blocked from
making significant changes to it until the next major release at least.
* huge change to weave new addrs.Provider into addrs.ProviderConfig
* terraform: do not include an empty string in the returned Providers /
Provisioners
- Fixed a minor bug where results included an extra empty string
* terraform/context: use new addrs.Provider as map key in provider factories
* added NewLegacyProviderType and LegacyString funcs to make it explicit that these are temporary placeholders
This PR introduces a new concept, provider fully-qualified name (FQN), encapsulated by the `addrs.Provider` struct.
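A rough sketch of the new type and its explicitly temporary legacy helpers
(details such as the default host and the sentinel namespace are
illustrative, not the exact code):

package addrs

// Provider encapsulates a provider fully-qualified name (FQN):
// hostname / namespace / type.
type Provider struct {
	Type      string
	Namespace string
	Hostname  string // the real field uses a dedicated hostname type
}

// NewLegacyProvider is an explicitly temporary placeholder: it manufactures
// an FQN from just a provider name, filling the rest with stand-in values
// until real source addresses are wired through.
func NewLegacyProvider(name string) Provider {
	return Provider{
		Type:      name,
		Namespace: "-",
		Hostname:  "registry.terraform.io",
	}
}

// LegacyString collapses the FQN back to the bare name that older codepaths
// still expect.
func (p Provider) LegacyString() string {
	return p.Type
}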
Add deprecation warning for references from destroy provisioners or
their connections to external resources or values. In order to ensure
resource destruction can be completed correctly, destroy nodes must be
able to evaluate with only their instance state.
We have sufficient information to validate destroy-time provisioners
early on during the config loading process. Later on these can be
converted to hard errors, and only allow self, count.index, and each.key
in destroy provisioners. Limiting the provisioner and block evaluation
scope later on is tricky, but if the references can never be loaded,
then they will never be encountered during evaluation.
`terraform 0.12upgrade` assumes that the configuration has passed 0.11
init, but does not explicitly check that the configuration is valid.
Certain issues would not get caught because the configuration was
syntactically valid. In this case, int or float values out of range
resulted in a panic from `Value()`.
Since requiring the configuration to pass a 0.11 validate run first would
be a breaking change, this PR merely moves the `Value()` logic for ints and
floats into `configupgrade` so the error can be returned to the user,
instead of causing a panic.
Following on from de652e22a26b, this introduces deprecation warnings for
when an attribute value expression is a template with only a single
interpolation sequence, and for variable type constraints given in quotes.
As with the previous commit, we allowed these deprecated forms with no
warning for a few releases after v0.12.0 to ensure that folks who need to
write cross-compatible modules for a while during upgrading would be able
to do so, but we're now marking these as explicitly deprecated to guide
users towards the new idiomatic forms.
The "terraform 0.12upgrade" tool would've already updated configurations
to not hit these warnings for those who had pre-existing configurations
written for Terraform 0.11.
The main target audience for these warnings are newcomers to Terraform who
are learning from existing examples already published in various spots on
the wider internet that may be showing older Terraform syntax, since those
folks will not be running their configurations through the upgrade tool.
These warnings will hopefully guide them towards modern Terraform usage
during their initial experimentation, and thus reduce the chances of
inadvertently adopting the less-readable legacy usage patterns in
greenfield projects.
Terraform 0.12.0 removed the need for putting references and keywords
in quotes, but we disabled the deprecation warnings for the initial
release in order to avoid creating noise for folks who were intentionally
attempting to maintain modules that were cross-compatible with both
Terraform 0.11 and Terraform 0.12.
However, with Terraform 0.12 now more widely used, the lack of these
warnings seems to be causing newcomers to copy the quoted versions from
existing examples on the internet, which is perpetuating the old and
confusing quoted form in newer configurations.
In preparation for phasing out these deprecated forms altogether in a
future major release, and for the shorter-term benefit of giving better
feedback to newcomers when they are learning from outdated examples, we'll
now re-enable those deprecation warnings, and be explicit that the old
forms are intended for removal in a future release.
In order to properly test this, we establish a new set of test
configurations that explicitly mark which warnings they are expecting and
verify that they do indeed produce those expected warnings. We also
verify that the "success" tests do _not_ produce warnings, while removing
the ones that were previously written to succeed but had their warnings
ignored.
During the 0.12 work we intended to move all of the variable value
collection logic into the UI layer (command package and backend packages)
and present them all together as a unified data structure to Terraform
Core. However, we didn't quite succeed because the interactive prompts
for unset required variables were still being handled _after_ calling
into Terraform Core.
Here we complete that earlier work by moving the interactive prompts for
variables out into the UI layer too, thus allowing us to handle final
validation of the variables all together in one place and do so in the UI
layer where we have the most context still available about where all of
these values are coming from.
This allows us to fix a problem where previously disabling input with
-input=false on the command line could cause Terraform Core to receive an
incomplete set of variable values, and fail with a bad error message.
As a consequence of this refactoring, the scope of terraform.Context.Input
is now reduced to only gathering provider configuration arguments. Ideally
that too would move into the UI layer somehow in a future commit, but
that's a problem for another day.
Previously we were using the experimental HCL 2 repository, but now we'll
shift over to the v2 import path within the main HCL repository as part of
actually releasing HCL 2.0 as stable.
This is a mechanical search/replace to the new import paths. It also
switches to the v2.0.0 release of HCL, which includes some new code that
Terraform didn't previously have but should not change any behavior that
matters for Terraform's purposes.
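In each affected file the rewrite looks roughly like this (a trivial
stand-alone example, not a real Terraform source file):

package example

// Before (experimental repository):
//   "github.com/hashicorp/hcl2/hcl"
//   "github.com/hashicorp/hcl2/hcl/hclsyntax"
// After (HCL 2.0 from the main repository):
import (
	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclsyntax"
)

// parse uses both packages so this stands alone as a compilable example.
func parse(src []byte, filename string) (*hcl.File, hcl.Diagnostics) {
	return hclsyntax.ParseConfig(src, filename, hcl.InitialPos)
}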
For the moment the experimental HCL2 repository is still an indirect
dependency via terraform-config-inspect, so it remains in our go.sum and
vendor directories. Because terraform-config-inspect uses
a much smaller subset of the HCL2 functionality, this does still manage
to prune the vendor directory a little. A subsequent release of
terraform-config-inspect should allow us to completely remove that old
repository in a future commit.
copyDir is used in configload/getter.go to copy previously downloaded
modules instead of using the go-getter client every time. The go-getter
client downloads dotfiles, but copyDir did not copy dotfiles, leading to
inconsistent behaviour when reusing the same module source.
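A minimal sketch of the corrected copy; the real copyDir handles more cases,
but the essential change is that dotfiles are no longer skipped:

package configload

import (
	"io"
	"os"
	"path/filepath"
)

// copyDir sketch: mirror src into dst, including dotfiles, so that a cached
// copy matches what the go-getter client would have downloaded.
func copyDir(dst, src string) error {
	if err := os.MkdirAll(dst, 0755); err != nil {
		return err
	}
	return filepath.Walk(src, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		rel, err := filepath.Rel(src, path)
		if err != nil || rel == "." {
			return err
		}
		// Previously entries whose names began with "." were skipped here,
		// which made reused module sources differ from fresh downloads.
		target := filepath.Join(dst, rel)
		if info.IsDir() {
			return os.MkdirAll(target, info.Mode())
		}
		in, err := os.Open(path)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(target)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	})
}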
In order to allow lazy evaluation of resource indexes, we can't index
resources immediately via GetResourceInstance. Change the evaluation to
always return whole Resources via GetResource, and index individual
instances during expression evaluation.
This will allow us to always check for invalid index errors rather than
returning an unknown value and ignoring it during apply.
We can only validate MinItems >= 1 (equivalent to "Required") during
decoding, as dynamic blocks each only decode as a single block. MaxItems
cannot be validated at all, also because of dynamic blocks, which may
have any number of blocks in the config.
Due to both the nature of dynamic blocks, and the need for resources to
sometimes communicate incomplete values, we cannot validate MinItems and
MaxItems in CoerceValue.
A provider may not have the data to fill in required block values in all
cases during the resource Read operation. This is more common in import,
because there is no initial configuration or state, and it's possible
some values are only provided in the configuration.
The original intent of MinItems and MaxItems in the schema was to
enforce configuration constraints, not to enforce what the resource
could save in the state. Since the configuration is already statically
validated, and the Schema is validated against the configuration in a
separate step, we can drop these extra validation constraints in
CoerceValue and relax it to only ensure the types conform to what is
expected.
If a block was defined via "dynamic", there will be only one block value
until the expansion is known. Since we can't detect dynamic blocks at
this point, don't verify MinItems while there are unknown values in the
config.
The decoder spec can also only check for existence of a block, so limit
the check to 0 or 1.
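The resulting rule can be distilled into a small illustrative check (not the
real schema/decoder code):

package configschema

import "errors"

// checkBlockCount is an illustrative distillation of the rule above: during
// decoding we can only insist on "at least one block", and only when there
// are no unknown values (i.e. no possible dynamic blocks); MaxItems is not
// enforced at all, and CoerceValue now checks types only.
func checkBlockCount(minItems, gotBlocks int, anyUnknown bool) error {
	if anyUnknown {
		// A dynamic block may expand to any number of blocks later on, so
		// counting now would be meaningless.
		return nil
	}
	if minItems >= 1 && gotBlocks == 0 {
		return errors.New("insufficient blocks: at least one block is required")
	}
	return nil
}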
This also fixes a few things with resource for_each:
- It makes validation more like validation for count.
- It makes sure the index is stored in the state properly.
In some cases (see #22020 for a specific example), the parsed hilNode
can be nil. This causes a series of panics. Instead, return an error and
move on.
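A sketch of the guard, assuming the hil.Parse call used by the upgrade tool;
the wrapper function here is illustrative:

package configupgrade

import (
	"fmt"

	"github.com/hashicorp/hil"
	"github.com/hashicorp/hil/ast"
)

// parseHILExpr is illustrative only: it shows the nil check that turns the
// panic described above into an ordinary error.
func parseHILExpr(expr string) (ast.Node, error) {
	node, err := hil.Parse(expr)
	if err != nil {
		return nil, err
	}
	if node == nil {
		// Some inputs (see #22020) parse to a nil node without an error;
		// report that rather than letting later code dereference it.
		return nil, fmt.Errorf("failed to parse expression %q", expr)
	}
	return node, nil
}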
When loading nested modules, the child module diagnostics were dropped
in the recursive function. This meant that the config from the submodules
wasn't fully loaded, even though no errors were reported to the user.
This caused further problems if the plan was stored in a plan file, since
only the partial configuration was stored for the subsequent apply
operation, resulting in unexplained "Resource node has no configuration
attached" errors later on.
Also due to the child module diagnostics being lost, any newly added
nested modules would be silently ignored until `init` was run again
manually.
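The essence of the fix, sketched with stand-in types rather than the real
config loader:

package configs

// Diagnostics and Module are stand-ins; the fix is the append in the loop.
type Diagnostics []string

func (d Diagnostics) HasErrors() bool { return len(d) > 0 }

type Module struct{ Name string }

// buildChildren illustrates the recursion: every child's diagnostics are
// merged into the parent's instead of being discarded, so nested-module
// errors and warnings are reported and a partial config is never mistaken
// for a complete one.
func buildChildren(names []string, load func(string) (*Module, Diagnostics)) (map[string]*Module, Diagnostics) {
	var diags Diagnostics
	children := make(map[string]*Module, len(names))
	for _, name := range names {
		child, childDiags := load(name)
		diags = append(diags, childDiags...) // previously dropped
		if childDiags.HasErrors() {
			continue
		}
		children[name] = child
	}
	return children, diags
}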