A common pattern is to conditionally assign to "count"
in a resource in order to decide dynamically whether it
should be created. In that situation it's necessary to
refer to attributes of the resource using the splat
syntax, but historically we didn't show errors in output
expressions and so people "got away with" incorrect usage
in that context.
The intent of this warning is to catch
potentially-problematic usage of attributes on such
resources even when the count currently evaluates to 1,
in which case no error would otherwise be generated. The
user can then quickly locate and fix the incorrect usage
regardless of the current value.
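As a hedged illustration (the resource type and variable names here are hypothetical, not taken from the original change), the pattern looks roughly like this:
```
resource "aws_instance" "example" {
  # The instance is created only when var.create_instance is true.
  count         = "${var.create_instance ? 1 : 0}"
  ami           = "${var.ami_id}"
  instance_type = "t2.micro"
}

output "instance_id" {
  # Because count is set, the attribute must be referenced with splat
  # syntax; referring to aws_instance.example.id would now be flagged
  # even while count evaluates to 1.
  value = "${aws_instance.example.*.id}"
}
```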
Validation is the best time to return detailed diagnostics
to the user, since we're much more likely to have source
location information and similar context than we are in
later operations.
This change doesn't actually add any detail to the messages
yet, but it changes the interface so that we can gradually
introduce more detailed diagnostics over time.
While here there are some minor adjustments to some of the
messages to improve their consistency with terminology we
use elsewhere.
Use the ResourceState.Provider field to store the full name of the
provider used during apply. This field is only used when a resource is
removed from the config, and will allow that resource to be removed by
the exact same provider with which it was created.
Modify the locations that might accept the value of the
ResourceState.Provider field to detect that the name is resolved.
If a provider configuration is inherited from another module, any
interpolations in that config won't have variables declared locally. Only
validate the config in its original location.
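A hedged sketch of the situation (the module layout and names are illustrative):
```
# root module
variable "region" {
  default = "us-west-2"
}

provider "aws" {
  # Valid where it is written, because var.region is declared here.
  region = "${var.region}"
}

module "child" {
  source = "./child"
}

# The child module inherits the aws provider configuration above but has
# no "region" variable of its own, so the interpolation is validated only
# in the root module where it was originally written.
```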
Add the Version and Providers fields to the module config.
Add ProviderConfig.Scope, which will be used to record the original
path of a ProviderConfig for interpolation.
This early validation interpolates a placeholder value to make a
"best effort" check of the validity of the count attribute.
Since HCL2-specified resources can't be interpolated using the main
interpolator, here we branch and use the HCL2 API to do a
largely-equivalent (though slightly less accurate) check.
In the long run we don't really need this extra check at all, since the
validation walk does a more accurate version of the same thing. However,
we're preserving this for now in the interests of minimizing the amount
of change for the main codepath during our experiment.
There is some additional early validation on the "count" meta-argument
that verifies that only suitable variable types are used; adding local
values to this whitelist was missed in the initial implementation.
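With that fixed, a local value can drive count; a minimal sketch (names are illustrative):
```
locals {
  instance_count = 2
}

resource "aws_instance" "example" {
  # Local values are now accepted by the early "count" validation.
  count         = "${local.instance_count}"
  ami           = "${var.ami_id}"
  instance_type = "t2.micro"
}
```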
Previously we were using the "semver" library to parse version
constraints, but we switched over to go-version and encapsulated it
inside our own plugin/discovery package to reduce dependency sprawl in
the code.
This particular situation was missed when updating references to the new
path, which meant that our validation code disagreed with the rest of
the code about what is considered a valid version constraint string.
By using the correct function, we ensure that invalid version
constraints are caught early.
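For example (a hedged illustration, not taken from the original change), a constraint string like the one below is now parsed with the same go-version-based helper used elsewhere:
```
terraform {
  # Parsed as a version constraint; malformed constraint strings are
  # rejected consistently with the rest of Terraform.
  required_version = ">= 0.9.0, < 0.12.0"
}
```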
Previously the logic for inferring a provider type from a resource name
was buried in a utility function in the 'terraform' package. Here we
lift it up into the 'config' package, where we can make broader use of it
and where it's easier to discover.
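The inference itself is simple: the provider type is the prefix of the resource type name up to the first underscore. For example:
```
resource "aws_instance" "web" {
  # ...
}
# provider type inferred from "aws_instance": "aws"

resource "google_compute_instance" "web" {
  # ...
}
# provider type inferred from "google_compute_instance": "google"
```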
In the future we will support version constraints on providers, so we're
reserving this attribute name, which is not currently used by any builtin
providers.
For now using this will produce an error, since the rest of Terraform
(outside of the config parser) doesn't currently have this notion and we
don't want people to start trying to use it until its behavior is fully
defined and implemented.
It may be used by third-party providers, so this is a breaking change
worth warning about in the CHANGELOG, but one whose impact should be
small. Any third-party providers using this name should migrate to a
different attribute name going forward.
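A hedged sketch of what the reserved attribute looks like in configuration (the provider name is illustrative):
```
provider "aws" {
  region = "us-east-1"

  # Reserved for future use: specifying "version" here currently
  # produces an error during config validation.
  version = "~> 1.0"
}
```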
Prior to Terraform 0.7, lists in Terraform were just a shallow abstraction
on top of strings with a magic delimiter between items. Wrapping a single
string in brackets in the configuration was Terraform's prompt that it
needed to split the string on that delimiter during interpolation.
In 0.7, when first-class lists were added, this convention was preserved
by flattening lists-of-lists by one level when they were encountered in
configuration. However, that change contained an oversight: it did not
correctly handle the case where the inner list was unknown.
In #14135 we removed some code that was flattening partially-unknown lists
into fully-unknown (untyped) values. This inadvertently exposed the missed
case from the previous paragraph, causing issues for list-wrapped splat
expressions with unknown members. While this worked fine for resources,
due to some fixup done inside helper/schema, this did not work for other
interpolation contexts such as module blocks.
Various attempts to fix this up and restore the flattening behavior
selectively were unsuccessful, due to a proliferation of assumptions all
over the core code that would be too risky to change just to fix this bug.
This change, then, takes the different approach of removing the
requirement that splats be presented inside list brackets. This
requirement didn't make much sense anymore anyway, since no other
list-returning expression had this constraint and so the rest of Terraform
was already successfully dealing with both cases.
This leaves us with two different scenarios:
- For resource arguments, existing normalization code in helper/schema
does its own flattening that preserves compatibility with the common
practice of using bracketed splats. This change verifies this with a test
in the "test" provider that exercises the whole Terraform core and
helper/schema stack, assigning bracketed splats to list and set
attributes.
- For arguments in other blocks, such as in module callsites, the
interpolator's own flattening behavior applies to known lists,
preserving compatibility with configurations from before
partially-computed splats were possible, but those wishing to use
partially-computed splats are required to drop the surrounding brackets.
This is less concerning because this scenario was introduced only in
0.9.5, so the scope for breakage is limited to those who adopted this
new feature quickly after upgrading.
As of this commit, the recommendation is to stop using brackets around
splats but the old form continues to be supported for backward
compatibility. In a future _major_ version of Terraform we will probably
phase out this legacy form to improve consistency, but for now both
forms are acceptable at the expense of some (pre-existing) weird behavior
when _actual_ lists-of-lists are used.
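A brief sketch of the two forms (the resource and module names are hypothetical):
```
# Legacy form (still accepted): splat wrapped in list brackets.
#   subnet_ids = ["${aws_subnet.example.*.id}"]

# Recommended form: the splat expression already yields a list, and
# partially-unknown results are handled correctly without the brackets.
module "network" {
  source     = "./network"
  subnet_ids = "${aws_subnet.example.*.id}"
}
```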
This addresses #14521 by officially adopting the suggested workaround of
dropping the brackets around the splat. However, it doesn't yet allow
passing of a partially-unknown list between modules: that still violates
assumptions in Terraform's core, so for the moment partially-unknown lists
work only within a _single_ interpolation expression, and cannot be
passed around between expressions. Until more holistic work is done to
improve Terraform's type handling, passing a partially-unknown splat
through to a module will result in a fully-unknown list emerging on
the other side, just as was the case before #14135; this change just
addresses the fact that this was failing with an error in 0.9.5.
> This validation checks that there are now splat variables referencing ourself. This currently is not allowed.
=>
> This validation checks that there are no splat variables referencing ourself. This currently is not allowed.
The variable validator assumes that any AST node it gets from an
interpolation walk is an indicator of an interpolation. Unfortunately,
back in f223be15 we changed the interpolation walker to emit a LiteralNode
as a way to signal that the result is a literal but not identical to the
input due to escapes.
The existence of this issue suggests a bit of a design smell in that the
interpolation walker interface at first glance appears to skip over all
literals, but it actually emits them in this one situation. In the long
run we should perhaps think about whether the abstraction is right here,
but this is a shallow, tactical change that fixes #13001.
Fixes #11800
Type check the value of count so we don't panic on the conversion.
I wondered "why didn't we do this before?" There is no excuse for NOT
doing it at all, but the reasoning was that prior to the list/map work
in 0.7 the value couldn't be anything other than a string, since any
primitive can turn into a string.
Regardless, we should've always done this.
This disables the computed value check for `count` during the validation
pass. This enables partial support for #3888 or #1497: as long as the
value is non-computed during the plan, complex values will work in
counts.
**Notably, this allows data source values to be present in counts!**
The "count" value can be disabled during validation safely because we
can treat it as if any field that uses `count.index` is computed for
validation. We then validate a single instance (as if `count = 1`) just
to make sure all required fields are set.
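For example (a hedged sketch; the data source and arguments are illustrative), a data source result can now drive count as long as it is known at plan time:
```
data "aws_availability_zones" "available" {}

resource "aws_subnet" "per_az" {
  # Works as long as the data source result is not computed during plan.
  count             = "${length(data.aws_availability_zones.available.names)}"
  vpc_id            = "${var.vpc_id}"
  cidr_block        = "${cidrsubnet(var.base_cidr, 8, count.index)}"
  availability_zone = "${element(data.aws_availability_zones.available.names, count.index)}"
}
```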
Fixes #4789
This improves the validation that valid provider aliases are used.
Previously, we required that provider aliases be defined in every module
in which they're used. This isn't correct because the alias may be used
in a parent module and inherited.
This removes that validation and replaces it with a check that a provider
alias must be defined in the module where it's used or in _any parent_
module. This allows inheritance to work properly.
We've always had this type of validation for aliases because we believe
it's a good UX tradeoff: typo-ing an alias is really painful, so we
require declaration of alias usage. It may add a small burden to
declare, but since relatively few aliases are used, it improves the
scenario where a user fat-fingers an alias name.
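A hedged sketch of the now-valid arrangement (the provider alias and module names are illustrative):
```
# Parent module
provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

module "child" {
  source = "./child"
}

# child/main.tf: the alias is not re-declared here, but the reference is
# now considered valid because it is defined in a parent module.
resource "aws_instance" "example" {
  provider = "aws.west"
  # ...
}
```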
Fixes #10715
`config.Merge` was not updated to support a number of new features. This
updates the codepath to merge various fields, including the `terraform`
block which was the issue in #10715.
The `Merge` API is called when an `_override` file is present to _merge_
configurations. Normally configurations are _appended_. Only an override
file triggers a _merge_.
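As a hedged illustration of the override behavior (the file names and values are hypothetical):
```
# main.tf
terraform {
  required_version = ">= 0.8.0"
}

# main_override.tf: because this is an override file, its terraform
# block is merged into the one above rather than appended as a duplicate.
terraform {
  required_version = ">= 0.8.2"
}
```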
I started working on a generic library to do this automatically a while
back but never finished it. This might motivate me to do so. In the
interest of getting a fix out, though, we'll continue the manual
approach.
Fixes #10597
This disallows names for variables, modules, etc. that start with a
digit. Such names cause parse errors with the new HIL parser and would
create long-term ambiguities if we allowed them.
I've also updated the upgrade guide to note this as a backwards
incompatibility and explain how people can fix affected configurations
going forward.
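A brief illustration of the new restriction (the names are hypothetical):
```
# No longer allowed: the name begins with a digit.
variable "123_name" {
  default = "value"
}

# Fine: the name begins with a letter.
variable "name_123" {
  default = "value"
}
```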
We allow variables to have descriptions specified, as additional context
for a module user as to what should be provided for a given variable.
We previously lacked a similar mechanism for outputs. Since they too are
part of a module's public interface, it makes sense to be able to add
descriptions for these for symmetry's sake.
This change makes a "description" attribute valid within an "output"
configuration block and stores it within the configuration data structure,
but doesn't yet do anything further with it. For now this is useful only
for third-party tools that might parse a module's config to generate
user documentation; later we could expose the descriptions as part of
the "apply" output, but that is left for a separate change.
This is currently a limitation of all lifecycle attributes. Right now,
interpolations are allowed through and the user ends up thinking they
should work. We should give an error instead.
In the future it should be possible to support some minimal set of
interpolations (static variables, data sources even perhaps) but for now
let's validate that this doesn't work.
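A hedged sketch of the kind of configuration that is now rejected (the attribute and variable are illustrative):
```
resource "aws_instance" "example" {
  # ...

  lifecycle {
    # Now an error: lifecycle attributes must be static values,
    # not interpolations.
    prevent_destroy = "${var.prevent_destroy}"
  }
}
```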
This commit changes config parsing so that, instead of weakly decoding
lists and maps into []string and map[string]string respectively, they
are decoded into []interface{} and map[string]interface{}. This is in
order to take advantage of the work integrated in #7082 to defeat the
backward compatibility features of the mapstructure library.
It also adds test coverage for loading empty variables and validating
their default types against expectations.
The mapstructure library has a regrettable backward compatibility
concern whereby a WeakDecode of []interface{}{} into a target of
map[string]interface{} yields an empty map rather than an error. One
possibility is to switch to using Decode instead of WeakDecode, but this
loses the nice handling of type conversion, requiring a large volume of
code to be added to Terraform or HIL in order to retain that behaviour.
Instead we add a DecodeHook to our usage of the mapstructure library
which checks for decoding []interface{}{} or []string{} into a map and
returns an error instead.
This has the effect of defeating the code added to retain backwards
compatibility in mapstructure, giving us the correct (for our
circumstances) behaviour of Decode for empty structures and the type
conversion of WeakDecode.
The code is identical to that in the HIL library, and packaged into a
helper.
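As a hedged config-level illustration of the case this guards against (the variable name is hypothetical):
```
variable "tags" {
  type = "map"

  # Previously weakly decoded into an empty map; with the DecodeHook in
  # place, this type mismatch is reported as an error instead.
  default = []
}
```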
Since the data resource lifecycle contains no steps to deal with tainted
instances, we must make sure that they never get created.
Doing this out in the command layer is not the best, but this is currently
the only layer that has enough information to make this decision and so
this simple solution was preferred over a more disruptive refactoring,
under the assumption that this taint functionality eventually gets
reworked in terms of StateFilter anyway.
This allows ${data.TYPE.NAME.FIELD} interpolation syntax at the
configuration level, though since there is no special handling of them
in the core package this currently just acts as an alias for
${TYPE.NAME.FIELD}.
Previously resources were assumed to always support the full set of
create, read, update and delete operations, and Terraform's resource
management lifecycle.
Data sources introduce a new kind of resource that only supports the
"read" operation. To support this, a new "Mode" field is added to
the Resource concept within the config layer, which can be set to
ManagedResourceMode (to indicate the only mode previously possible) or
DataResourceMode (to indicate that only "read" is supported).
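At the configuration level the two modes correspond to the two block types (the resource type names are illustrative):
```
# ManagedResourceMode: full create/read/update/delete lifecycle.
resource "aws_instance" "example" {
  # ...
}

# DataResourceMode: supports only the "read" operation.
data "aws_ami" "example" {
  # ...
}
```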
To support both managed and data resources in the tests, the
stringification of resources in config_string.go is adjusted slightly
to use the Id() method rather than the unusual type[name] serialization
from before, causing a simple mechanical adjustment to the loader tests'
expected result strings.
This commit adds support for native list variables and outputs, building
on the previous change to state. Interpolation functions now return
native lists in preference to StringList.
List variables are defined like this:
variable "test" {
# This can also be inferred
type = "list"
default = ["Hello", "World"]
}
output "test_out" {
value = "${var.a_list}"
}
This results in the following state:
```
...
"outputs": {
"test_out": [
"hello",
"world"
]
},
...
```
And the result of terraform output is as follows:
```
$ terraform output
test_out = [
hello
world
]
```
Using the output name, an xargs-friendly representation is output:
```
$ terraform output test_out
hello
world
```
The output command also supports indexing into the list (with
appropriate range checking and no wrapping):
```
$ terraform output test_out 1
world
```
Along with maps, list outputs from one module may be passed as variables
into another, removing the need for the `join(",", var.list_as_string)`
and `split(",", var.list_as_string)` which was previously necessary in
Terraform configuration.
This commit also updates the tests and implementations of built-in
interpolation functions to take and return native lists where
appropriate.
A backwards compatibility note: previously the concat interpolation
function was capable of concatenating either strings or lists. The
strings use case was deprecated a long time ago but still remained.
Because we cannot return `ast.TypeAny` from an interpolation function,
this use case is no longer supported for strings - `concat` is only
capable of concatenating lists. This should not be a huge issue - the
type checker picks up incorrect parameters, and the native HIL string
concatenation - or the `join` function - can be used to replicate the
missing behaviour.