This method mirrors that of config.Backend, so we can compare the
configuration of a backend read from a config vs. that of a backend read
from a state. This will prevent init from reinitializing when using
`-backend-config` options that match the existing state.
The variable validator assumes that any AST node it gets from an
interpolation walk is an indicator of an interpolation. Unfortunately,
back in f223be15 we changed the interpolation walker to emit a LiteralNode
as a way to signal that the result is a literal but not identical to the
input due to escapes.
The existence of this issue suggests a bit of a design smell in that the
interpolation walker interface at first glance appears to skip over all
literals, but it actually emits them in this one situation. In the long
run we should perhaps think about whether the abstraction is right here,
but this is a shallow, tactical change that fixes #13001.
golang/tools commit 23ca8a263 changed the format of the leading comment
to comply with some new standards discussed here:
https://golang.org/issue/13560
This is the result of running generate with the latest version of
stringer. Everyone working on Terraform will need to update stringer
after this is merged, to avoid reverting this:
go get -u golang.org/x/tools/cmd/stringer
Adds `basename` and `dirname` interpolation. I want to add a `stack` tag to our infrastructure, the value of which is set to `${basename(path.cwd)}`. We currently use `${replace(path.cwd, "/^.+\\//", "")}` instead, but this is extremely unreadable. The existence of a `basename` function would be very useful for this use case.
I don't have an immediate use case for a `dirname` function, but it seemed reasonable to add it as well.
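A minimal sketch of the intended usage (the resource type and tag name are illustrative):
```
resource "aws_instance" "example" {
  tags {
    # e.g. "my-stack" when Terraform is run from /projects/my-stack
    stack = "${basename(path.cwd)}"
  }
}
```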
When configuration is read out of JSON, HCL assumes that empty levels of
objects can be flattened, but this removes too much to decode into a
config.Terraform struct.
Reconstruct the appropriate AST to decode the config struct.
Ensure that fields set in an earlier Terraform config block aren't
removed by Append when encountering another Terraform block. When
multiple blocks contain the same field, the later one still wins.
Fixes #12788
We would panic when referencing an output from an undefined module. The
panic above this is correct but in this case Load will not catch
interpolated variables that _reference_ an unloaded/undefined module.
Test included.
It can be tedious fixing a new module with many errors when Terraform
only outputs the first random error it encounters.
Accumulate all errors from validation, and format them for the user.
Fixes #11800
Type check the value of count so we don't panic on the conversion.
I wondered "why didn't we do this before?" There is no excuse for NOT
doing it at all but the reasoning was beacuse prior to the list/map work
in 0.7, the value couldn't be anything other than a string since any
primitive can turn into a string.
Regardless, we should've always done this.
This disables the computed value check for `count` during the validation
pass. This enables partial support for #3888 or #1497: as long as the
value is non-computed during the plan, complex values will work in
counts.
**Notably, this allows data source values to be present in counts!**
The "count" value can be disabled during validation safely because we
can treat it as if any field that uses `count.index` is computed for
validation. We then validate a single instance (as if `count = 1`) just
to make sure all required fields are set.
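For example, configuration along these lines now works, as long as the count value is known at plan time (the data source, its attributes, and the `aws_vpc.main` resource are illustrative):
```
data "aws_availability_zones" "available" {}

resource "aws_subnet" "per_az" {
  # One subnet per availability zone; the count comes from a data source.
  count             = "${length(data.aws_availability_zones.available.names)}"
  vpc_id            = "${aws_vpc.main.id}"
  availability_zone = "${element(data.aws_availability_zones.available.names, count.index)}"
  cidr_block        = "${cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)}"
}
```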
Fixes #11038
This is a **short term fix**.
Terraform core doesn't currently handle root modules named "root" well
because the prefix `[]string{"root"}` has special meaning and Terraform
core [currently] can't disambiguate between the root module and a module
named "root" in the root module.
This PR introduces a short term fix by simply disallowing root modules
named "root". This shouldn't break any BC because since 0.8.0 this
didn't work at all in many broken ways (including crashes).
Longer term, this should be fixed by removing the special prefix at all
and having empty paths be root. I started down this path but the core
changes necessary are far too scary for a patch release. We can aim for
0.9.
Fixes #4789
This improves the validation that provider aliases are used correctly.
Previously, we required that provider aliases be defined in every module
in which they're used. This isn't correct, because the alias may be
defined in a parent module and inherited.
This removes that validation and replaces it with a check that a provider
alias must be defined either in the module where it's used or in _any
parent_ module. This allows inheritance to work properly.
We've always had this type of validation for aliases because we believe
it's a good UX tradeoff: typo-ing an alias is really painful, so we
require alias usage to be declared. This adds a small declaration burden,
but since relatively few aliases are used, it improves the scenario where
a user fat-fingers an alias name.
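A sketch of the now-valid pattern (module path, resource, and region are illustrative); the alias is declared only in the root module and inherited by the child:
```
# Root module
provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

module "network" {
  source = "./network"
}

# ./network/main.tf — no aliased provider block is required here
resource "aws_vpc" "main" {
  provider   = "aws.west"
  cidr_block = "10.0.0.0/16"
}
```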
Fixes #10715
`config.Merge` was not updated to support a number of new features. This
updates the codepath to merge various fields, including the `terraform`
block which was the issue in #10715.
The `Merge` API is called when an `_override` file is present to _merge_
configurations. Normally configurations are _appended_. Only an override
file triggers a _merge_.
I started working on a generic library to do this automatically a while
back but never finished it. This might motivate me to do so. In the
interest of getting a fix out, though, we'll continue with the manual
approach.
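As a reminder of the behaviour in question, roughly (file names and values are illustrative): a `*_override.tf` file merges into matching blocks rather than being appended as new ones.
```
# main.tf
resource "aws_instance" "web" {
  ami           = "ami-123456"
  instance_type = "t2.micro"
}

# main_override.tf — merged over the block above; only instance_type changes
resource "aws_instance" "web" {
  instance_type = "t2.large"
}
```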
Fixes #10597
This disallows names for variables, modules, etc. that start with a
number. Such names cause parse errors with the new HIL parser and would
create long-term ambiguities if we allowed them.
I've also updated the upgrade guide to note this as a backwards
compatibility issue and to explain how people can fix it going forward.
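For example (hypothetical names):
```
# No longer accepted — the name starts with a number:
# variable "0_subnet_count" {}

# Works:
variable "subnet_count_0" {}
```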
We allow variables to have descriptions specified, as additional context
for a module user as to what should be provided for a given variable.
We previously lacked a similar mechanism for outputs. Since they too are
part of a module's public interface, it makes sense to be able to add
descriptions for these for symmetry's sake.
This change makes a "description" attribute valid within an "output"
configuration block and stores it within the configuration data structure,
but doesn't yet do anything further with it. For now this is useful only
for third-party tools that might parse a module's config to generate
user documentation; later we could expose the descriptions as part of
the "apply" output, but that is left for a separate change.
Fixes #10075. Fixes #10013.
When interpolating, we were only maintaining the last known slice index.
If you had sibling slices then you could lose your slice index when
exiting the slice. The resulting behavior was that on some runs the
computed key would be "slice.0.attr" and on others it would be
"slice.attr", the latter being incorrect.
We now maintain a list of slice indexes so that as we unnest, we
properly restore the old value.
Surprisingly unrelated to the graph but the shadow graph caught this
which is great. :)
Fixes #7774
This modifies the `import` command to load configuration files from the
pwd. This also augments the configuration loading section for the CLI to
have a new option (default false, same as old behavior) to
allow directories with no Terraform configurations.
For import, we allow directories with no Terraform configurations so
this option is set to true.
Fixes #7846
This changes from using the HCL decoder to manually decoding the
`variable` blocks within the configuration. This gives us a lot more
power to catch validation errors. This PR retains the same tests and
fixes one additional issue (covered by a test) in the case where a
variable has no name assigned.
Fixes #7607
An empty list is a valid value for formatlist which means to just return
an empty list as a result. The logic was somewhat convoluted here so I
cleaned that up a bit too. The function overall can definitely be
cleaned up a lot more but I left it mostly as-is to fix the bug.
This commit adds a new interpolation function, zipmap, which produces a
map given a list of string keys and a list of values of the same length
as the list of keys.
The name comes from the same operation in Clojure (and likely other
functional languages).
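A sketch of the behaviour (variable names are illustrative):
```
# zipmap(list("a", "b"), list("1", "2")) produces { a = "1", b = "2" }
output "instance_ips_by_name" {
  value = "${zipmap(var.instance_names, var.instance_ips)}"
}
```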
This is currently a limitation of all lifecycle attributes. Right now,
interpolations are allowed through, and the user ends up thinking they
should work. We should give an error instead.
In the future it should be possible to support some minimal set of
interpolations (static variables, data sources even perhaps) but for now
let's validate that this doesn't work.
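For illustration, configuration along these lines is what now fails validation instead of being silently accepted (the resource and variable are hypothetical):
```
resource "aws_instance" "web" {
  ami           = "ami-123456"
  instance_type = "t2.micro"

  lifecycle {
    # Now rejected: interpolations are not supported in lifecycle attributes.
    prevent_destroy = "${var.protect}"
  }
}
```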
This changes the key for the storage to be the _raw_ source from the
module, not the fully expanded source. Example: it'll be a relative path
instead of an absolute path.
This allows the ".terraform/modules" directory to be portable when
moving to other machines. This was a behavior that existed in <= 0.7.2
and was broken with #8398. This amends that and adds a test to verify.
As part of working on ResourceConfig.DeepCopy and Equal, I updated
reflectwalk (to fix some issues in the new functions), but this
introduced more issues in other parts of Terraform. This update fixes
those.
Data sources should be able to support counts like a resource. We need
to remove "count" when we load the config because the key doesn't exist
in the schema, and the resource won't validate.
When a resource has only a single key set, the HCL parser treats that
key as part of the overall set of object keys. This isn't valid since
we expect resources to have exactly two keys. In this scenario, we have
to "unwrap" the keys back into a set of objects.
Set the default log package output to ioutil.Discard during tests if the
`-v` flag isn't set. If we are verbose, then apply the filter according
to the TF_LOG env variable.
The concat interpolation function now only accepts list arguments.
Strings are no longer supported, for concatenation or appending to
lists. All arguments must be a list, and single elements can be promoted
with the `list` interpolation function.
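A sketch of the new usage (the variable name is illustrative):
```
output "all_security_groups" {
  # Single elements must be promoted to a list before concatenation.
  value = "${concat(var.base_security_groups, list("sg-extra"))}"
}
```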
Fixes the following error when cross compiling:
```
--> freebsd/amd64 error: exit status 2
Stderr: # github.com/hashicorp/terraform/config/module
config/module/inode.go:18: cannot use st.Ino (type uint32) as type uint64 in return argument
```
* `map(key, value, ...)` - Returns a map consisting of the key/value pairs
specified as arguments. Every odd argument must be a string key, and every
even argument must have the same type as the other values specified.
Duplicate keys are not allowed. Examples:
* `map("hello", "world")`
* `map("us-east", list("a", "b", "c"), "us-west", list("b", "c", "d"))`
This will allow the concat interpolation function to accept lists of
lists, and lists of maps as well as strings. We still allow bare strings
for backwards compatibility, but remove some of the old comment wording,
as it could cause this function to be confused with actual string
concatenation.
Since maps are now supported in the config, this removes the superfluous
(and failing) TestInterpolationFuncConcatListOfMaps.
Allow lists and maps within the list interpolation function via variable
interpolation. Since this requires setting the variadic type to TypeAny,
we check for non-homogeneous lists in the callback.
The list() interpolation function provides a way to add support for list
literals (of strings) to HIL without having to invent new syntax for it
and modify the HIL parser.
It presents as a function, thus:
- list() -> []
- list("a") -> ["a"]
- list("a", "b") -> ["a", "b"]
Thanks to @wr0ngway for the idea of this approach, fixes #7460.
Part of the interpolation walk is to detect keys which involve computed
values and therefore cannot be resolved at this time. The interpolation
walker keeps sufficient state to be able to populate the ResourceConfig
with a slice of such keys.
Previously they didn't take slice indexes into account, so in the
following case:
```
"services": []interface{}{
map[string]interface{}{
"elb": "___something computed___",
},
map[string]interface{}{
"elb": "___something else computed___",
},
map[string]interface{}{
"elb": "not computed",
},
}
```
Unknown keys would be populated as follows:
```
services.elb
services.elb
```
This is not sufficient information to be useful, as it is impossible to
distinguish which of the `services.elb`s are unknown vs not.
This commit therefore retains the slice indexes as part of the key for
unknown keys - producing for the example above:
```
services.0.elb
services.1.elb
```
When copying a config module, make sure the full path for src and dst
files don't match, and also check the inode in case we resolved a
different path to the same file.
Make a note about the unsafe usage of reusing a tempDir path.
Escaped quotes are no longer supported as HIL syntax (as of the last
update to HIL), so this commit changes the Terraform config-layer test
to verify the non-presence of this behaviour for 0.7.
The `concat()` interpolation function does not yet support types other
than strings / lists of strings. Make it an error message instead of a
panic when a list of non-primitives is supplied.
Fixes the panic in #7030
Dot indexing worked in the "regexps and strings" world of 0.6.x, but it
no longer works on the 0.7 series w/ proper List / Map types.
There is plenty of dot-indexed config out in the wild, so we need to do
what we can to point users to the new syntax.
Here is one place we can do it for user variables (`var.somemap`). We'll
also need to address Resource Variables and Module Variables in a
separate PR.
This fixes the panic in #7103 - a proper error message is now returned.
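For reference, the syntax change being pointed to (the map contents are illustrative):
```
variable "amis" {
  type = "map"

  default = {
    "us-east-1" = "ami-123456"
  }
}

resource "aws_instance" "web" {
  # 0.6-style dot indexing — now produces an error pointing at the new syntax:
  # ami = "${var.amis.us-east-1}"

  # 0.7 native index syntax:
  ami           = "${var.amis["us-east-1"]}"
  instance_type = "t2.micro"
}
```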
This commit changes config parsing from weak decoding lists and maps
into []string and map[string]string respectively to decode into
[]interface{} and map[string]interface{} respectively. This is in order
to take advantage of the work integrated in #7082 to defeat the backward
compatibility features of the mapstructure library.
Test coverage of loading empty variables and validating their default
types against expectation.
The mapstructure library has a regrettable backward compatibility
concern whereby a WeakDecode of []interface{}{} into a target of
map[string]interface{} yields an empty map rather than an error. One
possibility is to switch to using Decode instead of WeakDecode, but this
loses the nice handling of type conversion, requiring a large volume of
code to be added to Terraform or HIL in order to retain that behaviour.
Instead we add a DecodeHook to our usage of the mapstructure library
which checks for decoding []interface{}{} or []string{} into a map and
returns an error instead.
This has the effect of defeating the code added to retain backwards
compatibility in mapstructure, giving us the correct (for our
circumstances) behaviour of Decode for empty structures and the type
conversion of WeakDecode.
The code is identical to that in the HIL library, and packaged into a
helper.
Fixes#4474, where lookup() calls fail out the entire interpolation when
the provided key value is not found in the map. This will allow using
coalesce() along with lookup() to greatly improve module flexibility.
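A sketch of the pattern this enables (names and values are hypothetical): when the key is missing from the map, coalesce() falls through to a default instead of the whole interpolation failing.
```
resource "aws_instance" "example" {
  ami           = "ami-123456"
  instance_type = "${coalesce(lookup(var.instance_types, var.environment), "t2.micro")}"
}
```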
Since the data resource lifecycle contains no steps to deal with tainted
instances, we must make sure that they never get created.
Doing this out in the command layer is not the best, but this is currently
the only layer that has enough information to make this decision and so
this simple solution was preferred over a more disruptive refactoring,
under the assumption that this taint functionality eventually gets
reworked in terms of StateFilter anyway.
This allows ${data.TYPE.NAME.FIELD} interpolation syntax at the
configuration level, though since there is no special handling of them
in the core package this currently just acts as an alias for
${TYPE.NAME.FIELD}.
This allows the config loader to read "data" blocks from the config and
turn them into DataSource objects.
This just reads the data from the config file. It doesn't validate the
data nor do anything useful with it.
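Putting the two changes together, a configuration like the following (hypothetical data source and fields) can now at least be parsed at the config layer:
```
data "aws_ami" "ubuntu" {
  most_recent = true
}

resource "aws_instance" "web" {
  ami           = "${data.aws_ami.ubuntu.id}"
  instance_type = "t2.micro"
}
```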
Previously resources were assumed to always support the full set of
create, read, update and delete operations, and Terraform's resource
management lifecycle.
Data sources introduce a new kind of resource that only supports the
"read" operation. To support this, a new "Mode" field is added to
the Resource concept within the config layer, which can be set to
ManagedResourceMode (to indicate the only mode previously possible) or
DataResourceMode (to indicate that only "read" is supported).
To support both managed and data resources in the tests, the
stringification of resources in config_string.go is adjusted slightly
to use the Id() method rather than the unusual type[name] serialization
from before, causing a simple mechanical adjustment to the loader tests'
expected result strings.
This commit adds support for native list variables and outputs, building
up on the previous change to state. Interpolation functions now return
native lists in preference to StringList.
List variables are defined like this:
variable "test" {
# This can also be inferred
type = "list"
default = ["Hello", "World"]
}
output "test_out" {
value = "${var.a_list}"
}
This results in the following state:
```
...
"outputs": {
"test_out": [
"hello",
"world"
]
},
...
```
And the result of terraform output is as follows:
```
$ terraform output
test_out = [
hello
world
]
```
Using the output name, an xargs-friendly representation is output:
```
$ terraform output test_out
hello
world
```
The output command also supports indexing into the list (with
appropriate range checking and no wrapping):
```
$ terraform output test_out 1
world
```
Along with maps, list outputs from one module may be passed as variables
into another, removing the need for the `join(",", var.list_as_string)`
and `split(",", var.list_as_string)` which was previously necessary in
Terraform configuration.
This commit also updates the tests and implementations of built-in
interpolation functions to take and return native lists where
appropriate.
A backwards compatibility note: previously the concat interpolation
function was capable of concatenating either strings or lists. The
strings use case was deprecated a long time ago but still remained.
Because we cannot return `ast.TypeAny` from an interpolation function,
this use case is no longer supported for strings - `concat` is only
capable of concatenating lists. This should not be a huge issue - the
type checker picks up incorrect parameters, and the native HIL string
concatenation - or the `join` function - can be used to replicate the
missing behaviour.
This changes the representation of maps in the interpolator from the
dotted flatmap form of a string variable named "var.variablename.key"
per map element to use native HIL maps instead.
This involves porting some of the interpolation functions in order to
keep the tests green, and adding support for map outputs.
There is one backwards incompatibility: as a result of an implementation
detail of maps, one could access an indexed map variable using the
syntax "${var.variablename.key}".
This is no longer possible - instead the HIL native syntax
"${var.variablename["key"]}" must be used. This was previously
documented (though not heavily used), so it must be noted as a backwards
compatibility issue for Terraform 0.7.
* core: Add support for marking outputs as sensitive
This commit allows an output to be marked "sensitive", in which case the
value is redacted in the post-refresh and post-apply list of outputs.
For example, the configuration:
```
variable "input" {
default = "Hello world"
}
output "notsensitive" {
value = "${var.input}"
}
output "sensitive" {
sensitive = true
value = "${var.input}"
}
```
Would result in the output:
```
terraform apply
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
notsensitive = Hello world
sensitive = <sensitive>
```
The `terraform output` command continues to display the value as before.
Limitations: Note that sensitivity is not tracked internally, so if the
output is interpolated in another module into a resource, the value will
be displayed. The value is still present in the state.
hil.Eval() now returns (hil.EvaluationResult, error) instead of (value,
type, error). This commit updates the call sites, but retains all
previous behaviour. Tests are also updated.
These tests demonstrate a problem where the types of module inputs are
not checked. For example, if a module - inner - defines a variable
"should_be_a_map" as a map, or with a default value that is a map, we do
not fail if the user sets the variable's value in the outer module to a
string. This is also a problem in nested modules.
The implementation changes add a type checking step into the graph
evaluation process to ensure invalid types are not passed.
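A sketch of the failing case described above (paths are illustrative):
```
# modules/inner/variables.tf
variable "should_be_a_map" {
  type = "map"
}

# Root module — now fails type checking instead of being silently accepted
module "inner" {
  source          = "./modules/inner"
  should_be_a_map = "this is a string, not a map"
}
```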
Fixes an interpolation race that was occurring when a tainted destroy
node and a primary destroy node both tried to interpolate a computed
count in their config. Since they were sharing a pointer to the _same_
config, depending on how the race played out one of them could catch the
config uninterpolated and would then throw a syntax error.
The `Copy()` tree implemented for this fix can probably be used
elsewhere - basically we should copy the config whenever we drop nodes
into the graph - but for now I'm just applying it to the place that
fixes this bug.
Fixes #4982 - Includes a test covering that race condition.
This function returns -1 for negative numbers, 0 for 0 and 1 for positive numbers.
Useful when you need to set a value for the first resource and a different value for the rest of the resources.
Example: `${element(split(",", var.r53_failover_policy), signum(count.index))}`
If a variable type which is invalid (e.g. "stringg") is declared, we now
include the invalid type description in the error message to make it
easier to track down the source of the error in the source file.
This commit adds support for declaring variable types in Terraform
configuration. Historically, the type has been inferred from the default
value, defaulting to string if no default was supplied. This has caused
users to devise workarounds if they wanted to declare a map but provide
values from a .tfvars file (for example).
The new syntax adds the "type" key to variable blocks:
```
variable "i_am_a_string" {
type = "string"
}
variable "i_am_a_map" {
type = "map"
}
```
This commit does _not_ extend the type system to include bools, integers
or floats - the only two types available are maps and strings.
Validation is performed if a default value is provided in order to
ensure that the default value type matches the declared type.
In the case that a type is not declared, the old logic is used for
determining the type. This allows backwards compatibility with previous
Terraform configuration.
The render code path in `template_file` was doing unsynchronized access
to a shared mapping of functions in `config.Func`.
This caused a race condition that was most often triggered when a
`template_file` had a `count` of more than one, and expressed itself as
a panic in the plugin followed by a cascade of "unexpected EOF" errors
through the plugin system.
Here, we simply turn the FuncMap from shared state into a generated
value, which avoids the race. We do more re-initialization of the data
structure, but the performance implications are minimal, and we can
always revisit with a perf pass later now that the race is fixed.
This may be brittle as it makes use of .gitattributes to override the
autocrlf setting in order to have an input file with Windows line
endings across multiple platforms.
This was never intended to be valid syntax, but it worked in the old HCL
parser, and we've found a decent number of examples of it in the wild.
Fixed in https://github.com/hashicorp/hcl/pull/62 and we'll keep this
test in Terraform to cover the behavior.
Building on the work of #3846, deprecate `filename` in favor of a
`template` attribute that accepts file contents instead of a path.
Required a bit of work in the interpolation code to prevent Terraform
from assuming that template interpolations were resource variables that
needed to be resolved. Leaving them as "Unknown Variables" prevents
interpolation from happening early and lets the `template_file` resource
do its thing.
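A sketch of the new attribute (the template path and vars are illustrative):
```
resource "template_file" "init" {
  # Previously: filename = "init.tpl"
  template = "${file("${path.module}/init.tpl")}"

  vars {
    hostname = "web-01"
  }
}
```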
This test reproduces the issue which is likely the root cause of #3840.
Test is currently failing with an "illegal character" message
corresponding with the location of the heredoc, which is also seen in
various acceptance tests for providers.
It has improvements to error messaging that we want.
We'll use this occasion to begin developing / building with Go 1.5 from
here on out. Build times will be slower, but we have core development
plans that will help mitigate that.
/cc @hashicorp/terraform-committers
These new functions allow Terraform to be used for network address space
planning tasks, and make it easier to produce reusable modules that
contain or depend on network infrastructure.
For example:
- cidrsubnet allows an aws_subnet to derive its
CIDR prefix from its parent aws_vpc.
- cidrhost allows a fixed IP address for a resource to be assigned within
an address range defined elsewhere.
- cidrnetmask provides the dotted-decimal form of a prefix length that is
accepted by some systems such as routing tables and static network
interface configuration files.
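A short sketch of the three functions (addresses and resources are illustrative):
```
resource "aws_subnet" "a" {
  vpc_id = "${aws_vpc.main.id}"

  # 10.0.0.0/16 with 8 extra prefix bits and subnet number 1 -> 10.0.1.0/24
  cidr_block = "${cidrsubnet(aws_vpc.main.cidr_block, 8, 1)}"
}

# cidrhost("10.0.1.0/24", 5)  -> 10.0.1.5
# cidrnetmask("10.0.0.0/16")  -> 255.255.0.0
```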
The bulk of the work here is done by an external library I authored called
go-cidr. It is MIT licensed and was implemented primarily for the purpose
of using it within Terraform. It has its own unit tests and so the unit
tests within this change focus on simple success cases and on the correct
handling of the various error cases.
There isn't any precedent for abbreviating words in the interpolation
function names, and it may not be clear to all users what "enc" and "dec"
are short for, so instead we'll prefer to spell out the whole words for
improved readability.
It seems there are 4 locations left that use the `helper/multierror`
package, where the rest of TF has settled on the `hashicorp/go-multierror`
package.
Functionally this doesn’t change anything, so I suggest deleting the
builtin version, as it can only cause confusion (both packages have the
same name, but are still different types according to Go’s type system).
Had to handle a lot of implicit leaning on a few properties of the old
representation:
* Old representation allowed plain strings to be treated as lists
without problem (i.e. shoved into strings.Split), now strings need to
be checked whether they are a list before they are treated as one
(i.e. shoved into StringList(s).Slice()).
* Tested behavior of 0 and 1 length lists in formatlist() was a side
effect of the representation. Needs to be special cased now to
maintain the behavior.
* Found a pretty old context test failure that was wrong in several
different ways. It's covered by TestContext2Apply_multiVar so I
removed it.
This is the initial pure "all tests passing without a diff" stage. The
plan is to change the internal representation of StringList to include a
suffix delimiter, which will allow us to recognize empty and
single-element lists.
Without this 12-line function it’s impossible to use any of the
Terraform code without having the files on disk. As more
and more people are using (parts of) Terraform in other software, this
seems to be a very welcome addition. It has no negative impact on
Terraform itself whatsoever (the function is never called), but it
opens up a lot of other use cases.
Next to the single new function, I renamed the existing function (and
related tests) to better reflect what the function does. So now there
is a `LoadDir` function which calls `LoadFile` for each file, which
kind of made sense to me, especially when now adding a `LoadJSON`
function as well.
But of course if the rename is a problem, I can revert that part as
it’s not related to the added `LoadJSON` function.
Thanks!
formatlist distributes formatting over lists.
See the docs for details.
As a colleague commented: "It happens all the time that we want a set of
outputs, but in a slightly different way than just simple joining or
concatting." formatlist (combined with join) makes it easy to satisfy
those needs.
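A small sketch (the variable name is illustrative):
```
output "instance_urls" {
  # Produces one URL per element of var.hostnames, joined into a single string.
  value = "${join(", ", formatlist("https://%s:8080", var.hostnames))}"
}
```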
Adds an "alias" field to the provider which allows creating multiple instances
of a provider under different names. This provides support for configurations
such as multiple AWS providers for different regions. In each resource, the
provider can be set with the "provider" field.
(thanks to Cisco Cloud for their support)
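A sketch of the feature (regions and resource are illustrative):
```
provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

resource "aws_instance" "west_coast" {
  provider      = "aws.west"
  ami           = "ami-123456"
  instance_type = "t2.micro"
}
```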
When the `prevent_destroy` flag is set on a resource, any plan that
would destroy that resource instead returns an error. This has the
effect of preventing the resource from being unexpectedly destroyed by
Terraform until the flag is removed from the config.
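A sketch of the flag in use (the resource is illustrative):
```
resource "aws_db_instance" "primary" {
  instance_class = "db.t2.micro"

  lifecycle {
    # Any plan that would destroy this resource now returns an error.
    prevent_destroy = true
  }
}
```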