The map function assumed that the key arguments were strings, and would
panic if they were not.
After this commit, calling `map(1, 2)` will result in a map `{"1" = 2}`,
and calling `map(null, 1)` will result in an error.
Fixes #23346, fixes #23043
Previously the templatefile function would permit any arbitrary string as
a variable name, but due to the HCL template syntax it would be impossible
to refer to one that isn't a valid HCL identifier without causing an
HCL syntax error.
The HCL syntax errors are correct, but don't really point to the root
cause of the problem. Instead, we'll pre-verify that the variable names
are valid before we even try to render the template, and give a
specialized error message that refers to the vars argument expression as
the problematic part, which will hopefully make the resolution path
clearer for a user encountering this situation.
The syntax error still remains for situations where all of the variable
names are correct but e.g. the user made a typo referring to one, which
makes sense because in that case the problem _is_ inside the template.
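As a sketch of the situation this improves (the template path and
variable name here are hypothetical):

    # "not valid" is not a valid HCL identifier, so this now fails with a
    # specialized error pointing at the vars argument, rather than an HCL
    # syntax error from inside the template:
    templatefile("${path.module}/greeting.tmpl", {
      "not valid" = "world"
    })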
This function has a number of different error cases with hopefully-helpful
error messages for each, so it's good to test that we're getting the error
message we were actually expecting in each case.
* add setdifference and setsubtract functions and docs
* remove setdifference as it is not implemented correctly in the underlying lib
* Update setintersection.html.md
* Update setproduct.html.md
* Update setunion.html.md
This PR implements two changes to the merge function.
- Rather than always defining the merge return type as dynamic, return
a precise type when all argument types match, or all possible object
attributes are known.
- Always return a value containing all keys when the keys are known.
This allows the use of merge output in for_each, even when the values
are yet to be determined.
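For example (a sketch; the resource type and references here are
hypothetical), the keys of this merge result are statically known even
though one of the values is not, so it is now usable in for_each:

    resource "example_thing" "this" {
      for_each = merge(
        { a = "static" },
        { b = example_thing.other.id }  # value not known until apply
      )
    }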
These are intended to make it easier to work with arbitrary data
structures whose shape might not be known statically, such as the result
of jsondecode(...) or yamldecode(...) of data from a separate system.
For example, in an object value which has attributes that may or may not
be set, we can concisely provide a fallback value to use when the attribute
isn't set:
    try(local.example.foo, "fallback-foo")
Using a "try to evaluate" model rather than explicit testing fits better
with the usual programming model of the Terraform language where values
are normally automatically converted to the necessary type where possible:
the given expression is subject to all of the same normal type conversions,
which avoids inadvertently creating a more restrictive evaluation model
as might happen if this were handled using checks like a hypothetical
isobject(...) function, etc.
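For example, combining try with jsondecode (a sketch; var.raw_json is a
hypothetical input variable):

    locals {
      decoded = jsondecode(var.raw_json)           # shape not known statically
      name    = try(local.decoded.name, "default") # fallback if "name" is absent
    }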
The fallback type for GetResource from an EachMap is a cty.Object,
because resource schemas may contain dynamically typed attributes.
Check for an Object type in the evaluation of self, to use the proper
GetAttr method when extracting the value.
self references do not need to be added to `managedResource`, and in
fact that could cause issues later if self is allowed in contexts other
than managed resources.
Coalesce two cases in the Referenceable switch by taking the
ContainingResource address of an instance beforehand.
Previously we were using the experimental HCL 2 repository, but now we'll
shift over to the v2 import path within the main HCL repository as part of
actually releasing HCL 2.0 as stable.
This is a mechanical search/replace to the new import paths. It also
switches to the v2.0.0 release of HCL, which includes some new code that
Terraform didn't previously have but should not change any behavior that
matters for Terraform's purposes.
For the moment the experimental HCL2 repository is still an indirect
dependency via terraform-config-inspect, so it remains in our go.sum and
vendor directories. Because terraform-config-inspect uses
a much smaller subset of the HCL2 functionality, this does still manage
to prune the vendor directory a little. A subsequent release of
terraform-config-inspect should allow us to completely remove that old
repository in a future commit.
This is a companion to cidrsubnet that allows bulk-allocation of multiple
subnet addresses at once, with automatic numbering.
Unlike cidrsubnet, cidrsubnets allows each of the allocations to have a
different prefix length, and will pack the networks consecutively into the
given address space. cidrsubnets can potentially create more complicated
addressing schemes than cidrsubnet alone can, because it's able to take
into account the full set of requested prefix lengths rather than just
one at a time.
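For example, allocating two /16 networks and then a /24 network from a
/8 address space:

    cidrsubnets("10.0.0.0/8", 8, 8, 16)
    # produces ["10.0.0.0/16", "10.1.0.0/16", "10.2.0.0/24"]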
Continue evaluating each resource only as a whole, and push the indexing
of the resource down into expression evaluation.
The exception here is `self`, which must be a single instance extracted
from the resource. We now also add the entire resource to the context,
which was previously only partially populated with the self-referenced
instance.
In order to allow lazy evaluation of resource indexes, we can't index
resources immediately via GetResourceInstance. Change the evaluation to
always return whole Resources via GetResource, and index individual
instances during expression evaluation.
This will allow us to always check for invalid index errors rather than
returning an unknown value and ignoring it during apply.
Reference: https://github.com/hashicorp/terraform/issues/16697
Enumerates a set of regular file names from a given glob pattern. Implemented via the Go stdlib `path/filepath.Glob()` functionality. Notably, stdlib does not support `**` or `{}` extended patterns. See also: https://github.com/golang/go/issues/11862
To support the extended glob patterns, it will require adding a dependency on a third party library or adding our own matching code.
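A usage sketch (the directory layout is hypothetical, and the exact
argument form shown here is an assumption):

    # returns the set of regular files matching the pattern; ** and {}
    # extended patterns are not supported by the stdlib matcher
    fileset(path.module, "files/*.txt")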
These existing upstream cty functions allow matching strings against
regular expression patterns, which can be useful if you need to consume
a non-standard string format that Terraform doesn't (and can't) have a
built-in function for.
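For example:

    regex("[0-9]+", "abc123def")     # returns the first match: "123"
    regexall("[0-9]+", "a1b22c333")  # returns all matches: ["1", "22", "333"]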
* lang/funcs: lookup() can work with maps of lists, maps and objects
lookup() can already handle arbitrary objects of (whatever) and should
handle maps of (whatever) similarly.
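For example, a map value now works the same way as an object value:

    lookup({ a = ["x", "y"] }, "a", [])  # returns ["x", "y"]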
Mistakenly using dynamic on an attribute will lead to a panic when
attempting to resolve variable references with a partial body, because
the dynamic blocks have yet to be expanded and validated. Check that the
block element type is actually an object before generating a schema.
The function would previously panic when one or more null values were among the arguments.
The new behavior treats nulls as empty strings and therefore removes them.
These follow the same principle as jsondecode and jsonencode, but use
YAML instead of JSON.
YAML has a much more complex information model than JSON, so we can only
support a subset of it during decoding, but hopefully the subset supported
here is a useful one.
Because there are many different ways to _generate_ YAML, the yamlencode
function is forced to make some decisions, and those decisions are likely
to affect compatibility with other real-world YAML parsers. Although the
format here is intended to be generic and compatible, we may find that
there are problems with it that we'll want to adjust for in a future
release, so yamlencode is therefore marked as experimental for now until
the underlying library is ready to commit to ongoing byte-for-byte
compatibility in serialization.
The main use-case here is met by yamldecode, which will allow reading in
files written in YAML format by humans for use in Terraform modules, in
situations where a higher-level input format than direct Terraform
language declarations is helpful.
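A sketch of that use-case (the filename is hypothetical):

    locals {
      settings = yamldecode(file("${path.module}/settings.yaml"))
    }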
This is similar to the function of the same name in Python, generating a
sequence of numbers as a list that can then be used in other
sequence-oriented operations.
The primary use-case for it is to turn a count expressed as a number into
a list of that length, which can then be iterated over or passed to a
collection function to produce that number of something else, as shown
in the example at the end of its documentation page.
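For example (var.node_count is a hypothetical input variable):

    range(3)  # [0, 1, 2]
    formatlist("node-%d", range(var.node_count))
    # with node_count = 3: ["node-0", "node-1", "node-2"]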
Added higher-level test for matchkeys to exercise mixing
types in searchset. This had to be in the functions tests so the HCL
auto conversion from tuple to list would occur.
`matchkeys` was returning a (false) error if the searchset was a
variable, since then the type of the keylist and searchset parameters
would not match.
This does slightly change the behavior: previously matchkeys would
produce an error if the parameters were not of the same type, e.g.
if searchset was a list of strings and keylist was a list of integers.
This no longer produces an error.
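For example:

    # arguments are (values, keys, searchset); returns the values whose
    # corresponding key appears in searchset
    matchkeys(["i-123", "i-abc"], ["us-west", "us-east"], ["us-west"])
    # returns ["i-123"]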
If a dynamic block is evaluated zero times, the body content will
contain 0 blocks. Allow the probe for ConfigModeAttr to accept that a
body containing no blocks with a matching attribute should still be
converted to a block when called with dynamicExpand.
Previously the type-selection codepath for an input tuple referred
unconditionally to the start and end index values. In the Type callback,
only the types of the arguments are guaranteed to be known, so any access
to the values must be guarded with an .IsKnown check, or else the function
will not short-circuit properly when an unknown value is passed.
Now we will check the start and end indices are in range when we have
enough information to do so, but we'll return an approximate result if
either is unknown.
FlattenFunc can return lists and tuples when individual elements are
unknown. Only return an unknown tuple if the number of elements cannot
be determined because it contains an unknown series.
Make sure flatten can handle non-series elements, which were previously
lost due to passing a slice value as the argument.
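For example:

    flatten([["a", "b"], [], ["c"]])  # ["a", "b", "c"]
    flatten([1, [2, [3]]])            # non-series elements are kept: [1, 2, 3]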
When slicing a list containing unknown values, the list itself is known,
and slice should return the proper slice of that list.
Make SliceFunc return the correct type for slices of tuples, and
disallow slicing sets.
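For example:

    slice(["a", "b", "c", "d"], 1, 3)  # ["b", "c"]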
cty now guarantees that sets of primitive values will iterate in a
reasonable order. Previously it was the caller's responsibility to deal
with that, but we invariably neglected to do so, causing inconsistent
ordering. Since cty prioritizes consistent behavior over performance, it
now imposes its own sort on set elements as part of iterating over them so
that calling applications don't have to worry so much about it.
This change also causes cty to consistently push unknown and null values
in sets to the end of iteration, where before that was undefined. This
means that our diff output will now consistently list additions before
removals when showing sets, rather than the ordering being undefined as
before.
The ordering of known, non-null, non-primitive values is still not
contractually fixed but remains consistent for a particular version of
cty.
* lang/funcs: testing of functions through the lang package API
The function-specific unit tests do not cover the HCL conversion that happens when the functions are called in a Terraform configuration. For example, HCL converts sets to lists before passing them to the function. This means that we could not test passing a set in the function _unit_ tests.
This adds a higher-level acceptance test, plus a check that every (pure) function has a test.
* website/docs: update function documentation
* funcs/coalesce: return the first non-null, non-empty element from a
sequence.
The go-cty coalesce function, which was originally used here, returns the
first non-null element from a sequence. Terraform 0.11's coalesce,
however, returns the first non-empty string from a list of strings.
This new coalesce function aims to preserve Terraform's documented
functionality while adding support for additional argument types. The
tests include those in go-cty and adapted tests from the 0.11 version of
coalesce.
* website/docs: update coalesce function document
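For example, with the new coalesce:

    coalesce("", "b")        # "b" (empty strings are skipped, as in 0.11)
    coalesce(null, null, 1)  # 1 (nulls are skipped, as in go-cty)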
When a top-level list-of-object contains an attribute that is also
list-of-object we need to do the fixup again inside the nested body (using
our synthetic attributes-only schema) so that the attr-as-blocks mechanism
can apply within the nested blocks too.
Previously it was calling directly to hcldec.Variables, and thus missing
the special fixups we do inside ReferencesInBlock to deal with
Terraform-specific concerns such as our attribute-as-blocks preprocessing.
In order to preserve pre-v0.12 idiom for list-of-object attributes, we'll
prefer to use block syntax for them except for the special situation where
the user explicitly assigns an empty list, where attribute syntax is
required in order to allow existing provider logic to differentiate from
an implicit lack of blocks.
For compatibility with documented patterns from existing providers we are
now allowing (via a pre-processing step) any attribute whose type is a
list-of-object or set-of-object type to optionally be assigned using one
or more blocks whose type is the attribute name.
The pre-processing functionality was implemented in previous commits but
we were not correctly detecting references within these blocks that are,
from the perspective of the primary schema, invalid. Now we'll use an
alternative implementation of variable detection that is able to apply the
same schema rewriting technique we used to implement the transform and
thus can find all of the references as if they were already in their
final locations.
Because we handle FixUpBlockAttrs after dynamic block expansion, when
resolving variables we unfortunately need to consider the possibility of
both dynamic block expansion _and_ the block attrs fixup.
To accommodate this we have a variant of dynblock.VariablesHCLDec that
instead walks using the configschema.Block representation of the schema
and applies the same opportunistic schema rewrite used by FixUpBlockAttrs
at each body encountered during the walk.
For any block content we evaluate dynamically via this API, we'll make a
special allowance for users to optionally write members of a list
attribute instead as a sequence of nested blocks, thus allowing some
existing provider features that were assuming this capability to continue
to support it after v0.12.
This should not be used for any new provider features, and should ideally
be eventually phased out so that there aren't two
similar-but-slightly-different syntaxes for saying the same thing.
This preprocessing step allows users to use nested block syntax to specify
elements of an attribute that is defined as being a list or set of an
object type.
This restores part of the unintended flexibility permitted in Terraform
v0.11 so that we can work around a few tricky edges where provider
implementations were relying on Terraform's failure to validate this in
earlier versions.
For any body that is pre-processed using this new helper, we will
recognize when a configuration author uses nested block syntax with a
name that is specified in the schema as an attribute of a suitable type
and tweak the schema just in time before decoding to expect that usage
and then fix up the result on the way out to conform to the original
schema.
Achieving this requires an abstraction inversion because only Terraform's
high-level schema has enough information to decide how to rewrite the
incoming low-level schema. We must therefore here implement HCL's
lowest-level API interface in terms of the higher-level abstractions of
hcldec and Terraform's configschema.
Because of the abstraction inversion this fixup mechanism cannot be used
generally for arbitrary HCL bodies but we can use it carefully inside the
lang package where its own API can guarantee the necessary invariants for
this to work.
The templatefile function only has two arguments, so ArgErrorf can be
called with only zero or one as the argument index. If we are out of
bounds then HCL itself will panic trying to build the error message for
this call when called as an HCL function.
Unfortunately there isn't really a great layer in Terraform to test for
this class of bug systematically, because we are currently testing these
functions directly rather than going through HCL to do it. For the moment
we'll just live with that, but if we see this class of error arise again
we might consider either reworking the tests in this package to work with
HCL expression source code instead of direct calls or adding some
additional tests elsewhere that do so.
The hcldec package has no awareness of the dynamic block extension, so the
hcldec.Variables function misses any variables declared inside dynamic
blocks.
dynblock.VariablesHCLDec is a drop-in replacement for hcldec.Variables
that _is_ aware of dynamic blocks, returning all of the same variables
that hcldec would find naturally plus also any variables used inside
the dynamic block "for_each" and "labels" arguments and inside the
nested "content" block.
This includes improved functionality for HCL's "dynamic block extension",
which will allow us (in a subsequent commit) to properly detect
dependencies inside nested "dynamic" blocks, where currently they get
missed.
For this commit though, we just upgrade HCL to a version that includes it
and make a small change to our "lang" package to align with an upstream
renaming.
In prior versions, we recommended using hash functions in conjunction with
the file function as an idiom for detecting changes to upstream blobs
without fetching and comparing the whole blob.
That approach relied on us being able to return raw binary data from
file(...). Since Terraform strings pass through intermediate
representations that are not binary-safe (e.g. the JSON state), there was
a risk of string corruption in prior versions which we have avoided for
0.12 by requiring that file(...) be used only with UTF-8 text files.
The specific case of returning a string and immediately passing it into
another function was not actually subject to that corruption risk, since
the HIL interpreter would just pass the string through verbatim, but this
is still now forbidden as a result of the stricter handling of file(...).
To avoid breaking these use-cases, here we introduce variants of the hash
functions with a "file" prefix that take a filename for a disk file to
hash rather than hashing the given string directly. The configuration
upgrade tool also now includes a rule to detect the documented idiom and
rewrite it into a single function call for one of these new functions.
This does cause a bit of function sprawl, but that seems preferable to
introducing more complex rules for when file(...) can and cannot read
binary files, making the behavior of these various functions easier to
understand in isolation.
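For example, the previously-documented idiom and its single-call
replacement (the filename here is hypothetical):

    # pre-0.12 idiom, now forbidden because file() requires UTF-8 text:
    #   base64sha256(file("${path.module}/payload.zip"))
    # replacement:
    filebase64sha256("${path.module}/payload.zip")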
These all follow the pattern of creating a hash and converting it to a
string using some encoding function, so we can write this implementation
only once and parameterize it with a hash factory function and an encoding
function.
This also includes a new test for the sha512 function, which was
previously missing a test and, it turns out, actually computing sha256
instead.
It's not normally necessary to make explicit type conversions in Terraform
because the language implicitly converts as necessary, but explicit
conversions are useful in a few specialized cases:
- When defining output values for a reusable module, it may be desirable
to force a "cleaner" output type than would naturally arise from a
computation, such as forcing a string containing digits into a number.
- Our 0.12upgrade mechanism will use some of these to replace use of the
undocumented, hidden type conversion functions in HIL, and force
particular type interpretations in some tricky cases.
- We've found that type conversion functions can be useful as _temporary_
workarounds for bugs in Terraform and in providers where implicit type
conversion isn't working correctly or a type constraint isn't specified
precisely enough for the automatic conversion behavior.
These all follow the same convention of being named "to" followed by a
short type name. Since we've had a long-standing convention of running all
the words together in lowercase in function names, we stick to that here
even though some of these names are quite strange, because these should
be rarely-used functions anyway.
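For example:

    tonumber("12")      # 12
    tostring(3)         # "3"
    tolist(["a", "b"])  # converts a tuple value to a list type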
The sethaselement, setintersection, and setunion functions are defined in
the cty stdlib. Making them available in Terraform will make it easier to
work with sets, and complement the currently-Terraform-specific setproduct
function.
In the long run setproduct should probably move into the cty stdlib too,
but since it was submitted as a Terraform function originally we'll leave
it here now for simplicity's sake and reorganize later.
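For example:

    setintersection(["a", "b"], ["b", "c"])  # ["b"]
    setunion(["a"], ["b"], ["c"])            # ["a", "b", "c"]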
In our new world it produces either a set of a tuple type or a list of a
tuple type, depending on the given argument types.
The resulting collection's element tuple type is decided by the element
types of the given collections, allowing type information to propagate
even if unknown values are present.
We missed this one on a previous pass of bringing in most of the cty
stdlib functions.
This will resolve #17625 by allowing conversion from Terraform's
conventional RFC 3339 timestamps into various other formats.
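For example:

    formatdate("DD MMM YYYY hh:mm ZZZ", "2018-01-02T23:12:01Z")
    # returns "02 Jan 2018 23:12 UTC"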
This function is similar to the template_file data source offered by the
template provider, but having it built in to the language makes it more
convenient to use, allowing templates to be rendered from files anywhere
an inline template would normally be allowed:
    user_data = templatefile("${path.module}/userdata.tmpl", {
      hostname = format("petserver%02d", count.index)
    })
Unlike the template_file data source, this function allows values of any
type in its variables map, passing them through verbatim to the template.
Its tighter integration with Terraform also allows it to return better
error messages with source location information from the template itself.
The template_file data source was originally created to work around the
fact that HIL didn't have any support for map values at the time, and
even once map support was added it wasn't very usable. With HCL2
expressions, there's little reason left to use a data source to render
a template; the only remaining reason to use template_file is to
render a template that is constructed dynamically during the Terraform
run, which is a very rare need.
Both depends_on and ignore_changes contain references to objects that we
can validate.
Historically Terraform has not validated these, instead just ignoring
references to non-existent objects. Since there is no reason to refer to
something that doesn't exist, we'll now verify this and return errors so
that users get explicit feedback on any typos they may have made, rather
than just wondering why what they added seems to have no effect.
This is particularly important for ignore_changes because users have
historically used strange values here to try to exploit the fact that
Terraform was resolving ignore_changes against a flatmap. This will give
them explicit feedback for any odd constructs that the configuration
upgrade tool doesn't know how to detect and fix.
This actually seems to be a bug in the underlying cty Convert function
since converting to cty.DynamicPseudoType should always just return the
input verbatim, but it seems like it's actually converting unknown values
of any type to be cty.DynamicVal, losing the type information.
We should eventually fix this in cty too, but having this extra check in
the Terraform layer is harmless and allows us to make progress without
context-switching.
Now that our language supports tuple/object types in addition to list/map
types, it's convenient for zipmap to be able to produce an object type
given a tuple, since this makes it symmetrical with "keys" and "values"
such that the following identity holds for any map or object value "a":
    a == zipmap(keys(a), values(a))
When the values sequence is a tuple, the result has an object type whose
attribute types correspond to the given tuple types.
Since an object type has attribute names as part of its definition, there
is the additional constraint here that the result has an unknown type
(represented by the dynamic pseudo-type) if the given values is a tuple
and the given keys contains any unknown values. This isn't true for values
as a list because we can predict the resulting map element type using
just the list element type.
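For example, when the values are given as a tuple:

    zipmap(["a", "b"], [1, "hello"])
    # produces an object value: { a = 1, b = "hello" }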
In the initial move to HCL2 we started relying only on full expression
evaluation to catch attribute errors, but that's not sufficient for
resource attributes in practice because during validation we can't know
yet whether a resource reference evaluates to a single object or to a
list of objects (if count is set).
To address this, here we reinstate some static validation of resource
references by analyzing directly the reference objects, disregarding any
instance index if present, and produce errors if the remaining subsequent
traversal steps do not correspond to items within the resource type
schema.
This also allows us to produce some more specialized error messages for
certain situations. In particular, we can recognize a reference like
aws_instance.foo.count, which in 0.11 and prior was a weird special case
for determining the count value of a resource block, and offer a helpful
error showing the new length(aws_instance.foo) usage pattern.
This eventually delegates to the static traversal validation logic that
was added to the configschema package in a previous commit, which also
includes some specialized error messages that distinguish between
attributes and block types in the schema so that the errors relate more
directly to constructs the user can see in the configuration.
In future we could potentially move more of the checks from the dynamic
schema construction step to the static validation step, but resources
are the reference type that most needs this immediately due to the
ambiguity caused by the instance indexing syntax. We can safely refactor
other reference types to be statically validated in later releases.
This is verified by two pre-existing context validate tests which we
temporarily disabled during earlier work (now re-enabled) and also by a
new validate test aimed specifically at the special case for the "count"
attribute.
The "values" function wasn't producing consistently-ordered keys in its
result, leading to crashes. This fixes #19204.
While working on these functions anyway, this also slightly improves their
precision when working with object types, where we can produce a more
complete result for unknown values because the attribute names are part
of the type. We can also produce results for known maps that have unknown
elements; these unknowns will also appear in the values(...) result,
allowing them to propagate through expressions.
Finally, this adds a few more test cases to try different permutations
of empty and unknown values.
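For example:

    values({ b = 2, a = 1 })  # [1, 2], ordered by the keys "a", "b"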
When the value we're looking in has an object type, we need to know the
key in order to decide the result type. Therefore an object lookup with
an unknown key must produce cty.DynamicVal, not an unknown value with a
known type.