The function would previously panic when one or more null values were
among the arguments. The new behavior treats nulls as empty strings and
therefore removes them.
These follow the same principle as jsondecode and jsonencode, but use
YAML instead of JSON.
YAML has a much more complex information model than JSON, so we can only
support a subset of it during decoding, but hopefully the subset supported
here is a useful one.
Because there are many different ways to _generate_ YAML, the yamlencode
function is forced to make some decisions, and those decisions are likely
to affect compatibility with other real-world YAML parsers. Although the
format here is intended to be generic and compatible, we may find that
there are problems with it that we'll want to adjust for in a future
release, so yamlencode is marked as experimental for now, until
the underlying library is ready to commit to ongoing byte-for-byte
compatibility in serialization.
The main use-case here is met by yamldecode, which will allow reading in
files written in YAML format by humans for use in Terraform modules, in
situations where a higher-level input format than direct Terraform
language declarations is helpful.
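For example (a minimal sketch; the file name here is illustrative), a
module might read human-edited structured input like this:

    locals {
      # Decode a YAML settings file into a Terraform value.
      settings = yamldecode(file("${path.module}/settings.yaml"))
    }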
This is similar to the function of the same name in Python, generating a
sequence of numbers as a list that can then be used in other
sequence-oriented operations.
The primary use-case for it is to turn a count expressed as a number into
a list of that length, which can then be iterated over or passed to a
collection function to produce that number of something else, as shown
in the example at the end of its documentation page.
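As a sketch of that idiom (the names here are illustrative):

    locals {
      # range(3) produces [0, 1, 2], which formatlist then expands
      # into one generated name per index.
      server_names = formatlist("server%02d", range(3))
    }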
Added a higher-level test for matchkeys to exercise mixing types in the
searchset. This had to be in the functions tests so that HCL's automatic
conversion from tuple to list would occur.
`matchkeys` was returning a spurious error if the searchset was a
variable, since then the types of the keylist and searchset parameters
would not match.
This does slightly change the behavior: previously matchkeys would
produce an error if the parameters were not of the same type, e.g. if
searchset was a list of strings and keylist was a list of integers. This
no longer produces an error.
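To illustrate, matchkeys returns the values whose corresponding key
appears in the searchset:

    # => ["b", "c"]
    matchkeys(["a", "b", "c"], [1, 2, 3], [2, 3])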
If a dynamic block is evaluated zero times, the body content will
contain no blocks at all. The probe for ConfigModeAttr must therefore
accept zero matching blocks, so that an attribute with no blocks is
still converted to block form when evaluated via dynamicExpand.
Previously the type-selection codepath for an input tuple referred
unconditionally to the start and end index values. In the Type callback,
only the types of the arguments are guaranteed to be known, so any access
to the values must be guarded with an .IsKnown check, or else the function
will not short-circuit properly when an unknown value is passed.
Now we check that the start and end indices are in range when we have
enough information to do so, but we'll return an approximate result if
either is unknown.
FlattenFunc can return lists and tuples when individual elements are
unknown. Only return an unknown tuple if the number of elements cannot
be determined because it contains an unknown series.
Make sure flatten can handle non-series elements, which were previously
lost due to passing a slice value as the argument.
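A sketch of the behavior described above: nested lists are replaced by
their elements, while non-series elements pass through unchanged:

    # => ["a", "b", "c", "d"]
    flatten([["a", "b"], [], "c", ["d"]])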
When slicing a list containing unknown values, the list itself is known,
and slice should return the proper slice of that list.
Make SliceFunc return the correct type for slices of tuples, and
disallow slicing sets.
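For reference, the start index is inclusive and the end index is
exclusive:

    # => ["b", "c"]
    slice(["a", "b", "c", "d"], 1, 3)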
cty now guarantees that sets of primitive values will iterate in a
reasonable order. Previously it was the caller's responsibility to deal
with that, but we invariably neglected to do so, causing inconsistent
ordering. Since cty prioritizes consistent behavior over performance, it
now imposes its own sort on set elements as part of iterating over them so
that calling applications don't have to worry so much about it.
This change also causes cty to consistently push unknown and null values
in sets to the end of iteration, where before that was undefined. This
means that our diff output will now consistently list additions before
removals when showing sets, rather than the ordering being undefined as
before.
The ordering of known, non-null, non-primitive values is still not
contractually fixed but remains consistent for a particular version of
cty.
* lang/funcs: testing of functions through the lang package API
The function-specific unit tests do not cover the HCL conversion that
happens when the functions are called in a Terraform configuration. For
example, HCL converts sets to lists before passing them to a function.
This means that we could not test passing a set in the function _unit_
tests.
This adds a higher-level acceptance test, plus a check that every (pure) function has a test.
* website/docs: update function documentation
* funcs/coalesce: return the first non-null, non-empty element from a
sequence.
The go-cty coalesce function, which was originally used here, returns the
first non-null element from a sequence. Terraform 0.11's coalesce,
however, returns the first non-empty string from a list of strings.
This new coalesce function aims to preserve Terraform's documented
functionality while adding support for additional argument types. The
tests include those in go-cty and adapted tests from the 0.11 version of
coalesce.
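A sketch of the combined behavior, returning the first argument that is
neither null nor an empty string:

    # => "b"
    coalesce("", null, "b")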
* website/docs: update coalesce function document
When a top-level list-of-object contains an attribute that is also
list-of-object we need to do the fixup again inside the nested body (using
our synthetic attributes-only schema) so that the attr-as-blocks mechanism
can apply within the nested blocks too.
Previously it was calling directly to hcldec.Variables, and thus missing
the special fixups we do inside ReferencesInBlock to deal with
Terraform-specific concerns such as our attribute-as-blocks preprocessing.
In order to preserve pre-v0.12 idiom for list-of-object attributes, we'll
prefer to use block syntax for them except for the special situation where
the user explicitly assigns an empty list, where attribute syntax is
required in order to allow existing provider logic to differentiate from
an implicit lack of blocks.
For compatibility with documented patterns from existing providers we are
now allowing (via a pre-processing step) any attribute whose type is a
list-of-object or set-of-object type to optionally be assigned using one
or more blocks whose type is the attribute name.
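As a sketch (the attribute name "rule" is hypothetical), both of the
following forms now decode to the same list-of-object value:

    # Block syntax, accepted via the pre-processing step:
    rule {
      port = 80
    }
    rule {
      port = 443
    }

    # Equivalent attribute syntax, matching the schema directly:
    rule = [
      { port = 80 },
      { port = 443 },
    ]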
The pre-processing functionality was implemented in previous commits but
we were not correctly detecting references within these blocks that are,
from the perspective of the primary schema, invalid. Now we'll use an
alternative implementation of variable detection that is able to apply the
same schema rewriting technique we used to implement the transform and
thus can find all of the references as if they were already in their
final locations.
Because we handle FixUpBlockAttrs after dynamic block expansion, when
resolving variables we unfortunately need to consider the possibility of
both dynamic block expansion _and_ the block attrs fixup.
To accommodate this we have a variant of dynblock.VariablesHCLDec that
instead walks using the configschema.Block representation of the schema
and applies the same opportunistic schema rewrite used by FixUpBlockAttrs
at each body encountered during the walk.
For any block content we evaluate dynamically via this API, we'll make a
special allowance for users to optionally write members of a list
attribute instead as a sequence of nested blocks, thus allowing some
existing provider features that were assuming this capability to continue
to support it after v0.12.
This should not be used for any new provider features, and should ideally
be eventually phased out so that there aren't two
similar-but-slightly-different syntaxes for saying the same thing.
This preprocessing step allows users to use nested block syntax to specify
elements of an attribute that is defined as being a list or set of an
object type.
This restores part of the unintended flexibility permitted in Terraform
v0.11 so that we can work around a few tricky edges where provider
implementations were relying on Terraform's failure to validate this in
earlier versions.
For any body that is pre-processed using this new helper, we will
recognize when a configuration author uses nested block syntax with a
name that the schema specifies as an attribute of a suitable type. In
that case we tweak the schema just in time before decoding to expect
that usage, and then fix up the result on the way out to conform to the
original schema.
Achieving this requires an abstraction inversion because only Terraform's
high-level schema has enough information to decide how to rewrite the
incoming low-level schema. We must therefore here implement HCL's
lowest-level API interface in terms of the higher-level abstractions of
hcldec and Terraform's configschema.
Because of the abstraction inversion this fixup mechanism cannot be used
generally for arbitrary HCL bodies but we can use it carefully inside the
lang package where its own API can guarantee the necessary invariants for
this to work.
The templatefile function only has two arguments, so ArgErrorf can be
called with only zero or one as the argument index. If we are out of
bounds then HCL itself will panic trying to build the error message
when the function is called through HCL.
Unfortunately there isn't really a great layer in Terraform to test for
this class of bug systematically, because we are currently testing these
functions directly rather than going through HCL to do it. For the moment
we'll just live with that, but if we see this class of error arise again
we might consider either reworking the tests in this package to work with
HCL expression source code instead of direct calls or adding some
additional tests elsewhere that do so.
The hcldec package has no awareness of the dynamic block extension, so the
hcldec.Variables function misses any variables declared inside dynamic
blocks.
dynblock.VariablesHCLDec is a drop-in replacement for hcldec.Variables
that _is_ aware of dynamic blocks, returning all of the same variables
that hcldec would find naturally plus also any variables used inside
the dynamic block "for_each" and "labels" arguments and inside the
nested "content" block.
This includes improved functionality for HCL's "dynamic block extension",
which will allow us (in a subsequent commit) to properly detect
dependencies inside nested "dynamic" blocks, where currently they get
missed.
For this commit though, we just upgrade HCL to a version that includes it
and make a small change to our "lang" package to align with an upstream
renaming.
In prior versions, we recommended using hash functions in conjunction with
the file function as an idiom for detecting changes to upstream blobs
without fetching and comparing the whole blob.
That approach relied on us being able to return raw binary data from
file(...). Since Terraform strings pass through intermediate
representations that are not binary-safe (e.g. the JSON state), there was
a risk of string corruption in prior versions which we have avoided for
0.12 by requiring that file(...) be used only with UTF-8 text files.
The specific case of returning a string and immediately passing it into
another function was not actually subject to that corruption risk, since
the HIL interpreter would just pass the string through verbatim, but this
is still now forbidden as a result of the stricter handling of file(...).
To avoid breaking these use-cases, here we introduce variants of the
hash functions with a "file" prefix that take the name of a file on disk
to hash, rather than hashing the given string directly. The configuration
upgrade tool also now includes a rule to detect the documented idiom and
rewrite it into a single function call for one of these new functions.
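For example, using filemd5 (the other hash functions follow the same
pattern; the attribute name here is illustrative):

    # Prior idiom, now disallowed because file(...) requires UTF-8 text:
    #   etag = "${md5(file("${path.module}/payload.bin"))}"
    # New equivalent, rewritten automatically by the upgrade tool:
    etag = filemd5("${path.module}/payload.bin")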
This does cause a bit of function sprawl, but that seems preferable to
introducing more complex rules for when file(...) can and cannot read
binary files, making the behavior of these various functions easier to
understand in isolation.
These all follow the pattern of creating a hash and converting it to a
string using some encoding function, so we can write this implementation
only once and parameterize it with a hash factory function and an encoding
function.
This also includes a new test for the sha512 function, which was
previously missing a test and, it turns out, actually computing sha256
instead.
It's not normally necessary to make explicit type conversions in Terraform
because the language implicitly converts as necessary, but explicit
conversions are useful in a few specialized cases:
- When defining output values for a reusable module, it may be desirable
to force a "cleaner" output type than would naturally arise from a
computation, such as forcing a string containing digits into a number.
- Our 0.12upgrade mechanism will use some of these to replace use of the
undocumented, hidden type conversion functions in HIL, and force
particular type interpretations in some tricky cases.
- We've found that type conversion functions can be useful as _temporary_
workarounds for bugs in Terraform and in providers where implicit type
conversion isn't working correctly or a type constraint isn't specified
precisely enough for the automatic conversion behavior.
These all follow the same convention of being named "to" followed by a
short type name. Since we've had a long-standing convention of running all
the words together in lowercase in function names, we stick to that here
even though some of these names are quite strange, because these should
be rarely-used functions anyway.
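A sketch of one such conversion (the output name is illustrative):

    output "instance_count" {
      # Force a string of digits into a number for a cleaner output type.
      value = tonumber("3")
    }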
The sethaselement, setintersection, and setunion functions are defined in
the cty stdlib. Making them available in Terraform will make it easier to
work with sets, and complement the currently-Terraform-specific setproduct
function.
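For example:

    # => a set containing "a", "b", and "c"
    setunion(["a", "b"], ["b", "c"])

    # => a set containing only "b"
    setintersection(["a", "b"], ["b", "c"])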
In the long run setproduct should probably move into the cty stdlib too,
but since it was submitted as a Terraform function originally we'll leave
it here now for simplicity's sake and reorganize later.
In our new world it produces either a set of a tuple type or a list of a
tuple type, depending on the given argument types.
The resulting collection's element tuple type is decided by the element
types of the given collections, allowing type information to propagate
even if unknown values are present.
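For example:

    # => [["a", "x"], ["b", "x"]]
    setproduct(["a", "b"], ["x"])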
We missed this one on a previous pass of bringing in most of the cty
stdlib functions.
This will resolve #17625 by allowing conversion from Terraform's
conventional RFC 3339 timestamps into various other formats.
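For example, converting an RFC 3339 timestamp into a more
human-oriented layout:

    # => "02 Jan 2018 23:12 UTC"
    formatdate("DD MMM YYYY hh:mm ZZZ", "2018-01-02T23:12:01Z")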
This function is similar to the template_file data source offered by the
template provider, but having it built in to the language makes it more
convenient to use, allowing templates to be rendered from files anywhere
an inline template would normally be allowed:
    user_data = templatefile("${path.module}/userdata.tmpl", {
      hostname = format("petserver%02d", count.index)
    })
Unlike the template_file data source, this function allows values of any
type in its variables map, passing them through verbatim to the template.
Its tighter integration with Terraform also allows it to return better
error messages with source location information from the template itself.
The template_file data source was originally created to work around the
fact that HIL didn't have any support for map values at the time, and
even once map support was added it wasn't very usable. With HCL2
expressions, there's little reason left to use a data source to render
a template; the only remaining case for template_file is rendering a
template that is constructed dynamically during the Terraform run, which
is a very rare need.
Both depends_on and ignore_changes contain references to objects that we
can validate.
Historically Terraform has not validated these, instead just ignoring
references to non-existent objects. Since there is no reason to refer to
something that doesn't exist, we'll now verify this and return errors so
that users get explicit feedback on any typos they may have made, rather
than just wondering why what they added seems to have no effect.
This is particularly important for ignore_changes because users have
historically used strange values here to try to exploit the fact that
Terraform was resolving ignore_changes against a flatmap. This will give
them explicit feedback for any odd constructs that the configuration
upgrade tool doesn't know how to detect and fix.
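For example (resource and attribute names here are illustrative), both
of the references below are now checked, so a typo such as "tagz" in
ignore_changes produces an explicit error:

    depends_on = [aws_instance.example]   # must refer to a real resource

    lifecycle {
      ignore_changes = [tags]             # must match a real attribute
    }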
This actually seems to be a bug in the underlying cty Convert function
since converting to cty.DynamicPseudoType should always just return the
input verbatim, but it seems like it's actually converting unknown values
of any type to be cty.DynamicVal, losing the type information.
We should eventually fix this in cty too, but having this extra check in
the Terraform layer is harmless and allows us to make progress without
context-switching.
Now that our language supports tuple/object types in addition to list/map
types, it's convenient for zipmap to be able to produce an object type
given a tuple, since this makes it symmetrical with "keys" and "values"
such that the following identity holds for any map or object value "a":

    a == zipmap(keys(a), values(a))
When the values sequence is a tuple, the result has an object type whose
attribute types correspond to the given tuple types.
Since an object type has attribute names as part of its definition, there
is the additional constraint here that the result has an unknown type
(represented by the dynamic pseudo-type) if the given values is a tuple
and the given keys contains any unknown values. This isn't true for values
as a list because we can predict the resulting map element type using
just the list element type.
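A sketch of the new tuple behavior:

    # The tuple of values yields an object type: { a = 1, b = "two" }
    zipmap(["a", "b"], [1, "two"])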
In the initial move to HCL2 we started relying only on full expression
evaluation to catch attribute errors, but that's not sufficient for
resource attributes in practice because during validation we can't know
yet whether a resource reference evaluates to a single object or to a
list of objects (if count is set).
To address this, here we reinstate some static validation of resource
references by analyzing directly the reference objects, disregarding any
instance index if present, and produce errors if the remaining subsequent
traversal steps do not correspond to items within the resource type
schema.
This also allows us to produce some more specialized error messages for
certain situations. In particular, we can recognize a reference like
aws_instance.foo.count, which in 0.11 and prior was a weird special case
for determining the count value of a resource block, and offer a helpful
error showing the new length(aws_instance.foo) usage pattern.
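The error message can then suggest the replacement pattern, e.g. (the
attribute name here is illustrative):

    # 0.11 and prior, via the special case (now an error):
    #   instance_count = "${aws_instance.foo.count}"
    # 0.12 equivalent:
    instance_count = length(aws_instance.foo)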
This eventually delegates to the static traversal validation logic that
was added to the configschema package in a previous commit, which also
includes some specialized error messages that distinguish between
attributes and block types in the schema so that the errors relate more
directly to constructs the user can see in the configuration.
In future we could potentially move more of the checks from the dynamic
schema construction step to the static validation step, but resources
are the reference type that most needs this immediately due to the
ambiguity caused by the instance indexing syntax. We can safely refactor
other reference types to be statically validated in later releases.
This is verified by two pre-existing context validate tests which we
temporarily disabled during earlier work (now re-enabled) and also by a
new validate test aimed specifically at the special case for the "count"
attribute.