These existing upstream cty functions allow matching strings against
regular expression patterns, which can be useful if you need to consume
a non-standard string format that Terraform doesn't (and can't) have a
built-in function for.
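A minimal illustration (pattern and input string here are hypothetical):

    output "numeric_id" {
      # Produces "123": with no capture groups, regex returns the
      # whole first match as a string.
      value = regex("[0-9]+", "abc123def")
    }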
* lang/funcs: lookup() can work with maps of lists, maps and objects
lookup() can already handle arbitrary objects of (whatever) and should
handle maps of (whatever) similarly.
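A sketch of the broadened behavior, with hypothetical values:

    locals {
      port_lists = {
        web = [80, 443]
        db  = [5432]
      }
      # Produces [80, 443]; the map's elements are lists rather than
      # primitive values.
      web_ports = lookup(local.port_lists, "web", [])
    }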
Mistakenly using dynamic on an attribute will lead to a panic when
attempting to resolve variable references with a partial body, because
the dynamic blocks have yet to be expanded and validated. Check that the
block element type is actually an object before generating a schema.
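A sketch of the kind of mistake that triggered the panic, using a
hypothetical resource type whose "tags" name is an attribute rather
than a block type:

    resource "example_thing" "example" {
      # "tags" is an attribute in the schema, so dynamic is invalid
      # here; this must produce an error rather than a panic.
      dynamic "tags" {
        for_each = var.tags
        content {
          key   = tags.key
          value = tags.value
        }
      }
    }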
The function would previously panic when one or more null values were among the arguments.
The new behavior treats nulls as empty strings and therefore removes them.
These follow the same principle as jsondecode and jsonencode, but use
YAML instead of JSON.
YAML has a much more complex information model than JSON, so we can only
support a subset of it during decoding, but hopefully the subset supported
here is a useful one.
Because there are many different ways to _generate_ YAML, the yamlencode
function is forced to make some decisions, and those decisions are likely
to affect compatibility with other real-world YAML parsers. Although the
format here is intended to be generic and compatible, we may find that
there are problems with it that we'll want to adjust for in a future
release, so yamlencode is marked as experimental for now until
the underlying library is ready to commit to ongoing byte-for-byte
compatibility in serialization.
The main use-case here is met by yamldecode, which will allow reading in
files written in YAML format by humans for use in Terraform modules, in
situations where a higher-level input format than direct Terraform
language declarations is helpful.
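A minimal sketch of both functions; config.yaml is a hypothetical
input file:

    locals {
      # Read a human-authored YAML file into a Terraform value.
      settings = yamldecode(file("${path.module}/config.yaml"))

      # Encode a value as a YAML string; the exact serialization may
      # change while yamlencode remains experimental.
      doc = yamlencode({
        name  = "example"
        ports = [80, 443]
      })
    }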
This is similar to the function of the same name in Python, generating a
sequence of numbers as a list that can then be used in other
sequence-oriented operations.
The primary use-case for it is to turn a count expressed as a number into
a list of that length, which can then be iterated over or passed to a
collection function to produce that number of something else, as shown
in the example at the end of its documentation page.
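A minimal sketch of that pattern, assuming a number variable
var.instance_count:

    # range(3) produces [0, 1, 2].
    locals {
      # One derived name per requested instance.
      instance_names = [for i in range(var.instance_count) : "server-${i}"]
    }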
Added a higher-level test for matchkeys to exercise mixing
types in searchset. This had to be in the functions tests so that
HCL's automatic conversion from tuple to list would occur.
`matchkeys` was returning a spurious error if the searchset was a
variable, since the types of the keylist and searchset parameters
would then not match.
This does slightly change the behavior: previously matchkeys would
produce an error if the parameters were not of the same type, for example
if searchset was a list of strings and keylist was a list of integers.
This no longer produces an error.
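An illustration of the relaxed behavior (values here are hypothetical):

    locals {
      names = ["web", "db", "cache"]
      ports = [80, 5432, 6379]
      # Produces ["web", "cache"]: each value whose corresponding key
      # appears in the searchset is returned; keylist and searchset
      # no longer need to have exactly the same type.
      matched = matchkeys(local.names, local.ports, [80, 6379])
    }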
If a dynamic block is evaluated zero times, the body content will
contain zero blocks. Allow the probe for ConfigModeAttr to accept this
case, so that an attribute with no matching blocks is still converted
to a block when expanded via dynamicExpand.
Previously the type-selection codepath for an input tuple referred
unconditionally to the start and end index values. In the Type callback,
only the types of the arguments are guaranteed to be known, so any access
to the values must be guarded with an .IsKnown check, or else the function
will not short-circuit properly when an unknown value is passed.
Now we will check the start and end indices are in range when we have
enough information to do so, but we'll return an approximate result if
either is unknown.
FlattenFunc can return lists and tuples when individual elements are
unknown. Only return an unknown tuple if the number of elements cannot
be determined because it contains an unknown series.
Make sure flatten can handle non-series elements, which were previously
lost due to passing a slice value as the argument.
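A minimal sketch of the expected behavior:

    locals {
      # Produces ["a", "b", "c"]: the empty list contributes nothing
      # and the non-series element "c" passes through unchanged.
      flat = flatten([["a", "b"], [], "c"])
    }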
When slicing a list containing unknown values, the list itself is known,
and slice should return the proper slice of that list.
Make SliceFunc return the correct type for slices of tuples, and
disallow slicing sets.
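A minimal sketch (the end index is exclusive):

    locals {
      letters = ["a", "b", "c", "d"]
      # Produces ["b", "c"], even if some elements are unknown until
      # apply time; only an unknown index makes the result unknown.
      middle = slice(local.letters, 1, 3)
    }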
cty now guarantees that sets of primitive values will iterate in a
reasonable order. Previously it was the caller's responsibility to deal
with that, but we invariably neglected to do so, causing inconsistent
ordering. Since cty prioritizes consistent behavior over performance, it
now imposes its own sort on set elements as part of iterating over them so
that calling applications don't have to worry so much about it.
This change also causes cty to consistently push unknown and null values
in sets to the end of iteration, where before that was undefined. This
means that our diff output will now consistently list additions before
removals when showing sets, rather than the ordering being undefined as
before.
The ordering of known, non-null, non-primitive values is still not
contractually fixed but remains consistent for a particular version of
cty.
* lang/funcs: testing of functions through the lang package API
The function-specific unit tests do not cover the HCL conversion that happens when the functions are called in a Terraform configuration. For example, HCL converts sets to lists before passing them to a function. This means that we could not test passing a set in the function _unit_ tests.
This adds a higher-level acceptance test, plus a check that every (pure) function has a test.
* website/docs: update function documentation
* funcs/coalesce: return the first non-null, non-empty element from a
sequence.
The go-cty coalesce function, which was originally used here, returns the
first non-null element from a sequence. Terraform 0.11's coalesce,
however, returns the first non-empty string from a list of strings.
This new coalesce function aims to preserve Terraform's documented
functionality while adding support for additional argument types. The
tests include those in go-cty and adapted tests from the 0.11 version of
coalesce.
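An illustration of the combined behavior:

    locals {
      # Produces "fallback": both nulls and empty strings are skipped.
      chosen = coalesce("", null, "fallback", "other")
    }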
* website/docs: update coalesce function document
When a top-level list-of-object contains an attribute that is also
list-of-object we need to do the fixup again inside the nested body (using
our synthetic attributes-only schema) so that the attr-as-blocks mechanism
can apply within the nested blocks too.
Previously it was calling directly to hcldec.Variables, and thus missing
the special fixups we do inside ReferencesInBlock to deal with
Terraform-specific concerns such as our attribute-as-blocks preprocessing.
In order to preserve pre-v0.12 idiom for list-of-object attributes, we'll
prefer to use block syntax for them except for the special situation where
the user explicitly assigns an empty list, where attribute syntax is
required in order to allow existing provider logic to differentiate from
an implicit lack of blocks.
For compatibility with documented patterns from existing providers we are
now allowing (via a pre-processing step) any attribute whose type is a
list-of-object or set-of-object type to optionally be assigned using one
or more blocks whose type is the attribute name.
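A sketch of the two equivalent spellings, using a hypothetical
resource type with a list-of-object attribute named "setting":

    # Attribute syntax; an explicit empty list requires this form.
    resource "example_thing" "a" {
      setting = [
        { name = "x", value = "1" },
      ]
    }

    # Block syntax, the preferred pre-v0.12 idiom.
    resource "example_thing" "b" {
      setting {
        name  = "x"
        value = "1"
      }
    }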
The pre-processing functionality was implemented in previous commits but
we were not correctly detecting references within these blocks that are,
from the perspective of the primary schema, invalid. Now we'll use an
alternative implementation of variable detection that is able to apply the
same schema rewriting technique we used to implement the transform and
thus can find all of the references as if they were already in their
final locations.
Because we handle FixUpBlockAttrs after dynamic block expansion, when
resolving variables we unfortunately need to consider the possibility of
both dynamic block expansion _and_ the block attrs fixup.
To accommodate this we have a variant of dynblock.VariablesHCLDec that
instead walks using the configschema.Block representation of the schema
and applies the same opportunistic schema rewrite used by FixUpBlockAttrs
at each body encountered during the walk.
For any block content we evaluate dynamically via this API, we'll make a
special allowance for users to optionally write members of a list
attribute instead as a sequence of nested blocks, thus allowing some
existing provider features that were assuming this capability to continue
to support it after v0.12.
This should not be used for any new provider features, and should ideally
be eventually phased out so that there aren't two
similar-but-slightly-different syntaxes for saying the same thing.
This preprocessing step allows users to use nested block syntax to specify
elements of an attribute that is defined as being a list or set of an
object type.
This restores part of the unintended flexibility permitted in Terraform
v0.11 so that we can work around a few tricky edges where provider
implementations were relying on Terraform's failure to validate this in
earlier versions.
For any body that is pre-processed using this new helper, we will
recognize when a configuration author uses nested block syntax with a
name that is specified in the schema as an attribute of a suitable type
and tweak the schema just in time before decoding to expect that usage
and then fix up the result on the way out to conform to the original
schema.
Achieving this requires an abstraction inversion because only Terraform's
high-level schema has enough information to decide how to rewrite the
incoming low-level schema. We must therefore here implement HCL's
lowest-level API interface in terms of the higher-level abstractions of
hcldec and Terraform's configschema.
Because of the abstraction inversion this fixup mechanism cannot be used
generally for arbitrary HCL bodies but we can use it carefully inside the
lang package where its own API can guarantee the necessary invariants for
this to work.
The templatefile function only has two arguments, so ArgErrorf can be
called with only zero or one as the argument index. If we are out of
bounds then HCL itself will panic trying to build the error message for
this call when called as an HCL function.
Unfortunately there isn't really a great layer in Terraform to test for
this class of bug systematically, because we are currently testing these
functions directly rather than going through HCL to do it. For the moment
we'll just live with that, but if we see this class of error arise again
we might consider either reworking the tests in this package to work with
HCL expression source code instead of direct calls or adding some
additional tests elsewhere that do so.
The hcldec package has no awareness of the dynamic block extension, so the
hcldec.Variables function misses any variables declared inside dynamic
blocks.
dynblock.VariablesHCLDec is a drop-in replacement for hcldec.Variables
that _is_ aware of dynamic blocks, returning all of the same variables
that hcldec would find naturally plus also any variables used inside
the dynamic block "for_each" and "labels" arguments and inside the
nested "content" block.
This includes improved functionality for HCL's "dynamic block extension",
which will allow us (in a subsequent commit) to properly detect
dependencies inside nested "dynamic" blocks, where currently they get
missed.
For this commit though, we just upgrade HCL to a version that includes it
and make a small change to our "lang" package to align with an upstream
renaming.
In prior versions, we recommended using hash functions in conjunction with
the file function as an idiom for detecting changes to upstream blobs
without fetching and comparing the whole blob.
That approach relied on us being able to return raw binary data from
file(...). Since Terraform strings pass through intermediate
representations that are not binary-safe (e.g. the JSON state), there was
a risk of string corruption in prior versions which we have avoided for
0.12 by requiring that file(...) be used only with UTF-8 text files.
The specific case of returning a string and immediately passing it into
another function was not actually subject to that corruption risk, since
the HIL interpreter would just pass the string through verbatim, but this
is still now forbidden as a result of the stricter handling of file(...).
To avoid breaking these use-cases, here we introduce variants of the hash
functions with a "file" prefix that take a filename for a disk file to
hash rather than hashing the given string directly. The configuration
upgrade tool also now includes a rule to detect the documented idiom and
rewrite it into a single function call for one of these new functions.
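For example, the upgrade tool rewrites the old idiom into the new
single-call form (the filename here is hypothetical):

    # Pre-0.12 idiom:
    #   base64sha256(file("${path.module}/payload.zip"))
    # 0.12 equivalent, which can hash binary files:
    locals {
      payload_hash = filebase64sha256("${path.module}/payload.zip")
    }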
This does cause a bit of function sprawl, but that seems preferable to
introducing more complex rules for when file(...) can and cannot read
binary files, making the behavior of these various functions easier to
understand in isolation.
These all follow the pattern of creating a hash and converting it to a
string using some encoding function, so we can write this implementation
only once and parameterize it with a hash factory function and an encoding
function.
This also includes a new test for the sha512 function, which was
previously missing a test and, it turns out, actually computing sha256
instead.
It's not normally necessary to make explicit type conversions in Terraform
because the language implicitly converts as necessary, but explicit
conversions are useful in a few specialized cases:
- When defining output values for a reusable module, it may be desirable
to force a "cleaner" output type than would naturally arise from a
computation, such as forcing a string containing digits into a number.
- Our 0.12upgrade mechanism will use some of these to replace use of the
undocumented, hidden type conversion functions in HIL, and force
particular type interpretations in some tricky cases.
- We've found that type conversion functions can be useful as _temporary_
workarounds for bugs in Terraform and in providers where implicit type
conversion isn't working correctly or a type constraint isn't specified
precisely enough for the automatic conversion behavior.
These all follow the same convention of being named "to" followed by a
short type name. Since we've had a long-standing convention of running all
the words together in lowercase in function names, we stick to that here
even though some of these names are quite strange, because these should
be rarely-used functions anyway.
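A brief sketch of the intended style of usage:

    output "instance_count" {
      # Forces a string containing digits into a number.
      value = tonumber("3")
    }

    locals {
      # Forces a set type where a list would naturally arise.
      unique_names = toset(["a", "b", "a"])
    }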
The sethaselement, setintersection, and setunion functions are defined in
the cty stdlib. Making them available in Terraform will make it easier to
work with sets, and complement the currently-Terraform-specific setproduct
function.
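A minimal sketch of the newly-exposed functions:

    locals {
      a = toset(["x", "y"])
      b = toset(["y", "z"])

      # true
      x_present = sethaselement(local.a, "x")
      # Produces a set containing only "y".
      common = setintersection(local.a, local.b)
      # Produces a set containing "x", "y", and "z".
      combined = setunion(local.a, local.b)
    }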
In the long run setproduct should probably move into the cty stdlib too,
but since it was submitted as a Terraform function originally we'll leave
it here for now for simplicity's sake and reorganize later.