These functions allow hinting to Terraform in situations where it's not
able to automatically infer value sensitivity.
"nonsensitive" is for situations where Terraform's behavior is too
conservative, such as when a new value is derived from a sensitive value
in such a way that all of the sensitive content is removed.
"sensitive", on the other hand, is for situations where Terraform can't
otherwise infer that a value is sensitive. These situations should be
pretty rare in a module that's making effective use of sensitive input
variables and output values, but the documentation shows one example of
an uncommon situation where a more direct hint via this function would
be needed.
Both of these functions are aimed at only occasional use in unusual
situations. They are here for reasons of pragmatism, not because we
expect them to be used routinely or recommend their use.
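As a hedged sketch of how the two hints read in practice (the variable
name, local names, and file path here are hypothetical, not from the
original change):

```hcl
variable "db_password" {
  type      = string
  sensitive = true
}

locals {
  # The comparison result derives from a sensitive value but reveals no
  # secret content, so nonsensitive overrides Terraform's inference.
  password_is_set = nonsensitive(length(var.db_password) > 0)

  # Conversely, sensitive forces the marking onto a value that Terraform
  # can't otherwise infer to be secret.
  api_token = sensitive(file("${path.module}/token.txt"))
}
```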
This is a new part of the existing module_variable_optional_attrs
experiment, because it's intended to complement the ability to declare
an input variable whose type constraint is an object type with optional
attributes. Module authors can use this to replace null values (that were
either explicitly set or implied by attribute omission) with other
non-null values of the same type.
This function is a bit more type-fussy than our functions typically are
because it's intended for use primarily with input variables that have
fully-specified type constraints, and thus it uses that type information
to help inform how the defaults data structure should be interpreted.
Other uses of this function will likely be harder today because building
a value of a specific type takes a lot of extra annotation when the value
isn't passing through a variable type constraint. Perhaps later language
features for more general type conversion will make this more broadly
applicable, but for now the more general form of this problem is better
solved in other ways.
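A minimal sketch of the intended pattern, assuming the experiment is
enabled (the variable and attribute names are illustrative):

```hcl
terraform {
  experiments = [module_variable_optional_attrs]
}

variable "storage" {
  type = object({
    name    = string
    enabled = optional(bool)
  })
}

locals {
  # Replace a null "enabled" (omitted or explicitly null) with true,
  # guided by the variable's fully-specified type constraint.
  storage = defaults(var.storage, {
    enabled = true
  })
}
```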
So far all of our language experiments have been new constructs handled
statically up in the configs package, but functions are another common
extension point where experiments could be useful for gathering feedback,
so this change passes the experiment information down into the right place
to allow for that, even though as of this commit there are no experimental
functions making use of it.
This is an analog to the "alltrue" function, using OR as the reduce
operator rather than AND.
This also includes some simplification of the "alltrue" implementation
to implement it similarly as a sort of reduce operation with AND
as the reduce operator, but with the same effective behavior.
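For illustration, the two functions differ only in their reduce operator
and its identity element:

```hcl
anytrue([false, true])  # => true   (OR as the reduce operator)
alltrue([false, true])  # => false  (AND as the reduce operator)
anytrue([])             # => false  (identity element of OR)
alltrue([])             # => true   (identity element of AND)
```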
These were initially introduced as functions with "encode" and "decode"
prefixes, but that doesn't match our existing convention of putting the
encoding format first so that the encode and decode functions group
together in an alphabetically-ordered function list.
"text" is not really a defined serialization format, but it's a short word
that hopefully represents well enough what these functions are aiming to
encode and decode, while being consistent with existing functions like
jsonencode/jsondecode, yamlencode/yamldecode, etc.
The "base64" at the end here is less convincing because there is precedent
for that modifier to appear both at the beginning and the end in our
existing function names. I chose to put it at the end here because that
seems to be our emergent convention for situations where the base64
encoding is a sort of secondary modifier alongside the primary purpose
of the function, as we see with "filebase64". (base64gzip is an exception
here, but it seems outvoted by the others.)
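The resulting names then read like this (the example value matches the
function documentation):

```hcl
textencodebase64("Hello World", "UTF-16LE")
# => "SABlAGwAbABvACAAVwBvAHIAbABkAA=="

textdecodebase64("SABlAGwAbABvACAAVwBvAHIAbABkAA==", "UTF-16LE")
# => "Hello World"
```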
This commit adds an `alltrue` function to Terraform configuration. One
reason we might want this function is that it enables more powerful
custom variable validations. For example:
```hcl
variable "amis" {
  type = list(object({
    id = string
  }))

  validation {
    condition = alltrue([
      for a in var.amis : length(a.id) > 4 && substr(a.id, 0, 4) == "ami-"
    ])
    error_message = "The ID of at least one AMI was invalid."
  }
}
```
* add setdifference and setsubtract functions and docs
* remove setdifference as it is not implemented correctly in the underlying lib
* Update setintersection.html.md
* Update setproduct.html.md
* Update setunion.html.md
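A quick sketch of the surviving function:

```hcl
setsubtract(["a", "b", "c"], ["a", "c"])  # => ["b"]
```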
These are intended to make it easier to work with arbitrary data
structures whose shape might not be known statically, such as the result
of jsondecode(...) or yamldecode(...) of data from a separate system.
For example, for an object value whose attributes may or may not be set,
we can concisely provide a fallback value to use when an attribute isn't
set:
try(local.example.foo, "fallback-foo")
Using a "try to evaluate" model rather than explicit testing fits better
with the usual programming model of the Terraform language where values
are normally automatically converted to the necessary type where possible:
the given expression is subject to all of the same normal type conversions,
which avoids inadvertently creating a more restrictive evaluation model
as might happen if this were handled using checks like a hypothetical
isobject(...) function, etc.
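A slightly fuller sketch of the decode-then-fallback pattern (the file
name and attribute path are hypothetical):

```hcl
locals {
  raw = jsondecode(file("${path.module}/settings.json"))

  # If the decoded document lacks server.port, fall back to 8080.
  port = try(local.raw.server.port, 8080)
}
```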
This is a companion to cidrsubnet that allows bulk-allocation of multiple
subnet addresses at once, with automatic numbering.
Unlike cidrsubnet, cidrsubnets allows each of the allocations to have a
different prefix length, and will pack the networks consecutively into the
given address space. cidrsubnets can potentially create more complicated
addressing schemes than cidrsubnet alone can, because it's able to take
into account the full set of requested prefix lengths rather than just
one at a time.
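For example, where each argument after the base prefix is the number of
additional bits for one allocation:

```hcl
cidrsubnets("10.1.0.0/16", 4, 4, 8, 4)
# => ["10.1.0.0/20", "10.1.16.0/20", "10.1.32.0/24", "10.1.48.0/20"]
```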
Reference: https://github.com/hashicorp/terraform/issues/16697
Enumerates a set of regular file names matching a given glob pattern. Implemented via the Go stdlib `path/filepath.Glob()` functionality. Notably, the stdlib does not support `**` or `{}` extended patterns. See also: https://github.com/golang/go/issues/11862
Supporting the extended glob patterns would require adding a dependency on a third-party library or adding our own matching code.
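A usage sketch, assuming the two-argument form with a base path and a
pattern; the actual result depends on what's on disk:

```hcl
fileset(path.module, "files/*.txt")
# => e.g. ["files/a.txt", "files/b.txt"]
```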
These existing upstream cty functions allow matching strings against
regular expression patterns, which can be useful if you need to consume
a non-standard string format that Terraform doesn't (and can't) have a
built-in function for.
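For example, extracting the first substring that matches a pattern:

```hcl
regex("[a-z]+", "53453453.345345aaabbbccc23454")
# => "aaabbbccc"
```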
These follow the same principle as jsondecode and jsonencode, but use
YAML instead of JSON.
YAML has a much more complex information model than JSON, so we can only
support a subset of it during decoding, but hopefully the subset supported
here is a useful one.
Because there are many different ways to _generate_ YAML, the yamlencode
function is forced to make some decisions, and those decisions are likely
to affect compatibility with other real-world YAML parsers. Although the
format here is intended to be generic and compatible, we may find that
there are problems with it that we'll want to adjust for in a future
release, so yamlencode is therefore marked as experimental for now until
the underlying library is ready to commit to ongoing byte-for-byte
compatibility in serialization.
The main use-case here is met by yamldecode, which will allow reading in
files written in YAML format by humans for use in Terraform modules, in
situations where a higher-level input format than direct Terraform
language declarations is helpful.
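A sketch of that main use-case (the file name is hypothetical):

```hcl
locals {
  settings = yamldecode(file("${path.module}/settings.yaml"))
}
```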
This is similar to the function of the same name in Python, generating a
sequence of numbers as a list that can then be used in other
sequence-oriented operations.
The primary use-case for it is to turn a count expressed as a number into
a list of that length, which can then be iterated over or passed to a
collection function to produce that number of something else, as shown
in the example at the end of its documentation page.
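For example:

```hcl
range(3)         # => [0, 1, 2]
range(1, 4)      # => [1, 2, 3]
range(0, 10, 3)  # => [0, 3, 6, 9]
```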
* funcs/coalesce: return the first non-null, non-empty element from a
sequence.
The go-cty coalesce function, which was originally used here, returns the
first non-null element from a sequence. Terraform 0.11's coalesce,
however, returns the first non-empty string from a list of strings.
This new coalesce function aims to preserve Terraform's documented
functionality while adding support for additional argument types. The
tests include those in go-cty and adapted tests from the 0.11 version of
coalesce.
* website/docs: update coalesce function document
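A sketch of the combined behavior:

```hcl
coalesce(null, 1, 2)  # => 1    (skips nulls, as in go-cty)
coalesce("", "b")     # => "b"  (skips empty strings, as in 0.11)
```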
In prior versions, we recommended using hash functions in conjunction with
the file function as an idiom for detecting changes to upstream blobs
without fetching and comparing the whole blob.
That approach relied on us being able to return raw binary data from
file(...). Since Terraform strings pass through intermediate
representations that are not binary-safe (e.g. the JSON state), there was
a risk of string corruption in prior versions which we have avoided for
0.12 by requiring that file(...) be used only with UTF-8 text files.
The specific case of returning a string and immediately passing it into
another function was not actually subject to that corruption risk, since
the HIL interpreter would just pass the string through verbatim, but this
is still now forbidden as a result of the stricter handling of file(...).
To avoid breaking these use-cases, here we introduce variants of the hash
functions with a "file" prefix that take a filename for a disk file to
hash rather than hashing the given string directly. The configuration
upgrade tool also now includes a rule to detect the documented idiom and
rewrite it into a single function call for one of these new functions.
This does cause a bit of function sprawl, but that seems preferable to
introducing more complex rules for when file(...) can and cannot read
binary files, making the behavior of these various functions easier to
understand in isolation.
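The rewrite performed by the upgrade tool looks roughly like this (the
attribute name and path are illustrative):

```hcl
# 0.11-era idiom, now rejected because file(...) requires UTF-8 text:
etag = md5(file("${path.module}/payload.bin"))

# Equivalent single call using the new file-prefixed variant:
etag = filemd5("${path.module}/payload.bin")
```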
It's not normally necessary to make explicit type conversions in Terraform
because the language implicitly converts as necessary, but explicit
conversions are useful in a few specialized cases:
- When defining output values for a reusable module, it may be desirable
to force a "cleaner" output type than would naturally arise from a
computation, such as forcing a string containing digits into a number.
- Our 0.12upgrade mechanism will use some of these to replace use of the
undocumented, hidden type conversion functions in HIL, and force
particular type interpretations in some tricky cases.
- We've found that type conversion functions can be useful as _temporary_
workarounds for bugs in Terraform and in providers where implicit type
conversion isn't working correctly or a type constraint isn't specified
precisely enough for the automatic conversion behavior.
These all follow the same convention of being named "to" followed by a
short type name. Since we've had a long-standing convention of running all
the words together in lowercase in function names, we stick to that here
even though some of these names are quite strange, because these should
be rarely-used functions anyway.
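A few of the resulting calls, for illustration:

```hcl
tonumber("42")          # => 42
tostring(3.14159)       # => "3.14159"
tobool("true")          # => true
toset(["a", "b", "a"])  # => set containing "a" and "b"
```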
The sethaselement, setintersection, and setunion functions are defined in
the cty stdlib. Making them available in Terraform will make it easier to
work with sets, and complement the currently-Terraform-specific setproduct
function.
In the long run setproduct should probably move into the cty stdlib too,
but since it was submitted as a Terraform function originally we'll leave
it here now for simplicity's sake and reorganize later.
In our new world it produces either a set of a tuple type or a list of a
tuple type, depending on the given argument types.
The resulting collection's element tuple type is decided by the element
types of the given collections, allowing type information to propagate
even if unknown values are present.
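For illustration:

```hcl
setunion(["a"], ["b", "c"])             # => ["a", "b", "c"]
setintersection(["a", "b"], ["b", "c"]) # => ["b"]

setproduct(["dev", "prod"], ["app1", "app2"])
# => [["dev", "app1"], ["dev", "app2"], ["prod", "app1"], ["prod", "app2"]]
```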
We missed this one on a previous pass of bringing in most of the cty
stdlib functions.
This will resolve #17625 by allowing conversion from Terraform's
conventional RFC 3339 timestamps into various other formats.
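For example, re-rendering an RFC 3339 timestamp:

```hcl
formatdate("DD MMM YYYY hh:mm ZZZ", "2018-01-02T23:12:01Z")
# => "02 Jan 2018 23:12 UTC"
```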
This function is similar to the template_file data source offered by the
template provider, but having it built in to the language makes it more
convenient to use, allowing templates to be rendered from files anywhere
an inline template would normally be allowed:
```hcl
user_data = templatefile("${path.module}/userdata.tmpl", {
  hostname = format("petserver%02d", count.index)
})
```
Unlike the template_file data source, this function allows values of any
type in its variables map, passing them through verbatim to the template.
Its tighter integration with Terraform also allows it to return better
error messages with source location information from the template itself.
The template_file data source was originally created to work around the
fact that HIL didn't have any support for map values at the time, and
even once map support was added it wasn't very usable. With HCL2
expressions, there's little reason to use a data source to render a
template; the only remaining use for template_file is rendering a
template that is constructed dynamically during the Terraform run, which
is a very rare need.