When rendering a stored plan file as JSON, we include a data structure
representing the sensitivity of the changed resource values. Prior to
this commit, this was a direct representation of the sensitivity marks
applied to values via mechanisms such as sensitive variables, sensitive
outputs, and the `sensitive` function.
This commit extends this to include sensitivity based on the provider
schema. This is in line with the UI rendering of the plan, which
considers these two different types of sensitivity to be equivalent.
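Roughly, the combined check looks like this (a sketch only, not the
actual jsonplan code; attrSchema stands in for the real configschema
attribute type):

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// attrSchema stands in for the provider schema's per-attribute
// metadata; the real type lives in Terraform's configschema package.
type attrSchema struct {
	Sensitive bool
}

// sensitiveForJSON reports whether a value should be flagged as
// sensitive in the JSON plan: either it carries a sensitivity mark,
// or the provider schema declares the attribute sensitive.
func sensitiveForJSON(v cty.Value, schema attrSchema) bool {
	return v.IsMarked() || schema.Sensitive
}

func main() {
	marked := cty.StringVal("hunter2").Mark("sensitive")
	fmt.Println(sensitiveForJSON(marked, attrSchema{}))                              // true: value mark
	fmt.Println(sensitiveForJSON(cty.StringVal("abc"), attrSchema{Sensitive: true})) // true: schema
}
```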
Co-authored-by: Kristin Laemmert <mildwonkey@users.noreply.github.com>
* lang/funcs: add (console-only) TypeFunction
The type() function, which is only available for terraform console,
prints out the type of a given value. This is mainly intended for
debugging: it's nice to be able to print out Terraform's understanding
of a complex variable.
This introduces a new field on Scope: ConsoleMode. When ConsoleMode is true, additional functions intended for console-only use may be added to the scope.
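A minimal sketch of such a function using cty's function system (the
real TypeFunc in lang/funcs is more elaborate about how it renders the
type):

```go
package funcs

import (
	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function"
)

// TypeFunc returns a human-readable name for the type of its argument.
var TypeFunc = function.New(&function.Spec{
	Params: []function.Parameter{
		{
			Name:             "value",
			Type:             cty.DynamicPseudoType,
			AllowNull:        true,
			AllowUnknown:     true,
			AllowDynamicType: true,
		},
	},
	Type: function.StaticReturnType(cty.String),
	Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
		return cty.StringVal(args[0].Type().FriendlyName()), nil
	},
})
```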
When logging in to Terraform Cloud or Terraform Enterprise, change the
success output to be a bit more customized for the platform. For
Terraform Cloud, fetch a dynamic welcome banner that intentionally fails
open and defaults to a hardcoded message if it's not available for any
reason.
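The fail-open pattern looks roughly like this (illustrative names;
the real fetch lives in the login command):

```go
package main

import (
	"io"
	"net/http"
)

// welcomeBanner fetches the dynamic banner, intentionally failing
// open: any error at all falls back to the hardcoded message.
func welcomeBanner(client *http.Client, bannerURL, fallback string) string {
	resp, err := client.Get(bannerURL)
	if err != nil {
		return fallback
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fallback
	}
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return fallback
	}
	return string(body)
}
```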
Defaults will now preserve marks from non-null inputs and apply marks from any default values used. I've added tests for various structural types with marks, as well as some basic unknown cases.
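Per value, the mark handling reduces to something like this (a sketch,
not the real defaults implementation):

```go
package defaults

import "github.com/zclconf/go-cty/cty"

// applyDefault is a simplified, per-value sketch: a non-null input
// keeps its own marks; a null input takes the default value along
// with any marks the default carries.
func applyDefault(input, def cty.Value) cty.Value {
	v, inputMarks := input.Unmark()
	if !v.IsNull() {
		return v.WithMarks(inputMarks)
	}
	defVal, defMarks := def.Unmark()
	return defVal.WithMarks(inputMarks, defMarks)
}
```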
This commit adds test cases to TestTranspose to document how this function handles marks.
The short version is that any marks anywhere will be applied to the return value, be they marks on the input map itself or marks on its elements (either on an entire list of strings, or on individual elements of those lists).
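This falls out of the function system's default mark handling, which a
toy function can demonstrate (firstKeyFunc is a stand-in, not
Terraform's actual transpose):

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function"
)

// A toy function whose parameter does not set AllowMarked: the function
// system deeply unmarks the argument and applies the collected marks to
// the result, which is the behavior the transpose tests document.
var firstKeyFunc = function.New(&function.Spec{
	Params: []function.Parameter{
		{Name: "m", Type: cty.Map(cty.List(cty.String))},
	},
	Type: function.StaticReturnType(cty.String),
	Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
		it := args[0].ElementIterator()
		it.Next()
		k, _ := it.Element()
		return k, nil
	},
})

func main() {
	input := cty.MapVal(map[string]cty.Value{
		"a": cty.ListVal([]cty.Value{
			cty.StringVal("x").Mark("sensitive"), // mark on one element only
		}),
	})
	out, _ := firstKeyFunc.Call([]cty.Value{input})
	fmt.Println(out.IsMarked()) // true: the deep element mark lands on the result
}
```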
Non-root module outputs no longer strip sensitivity marks from their
values, allowing dynamically sensitive values to propagate through the
configuration. We also remove the requirement for non-root module
outputs to be defined as sensitive if the value is marked as sensitive.
This avoids a static/dynamic clash when using shared modules that might
unknowingly receive sensitive values via input variables.
Co-authored-by: Martin Atkins <mart@degeneration.co.uk>
This includes the improvements to various collection-related functions to
make them handle marks more precisely. For Terraform in particular that
translates into handling sensitivity more precisely, so that non-sensitive
collections that happen to contain sensitive elements won't get simplified
into wholly-sensitive collections when using these functions.
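The distinction is between a mark on the collection itself and marks on
its elements; a short sketch of the cty primitives involved:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// A non-sensitive list that happens to contain one sensitive element.
	list := cty.ListVal([]cty.Value{
		cty.StringVal("public"),
		cty.StringVal("secret").Mark("sensitive"),
	})
	fmt.Println(list.IsMarked())       // false: the list itself carries no mark
	fmt.Println(list.ContainsMarked()) // true: something inside is marked

	// Precise handling keeps the mark confined to the element; imprecise
	// handling would return a wholly-marked collection instead.
}
```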
Now that provisioners work directly with the plugin API and cty data
types, we need to add a few null checks to catch invalid input that
would have otherwise been masked by the legacy SDK.
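The checks are of this general shape (a sketch with a hypothetical
attribute name, assuming an object-typed connection value):

```go
package provisioner

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// validateConnection sketches the kind of null check the legacy SDK
// used to mask: reject null blocks and null required attributes
// before the provisioner tries to use them.
func validateConnection(v cty.Value) error {
	if v.IsNull() {
		return fmt.Errorf("connection block is null")
	}
	if host := v.GetAttr("host"); host.IsNull() {
		return fmt.Errorf(`connection attribute "host" is required`)
	}
	return nil
}
```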
In order to avoid updating every one of our existing functions with
explicit support for sensitive values, there's a default rule in the
functions system which makes the result of a function sensitive if any
of its arguments contain sensitive values.
We were applying that default to the various type conversion functions,
like tomap and tolist, which meant that converting a complex-typed value
with a sensitive value anywhere inside it would result in a
wholly-sensitive result.
That's unnecessarily conservative because the cty conversion layer (which
these functions are wrapping) already knows how to handle sensitivity
in a more precise way. Therefore we can opt in to handling marked values
(which Terraform uses for sensitivity) here and the only special thing
we need to do is handle errors related to sensitive values differently,
so we won't print their values out literally in case of an error (and so
that the attempt to print them out literally won't panic trying to
extract the marked values).
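The shape of that opt-in, sketched with a hypothetical tostring-like
function (the real conversion functions live in lang/funcs and handle
more cases):

```go
package funcs

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/convert"
	"github.com/zclconf/go-cty/cty/function"
)

var ToStringSketch = function.New(&function.Spec{
	Params: []function.Parameter{
		// AllowMarked opts out of the default "any marked argument makes
		// the whole result marked" rule, so we handle marks ourselves.
		{Name: "v", Type: cty.DynamicPseudoType, AllowMarked: true, AllowNull: true},
	},
	Type: function.StaticReturnType(cty.String),
	Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
		v, marks := args[0].UnmarkDeep()
		out, err := convert.Convert(v, cty.String)
		if err != nil {
			// Never echo the value itself: it may be sensitive, and
			// formatting a marked value would panic anyway.
			return cty.NilVal, fmt.Errorf("cannot convert this value to a string")
		}
		return out.WithMarks(marks), nil
	},
})
```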
We previously had a shallow IsMarked call in compactValueStr's caller but
then a more-conservative deep ContainsMarked call inside compactValueStr
with a different resulting message. As well as causing an inconsistency
in messages, this was also a bit confusing because it made it seem like
a non-sensitive collection containing a sensitive element value was wholly
sensitive, making the debug information in the diagnostic messages not
trustworthy for debugging certain varieties of problem.
I originally considered just removing the redundant check in
compactValueStr here, but ultimately I decided to keep it as a sort of
defense in depth in case a future refactoring disconnects these two
checks. This should also serve as a prompt to someone making later changes
to compactValueStr to think about the implications of sensitive values
in there, which otherwise wouldn't be mentioned at all.
Disclosing information about a collection containing sensitive values is
safe here because compactValueStr only discloses information about the
value's type and element keys, and neither of those can be sensitive in
isolation. (Constructing a map with sensitive keys reduces to a sensitive
overall map.)
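In outline, the two layers now agree (simplified; the real
compactValueStr builds a much richer description):

```go
package format

import "github.com/zclconf/go-cty/cty"

// describeValue is the caller: it redacts wholly-sensitive values
// before asking for a compact description.
func describeValue(v cty.Value) string {
	if v.IsMarked() {
		return "(sensitive value)"
	}
	return compactValueStr(v)
}

// compactValueStr discloses only coarse, non-sensitive information
// about a value: its type and, in the real implementation, its
// element keys.
func compactValueStr(v cty.Value) string {
	if v.IsMarked() {
		// Redundant with the caller today; kept as defense in depth in
		// case a future refactoring disconnects the two checks.
		return "(sensitive value)"
	}
	return v.Type().FriendlyName()
}
```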
The destroy plan walk was identifying itself as a normal plan, causing
providers to be configured when they were not needed. Since the
provider configuration may not be complete during the minimal destroy
plan walk, validation or configuration may fail.
All the information is available to resolve provider types when building
the configuration, but some provider references still had no FQN. This
caused validation to assume a default type, and incorrectly reject valid
module calls with non-default namespaced providers.
Resolve as much provider type information as possible when loading the
config. Only use this internally for now, but this should be useful
outside of the package to avoid re-resolving the providers later on. We
can come back and find where this might be useful elsewhere, but for now
keep the change as small as possible to avoid any changes in behavior.
To ensure that the apply command can determine whether an operation is
executed locally or remotely, we add an IsLocalOperations method on the
remote backend. This returns the internal forceLocal boolean.
We also update this flag after checking if the corresponding remote
workspace is in local operations mode or not. This ensures that we know
if an operation is running locally (entirely on the practitioner's
machine), pseudo-locally (on a Terraform Cloud worker), or remotely
(executing on a worker, rendering locally).
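The accessor itself is trivial (sketched here against a pared-down
Remote struct):

```go
package remote

// Remote is pared down to the one field that matters here; the real
// backend struct carries much more state.
type Remote struct {
	forceLocal bool
}

// IsLocalOperations reports whether operations are forced to run
// entirely on the practitioner's machine.
func (b *Remote) IsLocalOperations() bool {
	return b.forceLocal
}
```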
Disabling the resource count and outputs rendering when the remote
backend is in use causes them to be omitted from Terraform Cloud runs.
This commit changes the condition to render these values if either the
remote backend is not in use, or the command is running in automation
via the TF_IN_AUTOMATION flag. As this is intended to be set by
Terraform Cloud and other remote backend implementations, this addresses
the problem.
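The condition reduces to something like this (names are illustrative,
not Terraform's own):

```go
package command

import "os"

// shouldRenderLocally reports whether the apply command should render
// the resource count and output changes itself, rather than leaving
// that to a remote run's streamed output.
func shouldRenderLocally(usingRemoteBackend bool) bool {
	inAutomation := os.Getenv("TF_IN_AUTOMATION") != ""
	return !usingRemoteBackend || inAutomation
}
```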
Fix two bugs which surface when using the remote backend:
- When migrating to views, we removed the call to `(*Meta).process`
which initialized the color boolean. This resulted in the legacy UI
calls in the remote backend stripping color codes. To fix this, we
populate this boolean from the common arguments.
- Remote apply will output the resource summary and output changes, and
  these are rendered via the remote backend's streamed output. We need
  to special-case this in the apply command and prevent displaying a
  zero-change summary line (see the sketch below).
Neither of these is coverable by automated tests, as we don't have any
command-package-level testing for the remote backend. Manually verified.
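For the second fix, the special case looks roughly like this
(illustrative signature; the real code works through the views and
arguments packages):

```go
package command

import "fmt"

// printApplySummary suppresses the local zero-change summary when the
// remote backend has already streamed the real one.
func printApplySummary(add, change, destroy int, remoteStreamed bool) {
	if remoteStreamed && add == 0 && change == 0 && destroy == 0 {
		return
	}
	fmt.Printf("Apply complete! Resources: %d added, %d changed, %d destroyed.\n",
		add, change, destroy)
}
```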
Terraform v0.15 includes the conclusion of the deprecation cycle for some
renamed arguments in the "azure" backend.
We missed this on the first draft of the upgrade guide because this change
arrived along with various other more innocuous updates and so we didn't
spot it during our change review.
Unfortunately it seems that this link got lost in a merge conflict when
we did the big nav refactor earlier in the v0.15 cycle, so here we'll
retroactively add it to the new location for upgrade guide nav, in the
language layout rather than the downloads layout.
Dynamic blocks with unknown for_each expressions are now decoded into an
unknown value rather than using a sentinel object with unknown
and null attributes. This will allow providers to precisely plan the
block values, rather than trying to heuristically paper over the
incorrect plans when dynamic is in use.
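Conceptually the decode now short-circuits like this (a hypothetical
helper; the real change is in the dynamic block expansion logic):

```go
package dynexpand

import "github.com/zclconf/go-cty/cty"

// expandDynamic sketches the new behavior: an unknown for_each yields
// a single unknown value of the block list's type, which providers can
// plan precisely.
func expandDynamic(forEach cty.Value, blockTy cty.Type) cty.Value {
	if !forEach.IsKnown() {
		return cty.UnknownVal(cty.List(blockTy))
	}
	// ... known case: expand one block object per for_each element ...
	return cty.ListValEmpty(blockTy) // placeholder for the real expansion
}
```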
* website: v0.15 upgrade guide for sensitive resource attributes
Our earlier draft of this guide didn't include a section about the
stabilization of the "provider_sensitive_attrs" language experiment. This
new section aims to address the situation where a module might previously
have been returning a sensitive value without having marked it as such,
and thus that module will begin returning an error after upgrading to
Terraform v0.15.
As part of that, I also reviewed the existing documentation about these
features and made some edits aiming to make these four different sections
work well together if users refer to them all at once, as they are likely
to do if they follow the new links from the upgrade guide. I aimed to
retain all of the content we had before, but some of it is now in a new
location.
In particular, I moved the discussion about the v0.14 language experiment
into the upgrade guide, because it seems like a topic only really relevant
to those upgrading from an earlier version and not something folks need to
know about if they are using Terraform for the first time in v0.15 or
later.
* minor fixups
Co-authored-by: Kristin Laemmert <mildwonkey@users.noreply.github.com>
In the Terraform language we typically use lists of zero or one values in
some sense interchangeably with single values that might be null, because
various Terraform language constructs are designed to work with
collections rather than with nullable values.
In Terraform v0.12 we made the splat operator [*] have a "special power"
of concisely converting from a possibly-null single value into a
zero-or-one list as a way to make that common operation more concise.
In a sense this "one" function is the opposite operation to that special
power: it goes from a zero-or-one collection (list, set, or tuple) to a
possibly-null single value.
This is a concise alternative to the clunky conditional expression
below, with the additional benefits that one() is also viable for set
values (the conditional is not) and that it properly handles the case
where there's unexpectedly more than one value:
length(var.foo) != 0 ? var.foo[0] : null
Instead, we can write:
one(var.foo)
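The core behavior, sketched in Go over a cty list or set value (the
real function also handles tuples, unknowns, and marks):

```go
package funcs

import (
	"errors"

	"github.com/zclconf/go-cty/cty"
)

// one returns null for a null or empty collection, the sole element
// for a single-element collection, and an error otherwise.
func one(v cty.Value) (cty.Value, error) {
	ety := v.Type().ElementType() // assumes a list or set type
	if v.IsNull() {
		return cty.NullVal(ety), nil
	}
	switch v.LengthInt() {
	case 0:
		return cty.NullVal(ety), nil
	case 1:
		it := v.ElementIterator()
		it.Next()
		_, elem := it.Element()
		return elem, nil
	default:
		return cty.NilVal, errors.New("must be a collection with zero or one elements")
	}
}
```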
As with the splat operator, this is a tricky tradeoff because it could be
argued that it's not something that'd be immediately intuitive to someone
unfamiliar with Terraform. However, I think that's justified given how
often zero-or-one collections arise in typical Terraform configurations.
Unlike the splat operator, it should at least be easier to search for its
name and find its documentation the first time you see it in a
configuration.
My expectation that this will become a common pattern is also my
justification for giving it a short, concise name. Arguably it could be
better named something like "oneornull", but that's a pretty clunky name
and I'm not convinced it really adds any clarity for someone who isn't
already familiar with it.