The sum() function accepts a collection of values which must all convert
to numbers. It is valid for this to be a collection of string values
representing numbers.
Previously the function would panic if the first element of a collection
was a non-number type, as we didn't attempt to convert it to a number
before calling the cty `Add` method.
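A minimal sketch of the fixed approach, using the public go-cty API (the function shape here is illustrative, not the actual implementation):

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/convert"
)

// sum converts every element to cty.Number before calling Add, rather
// than assuming the first element is already a number.
func sum(vals []cty.Value) (cty.Value, error) {
	total := cty.Zero
	for _, v := range vals {
		num, err := convert.Convert(v, cty.Number)
		if err != nil {
			return cty.NilVal, fmt.Errorf("argument must be numeric: %w", err)
		}
		total = total.Add(num)
	}
	return total, nil
}

func main() {
	got, err := sum([]cty.Value{cty.StringVal("1"), cty.NumberIntVal(2)})
	fmt.Println(got.GoString(), err) // cty.NumberIntVal(3) <nil>
}
```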
* fix: local variables should not be overridden by remote variables during `terraform import`
* chore: applied the same fix in the 'internal/cloud' package
* backport changes from cloud package to remote package
Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>
Co-authored-by: uturunku1 <luces.huayhuaca@gmail.com>
Variable validation error message expressions which generated sensitive
values would previously crash. This commit updates the logic to align
with preconditions and postconditions, eliding sensitive error message
values and adding a separate diagnostic explaining why.
Precondition and postcondition blocks which evaluated expressions
resulting in sensitive values would previously crash. This commit fixes
the crashes, and adds an additional diagnostic if the error message
expression produces a sensitive value (which we also elide).
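A rough sketch of the elision logic, using go-cty's marks API; the mark value and function name are stand-ins, since the real sensitivity mark lives in Terraform's internal packages:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// sensitiveMark stands in for Terraform's internal sensitivity mark.
var sensitiveMark = "sensitive"

// elideIfSensitive returns the message to display, plus an extra note
// when the original message had to be hidden.
func elideIfSensitive(msg cty.Value) (display, note string) {
	if msg.HasMark(sensitiveMark) {
		return "(error message elided)",
			"The error message expression produced a sensitive value, so it is not shown."
	}
	unmarked, _ := msg.Unmark()
	return unmarked.AsString(), ""
}

func main() {
	fmt.Println(elideIfSensitive(cty.StringVal("Value must be positive.")))
	fmt.Println(elideIfSensitive(cty.StringVal("secret: hunter2").Mark(sensitiveMark)))
}
```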
Evaluate precondition and postcondition blocks in refresh-only mode, but
report any failures as warnings instead of errors. This ensures that any
deviation from the contract defined by condition blocks is reported as
early as possible, without preventing the completion of a state refresh
operation.
Prior to this commit, Terraform evaluated output preconditions and data
source pre/postconditions as normal in refresh-only mode, while managed
resource pre/postconditions were not evaluated at all. This omission
could lead to confusing partial condition errors, or failure to detect
undesired changes which would otherwise cause resources to become
invalid.
Reporting the failures as errors also meant that changes retrieved
during refresh could cause the refresh operation to fail. This is also
undesirable, as the primary purpose of the operation is to update local
state. Precondition/postcondition checks are still valuable here, but
should be informative rather than blocking.
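A minimal sketch of the severity switch this describes, using the real HCL diagnostics types (the helper name is illustrative):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
)

// checkRuleSeverity downgrades condition failures to warnings when
// running in refresh-only mode, per the behavior described above.
func checkRuleSeverity(refreshOnly bool) hcl.DiagnosticSeverity {
	if refreshOnly {
		return hcl.DiagWarning
	}
	return hcl.DiagError
}

func main() {
	diag := &hcl.Diagnostic{
		Severity: checkRuleSeverity(true),
		Summary:  "Resource postcondition failed",
	}
	fmt.Println(diag.Severity == hcl.DiagWarning) // true in refresh-only mode
}
```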
The graphs used for the CBD tests wouldn't validate because they skipped
adding the root module node. Re-add the root module transformer and
transitive reduction transformer to the build steps, and match the new
reduced output in the test fixtures.
Complete the removal of the Validate option for graph building. There is
no case where we want to allow an invalid graph, as the primary reason
for validation is to ensure we have no cycles, and we can't walk a graph
with cycles. The only code which specifically relied on there being no
validation was a test to ensure the Validate flag prevented it.
The previous precondition/postcondition block validation implementation
failed if the enclosing resource was expanded. This commit fixes this by
generating appropriate placeholder instance data for the resource,
depending on whether `count` or `for_each` is used.
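A hedged sketch of what such placeholder data could look like with go-cty unknowns (the names and shape are assumptions, not the actual implementation):

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// placeholderRepetitionData builds stand-in values for count/for_each
// references, so condition expressions can be checked before expansion.
func placeholderRepetitionData(hasCount, hasForEach bool) map[string]cty.Value {
	switch {
	case hasCount:
		return map[string]cty.Value{
			"count": cty.ObjectVal(map[string]cty.Value{
				"index": cty.UnknownVal(cty.Number), // some instance's index
			}),
		}
	case hasForEach:
		return map[string]cty.Value{
			"each": cty.ObjectVal(map[string]cty.Value{
				"key":   cty.UnknownVal(cty.String),
				"value": cty.DynamicVal, // element type is unknown here
			}),
		}
	default:
		return nil
	}
}

func main() {
	fmt.Println(placeholderRepetitionData(true, false)["count"].GoString())
}
```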
These diagnostic messages are subject to a number of unusual constraints
that make them tough to get right:
- They are discussing a couple of finicky concepts which authors are
likely to be encountering for the first time in these error messages:
the idea of "local names" for providers, the relationship between those
and provider source addresses, and additional ("aliased") provider
configurations.
- They are reporting concerns that span across a module call boundary,
and so need to take care to be clear about whether they are talking
about a problem in the caller or a problem in the callee.
- Some of them are effectively deprecation warnings for features that
might be in use by a third-party module that the user doesn't control,
in which case they have no recourse to address them aside from opening
a feature request with the upstream module maintainer.
- Terraform has, for backward-compatibility reasons, a lot of implied
default behaviors regarding providers and provider configurations,
and these errors can arise in situations where Terraform's assumptions
don't match the author's intent, and so we need to be careful to
explain what Terraform assumed in order to make the messages
understandable.
After seeing some confusion with these messages in the community, and
being somewhat confused by some of them myself, I decided to try to edit
them a bit for consistency of terminology (both between the messages and
with terminology in our docs), being explicit about caller vs. callee
by naming them in the messages, and making explicit what would otherwise
be implicit with regard to the correspondences between provider source
addresses and local names.
My assumed audience for all of these messages is the author of the caller
module, because it's the caller who is responsible for creating the
relationship between caller and callee. As much as possible I tried to
make the messages include specific actions for that author to take to
quiet the warning or fix the error, but some of the warnings are only
fixable by the callee's maintainer and so those messages are, in effect,
a suggestion to send a request to the author to stop using a deprecated
feature.
I think these new messages are still not ideal by any means, because it's
just tough to pack so much information into concise messages while being
clear and consistent, but I hope at least this will give users seeing
these messages enough context to infer what's going on, possibly with the
help of our documentation.
I intentionally didn't change which cases Terraform will return warnings
or errors -- only the message texts -- although I did highlight in a
comment in one of the tests that what it is asserting seems a bit
suspicious to me. I don't intend to address that here; instead, I intend
that note to be something to refer to if we later see a bug report that
calls that behavior into question.
This does actually silence some _unrelated_ warnings and errors in cases
where a provider block has an invalid provider local name as its label,
because our other functions for dealing with provider addresses are
written to panic if given invalid addresses under the assumption that
earlier code will have guarded against that. Doing this allowed the
provider configuration validation logic to safely include more information
about the configuration as helpful context, without risking tripping over
known-invalid configuration and panicking in the process.
PreDiff and PostDiff hooks were designed to be called immediately before
and after the PlanResourceChange calls to the provider. Probably due to
the confusing legacy naming of the hooks, these were scattered about the
nodes involved with planning, causing the hooks to be called in a number
of places where they were never intended, including data sources and
destroy plans. Since these hooks are no longer used at all, we can
remove the extra calls with no effect.
If we choose in the future to call PlanResourceChange for resource
destroy plans, the hooks can be re-inserted (even though they currently
are unused) into the new code path which must diverge from the current
combined path of managed resources and data sources.
The UI hooks for data source reads were missed during planning. Move the
hook calls to immediately before and after the ReadDataSource calls to
ensure they are called during both plan and apply.
Custom variable validations specified using JSON syntax would always
parse error messages as string literals, even if they included template
expressions. We need to be as backwards compatible with this behaviour
as possible, which results in this complex fallback logic. More detail
about this in the extensive code comments.
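One plausible shape of that fallback, sketched with the hclsyntax API; the helper name and exact fallback condition are illustrative:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclsyntax"
	"github.com/zclconf/go-cty/cty"
)

// parseJSONErrorMessage tries to interpret the JSON string as an HCL
// template; if that fails, it falls back to a plain string literal so
// that previously-valid configurations keep working.
func parseJSONErrorMessage(raw string) hclsyntax.Expression {
	expr, diags := hclsyntax.ParseTemplate([]byte(raw), "error_message", hcl.Pos{Line: 1, Column: 1})
	if diags.HasErrors() {
		return &hclsyntax.LiteralValueExpr{Val: cty.StringVal(raw)}
	}
	return expr
}

func main() {
	fmt.Printf("%T\n", parseJSONErrorMessage("Got: ${var.foo}")) // *hclsyntax.TemplateExpr
}
```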
During the validation walk, we attempt to proactively evaluate check
rule condition and error message expressions. This will help catch some
errors as early as possible.
At present, resource values in the validation walk are of dynamic type.
This means that any references to resources will cause validation to be
delayed, rather than presenting useful errors. Validation may still
catch other errors, and any future changes which cause better type
propagation will result in better validation too.
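A small illustration of why dynamic-typed resource values defer checks (a stand-in value; the real evaluator wires this through expression evaluation):

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// A reference like aws_instance.example.arn resolves to an unknown
	// value of dynamic type during validation, so checks against it are
	// deferred to plan time rather than failing early.
	resourceRef := cty.DynamicVal
	fmt.Println(resourceRef.IsKnown()) // false: the check is delayed
}
```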
Error messages for preconditions, postconditions, and custom variable
validations have until now been string literals. This commit changes
this to treat the field as an HCL expression, which must evaluate to a
string. Most commonly this will either be a string literal or a template
expression.
When the check rule condition is evaluated, we also evaluate the error
message. This means that the error message should always evaluate to a
string value, even if the condition passes. If it does not, this will
result in an error diagnostic.
If the condition fails, and the error message also fails to evaluate, we
fall back to a default error message. This means that the check rule
failure will still be reported, alongside diagnostics explaining why the
custom error message failed to render.
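A hedged sketch of that evaluation-and-fallback flow, using the public HCL and cty APIs (the names and exact fallback text are assumptions):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclsyntax"
	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/convert"
)

const fallbackMsg = "Invalid condition error message."

// evalErrorMessage evaluates the error message expression, falling back
// to a generic message when it fails to produce a known, unmarked string.
func evalErrorMessage(expr hcl.Expression, ctx *hcl.EvalContext) (string, hcl.Diagnostics) {
	val, diags := expr.Value(ctx)
	if diags.HasErrors() {
		return fallbackMsg, diags
	}
	val, err := convert.Convert(val, cty.String)
	if err != nil || val.IsNull() || !val.IsKnown() || val.IsMarked() {
		return fallbackMsg, diags // sensitive messages are also elided
	}
	return val.AsString(), diags
}

func main() {
	expr, _ := hclsyntax.ParseExpression([]byte(`"CIDR block must be /24"`), "check.tf", hcl.Pos{Line: 1, Column: 1})
	msg, _ := evalErrorMessage(expr, &hcl.EvalContext{})
	fmt.Println(msg)
}
```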
As part of this change, we also necessarily remove the heuristic about
the error message format. This guidance can be re-added in the future as part
of a configuration hint system.
This commit stems from the change to make post-plan the default run task
stage as of this commit's writing. Since pre-apply is under internal
revision, we have removed the block that polls the pre-apply stage until
the team decides to re-add support for pre-apply run tasks.
This change will await the completion of pre-apply run tasks if they
exist on a run and then report the results.
It also adds an abstraction for interacting with cloud integrations such
as policy checking and cost estimation, which simplifies and unifies
their output, although I did not go so far as to refactor those callers
to use it yet.
When calculating the unknown values for JSON plan output, we would
previously recursively call the `unknownAsBool` function on the current
sub-tree twice, if any values were unknown. This was wasteful, but not
noticeable for normal Terraform resource shapes.
However for deeper nested object values, such as Kubernetes manifests,
this was a severe performance problem, causing `terraform show -json` to
take several hours to render a plan.
This commit reuses the already calculated unknown value for the subtree,
and adds benchmark coverage to demonstrate the improvement.
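A simplified sketch of the single-pass shape of the fix; the real `unknownAsBool` handles every cty collection kind, while this sketch only recurses into objects:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// unknownAsBool maps a value to true/false leaves marking unknown-ness,
// computing each subtree exactly once instead of recursing twice.
func unknownAsBool(v cty.Value) cty.Value {
	if !v.IsKnown() {
		return cty.True
	}
	if v.IsNull() || !v.Type().IsObjectType() {
		return cty.False
	}
	attrs := make(map[string]cty.Value)
	for name, av := range v.AsValueMap() {
		attrs[name] = unknownAsBool(av) // computed once and reused
	}
	if len(attrs) == 0 {
		return cty.EmptyObjectVal
	}
	return cty.ObjectVal(attrs)
}

func main() {
	v := cty.ObjectVal(map[string]cty.Value{
		"a": cty.UnknownVal(cty.String),
		"b": cty.StringVal("known"),
	})
	fmt.Println(unknownAsBool(v).GoString())
}
```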
* ignore_changes attributes must exist in schema
Add a test verifying that attempting to add a nonexistent attribute to
ignore_changes throws an error.
* ignore_changes cannot be used with Computed attrs
Return a warning if a Computed attribute is present in ignore_changes,
unless the attribute is also Optional.
ignore_changes on a non-Optional Computed attribute is a no-op, so the user
likely did not want to set this in config.
An Optional Computed attribute, however, is still subject to ignore_changes
behaviour, since it is possible to make changes in the configuration that
Terraform must ignore.
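A minimal sketch of the warning rule described above, against a simplified stand-in for the provider schema:

```go
package main

import "fmt"

// attrSchema is a simplified stand-in for a provider schema attribute.
type attrSchema struct {
	Optional bool
	Computed bool
}

// ignoreChangesWarning mirrors the rule above: warn when ignore_changes
// names a Computed attribute that is not also Optional, since ignoring
// a purely read-only attribute is a no-op.
func ignoreChangesWarning(name string, attr attrSchema) string {
	if attr.Computed && !attr.Optional {
		return fmt.Sprintf("ignore_changes on %q has no effect: the attribute is read-only", name)
	}
	return "" // Optional+Computed attributes can be set in config, so ignore_changes applies
}

func main() {
	fmt.Println(ignoreChangesWarning("arn", attrSchema{Computed: true}))
	fmt.Println(ignoreChangesWarning("tags", attrSchema{Optional: true, Computed: true}) == "")
}
```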
This commit introduces a capsule type, `TypeType`, which is used to
extricate type information from the console-only `type` function. In
combination with the `TypeType` mark, this allows us to restrict the use
of this function to top-level display of a value's type. Any other use
of `type()` will result in an error diagnostic.
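A hedged approximation of such a capsule type using go-cty's public API (the names are assumed; the real `TypeType` lives in Terraform's internals):

```go
package main

import (
	"fmt"
	"reflect"

	"github.com/zclconf/go-cty/cty"
)

// typeType smuggles a cty.Type through the value system as an opaque
// capsule value, in the spirit of the TypeType described above.
var typeType = cty.Capsule("type", reflect.TypeOf(cty.Type{}))

// typeOf wraps a value's type in the capsule, as a console-style `type`
// function might.
func typeOf(v cty.Value) cty.Value {
	t := v.Type()
	return cty.CapsuleVal(typeType, &t)
}

func main() {
	wrapped := typeOf(cty.StringVal("boop"))
	inner := wrapped.EncapsulatedValue().(*cty.Type)
	fmt.Println(inner.FriendlyName()) // string
}
```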
These instances of marks.Raw usage were semantically only testing the
properties of combining multiple marks. Testing this with an arbitrary
value for the mark is just as valid and clearer.
The console-only `type` function allows interrogation of any value's
type. An implementation quirk is that we use a cty.Mark to allow the
console to display this type information without the usual HCL quoting.
For example:
> type("boop")
string
instead of:
> type("boop")
"string"
Because these marks can propagate when used in complex expressions,
using the type function as part of a complex expression could result in
this "print as raw" mark being attached to a collection. When this
happened, it would result in a crash when we tried to iterate over a
marked value.
The `type` function was never intended to be used in this way, which is
why its use is limited to the console command. Its purpose was as a
pseudo-builtin, used only at the top level to display the type of a
given value.
This commit goes some way to preventing the use of the `type` function
in complex expressions, by refusing to display any non-string value
which was marked by `type`, or contains a sub-value which was so marked.
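A sketch of that guard using go-cty marks (the mark value and helper name are stand-ins):

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// rawMark stands in for the console's internal "print as raw" mark.
const rawMark = "console-raw"

// displayable refuses any non-string value carrying the mark, or any
// value containing a marked sub-value, before iteration could crash.
func displayable(v cty.Value) bool {
	if v.HasMark(rawMark) {
		return v.Type() == cty.String
	}
	return !v.ContainsMarked()
}

func main() {
	ok := cty.StringVal("string").Mark(rawMark)
	bad := cty.TupleVal([]cty.Value{cty.StringVal("string").Mark(rawMark)})
	fmt.Println(displayable(ok), displayable(bad)) // true false
}
```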
The JSON plan configuration data now includes a `full_name` field for
providers. This addition warrants a backwards-compatible increment to
the version number.
When rendering configuration as JSON, we have a single map of provider
configurations at the top level, since these are globally applicable.
Each resource has an opaque key into this map which points at the
configuration data for the provider.
This commit fixes two bugs in this implementation:
- Resources in non-root modules had an invalid provider config key,
which meant that there was never a valid reference to the provider
config block. These keys were prefixed with the local module name
instead of the path to the module. This is now corrected.
- Modules with passed provider configs would point to either an empty
provider config block or one which is not present at all. This has
been fixed so that these resources point to the provider config block
from the calling module (or wherever up the module tree it was
originally defined).
We also add a "full_name" key-value pair to the provider config block,
with the entire fully-qualified provider name including hostname and
namespace.
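A hypothetical sketch of the corrected key scheme; the exact key format shown is an assumption for illustration:

```go
package main

import "fmt"

// providerConfigKey scopes keys by the full module address rather than
// the local module call name alone, per the fix described above.
func providerConfigKey(moduleAddr, localName string) string {
	if moduleAddr == "" {
		return localName // root module, e.g. "aws"
	}
	return moduleAddr + ":" + localName // e.g. "module.child:aws"
}

func main() {
	fmt.Println(providerConfigKey("", "aws"))
	fmt.Println(providerConfigKey("module.child", "aws"))
}
```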
Preconditions and postconditions for resources and data sources may not
refer to the address of the containing resource or data source. This
commit adds a parse-time validation for this rule.
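A rough sketch of such a parse-time check for managed resources, using HCL's traversal API (the helper name and address handling are simplified assumptions; data sources, addressed via a `data.` prefix, are omitted):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclsyntax"
)

// checkNoSelfRef scans the variables an expression traverses and rejects
// references that start with the containing resource's own address.
func checkNoSelfRef(resourceType, resourceName string, expr hcl.Expression) error {
	for _, traversal := range expr.Variables() {
		if traversal.RootName() != resourceType || len(traversal) < 2 {
			continue
		}
		if attr, ok := traversal[1].(hcl.TraverseAttr); ok && attr.Name == resourceName {
			return fmt.Errorf("condition may not refer to %s.%s, the containing resource", resourceType, resourceName)
		}
	}
	return nil
}

func main() {
	expr, _ := hclsyntax.ParseExpression([]byte(`aws_instance.example.arn != ""`), "cond.tf", hcl.Pos{Line: 1, Column: 1})
	fmt.Println(checkNoSelfRef("aws_instance", "example", expr))
}
```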
This is not currently gated by the experiment only because it is awkward
to do so in the context of evaluationStateData, which doesn't have any
concept of experiments at the moment.
If the configuration contains preconditions and/or postconditions for any
objects, we'll check them during evaluation of those objects and generate
errors if any do not pass.
The handling of postconditions is particularly interesting here because
we intentionally evaluate them _after_ we've committed our record of the
resulting side-effects to the state/plan, with the intent that future
plans against the same object will keep failing until the problem is
addressed either by changing the object so that it passes the
postcondition, or changing the postcondition to accept the current
object. That then
avoids the need for us to proactively taint managed resources whose
postconditions fail, as we would for provisioner failures: instead, we can
leave the resolution approach up to the user to decide.
Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>