PreDiff and PostDiff hooks were designed to be called immediately before
and after the PlanResourceChange calls to the provider. Probably due to
the confusing legacy naming of the hooks, these were scattered about the
nodes involved with planning, causing the hooks to be called in a number
of places where they were never intended, including data sources and
destroy plans. Since these hooks are no longer used at all, we can
remove the extra calls with no effect.
If we choose in the future to call PlanResourceChange for resource
destroy plans, the hooks can be re-inserted (even though they are
currently unused) into the new code path, which must diverge from the
current combined path for managed resources and data sources.
The UI hooks for data source reads were missed during planning. Move the
hook calls to immediately before and after the ReadDataSource calls to
ensure they are called during both plan and apply.
Our existing functionality for dealing with references generally only has
to concern itself with one level of references at a time, and only within
one module, because we use it to draw a dependency graph which then ends
up reflecting the broader context.
However, there are some situations where it's handy to be able to ask
questions about the indirect contributions to a particular expression in
the configuration, particularly for additional hints in the user interface
where we're just providing some extra context rather than changing
behavior.
This new "globalref" package therefore aims to be the home for algorithms
for use-cases like this. It introduces its own special "Reference" type
that wraps addrs.Reference to additionally annotate it with the
usually-implied context about where the reference would be evaluated.
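As a sketch of the rough shape this takes (the field names here are
assumptions for illustration, not necessarily those of the real
package):

```go
package globalref

import "github.com/hashicorp/terraform/internal/addrs"

// Reference wraps a local reference together with the usually-implied
// context it would be evaluated in, so that references originating in
// different modules can be compared and followed across module
// boundaries. (Illustrative sketch; the field names are assumptions.)
type Reference struct {
	// ModuleAddr is the module instance whose scope LocalRef would be
	// resolved in.
	ModuleAddr addrs.ModuleInstance

	// LocalRef is the reference as parsed from the configuration.
	LocalRef *addrs.Reference
}
```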
With that building block we can therefore ask questions whose answers
might involve discussing references in multiple modules at once, such as
"which resources directly or indirectly contribute to this expression?",
including indirect hops through input variables or output values which
would therefore change the evaluation context.
The current implementations of this work by mapping references onto the
static configuration expressions that they refer to, which is a pretty
broad and conservative approach that unfortunately loses accuracy when
confronted with complex expressions that might take dynamic actions on
the contents of an object. My hunch is that this'll be good enough to
get some initial small use-cases solved, though there's plenty of room
for improvement in accuracy.
It's somewhat ironic that this sort of "what is this value built from?"
question is the use-case I had in mind when I designed the "marks"
feature in cty, yet we've ended up putting it to an unexpected but
still valid use in Terraform for sensitivity analysis, and our current
handling of that isn't really tight enough to permit other concurrent
uses of marks for other use-cases. I expect we can address that later,
and so maybe we'll try for a more accurate version of these analyses at
a later date, but my hunch is that this'll be good enough for us to
still get some good use out of it in the near future, particularly for
helping to understand where unknown values came from and for tailoring
our refresh results in plan output to deemphasize detected changes that
couldn't possibly have contributed to the proposed plan.
Previously the "providers" package contained only a type for representing
the schema of a particular object within a provider, and the terraform
package had the responsibility of aggregating many of those together to
describe the entire surface area of a provider.
Here we move what was previously terraform.ProviderSchema to instead be
providers.Schemas, retaining its existing API otherwise, and leave behind
a type alias to allow us to gradually update other references over time.
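Concretely, the shim left behind is just a Go type alias in the old
location; a minimal sketch of the idea:

```go
package terraform

import "github.com/hashicorp/terraform/internal/providers"

// ProviderSchema is a deprecated alias for the relocated type,
// retained so that existing references keep compiling while callers
// migrate to the new location over time.
type ProviderSchema = providers.Schemas
```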
We've gradually been shrinking down the responsibilities of the
"terraform" package to just representing the graph components and
behaviors anyway, but the specific motivation for doing this _now_ is to
allow for other packages to both be called by the terraform package _and_
work with provider schemas at the same time, without creating a package
dependency cycle: instead, these other packages can just import the
"providers" package and not need to import the "terraform" package at all.
For now this does still leave the responsibility for _building_ a
providers.Schemas object over in the "terraform" package, because it's
currently doing that as part of some larger work that isn't easily
separable, and so reorganizing that would be a more involved and riskier
change than just moving the existing type elsewhere.
We've ended up implementing something approximately like this in a few
places now, so this is a centralized version that we can consolidate on
moving forward, gradually removing that duplication.
Custom variable validations specified using JSON syntax would always
parse error messages as string literals, even if they included template
expressions. We need to be as backwards compatible with this behaviour
as possible, which results in this complex fallback logic. More detail
about this is in the extensive code comments.
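As a loose sketch of the fallback's shape, using the upstream HCL API
(the helper name is hypothetical; the real logic is in the code
comments mentioned above):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclsyntax"
	"github.com/zclconf/go-cty/cty"
)

// errorMessageValue first tries to interpret the raw JSON string as an
// HCL template; if that fails to parse or evaluate, it falls back to
// the literal string, preserving the old behaviour for existing JSON
// configurations. (Hypothetical helper, not the real implementation.)
func errorMessageValue(raw string, ctx *hcl.EvalContext) cty.Value {
	expr, diags := hclsyntax.ParseTemplate([]byte(raw), "error_message", hcl.InitialPos)
	if !diags.HasErrors() {
		if v, moreDiags := expr.Value(ctx); !moreDiags.HasErrors() {
			return v
		}
	}
	// Legacy behaviour: the raw string, taken literally.
	return cty.StringVal(raw)
}

func main() {
	ctx := &hcl.EvalContext{
		Variables: map[string]cty.Value{
			"var": cty.ObjectVal(map[string]cty.Value{
				"name": cty.StringVal("example"),
			}),
		},
	}
	fmt.Println(errorMessageValue("Value of ${var.name} is invalid.", ctx).AsString())
}
```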
During the validation walk, we attempt to proactively evaluate check
rule condition and error message expressions. This will help catch some
errors as early as possible.
At present, resource values in the validation walk are of dynamic type.
This means that any references to resources will cause validation to be
delayed, rather than presenting useful errors. Validation may still
catch other errors, and any future changes which cause better type
propagation will result in better validation too.
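A contrived illustration using cty directly (not Terraform code): a
comparison against a wholly-unknown value is itself unknown, so such a
rule can only be deferred rather than decided:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// During the validation walk, a reference to a resource attribute
	// resolves to a value of dynamic (wholly unknown) type.
	resourceAttr := cty.DynamicVal

	// Any condition built from it is unknown rather than true/false,
	// so the check must be delayed until a later walk.
	result := resourceAttr.Equals(cty.StringVal("expected"))
	fmt.Println(result.IsKnown()) // false
}
```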
Error messages for preconditions, postconditions, and custom variable
validations have until now been string literals. This commit changes
this to treat the field as an HCL expression, which must evaluate to a
string. Most commonly this will either be a string literal or a template
expression.
When the check rule condition is evaluated, we also evaluate the error
message. This means that the error message should always evaluate to a
string value, even if the condition passes. If it does not, this will
result in an error diagnostic.
If the condition fails, and the error message also fails to evaluate, we
fall back to a default error message. This means that the check rule
failure will still be reported, alongside diagnostics explaining why the
custom error message failed to render.
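Putting those rules together, the evaluation flow is roughly like the
following sketch (all names here are illustrative, not Terraform's real
internals):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclsyntax"
	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/convert"
)

const fallbackMessage = "The error message expression itself failed; see the other diagnostics."

// evalCheckRule evaluates both expressions unconditionally, requires
// the message to be a string even when the condition passes, and falls
// back to a generic message when a failing condition's message cannot
// be rendered, so the failure itself is still reported.
func evalCheckRule(condSrc, msgSrc string, ctx *hcl.EvalContext) (failed bool, message string, diags hcl.Diagnostics) {
	condExpr, d := hclsyntax.ParseExpression([]byte(condSrc), "condition", hcl.InitialPos)
	diags = append(diags, d...)
	msgExpr, d := hclsyntax.ParseExpression([]byte(msgSrc), "error_message", hcl.InitialPos)
	diags = append(diags, d...)

	condVal, condDiags := condExpr.Value(ctx)
	diags = append(diags, condDiags...)

	// The message is evaluated even when the condition passes; any
	// evaluation errors become diagnostics of their own.
	msgVal, msgDiags := msgExpr.Value(ctx)
	diags = append(diags, msgDiags...)

	message = fallbackMessage
	if s, err := convert.Convert(msgVal, cty.String); err == nil && s.IsKnown() && !s.IsNull() {
		message = s.AsString()
	}
	// (The real behaviour also emits an error diagnostic when the
	// message does not evaluate to a string; omitted for brevity.)

	condBool, err := convert.Convert(condVal, cty.Bool)
	failed = !condDiags.HasErrors() && err == nil &&
		condBool.IsKnown() && !condBool.IsNull() && condBool.False()
	return failed, message, diags
}

func main() {
	ctx := &hcl.EvalContext{
		Variables: map[string]cty.Value{
			"var": cty.ObjectVal(map[string]cty.Value{
				"port": cty.NumberIntVal(99999),
			}),
		},
	}
	failed, msg, _ := evalCheckRule(
		`var.port <= 65535`,
		`"Port ${var.port} is not valid."`,
		ctx,
	)
	fmt.Println(failed, msg) // true Port 99999 is not valid.
}
```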
As part of this change, we also necessarily remove the heuristic about
the error message format. This guidance can be re-added in future as
part of a configuration hint system.
`go-slug` has been updated to not upload `terraform.tfstate` to the slug
so that a user will no longer receive the error message about the
leftover state file after migrating from the local backend to TFC.
This commit stems from the change making post-plan the default run task
stage at the time of writing. Since pre-apply is under internal
revision, we have removed the block that polls the pre-apply stage
until the team decides to re-add support for pre-apply run tasks.
This change will await the completion of pre-apply run tasks if they
exist on a run and then report the results.
It also adds an abstraction for interacting with cloud integrations
such as policy checking and cost estimation that simplifies and unifies
their output, although I did not go so far as to refactor those callers
to use it yet.