Custom variable validations specified using JSON syntax would always
parse error messages as string literals, even if they included template
expressions. We need to be as backwards compatible with this behaviour
as possible, which results in this complex fallback logic. More detail
about this is in the extensive code comments.
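For example (variable name and message invented), in a .tf.json file the
`${var.instance_type}` sequence in this error_message was previously taken
literally rather than evaluated as a template:

```json
{
  "variable": {
    "instance_type": {
      "type": "string",
      "validation": {
        "condition": "${var.instance_type != \"\"}",
        "error_message": "Instance type ${var.instance_type} is not valid."
      }
    }
  }
}
```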
During the validation walk, we attempt to proactively evaluate check
rule condition and error message expressions. This will help catch some
errors as early as possible.
At present, resource values in the validation walk are of dynamic type.
This means that any references to resources will cause validation to be
delayed, rather than presenting useful errors. Validation may still
catch other errors, and any future changes which cause better type
propagation will result in better validation too.
Error messages for preconditions, postconditions, and custom variable
validations have until now been string literals. This commit changes
this to treat the field as an HCL expression, which must evaluate to a
string. Most commonly this will either be a string literal or a template
expression.
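For example (a hypothetical variable), a template expression can now
interpolate the offending value into the message, where previously only a
literal string was accepted:

```hcl
variable "port" {
  type = number

  validation {
    condition     = var.port > 0 && var.port < 65536
    error_message = "Port ${var.port} is out of range: must be between 1 and 65535."
  }
}
```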
When the check rule condition is evaluated, we also evaluate the error
message. This means that the error message should always evaluate to a
string value, even if the condition passes. If it does not, this will
result in an error diagnostic.
If the condition fails, and the error message also fails to evaluate, we
fall back to a default error message. This means that the check rule
failure will still be reported, alongside diagnostics explaining why the
custom error message failed to render.
As part of this change, we also necessarily remove the heuristic about
the error message format. This guidance can be re-added in future as part
of a configuration hint system.
This commit stems from the change to make post-plan the default run task
stage at the time of writing. Since pre-apply is under internal revision,
we have removed the block that polls the pre-apply stage until the team
decides to re-add support for pre-apply run tasks.
This change will await the completion of pre-apply run tasks if they
exist on a run and then report the results.
It also adds an abstraction for interacting with cloud integrations such
as policy checking and cost estimation, which simplifies and unifies their
output, although I did not go so far as to refactor those callers to use it yet.
When calculating the unknown values for JSON plan output, we would
previously recursively call the `unknownAsBool` function on the current
sub-tree twice, if any values were unknown. This was wasteful, but not
noticeable for normal Terraform resource shapes.
However for deeper nested object values, such as Kubernetes manifests,
this was a severe performance problem, causing `terraform show -json` to
take several hours to render a plan.
This commit reuses the already calculated unknown value for the subtree,
and adds benchmark coverage to demonstrate the improvement.
* ignore_changes attributes must exist in schema
Add a test verifying that attempting to add a nonexistent attribute to
ignore_changes throws an error.
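For instance (names illustrative), this now fails validation because the
attribute is not defined in the resource type's schema:

```hcl
resource "aws_instance" "example" {
  lifecycle {
    # Error: "not_an_attribute" is not in the aws_instance schema
    ignore_changes = [not_an_attribute]
  }
}
```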
* ignore_changes cannot be used with Computed attrs
Return a warning if a Computed attribute is present in ignore_changes,
unless the attribute is also Optional.
ignore_changes on a non-Optional Computed attribute is a no-op, so the user
likely did not want to set this in config.
An Optional Computed attribute, however, is still subject to ignore_changes
behaviour, since it is possible to make changes in the configuration that
Terraform must ignore.
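As a sketch (assuming `arn` is a Computed, non-Optional attribute in the
provider's schema), this configuration now produces the warning:

```hcl
resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0"
  instance_type = "t2.micro"

  lifecycle {
    # Warning: "arn" is read-only (Computed, non-Optional), so ignoring
    # changes to it has no effect
    ignore_changes = [arn]
  }
}
```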
This commit introduces a capsule type, `TypeType`, which is used to
extract type information from the console-only `type` function. In
combination with the `TypeType` mark, this allows us to restrict the use
of this function to top-level display of a value's type. Any other use
of `type()` will result in an error diagnostic.
These instances of marks.Raw usage were semantically only testing the
properties of combining multiple marks. Testing this with an arbitrary
value for the mark is just as valid and clearer.
The console-only `type` function allows interrogation of any value's
type. An implementation quirk is that we use a cty.Mark to allow the
console to display this type information without the usual HCL quoting.
For example:
> type("boop")
string
instead of:
> type("boop")
"string"
Because these marks can propagate when used in complex expressions,
using the type function as part of a complex expression could result in
this "print as raw" mark being attached to a collection. When this
happened, it would result in a crash when we tried to iterate over a
marked value.
The `type` function was never intended to be used in this way, which is
why its use is limited to the console command. Its purpose was as a
pseudo-builtin, used only at the top level to display the type of a
given value.
This commit goes some way to preventing the use of the `type` function
in complex expressions, by refusing to display any non-string value
which was marked by `type`, or contains a sub-value which was so marked.
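For example, embedding the result in a larger value:
> [type("boop")]
now produces an error diagnostic rather than a crash.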
The JSON plan configuration data now includes a `full_name` field for
providers. This addition warrants a backwards compatible increment to
the version number.
When rendering configuration as JSON, we have a single map of provider
configurations at the top level, since these are globally applicable.
Each resource has an opaque key into this map which points at the
configuration data for the provider.
This commit fixes two bugs in this implementation:
- Resources in non-root modules had an invalid provider config key,
which meant that there was never a valid reference to the provider
config block. These keys were prefixed with the local module name
instead of the path to the module. This is now corrected.
- Modules with passed provider configs would point to either an empty
provider config block or one which is not present at all. This has
been fixed so that these resources point to the provider config block
from the calling module (or wherever up the module tree it was
originally defined).
We also add a "full_name" key-value pair to the provider config block,
with the entire fully-qualified provider name including hostname and
namespace.
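As a rough sketch of the resulting structure (provider and module names
invented, and the key format opaque and illustrative), entries in the
top-level provider_config map now look like:

```json
{
  "provider_config": {
    "aws": {
      "name": "aws",
      "full_name": "registry.terraform.io/hashicorp/aws"
    },
    "module.network:aws": {
      "name": "aws",
      "full_name": "registry.terraform.io/hashicorp/aws"
    }
  }
}
```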
Preconditions and postconditions for resources and data sources may not
refer to the address of the containing resource or data source. This
commit adds a parse-time validation for this rule.
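For example (resource type and names illustrative), this condition refers
back to its containing resource's own address and is now rejected when the
configuration is parsed:

```hcl
resource "aws_instance" "example" {
  instance_type = "t2.micro"

  lifecycle {
    precondition {
      # Error: reference to the containing resource's own address
      condition     = aws_instance.example.instance_type != ""
      error_message = "Instance type must not be empty."
    }
  }
}
```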
This is not currently gated by the experiment only because it is awkward
to do so in the context of evaluationStateData, which doesn't have any
concept of experiments at the moment.
If the configuration contains preconditions and/or postconditions for any
objects, we'll check them during evaluation of those objects and generate
errors if any do not pass.
The handling of postconditions is particularly interesting here because
we intentionally evaluate them _after_ we've committed our record of the
resulting side-effects to the state/plan, with the intent that future
plans against the same object will keep failing until the problem is
addressed, either by changing the object so it would pass the
postcondition or by changing the postcondition to accept the current
object. That then
avoids the need for us to proactively taint managed resources whose
postconditions fail, as we would for provisioner failures: instead, we can
leave the resolution approach up to the user to decide.
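As a sketch of the behaviour being described (resource and attribute names
illustrative), a postcondition might look like:

```hcl
resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0"
  instance_type = "t2.micro"

  lifecycle {
    postcondition {
      # Evaluated after the new state is already recorded, so a failure
      # here keeps subsequent plans failing until it is resolved
      condition     = self.public_ip != ""
      error_message = "The instance must be assigned a public IP address."
    }
  }
}
```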
Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>
If a resource or output value has a precondition or postcondition rule
then anything the condition depends on is a dependency of the object,
because the condition rules will be evaluated as part of visiting the
relevant graph node.
This allows precondition and postcondition checks to be declared for
resources and output values as long as the preconditions_postconditions
experiment is enabled.
Terraform Core doesn't currently know anything about these features, so
as of this commit declaring them does nothing at all.
This construct of a block containing a condition and an error message will
be useful for other sorts of blocks defining expectations or contracts, so
we'll give it a more generic name in anticipation of it being used in
other situations.
Reference: https://github.com/hashicorp/terraform/issues/30373
This change forward ports the `legacy_type_system` boolean fields in the `ApplyResourceChange.Response` and `PlanResourceChange.Response` messages that existed in protocol version 5, so that existing terraform-plugin-sdk/v2 providers can be muxed with protocol version 6 providers (e.g. terraform-plugin-framework) while also taking advantage of the newer protocol features. This functionality should not be used by any providers or SDKs except those built with terraform-plugin-sdk.
Updated via:
```shell
cp docs/plugin-protocol/tfplugin6.1.proto docs/plugin-protocol/tfplugin6.2.proto
# Copy legacy_type_system fields from tfplugin5.2.proto into ApplyResourceChange.Response and PlanResourceChange.Response
rm internal/tfplugin6/tfplugin6.proto
ln -s ../../docs/plugin-protocol/tfplugin6.2.proto internal/tfplugin6/tfplugin6.proto
go run tools/protobuf-compile/protobuf-compile.go `pwd`
# Updates to internal/plugin6/grpc_provider.go
```
Previously we were just returning a string representation of the file mode,
which spends more characters on the irrelevant permission bits than it
does on the directory entry type, and is presented in a Unix-centric
format that likely won't be familiar to the user of a Windows system.
Instead, we'll recognize a few specific directory entry types that seem
worth mentioning by name, and then use a generic message for the rest.
The original motivation here was actually to deal with the fact that our
tests for this function were previously not portable due to the error
message leaking system-specific permission details that are not relevant
to the test. Rather than just directly addressing that portability
problem, I took the opportunity to improve the error messages at the same
time.
However, because of that initial focus there are only actually tests here
for the directory case. A test that tries to test any of these other file
modes would not be portable and in some cases would require superuser
access, so we'll just leave those cases untested for the moment since they
weren't tested before anyway, and so we've not _lost_ any test coverage
here.
Terraform uses "implied" move statements to represent the situation where
it automatically handles a switch from count to no-count on a resource.
Because that situation requires targeting only a specific resource
instance inside a specific module instance, implied move statements are
always presented as if they had been declared in the root module and then
traversed through the exact module instance path to reach the target
resource.
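For example, if `module.a.aws_instance.b` previously used `count` and no
longer does, the implied statement moves `module.a.aws_instance.b[0]` to
`module.a.aws_instance.b`, exactly as if that move had been declared in
the root module (addresses here are illustrative).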
However, that means they can potentially cross a module package boundary,
if the changed resource belongs to an external module. Normally we
prohibit that to avoid the root module depending on implementation details
of the called module, but Terraform generates these implied statements
based only on information in the called module and so there's no need to
apply that same restriction to implied move statements, which will always
have source and destination addresses belonging to the same module
instance.
This change therefore fixes a misbehavior where Terraform would reject
an attempt to switch from no-count to count in a called module, where
previously the author of the calling configuration had no recourse to fix
it because the change has actually happened upstream.
Now that variable evaluation checks for a nil expression the graph
transformer does not need to generate a synthetic expression for
variable defaults. This means that all default handling is now located
in one place, and we are not surprised by a configuration expression
showing up which doesn't actually exist in the configuration.
Rename nodeModuleVariable.evalModuleCallArgument to evalModuleVariable.
This method is no longer handling only the module call argument, it is
also dealing with the variable declaration defaults and validation
statements.
Add additional tests for validation with a non-nullable variable.
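A minimal sketch of such a variable (names and messages invented):

```hcl
variable "example" {
  type     = string
  default  = "fallback"
  nullable = false

  validation {
    condition     = length(var.example) > 0
    error_message = "The example variable must not be empty."
  }
}
```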
In earlier Terraform versions we had an extra validation step prior to
the graph walk which tried to partially validate root module input
variable values (just checking their type constraints) and then return
error messages which specified as accurately as possible where the value
had originally come from.
We're now handling that sort of validation exclusively during the graph
walk so that we can share the main logic between both root module and
child module variable values, but previously that shared code wasn't
able to generate such specific information about where the values had
originated, because it was adapted from code originally written to only
deal with child module variables.
Here then we restore a similar level of detail as before, when we're
processing root module variables. For child module variables, we use
synthetic InputValue objects which state that the value was declared
in the configuration, thus causing us to produce a similar sort of error
message as we would've before which includes a source range covering
the argument expression in the calling module block.