This commit introduces a capsule type, `TypeType`, which is used to
carry type information out of the console-only `type` function. In
combination with the `TypeType` mark, this allows us to restrict the use
of this function to top-level display of a value's type. Any other use
of `type()` will result in an error diagnostic.
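As a rough sketch of the mechanism (the package name and helper below are
assumptions for illustration, not the actual implementation), a cty capsule
type can wrap a cty.Type so that the type information can travel with a
value:

```go
package marks // hypothetical package name, for illustration only

import (
	"reflect"

	"github.com/zclconf/go-cty/cty"
)

// TypeType is a capsule type whose encapsulated value is a cty.Type,
// which lets the console recognize results produced by the type() function.
var TypeType = cty.Capsule("type", reflect.TypeOf(cty.Type{}))

// TypeValue wraps a cty.Type in a TypeType capsule value. (Hypothetical
// helper, shown only to illustrate how the capsule would be constructed.)
func TypeValue(ty cty.Type) cty.Value {
	return cty.CapsuleVal(TypeType, &ty)
}
```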
These instances of marks.Raw usage were semantically only testing the
properties of combining multiple marks. Testing this with an arbitrary
value for the mark is just as valid and clearer.
The console-only `type` function allows interrogation of any value's
type. An implementation quirk is that we use a cty.Mark to allow the
console to display this type information without the usual HCL quoting.
For example:

```
> type("boop")
string
```

instead of:

```
> type("boop")
"string"
```
Because these marks can propagate when used in complex expressions,
using the type function as part of a complex expression could result in
this "print as raw" mark being attached to a collection. When this
happened, it would result in a crash when we tried to iterate over a
marked value.
The `type` function was never intended to be used in this way, which is
why its use is limited to the console command. Its purpose was as a
pseudo-builtin, used only at the top level to display the type of a
given value.
This commit goes some way to preventing the use of the `type` function
in complex expressions, by refusing to display any non-string value
which was marked by `type`, or contains a sub-value which was so marked.
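A minimal sketch of such a check, assuming a hypothetical mark value named
rawTypeMark (the real mark value lives elsewhere), using cty's deep-unmark
helper to look for the mark anywhere inside a value:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// rawTypeMark is a hypothetical stand-in for the console's "print as raw" mark.
var rawTypeMark interface{} = "rawType"

// refuseToDisplay reports whether a console result should be rejected:
// anything that is not a plain marked string at the top level but still
// carries the mark somewhere inside it.
func refuseToDisplay(val cty.Value) bool {
	if val.Type().Equals(cty.String) && val.HasMark(rawTypeMark) {
		return false // the intended top-level use of type()
	}
	_, pvms := val.UnmarkDeepWithPaths()
	for _, pvm := range pvms {
		if _, ok := pvm.Marks[rawTypeMark]; ok {
			return true
		}
	}
	return false
}

func main() {
	typeName := cty.StringVal("string").Mark(rawTypeMark)
	fmt.Println(refuseToDisplay(typeName))                            // false
	fmt.Println(refuseToDisplay(cty.TupleVal([]cty.Value{typeName}))) // true
}
```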
The JSON plan configuration data now includes a `full_name` field for
providers. This addition warrants a backwards-compatible increment to
the version number.
When rendering configuration as JSON, we have a single map of provider
configurations at the top level, since these are globally applicable.
Each resource has an opaque key into this map which points at the
configuration data for the provider.
This commit fixes two bugs in this implementation:
- Resources in non-root modules had an invalid provider config key,
which meant that there was never a valid reference to the provider
config block. These keys were prefixed with the local module name
instead of the path to the module. This is now corrected.
- Modules with passed provider configs would point to either an empty
provider config block or one which is not present at all. This has
been fixed so that these resources point to the provider config block
from the calling module (or wherever up the module tree it was
originally defined).
We also add a "full_name" key-value pair to the provider config block,
containing the fully-qualified provider name, including hostname and
namespace.
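As a rough illustration of the resulting shape (only "full_name" comes from
this change; the other field names are assumptions, not a definitive
schema), a provider config entry might decode into something like:

```go
package jsonshape // hypothetical package name, for illustration only

// providerConfig sketches one entry in the top-level provider_config map of
// the JSON plan configuration.
type providerConfig struct {
	Name          string `json:"name"`                     // local name, e.g. "aws"
	FullName      string `json:"full_name"`                // e.g. "registry.terraform.io/hashicorp/aws"
	Alias         string `json:"alias,omitempty"`          // provider alias, if any
	ModuleAddress string `json:"module_address,omitempty"` // module containing the config block
}
```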
Preconditions and postconditions for resources and data sources may not
refer to the address of the containing resource or data source. This
commit adds a parse-time validation for this rule.
The only reason this is not currently gated behind the experiment is that
doing so is awkward in the context of evaluationStateData, which doesn't
have any concept of experiments at the moment.
If the configuration contains preconditions and/or postconditions for any
objects, we'll check them during evaluation of those objects and generate
errors if any do not pass.
The handling of post-conditions is particularly interesting here because
we intentionally evaluate them _after_ we've committed our record of the
resulting side-effects to the state/plan, with the intent that future
plans against the same object will keep failing until the problem is
addressed either by changing the object so it would pass the postcondition
or changing the postcondition to accept the current object. That then
avoids the need for us to proactively taint managed resources whose
postconditions fail, as we would for provisioner failures: instead, we can
leave the resolution approach up to the user to decide.
Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>
If a resource or output value has a precondition or postcondition rule
then anything the condition depends on is a dependency of the object,
because the condition rules will be evaluated as part of visiting the
relevant graph node.
This allows precondition and postcondition checks to be declared for
resources and output values as long as the preconditions_postconditions
experiment is enabled.
Terraform Core doesn't currently know anything about these features, so
as of this commit declaring them does nothing at all.
This construct of a block containing a condition and an error message will
be useful for other sorts of blocks defining expectations or contracts, so
we'll give it a more generic name in anticipation of it being used in
other situations.
Reference: https://github.com/hashicorp/terraform/issues/30373
This change forward-ports the `legacy_type_system` boolean fields in the
`ApplyResourceChange.Response` and `PlanResourceChange.Response` messages
that existed in protocol version 5, so that existing
terraform-plugin-sdk/v2 providers can be muxed with protocol version 6
providers (e.g. terraform-plugin-framework) while also taking advantage of
the newer protocol features. This functionality should not be used by any
providers or SDKs except those built with terraform-plugin-sdk.
Updated via:
```shell
cp docs/plugin-protocol/tfplugin6.1.proto docs/plugin-protocol/tfplugin6.2.proto
# Copy legacy_type_system fields from tfplugin5.2.proto into ApplyResourceChange.Response and PlanResourceChange
rm internal/tfplugin6/tfplugin6.proto
ln -s ../../docs/plugin-protocol/tfplugin6.2.proto internal/tfplugin6/tfplugin6.proto
go run tools/protobuf-compile/protobuf-compile.go `pwd`
# Updates to internal/plugin6/grpc_provider.go
```
Previously we were just returning a string representation of the file mode,
which spends more characters on the irrelevant permission bits than it
does on the directory entry type, and is presented in a Unix-centric
format that likely won't be familiar to the user of a Windows system.
Instead, we'll recognize a few specific directory entry types that seem
worth mentioning by name, and then use a generic message for the rest.
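A minimal sketch of that classification (the exact message wording is
illustrative, not the shipped text):

```go
package main

import (
	"fmt"
	"os"
)

// describeFileMode names a few specific directory entry types and falls back
// to a generic message for everything else, rather than printing the full
// Unix-style mode string with its permission bits.
func describeFileMode(mode os.FileMode) string {
	switch {
	case mode.IsDir():
		return "directory"
	case mode&os.ModeSymlink != 0:
		return "symbolic link"
	case mode&os.ModeNamedPipe != 0:
		return "named pipe"
	case mode&os.ModeSocket != 0:
		return "socket"
	case mode&os.ModeDevice != 0:
		return "device file"
	default:
		return "unsupported file type"
	}
}

func main() {
	if info, err := os.Stat("."); err == nil {
		fmt.Println(describeFileMode(info.Mode())) // "directory"
	}
}
```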
The original motivation here was actually to deal with the fact that our
tests for this function were previously not portable due to the error
message leaking system-specific permission details that are not relevant
to the test. Rather than just directly addressing that portability
problem, I took the opportunity to improve the error messages at the same
time.
However, because of that initial focus there are only actually tests here
for the directory case. A test that tries to exercise any of the other file
modes would not be portable, and in some cases would require superuser
access, so we'll just leave those cases untested for the moment; they
weren't tested before anyway, so we've not _lost_ any test coverage here.
Terraform uses "implied" move statements to represent the situation where
it automatically handles a switch from count to no-count on a resource.
Because that situation requires targeting only a specific resource
instance inside a specific module instance, implied move statements are
always presented as if they had been declared in the root module and then
traversed through the exact module instance path to reach the target
resource.
However, that means they can potentially cross a module package boundary,
if the changed resource belongs to an external module. Normally we
prohibit that to avoid the root module depending on implementation details
of the called module, but Terraform generates these implied statements
based only on information in the called module and so there's no need to
apply that same restriction to implied move statements, which will always
have source and destination addresses belonging to the same module
instance.
This change therefore fixes a misbehavior where Terraform would reject
an attempt to switch from no-count to count in a called module, a
situation the author of the calling configuration previously had no
recourse to fix, because the change actually happened upstream.
Now that variable evaluation checks for a nil expression the graph
transformer does not need to generate a synthetic expression for
variable defaults. This means that all default handling is now located
in one place, and we are not surprised by a configuration expression
showing up which doesn't actually exist in the configuration.
Rename nodeModuleVariable.evalModuleCallArgument to evalModuleVariable.
This method no longer handles only the module call argument; it also
deals with the variable declaration defaults and validation statements.
Add an additional test for validation with a non-nullable variable.
In earlier Terraform versions we had an extra validation step prior to
the graph walk which tried to partially validate root module input
variable values (just checking their type constraints) and then return
error messages which specified as accurately as possible where the value
had originally come from.
We're now handling that sort of validation exclusively during the graph
walk so that we can share the main logic between both root module and
child module variable values, but previously that shared code wasn't
able to generate such specific information about where the values had
originated, because it was adapted from code originally written to only
deal with child module variables.
Here then we restore a similar level of detail as before, when we're
processing root module variables. For child module variables, we use
synthetic InputValue objects which state that the value was declared
in the configuration, thus causing us to produce a similar sort of error
message as we would've before which includes a source range covering
the argument expression in the calling module block.
Previously we had three different layers all thinking they were
responsible for substituting a default value for an unset root module
variable:
- the local backend, via logic in backend.ParseVariableValues
- the context.Plan function (and other similar functions) trying to
preprocess the input variables using
terraform.mergeDefaultInputVariableValues.
- the newer prepareFinalInputVariableValue, which aims to centralize all
of the variable preparation logic so it can be common to both root and
child module variables.
The second of these was also trying to handle type constraint checking,
which is also the responsibility of the central function and not something
we need to handle so early.
Only the last of these consistently handles both root and child module
variables, and so is the one we ought to keep. The others are now
redundant and are causing prepareFinalInputVariableValue to get a slightly
corrupted view of the caller's chosen variable values.
To rectify that, here we remove the two redundant layers altogether and
have unset root variables pass through as cty.NilVal all the way to the
central prepareFinalInputVariableValue function, which will then handle
them in a suitable way which properly respects the "nullable" setting.
This commit includes some test changes in the terraform package to make
those tests no longer rely on the mergeDefaultInputVariableValues logic
we've removed, and to instead explicitly set cty.NilVal for all unset
variables to comply with our intended contract for PlanOpts.SetVariables,
and similar. (This is so that we can more easily catch bugs in callers
where they _don't_ correctly handle input variables; it allows us to
distinguish between the caller explicitly marking a variable as unset vs.
not describing it at all, where the latter is a bug in the caller.)
Previously we had a significant discrepancy between these two situations:
we wrote the raw root module variables directly into the EvalContext and
then applied type conversions only at expression evaluation time, while
for child modules we converted and validated the values while visiting
the variable graph node and wrote only the _final_ value into the
EvalContext.
This confusion seems to have been the root cause for #29899, where
validation rules for root module variables were being applied at the wrong
point in the process, prior to type conversion.
To fix that bug and also make similar mistakes less likely in the future,
I've made the root module variable handling more like the child module
variable handling in the following ways:
- The "raw value" (exactly as given by the user) lives only in the graph
node representing the variable, which mirrors how the _expression_
for a child module variable lives in its graph node. This means that
the flow for the two is the same except that there's no expression
evaluation step for root module variables, because they arrive as
constant values from the caller.
- The set of variable values in the EvalContext is always only "final"
values, after type conversion is complete. That in turn means we no
longer need to do "just in time" conversion in
evaluationStateData.GetInputVariable, and can just return the value
exactly as stored, which is consistent with how we handle all other
references between objects.
This diff is noisier than I'd like because of how much it takes to wire
a new argument (the raw variable values) through to the plan graph builder,
but those changes are pretty mechanical and the interesting logic lives
inside the plan graph builder itself, in NodeRootVariable, and in
the shared helper functions in eval_variable.go.
While here I also took the opportunity to fix a historical API wart in
EvalContext, where SetModuleCallArguments was built to take a set of
variable values all at once but our current caller always calls with only
one at a time. That is now just SetModuleCallArgument singular, to match
with the new SetRootModuleArgument to deal with root module variables.
This test seems to be a holdover from the many-moons-ago switch from one
graph for all operations to separate graphs for plan and apply. It is
effectively just a copy of a subset of the content of the Context.Validate
function and is a maintainability hazard because it tends to lag behind
updates to that function unless changes there happen to make it fail.
This test doesn't cover anything that the other validate context tests
don't exercise as an implementation detail of calling Context.Validate,
so I've just removed it with no replacement.
Our original messaging here was largely just lifted from the equivalent
message for unknown values in "count", and it didn't really include any
specific advice on how to update a configuration to make for_each valid,
instead focusing only on the workaround of using the -target planning
option.
It's tough to pack in a fully-actionable suggestion here since unknown
values in for_each keys tend to be a gnarly architectural problem rather
than a local quirk -- when data flows between modules it can sometimes be
unclear whether it'll end up being used in a context which allows unknown
values.
I did my best to summarize the advice we've been giving in the community forum,
though, in the hope that more people will be able to address this for
themselves without asking for help, until we're one day able to smooth
this out better with a mechanism such as "partial apply".
The specific output order is meaningless, but it should always be the same
across two Encode() calls with dependency sets that are identical (ignoring
their in-memory order).
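A minimal sketch of the idea, using plain strings to stand in for the
dependency addresses: sort a copy of the dependencies before encoding so
the serialized order never depends on in-memory order.

```go
package main

import (
	"fmt"
	"sort"
)

// sortedDeps returns a sorted copy of deps, so that two Encode() calls over
// the same set of dependencies emit them in the same order regardless of
// how the slice happened to be ordered in memory.
func sortedDeps(deps []string) []string {
	out := make([]string, len(deps))
	copy(out, deps)
	sort.Strings(out)
	return out
}

func main() {
	a := sortedDeps([]string{"module.b.aws_instance.x", "aws_instance.y"})
	b := sortedDeps([]string{"aws_instance.y", "module.b.aws_instance.x"})
	fmt.Println(a, b) // identical order both times
}
```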
This uses the decoupled build and run strategy to run the e2etests so that
we can arrange to run the tests against the real release packages produced
elsewhere in this workflow, rather than ones generated just in time by
the test harness.
The modifications to make-archive.sh here make it more consistent with its
originally-intended purpose of producing a harness for testing "real"
release executables. Our earlier compromise of making it include its own
terraform executable came from a desire to use that script as part of
manual cross-platform testing when we weren't yet set up to support
automation of those tests as we're doing here. That does mean, however,
that the terraform-e2etest package content must be combined with content
from a terraform release package in order to produce a valid context for
running the tests.
We use a single job to cross-compile the test harness for all of the
supported platforms, because that build is relatively fast and so not
worth the overhead of a matrix build, but then use a matrix build to
actually run the tests so that we can run them in a worker matching the
target platform.
We currently have access only to amd64 (x64) runners in GitHub Actions
and so for the moment this process is limited only to the subset of our
supported platforms which use that architecture.
When creating a Set of BasicEdges, the Hashcode function is used to determine
map keys for the underlying set data structure.
The string hex representation of the two vertices' pointers is unsafe to use
as a map key, since these addresses may change between the time they are added
to the set and the time the set is operated on.
Instead we modify the Hashcode function to maintain the references to the
underlying vertices so they cannot be garbage collected during the lifetime
of the Set.
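A minimal sketch of the approach (names are illustrative, not the dag
package's actual API): the hashcode holds the vertex values themselves,
which form a comparable map key and keep the vertices reachable for as long
as the set holds them.

```go
package main

import "fmt"

// basicEdge is an illustrative stand-in for a directed edge between two
// vertices.
type basicEdge struct {
	S, T interface{} // source and target vertices
}

// Hashcode returns a comparable value identifying the edge. Holding the
// vertex values directly, rather than formatting their pointer addresses
// into a string, keeps the key stable and keeps the vertices alive while
// the set exists.
func (e basicEdge) Hashcode() interface{} {
	return [2]interface{}{e.S, e.T}
}

func main() {
	set := map[interface{}]basicEdge{}
	e := basicEdge{S: "a", T: "b"}
	set[e.Hashcode()] = e
	fmt.Println(len(set)) // 1
}
```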
TransitiveReduction does not rely on having a single root; it requires only
that the graph be free of cycles.
DepthFirstWalk and ReverseDepthFirstWalk do not do a topological sort,
so if order matters, TransitiveReduction must be run first.
These two functions were left during a refactor to ensure the old
behavior of a sorted walk was still accessible in some manner. The
package has since been removed from any public API, and the sorted
versions are no longer called, so we can remove them.
Create a separate `validateMoveStatementGraph` function so that
`ValidateMoves` and `ApplyMoves` both check the same conditions. Since
we're not using the builtin `graph.Validate` method (because we may have
multiple roots and want better cycle diagnostics), we need to add checks
for self-references too. While multiple roots are an error enforced by
`Validate` for the concurrent walk, they are OK when using
`TransitiveReduction` and `ReverseDepthFirstWalk`, so we can skip that
check.
Apply moves must first use `TransitiveReduction` to reduce the graph,
otherwise nodes may be skipped if they are passed over by a transitive
edge.
Changing only the index on a nested module will cause all nested moves
to create cycles, since their full addresses will match both the From
and To addresses. When building the dependency graph, check if the
parent is only changing the index of the containing module, and prevent
the backwards edge for the move.
Add a method for checking if the From and To addresses in a move
statement are only changing the indexes of modules relative to the
statement module.
This is needed because move statements nested within the module will be
able to match against both the From and To addresses, causing cycles in
the order of move operations.