To allow using the same Tablestore table with multiple OSS buckets, the
bucket name is now included in the lock ID: instead of
env:/some/path/terraform.tfstate, the LockID now becomes
some-bucket/env:/some/path/terraform.tfstate.
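A minimal sketch of the resulting key shape (the helper name here is hypothetical, not the backend's actual code):

```go
package main

import "fmt"

// buildLockID is a hypothetical helper showing the new shape of the lock ID:
// the OSS bucket name is prefixed so that several buckets can share one
// Tablestore table without their lock entries colliding.
func buildLockID(bucket, statePath string) string {
	return fmt.Sprintf("%s/%s", bucket, statePath)
}

func main() {
	// old LockID: "env:/some/path/terraform.tfstate"
	// new LockID: "some-bucket/env:/some/path/terraform.tfstate"
	fmt.Println(buildLockID("some-bucket", "env:/some/path/terraform.tfstate"))
}
```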
Use the provider FQN wherever configs.Module is accessible.
Continuing the work of removing all calls to addrs.NewLegacyProvider,
this commit uses configs.Module.ProviderForLocalConfig wherever the
caller has access to that Module.
EvalContext.InitProvider no longer needs the redundant typ string argument.
terraform.contextComponentFactory was refactored to take an addrs.Provider
instead of a string.
Added configs.Module.ProviderForLocalProviderConfig which allows
terraform.ProviderTransformer to get the provider FQN from the module,
instead of assuming NewLegacyProvider.
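As a rough sketch of the call pattern with simplified stand-in types (not the real configs/addrs definitions):

```go
package main

import "fmt"

// Stand-ins for addrs.Provider and configs.Module, just to illustrate the
// lookup; the real types live in Terraform's addrs and configs packages.
type Provider struct{ Hostname, Namespace, Type string }

type Module struct {
	// Maps a local provider name (e.g. "aws") to the fully-qualified
	// provider declared for it in the module.
	ProviderRequirements map[string]Provider
}

// ProviderForLocalConfig mirrors the idea described above: resolve the FQN
// from the module's own declarations rather than assuming a legacy provider
// address from the bare local name.
func (m *Module) ProviderForLocalConfig(localName string) Provider {
	if p, ok := m.ProviderRequirements[localName]; ok {
		return p
	}
	// Fallback default, resolved in one place instead of via scattered
	// NewLegacyProvider calls.
	return Provider{Hostname: "registry.terraform.io", Namespace: "hashicorp", Type: localName}
}

func main() {
	m := &Module{ProviderRequirements: map[string]Provider{
		"aws": {Hostname: "registry.terraform.io", Namespace: "hashicorp", Type: "aws"},
	}}
	fmt.Println(m.ProviderForLocalConfig("aws"))
}
```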
We're not far enough along yet to be able to actually use the
RepetitionData instances provided by the instances package, but having
these types be considered identical will help us to gradually migrate over
as we prepare the rest of Terraform to properly populate the Expander.
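Presumably the "considered identical" part relies on a Go type alias; here is a self-contained illustration of that mechanism with stand-in types (the real type is instances.RepetitionData; the alias name and fields below are placeholders):

```go
package main

import "fmt"

// RepetitionData stands in for instances.RepetitionData.
type RepetitionData struct {
	CountIndex int
	EachKey    string
}

// A type alias (note the "="), not a new defined type: OldEvalData and
// RepetitionData are the same type to the compiler, so a value of one can be
// used wherever the other is expected, with no conversion shims during the
// gradual migration.
type OldEvalData = RepetitionData

func takesOld(d OldEvalData) { fmt.Println(d.CountIndex, d.EachKey) }

func main() {
	takesOld(RepetitionData{CountIndex: 0, EachKey: "a"})
}
```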
This is a minimal integration of instances.Expander used just for resource
count and for_each, for now just forcing modules to always be singletons
because the rest of Terraform Core isn't ready to deal with expanding
module calls yet.
This doesn't integrate super cleanly yet because we still have some
cleanup work to do in the design of the plan walk, to make it explicit
that the nodes in the plan graph represent static configuration objects
rather than expanded instances, including for modules. To make this work
in the meantime, there is some shimming between addrs.Module and
addrs.ModuleInstance to correct for the discontinuities that result from
the fact that Terraform currently assumes that modules are always
singletons.
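A much-simplified model of what the expander does for resources here, with modules pinned to a single instance (toy types and method names, not the instances package API):

```go
package main

import "fmt"

// expander is a toy stand-in for instances.Expander, tracking how many
// instances each resource has within the (for now, always single) module
// instance.
type expander struct {
	resourceCounts map[string]int
}

func newExpander() *expander { return &expander{resourceCounts: map[string]int{}} }

// SetResourceCount records the result of evaluating "count" for a resource.
func (e *expander) SetResourceCount(resource string, count int) {
	e.resourceCounts[resource] = count
}

// ExpandResource returns the full instance addresses for a resource, which is
// what a DynamicExpand implementation would turn into per-instance graph nodes.
func (e *expander) ExpandResource(resource string) []string {
	var out []string
	for i := 0; i < e.resourceCounts[resource]; i++ {
		// Modules are forced to be singletons for now, so the module path
		// contributes nothing to the instance address.
		out = append(out, fmt.Sprintf("%s[%d]", resource, i))
	}
	return out
}

func main() {
	e := newExpander()
	e.SetResourceCount("aws_instance.example", 3)
	fmt.Println(e.ExpandResource("aws_instance.example"))
	// [aws_instance.example[0] aws_instance.example[1] aws_instance.example[2]]
}
```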
This is not used yet, but in future commits will be used as a
"blackboard" to centrally aggregate the information pertaining to
expansion of resources and modules (using "count" or "for_each") to help
ensure consistent treatment of the expansion process during a graph walk.
In practice this only really makes sense for the plan walk, because the
apply walk doesn't do any dynamic expansion.
This package aims to encapsulate the module/resource repetition problem
so that Terraform Core's graph node DynamicExpand implementations can be
simpler.
This is also a building block on the path towards module repetition, by
modelling the recursive expansion of modules and their contents. This will
allow the Terraform Core plan graph to have one node per configuration
construct, each of which will DynamicExpand into as many sub-nodes as
necessary to cover all of the recursive module instantiations.
For the moment this is just dead code, because Terraform Core isn't yet
updated to use it.
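To make the recursive-expansion point concrete, a toy illustration (not the package API) of how one configuration-level node fans out when module counts nest:

```go
package main

import "fmt"

// call is one module call on the path, with its evaluated count.
type call struct {
	name  string
	count int
}

// expandModule is a toy recursion over nested module calls: a single
// configuration-level node fans out into every combination of instance keys
// along the path, which is the kind of expansion DynamicExpand would produce.
func expandModule(prefix string, calls []call) []string {
	if len(calls) == 0 {
		return []string{prefix}
	}
	var out []string
	for i := 0; i < calls[0].count; i++ {
		step := fmt.Sprintf("module.%s[%d]", calls[0].name, i)
		if prefix != "" {
			step = prefix + "." + step
		}
		out = append(out, expandModule(step, calls[1:])...)
	}
	return out
}

func main() {
	// module "a" (count = 2) containing module "b" (count = 3):
	// one configuration construct expands to 2*3 = 6 module instances.
	for _, addr := range expandModule("", []call{{"a", 2}, {"b", 3}}) {
		fmt.Println(addr)
	}
}
```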
When ModuleInstanceStep values appear alone in debug messages, it's easier
to read them in a compact, HCL-like form than as the default struct
printing style.
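For illustration, a stand-in type with a compact String form next to Go's default struct rendering (the exact rendering addrs uses may differ):

```go
package main

import "fmt"

// moduleInstanceStep is a stand-in for addrs.ModuleInstanceStep.
type moduleInstanceStep struct {
	Name        string
	InstanceKey string
}

// String renders a compact, HCL-like form, which is much easier to scan in a
// debug log than Go's default struct formatting. The exact rendering is
// illustrative rather than a claim about what addrs produces.
func (s moduleInstanceStep) String() string {
	if s.InstanceKey == "" {
		return s.Name
	}
	return fmt.Sprintf("%s[%q]", s.Name, s.InstanceKey)
}

func main() {
	step := moduleInstanceStep{Name: "child", InstanceKey: "a"}
	fmt.Printf("%#v\n", step) // default-style: main.moduleInstanceStep{Name:"child", InstanceKey:"a"}
	fmt.Println(step)         // compact: child["a"]
}
```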
If an error occurs on creating the context for console or import, we
would fail to unlock the state. Fix this by unlocking slightly earlier.
Affects console and import commands.
Fixes #23318
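A schematic of the ordering fix with a stand-in locker (not the command package's real types); the point is just that the deferred unlock is registered before the step that can fail:

```go
package main

import (
	"errors"
	"fmt"
)

// stateLocker is a stand-in for the command package's state locking helper.
type stateLocker struct{ locked bool }

func (l *stateLocker) Lock() error   { l.locked = true; fmt.Println("state locked"); return nil }
func (l *stateLocker) Unlock() error { l.locked = false; fmt.Println("state unlocked"); return nil }

// newContext stands in for building the terraform.Context for console/import.
func newContext() error { return errors.New("context creation failed") }

func run() error {
	locker := &stateLocker{}
	if err := locker.Lock(); err != nil {
		return err
	}
	// Register the unlock before doing anything that can fail, so an error
	// from context creation no longer leaves the state locked.
	defer locker.Unlock()

	if err := newContext(); err != nil {
		return err // previously an early return like this skipped the unlock
	}
	return nil
}

func main() {
	if err := run(); err != nil {
		fmt.Println("error:", err)
	}
}
```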
* fix outdated syntax in comments
* test for non-strings in ParseAbsProviderConfig
* ProviderConfigDefault and ProviderConfigAliased now take Providers
instead of strings
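A sketch of the signature change with stand-in types (the real methods live on Terraform's addrs types; exact signatures assumed):

```go
package main

import "fmt"

// Provider is a stand-in for addrs.Provider: a fully-qualified provider, not
// just a bare type name.
type Provider struct{ Hostname, Namespace, Type string }

// ModuleInstance is a stand-in for the addrs type that carries these helpers.
type ModuleInstance struct{ path string }

// Previously this took a name string; now the caller must already know the
// fully-qualified provider.
func (m ModuleInstance) ProviderConfigDefault(p Provider) string {
	return fmt.Sprintf("%s.provider[%s/%s/%s]", m.path, p.Hostname, p.Namespace, p.Type)
}

// Same idea for the aliased form.
func (m ModuleInstance) ProviderConfigAliased(p Provider, alias string) string {
	return m.ProviderConfigDefault(p) + "." + alias
}

func main() {
	root := ModuleInstance{path: "root"}
	aws := Provider{"registry.terraform.io", "hashicorp", "aws"}
	fmt.Println(root.ProviderConfigDefault(aws))
	fmt.Println(root.ProviderConfigAliased(aws, "west"))
}
```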
We no longer need special cases for most things during a full destroy,
so remove those from the graph transformations.
The only remaining cases are:
- remove the root outputs, so destroy ends up with a clean state
- reverse the target deps when targeting a destroy.
Remove all the destroy provisioner tests that are testing what is no
longer allowed.
Add missing state dependencies to remaining tests that require it.
The AttachStateTransformer was never run in the destroy plan. This means
that a resource without configuration that used a non-default provider
would not be connected to the correct provider for the plan.
The test that was attempting to catch this only worked because the
temporary graph used in the DestroyEdgeTransformer would add the state
and detect some issues.
Start by removing the DestroyEdge type altogether. This is only used to
detect the natural edge between a resource's create and destroy nodes,
but that's not necessary for any transformations. The custom edge type
also interferes with normal graph manipulations, because you can't
delete an arbitrary edge without knowing the type, so deletion of the
edge based only on the endpoints is often done incorrectly. The dag
package itself does this incorrectly in TransitiveReduction, which
always assumes the BasicEdge type.
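A self-contained illustration of that hazard with simplified types (not the dag package itself): when edges are stored by their concrete value, deleting "the edge between A and B" only works if you construct the same edge type that was added:

```go
package main

import "fmt"

// Edge is a minimal model of a graph edge identified by its concrete value.
type Edge interface{ Endpoints() (string, string) }

type BasicEdge struct{ Src, Dst string }

func (e BasicEdge) Endpoints() (string, string) { return e.Src, e.Dst }

// DestroyEdge has identical endpoints but is a different concrete type.
type DestroyEdge struct{ Src, Dst string }

func (e DestroyEdge) Endpoints() (string, string) { return e.Src, e.Dst }

func main() {
	// Edge set keyed by the edge value, like a set of interface values.
	edges := map[Edge]bool{
		DestroyEdge{"A (destroy)", "B (destroy)"}: true,
	}

	// Deleting "the edge from A to B" while assuming BasicEdge, the way a
	// generic transformation such as a transitive reduction tends to:
	delete(edges, BasicEdge{"A (destroy)", "B (destroy)"})

	fmt.Println(len(edges)) // still 1: the DestroyEdge was not matched
}
```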
Now that inter-resource destroy dependencies are already connected in the
DestroyEdgeTransformer (from the stored deps in state), there's no need
to search out all dependent resources in the CBD transformation, as they
should all be connected. This makes the CBD transformation rule quite
simple: reverse any edges from create nodes.
This special edge type is no longer used. While we still have the option
of encoding more meaning into the edges themselves, having one special
edge type used only in one specific case was easily overlooked, as
dag.BasicEdge is assumed in all other cases.
The dependencies required for destroy are what is stored in the state, and
have already been connected. There is no need to search further, since
new nodes from the config may interfere with the original destroy
ordering.
This is a large refactor of addrs.AbsProviderConfig, embedding the addrs.Provider instead of a Type string. I've added and updated tests, added some Legacy functions to support older state formats and shims, and added a normalization step when reading v4 (current) state files (note the added tests under states/statefile/roundtrip, which work with both current and legacy-style AbsProviderConfig strings).
The remaining 'fixme' and 'todo' comments are mostly going to be addressed in a subsequent PR and involve looking up a given local provider config's FQN. This is fine for now as we are only working with the default assumption.
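As a rough sketch of the shape of the change, with stand-in types and approximate fields (see the addrs package for the real definitions):

```go
package main

import "fmt"

// Provider is a stand-in for addrs.Provider: the fully-qualified provider identity.
type Provider struct{ Hostname, Namespace, Type string }

// Roughly the old shape: only a bare type string, which is what forced the
// legacy-provider assumptions elsewhere. (Module path omitted for brevity.)
type legacyProviderConfig struct {
	Type  string
	Alias string
}

// Roughly the new shape: the whole Provider is embedded, so the FQN travels
// with the address and no longer has to be guessed from a name.
type AbsProviderConfig struct {
	Provider Provider
	Alias    string
}

func main() {
	before := legacyProviderConfig{Type: "aws"}
	after := AbsProviderConfig{Provider: Provider{"registry.terraform.io", "hashicorp", "aws"}}
	fmt.Printf("legacy: %+v\nnew:    %+v\n", before, after)
}
```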