There was a missing outer loop for catching inverse module dependencies
when pruning nodes for destroy. Since the need to "register" the fully
destroyed modules no longer exists, the extra complication of pruning
the modules as a whole from the leaves inward is no longer required.
While it is technically still a valid optimization to reduce iterations,
the extra comparisons required to backtrack for transitive dependencies
don't amount to much, and having a single nested loop is much easier to
maintain.
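Purely as an illustration of the simpler shape (the types and names below are hypothetical, not Terraform's actual graph internals), the change amounts to checking every module against every other module's dependencies in one nested loop, rather than pruning leaves inward and backtracking:

```go
package main

import "fmt"

// module is a hypothetical stand-in for a module being considered for pruning.
type module struct {
	name string
	deps []string // names of modules this module still depends on
}

// pruneCandidates returns the modules that nothing else depends on, using a
// single nested loop instead of repeatedly pruning leaves and backtracking
// for transitive (or inverse) dependencies.
func pruneCandidates(mods []module) []module {
	var prunable []module
	for _, candidate := range mods {
		needed := false
		for _, other := range mods {
			for _, dep := range other.deps {
				if dep == candidate.name {
					needed = true
				}
			}
		}
		if !needed {
			prunable = append(prunable, candidate)
		}
	}
	return prunable
}

func main() {
	mods := []module{
		{name: "root", deps: []string{"child"}},
		{name: "child"},
		{name: "orphan"},
	}
	// "child" is still needed by "root"; "root" and "orphan" can be pruned.
	fmt.Println(pruneCandidates(mods))
}
```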
The SearchLocalDirectory function was intentionally written to only
support symlinks at the leaves so that it wouldn't risk getting into an
infinite loop traversing intermediate symlinks, but that rule was also
being applied to the base directory itself.
It's pretty reasonable to put your local plugins in some location Terraform
wouldn't normally search (e.g. because you want to get them from a shared
filesystem mounted somewhere), and creating a symlink from one of the locations
Terraform _does_ search is a convenient way to help Terraform find them without
going all in on the explicit provider installation methods configuration, which
is intended for more complicated situations.
To allow for that, here we make a special exception for the base
directory, resolving that first before we do any directory walking.
To help with debugging situations where there are, for some reason, symlinks
at intermediate levels inside the search tree, we also now emit a WARN log line
in that case, to be explicit that symlinks are not supported there and to hint
that the symlink should be placed at the top level if symlinks are to be used
at all.
(The support for symlinks at the deepest level of search is not mentioned
in this message because we allow it primarily for our own cache linking
behavior.)
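A simplified sketch of that behavior (not the exact SearchLocalDirectory code, and it leaves out the leaf-level symlink allowance mentioned above):

```go
package getproviders

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

// searchLocalDirectory is a simplified sketch: resolve a symlinked base
// directory before walking it, and only warn about symlinks found at
// intermediate levels rather than following them.
func searchLocalDirectory(baseDir string) ([]string, error) {
	// Resolve the base directory first, so a symlink at the top level works
	// even though symlinks are not followed during the walk itself.
	resolved, err := filepath.EvalSymlinks(baseDir)
	if err != nil {
		return nil, fmt.Errorf("failed to resolve symlinks for %s: %s", baseDir, err)
	}

	var found []string
	err = filepath.Walk(resolved, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if info.Mode()&os.ModeSymlink != 0 {
			// Not supported here; hint to put the symlink at the top level.
			log.Printf("[WARN] %s is a symlink; symlinks are only supported for the base directory", path)
			return nil
		}
		if !info.IsDir() {
			found = append(found, path)
		}
		return nil
	})
	return found, err
}
```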
If a module has multiple terraform.required_version constraints, any
failures would point at the last constraint in the error diagnostics. If
an earlier constraint was the actual problem, this leads to confusing
errors like this:
    Error: Unsupported Terraform Core version
      on main.tf line 6, in terraform:
       6: required_version = ">= 0.13.0"
    This configuration does not support Terraform version 0.13.0.
The error was due to storing the declaration range of the constraint as
a pointer to the contents of a loop variable, which was overwritten on
subsequent iterations of the loop. Instead we now use HCL's handy Ptr()
method to create a direct pointer to the range struct.
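This is the classic Go pitfall of keeping a pointer into a loop variable; a minimal before/after sketch (hcl.Range.Ptr() is real, the constraint type here is a stand-in):

```go
package configs

import "github.com/hashicorp/hcl/v2"

// versionConstraint is a stand-in for a required_version constraint entry.
type versionConstraint struct {
	Required  string
	DeclRange hcl.Range
}

func constraintRanges(constraints []versionConstraint) []*hcl.Range {
	var ranges []*hcl.Range
	for _, c := range constraints {
		// Buggy: &c.DeclRange points into the loop variable c, which is
		// overwritten each iteration, so every stored pointer ends up
		// referring to the last constraint's range.
		//   ranges = append(ranges, &c.DeclRange)

		// Fixed: Ptr() copies the range and returns a pointer to that copy,
		// so each entry keeps its own declaration range.
		ranges = append(ranges, c.DeclRange.Ptr())
	}
	return ranges
}
```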
Include the import walk in the list of operations for which we create an
EvalModuleCallArgument node. This causes module call arguments to be
evaluated even if the module variables have defaults, ensuring that
invalid default values (such as the common "{}" for variables thought of
as maps) do not cause failures specific to import.
This fixes a bug where a child module evaluates an input variable in its
locals block, assuming that it is a nested object structure. The bug
report includes a default value of "{}", which is overridden by a root
variable value. Without the eval node added in this commit, the default
value is used and evaluation of the local fails.
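A sketch of the shape of that change (the walk operation names echo Terraform's internal constants, but the surrounding code is hypothetical):

```go
package terraform

// walkOperation stands in for Terraform's internal walk operation enum.
type walkOperation int

const (
	walkValidate walkOperation = iota
	walkPlan
	walkApply
	walkDestroy
	walkImport
)

// evalModuleCallArgument reports whether this walk should create an
// EvalModuleCallArgument-style node, so module input variables are evaluated
// from the call arguments instead of silently falling back to their defaults.
func evalModuleCallArgument(op walkOperation) bool {
	switch op {
	case walkValidate, walkPlan, walkApply, walkDestroy,
		walkImport: // newly included so import doesn't use an invalid default
		return true
	default:
		return false
	}
}
```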
Builtin provider addrs (i.e. "terraform.io/builtin/terraform") should be
able to convert to legacy string form (i.e. "terraform"). This ensures
that we can safely round-trip through ParseLegacyAbsProviderConfig,
which can return either a legacy or a builtin provider addr.
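A minimal sketch of the round-trip expectation, assuming the addrs.NewBuiltInProvider and LegacyString helpers behave as described here:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/addrs"
)

func main() {
	// A builtin provider addr, i.e. terraform.io/builtin/terraform.
	builtin := addrs.NewBuiltInProvider("terraform")

	// Converting to legacy string form should yield just "terraform", so
	// whatever ParseLegacyAbsProviderConfig hands back (legacy or builtin)
	// can safely be converted back to the legacy form.
	fmt.Println(builtin.String())       // terraform.io/builtin/terraform
	fmt.Println(builtin.LegacyString()) // terraform
}
```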
* statemgr: add a NewUnlockErrorFull state manager for tests
I've frequently needed to coerce Unlock() errors for tests and it's been
awkward and fraught every time, so I decided to add a full state manager
that returns *mostly* errors. I intend to use this in conjunction with
the clistate.Locker interface, which calls Lock() at the start of Unlock()
(to block if the mutex is in use), so Lock() rather awkwardly needed to succeed.
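A stripped-down sketch of the idea (the statemgr.Full embedding mirrors the real interface, but this wrapper is simpler than the actual NewUnlockErrorFull):

```go
package teststate

import (
	"errors"

	"github.com/hashicorp/terraform/states/statemgr"
)

// unlockError wraps a real state manager and forces Unlock to fail while
// leaving every other method (including Lock) working, since the Locker's
// Unlock path starts by calling Lock and so needs it to succeed.
type unlockError struct {
	statemgr.Full // embedded: all other calls pass through
}

func (u *unlockError) Unlock(id string) error {
	return errors.New("unlock error")
}

// newUnlockError is a hypothetical test helper constructor.
func newUnlockError(inner statemgr.Full) statemgr.Full {
	return &unlockError{Full: inner}
}
```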
In order to determine if we need to re-read a data source during plan,
we need to compare the newly evaluated configuration with the stored
state. To do that we create a ProposedNewVal which, if there are no
changes, should match the existing state exactly.
A problem arises if the remote data source contains any blocks, and they
are not set in the configuration. Terraform always decodes configuration
blocks as empty containers; however, the legacy SDK cannot correctly
handle empty blocks and may return a null block, which is saved to the
state. In order to correctly make the comparison for planning, we need
to reify those null blocks as empty containers in the cty value.
The createEmptyBlocks helper converts any null NestingList or NestingSet
blocks to empty list or set cty values. We only need to be concerned
with List and Set, because those are the only types that can be defined
with the legacy SDK. In hindsight these could have been normalized in
the legacy SDK shims had this problem been uncovered earlier, but for the
sake of compatibility we will now normalize these in core.
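A simplified sketch of that normalization (the real createEmptyBlocks helper also recurses into nested blocks; this only shows the top level):

```go
package terraform

import (
	"github.com/hashicorp/terraform/configs/configschema"
	"github.com/zclconf/go-cty/cty"
)

// normalizeNullBlocks reifies null NestingList/NestingSet blocks as empty
// containers so the stored state compares cleanly against the proposed value.
// (Simplified: the real helper also descends into nested blocks.)
func normalizeNullBlocks(val cty.Value, schema *configschema.Block) cty.Value {
	if val.IsNull() || !val.IsKnown() {
		return val
	}

	vals := val.AsValueMap()
	for name, blockS := range schema.BlockTypes {
		v, ok := vals[name]
		if !ok || !v.IsNull() {
			continue
		}
		ety := blockS.Block.ImpliedType()
		switch blockS.Nesting {
		case configschema.NestingList:
			vals[name] = cty.ListValEmpty(ety)
		case configschema.NestingSet:
			vals[name] = cty.SetValEmpty(ety)
		}
	}
	return cty.ObjectVal(vals)
}
```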
The Consul KV store limits the size of the values in the KV store to 524288
bytes. Once the state reaches this limit Consul will refuse to save it. It is
currently possible to try to bypass this limitation by enabling Gzip, but the
issue will only manifest itself later. This is particularly inconvenient as it
is possible for the state to reach this limit without changing the Terraform
configuration, since datasources or computed attributes can suddenly return
more data than they used to. Several users have already had issues with this.
To fix the problem once and for all we now split the payload into chunks of
524288 bytes when it is too large and store them separately in the KV store,
along with a small JSON payload that references all the chunks so we can
retrieve them later and concatenate them to reconstruct the full payload.
While this has the caveat of requiring multiple calls to Consul that cannot be
done as a single transaction (transactions are subject to the same size limit),
we use unique paths for the chunks and Check-And-Set (CAS) when setting the
last payload, so possible issues during calls to Put() should not result in
unreadable states.
Closes https://github.com/hashicorp/terraform/issues/19182
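A sketch of just the chunking step (the index type and key naming here are illustrative, not the backend's exact format):

```go
package consul

import "fmt"

// chunkLimit is Consul's maximum KV value size in bytes.
const chunkLimit = 524288

// chunkIndex stands in for the small JSON payload that references all the
// chunk keys so the state can be fetched and reassembled later.
type chunkIndex struct {
	Chunks []string `json:"chunks"`
}

// splitPayload cuts an oversized (possibly gzipped) state payload into pieces
// that fit under the limit, keyed by chunk paths under the given prefix. The
// real backend also makes the chunk paths unique per write and uses
// Check-And-Set when storing the index, so a failed Put() can't leave an
// unreadable state behind.
func splitPayload(prefix string, payload []byte) (map[string][]byte, chunkIndex) {
	chunks := make(map[string][]byte)
	var index chunkIndex
	for i := 0; len(payload) > 0; i++ {
		n := chunkLimit
		if len(payload) < n {
			n = len(payload)
		}
		key := fmt.Sprintf("%s/tfstate-chunk/%d", prefix, i)
		chunks[key] = payload[:n]
		index.Chunks = append(index.Chunks, key)
		payload = payload[n:]
	}
	return chunks, index
}
```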
When the path ends with a slash (e.g. `path = "tfstate/"`), the lock path
used will contain two consecutive slashes (e.g. `tfstate//.lock`), which
Consul does not accept.
This change sanitizes the lock path, so it becomes `tfstate/.lock` instead.
If the user has two different Terraform projects, one with `path = "tfstate"` and
the other with `path = "tfstate/"`, the paths for the locks will be the same,
which will be confusing as locking one project will lock both. I wish it were
possible to forbid ending slashes altogether but doing so would require all
users currently having an ending slash in the path to manually move their
Terraform state and would be a poor user experience.
Closes https://github.com/hashicorp/terraform/issues/15747
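A sketch of the sanitization (the helper and suffix constant are named for illustration only):

```go
package consul

import "strings"

const lockSuffix = "/.lock"

// lockPath derives the lock key from the configured path, trimming trailing
// slashes so both `tfstate` and `tfstate/` yield `tfstate/.lock` instead of
// the `tfstate//.lock` form that Consul rejects.
func lockPath(statePath string) string {
	return strings.TrimRight(statePath, "/") + lockSuffix
}
```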
When working with a ConfigResource, the generalization of a
ModuleInstance to a Module was inadvertently dropped, and there was no
test coverage for that type of target.
Ensure we can target a specific module instance alone.
Before expansion happens, we only have expansion resource nodes that
know their ConfigResource address. In order to properly compare these to
targets within a module instance, we need to generalize the target to
also be a ConfigResource.
We can also remove the IgnoreIndices field from the transformer, since
we have addresses that are properly scoped and can compare them in the
correct context.
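A rough sketch of the comparison (ModuleInstance.Module() is the real generalization helper; the containment check here is hand-rolled for illustration rather than the addrs package's own logic):

```go
package terraform

import "github.com/hashicorp/terraform/addrs"

// targetModuleContains reports whether a targeted module instance (e.g.
// module.foo["a"]) applies to a resource still known only by its
// ConfigResource address, by generalizing the instance address to a Module
// and checking that it is a prefix of the resource's module path.
func targetModuleContains(target addrs.ModuleInstance, resourceModule addrs.Module) bool {
	targetModule := target.Module() // drop instance keys: module.foo["a"] -> module.foo
	if len(targetModule) > len(resourceModule) {
		return false
	}
	for i, name := range targetModule {
		if resourceModule[i] != name {
			return false
		}
	}
	return true
}
```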
If somehow an invalid workspace has been selected, the Meta.Workspace
method should not return an error, to ensure that we don't break any
existing workflows with invalid workspace names.
We are validating the workspace name for all workspace commands. Due to
a bug with the TF_WORKSPACE environment variable, it has been possible
to accidentally create a workspace with an invalid name.
This commit removes the valid workspace name check for workspace delete
to allow users to clean up any invalid workspaces.
The workspace name can be overridden by setting a TF_WORKSPACE
environment variable. If this is done, we should still validate the
resulting workspace name; otherwise, we could end up with an invalid and
unselectable workspace.
This change updates the Meta.Workspace function to return an error, and
handles that error wherever necessary.
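A sketch of what that validation looks like on the override path (the name pattern and helper here are assumptions, not the exact rule used by the command package):

```go
package command

import (
	"fmt"
	"os"
	"regexp"
)

// Assumed pattern for illustration; the command package has its own
// workspace name validation rule.
var workspaceNamePattern = regexp.MustCompile(`^[0-9a-zA-Z._-]+$`)

// workspaceFromEnv returns the TF_WORKSPACE override, validating the name so
// we can't end up on an invalid (and later unselectable) workspace.
func workspaceFromEnv() (string, error) {
	name := os.Getenv("TF_WORKSPACE")
	if name == "" {
		return "", nil // no override set
	}
	if !workspaceNamePattern.MatchString(name) {
		return "", fmt.Errorf("invalid workspace name %q set via TF_WORKSPACE", name)
	}
	return name, nil
}
```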