There was a missing outer loop for catching inverse module dependencies
when pruning nodes for destroy. Since the need to "register" the fully
destroyed modules no longer exists, the extra complication of pruning
the modules as a whole from the leaves inward is no longer required.
While it is technically still a valid optimization to reduce iterations,
the extra comparisons required to backtrack for transitive dependencies
don't amount to much, and having a single nested loop is much easier to
maintain.
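As a very rough illustration of the trade-off (generic code, not the
actual graph transformer; the names and data shapes are made up), the
simpler approach just repeats a plain nested scan until a pass removes
nothing, rather than carefully walking modules from the leaves inward:

    package main

    import "fmt"

    // prune drops nodes that are not explicitly kept and that no surviving
    // node depends on, rescanning until a full pass makes no change.
    func prune(dependsOn map[string][]string, keep map[string]bool) map[string]bool {
        alive := map[string]bool{}
        for n := range dependsOn {
            alive[n] = true
        }
        for changed := true; changed; {
            changed = false
            for n := range alive {
                if keep[n] {
                    continue
                }
                referenced := false
                for m := range alive {
                    for _, dep := range dependsOn[m] {
                        if dep == n && m != n {
                            referenced = true
                        }
                    }
                }
                if !referenced {
                    delete(alive, n)
                    changed = true
                }
            }
        }
        return alive
    }

    func main() {
        deps := map[string][]string{"a": {"b"}, "b": {"c"}, "c": nil}
        fmt.Println(prune(deps, map[string]bool{"c": true})) // map[c:true]
    }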
The SearchLocalDirectory function was intentionally written to only
support symlinks at the leaves so that it wouldn't risk getting into an
infinite loop traversing intermediate symlinks, but that rule also
applied to the base directory itself.
It's pretty reasonable to put your local plugins in some location
Terraform wouldn't normally search (e.g. because you want to get them
from a shared filesystem mounted somewhere), and creating a symlink from
one of the locations Terraform _does_ search is a convenient way to help
Terraform find those plugins without going all-in on the explicit
provider installation methods configuration that is intended for more
complicated situations.
To allow for that, here we make a special exception for the base
directory, resolving that first before we do any directory walking.
To help with debugging a situation where there are, for some reason,
symlinks at intermediate levels inside the search tree, we also now emit
a WARN log line in that case to be explicit that symlinks are not
supported there, and to hint that the symlink should instead be at the
top level if you want to use symlinks at all.
(The support for symlinks at the deepest level of search is not mentioned
in this message because we allow it primarily for our own cache linking
behavior.)
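Roughly, the shape of the new behavior as a simplified sketch (this is
not the real getproviders.SearchLocalDirectory; in particular it ignores
the special case that still allows symlinks at the deepest level):

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
    )

    // searchLocalDirectory resolves a possibly-symlinked base directory up
    // front, then walks it without following symlinks found further down.
    func searchLocalDirectory(baseDir string) ([]string, error) {
        resolved, err := filepath.EvalSymlinks(baseDir)
        if err != nil {
            return nil, err
        }

        var found []string
        err = filepath.Walk(resolved, func(path string, info os.FileInfo, err error) error {
            if err != nil {
                return err
            }
            if info.Mode()&os.ModeSymlink != 0 {
                // filepath.Walk uses Lstat, so this entry is not descended
                // into; warn so the user knows to symlink the base directory
                // rather than something inside it.
                log.Printf("[WARN] ignoring symlink %s inside the search directory; symlink the base directory instead", path)
                return nil
            }
            if !info.IsDir() {
                found = append(found, path)
            }
            return nil
        })
        return found, err
    }

    func main() {
        // "." stands in here for a plugin search directory such as
        // terraform.d/plugins.
        files, err := searchLocalDirectory(".")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(files)
    }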
If a module has multiple terraform.required_version constraints, any
failures would point at the last constraint in the error diagnostics. If
an earlier constraint was the actual problem, this leads to confusing
errors like this:
    Error: Unsupported Terraform Core version

      on main.tf line 6, in terraform:
       6: required_version = ">= 0.13.0"

    This configuration does not support Terraform version 0.13.0.
The error was due to storing the declaration range of the constraint as
a pointer to the contents of a loop variable, which was then overwritten
by later iterations of the loop. Instead we now use HCL's handy Ptr()
method, which returns a pointer to a copy of the range struct, so each
constraint keeps its own declaration range.
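The underlying Go pitfall, reduced to a standalone sketch (the Range type
and values here are illustrative, not the real config loader code):

    package main

    import "fmt"

    // Range stands in for hcl.Range in this sketch.
    type Range struct{ Line int }

    // Ptr mirrors the shape of hcl.Range.Ptr: the value receiver means the
    // returned pointer refers to a fresh copy, not to the caller's variable.
    func (r Range) Ptr() *Range { return &r }

    func main() {
        ranges := []Range{{Line: 3}, {Line: 6}}

        var buggy, fixed []*Range
        for _, r := range ranges {
            buggy = append(buggy, &r)      // address of the loop variable itself
            fixed = append(fixed, r.Ptr()) // pointer to a copy of this iteration's value
        }

        // With Go versions before 1.22 the loop variable is reused, so both
        // "buggy" pointers end up describing line 6; the copies do not.
        fmt.Println(*buggy[0], *buggy[1])
        fmt.Println(*fixed[0], *fixed[1]) // always {3} {6}
    }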
Include the import walk in the list of operations for which we create an
EvalModuleCallArgument node. This causes module call arguments to be
evaluated even if the module variables have defaults, ensuring that
invalid default values (such as the common "{}" for variables thought of
as maps) do not cause failures specific to import.
This fixes a bug where a child module evaluates an input variable in its
locals block, assuming that it is a nested object structure. The bug
report includes a default value of "{}", which is overridden by a root
variable value. Without the eval node added in this commit, the default
value is used and the local evaluation errors.
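To make the failure mode concrete, here is a small sketch using cty
directly (not Terraform's evaluator; the "port" attribute is an invented
example): the empty-object default has no attributes to look up, while
the value actually passed from the root module does:

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    func main() {
        // What `default = {}` decodes to: an object with no attributes.
        def := cty.EmptyObjectVal

        // What the root module actually passes for the same variable.
        override := cty.ObjectVal(map[string]cty.Value{
            "port": cty.NumberIntVal(8080),
        })

        for _, v := range []cty.Value{def, override} {
            if !v.Type().HasAttribute("port") {
                // Roughly the error the child module's locals hit when the
                // default is evaluated instead of the caller's argument.
                fmt.Println(`error: object has no attribute "port"`)
                continue
            }
            fmt.Printf("port = %#v\n", v.GetAttr("port"))
        }
    }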
Builtin provider addrs (i.e. "terraform.io/builtin/terraform") should be
able to convert to legacy string form (i.e. "terraform"). This ensures
that we can safely round-trip through ParseLegacyAbsProviderConfig,
which can return either a legacy or a builtin provider addr.
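A simplified sketch of the round-trip, with local stand-in types rather
than the real addrs package (the hostnames and namespaces follow
Terraform's conventions, but the code itself is illustrative):

    package main

    import "fmt"

    // Provider is a stand-in for addrs.Provider.
    type Provider struct{ Hostname, Namespace, Type string }

    func (p Provider) String() string { return p.Hostname + "/" + p.Namespace + "/" + p.Type }

    // LegacyString reduces any provider address to its bare type name.
    func (p Provider) LegacyString() string { return p.Type }

    // parseLegacyProvider maps a bare type name back to an address,
    // special-casing "terraform" as the builtin provider.
    func parseLegacyProvider(name string) Provider {
        if name == "terraform" {
            return Provider{Hostname: "terraform.io", Namespace: "builtin", Type: name}
        }
        return Provider{Hostname: "registry.terraform.io", Namespace: "-", Type: name}
    }

    func main() {
        builtin := Provider{Hostname: "terraform.io", Namespace: "builtin", Type: "terraform"}
        fmt.Println(builtin.LegacyString())                      // terraform
        fmt.Println(parseLegacyProvider(builtin.LegacyString())) // terraform.io/builtin/terraform
    }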
* statemgr: add a NewUnlockErrorFull state manager for tests
I've frequently needed to coerce Unlock() errors for tests and it's been
awkward and fraught every time, so I decided to add a full state manager
that *mostly* returns errors. I intend to use this in conjunction with
the clistate.Locker interface, whose Unlock() implementation calls Lock()
first (to block if the mutex is in use), so Lock() rather awkwardly
needed to succeed.
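The shape of it, as a self-contained sketch with a cut-down locking
interface (the real statemgr interfaces have more methods, most of which
would error in the same way):

    package main

    import (
        "errors"
        "fmt"
    )

    // Locker is a simplified stand-in for the statemgr locking interface.
    type Locker interface {
        Lock(info string) (string, error)
        Unlock(id string) error
    }

    // unlockErrorFull mostly fails, but Lock succeeds so that wrappers which
    // Lock at the start of their Unlock path can still reach the Unlock error.
    type unlockErrorFull struct{}

    func (unlockErrorFull) Lock(info string) (string, error) { return "fake-lock-id", nil }
    func (unlockErrorFull) Unlock(id string) error           { return errors.New("unlock error") }

    func main() {
        var mgr Locker = unlockErrorFull{}
        id, err := mgr.Lock("test")
        fmt.Println("lock:", id, err)          // lock: fake-lock-id <nil>
        fmt.Println("unlock:", mgr.Unlock(id)) // unlock: unlock error
    }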
In order to determine if we need to re-read a data source during plan,
we need to compare the newly evaluated configuration with the stored
state. To do that we create a ProposedNewVal which, if there are no
changes, should match the existing state exactly.
A problem arises if the remote data source contains any blocks and they
are not set in the configuration. Terraform always decodes configuration
blocks as empty containers; the legacy SDK, however, cannot correctly
handle empty blocks and may return a null block, which is then saved to
the state. In order to correctly make the comparison for planning, we
need to reify those null blocks as empty containers in the cty value.
The createEmptyBlocks helper converts any null NestingList or NestingSet
blocks to empty list or set cty values. We only need to be concerned
with List and Set, because those are the only types that can be defined
with the legacy SDK. In hindsight these could have been normalized in
the legacy SDK shims had this problem been uncovered earlier, but for the
sake of compatibility we will now normalize these in core.
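A minimal sketch of that normalization using cty directly (the real
createEmptyBlocks walks the full schema; this only shows the list/set
case for a single value):

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    // normalizeBlock replaces a null list- or set-typed block value with an
    // empty collection of the same element type, so it compares equal to the
    // empty container Terraform decodes from configuration.
    func normalizeBlock(v cty.Value) cty.Value {
        ty := v.Type()
        switch {
        case v.IsNull() && ty.IsListType():
            return cty.ListValEmpty(ty.ElementType())
        case v.IsNull() && ty.IsSetType():
            return cty.SetValEmpty(ty.ElementType())
        default:
            return v
        }
    }

    func main() {
        blockTy := cty.Object(map[string]cty.Type{"name": cty.String})

        fromSDK := cty.NullVal(cty.List(blockTy)) // what the legacy SDK may have stored
        fromConfig := cty.ListValEmpty(blockTy)   // what Terraform decodes from config

        fmt.Println(normalizeBlock(fromSDK).RawEquals(fromConfig)) // true
    }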
When working with a ConfigResource, the generalization of a
ModuleInstance to a Module was inadvertently dropped, and there was no
test coverage for that type of target.
Ensure we can target a specific module instance alone.