Previously we would only ever add new lock entries or update existing
ones. However, it's possible that over time a module may _cease_ using
a particular provider, at which point we ought to remove it from the lock
file so that operations won't fail when seeing that the provider cache
directory is inconsistent with the lock file.
Now the provider installer (EnsureProviderVersions) will remove any lock
file entries that relate to providers not included in the given
requirements, which therefore makes the resulting lock file properly match
the set of packages the installer wrote into the cache.
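Roughly, the pruning step amounts to the following sketch; the names and
types here are illustrative stand-ins rather than the actual installer
internals:

    package sketch

    // lockEntry stands in for a provider lock record; the real structure
    // also carries version constraints and hashes.
    type lockEntry struct {
        Version string
        Hashes  []string
    }

    // pruneStaleLocks drops any lock entry whose provider no longer appears
    // in the requirements, so that the lock file matches the set of packages
    // the installer actually wrote into the cache.
    func pruneStaleLocks(locks map[string]lockEntry, requirements map[string]string) {
        for provider := range locks {
            if _, stillRequired := requirements[provider]; !stillRequired {
                delete(locks, provider)
            }
        }
    }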
This does potentially mean that someone could inadvertently defeat the
lock by removing a provider dependency, running "terraform init", then
undoing that removal, and finally running "terraform init" again. However,
that seems relatively unlikely compared to the likelihood of removing
a provider and keeping it removed, and in the event it _did_ happen the
changes to the lock entry for that provider would be visible in the diff
of the provider lock file as usual, and so could be noticed in code
review just as for any other change to dependencies.
instances.Set is only used after all instances have been processed, so
it should therefore only handle known instances and not panic when given
an address that traverses an unexpanded module.
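As a rough illustration of that intent (hypothetical names, not the real
instances package API), a lookup against the set simply reports "not
present" for any address it never recorded, rather than panicking:

    package sketch

    // knownInstances records the instance addresses produced once all
    // module and resource expansion has completed.
    type knownInstances map[string]bool

    // Has reports whether the given address was recorded during expansion.
    // An address that traverses an unexpanded module was never recorded,
    // so it simply reports false here instead of triggering a panic.
    func (s knownInstances) Has(addr string) bool {
        return s[addr]
    }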
Revert the evaluation change from #29862.
While returning a dynamic value for all expanded resources during
validation is not optimal, trying to work around this using unknown maps
and lists is causing other undesirable behaviors during evaluation.
Earlier versions of this code allowed "ref" to take any value that would
be accepted by "git checkout" as a valid target of a symbolic ref. We
inadvertently accepted a breaking change to upstream go-getter that broke
that capability as part of introducing a shallow clone optimization,
because a shallow clone requires selecting a single branch.
To restore the previous capabilities while retaining the "depth" argument,
here we accept a compromise where "ref" has the stronger requirement of
being a valid named ref in the remote repository if and only if "depth"
is set to a value greater than zero. If depth isn't set or is less than
one, we will do the old behavior of just cloning all of the refs in the
remote repository in full and then switching to refer to the selected
branch, tag, or naked commit ID as a separate step.
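The resulting decision is roughly the following sketch; the function and
argument names are illustrative, not the actual GitGetter implementation:

    package sketch

    import "strconv"

    // cloneCommands returns the git commands implied by the compromise
    // described above: with a positive depth we must shallow-clone a single
    // named ref, while otherwise we clone everything and switch to the
    // requested branch, tag, or commit ID as a separate step.
    func cloneCommands(repoURL, ref string, depth int) [][]string {
        if depth > 0 {
            // Shallow clone: "ref" must name a branch or tag that exists
            // in the remote repository.
            return [][]string{
                {"git", "clone", "--depth", strconv.Itoa(depth), "--branch", ref, repoURL},
            }
        }
        // Full clone of all refs, then check out the selected ref afterwards.
        cmds := [][]string{{"git", "clone", repoURL}}
        if ref != "" {
            cmds = append(cmds, []string{"git", "checkout", ref})
        }
        return cmds
    }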
This includes a heuristic to generate an additional error message hint if
we get an error from "git clone" and it looks like the user might've been
trying to use "depth" and "ref=COMMIT" together. We can't recognize that
error accurately because it's only reported as human-oriented git command
output, but this heuristic should hopefully minimize situations where we
show it inappropriately.
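The heuristic is along these lines; the exact pattern and wording below
are illustrative rather than the shipped implementation:

    package sketch

    import (
        "fmt"
        "regexp"
    )

    // looksLikeCommitID is a rough test for a raw commit ID given as "ref".
    var looksLikeCommitID = regexp.MustCompile(`^[0-9a-fA-F]{4,40}$`)

    // maybeAddDepthRefHint wraps a failed "git clone" error with an extra
    // hint when it looks like the user tried to combine "depth" with
    // "ref=COMMIT", which a shallow clone cannot support.
    func maybeAddDepthRefHint(err error, ref string, depth int) error {
        if err != nil && depth > 0 && looksLikeCommitID.MatchString(ref) {
            return fmt.Errorf("%w (the \"depth\" and \"ref=COMMIT\" options cannot be combined, because a shallow clone requires a named branch or tag)", err)
        }
        return err
    }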
For now this is a change in the Terraform repository directly, so that we
can expedite the fix to an already-reported regression. After this is
released I intend to also submit a similar set of changes to upstream
go-getter, at which point we can revert Terraform to using the upstream
getter.GitGetter instead of our own local fork.
This is a pragmatic temporary solution to allow us to more quickly resolve
an upstream regression in go-getter locally within Terraform, so that the
work to upstream it for other callers can happen asynchronously and with
less time pressure.
This commit doesn't yet include any changes to address the bug, and
instead aims to be functionally equivalent to getter.GitGetter. A
subsequent commit will then address the regression, so that the diff of
that commit will be easier to apply later to the upstream to get the same
effect there.
This fixes a regression introduced in d72a413ef8.
The code comment explains the details, but TL;DR: the remote backend
actually *depended* on being able to write its backend state even though
an 'error' occurred (no workspaces).
This is an explicit technical debt note that our plan renderer isn't able
to give a fully-specific hint in this particular case of deletion reason.
This reason code means that at least one of the module instance keys in
the resource's module path doesn't match an instance declared in the
configuration, but the plan data structure doesn't retain enough
information to identify the first step in the path that refers to a
missing instance, and so we just always return the whole module path.
This could be confusing if, for example, we report module.foo[0].module.bar
as not being in the configuration when the real cause is that module.foo
no longer uses "count"; it would be better to say "module.foo[0] is not
in the configuration" instead.
Ideally we would handle all of the different situations that
ResourceInstanceDeleteBecauseWrongRepetition's rendering does, so that we
could go further and explain exactly _why_ that module instance is no
longer declared.
We can do neither of those things today because only the Terraform Core
"expander" component knows that information, and we've discarded that
by the time we get to rendering a plan. To fix this one day would require
preserving in the plan information about which module instances are
declared, as a separate sidecar data structure from which resource
instances we're taking actions on, and then using that to identify which
step in addr.Module here first selects an invalid instance.
Previously we were treating it as a programming error to ask for the
instances of a resource inside an instance of a module that is declared
but whose declaration doesn't include the given instance key.
However, that's actually a valid situation which can arise if, for
example, the user has changed the repetition/expansion mode for an
existing module call and so now all of the resource instance addresses it
previously contained are "orphaned".
To represent that, we'll instead say that an invalid instance key of a
declared module behaves as if it contains no resource instances at all,
regardless of the configurations of any resources nested inside. This
then gives the result needed to successfully detect all of the former
resource instances as "orphaned" and plan to destroy them.
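The rule amounts to something like the following sketch (hypothetical
names, not the real expander API):

    package sketch

    // resourceInstanceAddrs returns the resource instance addresses under
    // one instance of a declared module call. If the requested instance key
    // is not among the declared keys, the module instance behaves as if it
    // contains no resource instances at all, so anything recorded under it
    // in state is detected as orphaned.
    func resourceInstanceAddrs(declaredKeys map[string]bool, instanceKey string, addrs []string) []string {
        if !declaredKeys[instanceKey] {
            return nil
        }
        return addrs
    }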
However, this then introduces a new case for
NodePlannableResourceInstanceOrphan.deleteActionReason to deal with: the
resource configuration still exists (because configuration isn't aware of
individual module/resource instances) but the module instance does not.
This actually allows us to resolve, at least partially, a previously
missing piece of feedback: explaining to the user why the resource
instances are planned for deletion in that case. We can finally be
explicit that it's because the module instance has been removed, which
internally we call plans.ResourceInstanceDeleteBecauseNoModule.
Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>