instances.Set is only used after all instances have been processed, so
it should therefore only handle known instances and not panic when given
an address that traverses an unexpanded module.
Running tests in parallel can help speed up overall test execution. Go
blocks parent tests while child tests run, so it does not fully fan out
as you might expect. It is noticeably faster, though. Running 4 or more
concurrent processes knocks over a minute off the total execution time.
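As a minimal sketch of that pattern (not the actual test code; the fixture
names here are made up), subtests that call t.Parallel fan out while their
parent waits, and the -parallel flag to "go test" caps how many run at once:

    package example_test

    import "testing"

    func TestExamples(t *testing.T) {
        fixtures := []string{"simple", "count", "for_each"} // hypothetical fixture names
        for _, name := range fixtures {
            name := name // capture the loop variable for the parallel closure
            t.Run(name, func(t *testing.T) {
                t.Parallel() // siblings run concurrently; the parent blocks until they finish
                t.Log("exercising fixture", name)
                // ... real assertions for the fixture would go here ...
            })
        }
    }

Running something like "go test -parallel=4 ./..." then bounds how many of
those subtests execute at the same time.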
Revert the evaluation change from #29862.
While returning a dynamic value for all expanded resources during
validation is not optimal, trying to work around this using unknown maps
and lists is causing other undesirable behaviors during evaluation.
Earlier versions of this code allowed "ref" to take any value that would
be accepted by "git checkout" as a valid target of a symbolic ref. We
inadvertently accepted a breaking change to upstream go-getter that broke
that as part of introducing a shallow clone optimization, because shallow
clone requires selecting a single branch.
To restore the previous capabilities while retaining the "depth" argument,
here we accept a compromise where "ref" has the stronger requirement of
being a valid named ref in the remote repository if and only if "depth"
is set to a value greater than zero. If depth isn't set or is less than
one, we will do the old behavior of just cloning all of the refs in the
remote repository in full and then switching to refer to the selected
branch, tag, or naked commit ID as a separate step.
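The following is a simplified sketch of that compromise, shelling out to the
git CLI with illustrative names rather than reproducing the real GitGetter
code:

    package main

    import (
        "os/exec"
        "strconv"
    )

    // cloneRef illustrates the two code paths: with depth > 0 the ref must be
    // a named branch or tag because it is passed to "git clone --branch",
    // while with no depth we clone everything and switch to the ref afterwards.
    func cloneRef(dst, url, ref string, depth int) error {
        if depth > 0 {
            // Shallow clone of a single named ref in the remote repository.
            return exec.Command("git", "clone", "--depth", strconv.Itoa(depth),
                "--branch", ref, url, dst).Run()
        }
        // Full clone, then check out the branch, tag, or naked commit ID.
        if err := exec.Command("git", "clone", url, dst).Run(); err != nil {
            return err
        }
        cmd := exec.Command("git", "checkout", ref)
        cmd.Dir = dst
        return cmd.Run()
    }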
This includes a heuristic to generate an additional error message hint if
we get an error from "git clone" and it looks like the user might've been
trying to use "depth" and "ref=COMMIT" together. We can't recognize that
error accurately because it's only reported as human-oriented git command
output, but this heuristic should hopefully minimize situations where we
show it inappropriately.
For now this is a change in the Terraform repository directly, so that we
can expedite the fix to an already-reported regression. After this is
released I intend to also submit a similar set of changes to upstream
go-getter, at which point we can revert Terraform to using the upstream
getter.GitGetter instead of our own local fork.
This is a pragmatic temporary solution to allow us to more quickly resolve
an upstream regression in go-getter locally within Terraform, so that the
work to upstream it for other callers can happen asynchronously and with
less time pressure.
This commit doesn't yet include any changes to address the bug, and
instead aims to be functionally equivalent to getter.GitGetter. A
subsequent commit will then address the regression, so that the diff of
that commit will be easier to apply later to the upstream codebase to get the same
effect there.
A regression introduced in d72a413ef8
The comment explains, but TLDR: The remote backend actually *depended*
on being able to write its backend state even though an 'error'
occurred (no workspaces).
This is an explicit technical debt note that our plan renderer isn't able
to give a fully-specific hint in this particular case of deletion reason.
This reason code means that at least one of the module instance keys in
the resource's module path doesn't match an instance declared in the
configuration, but the plan data structure doesn't retain enough
information to know which step in the path is the first to refer to a
missing instance, and so we just always return the whole thing.
This would be confusing if we report module.foo[0].module.bar as not being
in the configuration when the real cause is that module.foo doesn't use
"count"; it would be better to say "module.foo[0] is not in the
configuration" instead.
Ideally we would handle all of the different situations that
ResourceInstanceDeleteBecauseWrongRepetition's rendering does, so that we
can go further and explain exactly _why_ that module instance isn't
declared anymore.
We can do neither of those things today because only the Terraform Core
"expander" component knows that information, and we've discarded that
by the time we get to rendering a plan. Fixing this one day would require
preserving information in the plan about which module instances are
declared, as a sidecar data structure separate from the record of which
resource instances we're taking actions on, and then using that to identify
which step in addr.Module here first selects an invalid instance.
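As a purely hypothetical sketch, if the plan carried such a sidecar set of
declared module instance addresses (here reduced to plain strings, not an
existing plan field), the renderer could walk the path and report only the
shortest undeclared prefix:

    // firstMissingInstance walks the module path one step at a time and
    // returns the shortest prefix that isn't in the declared set, e.g.
    // "module.foo[0]" rather than "module.foo[0].module.bar".
    func firstMissingInstance(steps []string, declared map[string]bool) (string, bool) {
        prefix := ""
        for _, step := range steps {
            if prefix == "" {
                prefix = step
            } else {
                prefix += "." + step
            }
            if !declared[prefix] {
                return prefix, true
            }
        }
        return "", false
    }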
Previously we were treating it as a programming error to ask for the
instances of a resource inside an instance of a module that is declared
but whose declaration doesn't include the given instance key.
However, that's actually a valid situation which can arise if, for
example, the user has changed the repetition/expansion mode for an
existing module call and so now all of the resource instance addresses it
previously contained are "orphaned".
To represent that, we'll instead say that an invalid instance key of a
declared module behaves as if it contains no resource instances at all,
regardless of the configurations of any resources nested inside. This
then gives the result needed to successfully detect all of the former
resource instances as "orphaned" and plan to destroy them.
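As a loose illustration of that rule (using simplified string-keyed addresses
rather than the real typed addresses and instance expander), an undeclared
module instance key simply yields no resource instances:

    // resourceInstancesUnder returns the resource instances recorded under a
    // module instance address; an undeclared instance key has no entry, so the
    // lookup yields an empty result instead of panicking.
    func resourceInstancesUnder(known map[string][]string, moduleInstance string) []string {
        return known[moduleInstance] // missing key => nil slice, i.e. no instances
    }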
However, this then introduces a new case for
NodePlannableResourceInstanceOrphan.deleteActionReason to deal with: the
resource configuration still exists (because configuration isn't aware of
individual module/resource instances) but the module instance does not.
This actually allows us to resolve, at least partially, a previously
missing piece of the explanation for why the resource instances are planned
for deletion in that case: we can finally be explicit to the user that it's
because the module instance has been removed, which internally we call
plans.ResourceInstanceDeleteBecauseNoModule.
Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>
This makes it match some incoming links we have elsewhere, but also it
makes the heading a bit more concise because "module" isn't really adding
anything here anyway: input variables are _always_ in modules.
We late-reorganized this into the "Module Development" subsection, but
forgot to update the actual link in the navbar, so it was still linking
to its old location.
Since this is only a minor release there isn't any super-significant
upgrade guide content this time, but I've used this page to elaborate on
some of the upgrade notes already recorded in the Terraform Changelog, to
give additional context if needed to the hopefully-small number of users
that these changes will directly affect during upgrading.