A regression introduced in d72a413ef8
The comment explains the details, but the TL;DR is: the remote backend
actually *depended* on being able to write its backend state even though
an 'error' occurred (no workspaces).
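A rough sketch of the control-flow dependency involved, using invented names
(the real logic lives in Terraform's backend initialization code, not here):
the backend state still has to be persisted even when listing workspaces
fails because none exist yet.

```go
package main

import (
	"errors"
	"fmt"
)

// backend is a stand-in for the workspace-listing part of Terraform's
// backend interface; the real interface is larger.
type backend interface {
	Workspaces() ([]string, error)
}

// initRemote illustrates the dependency: even if Workspaces() returns an
// error because no workspaces exist yet, we must still persist the backend
// state, because later steps of remote-backend initialization rely on it.
func initRemote(b backend, persistBackendState func() error) error {
	if _, err := b.Workspaces(); err != nil {
		// Treating this as fatal (and returning early) is the regression:
		// the backend state would then never be written.
		fmt.Printf("warning: %s\n", err)
	}
	return persistBackendState()
}

type emptyRemote struct{}

func (emptyRemote) Workspaces() ([]string, error) {
	return nil, errors.New("no workspaces found")
}

func main() {
	_ = initRemote(emptyRemote{}, func() error {
		fmt.Println("backend state written")
		return nil
	})
}
```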
This is an explicit technical debt note that our plan renderer isn't able
to give a fully-specific hint for this particular deletion reason.
This reason code means that at least one of the module instance keys in
the resource's module path doesn't match an instance declared in the
configuration, but the plan data structure doesn't retain enough
information to determine which step in the path is the first to refer to
a missing instance, and so we just always return the whole address.
That can be confusing: reporting that module.foo[0].module.bar is not in
the configuration when the real cause is that module.foo no longer uses
"count" is less helpful than simply saying "module.foo[0] is not in the
configuration".
Ideally we would handle all of the different situations that
ResourceInstanceDeleteBecauseWrongRepetition's rendering does, so that we
could go further and explain exactly _why_ that module instance is no
longer declared.
We can do neither of those things today because only the Terraform Core
"expander" component knows that information, and we've discarded it by
the time we get to rendering a plan. To fix this one day would require
the plan to preserve information about which module instances are
declared, as a sidecar data structure separate from the resource
instances we're taking actions on, and then using that to identify which
step in addr.Module here first selects an invalid instance.
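A purely hypothetical sketch of that sidecar idea (none of these types exist
in Terraform today): given the set of module instances the expander declared,
walk the orphaned resource's module path and report the first step that
selects an undeclared instance, so the renderer could say "module.foo[0] is
not in the configuration" rather than echoing the full address.

```go
package main

import "fmt"

// moduleStep is an invented stand-in for one step in a module path,
// e.g. module.foo[0] has Name "foo" and InstanceKey "0".
type moduleStep struct {
	Name        string
	InstanceKey string // "" means no instance key
}

// firstUndeclaredStep returns the shortest prefix of the path that names a
// module instance which the expander never declared.
func firstUndeclaredStep(path []moduleStep, declared map[string]bool) (string, bool) {
	addr := ""
	for _, step := range path {
		if addr != "" {
			addr += "."
		}
		addr += "module." + step.Name
		if step.InstanceKey != "" {
			addr += "[" + step.InstanceKey + "]"
		}
		if !declared[addr] {
			return addr, true
		}
	}
	return "", false // every step is declared
}

func main() {
	// module.foo no longer uses "count", so module.foo[0] was never declared,
	// orphaning everything under module.foo[0].module.bar.
	path := []moduleStep{{Name: "foo", InstanceKey: "0"}, {Name: "bar"}}
	declared := map[string]bool{"module.foo": true}
	if addr, ok := firstUndeclaredStep(path, declared); ok {
		fmt.Printf("%s is not in the configuration\n", addr)
	}
}
```

This would print "module.foo[0] is not in the configuration", which is the
more precise hint described above.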
Previously we were treating it as a programming error to ask for the
instances of a resource inside an instance of a module that is declared
but whose declaration doesn't include the given instance key.
However, that's actually a valid situation which can arise if, for
example, the user has changed the repetition/expansion mode for an
existing module call, and so now all of the resource instance addresses it
previously contained are "orphaned".
To represent that, we'll instead say that an invalid instance key of a
declared module behaves as if it contains no resource instances at all,
regardless of the configurations of any resources nested inside. This
then gives the result needed to successfully detect all of the former
resource instances as "orphaned" and plan to destroy them.
However, this then introduces a new case for
NodePlannableResourceInstanceOrphan.deleteActionReason to deal with: the
resource configuration still exists (because configuration isn't aware of
individual module/resource instances) but the module instance does not.
This actually allows us to resolve, at least partially, a previously
missing piece of feedback: explaining to the user why the resource
instances are planned for deletion in that case. We can finally be
explicit to the user that it's because the containing module instance has
been removed, which internally we call
plans.ResourceInstanceDeleteBecauseNoModule.
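A simplified, self-contained sketch of that new case (invented stand-in
types; the real decision lives in
NodePlannableResourceInstanceOrphan.deleteActionReason and uses Terraform's
internal plans package):

```go
package main

import "fmt"

// deleteReason is an illustrative stand-in for the plans package's
// action-reason codes, not the real constants.
type deleteReason string

const (
	deleteBecauseNoResourceConfig deleteReason = "no resource configuration"
	deleteBecauseNoModule         deleteReason = "module instance no longer declared"
	deleteBecauseOther            deleteReason = "other (elided in this sketch)"
)

// orphanDeleteReason sketches the new branch: the resource block still
// exists in configuration (configuration isn't aware of individual
// instances), but the module instance containing this resource instance is
// no longer declared, so the reason is "no module".
func orphanDeleteReason(resourceConfigExists, moduleInstanceDeclared bool) deleteReason {
	switch {
	case !moduleInstanceDeclared:
		return deleteBecauseNoModule
	case !resourceConfigExists:
		return deleteBecauseNoResourceConfig
	default:
		return deleteBecauseOther
	}
}

func main() {
	// e.g. module.foo switched from count to single-instance, orphaning
	// module.foo[0].aws_instance.example while its resource block remains.
	fmt.Println(orphanDeleteReason(true, false))
}
```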
Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>
This makes it match some incoming links we have elsewhere, but also it
makes the heading a bit more concise because "module" isn't really adding
anything here anyway: input variables are _always_ in modules.
We reorganized this into the "Module Development" subsection late in the
process, but
forgot to update the actual link in the navbar, so it was still linking
to its old location.
Since this is only a minor release there isn't any super-significant
upgrade guide content this time, but I've used this page to elaborate on
some of the upgrade notes already recorded in the Terraform Changelog, to
give additional context if needed to the hopefully-small number of users
that these changes will directly affect during upgrading.
As explained in the changes: The 'enhanced' backend terminology, which
only truly pertains to the 'remote' backend with a single API (Terraform
Cloud/Enterprise's), has been found to be a confusing vestige which need
only be explained in the context of the 'remote' backend.
These changes reorient the explanation(s) of backends to pertain more
directly to their primary purpose, which is storage of state snapshots
(and not implementing operations).
That Terraform operations are still _implemented_ by the literal
`Backend` and `Enhanced` interfaces is inconsequential to a user of
Terraform; it's an internal detail.
Apologies for not creating an issue first, but it seemed like a simple docs change.
`apt install terraform` requires a preceding `apt update` before Terraform can be installed.
When migrating from an explicit local backend to Terraform Cloud, we ask
whether the user wants to migrate the state. If there is no state to
migrate, we should not ask whether they want to migrate the emptiness.
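A tiny illustrative sketch of that check, with invented names (the real
prompt lives in the backend migration code): only offer the migration prompt
when the source state actually has something in it.

```go
package main

import "fmt"

// state is a stand-in for a source state snapshot.
type state struct {
	resources int
}

// empty reports whether there's anything worth migrating.
func (s *state) empty() bool { return s == nil || s.resources == 0 }

// shouldAskToMigrate returns true only when there is real state to move,
// so we never ask the user to "migrate the emptiness".
func shouldAskToMigrate(local *state) bool {
	return !local.empty()
}

func main() {
	fmt.Println(shouldAskToMigrate(nil))                  // false: nothing to migrate
	fmt.Println(shouldAskToMigrate(&state{resources: 3})) // true: prompt the user
}
```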