Previously we would only ever add new lock entries or update existing
ones. However, it's possible that over time a module may _cease_ using
a particular provider, at which point we ought to remove it from the lock
file so that operations won't fail upon finding that the provider cache
directory is inconsistent with the lock file.
Now the provider installer (EnsureProviderVersions) will remove any lock
file entries that relate to providers not included in the given
requirements, so that the resulting lock file properly matches
the set of packages the installer wrote into the cache.
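For illustration (the provider names and version constraints here are
hypothetical), if a module's `required_providers` block previously also
declared `hashicorp/random` and that requirement is deleted, the next
`terraform init` will now also drop the matching entry from
`.terraform.lock.hcl`:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
    # The hashicorp/random requirement that used to be declared here has
    # been removed, so its lock file entry is removed on the next init.
  }
}
```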
This does potentially mean that someone could inadvertently defeat the
lock by removing a provider dependency, running "terraform init", then
undoing that removal, and finally running "terraform init" again. However,
that seems far less likely than removing a provider and keeping it removed,
and even if it _did_ happen, the changes to the lock entry for that
provider would be visible in the diff of the provider lock file as usual,
and so could be noticed in code review just as for any other change to
dependencies.
This makes it match some incoming links we have elsewhere, and it also
makes the heading a bit more concise, because "module" isn't really adding
anything here anyway: input variables are _always_ in modules.
As explained in the changes: the 'enhanced' backend terminology, which
only truly pertains to the 'remote' backend (the one backed by a single
API, Terraform Cloud/Enterprise's), has proven to be a confusing vestige
that need only be explained in the context of the 'remote' backend itself.
These changes reorient the explanation(s) of backends to pertain more
directly to their primary purpose, which is storage of state snapshots
(and not implementing operations).
That Terraform operations are still _implemented_ by the literal
`Backend` and `Enhanced` interfaces is inconsequential to a user of
Terraform; it is an internal detail.
Apologies for not creating an issue first, but this seemed like a simple
docs change. `apt install terraform` requires running `apt update` before
Terraform can be installed.
The HashiCorp APT server supports several distro releases that were not
in this list, giving the false impression that they aren't supported.
Note that Ubuntu has a new release twice a year, so this list needs
periodic maintenance.
The `root_module.resources[].sensitive_values` key in the example output was incorrectly named and clashed with the regular `root_module.resources[].values` key.
This is documentation for the first set of refactoring-related features,
all based on the new "moved" blocks in the Terraform language.
I've named the documentation section "refactoring" because in previous
discussions with users that seems to be the term they use to describe the
underlying need.
"moved" blocks are our first language feature intended to meet that need,
although it probably won't be the last as we consider other requirements
in later releases. My intent here is that once we've published this it
should eventually end up being the first result for a web search for the
topic of Terraform refactoring.
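As a minimal sketch of the feature this section documents (the resource
addresses here are hypothetical):

```hcl
# Tell Terraform that the object previously tracked as
# aws_instance.old_name is now configured as aws_instance.new_name,
# so it plans a move instead of destroying and recreating it.
moved {
  from = aws_instance.old_name
  to   = aws_instance.new_name
}
```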
We introduced this experiment to gather feedback, and the feedback we saw
led us to decide to do another round of design work before we move
forward with something to meet this use-case.
In addition to being experimental, this has only been included in alpha
releases so far, and so on both counts it is not protected by the
Terraform v1.0 Compatibility Promises.
The extra feedback information about why a resource instance deletion is
planned is now included in the streaming JSON UI output.
We also add an explicit case for no-op actions to the switch statements in
this package to ensure exhaustiveness for future linting.
There are a few different reasons why a resource instance tracked in the
prior state might be considered an "orphan", but previously we reported
them all identically in the planned changes.
To help users understand a surprising planned delete, we'll now try to
record an additional reason for the planned deletion, covering all of the
main situations in which that can happen.
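One common example (the resource type and arguments here are hypothetical)
is reducing `count`, which orphans the instances at the removed indices;
the plan can now carry an additional reason explaining why the deletion
was proposed:

```hcl
resource "aws_instance" "example" {
  # Previously count = 3. The instance tracked at index [2] in the prior
  # state now has no configuration, so it becomes an "orphan" and is
  # planned for deletion, and the plan can now record a reason for this.
  count = 2

  ami           = "ami-0123456789abcdef0"
  instance_type = "t3.micro"
}
```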
This commit only introduces the new detail to the plans.Changes result,
though it also incidentally exposes it as part of the JSON plan result
in order to keep that working without returning errors in these new
cases. We'll expose this information in the human-oriented UI output in
a subsequent commit.
Add previous address information to the `planned_change` and
`resource_drift` messages for the streaming JSON UI output of plan and
apply operations.
Here we also add a "move" action value to the `change` object of these
messages, to represent a move-only operation.
As part of this work we also simplify this code to use the plan's
DriftedResources values instead of recomputing the drift from state.
Configuration-driven moves are represented in the plan file by setting
the resource's `PrevRunAddr` to a different value than its `Addr`. For
JSON plan output, we add a new field to resource changes,
`previous_address`, which is present and non-empty only if the resource
is planned to be moved.
As in the CLI UI, refresh-only plans will include move-only changes in
the resource drift JSON output. In normal plan mode, these are elided to
avoid redundancy with planned changes.
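As a hypothetical illustration of the configuration that produces such a
result: renaming a module call via a `moved` block, with no other changes,
yields move-only changes for every resource instance inside it, each
reporting the old address as its previous address, with `previous_address`
set accordingly in the JSON plan's resource changes.

```hcl
# The module call formerly named "app" is now named "application";
# only addresses change, so Terraform plans moves rather than
# destroy-and-recreate actions.
moved {
  from = module.app
  to   = module.application
}
```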
In the last paragraph, the word "generated" is in the wrong tense for the sentence. The correct word is "generate" (unless I misunderstand the sentence 🙂).
Without `resource_group_name` set, I got the following error:
> │ Error: Either an Access Key / SAS Token or the Resource Group for the Storage Account must be specified - or Azure AD Authentication must be enabled
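For reference, a minimal `azurerm` backend configuration that includes the
resource group might look like this (all values are placeholders):

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "example-resources"   # placeholder
    storage_account_name = "examplestorageacct"  # placeholder
    container_name       = "tfstate"             # placeholder
    key                  = "prod.terraform.tfstate"
  }
}
```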