Our local filesystem mirror mechanism will allow provider packages to be
given either in packed form as an archive directly downloaded to disk or
in an unpacked form where the archive is extracted.
Distinguishing these two cases in the concrete Location types will allow
callers to reliably determine which mode the selected installation source
has chosen and handle it appropriately, rather than resorting to out-of-band
heuristics like checking whether the object is a directory or a file.
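As a rough sketch of that distinction (the names below are illustrative
assumptions, not the real API), the two cases could be modeled as separate
concrete types behind a small marker interface:

```go
package mirrors // illustrative package name

// PackageLocation lets callers switch on the concrete type instead of
// probing the filesystem to learn which form they were given.
type PackageLocation interface {
	packageLocation()
}

// packedLocalPackage is the path to a provider archive on local disk.
type packedLocalPackage string

// unpackedLocalPackage is the path to a directory containing an
// already-extracted provider package.
type unpackedLocalPackage string

func (p packedLocalPackage) packageLocation()   {}
func (p unpackedLocalPackage) packageLocation() {}
```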
In a future commit, these implementations of Source will allow finding
and retrieving provider packages via local mirrors, both in the local
filesystem and over the network using an HTTP-based protocol.
This is an API stub for a component that will be added in a future commit
to support considering a number of different installation sources for each
provider. These will eventually be configurable in the CLI configuration,
allowing users to, for example, mirror certain providers within their own
infrastructure while still being able to go upstream for those that aren't
mirrored, or to permit only locally-mirrored providers.
Some sources make network requests that are likely to be slow, so this
wrapper type can cache previous responses for its lifetime in order to
speed up repeated requests for the same information.
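A minimal sketch of such a wrapper, assuming a Source interface with an
AvailableVersions method (both names are assumptions for illustration):

```go
package mirrors // illustrative

// source stands in for whatever Source interface the real wrapper decorates.
type source interface {
	AvailableVersions(provider string) ([]string, error)
}

// cachingSource memoizes responses from an underlying source for its own
// lifetime, so repeated requests for the same provider skip the network.
type cachingSource struct {
	underlying source
	versions   map[string][]string
}

func newCachingSource(u source) *cachingSource {
	return &cachingSource{underlying: u, versions: make(map[string][]string)}
}

func (s *cachingSource) AvailableVersions(provider string) ([]string, error) {
	if cached, ok := s.versions[provider]; ok {
		return cached, nil // answered from the cache
	}
	vs, err := s.underlying.AvailableVersions(provider)
	if err != nil {
		return nil, err
	}
	s.versions[provider] = vs
	return vs, nil
}
```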
Registries backed by static files are likely to use relative paths to
their archives for simplicity's sake, but we'll normalize them to be
absolute before returning because the caller wouldn't otherwise know what
to resolve the URLs relative to.
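The normalization itself can lean on the standard library; a hedged sketch
(the surrounding function is hypothetical):

```go
package mirrors // illustrative

import "net/url"

// absoluteArchiveURL resolves a possibly-relative archive URL against the
// base URL of the registry document it appeared in, so callers always get
// an absolute URL back.
func absoluteArchiveURL(base *url.URL, archive string) (*url.URL, error) {
	u, err := url.Parse(archive)
	if err != nil {
		return nil, err
	}
	// ResolveReference leaves u unchanged if it is already absolute.
	return base.ResolveReference(u), nil
}
```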
We intend to support installation both directly from origin registries and
from mirrors in the local filesystem or over the network. This Source
interface will serve as our abstraction over those three options, allowing
calling code to treat them all the same.
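A hedged sketch of what that abstraction might look like; the method
signatures and the PackageMeta type here are assumptions for illustration,
not the final API:

```go
package mirrors // illustrative

// Source is something that can report which versions of a provider are
// available and where a given version can be fetched from, whether it is
// backed by an origin registry, a filesystem mirror, or a network mirror.
type Source interface {
	// AvailableVersions lists the versions of the given provider that this
	// source is able to install.
	AvailableVersions(provider string) ([]string, error)

	// PackageMeta returns the download location and related metadata for one
	// specific version of the given provider on the given platform.
	PackageMeta(provider, version, platform string) (PackageMeta, error)
}

// PackageMeta is an illustrative stand-in for whatever metadata a real
// implementation would return alongside the download location.
type PackageMeta struct {
	Location string // URL or local path of the package
}
```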
Our existing provider installer was originally built to work with
releases.hashicorp.com and later retrofitted to talk to the official
Terraform Registry. It also assumes a flat namespace of providers.
We're starting a new one here, copying and adapting code from the old one
as necessary, so that we can build out this new API while retaining all
of the existing functionality and then cut over to this new implementation
in a later step.
Here we're creating a foundational component for the new installer, which
is a mechanism to query for the available versions and download locations
of a particular provider.
Subsequent commits in this package will introduce other Source
implementations for installing from network and filesystem mirrors.
* deps: bump terraform-config-inspect library
* configs: parse `version` in new required_providers block
With the latest version of `terraform-config-inspect`, the
required_providers attribute can now be a string or an object with
attributes "source" and "version". This change allows parsing the
version constraint from the new object while ignoring any given source attribute.
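For illustration, the string-or-object handling might look roughly like the
following sketch over a go-cty value; the helper itself is hypothetical:

```go
package configsketch // illustrative

import "github.com/zclconf/go-cty/cty"

// versionConstraint accepts either of the two forms now allowed in
// required_providers: a bare version string, or an object with "source"
// and "version" attributes, of which only "version" is read here.
func versionConstraint(v cty.Value) (string, bool) {
	switch {
	case v.Type().Equals(cty.String):
		return v.AsString(), true
	case v.Type().IsObjectType() && v.Type().HasAttribute("version"):
		return v.GetAttr("version").AsString(), true
	default:
		return "", false
	}
}
```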
* command: use backend config from state when backend=false is used.
When a user runs `terraform init --backend=false`, Terraform should
inspect the state for a previously-configured backend, and use that
backend, ignoring any backend config in the current configuration. If no
backend is configured or there is no state, return a local backend.
Fixes #16593
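A hedged sketch of that decision, with stand-in types and helpers (none of
these names come from the real code):

```go
package backendsketch // illustrative

// backend is a placeholder for Terraform's real backend abstraction.
type backend interface{}

// chooseBackend sketches the behavior described above: with -backend=false
// we ignore the backend block in the current configuration, preferring
// whatever backend the existing state records, and falling back to a local
// backend when the state records nothing.
func chooseBackend(initBackend bool, fromState, fromConfig, local backend) backend {
	if !initBackend {
		if fromState != nil {
			return fromState // reuse the previously-configured backend
		}
		return local // no state, or no backend configured in it
	}
	return fromConfig
}
```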
In earlier versions of Terraform the result of terraform state show was
in the pre-0.12 "flatmap" structure that was unable to reflect nested
data structures. That was fixed in Terraform 0.12, but as a consequence
this statement about the output being machine-parseable (which was
debatable even in older versions) is incorrect.
Fortunately, we now have "terraform show -json" to get output that is
intentionally machine-parseable, so we'll recommend using that instead
here. The JSON output of that command is a superset of what's produced by
"terraform state show", so should be usable to meet any use-case that
might previously have been met by parsing the "terraform state show"
output.
In an earlier change we switched to defining our own sets of detectors,
getters, etc for go-getter in order to insulate us from upstream changes
to those sets that might otherwise change the user-visible behavior of
Terraform's module installer.
However, we apparently neglected to actually refer to our local set of
detectors, and continued to refer to the upstream set. Here we catch up
with the latest detectors from upstream (taken from the version of
go-getter we currently have vendored) and start using that fixed set.
Currently we are maintaining these custom go-getter sets in two places
due to the configload vs. initwd distinction. That was already true for
goGetterGetters and goGetterDecompressors, and so I've preserved that for
now just to keep this change relatively simple; in a later change it would
be nice to factor these "get with go getter" functions out into a shared
location which we can call from both configload and initwd.
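For reference, the fixed local set looks conceptually like the sketch below;
the exact membership here is an assumption, since the real list is copied
from the go-getter version we have vendored:

```go
package initwdsketch // illustrative

import getter "github.com/hashicorp/go-getter"

// goGetterDetectors is a locally-pinned detector set, so that changes to
// go-getter's own default set can't silently change how module source
// addresses are interpreted. The membership shown is illustrative.
var goGetterDetectors = []getter.Detector{
	new(getter.GitHubDetector),
	new(getter.BitBucketDetector),
	new(getter.GCSDetector),
	new(getter.S3Detector),
	new(getter.FileDetector),
}

// detectModuleSource resolves a user-supplied source address using only the
// pinned set above rather than go-getter's global defaults.
func detectModuleSource(src, pwd string) (string, error) {
	return getter.Detect(src, pwd, goGetterDetectors)
}
```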
Clear an instance's Dependencies if any entry matches the `state mv` "from"
address. While stale dependencies won't directly affect any current
operations, clearing the list will allow them to be recreated in their
entirety during refresh. This will help future releases that may rely
solely on the pre-calculated dependencies for destruction ordering.
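In sketch form (the type and function names here are stand-ins, not the real
state types), the behavior is:

```go
package statesketch // illustrative

// instanceRecord is a stand-in for a state entry that tracks dependencies.
type instanceRecord struct {
	Dependencies []string
}

// clearStaleDependencies drops the whole Dependencies list of any instance
// that mentions the `state mv` "from" address, so the list can be rebuilt
// in its entirety on the next refresh.
func clearStaleDependencies(instances []*instanceRecord, fromAddr string) {
	for _, inst := range instances {
		for _, dep := range inst.Dependencies {
			if dep == fromAddr {
				inst.Dependencies = nil
				break
			}
		}
	}
}
```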
This is a "should never happen" case, but we have reports of it actually
happening. In order to try to collect a bit more data about what's going
on here, we're changing what was previously a hard panic into a normal
error message that can include the address of the instance we were working
on and the action we were trying to do to it at the time.
The hope is to narrow down which situations can trigger this so we can find
a reliable reproduction case and debug further. This also
means that for those who _do_ encounter this problem in the meantime
Terraform will have a chance to shut down cleanly and therefore be more
likely to be able to recover on a subsequent plan/apply cycle.
Further investigation of this will follow once we see a report or two of
this updated error message.
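Abstractly the change has this shape (the names are placeholders; the real
code has richer context available at the call site):

```go
package plansketch // illustrative

import "fmt"

// reportUnexpectedInstanceState replaces what used to be a hard panic with an
// ordinary error that records which instance we were working on and which
// action we were trying to perform, so Terraform can shut down cleanly.
func reportUnexpectedInstanceState(instanceAddr, action string) error {
	return fmt.Errorf(
		"unexpected state for %s while attempting to %s; this is a bug in Terraform, please report it",
		instanceAddr, action,
	)
}
```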
Since a planned destroy can no longer indicate it is a full destroy,
unused values were being left in the apply graph for evaluation. If
these values contain interpolations that can fail (for example, a
zipmap with mismatched list sizes), they will cause the apply to abort.
The PruneUnusedValuesTransformer was previously only run during destroy,
more out of conservatism than for any other particular reason. Adapt it
to always remove unused values from the graph, with the exception being
the root module outputs, which must be retained when we don't have a
clear indication that a full destroy is being executed.