We previously skipped this one because it wasn't strictly necessary for
replicating the old "terraform init" behavior, but we do need it to work
so that things like the -plugin-dir option can behave correctly.
Linking packages from other cache directories and installing from unpacked
directories are fundamentally the same operation because a cache directory
is really just a collection of unpacked packages, so here we refactor
the LinkFromOtherCache functionality to actually be in
installFromLocalDir, and LinkFromOtherCache becomes a wrapper for
the installFromLocalDir function that just calculates the source and
target directories automatically and invalidates the metaCache.
We previously had only a stub implementation for a totally-empty
MultiSource. Here we have an initial implementation of the full
functionality, which we'll need to support "terraform init -plugin-dir=..."
in a subsequent commit.
On Unix-derived systems a directory must be marked as "executable" in
order to be accessible, so our previous mode of 0660 here was insufficient
and would cause a failure if the installer happened to be the one creating
the plugins directory for the first time.
Now we'll make it executable and readable for all but only writable by
the same user/group. For consistency, we also make the selections file
itself readable by everyone. In both cases, the umask we are run with may
further constrain these modes.
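As a rough sketch of the intended modes (the path handling and function
names here are invented for illustration, and the process umask may still
clear additional bits from both modes):

    package example

    import (
        "io/ioutil"
        "os"
    )

    func writeSelectionsFile(pluginsDir, filename string, data []byte) error {
        // 0775: traversable and readable by everyone, writable only by the
        // owning user and group.
        if err := os.MkdirAll(pluginsDir, 0775); err != nil {
            return err
        }
        // 0664: readable by everyone, writable only by user and group.
        return ioutil.WriteFile(pluginsDir+"/"+filename, data, 0664)
    }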
These are some helpers to support unit testing in other packages, allowing
callers to exercise provider installation mechanisms without hitting any
real upstream source or having to prepare local package directories.
MockSource is a Source implementation that just scans over a provided
static list of packages and returns whatever matches.
FakePackageMeta is a shorthand for concisely constructing a
realistic-looking but uninstallable PackageMeta, probably for use with
MockSource.
FakeInstallablePackageMeta is similar to FakePackageMeta but also goes to
the trouble of creating a real temporary archive on local disk so that
the resulting package meta is pointing to something real on disk. This
makes the result more useful to the caller, but in return they get the
responsibility to clean up the temporary file once the test is over.
Nothing is using these yet.
Just as with the old installer mechanism, our goal is that explicit
provider installation is the only way that new provider versions can be
selected.
To achieve that, we conclude each call to EnsureProviderVersions by
writing a selections lock file into the target directory. A later caller
can then recall the selections from that file by calling SelectedPackages,
which both ensures that it selects the same set of versions and also
verifies that the checksums recorded by the installer still match.
This new selections.json file has a different layout than our old
plugins.json lock file. Not only does it use a different hashing algorithm
than before, we also record explicitly which version of each provider
was selected. In the old model, we'd repeat normal discovery when
reloading the lock file and then fail with a confusing error message if
discovery happened to select a different version, but now we'll be able
to distinguish a package that's gone missing since installation
(which could previously have then selected a different available version)
from a package that has been modified.
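The exact schema of selections.json is whatever the installer writes, but
as an illustrative sketch of the kind of record involved (the field and
type names here are invented for the example), each selected provider maps
to a version plus one or more package hashes:

    package example

    import (
        "encoding/json"
        "io/ioutil"
    )

    // selection is an illustrative stand-in for one entry in the
    // selections file; treat these field names as hypothetical rather
    // than the real schema.
    type selection struct {
        Version string   `json:"version"`
        Hashes  []string `json:"hashes"`
    }

    // writeSelections records which version of each provider was
    // installed and the hash of the installed package, so a later
    // SelectedPackages-style call can re-check both.
    func writeSelections(path string, sel map[string]selection) error {
        buf, err := json.MarshalIndent(sel, "", "  ")
        if err != nil {
            return err
        }
        return ioutil.WriteFile(path, buf, 0664)
    }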
For the old-style provider cache directory model we hashed the individual
executable file for each provider. That's no longer appropriate because
we're giving each provider package a whole directory to itself where it
can potentially have many files.
This therefore introduces a new directory-oriented hashing algorithm, and
it's just using the Go Modules directory hashing algorithm directly
because that's already had its cross-platform quirks and other wrinkles
addressed during the Go Modules release process, and is now used
prolifically enough in Go codebases that breaking changes to the upstream
algorithm would be very expensive to the Go ecosystem.
This is also a bit of forward planning, anticipating that later we'll use
hashes in a top-level lock file intended to be checked in to user version
control, and then use those hashes also to verify packages _during_
installation, where we'd need to be able to hash unpacked zip files. The
Go Modules hashing algorithm is already implemented to consistently hash
both a zip file and an unpacked version of that zip file.
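Concretely this means leaning on golang.org/x/mod/sumdb/dirhash. The
sketch below assumes the provider zip lays its files out at the archive
root, which is why the directory hash uses an empty prefix; that layout
assumption is part of the sketch rather than something stated above:

    package example

    import (
        "fmt"

        "golang.org/x/mod/sumdb/dirhash"
    )

    // hashPackageDir hashes an unpacked package directory using the Go
    // Modules "h1:" scheme.
    func hashPackageDir(dir string) (string, error) {
        // Empty prefix: the package's files sit directly at the top of
        // the directory, mirroring the root of the corresponding zip.
        return dirhash.HashDir(dir, "", dirhash.Hash1)
    }

    // hashPackageZip hashes a still-packed zip archive; for a well-formed
    // package this produces the same "h1:" string as hashing the
    // unpacked directory above.
    func hashPackageZip(zipFile string) (string, error) {
        return dirhash.HashZip(zipFile, dirhash.Hash1)
    }

    func verifySame(dir, zipFile string) error {
        a, err := hashPackageDir(dir)
        if err != nil {
            return err
        }
        b, err := hashPackageZip(zipFile)
        if err != nil {
            return err
        }
        if a != b {
            return fmt.Errorf("hash mismatch: %s != %s", a, b)
        }
        return nil
    }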
We've been using the models from the "moduledeps" package to represent our
provider dependencies everywhere since the idea of provider dependencies
was introduced in Terraform 0.10, but that model is not convenient to use
for any use-case other than the "terraform providers" command that needs
individual-module-level detail.
To make things easier for new codepaths working with the new-style
provider installer, here we introduce a new model type
getproviders.Requirements which is based on the type the new installer was
already taking as its input. We have new methods in the states, configs,
and earlyconfig packages to produce values of this type, and a helper
to merge Requirements together so we can combine config-derived and
state-derived requirements together during installation.
The advantage of this new model over the moduledeps one is that all of
the recursive module walking is done up front and we produce a simple, flat
structure that is more convenient for the main use-cases of selecting
providers for installation and then finding providers in the local cache
to use them for other operations.
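In simplified form (the real Requirements type is keyed by provider
address values and carries parsed version constraints rather than plain
strings), the flat shape and the merge helper look roughly like this:

    package example

    // requirements is a simplified stand-in for the flat model described
    // above: one entry per provider for the whole configuration, rather
    // than a tree of per-module dependency records.
    type requirements map[string][]string // provider address -> version constraints

    // merge combines two requirement sets, e.g. config-derived and
    // state-derived requirements, by concatenating the constraints for
    // any provider that appears in both.
    func merge(a, b requirements) requirements {
        ret := make(requirements)
        for provider, constraints := range a {
            ret[provider] = append(ret[provider], constraints...)
        }
        for provider, constraints := range b {
            ret[provider] = append(ret[provider], constraints...)
        }
        return ret
    }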
This new model is _not_ suitable for implementing "terraform providers"
because it does not retain module-specific requirement details. Therefore
we will likely keep using moduledeps for "terraform providers" for now,
and then possibly at a later time consider specializing the moduledeps
logic for only what "terraform providers" needs, because it seems to be
the only use-case that needs to retain that level of detail.
This was incorrectly removing the _source_ entry prior to creating the
symlink, therefore ending up with a dangling symlink and no source file.
This wasn't obvious before because the test case for LinkFromOtherCache
was also incorrectly named and therefore wasn't running. Fixing the name
of that test made this problem apparent.
The TestLinkFromOtherCache test case now ends up seeing the final resolved
directory rather than the symlink target, because of upstream changes
to the internal/getproviders filesystem scanning logic to handle symlinks
properly.
Previously this was failing to treat symlinks to directories as unpacked
layout, because our file info was only an Lstat result, not a full Stat.
Now we'll resolve the symlink first, allowing us to handle a symlink to
a directory. That's important because our internal/providercache behavior
is to symlink from one cache to another where possible.
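The underlying distinction is just os.Lstat versus os.Stat: the former
describes the symlink itself while the latter follows it, so a symlink
into another cache still registers as a directory:

    package example

    import "os"

    // isUnpackedDir reports whether the given path should be treated as
    // an unpacked package directory, following symlinks so that a link
    // from one cache directory into another still counts.
    func isUnpackedDir(path string) (bool, error) {
        info, err := os.Stat(path) // os.Lstat would report the symlink itself
        if err != nil {
            return false, err
        }
        return info.IsDir(), nil
    }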
There's a lot going on in these functions that can be hard to follow from
the outside, so we'll add some additional trace logging so that we can
more easily understand why things are behaving the way they are.
When a provider source produces an HTTP URL location we'll expect it to
resolve to a zip file, which we'll first download to a temporary
directory and then treat it like a local archive.
When a provider source produces a local archive path we'll expect it to
be a zip file and extract it into the target directory.
This does not yet include an implementation of installing from an
already-unpacked local directory. That will follow in a subsequent commit,
likely following a similar principle as in Dir.LinkFromOtherCache.
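In outline, the HTTP case reduces to downloading into a temporary file and
then handing that file to the same code path as a locally-provided
archive; the helper below is a simplified sketch with that hand-off left
to the caller:

    package example

    import (
        "fmt"
        "io"
        "io/ioutil"
        "net/http"
        "os"
    )

    // fetchToTemp downloads the given URL, which is expected to be a zip
    // archive, into a temporary file and returns its path. The caller
    // removes the file when done and otherwise treats it just like a
    // locally-provided archive.
    func fetchToTemp(url string) (string, error) {
        resp, err := http.Get(url)
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return "", fmt.Errorf("unexpected response %s", resp.Status)
        }

        f, err := ioutil.TempFile("", "terraform-provider")
        if err != nil {
            return "", err
        }
        defer f.Close()

        if _, err := io.Copy(f, resp.Body); err != nil {
            os.Remove(f.Name())
            return "", err
        }
        return f.Name(), nil
    }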
These new functions allow command implementations to get hold of the
providercache objects and installation source object derived from the
current CLI configuration.
The MultiSource isn't actually properly implemented yet, but this is a
minimal implementation just for the case where there are no underlying
sources at all, because we use an empty MultiSource as a placeholder
when a test in the "command" package fails to explicitly populate a
ProviderSource.
This is not tested yet, but it's a compilable strawman implementation of
the necessary sequence of events to coordinate all of the moving parts
of running a provider installation operation.
This will inevitably see more iteration in later commits as we complete
the surrounding parts and wire it up to be used by "terraform init". So
far, it's just dead code not called by any other package.
The Installer type will encapsulate the logic for running an entire
provider installation request: given a set of providers to install, it
will determine a method to obtain each of them (or detect that they are
already installed) and then take the necessary actions.
So far it doesn't do anything, but this stubs out an interface by which
the caller can request ongoing notifications during an installation
operation.
This will eventually be responsible for actually retrieving a package from
a source and then installing it into the cache directory, but for the
moment it's just a stub to complete the proposed API, which I intend to
test in a subsequent commit by writing the full "Installer" API that will
encapsulate the full installation logic.
When a system-wide shared plugin cache is configured, we'll want to make
use of entries already in the shared cache when populating a local
(configuration-specific) cache.
This new method LinkFromOtherCache encapsulates the work of placing a link
from one cache to another. If possible it will create a symlink, therefore
retaining a key advantage of configuring a shared plugin cache, but
otherwise we'll do a deep copy of the package directory from one cache
to the other.
Our old provider installer would always skip trying to create symlinks on
Windows because Go standard library support for os.Symlink on Windows
was inconsistent in older versions. However, os.Symlink can now create
symlinks using a new API introduced in a Windows 10 update and cleanly
fail if symlink creation is impossible, so it's safe for us to just
try to create the symlink and react if that produces an error, just as we
used to do on non-Windows systems when possibly creating symlinks on
filesystems that cannot support them.
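A minimal sketch of that strategy follows; the deep-copy fallback here is
intentionally simplistic (it doesn't preserve symlinks inside the package
or exact directory modes) and just stands in for a more careful
implementation:

    package example

    import (
        "io"
        "os"
        "path/filepath"
        "strings"
    )

    // linkOrCopy tries to symlink dst to src and falls back to a deep
    // copy if the platform or filesystem refuses to create the symlink.
    func linkOrCopy(src, dst string) error {
        if err := os.Symlink(src, dst); err == nil {
            return nil
        }
        return copyTree(src, dst)
    }

    // copyTree is a simplistic recursive copy of an unpacked package
    // directory, used only when symlinking isn't possible.
    func copyTree(src, dst string) error {
        return filepath.Walk(src, func(path string, info os.FileInfo, err error) error {
            if err != nil {
                return err
            }
            rel := strings.TrimPrefix(path, src)
            target := filepath.Join(dst, rel)
            if info.IsDir() {
                return os.MkdirAll(target, 0755)
            }
            in, err := os.Open(path)
            if err != nil {
                return err
            }
            defer in.Close()
            out, err := os.OpenFile(target, os.O_CREATE|os.O_WRONLY, info.Mode())
            if err != nil {
                return err
            }
            defer out.Close()
            _, err = io.Copy(out, in)
            return err
        })
    }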
The existing functionality in this package deals with finding packages
that are either available for installation or already installed. In order
to support installation we also need to determine the location where a
package should be installed.
This lives in the getproviders package because that way all of the logic
related to the filesystem layout for local provider directories lives
together here, where it can be maintained more easily in future.
We've previously been copying this function around so it could remain
unexported while being used in various packages. However, it's a
non-trivial function with lots of specific assumptions built into it, so
here we'll put it somewhere that other packages can depend on it _and_
document the assumptions it seems to be making for future reference.
As a bonus, this now uses os.SameFile to detect when two paths point to
the same physical file, instead of the slightly buggy local implementation
we had before which only worked on Unix systems and did not correctly
handle when the paths were on different physical devices.
The copy of the function I extracted here is the one from internal/initwd,
so this commit also includes the removal of that unexported version and
updating the callers in that package to use it at this new location.
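For reference, os.SameFile compares the underlying file identity reported
by the operating system (device and inode on Unix, file ID on Windows)
rather than comparing path strings:

    package example

    import "os"

    // samePhysicalFile reports whether two paths refer to the same
    // underlying file, even if the path strings themselves differ.
    func samePhysicalFile(a, b string) (bool, error) {
        ai, err := os.Stat(a)
        if err != nil {
            return false, err
        }
        bi, err := os.Stat(b)
        if err != nil {
            return false, err
        }
        return os.SameFile(ai, bi), nil
    }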
Historically our logic to handle discovering and installing providers has
been spread across several different packages. This package is intended
to become the home of all logic related to what is now called "provider
cache directories", which means directories on local disk where Terraform
caches providers in a form that is ready to run.
That includes both logic related to interrogating items already in a cache
(included in this commit) and logic related to inserting new items into
the cache from upstream provider sources (to follow in later commits).
These new codepaths are focused on providers and do not include other
plugin types (provisioners and credentials helpers), because providers are
the only plugin type that is represented by a hierarchical, decentralized
namespace and the only plugin type that has an auto-installation protocol
defined. The existing codepaths will remain to support the handling of
the other plugin types that require manual installation and that use only
a flat, locally-defined namespace.
Previously this was available by instantiating a throwaway
FilesystemMirrorSource, but that's pretty counter-intuitive for callers
that just want to do a one-off scan without retaining any ongoing state.
Now we expose SearchLocalDirectory as an exported function, and the
FilesystemMirrorSource then uses it as part of its implementation too.
Callers that just want to know what's available in a directory can call
SearchLocalDirectory directly.
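A one-off scan from elsewhere in the codebase then looks something like
the following; the import path and signature reflect my reading of the
package, so treat them as assumptions:

    package example

    import (
        "log"

        "github.com/hashicorp/terraform/internal/getproviders"
    )

    // logAvailablePackages does a one-off scan of a local directory
    // without constructing a FilesystemMirrorSource.
    func logAvailablePackages(baseDir string) {
        metas, err := getproviders.SearchLocalDirectory(baseDir)
        if err != nil {
            log.Printf("failed to scan %s: %s", baseDir, err)
            return
        }
        for provider, list := range metas {
            log.Printf("%s has %d package(s) available", provider, len(list))
        }
    }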
Implement a new provider_meta block in the terraform block of modules,
allowing provider-keyed metadata to be communicated from HCL to provider
binaries.
Bundled in this change, to minimize protocol version bumps, is the
addition of Markdown support for attribute descriptions and the ability
to indicate when an attribute is deprecated, so this information can be
shown in the schema dump.
Co-authored-by: Paul Tyng <paul@paultyng.net>
This implies some notable changes that will have a visible impact to
end-users of official Terraform releases:
- Terraform is no longer compatible with macOS 10.10 Yosemite, and
requires at least 10.11 El Capitan. (Relatedly, Go 1.14 is planned to be
the last release to support El Capitan, so while that remains supported
for now, it's notable that Terraform 0.13 is likely to be the last major
release of Terraform supporting it, with 0.14 likely to further require
macOS 10.12 Sierra.)
- Terraform is no longer compatible with FreeBSD 10.x, which has reached
end-of-life. Terraform now requires FreeBSD 11.2 or later.
- Terraform now supports TLS 1.3 when it makes connections to remote
services such as backends and module registries. Although TLS 1.3 is
backward-compatible in principle, some legacy systems reportedly work
incorrectly when attempting to negotiate it. (This change does not
affect outgoing requests made by provider plugins, though they will see
a similar change in behavior once built with Go 1.13 or later.)
- Ed25519 certificates are now supported for TLS 1.2 and 1.3 connections.
- On UNIX systems where "use-vc" is set in resolv.conf, TCP will now be
used for DNS resolution. This is unlikely to cause issues in practice
because a system set up in this way can presumably already reach its
nameservers over TCP (or else other applications would misbehave), but
could potentially lead to lookup failures in unusual situations where a
system only runs Terraform, has historically had "use-vc" in its
configuration, but yet is blocked from reaching its configured
nameservers over TCP.
- Some parts of Terraform now support Unicode 12.0 when working with
strings. However, notably the Terraform Language itself continues to
use the text segmentation tables from Unicode 9.0, which means it lacks
up-to-date support for recognizing modern emoji combining forms as
single characters. (We may wish to upgrade the text segmentation tables
to Unicode 12.0 tables in a later commit, to restore consistency.)
This also includes some changes to the contents of "vendor", and
particularly to the format of vendor/modules.txt, per the changes to
vendoring in the Go 1.14 toolchain. This new syntax is activated by the
specification of "go 1.14" in the go.mod file.
Finally, the exact format of error messages from the net/http library has
changed since Go 1.12, and so a couple of our tests needed updates to
their expected error messages to match that.
This is a basic implementation of FilesystemMirrorSource for now aimed
only at the specific use-case of scanning the cache of provider plugins
Terraform will keep under the ".terraform" directory, as part of our
interim provider installer implementation for Terraform 0.13.
The full functionality of this will grow out in later work when we
implement explicit local filesystem mirrors, but for now the goal is to
use this just to inspect the work done by the automatic installer once
we switch it to the new provider-FQN-aware directory structure.
The various FIXME comments in this are justified by the limited intended
scope of this initial implementation, and they should be resolved by
later work to use FilesystemMirrorSource explicitly for user-specified
provider package mirrors.
These are utility functions to ease processing of lists of PackageMeta
elsewhere, once we have functionality that works with multiple packages
at once. The local filesystem mirror source will be the first example of
this, so these methods are motivated mainly by its needs.
This is just to have a centralized set of logic for converting from a
platform string (like "linux_amd64") to a Platform object, so we can do
normalization and validation consistently.
Although we tend to return these in contexts where at least one of these
values is implied, being explicit means that PackageMeta values are
self-contained and less reliant on such external context.
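Callers are expected to go through a single parsing call rather than
splitting the string themselves; the function and field names below are
as I understand them, so treat this sketch as an assumption:

    package example

    import (
        "fmt"

        "github.com/hashicorp/terraform/internal/getproviders"
    )

    func showPlatform() error {
        platform, err := getproviders.ParsePlatform("linux_amd64")
        if err != nil {
            return err
        }
        fmt.Println(platform.OS, platform.Arch) // "linux amd64"
        return nil
    }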
* WIP: dynamic expand
* WIP: add variable and local support
* WIP: outputs
* WIP: Add referencer
* String representation, fixing tests it impacts
* Fixes TestContext2Apply_outputOrphanModule
* Fix TestContext2Apply_plannedDestroyInterpolatedCount
* Update DestroyOutputTransformer and associated types to reflect PlannableOutputs
* Remove comment about locals
* Remove module count enablement
* Removes allowing count for modules, and reverts the test,
while adding a Skip()'d test that works when you re-enable
the config
* update TargetDownstream signature to match master
* remove unnecessary method
Co-authored-by: James Bardin <j.bardin@gmail.com>
The provider FQN is becoming our primary identifier for a provider, so
it's important that we are clear about the equality rules for these
addresses and what characters are valid within them.
We previously had a basic regex permitting ASCII letters and digits for
validation and no normalization at all. We need to do at least case
folding and UTF-8 normalization because these names will appear in file
and directory names in case-insensitive filesystems and in repository
names such as on GitHub.
Since we're already using DNS-style normalization and validation rules
for the hostname part, rather than defining an entirely new set of rules
here we'll just treat the provider namespace and type as if they were
single labels in a DNS name. Aside from some internal consistency, that
also works out nicely because systems like GitHub use organization and
repository names as part of hostnames (e.g. with GitHub Pages) and so
tend to apply comparable constraints themselves.
This introduces the possibility of names containing letters from alphabets
other than the latin alphabet, and for latin letters with diacritics.
That's consistent with our introduction of similar support for identifiers
in the language in Terraform 0.12, and is intended to be more friendly to
Terraform users throughout the world that might prefer to name their
products using a different alphabet. This is also a further justification
for using the DNS normalization rules: modern companies tend to choose
product names that make good domain names, and now such names will be
usable as Terraform provider names too.
This is a temporary helper so that we can potentially ship the new
provider installer without making a breaking change by relying on the
old default namespace lookup API on the default registry to find a proper
FQN for a legacy provider address during installation.
If it's given a non-legacy provider address then it just returns the given
address verbatim, so any codepath using it will also correctly handle
explicit full provider addresses. This also means it will automatically
self-disable once we stop using addrs.NewLegacyProvider in the config
loader, because there will therefore no longer be any legacy provider
addresses in the config to resolve. (They'll be "default" provider
addresses instead, assumed to be under registry.terraform.io/hashicorp/* )
It's not decided yet whether we will actually introduce the new provider
installer in a minor release, but even if we don't this API function will likely be
useful for a hypothetical automatic upgrade tool to introduce explicit
full provider addresses into existing modules that currently rely on
the equivalent to this lookup in the current provider installer.
This is dead code for now, but my intent is that it would either be called
as part of new provider installation to produce an address suitable to
pass to Source.AvailableVersions, or it would be called from the
aforementioned hypothetical upgrade tool.
Whatever happens, these functions can be removed no later than one whole
major release after the new provider installer is introduced, when
everyone's had the opportunity to update their legacy unqualified
addresses.
Our local filesystem mirror mechanism will allow provider packages to be
given either in packed form as an archive directly downloaded to disk or
in an unpacked form where the archive is extracted.
Distinguishing these two cases in the concrete Location types will allow
callers to reliably select the mode chosen by the selected installation
source and handle it appropriately, rather than resorting to out-of-band
heuristics like checking whether the object is a directory or a file.
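The payoff for callers is a plain type switch on the Location value
instead of filesystem sniffing; the type names below reflect my
understanding of the package, so treat them as assumptions:

    package example

    import (
        "log"

        "github.com/hashicorp/terraform/internal/getproviders"
    )

    // describeLocation shows the kind of dispatch an installer can do
    // once packed and unpacked local packages have distinct Location
    // types.
    func describeLocation(meta getproviders.PackageMeta) {
        switch loc := meta.Location.(type) {
        case getproviders.PackageLocalArchive:
            log.Printf("zip archive on local disk at %s", string(loc))
        case getproviders.PackageLocalDir:
            log.Printf("already-unpacked directory at %s", string(loc))
        case getproviders.PackageHTTPURL:
            log.Printf("zip archive available for download from %s", string(loc))
        default:
            log.Printf("unsupported package location type %T", loc)
        }
    }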
In a future commit, these implementations of Source will allow finding
and retrieving provider packages via local mirrors, both in the local
filesystem and over the network using an HTTP-based protocol.
This is an API stub for a component that will be added in a future commit
to support considering a number of different installation sources for each
provider. These will eventually be configurable in the CLI configuration,
allowing users to e.g. mirror certain providers within their own
infrastructure while still being able to go upstream for those that aren't
mirrored, or permit locally-mirrored providers only, etc.
Some sources make network requests that are likely to be slow, so this
wrapper type can cache previous responses for its lifetime in order to
speed up repeated requests for the same information.
Registries backed by static files are likely to use relative paths to
their archives for simplicity's sake, but we'll normalize them to be
absolute before returning because the caller wouldn't otherwise know what
to resolve the URLs relative to.
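That normalization is essentially the standard net/url reference
resolution step; the URLs in this sketch are made up for illustration:

    package example

    import (
        "fmt"
        "net/url"
    )

    func resolveArchiveURL() error {
        // baseURL stands in for the URL of the registry document that
        // contained the relative path.
        baseURL, err := url.Parse("https://registry.example.com/v1/providers/example/foo/1.0.0/download/linux/amd64")
        if err != nil {
            return err
        }
        rel, err := url.Parse("./terraform-provider-foo_1.0.0_linux_amd64.zip")
        if err != nil {
            return err
        }
        abs := baseURL.ResolveReference(rel)
        fmt.Println(abs.String()) // now absolute, so the caller needs no extra context
        return nil
    }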
We intend to support installation both directly from origin registries and
from mirrors in the local filesystem or over the network. This Source
interface will serve as our abstraction over those three options, allowing
calling code to treat them all the same.
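The shape of that abstraction is roughly as follows; this is a simplified
sketch rather than the package's literal interface definition:

    package example

    // source is a simplified sketch of the abstraction described above:
    // something that can report which versions of a provider exist and
    // where to obtain a particular version for a particular platform.
    type source interface {
        // AvailableVersions lists the versions of the given provider
        // that this source can potentially install.
        AvailableVersions(provider string) ([]string, error)

        // PackageMeta returns metadata, including a download location,
        // for one version of a provider on one platform.
        PackageMeta(provider, version, platform string) (packageMeta, error)
    }

    type packageMeta struct {
        Provider string
        Version  string
        Platform string
        Location string // e.g. a local path or an HTTPS URL
    }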
Our existing provider installer was originally built to work with
releases.hashicorp.com and later retrofitted to talk to the official
Terraform Registry. It also assumes a flat namespace of providers.
We're starting a new one here, copying and adapting code from the old one
as necessary, so that we can build out this new API while retaining all
of the existing functionality and then cut over to this new implementation
in a later step.
Here we're creating a foundational component for the new installer, which
is a mechanism to query for the available versions and download locations
of a particular provider.
Subsequent commits in this package will introduce other Source
implementations for installing from network and filesystem mirrors.
* deps: bump terraform-config-inspect library
* configs: parse `version` in new required_providers block
With the latest version of `terraform-config-inspect`, the
required_providers attribute can now be a string or an object with
attributes "source" and "version". This change allows parsing the
version constraint from the new object while ignoring any given source
attribute.
In an earlier change we switched to defining our own sets of detectors,
getters, etc for go-getter in order to insulate us from upstream changes
to those sets that might otherwise change the user-visible behavior of
Terraform's module installer.
However, we apparently neglected to actually refer to our local set of
detectors, and continued to refer to the upstream set. Here we catch up
with the latest detectors from upstream (taken from the version of
go-getter we currently have vendored) and start using that fixed set.
Currently we are maintaining these custom go-getter sets in two places
due to the configload vs. initwd distinction. That was already true for
goGetterGetters and goGetterDecompressors, and so I've preserved that for
now just to keep this change relatively simple; in later change it would
be nice to factor these "get with go getter" functions out into a shared
location which we can call from both configload and initwd.
faster
The acceptance tests for etcdv3, oss and manta were not validating
required env variables, choosing to assume that if one was running
acceptance tests they had already configured the credentials.
It was not always clear if this was a bug in the tests or the provider,
so I opted to make the tests fail faster when required attributes were
unset (or "").
Add versioned tfplugin proto files to the docs directory, for easier
reference. The latest version starts as a symlink to the current
file used to generate the tfplugin package in ./internal/tfplugin5.
When changing the protocol version, the old file must be copied to
./docs/plugin-protocol/, and a new symlink created for the latest
version.
Private data was previously created during Plan, and sent back to the
provider during Apply. This data also needs to be persisted across
Read calls, but rather than rely on core for that we can send the data
to the provider during Read to allow for more flexibility.
This was already working, but since that codepath is separate from the
go-getter install codepath it's helpful to have a separate test for it,
in addition to the existing one for go-getter modules.
The "err" variable in the MaybeRelativePathErr condition was masking the
original err with nil in the "else" case of this branch, causing the
error message to be incomplete.
While here, also tweaked the wording to say "Could not download" rather
than "Error attempting to download", both to say the same thing in fewer
words and because the summary line above already starts with "Error:"
when we print out this message, so it looks weird to have both lines
start with the same word.
* internal/initwd: follow local module path symlink
Fixes #21060
While a previous commit fixed a problem when the local module directory
contained a symlink, it did not account for the possibility that the
entire directory was a symlink.
In studying existing providers we've found a pattern we weren't previously
accounting for: using a nested block type to represent a group of
arguments that relate to a particular feature that is always enabled, but
where it improves configuration readability to group all of its settings
together in a nested block.
The existing NestingSingle was not a good fit for this because it is
designed under the assumption that the presence or absence of the block
has some significance in enabling or disabling the relevant feature, and
so for these always-active cases we'd generate a misleading plan where
the settings for the feature appear totally absent, rather than showing
the default values that will be selected.
NestingGroup is, therefore, a slight variation of NestingSingle where
presence vs. absence of the block is not distinguishable (it's never null)
and instead its contents are treated as unset when the block is absent.
This then in turn causes any default values associated with the nested
arguments to be honored and displayed in the plan whenever the block is
not explicitly configured.
The current SDK cannot activate this mode, but that's okay because its
"legacy type system" opt-out flag allows it to force a block to be
processed in this way anyway. We're adding this now so that we can
introduce the feature in a future SDK without causing a breaking change
to the protocol, since the set of possible block nesting modes is not
extensible.
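In schema terms (using the configschema model; the block and attribute
names here are invented for the example), an always-active feature group
would be declared with NestingGroup like this:

    package example

    import (
        "github.com/hashicorp/terraform/configs/configschema"
        "github.com/zclconf/go-cty/cty"
    )

    // exampleSchema sketches a resource type whose feature block is
    // always active: with NestingGroup, omitting the block in
    // configuration just means all of its arguments are unset, so their
    // defaults show up in the plan instead of a null block.
    var exampleSchema = &configschema.Block{
        BlockTypes: map[string]*configschema.NestedBlock{
            "settings": {
                Nesting: configschema.NestingGroup,
                Block: configschema.Block{
                    Attributes: map[string]*configschema.Attribute{
                        "retries": {Type: cty.Number, Optional: true},
                    },
                },
            },
        },
    }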
* configs/configupgrade: detect possible relative module sources
If a module source appears to be a relative local path but does not have
a preceding ./, print a #TODO message for the user.
* internal/initwd: limit go-getter detectors to those supported by terraform
* internal/initwd: move isMaybeRelativeLocalPath check into getWithGoGetter
To avoid making two calls to getter.Detect, which potentially makes
non-trivial API calls, the "isMaybeRelativeLocalPath" check was moved to
a later step and a custom error type was added so user-friendly
diagnostics could be displayed in the event that a possible relative local
path was detected.
Identify module sources that look like relative paths ("child" instead
of "./child", for example) and surface a helpful error.
Previously, such module sources would be passed to go-getter, which
would fail because it was expecting an absolute, or properly relative,
path. This commit moves the check for improper relative paths sooner so
a user-friendly error can be displayed.
configs/configload and internal/initwd both had a copyDir function that
would fail if the source directory contained a symlinked directory,
because os.FileMode.IsDir() returns false for symlinks.
This PR adds a check for a symlink and copies that symlink in the
target directory. It handles symlinks for both files and directories
(with included tests).
Fixes #20539
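The core of the change is to detect the symlink with Lstat and recreate an
equivalent link in the target rather than treating it as an ordinary file
or directory; roughly:

    package example

    import "os"

    // copyEntry sketches the per-entry handling: symlinks (to files or
    // directories) are recreated as symlinks in the target, rather than
    // being treated as ordinary files or skipped.
    func copyEntry(srcPath, dstPath string, copyRegular func(src, dst string) error) error {
        info, err := os.Lstat(srcPath) // Lstat so we can see the symlink itself
        if err != nil {
            return err
        }
        if info.Mode()&os.ModeSymlink != 0 {
            target, err := os.Readlink(srcPath)
            if err != nil {
                return err
            }
            return os.Symlink(target, dstPath)
        }
        if info.IsDir() {
            return os.MkdirAll(dstPath, info.Mode().Perm())
        }
        return copyRegular(srcPath, dstPath)
    }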
Due to the imprecision of our shimming from the legacy SDK type system to
the new Terraform Core type system, the legacy SDK produces a number of
inconsistencies that produce only minor quirky behavior or broken
edge-cases. To retain compatibility with those existing weird behaviors,
the legacy SDK opts out of our safety checks.
The intent here is to allow existing providers to continue to do their
previous unsafe behaviors for now, accepting that this will allow certain
quirky bugs from previous releases to persist, and then gradually migrate
away from the legacy SDK and remove this opt-out on a per-resource basis
over time.
As with the apply-time safety check opt-out, this is reserved only for
the legacy SDK and must not be used in any new SDK implementations. We
still include any inconsistencies as warnings in the logs as an aid to
anyone debugging weird behavior, so that they can see situations where
blame may be misplaced in the user-visible error messages.
The shim layer for the legacy SDK type system is not precise enough to
guarantee it will produce identical results between plan and apply. In
particular, values that are null during plan will often become zero-valued
during apply.
To avoid breaking those existing providers while still allowing us to
introduce this check in the future, we'll introduce a rather-hacky new
flag that allows the legacy SDK to signal that it is the legacy SDK and
thus disable the check.
Once we start phasing out the legacy SDK in favor of one that natively
understands our new type system, we can stop setting this flag and thus
get the additional safety of this check without breaking any
previously-released providers.
No other SDK is permitted to set this flag, and we will remove it if we
ever introduce protocol version 6 in future, assuming that any provider
supporting that protocol will always produce consistent results.
There are a few constructs from 0.11 and prior that cause 0.12 parsing to
fail altogether, which previously created a chicken/egg problem because
we need to install the providers in order to run "terraform 0.12upgrade"
and thus fix the problem.
This changes "terraform init" to use the new "early configuration" loader
for module and provider installation. This is built on the more permissive
parser in the terraform-config-inspect package, and so it allows us to
read out the top-level blocks from the configuration while accepting
legacy HCL syntax.
In the long run this will let us do version compatibility detection before
attempting a "real" config load, giving us better error messages for any
future syntax additions, but in the short term the key thing is that it
allows us to install the dependencies even if the configuration isn't
fully valid.
Because backend init still requires full configuration, this introduces a
new mode of terraform init where it detects heuristically if it seems like
we need to do a configuration upgrade and does a partial init if so,
before finally directing the user to run "terraform 0.12upgrade" before
running any other commands.
The heuristic here is based on two assumptions:
- If the "early" loader finds no errors but the normal loader does, the
configuration is likely to be valid for Terraform 0.11 but not 0.12.
- If there's already a version constraint in the configuration that
excludes Terraform versions prior to v0.12 then the configuration is
probably _already_ upgraded and so it's just a normal syntax error,
even if the early loader didn't detect it.
Once the upgrade process is removed in 0.13.0 (users will be required to
go stepwise 0.11 -> 0.12 -> 0.13 to upgrade after that), some of this can
be simplified to remove that special mode, but the idea of doing the
dependency version checks against the liberal parser will remain valuable
to increase our chances of reporting version-based incompatibilities
rather than syntax errors as we add new features in future.
This is an adaptation of the installation code from configs/configload,
now using the "earlyconfig" package instead of the "configs" package.
Module installation is an initialization-only process, with all other
commands assuming an already-initialized directory. Having it here can
therefore simplify the API of configs/configload, which can now focus only
on the problem of loading modules that have already been installed.
The old installer code in configs/configload is still in place for now
because the caller in "terraform init" isn't yet updated to use this.
"terraform init" is quite a complex beast in relation to other commands
since almost everything it does is unique to it and thus not factored out
into other packages.
To get some of that sprawl out of the "command" package, this new internal
package will give us somewhere to put this init functionality that is
also useful for test code that needs to mimic the initialization behavior
against fixture directories.
This is an alternative to the full config loader in the "configs" package
that is good for "early" use-cases like "terraform init", where we want
to find out what our dependencies are without getting tripped up on any
other errors that might be present.
The main significant change here is that the package name for the proto
definition is "tfplugin5", which is important because this name is part
of the wire protocol for references to types defined in our package.
Along with that, we also move the generated package into "internal" to
make it explicit that importing the generated Go package from elsewhere is
not the right approach for externally-implemented SDKs, which should
instead vendor the proto definition they are using and generate their
own stubs to ensure that the wire protocol is the only hard dependency
between Terraform Core and plugins.
After this is merged, any provider binaries built against our
helper/schema package will need to be rebuilt so that they use the new
"tfplugin5" package name instead of "proto".
In a future commit we will include more elaborate and organized
documentation on how an external codebase might make use of our RPC
interface definition to implement an SDK, but the primary concern here
is to ensure we have the right wire package name before release.