Some errors from Get are essentially user error, so we want to be able to
recognize them and give the user good feedback on how to proceed.
Although sentinel values are not an ideal solution to this, they are a
reasonably simple way to achieve it without lots of refactoring.
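As a rough illustration (the names and messages below are hypothetical, not
necessarily the ones the package ends up using), sentinel values for these
cases might look like:

    package discovery

    import "errors"

    // Hypothetical sentinel values; callers compare the error returned
    // from Get against these to decide what guidance to show the user.
    var (
        // ErrorNoSuchProvider: no provider with the requested name exists
        // at all, which usually means a typo in the configuration.
        ErrorNoSuchProvider = errors.New("no provider exists with the given name")

        // ErrorNoSuitableVersion: the provider exists, but no released
        // version satisfies the configured version constraints.
        ErrorNoSuitableVersion = errors.New("no suitable version is available")
    )

A caller in the command package can then switch on the returned value and
print targeted guidance instead of a generic failure message.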
Fetch the SHA256SUMS file and verify its signature before downloading
any plugins.
This embeds the hashicorp public key in the binary. If the public key is
replaced, new releases will need to be cut anyway. A
--verify-plugin=false flag will be added to skip signature verification
in these cases.
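A minimal sketch of the verification step, assuming golang.org/x/crypto/openpgp
and hypothetical helper names (the embedded key is elided here):

    package discovery

    import (
        "bytes"
        "strings"

        "golang.org/x/crypto/openpgp"
    )

    // hashicorpPublicKey stands in for the ASCII-armored release signing
    // key that is embedded in the binary.
    const hashicorpPublicKey = `-----BEGIN PGP PUBLIC KEY BLOCK-----
    ...
    -----END PGP PUBLIC KEY BLOCK-----`

    // verifySums checks the detached signature over the SHA256SUMS
    // document before any plugin zip is downloaded.
    func verifySums(sums, sig []byte) error {
        keyring, err := openpgp.ReadArmoredKeyRing(strings.NewReader(hashicorpPublicKey))
        if err != nil {
            return err
        }
        _, err = openpgp.CheckDetachedSignature(keyring, bytes.NewReader(sums), bytes.NewReader(sig))
        return err
    }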
Since the command package also needs to know about the specific OS_ARCH
directories, remove the logic from the discovery package.
This doesn't completely remove the knowledge of the path from discovery,
in order to maintain the current behavior of skipping legacy plugin
names within a new-style path.
Previously we had a "getProvider" function type used to implement plugin
fetching. Here we replace that with an interface type, initially with
just a "Get" function.
For now this just simplifies the interface by allowing the target
directory and protocol version to be members of the struct rather than
passed as arguments.
A later change will extend this interface to also include a method to
purge unused plugins, so that upgrading frequently doesn't leave behind
a trail of unused executable files.
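A sketch of what the interface could look like (names and signatures are
illustrative; Constraints and PluginMeta are the discovery package's existing
constraint and plugin-metadata types):

    package discovery

    // Installer replaces the old getProvider function type. The target
    // directory and plugin protocol version become fields of the concrete
    // implementation rather than arguments to every call.
    type Installer interface {
        // Get installs a provider matching the given constraints and
        // reports metadata about what was installed.
        Get(provider string, req Constraints) (PluginMeta, error)

        // A later change is expected to add something like:
        //   PurgeUnused(used map[string]PluginMeta) error
    }

    // ProviderInstaller would be the implementation used by "terraform init".
    type ProviderInstaller struct {
        Dir                   string // target directory for installed plugins
        PluginProtocolVersion uint
    }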
When running "terraform init" with providers that are unconstrained, we
will now produce information to help the user update configuration to
constrain for the particular providers that were chosen, to prevent
inadvertently drifting onto a newer major release that might contain
breaking changes.
A ~> constraint is used here because pinning to a single specific version
is expected to create dependency hell when using child modules. By using
this constraint mode, which allows minor version upgrades, we avoid the
need for users to constantly adjust version constraints across many
modules, while still making major version upgrades opt-in.
Any constraint at all in the configuration will prevent the display of
these suggestions, so users are free to use stronger or weaker constraints
if desired, ignoring the recommendation.
We can filter the allowed versions and sort them before checking the
protocol version; that way we can just return the first one found,
reducing network requests.
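A standalone sketch of that ordering, using go-version directly rather than
the package's own wrapper types (the helper names are hypothetical):

    package discovery

    import (
        "fmt"
        "sort"

        version "github.com/hashicorp/go-version"
    )

    // newestCompatible filters to the versions allowed by the constraint,
    // sorts them newest-first, and returns the first candidate whose
    // plugin protocol matches, so usually only one extra request is made.
    func newestCompatible(allowed version.Constraints, available []*version.Version,
        protocolMatches func(*version.Version) (bool, error)) (*version.Version, error) {

        var candidates []*version.Version
        for _, v := range available {
            if allowed.Check(v) {
                candidates = append(candidates, v)
            }
        }
        sort.Sort(sort.Reverse(version.Collection(candidates))) // newest first

        for _, v := range candidates {
            ok, err := protocolMatches(v) // e.g. a header check on the release
            if err != nil {
                return nil, err
            }
            if ok {
                return v, nil
            }
        }
        return nil, fmt.Errorf("no acceptable version has a compatible plugin protocol")
    }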
Extend the test release server to return the protocol version header
and a dummy zip file for the provider.
Test filtering the plugins by plugin protocol version and add a full
GetProvider test.
GetProvider needs to be provided with the plugin protocol version,
because it can't be imported here directly.
The plugin URL types and methods were confusing; replace them with a few
functions to format the URLs (see the sketch below).
This is used to mark the plugin protocol version. Currently we actually
just ignore this entirely, since only one protocol version exists anyway.
Later we will need to add checks here to ensure that we only pay attention
to plugins of the right version.
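The URL-formatting functions mentioned above could be as simple as the
following sketch (the host and path layout shown are illustrative and
subject to change):

    package discovery

    import (
        "fmt"
        "runtime"
    )

    const releaseHost = "https://releases.hashicorp.com"

    // providerVersionsURL returns the index listing a provider's releases.
    func providerVersionsURL(name string) string {
        return fmt.Sprintf("%s/terraform-provider-%s/", releaseHost, name)
    }

    // providerURL returns the download URL for one provider release built
    // for the current OS and architecture.
    func providerURL(name, version string) string {
        return fmt.Sprintf("%s/terraform-provider-%s/%s/terraform-provider-%s_%s_%s_%s.zip",
            releaseHost, name, version, name, version, runtime.GOOS, runtime.GOARCH)
    }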
As well as constraining plugins by version number, we also want to be
able to pin plugins to use specific executables so that we can detect
drift in available plugins between commands.
This commit allows such requirements to be specified, but doesn't yet
specify any such requirements, nor validate them.
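One plausible shape for such requirements (field names are hypothetical;
Constraints is the package's version-constraint type):

    package discovery

    type PluginConstraints struct {
        // Versions selects which released versions are acceptable.
        Versions Constraints

        // SHA256, if non-nil, pins the plugin to one specific executable
        // by the SHA-256 digest of its contents, so drift in available
        // plugins between commands can be detected.
        SHA256 []byte
    }

    // PluginRequirements maps a provider name to its constraints.
    type PluginRequirements map[string]*PluginConstraints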
Add discovery.GetProviders to fetch plugins from the releases site.
This is an early version, with no tests, that only (probably) fetches
plugins from the default location. The URLs are still subject to change,
and since there are no plugin releases, it doesn't work at all yet.
Instead of providing a path in BackendOpts, provide a loaded
*config.Config instead. This reduces the number of places where
configuration is loaded.
The semver library we were using doesn't have support for a "pessimistic
constraint" where e.g. the user wants to accept only minor or patch
version upgrades. This is important for providers since users will
generally want to pin their dependencies to not inadvertently accept
breaking changes.
So here we switch to hashicorp's home-grown go-version library, which
has the ~> constraint operator for this sort of constraint.
Given how much the old version object was already intruding into the
interface and creating dependency noise in callers, this also now wraps
the "raw" go-version objects in package-local structs, thus keeping the
details encapsulated and allowing callers to deal just with this package's
own types.
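For reference, the pessimistic operator behaves roughly like this (a
standalone go-version example, not code from the change itself):

    package main

    import (
        "fmt"

        version "github.com/hashicorp/go-version"
    )

    func main() {
        // "~> 1.2" accepts later minor and patch releases within the 1.x
        // series, but rejects the next major version.
        c, err := version.NewConstraint("~> 1.2")
        if err != nil {
            panic(err)
        }

        for _, s := range []string{"1.2.0", "1.9.3", "2.0.0"} {
            v, _ := version.NewVersion(s)
            fmt.Printf("%-6s allowed: %v\n", s, c.Check(v))
        }
        // 1.2.0 and 1.9.3 are allowed; 2.0.0 is not.
    }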
Having this as a method of PluginMeta felt most natural, but unfortunately
that means that discovery must depend on plugin and plugin in turn
depends on core Terraform, thus making the discovery package hard to use
without creating dependency cycles.
To resolve this, we invert the dependency and make the plugin package be
responsible for instantiating clients given a meta, using a top-level
function.
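Roughly, the inverted arrangement looks like this (a sketch; Handshake and
PluginMap stand for the plugin package's existing handshake config and plugin
map, and the exact names and import paths are assumptions):

    package plugin

    import (
        "os/exec"

        hcplugin "github.com/hashicorp/go-plugin"
        "github.com/hashicorp/terraform/plugin/discovery"
    )

    // ClientConfig builds a go-plugin client configuration for the plugin
    // executable described by the given meta.
    func ClientConfig(m discovery.PluginMeta) *hcplugin.ClientConfig {
        return &hcplugin.ClientConfig{
            Cmd:             exec.Command(m.Path),
            HandshakeConfig: Handshake,
            Plugins:         PluginMap,
            Managed:         true,
        }
    }

    // Client wraps ClientConfig in a ready-to-launch go-plugin client.
    func Client(m discovery.PluginMeta) *hcplugin.Client {
        return hcplugin.NewClient(ClientConfig(m))
    }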
The .terraformrc file allows the user to override the executable location
for certain plugins. This mechanism allows us to retain that behavior for
a deprecation period by treating such executables as an unversioned
plugin for the given name and excluding all other candidates for that
name, thus ensuring that the override will "win".
Users must eventually transition away from using this mechanism and use
vendor directories instead, because these unversioned overrides will never
match for a provider referenced with non-zero version constraints.
These new methods ClientConfig and Client provide the bridge into the
main plugin infrastructure by configuring and instantiating (respectively)
a client object for the referenced plugin.
This stops short of getting the proxy object from the client since that
then requires referencing the interface for the plugin kind, which would
then create a dependency on the main terraform package which we'd rather
avoid here. It'll be the responsibility of the caller in the command
package to do the final wiring to get a provider instance out of a
provider plugin client.
For now this supports both our old and new directory layouts, so we can
preserve compatibility with existing configurations until a future major
release where support for the old paths will be removed.
Currently this allows both styles in all given directories, which means we
support more variants than we actually intend to, but this is accepted to
keep the interface simple so that we can easily remove the legacy exception
later. The documentation will reflect only the subset of
path layouts that we actually intend to support.
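A sketch of scanning both layouts in each search directory (the naming
scheme shown is illustrative):

    package discovery

    import (
        "path/filepath"
        "runtime"
    )

    // Layouts scanned, per directory:
    //   legacy:    <dir>/terraform-provider-<name>
    //   new-style: <dir>/<os>_<arch>/terraform-provider-<name>_v<version>
    func pluginPaths(dirs []string) []string {
        machineDir := runtime.GOOS + "_" + runtime.GOARCH
        var paths []string
        for _, dir := range dirs {
            for _, glob := range []string{
                filepath.Join(dir, "terraform-provider-*"),             // legacy
                filepath.Join(dir, machineDir, "terraform-provider-*"), // new-style
            } {
                if matches, err := filepath.Glob(glob); err == nil {
                    paths = append(paths, matches...)
                }
            }
        }
        return paths
    }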
With forthcoming support for versioned plugins we need to be able to
answer questions like what versions of plugins are currently installed,
what's the newest version of a given plugin available, etc.
PluginMetaSet gives us a building block for this sort of plugin version
wrangling.
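A minimal sketch of the building block (field and method names are
illustrative):

    package discovery

    // PluginMeta identifies one discovered plugin executable.
    type PluginMeta struct {
        Name    string // provider name, e.g. "aws"
        Version string // version string, or "" for legacy unversioned plugins
        Path    string // path to the executable
    }

    // PluginMetaSet is a set of discovered plugins, used to answer
    // questions like "which versions of X are installed?" or
    // "what is the newest installed version of X?".
    type PluginMetaSet map[PluginMeta]struct{}

    func (s PluginMetaSet) Add(m PluginMeta)      { s[m] = struct{}{} }
    func (s PluginMetaSet) Has(m PluginMeta) bool { _, ok := s[m]; return ok }

    // WithName returns the subset of plugins having the given name, which
    // can then be inspected for the versions available.
    func (s PluginMetaSet) WithName(name string) PluginMetaSet {
        out := make(PluginMetaSet)
        for m := range s {
            if m.Name == name {
                out.Add(m)
            }
        }
        return out
    }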
This is necessary since the TypeUnknown HIL handling in helper/schema
makes providers compiled WITHOUT TypeUnknown incompatible with the way
core handles unknown values.
This reverts commit c3a4cff133, reversing
changes made to 791a02e6e4.
This change requires plugin recompilation and we should hold off until a
minor release for that.
This is a breaking change to the ResourceProvider interface that adds the
new operations relating to data sources.
DataSources, ValidateDataSource, ReadDataDiff and ReadDataApply are the
data source equivalents of Resources, Validate, Diff and Apply (respectively)
for managed resources.
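Roughly, the additions to the interface look like this (signatures
approximate; ResourceConfig, InstanceInfo, InstanceDiff, InstanceState and
DataSource are the terraform package's existing types):

    type ResourceProvider interface {
        // ... existing Resources, Validate, Diff, Apply, etc. ...

        // DataSources returns the data sources this provider supports.
        DataSources() []DataSource

        // ValidateDataSource validates the configuration for the named
        // data source, returning warnings and errors.
        ValidateDataSource(name string, c *ResourceConfig) ([]string, []error)

        // ReadDataDiff produces a diff describing the pending read,
        // analogous to Diff for managed resources.
        ReadDataDiff(info *InstanceInfo, c *ResourceConfig) (*InstanceDiff, error)

        // ReadDataApply performs the read and returns the resulting state,
        // analogous to Apply for managed resources.
        ReadDataApply(info *InstanceInfo, d *InstanceDiff) (*InstanceState, error)
    }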
The diff/apply model seems at first glance a rather strange workflow for
read-only resources, but implementing data resources in this way allows them
to fit cleanly into the standard plan/apply lifecycle in cases where the
configuration contains computed arguments and thus the read must be deferred
until apply time.
Along with breaking the interface, we also fix up the plugin client/server
and helper/schema implementations of it, which covers all of the code paths
exercised when provider plugins are built with helper/schema. This would be a breaking
change for any provider plugin that directly implements the provider
interface, but no known plugins do this and it is not recommended.
At the helper/schema layer the implementer sees ReadDataApply as a "Read",
as opposed to "Create" or "Update" as in the managed resource Apply
implementation. The planning mechanics are handled entirely within
helper/schema, so that complexity is hidden from the provider implementation
itself.