* terraform: use hcl.MergeBodies instead of configs.MergeBodies for provider configuration
Previously, Terraform would return an error when the user supplied provider configuration via interactive input if the configuration provided on the command line was missing any required attributes - even if those attributes were already set in config.
That error came from configs.MergeBodies, which was designed for overriding valid configuration. It expects that the first ("base") body has all required attributes. However, in the case of interactive input for provider configuration, it is perfectly valid for either or both bodies to be missing required attributes, as long as the final body has all required attributes. hcl.MergeBodies works very similarly to configs.MergeBodies, with a key difference: it only checks that all required attributes are present after the two bodies are merged.
I've updated the existing test to use interactive input vars and a schema with all required attributes. This test failed before switching from configs.MergeBodies to hcl.MergeBodies.
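To illustrate the difference, here is a minimal sketch using the hcl package directly; the attribute names and schema are invented for the example:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclsyntax"
)

func main() {
	// Two partial provider configurations: neither body has every
	// required attribute on its own, but together they are complete.
	base, _ := hclsyntax.ParseConfig([]byte(`region = "us-east-1"`), "config.tf", hcl.InitialPos)
	input, _ := hclsyntax.ParseConfig([]byte(`token = "abc123"`), "input.tf", hcl.InitialPos)

	merged := hcl.MergeBodies([]hcl.Body{base.Body, input.Body})

	schema := &hcl.BodySchema{
		Attributes: []hcl.AttributeSchema{
			{Name: "region", Required: true},
			{Name: "token", Required: true},
		},
	}

	// Required attributes are checked only against the merged body, so
	// this succeeds even though each body is individually incomplete.
	_, diags := merged.Content(schema)
	fmt.Println("errors:", diags.HasErrors())
}
```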
* add a command package test that shows that we can still have providers with dynamic configuration + required + interactive input merging
This test failed when buildProviderConfig still used configs.MergeBodies instead of hcl.MergeBodies
This PR adds decoding for the upcoming "moved" blocks in configuration. This code is gated behind an experiment called EverythingIsAPlan, but the experiment is not registered as an active experiment, so it will never run (there is a test in place which will fail if the experiment is ever registered).
This also adds a new function to the Targetable interface, AddrType, to simplify comparing two addrs.Targetable.
There is some validation missing still: this does not (yet) descend into resources to see if the actual resource types are the same (I've put this off in part because we will eventually need the provider schema to verify aliased resources, so I suspect this validation will have to happen later on).
Previously, if any resources were found to have drifted, the JSON plan
output would include a drift entry for every resource in state. This
commit aligns the JSON plan output with the CLI UI, and only includes
those resources where the old value does not equal the new value, i.e.
where drift has been detected.
Also fixes a bug where the "address" field was missing from the drift
output, and adds some test coverage.
* command: new command, terraform add, generates resource templates
terraform add ADDRESS generates a resource configuration template with all required (and optionally optional) attributes set to null. It can also optionally pre-populate nonsensitive attributes with values from an existing resource of the same type in state (sensitive values will be populated with null and a comment indicating sensitivity).
* website: terraform add documentation
* Quoting filesystem path in scp command argument
* Adding proper shell quoting for scp commands
* Running go fmt
* Using a library for quoting shell commands
* Don't export quoteShell function
* jsonplan and jsonstate: include sensitive_values in state representations
A sensitive_values field has been added to the resource in state and planned values which is a map of all sensitive attributes with the values set to true.
It wasn't entirely clear to me if the values in state would suffice, or if we also need to consult the schema - I believe that this is sufficient for state files written since v0.15, and if that's incorrect or insufficient, I'll add in the provider schema check as well.
I also updated the documentation, and, since we've considered this before, bumped the FormatVersions for both jsonstate and jsonplan.
An unknown block represents a dynamic configuration block with an
unknown for_each value. We were not catching the case where a provider
modified this value unexpectedly, which would crash when comparing
NestingList blocks where the config value has no length.
Historically, we've used TFC's default run messages as a sort of dumping
ground for metadata about the run. We've recently decided to mostly stop
doing that, in favor of:
- Only specifying the run's source in the default message.
- Letting TFC itself handle the default messages.
Today, the remote backend explicitly sets a run message, overriding
any default that TFC might set. This commit removes that explicit message
so we can allow TFC to sort it out.
This shouldn't have any bad effect on TFE out in the wild, because it
has known how to set a default message for remote backend runs since late 2018.
* tools: remove terraform-bundle.
terraform-bundle is no longer supported in the main branch of terraform. Users can build terraform-bundle from terraform tagged v0.15 and older.
* add a README pointing users to the v0.15 branch
Previously we had a separation between ModuleSourceRemote and
ModulePackage as a way to represent within the type system that there's an
important difference between a module source address and a package address,
because module packages often contain multiple modules and so a
ModuleSourceRemote combines a ModulePackage with a subdirectory to
represent one specific module.
This commit applies that same strategy to ModuleSourceRegistry, creating
a new type ModuleRegistryPackage to represent the different sort of
package that we use for registry modules. Again, the main goal here is
to try to reflect the conceptual modelling more directly in the type
system so that we can more easily verify that uses of these different
address types are correct.
To make use of that, I've also lightly reworked initwd's module installer
to use addrs.ModuleRegistryPackage directly, instead of a string
representation thereof. This was in response to some earlier commits where
I found myself accidentally mixing up package addresses and source
addresses in the installRegistryModule method; with this new organization
those bugs would've been caught at compile time, rather than only at
unit and integration testing time.
While in the area anyway, I also took this opportunity to fix some
historically confusing names of fields in initwd.ModuleInstaller, to be
clearer that they are only for registry packages and not for all module
source address types.
We have some tests in this package that install real modules from the real
registry at registry.terraform.io. Those tests were written at an earlier
time when the registry's behavior was to return the URL of a .tar.gz
archive generated automatically by GitHub, which included an extra level
of subdirectory that would then be reflected in the paths to the local
copies of these modules.
GitHub started rate limiting those tar archives in a way that Terraform's
module installer couldn't authenticate to, and so the registry switched
to returning direct git repository URLs instead, which don't have that
extra subdirectory and so the local paths on disk now end up being a
little different, because the actual module directories are at a different
subdirectory of the package.
Now that we (in the previous commit) refactored how we deal with module
sources to do the parsing at config loading time rather than at module
installation time, we can expose a method to centralize the determination
for whether a particular module call (and its resulting Config object)
enters a new external package.
We don't use this for anything yet, but in later commits we will use this
for some cross-module features that are available only for modules
belonging to the same package, because we assume that modules grouped
together in a package can change together and thus it's okay to permit a
little more coupling of internal details in that case, which would not
be appropriate between modules that are versioned separately.
It's been a long while since we gave close attention to the codepaths for
module source address parsing and external module package installation.
Due to their age, these codepaths often diverged from our modern practices
such as representing address types in the addrs package, and encapsulating
package installation details only in a particular location.
In particular, this refactor makes source address parsing a separate step
from module installation, which therefore makes the result of that parsing
available to other Terraform subsystems which work with the configuration
representation objects.
This also presented the opportunity to better encapsulate our use of
go-getter into a new package "getmodules" (echoing "getproviders"), which
is intended to be the only part of Terraform that directly interacts with
go-getter.
This is largely just a refactor of the existing functionality into a new
code organization, but there is one notable change in behavior here: the
source address parsing now happens during configuration loading rather
than module installation, which may cause errors about invalid addresses
to be returned in different situations than before. That counts as
backward compatible because we only promise to remain compatible with
configurations that are _valid_, which means that they can be initialized,
planned, and applied without any errors. This doesn't introduce any new
error cases, and instead just makes a pre-existing error case be detected
earlier.
Our module registry client is still using its own special module address
type from registry/regsrc for now, with a small shim from the new
addrs.ModuleSourceRegistry type. Hopefully in a later commit we'll also
rework the registry client to work with the new address type, but this
commit is already big enough as it is.
We've previously had the syntax and representation of module source
addresses pretty sprawled around the codebase and intermingled with other
systems such as the module installer.
I've created a factored-out implementation here with the intention of
enabling some later refactoring to centralize the address parsing as part
of configuration decoding, and thus in turn allow the parsing result to
be seen by other parts of Terraform that interact with configuration
objects, so that they can more robustly handle differences between local
and remote modules, and can present module addresses consistently in the
UI.
This new package aims to encapsulate all of our interactions with
go-getter to fetch remote module packages, to ensure that the rest of
Terraform will only use the small subset of go-getter functionality that
our modern module installer uses.
In older versions of Terraform, go-getter was the entire implementation
of module installation, but along the way we found that several aspects of
its design are a poor fit for Terraform's needs, and so now we're using it
as just an implementation detail of Terraform's handling of remote module
packages only, hiding it behind this wrapper API which exposes only
the services that our module installer needs.
This new package isn't actually used yet, but in a later commit we will
change all of the other callers to go-getter to only work indirectly
through this package, so that this will be the only package that actually
imports the go-getter packages.
As the comment notes, this hostname is the default for provider source
addresses. We'll shortly be adding some address types to represent module
source addresses too, and so we'll also have DefaultModuleRegistryHost
for that situation.
(They'll actually both contain the same hostname, but that's a
coincidence rather than a requirement.)
When performing state migration to a remote backend target, Terraform
may fail due to mismatched remote and local Terraform versions. Here we
add the `-ignore-remote-version` flag to allow users to ignore this
version check when necessary.
When migrating multiple local workspaces to a remote backend target
using the `prefix` argument, we need to perform the version check
against all existing workspaces returned by the `Workspaces` method.
Failing to do so will result in a version check error.
Our module installer has a somewhat-informal idea of a "module package",
which is some external thing we can go fetch in order to add one or more
modules to the current configuration. Our documentation doesn't talk much
about it because most users seem to have found the distinction between
external and local modules pretty intuitive without us throwing a lot of
funny terminology at them, but there are some situations where the
distinction between a module and a module package is material to the
end-user.
One such situation is when using an absolute rather than relative
filesystem path: we treat that as an external package in order to make the
resulting working directory theoretically "portable" (although users can
do various other things to defeat that), and so Terraform will copy the
directory into .terraform/modules in the same way as it would download and
extract a remote archive package or clone a git repository.
A consequence of this, though, is that any relative paths called from
inside a module loaded from an absolute path will fail if they try to
traverse upward into the parent directory, because at runtime we're
actually running from a copy of the directory that's been taken out of
its original context.
A similar sort of situation can occur in a truly remote module package if
the author accidentally writes a "../" source path that traverses up out
of the package root, and so this commit introduces a special error message
for both situations that tries to be a bit clearer about there being a
package boundary and uses that to explain why installation failed.
We would ideally have made escaping local references like that illegal in
the first place, but sadly we did not and so when we rebuilt the module
installer for Terraform v0.12 we ended up keeping the previous behavior of
just trying it and letting it succeed if there happened to somehow be a
matching directory at the given path, in order to remain compatible with
situations that had worked by coincidence rather than intention. For that
same reason, I've implemented this as a replacement error message we will
return only if local module installation was going to fail anyway, and
thus it only modifies the error message for some existing error situations
rather than introducing new error situations.
This also includes some light updates to the documentation to say a little
more about how Terraform treats absolute paths, though aiming not to get
too much into the weeds about module packages since it's something that
most users can get away with never knowing.
When returning from the context method, a deferred function call checked
for error diagnostics in the `diags` variable, and unlocked the state if
any exist. This means that we need to be extra careful to mutate that
variable when returning errors, rather than returning a different set of
diags in the same position.
Previously this would result in an invalid plan file causing a lock to
become stuck.
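Here is a hypothetical, simplified sketch of the pattern (the real code uses tfdiags.Diagnostics rather than a plain error slice):

```go
// contextWithLock illustrates the pitfall: the deferred unlock only
// fires if errors were recorded in the same diags variable it closes
// over, so errors must be appended there, not just returned.
func contextWithLock(lock func() (unlock func(), err error), run func() error) error {
	unlock, err := lock()
	if err != nil {
		return err
	}

	var diags []error
	defer func() {
		// Only release the lock on error; on success the caller keeps it.
		if len(diags) > 0 {
			unlock()
		}
	}()

	if err := run(); err != nil {
		// Wrong: `return err` alone would leave the lock held forever.
		// Right: record the error where the deferred check can see it.
		diags = append(diags, err)
		return err
	}
	return nil
}
```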
Most legacy provider resources do not implement any import functionality
other than returning an empty object with the given ID, relying on core
to later read that resource and obtain the complete state. Because of
this, we need to check the response from ReadResource for a null value,
and use that as an indication that the import ID was invalid.
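A minimal sketch of that check, using a hypothetical helper name; the real code works with a providers.ReadResourceResponse:

```go
import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// checkImportedObject is a hypothetical helper showing the idea: legacy
// providers return a mostly-empty object for any import ID, so a null
// state from the follow-up ReadResource call is the signal that the ID
// didn't match a real object.
func checkImportedObject(newState cty.Value, id string) error {
	if newState.IsNull() {
		return fmt.Errorf("import %q: the provider returned no state, so the ID is invalid", id)
	}
	return nil
}
```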
This was dead code, and there is no clear way to retrieve this
information, as we currently only derive the drift information as part
of the rendering process.
The schemas for the provider and the resources didn't match, so the changes
were not going to be rendered at all.
Add a test which contains a deposed resource.
* getproviders ParsePlatform: add check for invalid platform strings with too many parts
The existing logic would not catch things like a platform string containing multiple underscores. I've added an explicit check for exactly 2 parts and some basic tests to prove it.
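Roughly, the stricter check looks like this sketch (names mirror getproviders, but the details are assumptions):

```go
import (
	"fmt"
	"strings"
)

// Platform mirrors the getproviders type for this sketch.
type Platform struct{ OS, Arch string }

// parsePlatform is a hedged sketch of the stricter check: a platform
// string must be exactly an OS and an architecture joined by a single
// underscore, like "linux_amd64".
func parsePlatform(str string) (Platform, error) {
	parts := strings.Split(str, "_")
	if len(parts) != 2 {
		return Platform{}, fmt.Errorf("%q is not a valid platform: must be an OS and an architecture separated by a single underscore, like %q", str, "linux_amd64")
	}
	return Platform{OS: parts[0], Arch: parts[1]}, nil
}
```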
* command/providers-lock: add tests
This commit adds some simple tests for the providers lock command. While adding this test I noticed that there was a mis-copied error message, so I replaced that with a more specific message. I also added .terraform.lock.hcl to our gitignore for hopefully obvious reasons.
getproviders.ParsePlatform: use parts in place of slice range, since it's available
* command: Providers mirror tests
The providers mirror command is already well tested in e2e tests, so this includes only the most absolutely basic test case.
Do not convert provisioner diagnostics to errors so that users can get
context from provisioner failures.
Return diagnostics from the builtin provisioners that can be annotated
with configuration context and instance addresses.
When an attribute value changes in sensitivity, we previously rendered
this in the diff with a `~` update action and a note about the
consequence of the sensitivity change. Since we also suppress the
attribute value, this made it impossible to know if the underlying value
was changing, too, which has significant consequences on the meaning of
the plan.
This commit adds an equality check of the old/new underlying values. If
these are unchanged, we add a note to the sensitivity warning to clarify
that only sensitivity is changing.
This is part of a general effort to move all of Terraform's non-library
package surface under internal in order to reinforce that these are for
internal use within Terraform only.
If you were previously importing packages under this prefix into an
external codebase, you could pin to an earlier release tag as an interim
solution until you've made a plan to achieve the same functionality some
other way.
Once a plugin process is started, go-plugin will redirect the stdout and
stderr stream through a grpc service and provide those streams to the
client. This is rarely used, as it is prone to failing with races
because those same file descriptors are needed for the initial handshake
and logging setup, but data may be accidentally sent to these
nonetheless.
The usual culprits are stray `fmt.Print` usage where logging was
intended, or the configuration of a logger after the os.Stderr file
descriptor was replaced by go-plugin. These situations are very hard for
provider developers to debug since the data is discarded entirely.
While there may be improvements to be made in the go-plugin package to
configure this behavior, in the meantime we can add a simple monitoring
io.Writer to the streams which will surface the data as warnings in the
logs instead of writing it to `io.Discard`.
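A hedged sketch of such a monitoring writer, with invented names:

```go
import (
	hclog "github.com/hashicorp/go-hclog"
)

// logStreamer is a hypothetical monitoring writer: anything a plugin
// writes to the redirected stdout/stderr streams is surfaced as a
// warning instead of vanishing into io.Discard.
type logStreamer struct {
	logger hclog.Logger
	name   string // "stdout" or "stderr"
}

func (l *logStreamer) Write(p []byte) (int, error) {
	l.logger.Warn("unexpected data on plugin stream", "stream", l.name, "data", string(p))
	return len(p), nil
}
```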
This is not currently a supported interface, but we plan to later release
tool(s) that consume the more dependable parts of it, separately from
Terraform CLI itself.
* Add helper suggestion when failed registry err
When someone has a failed registry error on init, remind them that
they should have required_providers in every module
* Give suggestion for a provider based on reqs
Suggest another provider on a registry error, from the list of
requirements we have on init. This skips the legacy lookup
process if there is a similar provider existing in requirements.
Fixes #27506
Add a new flag `-lockfile=readonly` to `terraform init`.
It would be useful to allow us to suppress dependency lockfile changes
explicitly.
The type of the `-lockfile` flag is string rather than bool, leaving
room for future extensions to other behavior variants.
The readonly mode suppresses lockfile changes, but should verify
checksums against the information already recorded. It should conflict
with the `-upgrade` flag.
Note: In the original use-case described in #27506, I would like to
suppress adding zh hashes, but the test code here suppresses adding h1
hashes because it's easier for testing.
Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>
* providers.Interface: rename ValidateDataSourceConfig to
ValidateDataResourceConfig
This PR came about after renaming ValidateResourceTypeConfig to
ValidateResourceConfig: I now understand that we'd called it the former
instead of the latter to indicate that the function wasn't necessarily
operating on a resource that actually exists. A possibly-more-accurate
renaming of both functions might then be ValidateManagedResourceConfig
and ValidateDataResourceConfig.
The next commit will update the protocol (v6 only) as well; these are in
separate commits for reviewers and will get squashed together before
merging.
* extend renaming to protov6
This is just a prototype to gather some feedback in our ongoing research
on integration testing of Terraform modules. The hope is that by having a
command integrated into Terraform itself it'll be easier for interested
module authors to give it a try, and also easier for us to iterate quickly
based on feedback without having to coordinate across multiple codebases.
Everything about this is subject to change even in future patch releases.
Since it's a CLI command rather than a configuration language feature it's
not using the language experiments mechanism, but generates a warning
similar to the one language experiments generate in order to be clear that
backward compatibility is not guaranteed.
As part of ongoing research into Terraform testing we'd like to use an
experimental feature to validate our current understanding that expressing
tests as part of the Terraform language, as opposed to in some other
language run alongside, is a good and viable way to write practical
module integration tests.
This initial experimental incarnation of that idea is implemented as a
provider, just because that's an easier extension point for research
purposes than a first-class language feature would be. Whether this would
ultimately emerge as a provider similar to this or as custom language
constructs will be a matter for future research, if this first
experiment confirms that tests written in the Terraform language are the
best direction to take.
The previous incarnation of this experiment was an externally-developed
provider apparentlymart/testing, listed on the Terraform Registry. That
helped with showing that there are some useful tests that we can write
in the Terraform language, but integrating such a provider into Terraform
will allow us to make use of it in the also-experimental "terraform test"
command, which will follow in subsequent commits, to see how this might
fit into a development workflow.
* Add support for plugin protocol v6
This PR turns on support for plugin protocol v6. A provider can
advertise itself as supporting protocol version 6 and terraform will
use the correct client.
Todo:
The "unmanaged" providers functionality does not support protocol
version, so at the moment terraform will continue to assume that
"unmanaged" providers are on protocol v5. This will require some
upstream work on go-plugin (I believe).
I would like to convert the builtin providers to use protocol v6 in a
future PR; however it is not necessary until we remove protocol v5.
* add e2e test for using both plugin protocol versions
- copied grpcwrap and made a version that returns protocol v6 provider
- copied the test provider, provider-simple, and made a version that's
using protocol v6 with the above func
- added an e2etest
Remove the README that had old user-facing information, replacing
it with a doc.go that describes the package and points to the
plugin SDK for external consumers.
* providers.Interface: huge renamification
This commit renames a handful of functions in the providers.Interface to
match changes made in protocol v6. The following commit implements this
change across the rest of the codebase; I put this in a separate commit
for ease of reviewing and will squash these together when merging.
One noteworthy detail: protocol v6 removes the config from the
ValidateProviderConfigResponse, since it's never been used. I chose to
leave that in place in the interface until we deprecate support for
protocol v5 entirely.
Note that none of these changes impact current providers using protocol
v5; the protocol is unchanged. Only the translation layer between the
proto and terraform has changed.
It's pretty common to want to apply the various fmt.Fprint... functions
to our two output streams, and so to make that much less noisy at the
callsite here we have a small number of very thin wrappers around the
underlying fmt package functionality.
Although we're aiming to not have too much abstraction in this "terminal"
package, this seems justified in that it is only a very thin wrapper
around functionality that most Go programmers are already familiar with,
and so the risk of this causing any surprises is low and the improvement
to readability of callers seems worth it.
This is to allow convenient testing of functions that are designed to work
directly with *terminal.Streams or the individual stream objects inside.
Because the InputStream and OutputStream APIs expose directly an *os.File,
this does some extra work to set up OS-level pipes so we can capture the
output into local buffers to make test assertions against. The idea here
is to keep the tricky stuff we need for testing confined to the test
codepaths, so that the "real" codepaths don't end up needing to work
around abstractions that are otherwise unnecessary.
This is the first commit for plugin protocol v6. This is currently
unused (dead) code; future commits will add the necessary conversion
packages, extend configschema, and modify the providers.Interface.
The new plugin protocol includes the following changes:
- A new field has been added to Attribute: NestedType. This will be the
key new feature in plugin protocol v6
- Several messages were renamed for consistency with the verb-noun
pattern seen in _most_ messages.
- The prepared_config has been removed from PrepareProviderConfig
(renamed ValidateProviderConfig), as it has never been used.
- The provisioner service has been removed entirely. This has no impact
on built-in provisioners. 3rd party provisioners are not supported by
the SDK and are not included in this protocol at all.
This is a helper package that creates a very thin abstraction over
terminal setup, with the main goal being to deal with all of the extra
setup we need to do in order to get a UTF-8-supporting virtual terminal
on a Windows system.
Previously we were expecting that the *hcl.File would always be non-nil,
even in error cases. That isn't always true, so now we'll be more robust
about it and explicitly return an empty locks object in that case, along
with the error diagnostics.
In particular this avoids a panic in a strange situation where the user
created a directory where the lock file would normally go. There's no
meaning to such a directory, so it would always be a mistake and so now
we'll return an error message about it, rather than panicking as before.
The error message for the situation where the lock file is a directory is
currently not very specific, but since it's HCL responsible for generating
that message we can't really fix that at this layer. Perhaps in future
we can change HCL to have a specialized error message for that particular
error situation, but for the sake of this commit the goal is only to
stop the panic and return a normal error message.
If a user forgets to specify the source address for a provider, Terraform
will assume they meant a provider in the registry.terraform.io/hashicorp/
namespace. If that ultimately doesn't exist, we'll now try to see if
there's some other provider source address recorded in the registry's
legacy provider lookup table, and suggest it if so.
The error message here is a terse one addressed primarily to folks who are
already somewhat familiar with provider source addresses and how to
specify them. Terraform v0.13 had a more elaborate version of this error
message which directed the user to try the v0.13 automatic upgrade tool,
but we no longer have that available in v0.14 and later so the user must
make the fix themselves.
The temporary directory on some systems (most notably macOS) contains
symlinks, which would not be recorded by the installer. In order to make
these paths comparable in the tests we need to eval the symlinks in
the paths before giving them to the installer.
When logging is turned on, panicwrap will still see provider crashes and
falsely report them as core crashes, hiding the formatted provider
error. We can trick panicwrap by slightly obfuscating the error line.
When rendering a set of version constraints to a string, we normalize
partially-constrained versions. This means converting a version
like 2.68.* to 2.68.0.
Prior to this commit, this normalization was done after deduplication.
This could result in a version constraints string with duplicate
entries, if multiple partially-constrained versions are equivalent. This
commit fixes this by normalizing before deduplicating and sorting.
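The fixed ordering, as a hypothetical sketch with the normalization step abstracted away:

```go
import "sort"

// renderConstraints normalizes each term first (e.g. "2.68.*" becomes
// "2.68.0"), then deduplicates and sorts, so equivalent partial
// constraints collapse to a single entry in the rendered string.
func renderConstraints(constraints []string, normalize func(string) string) []string {
	seen := make(map[string]bool)
	var terms []string
	for _, c := range constraints {
		n := normalize(c)
		if !seen[n] {
			seen[n] = true
			terms = append(terms, n)
		}
	}
	sort.Strings(terms)
	return terms
}
```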
Previously we were only verifying locked hashes for local archive zip
files, but if we have non-ziphash hashes available then we can and should
also verify that a local directory matches at least one of them.
This does mean that folks using filesystem mirrors but yet also running
Terraform across multiple platforms will need to take some extra care to
ensure the hashes pass on all relevant platforms, which could mean using
"terraform providers lock" to pre-seed their lock files with hashes across
all platforms, or could mean using the "packed" directory layout for the
filesystem mirror so that Terraform will end up in the install-from-archive
codepath instead of this install-from-directory codepath, and can thus
verify ziphash too.
(There's no additional documentation about the above here because there's
already general information about this in the lock file documentation
due to some similar -- though not identical -- situations with network
mirrors.)
We previously had some tests for some happy paths and a few specific
failures into an empty directory with no existing locks, but we didn't
have tests for the installer respecting existing lock file entries.
This is a start on a more exhaustive set of tests for the installer,
aiming to visit as many of the possible codepaths as we can reasonably
test using this mocking strategy. (Some other codepaths require different
underlying source implementations, etc, so we'll have to visit those in
other tests separately.)
This won't be a typical usage pattern for normal code, but will be useful
for tests that need to work with locks as input so that they don't need to
write out a temporary file on disk just to read it back in immediately.
An earlier commit made this remove duplicates, which set the precedent
that this function is trying to canonically represent the _meaning_ of
the version constraints rather than exactly how they were expressed in
the configuration.
Continuing in that vein, now we'll also apply a consistent (though perhaps
often rather arbitrary) ordering to the terms, so that it doesn't change
due to irrelevant details like declarations being written in a different
order in the configuration.
The ordering here is intended to be reasonably intuitive for simple cases,
but constraint strings with many different constraints are hard to
interpret no matter how we order them, so the main goal is consistency:
those watching how the constraints change over time (e.g. in logs of
Terraform output, or in the dependency lock file) will see fewer noisy
changes that don't actually mean anything.
Create a logger that will record any apparent crash output for later
processing.
If the cli command returns with a non-zero exit status, check for any
recorded crashes and add those to the output.
Now that hclog can independently set levels on related loggers, we can
separate the log levels for different subsystems in terraform.
This adds the new environment variables, `TF_LOG_CORE` and
`TF_LOG_PROVIDER`, which each take the same set of log level arguments,
and only applies to logs from that subsystem. This means that setting
`TF_LOG_CORE=level` will not show logs from providers, and
`TF_LOG_PROVIDER=level` will not show logs from core. The behavior of
`TF_LOG` alone does not change.
While it is not necessarily needed since the default is to disable logs,
there is also a new level argument of `off`, which reflects the
associated level in hclog.
A set of version constraints can contain duplicates. This can happen if
multiple identical constraints are specified throughout a configuration.
When rendering the set, it is confusing to display redundant
constraints. This commit changes the string renderer to only show the
first instance of a given constraint, and adds unit tests for this
function to cover this change.
This also fixes a bug with the locks file generation: previously, a
configuration with redundant constraints would result in this error on
second init:
  Error: Invalid provider version constraints

    on .terraform.lock.hcl line 6:
    (source code not available)

  The recorded version constraints for provider
  registry.terraform.io/hashicorp/random must be written in normalized form:
  "3.0.0".
Use a separate log sink to always capture trace logs for the panicwrap
handler to write out in a crash log.
This requires creating a log file in the outer process and passing that
path to the child process to log to.
Previously this codepath was generating a confusing message in the absence
of any symlinks, because filepath.EvalSymlinks returns a successful result
if the target isn't a symlink.
Now we'll emit the log line only if filepath.EvalSymlinks returns a
result that's different in a way that isn't purely syntactic (which
filepath.Clean would "fix").
The new message is a little more generic because technically we've not
actually ensured that a difference here was caused by a symlink and so
we shouldn't over-promise and generate something potentially misleading.
ioutil.TempFile has a special case where an empty string for its dir
argument is interpreted as a request to automatically look up the system
temporary directory, which is commonly /tmp.
We don't want that behavior here because we're specifically trying to
create the temporary file in the same directory as the file we're hoping
to replace. If the file gets created in /tmp then it might be on a
different device and thus the later atomic rename won't work.
Instead, we'll add our own special case to explicitly use "." when the
given filename is in the current working directory. That overrides the
special automatic behavior of ioutil.TempFile and thus forces the
behavior we need.
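As a minimal sketch of that special case:

```go
import (
	"io/ioutil"
	"os"
	"path/filepath"
)

// tempFileBeside is a minimal sketch of the special case: the temp file
// must live next to the target so the final rename stays on one device.
func tempFileBeside(filename string) (*os.File, error) {
	dir, file := filepath.Split(filename)
	if dir == "" {
		// ioutil.TempFile treats "" as "use the system temp dir" (often
		// /tmp), which may be a different device; force the cwd instead.
		dir = "."
	}
	return ioutil.TempFile(dir, file)
}
```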
This hadn't previously mattered for earlier callers of this code because
they were creating files in subdirectories, but this codepath was failing
for the dependency lock file due to it always being created directly
in the current working directory.
Unfortunately since this is a picky implementation detail I couldn't find
a good way to write a unit test for it without considerable refactoring.
Instead, I verified manually that the temporary filename wasn't in /tmp on
my Linux system, and hope that the comment inline will explain this
situation well enough to avoid an accidental regression in future
maintenance.
If a configuration requires a partial provider version (with some parts
unspecified), Terraform considers this as a constrained-to-zero version.
For example, a version constraint of 1.2 will result in an attempt to
install version 1.2.0, even if 1.2.1 is available.
When writing the dependency locks file, we previously would write 1.2.*,
as this is the in-memory representation of 1.2. This would then cause an
error on re-reading the locks file, as this is not a valid constraint
format.
Instead, we now explicitly convert the constraint to its zero-filled
representation before writing the locks file. This ensures that it
correctly round-trips.
Because this change is made in getproviders.VersionConstraintsString, it
also affects the output of the providers sub-command.
Use a single log writer instance for all std library logging.
Setup the std log writer in the logging package, and remove boilerplate
from test packages.
In this case, "atomic" means that there will be no situation where the
file contains only part of the newContent data, and therefore other
software monitoring the file for changes (using a mechanism like inotify)
won't encounter a truncated file.
It does _not_ mean that there can't be existing filehandles open against
the old version of the file. On Windows systems the write will fail in
that case, but on Unix systems the write will typically succeed but leave
the existing filehandles still pointing at the old version of the file.
They'll need to reopen the file in order to see the new content.
This originated in the cliconfig code to write out credentials files. The
Windows implementation of this in particular was quite onerous to get
right because it needs a very specific sequence of operations to avoid
running into exclusive file locks, and so by factoring this out with
only cosmetic modification we can avoid repeating all of that engineering
effort for other atomic file writing use-cases.
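A hedged, Unix-flavored sketch of the overall pattern, reusing the same sibling-temp-file trick as the earlier sketch:

```go
import (
	"io/ioutil"
	"os"
	"path/filepath"
)

// replaceFileAtomic writes the full content to a sibling temp file and
// then renames it over the target, so watchers never observe a
// truncated file.
func replaceFileAtomic(filename string, newContent []byte) error {
	dir, file := filepath.Split(filename)
	if dir == "" {
		dir = "." // keep the temp file on the same filesystem as the target
	}
	tmp, err := ioutil.TempFile(dir, file)
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // harmless no-op after a successful rename

	if _, err := tmp.Write(newContent); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	return os.Rename(tmp.Name(), filename)
}
```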
This builds on an experimental feature in the underlying cty library which
allows marking specific attributes of an object type constraint as
optional, which in turn modifies how the cty conversion package handles
missing attributes in a source value: it will silently substitute a null
value of the appropriate type rather than returning an error.
In order to implement the experiment this commit temporarily forks the
HCL typeexpr extension package into a local internal/typeexpr package,
where I've extended the type constraint syntax to allow annotating object
type attributes as being optional using the HCL function call syntax.
If the experiment is successful -- both at the Terraform layer and in
the underlying cty library -- we'll likely send these modifications to
upstream HCL so that other HCL-based languages can potentially benefit
from this new capability.
Because it's experimental, the optional attribute modifier is allowed only
with an explicit opt-in to the module_variable_optional_attrs experiment.
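For illustration, a minimal sketch of the underlying cty behavior, assuming a go-cty version that includes the experimental optional-attributes API:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/convert"
)

func main() {
	// A type constraint like object({host = string, port = optional(number)})
	// under the module_variable_optional_attrs experiment, expressed via
	// cty's experimental optional-attributes API.
	ty := cty.ObjectWithOptionalAttrs(map[string]cty.Type{
		"host": cty.String,
		"port": cty.Number,
	}, []string{"port"})

	given := cty.ObjectVal(map[string]cty.Value{
		"host": cty.StringVal("example.com"),
	})

	// Conversion fills the missing optional attribute with a typed null
	// instead of returning a missing-attribute error.
	got, err := convert.Convert(given, ty)
	if err != nil {
		panic(err)
	}
	fmt.Println(got.GetAttr("port").IsNull()) // true: filled with a null number
}
```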
Previously we were just letting hclwrite do its default formatting
behavior here. The current behavior there isn't ideal anyway -- it puts
big data structures all on one line -- but even ignoring that our goal
for this file format is to keep things in a highly-normalized shape so
that diffs against the file are clear and easy to read.
With that in mind, here we directly control how we write that value into
the file, which means that later changes to hclwrite's list/set
presentation won't affect it, regardless of what form they take.
This probably isn't the best UI we could do here, but it's a placeholder
for now just to avoid making it seem like we're ignoring the lock file
and checking for new versions anyway.
This changes the approach used by the provider installer to remember
between runs which selections it has previously made, using the lock file
format implemented in internal/depsfile.
This means that version constraints in the configuration are considered
only for providers we've not seen before or when -upgrade mode is active.
These are helper functions to give the installation UI some hints about
whether the lock file has changed so that it can in turn give the user
advice about it. The UI-layer callers of these will follow in a later
commit.
helper/copy CopyDir was used heavily in tests. It differs from
internal/copydir in a few ways, the main one being that it creates the
dst directory while the internal version expected the dst to exist
(there are other differences, which is why I did not just switch tests
to using internal's CopyDir).
I moved the CopyDir func from helper/copy into command_test.go; I could
also have moved it into internal/copy and named it something like
CreateDirAndCopy so if that seems like a better option please let me
know.
helper/copy/CopyFile was used in a couple of spots so I moved it into
internal, at which point I thought it made more sense to rename the
package copy (instead of copydir).
There's also a `go mod tidy` included.
This new-ish package ended up under "helper" during the 0.12 cycle for
want of some other place to put it, but in retrospect that was an odd
choice because the "helper/" tree is otherwise a bunch of legacy code from
when the SDK lived in this repository.
Here we move it over into the "internal" directory just to distance it
from the guidance of not using "helper/" packages in new projects;
didyoumean is a package we actively use as part of error message hints.
We no longer need to support 0.12-and-earlier-style provider addresses
because users should've upgraded their existing configurations and states
on Terraform 0.13 already.
For now this is only checked in the "init" command, because various test
shims are still relying on the idea of legacy providers in the core layer.
However, rejecting these during init is sufficient grounds to avoid
supporting legacy provider addresses in the new dependency lock file
format, and thus sets the stage for a more severe removal of legacy
provider support in a later commit.
In earlier commits we started to make the installation codepath
context-aware so that it could be canceled in the event of a SIGINT, but
we didn't complete wiring that through the API of the getproviders
package.
Here we make the getproviders.Source interface methods, along with some
other functions that can make network requests, take a context.Context
argument and act appropriately if that context is cancelled.
The main providercache.Installer.EnsureProviderVersions method now also
has some context-awareness so that it can abort its work early if its
context reports any sort of error. That avoids waiting for the process
to wind through all of the remaining iterations of the various loops,
logging each request failure separately, and instead returns just
a single aggregate "canceled" error.
We can then set things up in the "terraform init" and
"terraform providers mirror" commands so that the context will be
cancelled if we get an interrupt signal, allowing provider installation
to abort early while still atomically completing any local-side effects
that may have started.
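The early-abort behavior, as a hypothetical simplified sketch:

```go
import (
	"context"
	"fmt"
)

// ensureAll illustrates the aggregate-cancel behavior: rather than
// letting every remaining request fail and be logged separately, each
// iteration checks the context and bails out once.
func ensureAll(ctx context.Context, providers []string, installOne func(context.Context, string) error) error {
	for _, p := range providers {
		if err := ctx.Err(); err != nil {
			return fmt.Errorf("provider installation was canceled: %w", err)
		}
		if err := installOne(ctx, p); err != nil {
			return err
		}
	}
	return nil
}
```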
As we continue iterating towards saving valid hashes for a package in a
depsfile lock file after installation and verifying them on future
installation, this prepares getproviders for the possibility of having
multiple valid hashes per package.
This will arise in future commits for two reasons:
- We will need to support both the legacy "zip hash" hashing scheme and
the new-style content-based hashing scheme because currently the
registry protocol is only able to produce the legacy scheme, but our
other installation sources prefer the content-based scheme. Therefore
packages will typically have a mixture of hashes of both types.
- Installing from an upstream registry will save the hashes for the
packages across all supported platforms, rather than just the current
platform, and we'll consider all of those valid for future installation
if we see both successful matching of the current platform checksum and
a signature verification for the checksums file as a whole.
This also includes some more preparation for the second case above in that
signatureAuthentication now supports AcceptableHashes and returns all of
the zip-based hashes it can find in the checksums file. This is a bit of
an abstraction leak because previously that authenticator considered its
"document" to just be opaque bytes, but we want to make sure that we can
only end up trusting _all_ of the hashes if we've verified that the
document is signed. Hopefully we'll make this better in a future commit
with some refactoring, but that's deferred for now in order to minimize
disruption to existing codepaths while we work towards a provider locking
MVP.
The logic for what constitutes a valid hash and how different hash schemes
are represented was starting to get sprawled over many different files and
packages.
Consistently with other cases where we've used named types to gather the
definition of a particular string into a single place and have the Go
compiler help us use it properly, this introduces both getproviders.Hash
representing a hash value and getproviders.HashScheme representing the
idea of a particular hash scheme.
Most of this changeset is updating existing uses of primitive strings to
uses of getproviders.Hash. The new type definitions are in
internal/getproviders/hash.go.
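A hedged sketch of the shape of those types; the exact constants and method are assumptions:

```go
import "strings"

// HashScheme and Hash loosely follow the shape of
// internal/getproviders/hash.go.
type HashScheme string

const (
	HashScheme1   HashScheme = "h1:" // new content-based scheme
	HashSchemeZip HashScheme = "zh:" // legacy zip-archive scheme
)

type Hash string

// Scheme returns the scheme portion of the hash, or "" if the string
// doesn't have the expected "scheme:value" form.
func (h Hash) Scheme() HashScheme {
	if idx := strings.Index(string(h), ":"); idx >= 0 {
		return HashScheme(h[:idx+1])
	}
	return ""
}
```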
Although origin registries return specific [filename, hash] pairs, our
various different installation methods can't produce a structured mapping
from platform to hash without breaking changes.
Therefore, as a compromise, we'll continue to do platform-specific checks
against upstream data in the cases where that's possible (installation
from origin registry or network mirror) but we'll treat the lock file as
just a flat set of equally-valid hashes, at least one of which must match
after we've completed whatever checks we've made against the
upstream-provided checksums/signatures.
This includes only the minimal internal/getproviders updates required to
make this compile. A subsequent commit will update that package to
actually support the idea of verifying against multiple hashes.
The "acceptable hashes" for a package is a set of hashes that the upstream
source considers to be good hashes for checking whether future installs
of the same provider version are considered to match this one.
Because the acceptable hashes are a package authentication concern and
they already need to be known (at least in part) to implement the
authenticators, here we add AcceptableHashes as an optional extra method
that an authenticator can implement.
Because these are hashes chosen by the upstream system, the caller must
make its own determination about their trustworthiness. The result of
authentication is likely to be an input to that, for example by
distrusting hashes produced by an authenticator that succeeds but doesn't
report having validated anything.
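A hedged sketch of that optional-method pattern; the interface names only loosely mirror getproviders, and Hash is as in the earlier sketch:

```go
type Hash string

type PackageAuthentication interface {
	AuthenticatePackage(localLocation string) error
}

// PackageAuthenticationHashes is the optional extension an
// authenticator can implement.
type PackageAuthenticationHashes interface {
	PackageAuthentication
	AcceptableHashes() []Hash
}

func acceptableHashes(auth PackageAuthentication) []Hash {
	if withHashes, ok := auth.(PackageAuthenticationHashes); ok {
		return withHashes.AcceptableHashes()
	}
	// The authenticator doesn't report hashes; the caller must decide
	// how much to trust an empty result.
	return nil
}
```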
This is the pre-existing hashing scheme that was initially built for
releases.hashicorp.com and then later reused for the provider registry
protocol, which takes a SHA256 hash of the official distribution .zip file
and formats it as lowercase hex.
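A minimal sketch of computing such a hash (the "zh:" prefix used in lock files is added here as an assumption):

```go
import (
	"crypto/sha256"
	"encoding/hex"
	"io"
	"os"
)

// zipHash computes the legacy scheme described above: the lowercase-hex
// SHA256 of the official distribution .zip file.
func zipHash(zipPath string) (string, error) {
	f, err := os.Open(zipPath)
	if err != nil {
		return "", err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return "zh:" + hex.EncodeToString(h.Sum(nil)), nil
}
```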
This is a non-ideal hash scheme because it works only for
PackageLocalArchive locations, and so we can't verify package directories
on local disk against such hashes. However, the registry protocol is now
a compatibility constraint and so we're going to need to support this
hashing scheme for the foreseeable future.
It doesn't make sense for a built-in provider to appear in a lock file
because built-in providers have no version independent of the version of
Terraform they are compiled into.
We also exclude legacy providers here, because they were supported only
as a transitional aid to enable the Terraform 0.13 upgrade process and
are not intended for explicit selection.
The provider installer will, once it's updated to understand dependency
locking, use this concept to decide which subset of its selections to
record in the dependency lock file for reference for future installation
requests.
This is an initial implementation of writing locks back to a file on disk.
This initial implementation is incomplete because it does not write the
changes to the new file atomically. We'll revisit that in a later commit
as we return to polish these codepaths, once we've proven out this
package's design by integrating it with Terraform's provider installer.
This is the initial implementation of the parser/decoder portion of the
new dependency lock file handler. It's currently dead code because the
caller isn't written yet. We'll continue to build out this functionality
here until we have the basic level of both load and save functionality
before introducing this into the provider installer codepath.
The version constraint parser allows "~> 2", but its behavior is identical
to "~> 2.0". Due to a quirk of the constraint parser (caused by the fact
that it supports both Ruby-style and npm/cargo-style constraints), it
ends up returning "~> 2" with the minor version marked as "unconstrained"
rather than as zero, but that means the same thing as zero in this context
anyway and so we'll prefer to stringify as "~> 2.0" so that we can be
clearer about how Terraform is understanding that version constraint.
For reasons that are unclear, these two tests just started failing on
macOS very recently. The failure looked like:
PackageDir: strings.Join({
"/",
+ "private/",
"var/folders/3h/foobar/T/terraform-test-p",
"rovidercache655312854/registry.terraform.io/hashicorp/null/2.0.0",
"/windows_amd64",
},
Speculating that the macOS temporary directory moved into the /private
directory, I added a couple of EvalSymlinks calls and the tests pass
again.
No other unit tests appear to be affected by this at the moment.
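For illustration, a hedged sketch of the shape of that fix in a test; the names here are illustrative rather than the real test's:

```go
package providercache_test

import (
	"io/ioutil"
	"os"
	"path/filepath"
	"testing"
)

// TestTempDirResolved sketches the fix: resolve symlinks in the temporary
// directory before comparing paths, since macOS's temp dir is a symlink
// into /private.
func TestTempDirResolved(t *testing.T) {
	tmpDir, err := ioutil.TempDir("", "terraform-test-providercache")
	if err != nil {
		t.Fatal(err)
	}
	defer os.RemoveAll(tmpDir)

	resolved, err := filepath.EvalSymlinks(tmpDir)
	if err != nil {
		t.Fatal(err)
	}
	// Use "resolved" (not tmpDir) when asserting on paths the code under
	// test reports, e.g. the PackageDir value in the failure above.
	_ = resolved
}
```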
If a provider changes namespace in the registry, we can detect this when
running the 0.13upgrade command. As long as there is a version matching
the user's constraints, we now use the provider's new source address.
Otherwise, warn the user that the provider has moved and a version
upgrade is necessary to move to it.
We previously had this just stubbed out because it was a stretch goal for
the v0.13.0 release and it ultimately didn't make it in.
Here we fill out the existing stub -- with a minor change to its interface
so it can access credentials -- with a client implementation that is
compatible with the directory structure produced by the
"terraform providers mirror" subcommand, were the result to be published
on a static file server.
Earlier we introduced a new package hashing mechanism that is compatible
with both packed and unpacked packages, because it's a hash of the
contents of the package rather than of the archive it's delivered in.
However, we were using that only for the local selections file and not
for any remote package authentication yet.
The provider network mirrors protocol includes new-style hashes as a step
towards transitioning over to the new hash format in all cases, so this
new authenticator is here in preparation for verifying the checksums of
packages coming from network mirrors, for mirrors that support them.
For now this leaves us in a somewhat confusing situation where we have both
NewPackageHashAuthentication for the new style and
NewArchiveChecksumAuthentication for the old style, which for the moment
is represented only by a doc comment on the latter. Hopefully we can
remove NewArchiveChecksumAuthentication in a future commit, if we can
get the registry updated to use the new hashing format.
The installFromHTTPURL function downloads a package to a temporary file,
then delegates to installFromLocalArchive to install it. We were
previously not deleting the temporary file afterwards. This commit fixes
that.
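A hedged sketch of the fix: installFromHTTPURL keeps delegating to installFromLocalArchive, but now removes the temporary file when it's done. The download helper and the signatures here are illustrative, not the package's exact API:

```go
package providercache

import (
	"context"
	"io/ioutil"
	"os"
)

func installFromHTTPURL(ctx context.Context, url, targetDir string) error {
	f, err := ioutil.TempFile("", "terraform-provider")
	if err != nil {
		return err
	}
	defer os.Remove(f.Name()) // the cleanup this commit adds

	if err := download(ctx, url, f); err != nil { // hypothetical helper
		f.Close()
		return err
	}
	f.Close()

	return installFromLocalArchive(ctx, f.Name(), targetDir)
}
```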
The SearchLocalDirectory function was intentionally written to only
support symlinks at the leaves so that it wouldn't risk getting into an
infinite loop traversing intermediate symlinks, but that rule was also
applying to the base directory itself.
It's pretty reasonable to put your local plugins in some location
Terraform wouldn't normally search (e.g. because you want to get them from
a shared filesystem mounted somewhere) and creating a symlink from one
of the locations Terraform _does_ search is a convenient way to help
Terraform find those without going all in on the explicit provider
installation methods configuration that is intended for more complicated
situations.
To allow for that, here we make a special exception for the base
directory, resolving that first before we do any directory walking.
To help with debugging situations where symlinks appear, for whatever
reason, at intermediate levels inside the search tree, we also now emit a
WARN log line in that case, to be explicit that symlinks are not supported
there and to hint to put the symlink at the top level if you want to use
symlinks at all.
(The support for symlinks at the deepest level of search is not mentioned
in this message because we allow it primarily for our own cache linking
behavior.)
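A minimal sketch of the special case, assuming illustrative names: resolve the base directory itself before walking, while still warning about symlinks found at intermediate levels inside the tree:

```go
package getproviders

import (
	"log"
	"os"
	"path/filepath"
)

func searchLocalDirectory(baseDir string) error {
	// Resolve a symlinked base directory first, so users can symlink a
	// search location to wherever their plugins really live.
	resolvedBase, err := filepath.EvalSymlinks(baseDir)
	if err != nil {
		return err
	}
	return filepath.Walk(resolvedBase, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		// Intermediate symlinks inside the tree trigger a warning; leaf
		// symlinks are handled separately in the real implementation.
		if path != resolvedBase && info.Mode()&os.ModeSymlink != 0 {
			log.Printf("[WARN] ignoring symlink %s: symlinks are supported only for the base directory itself", path)
		}
		return nil
	})
}
```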
When installing a provider which is already cached, we attempt to create
a symlink from the install directory targeting the cache. If symlinking
fails due to missing OS/filesystem support, we instead want to copy the
cached provider.
The fallback code to do this would always fail, due to a missing target
directory. This commit fixes that. I was unable to find a way to add
automated tests around this, but I have manually verified the fix on
Windows 8.1.
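A hedged sketch of the fix: if symlinking fails, make sure the target's parent directory exists before falling back to a deep copy. copyDir is a hypothetical helper standing in for the real copy fallback:

```go
package providercache

import (
	"os"
	"path/filepath"
)

func linkOrCopy(sourceDir, targetDir string) error {
	if err := os.Symlink(sourceDir, targetDir); err == nil {
		return nil // symlinks supported; nothing more to do
	}
	// Previously this parent directory was missing, so the copy fallback
	// always failed; creating it first is the fix.
	if err := os.MkdirAll(filepath.Dir(targetDir), 0755); err != nil {
		return err
	}
	return copyDir(targetDir, sourceDir) // hypothetical deep-copy helper
}
```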
At the end of the EnsureProviderVersions process, we generate a lockfile
of the selected and installed provider versions. This includes a hash of
the unpacked provider directory.
When calculating this hash and generating the lockfile, we now also
verify that the provider directory contains a valid executable file. If
not, we return an error for this provider and trigger the installer's
HashPackageFailure event. Note that this event is not yet processed by
terraform init; that comes in the next commit.
Instead of searching the installed provider package directory for a
binary as we install it, we can lazily detect the executable as it is
required. Doing so allows us to separately report an invalid unpacked
package, giving the user more actionable error messages.
* internal/getproviders: decode and return any registry warnings
The public registry may include a list of warnings in the "versions"
response for any given provider. This PR adds support for warnings from
the registry and an installer event to return those warnings to the
user.
* internal/initwd: fix panics with relative submodules in DirFromModule
There were two related issues here:
1. panic with any local module with submodules
2. panic with a relative directory that was above the workdir ("../")
The first panic was caused by the local installer looking up the root
module with the (nonexistent) key "root.", instead of "".
The second panic was caused by the installer trying to determine the
relative path from ".". This was fixed by detecting "." as the source
path and using the absolute path for the call to filepath.Rel.
Added test cases for both panics and updated the existing e2e tests with
the correct install paths.
This is the equivalent of UnpackedDirectoryPathForPackage when working
with the packed directory layout. It returns a path to a .zip file with
a name that would be detected by SearchLocalDirectory as a
PackageLocalArchive package.
We previously had this functionality available for cached packages in the
providercache package. This moves the main implementation of this over
to the getproviders package and then implements it also for PackageMeta,
allowing us to compute hashes in a consistent way across both of our
representations of a provider package.
The new methods on PackageMeta will only be effective for packages in the
local filesystem because we need direct access to the contents in order
to produce the hash. Hopefully in future the registry protocol will be
able to also provide hashes using this content-based (rather than
archive-based) algorithm and then we'll be able to make this work for
PackageMeta referring to a package obtained from a registry too, but
hashes for local packages only are still useful for some cases right now,
such as generating mirror directories in the "terraform providers mirror"
command.
This adds support for "unmanaged" providers, or providers with process
lifecycles not controlled by Terraform. These providers are assumed to
be started before Terraform is launched, and are assumed to shut
themselves down after Terraform has finished running.
To do this, we must update the go-plugin dependency to v1.3.0, which
added support for the "test mode" plugin serving that powers all this.
As a side-effect of not needing to manage the process lifecycle anymore,
Terraform also no longer needs to worry about the provider's binary, as
it won't be used for anything anymore. Because of this, we can disable
the init behavior that concerns itself with downloading that provider's
binary, checking its version, and otherwise managing the binary.
This is all managed on a per-provider basis, so managed providers that
Terraform downloads, starts, and stops can be used in the same commands
as unmanaged providers. The TF_REATTACH_PROVIDERS environment variable
is added, and is a JSON encoding of the provider's address to the
information we need to connect to it.
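For illustration, a hedged sketch of what setting that variable might look like; the exact field names in the JSON are an assumption, not a documented contract:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// A JSON map from provider address to the connection details of an
	// already-running provider server (field names illustrative).
	reattach := `{
	  "registry.terraform.io/hashicorp/example": {
	    "Protocol": "grpc",
	    "Pid": 12345,
	    "Test": true,
	    "Addr": {"Network": "unix", "String": "/tmp/plugin-example.sock"}
	  }
	}`
	os.Setenv("TF_REATTACH_PROVIDERS", reattach)
	fmt.Println("Terraform run from this environment will reattach to the provider")
}
```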
This change enables two benefits: first, delve and other debuggers can
now be attached to provider server processes while Terraform connects to
them, which was previously difficult or impossible. Second, it allows the
SDK test framework to
host the provider in the same process as the test driver, while running
a production Terraform binary against the provider. This allows for Go's
built-in race detector and test coverage tooling to work as expected in
provider tests.
Unmanaged providers are expected to work in the exact same way as
managed providers, with one caveat: Terraform kills provider processes
and restarts them once per graph walk, meaning multiple times during
most Terraform CLI commands. As unmanaged providers can't be killed by
Terraform, and have no visibility into graph walks, unmanaged providers
are likely to have differences in how their global mutable state behaves
when compared to managed providers. Namely, unmanaged providers are
likely to retain global state when managed providers would have reset
it. Developers relying on global state should be aware of this.
Relying on the early config for provider requirements was necessary in
Terraform 0.12, to allow the 0.12upgrade command to run after init
installs providers.
However in 0.13, the same restrictions do not apply, and the detection
of provider requirements has changed. As a result, the early config
loader gives incorrect provider requirements in some circumstances,
such as those in the new test in this commit.
Therefore we are changing the init command to use the requirements found
by the full configuration loader. This also means that we can remove the
internal initwd CheckCoreVersionRequirements function.
This improves the error message shown when a provider is not found.
Previously a user would see the following error even if terraform was
only searching the local filesystem:
"provider registry registry.terraform.io does not have a provider named
...."
This PR adds a registry-specific error type and modifies the MultiSource
installer to check for registry errors. It will return the
registry-specific error message if there is one, but if not the error
message will list all locations searched.
* providercache: add logging for errors from getproviders.SearchLocalDirectory
providercache.fillMetaCache() was silently swallowing errors when
searching the cache directory. This commit logs the error without
changing the behavior otherwise.
* command/cliconfig: validate plugin cache dir exists
The plugin cache directory must exist for terraform to use it, so we
will add a check at the beginning.
* internal/getproviders: fix panic with invalid path parts
If the search path is missing a directory, the provider installer would
try to create an addrs.Provider with the wrong parts. For example if the
hostname was missing (as in the test case), it would call
addrs.NewProvider with (namespace, typename, version). This adds a
validation step for each part before calling addrs.NewProvider to avoid
the panic.
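A hedged sketch of that validation step: validate each path segment before constructing the provider address, so malformed directory layouts produce an error instead of a panic. The import paths and helper names are as I understand them and may differ:

```go
package getproviders

import (
	"fmt"

	svchost "github.com/hashicorp/terraform-svchost"
	"github.com/hashicorp/terraform/addrs"
)

func providerFromPathParts(hostDir, namespaceDir, typeDir string) (addrs.Provider, error) {
	host, err := svchost.ForComparison(hostDir)
	if err != nil {
		return addrs.Provider{}, fmt.Errorf("invalid registry hostname %q: %s", hostDir, err)
	}
	namespace, err := addrs.ParseProviderPart(namespaceDir)
	if err != nil {
		return addrs.Provider{}, fmt.Errorf("invalid provider namespace %q: %s", namespaceDir, err)
	}
	typeName, err := addrs.ParseProviderPart(typeDir)
	if err != nil {
		return addrs.Provider{}, fmt.Errorf("invalid provider type %q: %s", typeDir, err)
	}
	// Only now is it safe to construct the address.
	return addrs.NewProvider(host, namespace, typeName), nil
}
```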
This is a port of the retry/timeout logic added in #24260 and #24259,
using the same environment variables to configure the retry and timeout
settings.
* internal/registry source: return error if requested provider version protocols are not supported
* getproviders: move responsibility for protocol compatibility checks into the registry client
The original implementation had the providercache checking the provider
metadata for protocol compatibility, but this is only relevant for the
registry source so it made more sense to move the logic into
getproviders.
This also addresses an issue where we were pulling the metadata for
every provider version until we found one that was supported. I've
extended the registry client to unmarshal the protocols in
`ProviderVersions` so we can filter through that list, instead of
pulling each version's metadata.
When looking up the namespace for a legacy provider source, we need to
use the /v1/providers/-/{name}/versions endpoint. For non-HashiCorp
providers, the /v1/providers/-/{name} endpoint returns a 404.
This commit updates the LegacyProviderDefaultNamespace method and the
mock registry servers accordingly.
This commit implements most of the intended functionality of the upgrade
command for rewriting configurations.
For a given module, it makes a list of all providers in use. Then it
attempts to detect the source address for providers without an explicit
source.
Once this step is complete, the tool rewrites the relevant configuration
files. This results in a single "required_providers" block for the
module, with a source for each provider.
Any providers for which the source cannot be detected (for example,
unofficial providers) will need a source to be defined by the user. The
tool writes an explanatory comment to the configuration to help with
this.
* internal/getproviders: apply case normalizations in ParseMultiSourceMatchingPatterns
This is a very minor refactor which takes advantage of addrs.ParseProviderPart case normalization to normalize non-wildcard sources.
An earlier commit added a redundant stub for a new network mirror source
that was already previously stubbed as HTTPMirrorSource.
This commit removes the unnecessary extra stub and changes the CLI config
handling to use it instead. Along the way this also switches to using a
full base URL rather than just a hostname for the mirror, because using
the usual "Terraform-native service discovery" protocol here doesn't isn't
as useful as in the places we normally use it (the mirror mechanism is
already serving as an indirection over the registry protocol) and using
a direct base URL will make it easier to deploy an HTTP mirror under
a path prefix on an existing static file server.
* internal/providercache: verify that the provider protocol version is
compatible
The public registry includes a list of supported provider protocol
versions for each provider version. This change adds verification of
support and adds a specific error message pointing users to the closest
matching version.
This is a placeholder for later implementation of a mirror source that
talks to a particular remote HTTP server and expects it to implement the
provider mirror protocol.
* tools/terraform-bundle: refactor to use new provider installer and
provider directory layouts
terraform-bundle now supports a "source" attribute for providers,
uses the new provider installer, and the archive it creates preserves
the new (required) directory hierarchy for providers, under a "plugins"
directory.
This is a breaking change in many ways: source is required for any
non-HashiCorp provider, locally-installed providers must be given a
source (can be arbitrary, see docs) and placed in the expected directory
hierarchy, and the unzipped archive is no longer flat; there is a new
"plugins" directory created with providers in the new directory layout.
This PR also extends the existing test to check the contents of the zip
file.
TODO: Re-enable e2e tests (currently suppressed with a t.Skip)
This commit includes an update to our travis configuration, so the terraform-bundle e2e tests run. It also turns off the e2e tests, which will fail until we have a terraform 0.13.* release under releases.hashicorp.com. We decided it was better to merge this now instead of waiting until we started seeing issues opened from users who built terraform-bundle from 0.13 and found it didn't work with 0.12 - better that they get an immediate error message from the binary directing them to build from the appropriate release.
Providers installed from the registry are accompanied by a list of
checksums (the "SHA256SUMS" file), which is cryptographically signed to
allow package authentication. The process of verifying this has multiple
steps:
- First we must verify that the SHA256 hash of the package archive
matches the expected hash. This could be done for local installations
too, in the future.
- Next we ensure that the expected hash returned as part of the registry
API response matches an entry in the checksum list.
- Finally we verify the cryptographic signature of the checksum list,
using the public keys provided by the registry.
Each of these steps is implemented as a separate PackageAuthentication
type. The local archive installation mechanism uses only the archive
checksum authenticator, and the HTTP installation uses all three in the
order given.
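A hedged sketch of composing those three steps into one authenticator; the constructor names follow the description above, but the exact signatures are an assumption, and the hash, SHA256SUMS document, detached signature, keys, and location are assumed to be in scope:

```go
auth := PackageAuthenticationAll(
	NewArchiveChecksumAuthentication(wantSHA256Sum),                        // step 1: archive hash matches
	NewMatchingChecksumAuthentication(shaSumsDoc, filename, wantSHA256Sum), // step 2: hash appears in SHA256SUMS
	NewSignatureAuthentication(shaSumsDoc, signature, publicKeys),          // step 3: SHA256SUMS is signed
)
result, err := auth.AuthenticatePackage(location)
```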
The package authentication system now also returns a result value, which
is used by command/init to display the result of the authentication
process.
There are three tiers of signature, each of which is presented
differently to the user:
- Signatures from the embedded HashiCorp public key indicate that the
provider is officially supported by HashiCorp;
- If the signing key is not from HashiCorp, it may have an associated
trust signature, which indicates that the provider is from one of
HashiCorp's trusted partners;
- Otherwise, if the signature is valid, this is a community provider.
Due to other pressures at the time this was implemented, it was tested
only indirectly through integration tests in other packages. This now
introduces tests for the two main entry points on MemoizeSource.
Due to other pressures at the time this was implemented, it was tested
only indirectly through integration tests in other packages.
This now introduces tests for the two main entry points on the
MultiSource, along with its provider-address pattern matching logic.
This does not yet include thorough tests for
ParseMultiSourceMatchingPatterns, because that function still needs some
adjustments to do the same case folding as for normal provider address
parsing, which will follow in a later commit along with suitable tests.
With that said, the tests added here do _indirectly_ test the happy path
of ParseMultiSourceMatchingPatterns, so we have some incomplete testing
of that function in the meantime.
Earlier on in the stubbing of this package we realized that it wasn't
going to be possible to populate the authentication-related bits for all
packages because the relevant metadata just isn't available for packages
that are already local.
However, we just moved ahead with that awkward design at the time because
we needed to get other work done, and so we've been mostly producing
PackageMeta values with all-zeros hashes and just ignoring them entirely
as a temporary workaround.
This is a first step towards what is hopefully a more intuitive model:
authentication is an optional thing in a PackageMeta that is currently
populated only for packages coming from a registry.
So far this still just models checking a SHA256 hash, which is not a
sufficient set of checks for a real release but hopefully the "real"
implementation is a natural iteration of this starting point, and if not
then at least this interim step is a bit more honest about the fact that
Authentication will not be populated on every PackageMeta.
The fake installable package meta used a ZIP archive which gave
different checksums between macOS and Linux targets. This commit removes
the target from the contents of this archive, and updates the golden
hash value in the test to match. This test should now pass on both
platforms.
Built-in providers are special providers that are distributed as part of
Terraform CLI itself, rather than being installed separately. They always
live in the terraform.io/builtin/... namespace so it's easier to see that
they are special, and currently there is only one built-in provider named
"terraform".
Previous commits established the addressing scheme for built-in providers.
This commit makes the installer aware of them to the extent that it knows
not to try to install them the usual way and it's able to report an error
if the user requests a built-in provider that doesn't exist or tries to
impose a particular version constraint for a built-in provider.
For the moment the tests for this are the ones in the "command" package
because that's where the existing testing infrastructure for this
functionality lives. A later commit should add some more focused unit
tests here in the internal/providercache package, too.
This encapsulates the logic for selecting an implied FQN for an
unqualified type name, which could either come from a local name used in
a module without specifying an explicit source for it or from the prefix
of a resource type on a resource that doesn't explicitly set "provider".
This replaces the previous behavior of just directly calling
NewDefaultProvider everywhere so that we can use a different implication
for the local name "terraform", to refer to the built-in terraform
provider rather than the stale one that's on registry.terraform.io for
compatibility with other Terraform versions.
Due to some incomplete rework of this function in an earlier commit, the
safety check for using the same directory as both the target and the
cache was inverted and was raising an error _unless_ they matched, rather
than _if_ they matched.
This change is verified by the e2etest TestInitProviders_pluginCache,
which is also updated to use the new-style cache directory layout as part
of this commit.
We previously skipped this one because it wasn't strictly necessary for
replicating the old "terraform init" behavior, but we do need it to work
so that things like the -plugin-dir option can behave correctly.
Linking packages from other cache directories and installing from unpacked
directories are fundamentally the same operation because a cache directory
is really just a collection of unpacked packages, so here we refactor
the LinkFromOtherCache functionality to actually be in
installFromLocalDir, and LinkFromOtherCache becomes a wrapper for
the installFromLocalDir function that just calculates the source and
target directories automatically and invalidates the metaCache.
We previously had only a stub implementation for a totally-empty
MultiSource. Here we have an initial implementation of the full
functionality, which we'll need to support "terraform init -plugin-dir=..."
in a subsequent commit.
On Unix-derived systems a directory must be marked as "executable" in
order to be accessible, so our previous mode of 0660 here was insufficient
and would cause a failure if it happened to be the installer that was
creating the plugins directory for the first time here.
Now we'll make it executable and readable for all but only writable by
the same user/group. For consistency, we also make the selections file
itself readable by everyone. In both cases, the umask we are run with may
further constrain these modes.
These are some helpers to support unit testing in other packages, allowing
callers to exercise provider installation mechanisms without hitting any
real upstream source or having to prepare local package directories.
MockSource is a Source implementation that just scans over a provided
static list of packages and returns whatever matches.
FakePackageMeta is a shorthand for concisely constructing a
realistic-looking but uninstallable PackageMeta, probably for use with
MockSource.
FakeInstallablePackageMeta is similar to FakePackageMeta but also goes to
the trouble of creating a real temporary archive on local disk so that
the resulting package meta is pointing to something real on disk. This
makes the result more useful to the caller, but in return they get the
responsibility to clean up the temporary file once the test is over.
Nothing is using these yet.
Just as with the old installer mechanism, our goal is that explicit
provider installation is the only way that new provider versions can be
selected.
To achieve that, we conclude each call to EnsureProviderVersions by
writing a selections lock file into the target directory. A later caller
can then recall the selections from that file by calling SelectedPackages,
which both ensures that it selects the same set of versions and also
verifies that the checksums recorded by the installer still match.
This new selections.json file has a different layout than our old
plugins.json lock file. Not only does it use a different hashing algorithm
than before, we also record explicitly which version of each provider
was selected. In the old model, we'd repeat normal discovery when
reloading the lock file and then fail with a confusing error message if
discovery happened to select a different version, but now we'll be able
to distinguish between a package that's gone missing since installation
(which could previously have then selected a different available version)
from a package that has been modified.
For the old-style provider cache directory model we hashed the individual
executable file for each provider. That's no longer appropriate because
we're giving each provider package a whole directory to itself where it
can potentially have many files.
This therefore introduces a new directory-oriented hashing algorithm, and
it's just using the Go Modules directory hashing algorithm directly
because that's already had its cross-platform quirks and other wrinkles
addressed during the Go Modules release process, and is now used
prolifically enough in Go codebases that breaking changes to the upstream
algorithm would be very expensive to the Go ecosystem.
This is also a bit of forward planning, anticipating that later we'll use
hashes in a top-level lock file intended to be checked in to user version
control, and then use those hashes also to verify packages _during_
installation, where we'd need to be able to hash unpacked zip files. The
Go Modules hashing algorithm is already implemented to consistently hash
both a zip file and an unpacked version of that zip file.
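For illustration, a minimal sketch using the Go Modules dirhash package, which exposes both directory and zip hashing in the same "h1:" scheme; the paths here are illustrative:

```go
package main

import (
	"fmt"
	"log"

	"golang.org/x/mod/sumdb/dirhash"
)

func main() {
	// Hash an unpacked provider package directory.
	fromDir, err := dirhash.HashDir("./terraform-provider-example", "", dirhash.Hash1)
	if err != nil {
		log.Fatal(err)
	}
	// Hash the packed zip of the same package with the same scheme.
	fromZip, err := dirhash.HashZip("./terraform-provider-example.zip", dirhash.Hash1)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(fromDir, fromZip) // both yield hashes in the "h1:" scheme
}
```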
We've been using the models from the "moduledeps" package to represent our
provider dependencies everywhere since the idea of provider dependencies
was introduced in Terraform 0.10, but that model is not convenient to use
for any use-case other than the "terraform providers" command that needs
individual-module-level detail.
To make things easier for new codepaths working with the new-style
provider installer, here we introduce a new model type
getproviders.Requirements which is based on the type the new installer was
already taking as its input. We have new methods in the states, configs,
and earlyconfig packages to produce values of this type, and a helper
to merge Requirements together so we can combine config-derived and
state-derived requirements together during installation.
The advantage of this new model over the moduledeps one is that all of the
recursive module walking is done up front and we produce a simple, flat
structure that is more convenient for the main use-cases of selecting
providers for installation and then finding providers in the local cache
to use them for other operations.
This new model is _not_ suitable for implementing "terraform providers"
because it does not retain module-specific requirement details. Therefore
we will likely keep using moduledeps for "terraform providers" for now,
and then possibly at a later time consider specializing the moduledeps
logic for only what "terraform providers" needs, because it seems to be
the only use-case that needs to retain that level of detail.
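A minimal sketch of the flat requirements model and the merge helper, using illustrative stand-ins for addrs.Provider and the version-constraint list type:

```go
package main

import "fmt"

// Illustrative stand-ins; the real types live in the addrs and
// getproviders packages.
type Provider string
type VersionConstraints []string

// Requirements is the flat model described above: one entry per provider,
// with all constraints on that provider merged together.
type Requirements map[Provider]VersionConstraints

// Merge combines config-derived and state-derived requirements.
func (r Requirements) Merge(other Requirements) Requirements {
	merged := make(Requirements, len(r)+len(other))
	for p, c := range r {
		merged[p] = append(merged[p], c...)
	}
	for p, c := range other {
		merged[p] = append(merged[p], c...)
	}
	return merged
}

func main() {
	config := Requirements{"registry.terraform.io/hashicorp/null": {"~> 2.0"}}
	state := Requirements{"registry.terraform.io/hashicorp/null": {">= 2.1.0"}}
	fmt.Println(config.Merge(state))
}
```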
This was incorrectly removing the _source_ entry prior to creating the
symlink, therefore ending up with a dangling symlink and no source file.
This wasn't obvious before because the test case for LinkFromOtherCache
was also incorrectly named and therefore wasn't running. Fixing the name
of that test made this problem apparent.
The TestLinkFromOtherCache test case now ends up seeing the final resolved
directory rather than the symlink target, because of upstream changes
to the internal/getproviders filesystem scanning logic to handle symlinks
properly.
Previously this was failing to treat symlinks to directories as unpacked
layout, because our file info was only an Lstat result, not a full Stat.
Now we'll resolve the symlink first, allowing us to handle a symlink to
a directory. That's important because our internal/providercache behavior
is to symlink from one cache to another where possible.
There's a lot going on in these functions that can be hard to follow from
the outside, so we'll add some additional trace logging so that we can
more easily understand why things are behaving the way they are.
When a provider source produces an HTTP URL location we'll expect it to
resolve to a zip file, which we'll first download to a temporary
directory and then treat it like a local archive.
When a provider source produces a local archive path we'll expect it to
be a zip file and extract it into the target directory.
This does not yet include an implementation of installing from an
already-unpacked local directory. That will follow in a subsequent commit,
likely following a similar principle as in Dir.LinkFromOtherCache.
These new functions allow command implementations to get hold of the
providercache objects and installation source object derived from the
current CLI configuration.
The MultiSource isn't actually properly implemented yet, but this is a
minimal implementation just for the case where there are no underlying
sources at all, because we use an empty MultiSource as a placeholder
when a test in the "command" package fails to explicitly populate a
ProviderSource.
This is not tested yet, but it's a compilable strawman implementation of
the necessary sequence of events to coordinate all of the moving parts
of running a provider installation operation.
This will inevitably see more iteration in later commits as we complete
the surrounding parts and wire it up to be used by "terraform init". So
far, it's just dead code not called by any other package.
The Installer type will encapsulate the logic for running an entire
provider installation request: given a set of providers to install, it
will determine a method to obtain each of them (or detect that they are
already installed) and then take the necessary actions.
So far it doesn't do anything, but this stubs out an interface by which
the caller can request ongoing notifications during an installation
operation.
This will eventually be responsible for actually retrieving a package from
a source and then installing it into the cache directory, but for the
moment it's just a stub to complete the proposed API, which I intend to
test in a subsequent commit by writing the full "Installer" API that will
encapsulate the full installation logic.
When a system-wide shared plugin cache is configured, we'll want to make
use of entries already in the shared cache when populating a local
(configuration-specific) cache.
This new method LinkFromOtherCache encapsulates the work of placing a link
from one cache to another. If possible it will create a symlink, therefore
retaining a key advantage of configuring a shared plugin cache, but
otherwise we'll do a deep copy of the package directory from one cache
to the other.
Our old provider installer would always skip trying to create symlinks on
Windows because Go standard library support for os.Symlink on Windows
was inconsistent in older versions. However, os.Symlink can now create
symlinks using a new API introduced in a Windows 10 update and cleanly
fail if symlink creation is impossible, so it's safe for us to just
try to create the symlink and react if that produces an error, just as we
used to do on non-Windows systems when possibly creating symlinks on
filesystems that cannot support them.
The existing functionality in this package deals with finding packages
that are either available for installation or already installed. In order
to support installation we also need to determine the location where a
package should be installed.
This lives in the getproviders package because that way all of the logic
related to the filesystem layout for local provider directories lives in
one place, where it can be maintained more easily in future.
We've previously been copying this function around so it could remain
unexported while being used in various packages. However, it's a
non-trivial function with lots of specific assumptions built into it, so
here we'll put it somewhere that other packages can depend on it _and_
document the assumptions it seems to be making for future reference.
As a bonus, this now uses os.SameFile to detect when two paths point to
the same physical file, instead of the slightly buggy local implementation
we had before which only worked on Unix systems and did not correctly
handle when the paths were on different physical devices.
The copy of the function I extracted here is the one from internal/initwd,
so this commit also includes the removal of that unexported version and
updating the callers in that package to use it at this new location.
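A minimal sketch of the portable same-file check, replacing the Unix-only device/inode comparison; the wrapper name is illustrative:

```go
package copydir

import "os"

// sameFile reports whether two paths refer to the same physical file,
// using os.SameFile so it also behaves correctly on Windows and across
// physical devices.
func sameFile(a, b string) (bool, error) {
	aInfo, err := os.Stat(a)
	if err != nil {
		return false, err
	}
	bInfo, err := os.Stat(b)
	if err != nil {
		return false, err
	}
	return os.SameFile(aInfo, bInfo), nil
}
```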
Historically our logic to handle discovering and installing providers has
been spread across several different packages. This package is intended
to become the home of all logic related to what is now called "provider
cache directories", which means directories on local disk where Terraform
caches providers in a form that is ready to run.
That includes both logic related to interrogating items already in a cache
(included in this commit) and logic related to inserting new items into
the cache from upstream provider sources (to follow in later commits).
These new codepaths are focused on providers and do not include other
plugin types (provisioners and credentials helpers), because providers are
the only plugin type that is represented by a hierarchical, decentralized
namespace and the only plugin type that has an auto-installation protocol
defined. The existing codepaths will remain to support the handling of
the other plugin types that require manual installation and that use only
a flat, locally-defined namespace.
Previously this was available by instantiating a throwaway
FilesystemMirrorSource, but that's pretty counter-intuitive for callers
that just want to do a one-off scan without retaining any ongoing state.
Now we expose SearchLocalDirectory as an exported function, and the
FilesystemMirrorSource then uses it as part of its implementation too.
Callers that just want to know what's available in a directory can call
SearchLocalDirectory directly.
Implement a new provider_meta block in the terraform block of modules, allowing provider-keyed metadata to be communicated from HCL to provider binaries.
Bundled in this change for minimal protocol version bumping is the addition of markdown support for attribute descriptions and the ability to indicate when an attribute is deprecated, so this information can be shown in the schema dump.
Co-authored-by: Paul Tyng <paul@paultyng.net>
This implies some notable changes that will have a visible impact to
end-users of official Terraform releases:
- Terraform is no longer compatible with macOS 10.10 Yosemite, and
requires at least 10.11 El Capitan. (Relatedly, Go 1.14 is planned to be
the last release to support El Capitan, so while that remains supported
for now, it's notable that Terraform 0.13 is likely to be the last major
release of Terraform supporting it, with 0.14 likely to further require
macOS 10.12 Sierra.)
- Terraform is no longer compatible with FreeBSD 10.x, which has reached
end-of-life. Terraform now requires FreeBSD 11.2 or later.
- Terraform now supports TLS 1.3 when it makes connections to remote
services such as backends and module registries. Although TLS 1.3 is
backward-compatible in principle, some legacy systems reportedly work
incorrectly when attempting to negotiate it. (This change does not
affect outgoing requests made by provider plugins, though they will see
a similar change in behavior once built with Go 1.13 or later.)
- Ed25519 certificates are now supported for TLS 1.2 and 1.3 connections.
- On UNIX systems where "use-vc" is set in resolv.conf, TCP will now be
used for DNS resolution. This is unlikely to cause issues in practice
because a system set up in this way can presumably already reach its
nameservers over TCP (or else other applications would misbehave), but
could potentially lead to lookup failures in unusual situations where a
system only runs Terraform, has historically had "use-vc" in its
configuration, but yet is blocked from reaching its configured
nameservers over TCP.
- Some parts of Terraform now support Unicode 12.0 when working with
strings. However, notably the Terraform Language itself continues to
use the text segmentation tables from Unicode 9.0, which means it lacks
up-to-date support for recognizing modern emoji combining forms as
single characters. (We may wish to upgrade the text segmentation tables
to Unicode 12.0 tables in a later commit, to restore consistency.)
This also includes some changes to the contents of "vendor", and
particularly to the format of vendor/modules.txt, per the changes to
vendoring in the Go 1.14 toolchain. This new syntax is activated by the
specification of "go 1.14" in the go.mod file.
Finally, the exact format of error messages from the net/http library has
changed since Go 1.12, and so a couple of our tests needed updates to
their expected error messages to match that.
This is a basic implementation of FilesystemMirrorSource for now aimed
only at the specific use-case of scanning the cache of provider plugins
Terraform will keep under the ".terraform" directory, as part of our
interim provider installer implementation for Terraform 0.13.
The full functionality of this will grow out in later work when we
implement explicit local filesystem mirrors, but for now the goal is to
use this just to inspect the work done by the automatic installer once
we switch it to the new provider-FQN-aware directory structure.
The various FIXME comments in this are justified by the limited intended
scope of this initial implementation, and they should be resolved by
later work to use FilesystemMirrorSource explicitly for user-specified
provider package mirrors.
These are utility functions to ease processing of lists of PackageMeta
elsewhere, once we have functionality that works with multiple packages
at once. The local filesystem mirror source will be the first example of
this, so these methods are motivated mainly by its needs.
This is just to have a centralized set of logic for converting from a
platform string (like "linux_amd64") to a Platform object, so we can do
normalization and validation consistently.
Although we tend to return these in contexts where at least one of these
values is implied, being explicit means that PackageMeta values are
self-contained and less reliant on such external context.
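A minimal sketch of that centralized conversion, with a locally defined Platform type standing in for the package's real one:

```go
package getproviders

import (
	"fmt"
	"strings"
)

// Platform is the OS/architecture pair described above.
type Platform struct {
	OS, Arch string
}

// ParsePlatform converts a platform string like "linux_amd64" into a
// Platform value, with basic validation; the real implementation may
// normalize further.
func ParsePlatform(str string) (Platform, error) {
	parts := strings.Split(str, "_")
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return Platform{}, fmt.Errorf("%q is not a valid platform: must be an OS/architecture pair, like %q", str, "linux_amd64")
	}
	return Platform{OS: parts[0], Arch: parts[1]}, nil
}
```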
* WIP: dynamic expand
* WIP: add variable and local support
* WIP: outputs
* WIP: Add referencer
* String representation, fixing tests it impacts
* Fixes TestContext2Apply_outputOrphanModule
* Fix TestContext2Apply_plannedDestroyInterpolatedCount
* Update DestroyOutputTransformer and associated types to reflect PlannableOutputs
* Remove comment about locals
* Remove module count enablement
* Removes allowing count for modules, and reverts the test,
while adding a Skip()'d test that works when you re-enable
the config
* update TargetDownstream signature to match master
* remove unnecessary method
Co-authored-by: James Bardin <j.bardin@gmail.com>
The provider FQN is becoming our primary identifier for a provider, so
it's important that we are clear about the equality rules for these
addresses and what characters are valid within them.
We previously had a basic regex permitting ASCII letters and digits for
validation and no normalization at all. We need to do at least case
folding and UTF-8 normalization because these names will appear in file
and directory names in case-insensitive filesystems and in repository
names such as on GitHub.
Since we're already using DNS-style normalization and validation rules
for the hostname part, rather than defining an entirely new set of rules
here we'll just treat the provider namespace and type as if they were
single labels in a DNS name. Aside from some internal consistency, that
also works out nicely because systems like GitHub use organization and
repository names as part of hostnames (e.g. with GitHub Pages) and so
tend to apply comparable constraints themselves.
This introduces the possibility of names containing letters from alphabets
other than the latin alphabet, and for latin letters with diacritics.
That's consistent with our introduction of similar support for identifiers
in the language in Terraform 0.12, and is intended to be more friendly to
Terraform users throughout the world that might prefer to name their
products using a different alphabet. This is also a further justification
for using the DNS normalization rules: modern companies tend to choose
product names that make good domain names, and now such names will be
usable as Terraform provider names too.
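A hedged sketch of validating and normalizing a namespace or type as if it were a single DNS label; the idna profile used here is an assumption, and Terraform's real rules may be stricter:

```go
package addrs

import (
	"fmt"
	"strings"

	"golang.org/x/net/idna"
)

func parseProviderPart(given string) (string, error) {
	// A single label cannot contain dots.
	if strings.ContainsRune(given, '.') {
		return "", fmt.Errorf("dots are not allowed")
	}
	// Case folding and Unicode normalization, as for an internationalized
	// hostname label.
	part, err := idna.Lookup.ToUnicode(given)
	if err != nil {
		return "", fmt.Errorf("must be valid as a single DNS label: %s", err)
	}
	return part, nil
}
```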
This is a temporary helper so that we can potentially ship the new
provider installer without making a breaking change by relying on the
old default namespace lookup API on the default registry to find a proper
FQN for a legacy provider address during installation.
If it's given a non-legacy provider address then it just returns the given
address verbatim, so any codepath using it will also correctly handle
explicit full provider addresses. This also means it will automatically
self-disable once we stop using addrs.NewLegacyProvider in the config
loader, because there will therefore no longer be any legacy provider
addresses in the config to resolve. (They'll be "default" provider
addresses instead, assumed to be under registry.terraform.io/hashicorp/* )
It's not decided yet whether we will actually introduce the new provider
installer in a minor release, but even if we don't, this API function will likely be
useful for a hypothetical automatic upgrade tool to introduce explicit
full provider addresses into existing modules that currently rely on
the equivalent to this lookup in the current provider installer.
This is dead code for now, but my intent is that it would either be called
as part of new provider installation to produce an address suitable to
pass to Source.AvailableVersions, or it would be called from the
aforementioned hypothetical upgrade tool.
Whatever happens, these functions can be removed no later than one whole
major release after the new provider installer is introduced, when
everyone's had the opportunity to update their legacy unqualified
addresses.
Our local filesystem mirror mechanism will allow provider packages to be
given either in packed form as an archive directly downloaded to disk or
in an unpacked form where the archive is extracted.
Distinguishing these two cases in the concrete Location types will allow
callers to reliably select the mode chosen by the selected installation
source and handle it appropriately, rather than resorting to out-of-band
heuristics like checking whether the object is a directory or a file.
In a future commit, these implementations of Source will allow finding
and retrieving provider packages via local mirrors, both in the local
filesystem and over the network using an HTTP-based protocol.
This is an API stub for a component that will be added in a future commit
to support considering a number of different installation sources for each
provider. These will eventually be configurable in the CLI configuration,
allowing users to e.g. mirror certain providers within their own
infrastructure while still being able to go upstream for those that aren't
mirrored, or permit locally-mirrored providers only, etc.
Some sources make network requests that are likely to be slow, so this
wrapper type can cache previous responses for its lifetime in order to
speed up repeated requests for the same information.
Registries backed by static files are likely to use relative paths to
their archives for simplicity's sake, but we'll normalize them to be
absolute before returning because the caller wouldn't otherwise know what
to resolve the URLs relative to.
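A minimal sketch of that normalization, using net/url's reference resolution to make archive URLs absolute against the URL of the response they came from:

```go
package getproviders

import "net/url"

// absoluteArchiveURL resolves a possibly-relative archive URL against the
// registry response URL, so callers always receive an absolute location.
func absoluteArchiveURL(responseURL *url.URL, archive string) (*url.URL, error) {
	rel, err := url.Parse(archive)
	if err != nil {
		return nil, err
	}
	return responseURL.ResolveReference(rel), nil
}
```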
We intend to support installation both directly from origin registries and
from mirrors in the local filesystem or over the network. This Source
interface will serve as our abstraction over those three options, allowing
calling code to treat them all the same.
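A hedged sketch of that abstraction; the method set follows the description above, but the exact signatures (and the VersionList, Version, Platform, and PackageMeta types) are approximations of the package's real API:

```go
type Source interface {
	// AvailableVersions lists the versions of the given provider that
	// this source could install.
	AvailableVersions(provider addrs.Provider) (VersionList, error)

	// PackageMeta returns metadata for one version of a provider on one
	// target platform, including the location to retrieve it from.
	PackageMeta(provider addrs.Provider, version Version, target Platform) (PackageMeta, error)
}
```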
Our existing provider installer was originally built to work with
releases.hashicorp.com and later retrofitted to talk to the official
Terraform Registry. It also assumes a flat namespace of providers.
We're starting a new one here, copying and adapting code from the old one
as necessary, so that we can build out this new API while retaining all
of the existing functionality and then cut over to this new implementation
in a later step.
Here we're creating a foundational component for the new installer, which
is a mechanism to query for the available versions and download locations
of a particular provider.
Subsequent commits in this package will introduce other Source
implementations for installing from network and filesystem mirrors.
* deps: bump terraform-config-inspect library
* configs: parse `version` in new required_providers block
With the latest version of `terraform-config-inspect`, the
required_providers attribute can now be a string or an object with
attributes "source" and "version". This change allows parsing the
version constraint from the new object while ignoring any given source attribute.
In an earlier change we switched to defining our own sets of detectors,
getters, etc for go-getter in order to insulate us from upstream changes
to those sets that might otherwise change the user-visible behavior of
Terraform's module installer.
However, we apparently neglected to actually refer to our local set of
detectors, and continued to refer to the upstream set. Here we catch up
with the latest detectors from upstream (taken from the version of
go-getter we currently have vendored) and start using that fixed set.
Currently we are maintaining these custom go-getter sets in two places
due to the configload vs. initwd distinction. That was already true for
goGetterGetters and goGetterDecompressors, and so I've preserved that for
now just to keep this change relatively simple; in a later change it would
be nice to factor these "get with go getter" functions out into a shared
location which we can call from both configload and initwd.
Make the backend acceptance tests fail faster.
The acceptance tests for etcdv3, oss and manta were not validating
required env variables, choosing to assume that anyone running the
acceptance tests had already configured the credentials.
It was not always clear if this was a bug in the tests or the provider,
so I opted to make the tests fail faster when required attributes were
unset (or "").
Add versioned tfplugin proto files to the docs directory, for easier
reference. The latest version starts as a symlink to the current
file used for generating the tfplugin package in ./internal/tfplugin5.
When changing the protocol version, the old file must be copied to
./docs/plugin-protocol/, and a new symlink created for the latest
version.
Private data was previously created during Plan, and sent back to the
provider during Apply. This data also needs to be persisted across
Read calls, but rather than rely on core for that we can send the data
to the provider during Read to allow for more flexibility.
This was already working, but since that codepath is separate from the
go-getter install codepath it's helpful to have a separate test for it,
in addition to the existing one for go-getter modules.
The "err" variable in the MaybeRelativePathErr condition was masking the
original err with nil in the "else" case of this branch, causing the
error message to be incomplete.
While here, also tweaked the wording to say "Could not download" rather
than "Error attempting to download", both to say the same thing in fewer
words and because the summary line above already starts with "Error:"
when we print out this message, so it looks weird to have both lines
start with the same word.
* internal/initwd: follow local module path symlink
Fixes #21060
While a previous commit fixed a problem when the local module directory
contained a symlink, it did not account for the possibility that the
entire directory was a symlink.
In studying existing providers we've found a pattern we weren't previously
accounting for: using a nested block type to represent a group of
arguments that relate to a particular feature that is always enabled but
where it improves configuration readability to group all of its settings
together in a nested block.
The existing NestingSingle was not a good fit for this because it is
designed under the assumption that the presence or absence of the block
has some significance in enabling or disabling the relevant feature, and
so for these always-active cases we'd generate a misleading plan where
the settings for the feature appear totally absent, rather than showing
the default values that will be selected.
NestingGroup is, therefore, a slight variation of NestingSingle where
presence vs. absence of the block is not distinguishable (it's never null)
and instead its contents are treated as unset when the block is absent.
This then in turn causes any default values associated with the nested
arguments to be honored and displayed in the plan whenever the block is
not explicitly configured.
The current SDK cannot activate this mode, but that's okay because its
"legacy type system" opt-out flag allows it to force a block to be
processed in this way anyway. We're adding this now so that we can
introduce the feature in a future SDK without causing a breaking change
to the protocol, since the set of possible block nesting modes is not
extensible.
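As a hedged sketch, a provider schema might one day declare this mode like the following, assuming the configschema and cty packages; the "timeouts" block and its attribute are illustrative:

```go
schema := &configschema.Block{
	BlockTypes: map[string]*configschema.NestedBlock{
		"timeouts": {
			// Never null: when the block is absent, its contents are
			// treated as unset and any defaults are honored.
			Nesting: configschema.NestingGroup,
			Block: configschema.Block{
				Attributes: map[string]*configschema.Attribute{
					"create": {Type: cty.String, Optional: true},
				},
			},
		},
	},
}
```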
* configs/configupgrade: detect possible relative module sources
If a module source appears to be a relative local path but does not have
a preceding ./, print a #TODO message for the user.
* internal/initwd: limit go-getter detectors to those supported by terraform
* internal/initwd: move isMaybeRelativeLocalPath check into getWithGoGetter
To avoid making two calls to getter.Detect, which potentially makes
non-trivial API calls, the "isMaybeRelativeLocalPath" check was moved to
a later step and a custom error type was added so user-friendly
diagnostics could be displayed in the event that a possible relative local
path was detected.
Identify module sources that look like relative paths ("child" instead
of "./child", for example) and surface a helpful error.
Previously, such module sources would be passed to go-getter, which
would fail because it was expecting an absolute, or properly relative,
path. This commit moves the check for improper relative paths sooner so
a user-friendly error can be displayed.
configs/configload and internal/initwd both had a copyDir function that
would fail if the source directory contained a symlinked directory,
because os.FileMode.IsDir() returns false for symlinks.
This PR adds a check for a symlink and copies that symlink into the
target directory. It handles symlinks for both files and directories
(with included tests).
Fixes #20539
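A minimal sketch of the symlink case as it might look inside copyDir's walk callback; path, info, and destPath come from the surrounding walk and are illustrative here:

```go
if info.Mode()&os.ModeSymlink != 0 {
	target, err := os.Readlink(path)
	if err != nil {
		return err
	}
	// Recreate the link at the destination instead of descending into it;
	// this handles symlinks to both files and directories.
	return os.Symlink(target, destPath)
}
```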
Due to the imprecision of our shimming from the legacy SDK type system to
the new Terraform Core type system, the legacy SDK produces a number of
inconsistencies that produce only minor quirky behavior or broken
edge-cases. To retain compatibility with those existing weird behaviors,
the legacy SDK opts out of our safety checks.
The intent here is to allow existing providers to continue to do their
previous unsafe behaviors for now, accepting that this will allow certain
quirky bugs from previous releases to persist, and then gradually migrate
away from the legacy SDK and remove this opt-out on a per-resource basis
over time.
As with the apply-time safety check opt-out, this is reserved only for
the legacy SDK and must not be used in any new SDK implementations. We
still include any inconsistencies as warnings in the logs as an aid to
anyone debugging weird behavior, so that they can see situations where
blame may be misplaced in the user-visible error messages.
The shim layer for the legacy SDK type system is not precise enough to
guarantee it will produce identical results between plan and apply. In
particular, values that are null during plan will often become zero-valued
during apply.
To avoid breaking those existing providers while still allowing us to
introduce this check in the future, we'll introduce a rather-hacky new
flag that allows the legacy SDK to signal that it is the legacy SDK and
thus disable the check.
Once we start phasing out the legacy SDK in favor of one that natively
understands our new type system, we can stop setting this flag and thus
get the additional safety of this check without breaking any
previously-released providers.
No other SDK is permitted to set this flag, and we will remove it if we
ever introduce protocol version 6 in future, assuming that any provider
supporting that protocol will always produce consistent results.
There are a few constructs from 0.11 and prior that cause 0.12 parsing to
fail altogether, which previously created a chicken/egg problem because
we need to install the providers in order to run "terraform 0.12upgrade"
and thus fix the problem.
This changes "terraform init" to use the new "early configuration" loader
for module and provider installation. This is built on the more permissive
parser in the terraform-config-inspect package, and so it allows us to
read out the top-level blocks from the configuration while accepting
legacy HCL syntax.
In the long run this will let us do version compatibility detection before
attempting a "real" config load, giving us better error messages for any
future syntax additions, but in the short term the key thing is that it
allows us to install the dependencies even if the configuration isn't
fully valid.
Because backend init still requires full configuration, this introduces a
new mode of terraform init where it detects heuristically if it seems like
we need to do a configuration upgrade and does a partial init if so,
before finally directing the user to run "terraform 0.12upgrade" before
running any other commands.
The heuristic here is based on two assumptions:
- If the "early" loader finds no errors but the normal loader does, the
configuration is likely to be valid for Terraform 0.11 but not 0.12.
- If there's already a version constraint in the configuration that
excludes Terraform versions prior to v0.12 then the configuration is
probably _already_ upgraded and so it's just a normal syntax error,
even if the early loader didn't detect it.
Once the upgrade process is removed in 0.13.0 (users will be required to
go stepwise 0.11 -> 0.12 -> 0.13 to upgrade after that), some of this can
be simplified to remove that special mode, but the idea of doing the
dependency version checks against the liberal parser will remain valuable
to increase our chances of reporting version-based incompatibilities
rather than syntax errors as we add new features in future.
This is an adaptation of the installation code from configs/configload,
now using the "earlyconfig" package instead of the "configs" package.
Module installation is an initialization-only process, with all other
commands assuming an already-initialized directory. Having it here can
therefore simplify the API of configs/configload, which can now focus only
on the problem of loading modules that have already been installed.
The old installer code in configs/configload is still in place for now
because the caller in "terraform init" isn't yet updated to use this.
"terraform init" is quite a complex beast in relation to other commands
since almost everything it does is unique to it and thus not factored out
into other packages.
To get some of that sprawl out of the "command" package, this new internal
package will give us somewhere to put this init functionality that is
also useful for test code that needs to mimic the initialization behavior
against fixture directories.
This is an alternative to the full config loader in the "configs" package
that is good for "early" use-cases like "terraform init", where we want
to find out what our dependencies are without getting tripped up on any
other errors that might be present.
The main significant change here is that the package name for the proto
definition is "tfplugin5", which is important because this name is part
of the wire protocol for references to types defined in our package.
Along with that, we also move the generated package into "internal" to
make it explicit that importing the generated Go package from elsewhere is
not the right approach for externally-implemented SDKs, which should
instead vendor the proto definition they are using and generate their
own stubs to ensure that the wire protocol is the only hard dependency
between Terraform Core and plugins.
After this is merged, any provider binaries built against our
helper/schema package will need to be rebuilt so that they use the new
"tfplugin5" package name instead of "proto".
In a future commit we will include more elaborate and organized
documentation on how an external codebase might make use of our RPC
interface definition to implement an SDK, but the primary concern here
is to ensure we have the right wire package name before release.