When returning from the context method, a deferred function call checks
the `diags` variable for error diagnostics and unlocks the state if any
exist. This means that we need to be extra careful to mutate that
variable when returning errors, rather than returning a different set of
diags in the same return position.
Previously this would result in an invalid plan file causing a lock to
become stuck.
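A self-contained sketch of the hazard (the types and helpers here are
stand-ins for the real backend code, not its actual API):

```go
package example

// diagnostics is a stand-in for tfdiags.Diagnostics in this sketch.
type diagnostics []error

func (d diagnostics) HasErrors() bool { return len(d) > 0 }

type backend struct{}

func (b *backend) unlock() { /* release the state lock */ }

// context illustrates the pattern: the deferred check reads the named
// return value, so error paths must assign into diags rather than
// returning a different diagnostics value in the same position.
func (b *backend) context() (diags diagnostics) {
	defer func() {
		if diags.HasErrors() {
			b.unlock()
		}
	}()

	if err := doWork(); err != nil {
		// Correct: assign into the named diags. Returning a freshly
		// built value here instead would leave the deferred check
		// looking at an empty diags, and the lock stuck.
		diags = append(diags, err)
		return diags
	}
	return diags
}

func doWork() error { return nil }
```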
Most legacy provider resources do not implement any import functionality
other than returning an empty object with the given ID, relying on core
to later read that resource and obtain the complete state. Because of
this, we need to check the response from ReadResource for a null value,
and use that as an indication that the import ID was invalid.
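A rough sketch of that check using the providers package types (the
request fields and error wording here are illustrative, not the exact
core code):

```go
package example

import (
	"fmt"

	"github.com/hashicorp/terraform/providers"
	"github.com/zclconf/go-cty/cty"
)

// verifyImport reads the freshly imported object back from the provider
// and treats a null response as an invalid import ID.
func verifyImport(p providers.Interface, typeName, id string, imported cty.Value) error {
	resp := p.ReadResource(providers.ReadResourceRequest{
		TypeName:   typeName,
		PriorState: imported, // typically just an empty object carrying the ID
	})
	if resp.Diagnostics.HasErrors() {
		return resp.Diagnostics.Err()
	}
	if resp.NewState.IsNull() {
		// No remote object matched, so the given ID was invalid.
		return fmt.Errorf("import ID %q did not match any existing object", id)
	}
	return nil
}
```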
This was dead code, and there is no clear way to retrieve this
information, as we currently only derive the drift information as part
of the rendering process.
The schemas for the provider and the resources didn't match, so the
changes were not going to be rendered at all.
Add a test which contains a deposed resource.
* getproviders ParsePlatform: add check for invalid platform strings with too many parts
The existing logic would not catch things like a platform string containing multiple underscores. I've added an explicit check for exactly 2 parts and some basic tests to prove it.
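A minimal sketch of the stricter parsing (the real function's error
wording and the Platform type's details may differ):

```go
package getproviders

import (
	"fmt"
	"strings"
)

// Platform mirrors the OS/architecture pair used by getproviders.
type Platform struct {
	OS, Arch string
}

// ParsePlatform requires exactly two non-empty parts, so strings like
// "linux_amd64_v2" or "linux_" are rejected instead of silently accepted.
func ParsePlatform(str string) (Platform, error) {
	parts := strings.Split(str, "_")
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return Platform{}, fmt.Errorf("%q is not a valid platform specification: must be OS_ARCH, like linux_amd64", str)
	}
	return Platform{OS: parts[0], Arch: parts[1]}, nil
}
```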
* command/providers-lock: add tests
This commit adds some simple tests for the providers lock command. While adding this test I noticed that there was a mis-copied error message, so I replaced that with a more specific message. I also added .terraform.lock.hcl to our gitignore for hopefully obvious reasons.
getproviders.ParsePlatform: use parts in place of slice range, since it's available
* command: Providers mirror tests
The providers mirror command is already well tested in e2e tests, so this includes only the most basic test case.
Do not convert provisioner diagnostics to errors so that users can get
context from provisioner failures.
Return diagnostics from the builtin provisioners that can be annotated
with configuration context and instance addresses.
When an attribute value changes in sensitivity, we previously rendered
this in the diff with a `~` update action and a note about the
consequence of the sensitivity change. Since we also suppress the
attribute value, this made it impossible to know if the underlying value
was changing, too, which has significant consequences on the meaning of
the plan.
This commit adds an equality check of the old/new underlying values. If
these are unchanged, we add a note to the sensitivity warning to clarify
that only sensitivity is changing.
This is part of a general effort to move all of Terraform's non-library
package surface under internal in order to reinforce that these are for
internal use within Terraform only.
If you were previously importing packages under this prefix into an
external codebase, you could pin to an earlier release tag as an interim
solution until you've made a plan to achieve the same functionality some
other way.
Once a plugin process is started, go-plugin will redirect the stdout and
stderr streams through a gRPC service and provide those streams to the
client. This is rarely used, as it is prone to failing with races
because those same file descriptors are needed for the initial handshake
and logging setup, but data may be accidentally sent to them
nonetheless.
The usual culprits are stray `fmt.Print` usage where logging was
intended, or the configuration of a logger after the os.Stderr file
descriptor was replaced by go-plugin. These situations are very hard for
provider developers to debug since the data is discarded entirely.
While there may be improvements to be made in the go-plugin package to
configure this behavior, in the meantime we can add a simple monitoring
io.Writer to the streams which will surface the data as warnings in the
logs instead of writing it to `io.Discard`.
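A minimal sketch of such a monitoring writer (the type and field names
here are illustrative, not the real implementation):

```go
package logging

import (
	"io"
	"log"
)

// streamLogger surfaces stray plugin output as log warnings rather
// than discarding it.
type streamLogger struct {
	name string // which stream this wraps, e.g. "stdout" or "stderr"
}

var _ io.Writer = streamLogger{}

func (s streamLogger) Write(p []byte) (int, error) {
	log.Printf("[WARN] unexpected data on plugin %s: %q", s.name, p)
	return len(p), nil
}
```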
This is not currently a supported interface, but we plan to release
tool(s) that consume parts of it that are more dependable later,
separately from Terraform CLI itself.
* Add a helper suggestion for failed registry errors
When someone hits a registry error on init, remind them that they
should have required_providers in every module.
* Give a suggestion for a provider based on requirements
Suggest another provider on a registry error, drawn from the list of
requirements we have on init. This skips the legacy lookup process if a
similar provider already exists in the requirements.
Fixes #27506
Add a new flag `-lockfile=readonly` to `terraform init`.
It is useful to be able to suppress dependency lockfile changes
explicitly.
The type of the `-lockfile` flag is string rather than bool, leaving
room for future extensions to other behavior variants.
The readonly mode suppresses lockfile changes, but should verify
checksums against the information already recorded. It should conflict
with the `-upgrade` flag.
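A small sketch of that conflict check (hypothetical function and
message; the real init command's flag handling differs):

```go
package example

import "fmt"

// checkLockfileMode validates the string-typed -lockfile flag: the
// string form leaves room for future modes, and readonly conflicts
// with -upgrade, which exists specifically to change the lock file.
func checkLockfileMode(mode string, upgrade bool) error {
	switch mode {
	case "":
		return nil
	case "readonly":
		if upgrade {
			return fmt.Errorf("the -lockfile=readonly and -upgrade options are incompatible")
		}
		return nil
	default:
		return fmt.Errorf("unsupported -lockfile mode %q", mode)
	}
}
```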
Note: In the original use-case described in #27506, I would like to
suppress adding zh hashes, but the test code here suppresses adding h1
hashes because that's easier for testing.
Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>
* providers.Interface: rename ValidateDataSourceConfig to
ValidateDataResourceConfig
This PR came about after renaming ValidateResourceTypeConfig to
ValidateResourceConfig: I now understand that we'd called it the former
instead of the latter to indicate that the function wasn't necessarily
operating on a resource that actually exists. A possibly-more-accurate
renaming of both functions might then be ValidateManagedResourceConfig
and ValidateDataResourceConfig.
The next commit will update the protocol (v6 only) as well; these are in
separate commits for reviewers and will get squashed together before
merging.
* extend renaming to protov6
This is just a prototype to gather some feedback in our ongoing research
on integration testing of Terraform modules. The hope is that by having a
command integrated into Terraform itself it'll be easier for interested
module authors to give it a try, and also easier for us to iterate quickly
based on feedback without having to coordinate across multiple codebases.
Everything about this is subject to change even in future patch releases.
Since it's a CLI command rather than a configuration language feature it's
not using the language experiments mechanism, but generates a warning
similar to the one language experiments generate in order to be clear that
backward compatibility is not guaranteed.
As part of ongoing research into Terraform testing we'd like to use an
experimental feature to validate our current understanding that expressing
tests as part of the Terraform language, as opposed to in some other
language run alongside, is a good and viable way to write practical
module integration tests.
This initial experimental incarnation of that idea is implemented as a
provider, just because that's an easier extension point for research
purposes than a first-class language feature would be. Whether this would
ultimately emerge as a provider similar to this or as custom language
constructs will be a matter for future research, if this first
experiment confirms that tests written in the Terraform language are the
best direction to take.
The previous incarnation of this experiment was an externally-developed
provider apparentlymart/testing, listed on the Terraform Registry. That
helped with showing that there are some useful tests that we can write
in the Terraform language, but integrating such a provider into Terraform
will allow us to make use of it in the also-experimental "terraform test"
command, which will follow in subsequent commits, to see how this might
fit into a development workflow.
* Add support for plugin protocol v6
This PR turns on support for plugin protocol v6. A provider can
advertise itself as supporting protocol version 6 and terraform will
use the correct client.
Todo:
The "unmanaged" providers functionality does not support protocol
version, so at the moment terraform will continue to assume that
"unmanaged" providers are on protocol v5. This will require some
upstream work on go-plugin (I believe).
I would like to convert the builtin providers to use protocol v6 in a
future PR; however it is not necessary until we remove protocol v6.
* add e2e test for using both plugin protocol versions
- copied grpcwrap and made a version that returns a protocol v6 provider
- copied the test provider, provider-simple, and made a version that's
using protocol v6 with the above func
- added an e2etest
Remove the README that had old user-facing information, replacing
it with a doc.go that describes the package and points to the
plugin SDK for external consumers.
* providers.Interface: huge renamification
This commit renames a handful of functions in the providers.Interface to
match changes made in protocol v6. The following commit implements this
change across the rest of the codebase; I put this in a separate commit
for ease of reviewing and will squash these together when merging.
One noteworthy detail: protocol v6 removes the config from the
ValidateProviderConfigResponse, since it's never been used. I chose to
leave that in place in the interface until we deprecate support for
protocol v5 entirely.
Note that none of these changes impact current providers using protocol
v5; the protocol is unchanged. Only the translation layer between the
proto and terraform has changed.
It's pretty common to want to apply the various fmt.Fprint... functions
to our two output streams, and so to make that much less noisy at the
callsite here we have a small number of very thin wrappers around the
underlying fmt package functionality.
Although we're aiming to not have too much abstraction in this "terminal"
package, this seems justified in that it is only a very thin wrapper
around functionality that most Go programmers are already familiar with,
and so the risk of this causing any surprises is low and the improvement
to readability of callers seems worth it.
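A sketch of what such wrappers look like (a simplified stand-in for the
real Streams type, which wraps richer stream objects):

```go
package terminal

import (
	"fmt"
	"io"
)

// Streams here is a simplified stand-in; only the writer fields matter
// for these wrappers.
type Streams struct {
	Stdout io.Writer
	Stderr io.Writer
}

// The Print* methods delegate to the corresponding fmt.Fprint* function
// on stdout; the Eprint* variants do the same for stderr.
func (s *Streams) Print(a ...interface{}) (int, error) {
	return fmt.Fprint(s.Stdout, a...)
}

func (s *Streams) Printf(format string, a ...interface{}) (int, error) {
	return fmt.Fprintf(s.Stdout, format, a...)
}

func (s *Streams) Eprintln(a ...interface{}) (int, error) {
	return fmt.Fprintln(s.Stderr, a...)
}
```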
This is to allow convenient testing of functions that are designed to work
directly with *terminal.Streams or the individual stream objects inside.
Because the InputStream and OutputStream APIs expose directly an *os.File,
this does some extra work to set up OS-level pipes so we can capture the
output into local buffers to make test assertions against. The idea here
is to keep the tricky stuff we need for testing confined to the test
codepaths, so that the "real" codepaths don't end up needing to work
around abstractions that are otherwise unnecessary.
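A self-contained sketch of the pipe-based capture idea (helper name is
hypothetical; the real test helpers are richer):

```go
package terminaltest

import (
	"bytes"
	"io"
	"os"
)

// captureStream returns an *os.File that can stand in for an output
// stream, plus a done func that closes the write side and returns
// everything written to it.
func captureStream() (f *os.File, done func() (string, error), err error) {
	pr, pw, err := os.Pipe()
	if err != nil {
		return nil, nil, err
	}
	var buf bytes.Buffer
	copied := make(chan error, 1)
	go func() {
		// Drain the read side concurrently so writers never block.
		_, err := io.Copy(&buf, pr)
		copied <- err
	}()
	done = func() (string, error) {
		pw.Close()
		err := <-copied
		pr.Close()
		return buf.String(), err
	}
	return pw, done, nil
}
```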
This is the first commit for plugin protocol v6. This is currently
unused (dead) code; future commits will add the necessary conversion
packages, extend configschema, and modify the providers.Interface.
The new plugin protocol includes the following changes:
- A new field has been added to Attribute: NestedType. This will be the
key new feature in plugin protocol v6
- Several messages were renamed for consistency with the verb-noun
pattern seen in _most_ messages.
- The prepared_config has been removed from PrepareProviderConfig
(renamed ValidateProviderConfig), as it has never been used.
- The provisioner service has been removed entirely. This has no impact
on built-in provisioners. 3rd party provisioners are not supported by
the SDK and are not included in this protocol at all.
This is a helper package that creates a very thin abstraction over
terminal setup, with the main goal being to deal with all of the extra
setup we need to do in order to get a UTF-8-supporting virtual terminal
on a Windows system.
Previously we were expecting that the *hcl.File would always be non-nil,
even in error cases. That isn't always true, so now we'll be more robust
about it and explicitly return an empty locks object in that case, along
with the error diagnostics.
In particular this avoids a panic in a strange situation where the user
created a directory where the lock file would normally go. There's no
meaning to such a directory, so it would always be a mistake and so now
we'll return an error message about it, rather than panicking as before.
The error message for the situation where the lock file is a directory is
currently not very specific, but since it's HCL that's responsible for
generating that message we can't really fix it at this layer. Perhaps in future
we can change HCL to have a specialized error message for that particular
error situation, but for the sake of this commit the goal is only to
stop the panic and return a normal error message.
If a user forgets to specify the source address for a provider, Terraform
will assume they meant a provider in the registry.terraform.io/hashicorp/
namespace. If that ultimately doesn't exist, we'll now try to see if
there's some other provider source address recorded in the registry's
legacy provider lookup table, and suggest it if so.
The error message here is a terse one addressed primarily to folks who are
already somewhat familiar with provider source addresses and how to
specify them. Terraform v0.13 had a more elaborate version of this error
message which directed the user to try the v0.13 automatic upgrade tool,
but we no longer have that available in v0.14 and later so the user must
make the fix themselves.
The temporary directory on some systems (most notably macOS) contains
symlinks, which would not be recorded by the installer. In order to make
these paths comparable in the tests we need to resolve the symlinks in
the paths before giving them to the installer.
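A sketch of that test setup (hypothetical helper name):

```go
package example

import (
	"io/ioutil"
	"os"
	"path/filepath"
	"testing"
)

// tempDirForTest resolves symlinks in the temp path so it compares
// equal to the paths the installer records.
func tempDirForTest(t *testing.T) string {
	t.Helper()
	dir, err := ioutil.TempDir("", "terraform-test")
	if err != nil {
		t.Fatal(err)
	}
	t.Cleanup(func() { os.RemoveAll(dir) })
	// On macOS /tmp and /var are symlinks, so without this the
	// installer would record the resolved /private/... path instead
	// of the one we passed in.
	resolved, err := filepath.EvalSymlinks(dir)
	if err != nil {
		t.Fatal(err)
	}
	return resolved
}
```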
When logging is turned on, panicwrap will still see provider crashes and
falsely report them as core crashes, hiding the formatted provider
error. We can trick panicwrap by slightly obfuscating the error line.
When rendering a set of version constraints to a string, we normalize
partially-constrained versions. This means converting a version
like 2.68.* to 2.68.0.
Prior to this commit, this normalization was done after deduplication.
This could result in a version constraints string with duplicate
entries, if multiple partially-constrained versions are equivalent. This
commit fixes this by normalizing before deduplicating and sorting.
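A sketch of the fixed ordering of operations (the normalization helper
here is illustrative only; the real code works on parsed constraints,
not strings):

```go
package example

import (
	"sort"
	"strings"
)

// normalizeConstraint is illustrative only: zero-fill a trailing
// wildcard so that "2.68.*" and "2.68.0" render the same way.
func normalizeConstraint(c string) string {
	return strings.ReplaceAll(c, ".*", ".0")
}

// constraintsString normalizes each term before deduplicating, so that
// equivalent partially-constrained versions collapse to one entry.
func constraintsString(constraints []string) string {
	seen := make(map[string]struct{})
	var parts []string
	for _, c := range constraints {
		n := normalizeConstraint(c)
		if _, ok := seen[n]; ok {
			continue
		}
		seen[n] = struct{}{}
		parts = append(parts, n)
	}
	sort.Strings(parts)
	return strings.Join(parts, ", ")
}
```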
Previously we were only verifying locked hashes for local archive zip
files, but if we have non-ziphash hashes available then we can and should
also verify that a local directory matches at least one of them.
This does mean that folks using filesystem mirrors but yet also running
Terraform across multiple platforms will need to take some extra care to
ensure the hashes pass on all relevant platforms, which could mean using
"terraform providers lock" to pre-seed their lock files with hashes across
all platforms, or could mean using the "packed" directory layout for the
filesystem mirror so that Terraform will end up in the install-from-archive
codepath instead of this install-from-directory codepath, and can thus
verify ziphash too.
(There's no additional documentation about the above here because there's
already general information about this in the lock file documentation
due to some similar -- though not identical -- situations with network
mirrors.)
We previously had some tests for some happy paths and a few specific
failures into an empty directory with no existing locks, but we didn't
have tests for the installer respecting existing lock file entries.
This is a start on a more exhaustive set of tests for the installer,
aiming to visit as many of the possible codepaths as we can reasonably
test using this mocking strategy. (Some other codepaths require different
underlying source implementations, etc, so we'll have to visit those in
other tests separately.)
This won't be a typical usage pattern for normal code, but will be useful
for tests that need to work with locks as input so that they don't need to
write out a temporary file on disk just to read it back in immediately.
An earlier commit made this remove duplicates, which set the precedent
that this function is trying to canonically represent the _meaning_ of
the version constraints rather than exactly how they were expressed in
the configuration.
Continuing in that vein, now we'll also apply a consistent (though perhaps
often rather arbitrary) ordering to the terms, so that it doesn't change
due to irrelevant details like declarations being written in a different
order in the configuration.
The ordering here is intended to be reasonably intuitive for simple cases,
but constraint strings with many different constraints are hard to
interpret no matter how we order them, so the main goal is consistency:
those watching how the constraints change over time (e.g. in logs of
Terraform output, or in the dependency lock file) will see fewer noisy
changes that don't actually mean anything.
Create a logger that will record any apparent crash output for later
processing.
If the cli command returns with a non-zero exit status, check for any
recorded crashes and add those to the output.
Now that hclog can independently set levels on related loggers, we can
separate the log levels for different subsystems in terraform.
This adds the new environment variables, `TF_LOG_CORE` and
`TF_LOG_PROVIDER`, which each take the same set of log level arguments,
and only applies to logs from that subsystem. This means that setting
`TF_LOG_CORE=level` will not show logs from providers, and
`TF_LOG_PROVIDER=level` will not show logs from core. The behavior of
`TF_LOG` alone does not change.
While it is not necessarily needed since the default is to disable logs,
there is also a new level argument of `off`, which reflects the
associated level in hclog.
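A sketch of the level lookup (the TF_LOG fallback shown here is an
assumption about the precedence, not a statement of the exact code):

```go
package logging

import (
	"os"

	hclog "github.com/hashicorp/go-hclog"
)

// levelForSubsystem resolves a subsystem-specific variable such as
// TF_LOG_CORE or TF_LOG_PROVIDER, falling back to TF_LOG, then off.
func levelForSubsystem(envVar string) hclog.Level {
	v := os.Getenv(envVar)
	if v == "" {
		v = os.Getenv("TF_LOG")
	}
	if v == "" {
		return hclog.Off // logging is disabled by default
	}
	return hclog.LevelFromString(v) // accepts "trace" ... "error", "off"
}
```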
A set of version constraints can contain duplicates. This can happen if
multiple identical constraints are specified throughout a configuration.
When rendering the set, it is confusing to display redundant
constraints. This commit changes the string renderer to only show the
first instance of a given constraint, and adds unit tests for this
function to cover this change.
This also fixes a bug with the locks file generation: previously, a
configuration with redundant constraints would result in this error on
second init:
    Error: Invalid provider version constraints
    on .terraform.lock.hcl line 6:
    (source code not available)
    The recorded version constraints for provider
    registry.terraform.io/hashicorp/random must be written in normalized form:
    "3.0.0".
Use a separate log sink to always capture trace logs for the panicwrap
handler to write out in a crash log.
This requires creating a log file in the outer process and passing that
path to the child process to log to.
Previously this codepath was generating a confusing message in the absence
of any symlinks, because filepath.EvalSymlinks returns a successful result
if the target isn't a symlink.
Now we'll emit the log line only if filepath.EvalSymlinks returns a
result that's different in a way that isn't purely syntactic (which
filepath.Clean would "fix").
The new message is a little more generic because technically we've not
actually ensured that a difference here was caused by a symlink and so
we shouldn't over-promise and generate something potentially misleading.
ioutil.TempFile has a special case where an empty string for its dir
argument is interpreted as a request to automatically look up the system
temporary directory, which is commonly /tmp.
We don't want that behavior here because we're specifically trying to
create the temporary file in the same directory as the file we're hoping
to replace. If the file gets created in /tmp then it might be on a
different device and thus the later atomic rename won't work.
Instead, we'll add our own special case to explicitly use "." when the
given filename is in the current working directory. That overrides the
special automatic behavior of ioutil.TempFile and thus forces the
behavior we need.
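A sketch of that special case (hypothetical helper name):

```go
package example

import (
	"io/ioutil"
	"os"
	"path/filepath"
)

// tempFileAlongside creates the temporary file in the same directory as
// the target, overriding ioutil.TempFile's system-temp-dir special case.
func tempFileAlongside(filename string) (*os.File, error) {
	dir, file := filepath.Split(filename)
	if dir == "" {
		// An empty dir would send the temp file to /tmp, which may be
		// on a different device; "." keeps it beside the target so the
		// later atomic rename can work.
		dir = "."
	}
	return ioutil.TempFile(dir, file)
}
```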
This hadn't previously mattered for earlier callers of this code because
they were creating files in subdirectories, but this codepath was failing
for the dependency lock file due to it always being created directly
in the current working directory.
Unfortunately since this is a picky implementation detail I couldn't find
a good way to write a unit test for it without considerable refactoring.
Instead, I verified manually that the temporary filename wasn't in /tmp on
my Linux system, and hope that the comment inline will explain this
situation well enough to avoid an accidental regression in future
maintenance.
If a configuration requires a partial provider version (with some parts
unspecified), Terraform considers this as a constrained-to-zero version.
For example, a version constraint of 1.2 will result in an attempt to
install version 1.2.0, even if 1.2.1 is available.
When writing the dependency locks file, we previously would write 1.2.*,
as this is the in-memory representation of 1.2. This would then cause an
error on re-reading the locks file, as this is not a valid constraint
format.
Instead, we now explicitly convert the constraint to its zero-filled
representation before writing the locks file. This ensures that it
correctly round-trips.
Because this change is made in getproviders.VersionConstraintsString, it
also affects the output of the providers sub-command.
Use a single log writer instance for all std library logging.
Set up the std log writer in the logging package, and remove boilerplate
from test packages.
In this case, "atomic" means that there will be no situation where the
file contains only part of the newContent data, and therefore other
software monitoring the file for changes (using a mechanism like inotify)
won't encounter a truncated file.
It does _not_ mean that there can't be existing filehandles open against
the old version of the file. On Windows systems the write will fail in
that case, but on Unix systems the write will typically succeed but leave
the existing filehandles still pointing at the old version of the file.
They'll need to reopen the file in order to see the new content.
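A sketch of the write-then-rename pattern this describes (the real
implementation also preserves permissions and retries around Windows
file locks, which this omits):

```go
package example

import (
	"io/ioutil"
	"os"
	"path/filepath"
)

// replaceFileAtomic writes newContent to a sibling temporary file and
// renames it over filename, so observers never see partial content.
func replaceFileAtomic(filename string, newContent []byte) error {
	dir, file := filepath.Split(filename)
	if dir == "" {
		dir = "." // keep the temp file on the same device as the target
	}
	tmp, err := ioutil.TempFile(dir, file)
	if err != nil {
		return err
	}
	tmpName := tmp.Name()
	if _, err := tmp.Write(newContent); err != nil {
		tmp.Close()
		os.Remove(tmpName)
		return err
	}
	if err := tmp.Close(); err != nil {
		os.Remove(tmpName)
		return err
	}
	// Atomic on Unix filesystems when source and target share a device,
	// which the same-directory temp file ensures.
	return os.Rename(tmpName, filename)
}
```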
This originated in the cliconfig code to write out credentials files. The
Windows implementation of this in particular was quite onerous to get
right because it needs a very specific sequence of operations to avoid
running into exclusive file locks, and so by factoring this out with
only cosmetic modification we can avoid repeating all of that engineering
effort for other atomic file writing use-cases.
This builds on an experimental feature in the underlying cty library which
allows marking specific attributes of an object type constraint as
optional, which in turn modifies how the cty conversion package handles
missing attributes in a source value: it will silently substitute a null
value of the appropriate type rather than returning an error.
In order to implement the experiment this commit temporarily forks the
HCL typeexpr extension package into a local internal/typeexpr package,
where I've extended the type constraint syntax to allow annotating object
type attributes as being optional using the HCL function call syntax.
If the experiment is successful -- both at the Terraform layer and in
the underlying cty library -- we'll likely send these modifications to
upstream HCL so that other HCL-based languages can potentially benefit
from this new capability.
Because it's experimental, the optional attribute modifier is allowed only
with an explicit opt-in to the module_variable_optional_attrs experiment.
Previously we were just letting hclwrite do its default formatting
behavior here. The current behavior there isn't ideal anyway -- it puts
big data structures all on one line -- but even ignoring that our goal
for this file format is to keep things in a highly-normalized shape so
that diffs against the file are clear and easy to read.
With that in mind, here we directly control how we write that value into
the file, which means that later changes to hclwrite's list/set
presentation won't affect it, regardless of what form they take.
This probably isn't the best UI we could do here, but it's a placeholder
for now just to avoid making it seem like we're ignoring the lock file
and checking for new versions anyway.
This changes the approach used by the provider installer to remember
between runs which selections it has previously made, using the lock file
format implemented in internal/depsfile.
This means that version constraints in the configuration are considered
only for providers we've not seen before or when -upgrade mode is active.
These are helper functions to give the installation UI some hints about
whether the lock file has changed so that it can in turn give the user
advice about it. The UI-layer callers of these will follow in a later
commit.
helper/copy CopyDir was used heavily in tests. It differs from
internal/copydir in a few ways, the main one being that it creates the
dst directory while the internal version expected the dst to exist
(there are other differences, which is why I did not just switch tests
to using internal's CopyDir).
I moved the CopyDir func from helper/copy into command_test.go; I could
also have moved it into internal/copy and named it something like
CreateDirAndCopy, so if that seems like a better option please let me
know.
helper/copy/CopyFile was used in a couple of spots so I moved it into
internal, at which point I thought it made more sense to rename the
package copy (instead of copydir).
There's also a `go mod tidy` included.
This new-ish package ended up under "helper" during the 0.12 cycle for
want of some other place to put it, but in retrospect that was an odd
choice because the "helper/" tree is otherwise a bunch of legacy code from
when the SDK lived in this repository.
Here we move it over into the "internal" directory just to distance it
from the guidance of not using "helper/" packages in new projects;
didyoumean is a package we actively use as part of error message hints.
We no longer need to support 0.12-and-earlier-style provider addresses
because users should've upgraded their existing configurations and states
on Terraform 0.13 already.
For now this is only checked in the "init" command, because various test
shims are still relying on the idea of legacy providers in the core layer.
However, rejecting these during init is sufficient grounds to avoid
supporting legacy provider addresses in the new dependency lock file
format, and thus sets the stage for a more severe removal of legacy
provider support in a later commit.
In earlier commits we started to make the installation codepath
context-aware so that it could be canceled in the event of a SIGINT, but
we didn't complete wiring that through the API of the getproviders
package.
Here we make the getproviders.Source interface methods, along with some
other functions that can make network requests, take a context.Context
argument and act appropriately if that context is cancelled.
The main providercache.Installer.EnsureProviderVersions method now also
has some context-awareness so that it can abort its work early if its
context reports any sort of error. That avoids waiting for the process
to wind through all of the remaining iterations of the various loops,
logging each request failure separately, and instead returns just
a single aggregate "canceled" error.
We can then set things up in the "terraform init" and
"terraform providers mirror" commands so that the context will be
cancelled if we get an interrupt signal, allowing provider installation
to abort early while still atomically completing any local-side effects
that may have started.
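A sketch of the early-abort loop (hypothetical function names; the real
installer's control flow is more involved):

```go
package example

import "context"

// ensureProviderVersions checks the context between requests instead of
// letting every pending request fail and be logged separately.
func ensureProviderVersions(ctx context.Context, reqs []string) error {
	for _, req := range reqs {
		if err := ctx.Err(); err != nil {
			return err // a single aggregate "canceled" error
		}
		if err := install(ctx, req); err != nil {
			return err
		}
	}
	return nil
}

func install(ctx context.Context, req string) error {
	// network requests here take ctx and abort when it's cancelled
	return nil
}
```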
As we continue iterating towards saving valid hashes for a package in a
depsfile lock file after installation and verifying them on future
installation, this prepares getproviders for the possibility of having
multiple valid hashes per package.
This will arise in future commits for two reasons:
- We will need to support both the legacy "zip hash" hashing scheme and
the new-style content-based hashing scheme because currently the
registry protocol is only able to produce the legacy scheme, but our
other installation sources prefer the content-based scheme. Therefore
packages will typically have a mixture of hashes of both types.
- Installing from an upstream registry will save the hashes for the
packages across all supported platforms, rather than just the current
platform, and we'll consider all of those valid for future installation
if we see both successful matching of the current platform checksum and
a signature verification for the checksums file as a whole.
This also includes some more preparation for the second case above in that
signatureAuthentication now supports AcceptableHashes and returns all of
the zip-based hashes it can find in the checksums file. This is a bit of
an abstraction leak because previously that authenticator considered its
"document" to just be opaque bytes, but we want to make sure that we can
only end up trusting _all_ of the hashes if we've verified that the
document is signed. Hopefully we'll make this better in a future commit
with some refactoring, but that's deferred for now in order to minimize
disruption to existing codepaths while we work towards a provider locking
MVP.
The logic for what constitutes a valid hash and how different hash schemes
are represented was starting to get sprawled over many different files and
packages.
Consistently with other cases where we've used named types to gather the
definition of a particular string into a single place and have the Go
compiler help us use it properly, this introduces both getproviders.Hash
representing a hash value and getproviders.HashScheme representing the
idea of a particular hash scheme.
Most of this changeset is updating existing uses of primitive strings to
uses of getproviders.Hash. The new type definitions are in
internal/getproviders/hash.go.
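Roughly, the shape of those types (a sketch; see the real definitions in
internal/getproviders/hash.go for the full set of methods):

```go
package getproviders

// Hash is a single hash value in its string form, including a scheme
// prefix; HashScheme identifies the hashing scheme itself.
type Hash string

type HashScheme string

const (
	HashScheme1   HashScheme = "h1:" // content-based scheme
	HashSchemeZip HashScheme = "zh:" // legacy SHA256-of-zip scheme
)

// New wraps a scheme-specific value into a full Hash with this scheme's
// prefix, letting the compiler catch raw-string misuse.
func (hs HashScheme) New(value string) Hash {
	return Hash(string(hs) + value)
}
```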
Although origin registries return specific [filename, hash] pairs, our
various different installation methods can't produce a structured mapping
from platform to hash without breaking changes.
Therefore, as a compromise, we'll continue to do platform-specific checks
against upstream data in the cases where that's possible (installation
from origin registry or network mirror) but we'll treat the lock file as
just a flat set of equally-valid hashes, at least one of which must match
after we've completed whatever checks we've made against the
upstream-provided checksums/signatures.
This includes only the minimal internal/getproviders updates required to
make this compile. A subsequent commit will update that package to
actually support the idea of verifying against multiple hashes.
The "acceptable hashes" for a package is a set of hashes that the upstream
source considers to be good hashes for checking whether future installs
of the same provider version are considered to match this one.
Because the acceptable hashes are a package authentication concern and
they already need to be known (at least in part) to implement the
authenticators, here we add AcceptableHashes as an optional extra method
that an authenticator can implement.
Because these are hashes chosen by the upstream system, the caller must
make its own determination about their trustworthiness. The result of
authentication is likely to be an input to that, for example by
distrusting hashes produced by an authenticator that succeeds but doesn't
report having validated anything.
This is the pre-existing hashing scheme that was initially built for
releases.hashicorp.com and then later reused for the provider registry
protocol, which takes a SHA256 hash of the official distribution .zip file
and formats it as lowercase hex.
This is a non-ideal hash scheme because it works only for
PackageLocalArchive locations, and so we can't verify package directories
on local disk against such hashes. However, the registry protocol is now
a compatibility constraint and so we're going to need to support this
hashing scheme for the foreseeable future.
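A sketch of the computation (hypothetical function name):

```go
package example

import (
	"crypto/sha256"
	"fmt"
	"io"
	"os"
)

// legacyZipHash computes SHA256 over the official distribution .zip
// and renders it as lowercase hex. It cannot apply to an unpacked
// package directory, which is the limitation described above.
func legacyZipHash(zipPath string) (string, error) {
	f, err := os.Open(zipPath)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return fmt.Sprintf("%x", h.Sum(nil)), nil
}
```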
It doesn't make sense for a built-in provider to appear in a lock file
because built-in providers have no version independent of the version of
Terraform they are compiled into.
We also exclude legacy providers here, because they were supported only
as a transitional aid to enable the Terraform 0.13 upgrade process and
are not intended for explicit selection.
The provider installer will, once it's updated to understand dependency
locking, use this concept to decide which subset of its selections to
record in the dependency lock file for reference for future installation
requests.
This is an initial implementation of writing locks back to a file on disk.
This initial implementation is incomplete because it does not write the
changes to the new file atomically. We'll revisit that in a later commit
as we return to polish these codepaths, once we've proven out this
package's design by integrating it with Terraform's provider installer.
This is the initial implementation of the parser/decoder portion of the
new dependency lock file handler. It's currently dead code because the
caller isn't written yet. We'll continue to build out this functionality
here until we have the basic level of both load and save functionality
before introducing this into the provider installer codepath.
The version constraint parser allows "~> 2", but its behavior is identical
to "~> 2.0". Due to a quirk of the constraint parser (caused by the fact
that it supports both Ruby-style and npm/cargo-style constraints), it
ends up returning "~> 2" with the minor version marked as "unconstrained"
rather than as zero, but that means the same thing as zero in this context
anyway and so we'll prefer to stringify as "~> 2.0" so that we can be
clearer about how Terraform is understanding that version constraint.
For reasons that are unclear, these two tests just started failing on
macOS very recently. The failure looked like:
    PackageDir: strings.Join({
    "/",
    + "private/",
    "var/folders/3h/foobar/T/terraform-test-p",
    "rovidercache655312854/registry.terraform.io/hashicorp/null/2.0.0",
    "/windows_amd64",
    },
Speculating that the macOS temporary directory moved into the /private
directory, I added a couple of EvalSymlinks calls and the tests pass
again.
No other unit tests appear to be affected by this at the moment.
If a provider changes namespace in the registry, we can detect this when
running the 0.13upgrade command. As long as there is a version matching
the user's constraints, we now use the provider's new source address.
Otherwise, warn the user that the provider has moved and a version
upgrade is necessary to move to it.
We previously had this just stubbed out because it was a stretch goal for
the v0.13.0 release and it ultimately didn't make it in.
Here we fill out the existing stub -- with a minor change to its interface
so it can access credentials -- with a client implementation that is
compatible with the directory structure produced by the
"terraform providers mirror" subcommand, were the result to be published
on a static file server.
Earlier we introduced a new package hashing mechanism that is compatible
with both packed and unpacked packages, because it's a hash of the
contents of the package rather than of the archive it's delivered in.
However, we were using that only for the local selections file and not
for any remote package authentication yet.
The provider network mirrors protocol includes new-style hashes as a step
towards transitioning over to the new hash format in all cases, so this
new authenticator is here in preparation for verifying the checksums of
packages coming from network mirrors, for mirrors that support them.
For now this leaves us in a kinda confusing situation where we have both
NewPackageHashAuthentication for the new style and
NewArchiveChecksumAuthentication for the old style, which for the moment
is represented only by a doc comment on the latter. Hopefully we can
remove NewArchiveChecksumAuthentication in a future commit, if we can
get the registry updated to use the new hashing format.
The installFromHTTPURL function downloads a package to a temporary file,
then delegates to installFromLocalArchive to install it. We were
previously not deleting the temporary file afterwards. This commit fixes
that.
The SearchLocalDirectory function was intentionally written to only
support symlinks at the leaves so that it wouldn't risk getting into an
infinite loop traversing intermediate symlinks, but that rule was also
applying to the base directory itself.
It's pretty reasonable to put your local plugins in some location
Terraform wouldn't normally search (e.g. because you want to get them from
a shared filesystem mounted somewhere) and creating a symlink from one
of the locations Terraform _does_ search is a convenient way to help
Terraform find those without going all in on the explicit provider
installation methods configuration that is intended for more complicated
situations.
To allow for that, here we make a special exception for the base
directory, resolving that first before we do any directory walking.
In order to help with debugging a situation where there are for some
reason symlinks at intermediate levels inside the search tree, we also now
emit a WARN log line in that case to be explicit that symlinks are not
supported there and to hint to put the symlink at the top-level if you
want to use symlinks at all.
(The support for symlinks at the deepest level of search is not mentioned
in this message because we allow it primarily for our own cache linking
behavior.)
When installing a provider which is already cached, we attempt to create
a symlink from the install directory targeting the cache. If symlinking
fails due to missing OS/filesystem support, we instead want to copy the
cached provider.
The fallback code to do this would always fail, due to a missing target
directory. This commit fixes that. I was unable to find a way to add
automated tests around this, but I have manually verified the fix on
Windows 8.1.
At the end of the EnsureProviderVersions process, we generate a lockfile
of the selected and installed provider versions. This includes a hash of
the unpacked provider directory.
When calculating this hash and generating the lockfile, we now also
verify that the provider directory contains a valid executable file. If
not, we return an error for this provider and trigger the installer's
HashPackageFailure event. Note that this event is not yet processed by
terraform init; that comes in the next commit.
Instead of searching the installed provider package directory for a
binary as we install it, we can lazily detect the executable as it is
required. Doing so allows us to separately report an invalid unpacked
package, giving the user more actionable error messages.
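A sketch of that lazy detection, relying on the "terraform-provider-"
naming convention for provider binaries (the error wording and exact
matching rules here are illustrative):

```go
package example

import (
	"fmt"
	"io/ioutil"
	"path/filepath"
	"strings"
)

// findProviderExecutable scans an unpacked package directory for the
// conventionally named provider binary, reporting a specific error if
// the package contains none.
func findProviderExecutable(packageDir string) (string, error) {
	entries, err := ioutil.ReadDir(packageDir)
	if err != nil {
		return "", err
	}
	for _, info := range entries {
		if info.Mode().IsRegular() && strings.HasPrefix(info.Name(), "terraform-provider-") {
			return filepath.Join(packageDir, info.Name()), nil
		}
	}
	return "", fmt.Errorf("provider package in %s does not include an executable file", packageDir)
}
```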