Since we don't currently auto-install provisioner plugins, this is
placed on the providers documentation page and referred to as the
"Provider Plugin Cache". In the future this mechanism may also apply
to provisioners; at that point we'll figure out a better place for
this information so it can be referenced from both the provider and
provisioner documentation pages.
This mechanism for configuring plugins is now deprecated, since it's not
capable of declaring plugin versions. Instead, we recommend just placing
plugins into a particular directory, which is now documented on the
main providers documentation page and linked from the more detailed docs
on plugins in general.
Previously we described inline here where to put the .terraformrc file,
but now we have a separate page all about this file, which gives us
more room to describe where the file is placed and what else it can do.
This is a tough one to unit test because the behavior is tangled up in
the code that hits releases.hashicorp.com, so we'll add this e2etest as
some extra insurance that this works end-to-end.
Either the environment variable TF_PLUGIN_CACHE_DIR or a setting in the
CLI config file (~/.terraformrc on Unix) allows opting in to the plugin
caching behavior.
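Concretely, the opt-in is the plugin_cache_dir setting; the cache path
shown here is just a conventional example:

    # ~/.terraformrc
    plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"

    # or, equivalently, via the environment:
    export TF_PLUGIN_CACHE_DIR="$HOME/.terraform.d/plugin-cache"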
This is opt-in because for new users we don't want to pollute their system
with extra directories they don't know about. By opting in to caching, the
user is assuming the responsibility to occasionally prune the cache over
time as older plugins become stale and unused.
For users who have metered or slow internet connections, it is annoying
to have Terraform constantly re-downloading the same files when they
initialize many separate directories.
To help such users, here we add an opt-in mechanism to use a local
directory as a read-through cache. When enabled, any plugin download will
be skipped if a suitable file already exists in the cache directory. If
the desired plugin isn't in the cache, it will be downloaded into the
cache for use next time.
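In Go terms the read-through behavior amounts to something like this
sketch (the names and file layout are illustrative, not Terraform's
real internals):

    package plugincache

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // cachedLookup returns the cached file for a plugin, downloading it
    // into the cache (via fetch) if no suitable file exists yet.
    func cachedLookup(cacheDir, name, version string, fetch func(dst string) error) (string, error) {
        dst := filepath.Join(cacheDir, fmt.Sprintf("terraform-provider-%s_v%s", name, version))
        // os.Stat follows symlinks, so a dangling symlink is also a miss.
        if info, err := os.Stat(dst); err == nil && info.Mode().IsRegular() {
            return dst, nil // cache hit: skip the download entirely
        }
        // Cache miss: download into the cache for use next time.
        if err := fetch(dst); err != nil {
            return "", err
        }
        return dst, nil
    }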
This mechanism also serves to reduce total disk usage by allowing
plugin files to be shared between many configurations, as long as the
target system isn't Windows and supports either hardlinks or symlinks.
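Continuing the hypothetical sketch above (only the standard os package
is needed), the sharing could try a hardlink first, fall back to a
symlink, and copy only as a last resort:

    // linkOrCopy places the cached plugin at dst, preferring links so
    // the file's bytes are stored only once across configurations.
    func linkOrCopy(src, dst string) error {
        if err := os.Link(src, dst); err == nil {
            return nil // hardlink: same inode, no extra disk usage
        }
        if err := os.Symlink(src, dst); err == nil {
            return nil // symlink: still shares the underlying file
        }
        // Neither link type worked (e.g. on Windows), so fall back to
        // a plain copy and give up the disk savings.
        data, err := os.ReadFile(src)
        if err != nil {
            return err
        }
        return os.WriteFile(dst, data, 0o755)
    }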
If we encounter something that isn't a file -- for example, a dangling
symlink whose referent has been deleted -- we'll ignore it so that we
can later either produce a "no such plugin" error or auto-install a
plugin that will actually work.
This, in principle, allows us to make use of configuration information
when we populate the Meta structure, though we won't actually make use
of that until a subsequent commit.
We don't usually run "acceptance tests" during a Travis run, but this
particular suite doesn't require any special credentials since it just
accesses releases.hashicorp.com to download plugins, so it's safe to
run in Travis at the expense of adding a few more seconds to
the runtime.
Running it in Travis can therefore give us some extra confidence for
pull requests that may inadvertently break certain details of the
workflow, as well as ensuring that these tests are kept up-to-date as
the system changes.
Since we now have a guide that recommends some specific ways to run
Terraform in automation, we can mimic those suggestions in an e2e test and
thus ensure they keep working.
Here we test the three different approaches suggested in the guide
(see the sketch after this list):
- init, plan, apply (main case)
- init, apply (e.g. for deploying to a QA/staging environment)
- init, plan (e.g. for verifying a pull request)
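The main case can be sketched as a Go test like the following; the
package, test name, and fixture directory are made up, while the flags
match the guide's suggested commands:

    package e2etest

    import (
        "os/exec"
        "testing"
    )

    func TestPlanApplyWorkflow(t *testing.T) {
        run := func(args ...string) {
            t.Helper()
            cmd := exec.Command("terraform", args...)
            cmd.Dir = "testdata/full-workflow" // hypothetical fixture
            if out, err := cmd.CombinedOutput(); err != nil {
                t.Fatalf("terraform %v: %v\n%s", args, err, out)
            }
        }
        run("init", "-input=false")
        run("plan", "-input=false", "-out=tfplan")
        run("apply", "-input=false", "tfplan")
        // The other two sequences drop a step: init+apply applies
        // without a saved plan; init+plan stops before any changes.
    }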
In 6712192724 we stopped counting data
source destroys in the destroy tally since they are an implementation
detail.
This caused this test to start failing; since the new behavior is
correct, here we just update the test to match.
A refactor introduced an extra `/` in the download URL, which caused an
extra redirect during discovery.
Improve a registry test to verify that detection doesn't require the
registry after the modules have been fetched.
This function takes a map of lists of strings and inverts it so that
the string values become keys and the keys become items within the
corresponding lists.
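For instance (the function name here is illustrative):

    // invert flips {"a": ["x", "y"], "b": ["x"]} into
    // {"x": ["a", "b"], "y": ["a"]}.
    func invert(m map[string][]string) map[string][]string {
        out := make(map[string][]string)
        for key, values := range m {
            for _, v := range values {
                out[v] = append(out[v], key)
            }
        }
        return out
    }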
Locals don't need to be evaluated during destroy. Rather than simply
skipping them, remove them from the state as they are encountered. Even
though they are not persisted in the state, this keeps the state up to
date as the destroy happens, and reduces the chance of other
inconsistencies later on.
These tests were written before subtest support was available. By running
them as subtests we can get better output in the event of an error, or
in verbose mode.
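The mechanical change is the standard t.Run pattern, sketched here
with a made-up test (imports just "testing"):

    func TestExample(t *testing.T) {
        cases := []struct{ name, input string }{
            {"empty", ""},
            {"simple", "foo"},
        }
        for _, tc := range cases {
            t.Run(tc.name, func(t *testing.T) {
                // Failures now report as TestExample/empty etc., and
                // "go test -v" lists each case individually.
                _ = tc.input
            })
        }
    }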
Shell tab completion for all of the subcommands under
"terraform workspace", providing the appropriate kind of auto-complete
for each argument, along with completion for any flags.
This helper is a Predictor for the "complete" package that tries to
auto-complete workspace names from the current backend, if it's
initialized and operable.
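With github.com/posener/complete such a helper can be built on
PredictFunc; a sketch (listWorkspaces is a hypothetical stand-in for
asking the backend, and "strings" is the only other import):

    // predictWorkspaces completes partially-typed workspace names.
    func predictWorkspaces(listWorkspaces func() []string) complete.Predictor {
        return complete.PredictFunc(func(args complete.Args) []string {
            var names []string
            for _, name := range listWorkspaces() {
                if strings.HasPrefix(name, args.Last) {
                    names = append(names, name)
                }
            }
            return names
        })
    }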
The predictors built in to the "complete" package assume that the same
type of argument is repeated indefinitely, but most Terraform commands
don't work like that, so this helper allows us to define a sequence of
predictors that apply to each argument in turn.
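A sketch of the idea, again against the complete package (the real
helper's details differ):

    // predictSequence applies ps[0] to the first argument, ps[1] to
    // the second, and so on, instead of repeating one predictor.
    func predictSequence(ps ...complete.Predictor) complete.Predictor {
        return complete.PredictFunc(func(args complete.Args) []string {
            pos := len(args.Completed) // index of the word being completed
            if pos >= len(ps) || ps[pos] == nil {
                return nil // past the end of the sequence
            }
            return ps[pos].Predict(args)
        })
    }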
The CLI package has automatic support for shell autocomplete (bash and
zsh, at the time of writing) for subcommands, so all we need to do here
is opt in to it.
Users can install this into their shells by running:
terraform -install-autocomplete
This adds NoZeroValues, a small SchemaValidateFunc that checks that a
defined value is not a zero value. It's useful for situations where you
want to keep someone from explicitly entering a zero value (i.e.
literally "0", or an empty string) on a required field, and want to
catch it in the validation stage rather than during apply using GetOk.
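Behaviorally it's equivalent to something like this sketch against the
helper/schema SchemaValidateFunc signature (imports "fmt" and
"reflect"; it assumes i is a non-nil value, as the SDK provides):

    // noZeroValues rejects the zero value of whatever type it's
    // given, e.g. "" for strings and 0 for ints.
    func noZeroValues(i interface{}, k string) (warns []string, errs []error) {
        if reflect.ValueOf(i).IsZero() {
            errs = append(errs, fmt.Errorf("%s must not be a zero value", k))
        }
        return
    }

It hangs off a required field's ValidateFunc, so the error surfaces at
validate time rather than partway through an apply.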
Module detection currently requires calling the registry to determine
the subdirectory. Since we're not directly accessing the subdirectory
through FolderStorage, and are now handling it within terraform so
modules can reference sibling paths, we need to call out to the
registry every time we load a configuration to verify the module's
subdirectory, which is returned during Detect.
Record the subdirectories for each module in the top level of the
FolderStorage path for retrieval during Tree.Load. This lets us bypass
Detection altogether; modules can be loaded without re-detecting.
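One way to picture the recording (an entirely hypothetical layout and
key scheme, using encoding/json, os, and path/filepath):

    // recordSubdir persists module-key -> subdirectory mappings as a
    // small JSON file at the top level of the storage directory.
    func recordSubdir(storageDir, key, subdir string) error {
        path := filepath.Join(storageDir, "module-subdirs.json")
        subdirs := map[string]string{}
        if data, err := os.ReadFile(path); err == nil {
            if err := json.Unmarshal(data, &subdirs); err != nil {
                return err
            }
        }
        subdirs[key] = subdir
        data, err := json.MarshalIndent(subdirs, "", "  ")
        if err != nil {
            return err
        }
        return os.WriteFile(path, data, 0o644)
    }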
In order to remain backward compatible with some modules, we need to
handle subdirs during Load. This means duplicating part of the go-getter
code path for subDir handling so we can resolve any subDirs and globs
internally, while keeping the entire remote directory structure within
the file storage.
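Resolving the subDir internally might look like this sketch, mirroring
go-getter's rule that a glob must match exactly one path (imports
"fmt", "path/filepath", and "strings"):

    // resolveSubdir expands an optional glob in subDir against the
    // fetched storage root, requiring exactly one match.
    func resolveSubdir(root, subDir string) (string, error) {
        if !strings.ContainsAny(subDir, "*?[") {
            return filepath.Join(root, subDir), nil
        }
        matches, err := filepath.Glob(filepath.Join(root, subDir))
        if err != nil {
            return "", err
        }
        if len(matches) != 1 {
            return "", fmt.Errorf("subdir %q matched %d paths, want exactly 1", subDir, len(matches))
        }
        return matches[0], nil
    }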
Test that we can get a subdirectory from a tarball (or any other
"packed" source that we support).
The 'tar-subdir-to-parent' test highlights a regression where the
subdirectory module references a module in its parent directory. This
breaks the intended use of the subdirectory and the implementation in
go-getter. We need to fix this in terraform, and possibly plan warnings
and deprecations for this type of source.