When working on an existing plan, the context always used walkApply,
even if the plan was for a full destroy. Mark in the plan if it was
created for a destroy, and transfer that to the context when reading
the plan.
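A minimal sketch of the shape of this change, using illustrative types
rather than the real ones in the terraform package:

    // Illustrative only: the plan records the destroy intent, and a
    // context built from a saved plan adopts it instead of defaulting
    // to an apply walk.
    package sketch

    type Plan struct {
        Destroy bool // set when the plan was created by "terraform plan -destroy"
    }

    type Context struct {
        destroy bool // selects walkDestroy rather than walkApply
    }

    func contextFromPlan(p *Plan) *Context {
        return &Context{destroy: p.Destroy}
    }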
A Targeted graph may include outputs that were transitively included,
but if they are missing any dependencies they will fail to interpolate
later on.
In the TargetsTransformer, prune any outputs that have missing
dependencies and are not depended on by any resource. This will
maintain the existing behavior of outputs failing silently in most
cases, but allow errors to be surfaced where the output value is
required.
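The rule reads roughly as in this self-contained sketch; the real code
operates on graph vertices inside TargetsTransformer, so the map form
here is only for clarity:

    // pruneOutputs drops an output when targeting removed some of its
    // dependencies and nothing that survived targeting depends on the
    // output itself; such an output would fail to interpolate anyway.
    func pruneOutputs(outputs map[string][]string, targeted, dependedOn map[string]bool) {
        for name, deps := range outputs {
            for _, d := range deps {
                if !targeted[d] {
                    if !dependedOn[name] {
                        delete(outputs, name)
                    }
                    break
                }
            }
        }
    }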
Module outputs may not have complete information during Input, because
it happens before refresh. Continue processing on output interpolation
errors during the Input walk.
Remove the Input flag threaded through the input graph creation process
to prevent interpolation failures on module variables.
Use an EvalOpFilter instead to insert the correct EvalNode during
walkInput. Remove the EvalTryInterpolate type, and use the same
ContinueOnErr flag as the output node for consistency and to try and
keep the number of possible eval node types down.
Since we don't currently auto-install provisioner plugins, this is
placed on the providers documentation page and referred to as
the "Provider Plugin Cache". In future this mechanism may also apply to
provisioners, in which case we'll figure out at that point where better
to place this information so it can be referenced from both the provider
and provisioner documentation pages.
This mechanism for configuring plugins is now deprecated, since it's not
capable of declaring plugin versions. Instead, we recommend just placing
plugins into a particular directory, which is now documented on the
main providers documentation page and linked from the more detailed docs
on plugins in general.
Previously we described inline here where to put the .terraformrc file,
but now we have a separate page all about this file that gives us more
room to describe in more detail where the file is placed and what else it
can do.
This is a tough one to unit test because the behavior is tangled up in
the code that hits releases.hashicorp.com, so we'll add this e2etest as
some extra insurance that this works end-to-end.
Either the environment variable TF_PLUGIN_CACHE_DIR or a setting in the
CLI config file (~/.terraformrc on Unix) allows opting in to the plugin
caching behavior.
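For example, either of the following opts in; the cache path shown is
just a conventional choice:

    # In the CLI config file (~/.terraformrc on Unix):
    plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"

    # Or via the environment:
    export TF_PLUGIN_CACHE_DIR="$HOME/.terraform.d/plugin-cache"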
This is opt-in because for new users we don't want to pollute their system
with extra directories they don't know about. By opting in to caching, the
user is assuming the responsibility to occasionally prune the cache over
time as older plugins become stale and unused.
For users who have metered or slow internet connections, it is annoying
to have Terraform constantly re-download the same files when they
initialize many separate directories.
To help such users, here we add an opt-in mechanism to use a local
directory as a read-through cache. When enabled, any plugin download will
be skipped if a suitable file already exists in the cache directory. If
the desired plugin isn't in the cache, it will be downloaded into the
cache for use next time.
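A rough sketch of that read-through behavior, with illustrative names
rather than the actual installer code:

    package cache

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // PluginCache wraps a local directory used as a read-through cache.
    type PluginCache struct {
        Dir      string
        Download func(name, version, dest string) error
    }

    // GetPlugin returns a path to the requested plugin binary, skipping
    // the download when a suitable file already exists in the cache and
    // otherwise downloading into the cache for use next time.
    func (c *PluginCache) GetPlugin(name, version string) (string, error) {
        dest := filepath.Join(c.Dir, fmt.Sprintf("terraform-provider-%s_v%s", name, version))
        if _, err := os.Stat(dest); err == nil {
            return dest, nil // cache hit: no download needed
        }
        if err := c.Download(name, version, dest); err != nil {
            return "", err
        }
        return dest, nil
    }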
This mechanism also serves to reduce total disk usage by allowing
plugin files to be shared between many configurations, as long as the
target system isn't Windows and supports either hardlinks or symlinks.
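That sharing can be sketched as a link-with-fallback helper (again
illustrative, not the actual code):

    package cache

    import (
        "io"
        "os"
    )

    // linkOrCopy places a cached plugin into a configuration's plugin
    // directory: a hardlink where possible, then a symlink, and finally
    // a full copy on systems that support neither.
    func linkOrCopy(src, dst string) error {
        if err := os.Link(src, dst); err == nil {
            return nil
        }
        if err := os.Symlink(src, dst); err == nil {
            return nil
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0755)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, in)
        return err
    }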
If we encounter something that isn't a file -- for example, a dangling
symlink whose referent has been deleted -- we'll ignore it so that we
can either later produce a "no such plugin" error or auto-install a plugin
that will actually work.
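In code, that check is roughly the following (hypothetical helper in the
same illustrative package, needing only "os"):

    // pluginInCache reports whether path refers to a usable plugin file.
    // os.Stat follows symlinks, so a dangling symlink simply returns an
    // error here, and anything that isn't a regular file is rejected;
    // the caller then falls back to "no such plugin" or auto-install.
    func pluginInCache(path string) bool {
        info, err := os.Stat(path)
        return err == nil && info.Mode().IsRegular()
    }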
This, in principle, allows us to make use of configuration information
when we populate the Meta structure, though we won't actually make use
of that until a subsequent commit.
We don't usually run "acceptance tests" during a Travis run, but this
particular suite doesn't require any special credentials since it just
accesses releases.hashicorp.com to download plugins, so it's
safe to run in Travis at the expense of adding a few more seconds to
the runtime.
Running it in Travis can therefore give us some extra confidence for
pull requests that may inadvertently break certain details of the
workflow, as well as ensuring that these tests are kept up-to-date as
the system changes.
Since we now have a guide that recommends some specific ways to run
Terraform in automation, we can mimic those suggestions in an e2e test and
thus ensure they keep working.
Here we test the three different approaches suggested in the guide:
- init, plan, apply (main case)
- init, apply (e.g. for deploying to a QA/staging environment)
- init, plan (e.g. for verifying a pull request)
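Concretely, the main case runs a sequence along these lines, with the
other two cases dropping the plan or apply step; the -input=false and
saved-plan usage follow the guide's recommendations:

    terraform init -input=false
    terraform plan -out=tfplan -input=false
    terraform apply -input=false tfplan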
In 6712192724 we stopped counting data
source destroys in the destroy tally since they are an implementation
detail.
This caused this test to start failing, though since the new behavior is
correct here we just update the test to match.
A refactor introduced an extra `/` in the download URL, which causes an
extra redirect during discovery.
Improve a registry test to verify that detection doesn't require the
registry after the modules have been fetched.
This function takes a map of lists of strings and inverts it so that
the string values become keys and the keys become items within the
corresponding lists.
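In other words, it behaves like this sketch (the real function's name
may differ):

    // invert flips a map of lists: each string in a value list becomes
    // a key of the result, and the original key joins that entry's list.
    func invert(m map[string][]string) map[string][]string {
        out := make(map[string][]string)
        for k, vs := range m {
            for _, v := range vs {
                out[v] = append(out[v], k)
            }
        }
        return out
    }

For example, invert(map[string][]string{"a": {"x", "y"}}) yields
map[string][]string{"x": {"a"}, "y": {"a"}}.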
Locals don't need to be evaluated during destroy. Rather than simply
skipping them, remove them from the state as they are encountered. Even
though they are not persisted in the state, it keeps the state up to
date as the destroy happens, and we reduce the chance of other
inconsistencies later on.
These tests were written before subtest support was available. By running
them as subtests we can get better output in the event of an error, or
in verbose mode.
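That is, each case now runs under t.Run in the usual pattern; this
self-contained example uses strings.ToUpper as a stand-in for the code
under test:

    package example

    import (
        "strings"
        "testing"
    )

    func TestUpper(t *testing.T) {
        cases := map[string]struct{ in, want string }{
            "simple": {"terraform", "TERRAFORM"},
            "empty":  {"", ""},
        }
        for name, tc := range cases {
            // each subtest is reported separately, and a single case can
            // be selected with: go test -run TestUpper/simple
            t.Run(name, func(t *testing.T) {
                if got := strings.ToUpper(tc.in); got != tc.want {
                    t.Errorf("got %q, want %q", got, tc.want)
                }
            })
        }
    }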
Shell tab completion for all of the subcommands under
"terraform workspace", providing the appropriate kind of auto-complete for
each argument, along with completion for any flags.
This helper is a Predictor for the "complete" package that tries to
auto-complete workspace names from the current backend, if it's
initialized and operable.
The predictors built in to the "complete" package assume that the same
type of argument is repeated indefinitely, but most Terraform commands
don't work like that, so this helper allows us to define a sequence of
predictors that apply to each argument in turn.
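A sketch of that idea against the "complete" package's Predictor
interface; the type name here is illustrative:

    package predict

    import "github.com/posener/complete"

    // predictSequence applies a different predictor to each argument
    // position in turn; positions beyond the sequence get no results.
    type predictSequence []complete.Predictor

    func (s predictSequence) Predict(args complete.Args) []string {
        // args.Completed holds the arguments already fully typed, so its
        // length is the index of the argument being completed right now.
        idx := len(args.Completed)
        if idx >= len(s) {
            return nil
        }
        return s[idx].Predict(args)
    }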
The CLI package has automatic support for shell autocomplete (bash and
zsh, at time of writing) for subcommands, so all we need to do here is
just opt into it.
Users can install this into their shells by running:
    terraform -install-autocomplete