This new command is intended to make it easy to create or update a mirror
directory containing suitable providers for the current configuration,
producing a layout that is suitable both as a filesystem mirror and, if
copied into the document root of an HTTP server, as a network mirror.
This initial version is not customizable aside from being able to select
multiple platforms to install packages for.
Future iterations of this could include options to turn the JSON index
generation on and off, or to produce the unpacked directory layout instead
of the packed directory layout it currently uses. Both
of those options would make the generated directory unsuitable to be
a network mirror, but it would still work as a filesystem mirror.
In the long run this will hopefully form part of a replacement for the
terraform-bundle workflow, as a way to put copies of providers somewhere so
we don't need to re-download them every time. Some other changes beyond
this command will be needed before that's true, such as adding support for
network and/or filesystem mirrors in Terraform Enterprise.
This is the equivalent of UnpackedDirectoryPathForPackage when working
with the packed directory layout. It returns a path to a .zip file with
a name that would be detected by SearchLocalDirectory as a
PackageLocalArchive package.
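For illustration, here is a minimal sketch of how such a packed-layout path
can be built; the function name and loose string arguments are hypothetical
rather than the real getproviders API, but the resulting
HOSTNAME/NAMESPACE/TYPE/terraform-provider-TYPE_VERSION_OS_ARCH.zip shape
follows the packed mirror layout:

    // Hypothetical sketch only; the real method takes a provider address
    // and platform value rather than loose strings.
    package main

    import (
        "fmt"
        "path/filepath"
    )

    // packedFilePath returns where a packed provider archive would live
    // under baseDir, e.g.
    // registry.terraform.io/hashicorp/null/terraform-provider-null_2.1.2_linux_amd64.zip
    func packedFilePath(baseDir, hostname, namespace, typeName, version, osName, arch string) string {
        filename := fmt.Sprintf("terraform-provider-%s_%s_%s_%s.zip", typeName, version, osName, arch)
        return filepath.Join(baseDir, hostname, namespace, typeName, filename)
    }

    func main() {
        fmt.Println(packedFilePath(".", "registry.terraform.io", "hashicorp", "null", "2.1.2", "linux", "amd64"))
    }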
We previously had this functionality available for cached packages in the
providercache package. This moves the main implementation of this over
to the getproviders package and then implements it also for PackageMeta,
allowing us to compute hashes in a consistent way across both of our
representations of a provider package.
The new methods on PackageMeta will only be effective for packages in the
local filesystem because we need direct access to the contents in order
to produce the hash. Hopefully in the future the registry protocol will
also be able to provide hashes using this content-based (rather than
archive-based) algorithm, and then we'll be able to make this work for
PackageMeta referring to a package obtained from a registry too. For now,
hashes for local packages alone are still useful in some cases, such as
generating mirror directories in the "terraform providers mirror" command.
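As a sketch of the idea (not the actual getproviders code; the helper names
here are illustrative), the same content-based algorithm can be applied to
an unpacked package directory and to a local .zip archive via
golang.org/x/mod/sumdb/dirhash, so the two representations produce
comparable hashes:

    // Sketch only; not the real getproviders API.
    package main

    import (
        "fmt"
        "log"

        "golang.org/x/mod/sumdb/dirhash"
    )

    // hashUnpackedDir hashes the files of an unpacked provider package
    // directory; with an empty prefix, only relative paths and file
    // contents contribute to the result.
    func hashUnpackedDir(dir string) (string, error) {
        return dirhash.HashDir(dir, "", dirhash.Hash1)
    }

    // hashLocalArchive hashes the contents of a provider .zip archive as
    // if it were already unpacked (assuming the archive stores its files
    // at the root, as provider packages do), so the result is comparable
    // with hashUnpackedDir for the same package.
    func hashLocalArchive(zipFile string) (string, error) {
        return dirhash.HashZip(zipFile, dirhash.Hash1)
    }

    func main() {
        h, err := hashLocalArchive("terraform-provider-null_2.1.2_linux_amd64.zip")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(h) // an "h1:"-prefixed digest
    }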
When helping folks in the community forum, I commonly see questions around
more complex patterns in transforming deep data structures into different
shapes to work with for_each. We have examples of these patterns in the
docs for the functions that they rely on, but they were not previously
very discoverable in the main configuration language documentation
sections.
Here I've moved the "Using Expressions in for_each" subsection on the
Resources page above some of the other sub-sections to hopefully make it
easier to see, and written out in more detail the two specific patterns
that answer a significant number of for_each-related user questions in
the hope that readers will be more likely to realize that the links are
relevant to their goals.
I also added some more elaboration about the behavior of converting from
list to set in the "Using Sets" subsection, because this feature is often
a user's first encounter with the set data type and I've inferred from
some of the questions I've answered that a number of Terraform users don't
have prior experience with set data types in other languages to draw
intuition from.
Finally, I added some similar links to the for_each patterns within the
for expression documentation itself, to try to make those examples more
visible to those who might be discovering the documentation in a different
sequence, e.g. by following a deep link shared in an answer to a question
in the community forum.
The "apply" documentation contained a simple typo, while the "plan"
documentation contained outdated information about using
"terraform plan PLANFILE" to view a plan. The latter is now a separate
command entirely, since Terraform 0.12: "terraform show PLANFILE".
This is a baby-step towards an intended future where all Terraform actions
which have side-effects in either remote objects or the Terraform state
can go through the plan+apply workflow.
This initial change is focused only on allowing plan+apply for changes to
root module output values, so that these can be written into a new state
snapshot (for consumption by terraform_remote_state elsewhere) without
having to go outside of the primary workflow by running
"terraform refresh".
This is also better than "terraform refresh" because it gives an
opportunity to review the proposed changes before applying them, as we're
accustomed to with resource changes.
The downside here is that Terraform Core was not designed to produce
accurate changesets for root module outputs. Although we added a place for
it in the plan model in Terraform 0.12, Terraform Core currently produces
inaccurate changesets there which don't properly track the prior values.
We're planning to rework Terraform Core's evaluation approach in a
forthcoming release so that it can itself distinguish between the prior
state and the planned new state and produce an accurate changeset. In the
meantime, this commit introduces a temporary stop-gap of implementing the
logic up in the local backend code, where we can freeze a snapshot of the
prior state before we take any other actions and then use that to produce
an accurate output changeset, both to decide whether the plan has
externally-visible side-effects and to render any changes to output values.
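As a rough illustration of that stop-gap (not the actual local backend
code; the names here are hypothetical), deriving the output changeset
amounts to comparing the frozen prior snapshot's output values against the
planned new values:

    // Hypothetical sketch of comparing prior and planned root outputs.
    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    type outputChange struct {
        Name          string
        Before, After cty.Value
    }

    // outputChanges returns one entry per root output whose value differs
    // between the frozen prior state snapshot and the planned new values,
    // covering updates, additions, and removals.
    func outputChanges(prior, planned map[string]cty.Value) []outputChange {
        var changes []outputChange
        for name, before := range prior {
            after, ok := planned[name]
            if !ok {
                after = cty.NullVal(before.Type()) // output removed
            }
            if !before.RawEquals(after) {
                changes = append(changes, outputChange{name, before, after})
            }
        }
        for name, after := range planned {
            if _, ok := prior[name]; !ok {
                // output newly added by the plan
                changes = append(changes, outputChange{name, cty.NullVal(after.Type()), after})
            }
        }
        return changes
    }

    func main() {
        prior := map[string]cty.Value{"endpoint": cty.StringVal("old.example.com")}
        planned := map[string]cty.Value{"endpoint": cty.StringVal("new.example.com")}
        fmt.Printf("%d output change(s)\n", len(outputChanges(prior, planned)))
    }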
This temporary approach should be replaced by a more appropriately-placed
solution in Terraform Core in a future release, which should then allow
further behaviors in a similar vein, such as user-visible drift detection
for resource instances.
With the current version of this doc, users receive a warning like the
following if they put quotes around the value of the 'on_failure' setting:

    on_failure = "continue"

    In this context, keywords are expected literally rather than in quotes.
    Terraform 0.11 and earlier required quotes, but quoted keywords are now
    deprecated and will be removed in a future version of Terraform. Remove
    the quotes surrounding this keyword to silence this warning.

The same applies to:

    when = "destroy"
Resource destroy nodes can only depend on other resources. Connecting
them to their module expander can introduce cycles when the module
expander depends on resources in the destroyer's subgraph.
We don't need another node type for orphaned outputs; they are just
outputs being removed for a reason other than destroy. Use the
NodeDestroyableOutput implementation.
Destroy outputs also don't need to be referencers, since they are being
removed.
Rename DestroyOutputTransformer to destroyRootOutputTransformer, and add
an explanation as to why it is the only transformer that requires an
exception to know whether it's being run as part of the destroy command.
This simplification allows us to settle on a single interface,
graphNodeExpandsInstances, for all types of instance expanders. The only
other specific class of resource we need to detect during pruning is the
nodeExpandApplyableResource node, which is already classified under the
GraphNodeResourceInstance interface.
ModulePath was incorrectly returning the parent module, because it did
not implement ReferenceOutside. With ReferenceOutside working correctly,
we can have ModulePath return the real path and remove the special case
for this during pruning.
Create a single transformer to remove all unused nodes from the apply
graph. This is similar to the combination of the resource pruning done
in the destroy edge transformer, and the unused values transformer. In
addition to resources, variables, locals, and outputs, we now need to
remove unused module expansion nodes as well. Since these can all be
interdependent, we need to process them as a whole in a single
transformation.
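The core idea can be sketched as follows (a simplified, self-contained
illustration with hypothetical types, not the real transformer): keep
removing prunable nodes that nothing else depends on, since each removal
can make further nodes removable:

    // Simplified sketch of whole-graph pruning; types and names here are
    // illustrative, not the real terraform graph/transformer API.
    package main

    import "fmt"

    // graph maps each node to the set of nodes that depend on it.
    type graph map[string]map[string]bool

    // pruneUnused repeatedly removes any node that is prunable (e.g. an
    // unused value or module expansion node) and has no remaining
    // dependents, until no more nodes can be removed.
    func pruneUnused(g graph, prunable func(string) bool) {
        for {
            removed := false
            for n, deps := range g {
                if prunable(n) && len(deps) == 0 {
                    delete(g, n)
                    // The removed node no longer counts as a dependent of
                    // anything else, which may unlock further pruning.
                    for _, other := range g {
                        delete(other, n)
                    }
                    removed = true
                }
            }
            if !removed {
                return
            }
        }
    }

    func main() {
        g := graph{
            "module.a (expand)": {"local.x": true},
            "local.x":           {},
        }
        pruneUnused(g, func(string) bool { return true })
        fmt.Println(len(g)) // 0: pruning local.x unblocks pruning module.a
    }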
In order for depends_on to work, modules need to implicitly depend on
their child modules. This will have little effect on Terraform's
concurrency, as configuration trees are always much wider than they are
deep.
Create interfaces that nodes can implement to declare whether they expand
into instances of some sort using the instances.Expander, and/or whether
they use the instances.Expander to find instances.
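Roughly, the shape of these interfaces is something like the following
(names and methods are illustrative, not the exact declarations in the
terraform package):

    // Illustrative sketch only.
    package main

    // Expander stands in for instances.Expander, which records how
    // modules and resources expand into zero or more instances.
    type Expander struct{}

    // graphNodeExpandsInstances marks nodes that register expansion of a
    // module or resource with the Expander during the graph walk.
    type graphNodeExpandsInstances interface {
        expandsInstances()
    }

    // graphNodeInstanceConsumer (hypothetical name) marks nodes that
    // consult the Expander to find the concrete instances they act on.
    type graphNodeInstanceConsumer interface {
        useExpander(*Expander)
    }

    func main() {}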
Included is a rough transformer implementation to remove these nodes
from the apply graph.
All of the feedback from the experiment described enhancements that can
potentially be added later without breaking changes, so this change simply
removes the experiment gate from the feature as originally implemented
with no changes to its functionality.
Further enhancements may follow in later releases, but the goal of this
change is just to ship the feature exactly as it was under the experiment.
Most of the changes here are cleaning up the experiment opt-ins from our
test cases. The most important parts are in configs/experiments.go and in
experiments/experiment.go.