Terraform's design assumes that each remote object under Terraform's care is
bound to exactly one resource instance. If the same object is bound to
multiple instances, then confusing behavior will often result, such as
two resource configurations competing to update a single object, or
objects being "left behind" when all existing Terraform deployments are
destroyed.
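For example (a purely hypothetical configuration; the resource type is
incidental), importing the same remote bucket into both of these addresses
would leave two instances competing over one object:

    resource "aws_s3_bucket" "a" {
      bucket = "shared-example-bucket"
    }

    # Binding the same remote bucket to this second instance (e.g. via
    # "terraform import") means both configurations now try to reconcile
    # the one object, each potentially undoing the other's changes.
    resource "aws_s3_bucket" "b" {
      bucket = "shared-example-bucket"
    }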
This assumption was previously only implied, though. This change is an
attempt to state it explicitly. These are additions to some older
documentation sections that have not been revised for some time, so this
is just a best effort to make the information discoverable without
getting drawn into a full reorganization of those sections.
While revising I also fixed a few oddities I noticed along the way, but
I'll leave a fuller revision of this older content for a later commit,
when we have more time to review it in greater detail.
* Make sidebar nav in language docs more intuitive
* Minor display fixes for registry docs
* Explain providers in the registry in the providers index
* Revise a bunch of language docs around provider reqs
This is mostly an effort to smooth out some of the explanations, present
things in a helpful order, make sure terminology lines up, draw connections
between related concepts, make default behavior more apparent, and the like.
It shouldn't include very much new information, but there might be one or
two things that came out of a conversation somewhere.
Co-authored-by: Judith Malnick <judith@hashicorp.com>
As part of documenting the new module for_each capabilities, we added a
section noting that shared modules using the legacy pattern of declaring
their own provider configurations would not be compatible with them.
However, that also applies to the new module depends_on, and several folks
participating in the beta pointed out that the documentation wasn't
discussing that at all.
In order to generalize the advice, I've moved the old content we had
(since v0.11) recommending against provider configurations in shared
modules out into its own section, now being more explicit that it is
a legacy pattern and not recommended, and then folded the content about
for_each and count, now also including depends_on, into that expanded
section.
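As a rough sketch of the distinction (module paths and provider names here
are invented for illustration):

    # Legacy pattern (not recommended): the shared module configures its
    # own provider, which prevents calling the module with for_each,
    # count, or depends_on.
    provider "aws" {
      region = "us-west-2"
    }

    # Recommended pattern: the shared module only declares which
    # providers it requires...
    terraform {
      required_providers {
        aws = {
          source = "hashicorp/aws"
        }
      }
    }

    # ...and the calling module configures the provider and passes it in
    # explicitly, which is compatible with for_each:
    provider "aws" {
      alias  = "usw2"
      region = "us-west-2"
    }

    module "network" {
      source   = "./modules/network"
      for_each = toset(["a", "b"])

      providers = {
        aws = aws.usw2
      }
    }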
As is often the case, that had some knock-on effects on the content on
the rest of this page, so there's some general editing and reorganization
here. In particular, I moved the "Multiple Instances of a Module" section
much further up the page, because its content is relevant to users of
shared modules, while the later content on the page is aimed more at
authors of shared modules, including the new section about the legacy
pattern.
When looking up a resource during plan, we need to return an empty
container type when we're certain there are going to be no instances.
It's now more common to reference resources in a context that needs to
be known during plan (e.g. for_each), and always returning a DynamicVal
here would block the plan from succeeding.
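Hypothetically (the resource types here are only for illustration), the
reference below must be resolvable while planning even though
aws_instance.a is certain to have zero instances:

    resource "aws_instance" "a" {
      count         = 0
      ami           = "ami-0123456789abcdef0"
      instance_type = "t3.micro"
    }

    # for_each must be wholly known during plan. Returning DynamicVal
    # for aws_instance.a here would make planning fail; returning an
    # empty container instead lets for_each evaluate to an empty set.
    resource "aws_eip" "per_instance" {
      for_each = toset(aws_instance.a[*].id)
      instance = each.value
    }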
The couldHaveUnknownBlockPlaceholder helper was added to detect when a
set block has a placeholder for an unknown number of values. This worked
fine when the number of values increased from one, but we were still
attempting to validate the unknown placeholder against the empty set
when the final count turned out to be zero.
Since we can't differentiate the unknown dynamic placeholder value from
an actual set value, we have to skip that object's validation
altogether.
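Loosely, in configuration terms, the case looks something like this
hypothetical example (the resource types are incidental; what matters is a
set-backed block generated from a value that isn't known during plan):

    resource "aws_instance" "a" {
      ami           = "ami-0123456789abcdef0"
      instance_type = "t3.micro"
    }

    resource "aws_security_group" "example" {
      name = "example"

      # aws_instance.a.security_groups isn't known until apply, so the
      # plan records a single placeholder standing in for an unknown
      # number of ingress blocks. If the value later turns out to be
      # empty, the placeholder can't be checked against the resulting
      # empty set, so validation of this object is skipped.
      dynamic "ingress" {
        for_each = aws_instance.a.security_groups
        content {
          from_port       = 443
          to_port         = 443
          protocol        = "tcp"
          security_groups = [ingress.value]
        }
      }
    }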
When installing a provider which is already cached, we attempt to create
a symlink from the install directory targeting the cache. If symlinking
fails due to missing OS/filesystem support, we instead want to copy the
cached provider.
The fallback code to do this would always fail, due to a missing target
directory. This commit fixes that. I was unable to find a way to add
automated tests around this, but I have manually verified the fix on
Windows 8.1.
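For context, this code path only applies when the plugin cache is enabled
in the CLI configuration, for example:

    # ~/.terraformrc (CLI configuration, not part of any module)
    plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"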
This code was made to do nothing pre-0.12, and we have no plans to
reintroduce a diff in the apply output, so it seems reasonable to now
remove it altogether.
This is the known case broken by the changes to allow resources pending
destruction to be evaluated from state. When a resource references
another that is create_before_destroy, and that resource is being scaled
in, the first resource will not be updated correctly.
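A hypothetical configuration that hits this case (the resource types are
illustrative only):

    resource "aws_instance" "a" {
      count         = 2 # previously 3; scaling in destroys one instance
      ami           = "ami-0123456789abcdef0"
      instance_type = "t3.micro"

      lifecycle {
        create_before_destroy = true
      }
    }

    # This resource references aws_instance.a while one of its instances
    # is pending destruction, so it may not be updated correctly.
    resource "aws_eip" "b" {
      count    = 2
      instance = aws_instance.a[count.index].id
    }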
Since we have to allow destroy nodes to be evaluated for providers
during a full destroy, this adds a transformer to connect temporary
values to any destroy versions of their references when possible. This
ensures that the destroy happens before evaluation, even when there
isn't a full create-then-destroy set of instances.
The cases where the connection can't be made are when the temporary
value has a provider descendant, which means it must evaluate early in
the case of a full destroy. This means the value may contain incorrect
data when referencing resources that are create_before_destroy or are
being scaled in via count or for_each. That will need to be addressed
later by reevaluating how we handle the full destroy case in Terraform.
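"Temporary values" here means things like locals and outputs. For
instance, in a hypothetical configuration:

    resource "aws_instance" "a" {
      ami           = "ami-0123456789abcdef0"
      instance_type = "t3.micro"
    }

    # local.addr is a temporary value referencing a resource. The new
    # transformer connects it to the destroy node for aws_instance.a
    # where possible, so the destroy is ordered before this expression
    # is evaluated.
    locals {
      addr = aws_instance.a.private_ip
    }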
During a full destroy, providers may reference resources that are going
to be destroyed as well. We currently cannot change this behavior, so we
need to allow the evaluation and try to prevent it from leaking into as
many other places as possible. Another transformer will be added to
protect the values in locals, variables, and outputs by enforcing
destroy ordering when possible.
Outputs and locals cannot refer to destroy nodes. Since those node
types do not have different ordering for create and destroy operations,
connecting them directly to destroy nodes can cause cycles.
The destroy graph builder test requires state in order to be correct,
which it didn't have. The other test hits the edge case where a planned
destroy cannot remove outputs, because the apply phase does not know the
plan was created from a destroy.
Since data source destruction is only state removal, and data sources do
not create any physical objects for other resources to depend on, destroy
dependencies were not tracked in the state. It turns out that there is a
special case which requires this: running terraform destroy where the
provider depends on a data source. In that case, the resources using that
provider need to record their indirect dependence on the data source, so
that they can be deleted before the data source is removed from the
state.
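A hypothetical shape of that special case (the provider and resource types
are chosen only for illustration):

    data "aws_eks_cluster" "example" {
      name = "example"
    }

    # The provider configuration depends on a data source.
    provider "kubernetes" {
      host = data.aws_eks_cluster.example.endpoint
      # (authentication arguments omitted)
    }

    # During "terraform destroy", this resource must be destroyed before
    # data.aws_eks_cluster.example is removed from the state, because
    # its provider still needs the data source's result; that indirect
    # dependency now has to be recorded in the state.
    resource "kubernetes_namespace" "example" {
      metadata {
        name = "example"
      }
    }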