In this case, "atomic" means that there will be no situation where the
file contains only part of the newContent data, and therefore other
software monitoring the file for changes (using a mechanism like inotify)
won't encounter a truncated file.
It does _not_ mean that there can't be existing filehandles open against
the old version of the file. On Windows systems the write will fail in
that case, but on Unix systems the write will typically succeed but leave
the existing filehandles still pointing at the old version of the file.
They'll need to reopen the file in order to see the new content.
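As a rough illustration of the Unix side of this, the usual technique is
to write the new content to a temporary file in the same directory and
then rename it over the target. The sketch below shows only the general
shape of that pattern, not the factored-out helper itself:

    package atomicwrite

    import (
        "os"
        "path/filepath"
    )

    // writeFileAtomic is a hypothetical helper illustrating the
    // temp-file-and-rename pattern. The temporary file must live in
    // the same directory as the target so the rename stays atomic
    // (rename(2) is only atomic within a single filesystem).
    func writeFileAtomic(filename string, newContent []byte) error {
        dir, base := filepath.Dir(filename), filepath.Base(filename)
        tmp, err := os.CreateTemp(dir, base+".tmp-*")
        if err != nil {
            return err
        }
        tmpName := tmp.Name()
        defer os.Remove(tmpName) // best-effort cleanup on failure

        if _, err := tmp.Write(newContent); err != nil {
            tmp.Close()
            return err
        }
        if err := tmp.Sync(); err != nil { // flush before publishing
            tmp.Close()
            return err
        }
        if err := tmp.Close(); err != nil {
            return err
        }
        // The rename atomically replaces filename: readers see either
        // the old content or the new content, never a truncated mix.
        return os.Rename(tmpName, filename)
    }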
This originated in the cliconfig code to write out credentials files. The
Windows implementation of this in particular was quite onerous to get
right because it needs a very specific sequence of operations to avoid
running into exclusive file locks, and so by factoring this out with
only cosmetic modification we can avoid repeating all of that engineering
effort for other atomic file writing use-cases.
Terraform v0.10 introduced .terraform/plugins as a cache directory for
automatically-installed plugins. Terraform v0.13 later reorganized the
directory structure inside, but retained its purpose as a cache.
The local cache used to also serve as a record of specifically which
packages were selected in a particular working directory, with the intent
that a second run of "terraform init" would always select the same
packages again. That meant that in some sense it behaved a bit like a
local filesystem mirror directory, even though that wasn't its intended
purpose.
Due to some unfortunate miscommunications, somewhere along the line we
published some documentation that _recommended_ using the cache directory
as if it were a filesystem mirror directory when working with Terraform
Cloud. That was really only working as an accident of implementation
details, and Terraform v0.14 is now going to break that because the source
of record for the currently-selected provider versions is now the
public-facing dependency lock file rather than the contents of an existing
local cache directory on disk.
After some consideration of how to move forward here, this commit
implements a compromise that tries to avoid silently doing anything
surprising while still giving useful guidance to folks who were previously
using the unsupported strategy. Specifically:
- The local cache directory will now be .terraform/providers rather than
.terraform/plugins, because .terraform/plugins is effectively "poisoned"
by the incorrect usage, which we can't reliably distinguish from the
correct usage of prior versions.
- The .terraform/plugins directory is now the "legacy cache directory". It
is intentionally _not_ a filesystem mirror directory, because that
would risk incorrectly interpreting providers automatically installed
by Terraform v0.13 as if they were a local mirror, and thus upgrades
and checksum fetches from the origin registry would be blocked.
- Because of the previous two points, someone who _was_ trying to use the
legacy cache directory as a filesystem mirror would see installation
fail for any providers they manually added to the legacy directory.
To avoid leaving that user stumped as to what went wrong, there's a
heuristic for the case where a non-official provider fails installation
and yet we can see it in the legacy cache directory. If that heuristic
matches then we'll produce a warning message hinting to move the
provider under the terraform.d/plugins directory (see the example
layout after this list), which is a _correct_ location for "bundled"
provider plugins that belong only to a single configuration (as
opposed to being installed globally on a system).
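For reference, directories under terraform.d/plugins use the same
registry-style layout as any filesystem mirror; the hostname, namespace,
and version below are hypothetical:

    terraform.d/plugins/
      registry.example.com/
        examplecorp/
          example/
            1.0.0/
              linux_amd64/
                terraform-provider-example_v1.0.0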
This does unfortunately mean that anyone who was following the
incorrectly-documented pattern will now encounter an error (and the
aforementioned warning hint) after upgrading to Terraform v0.14. This
seems like the safest compromise because Terraform can't automatically
infer the intent of files it finds in .terraform/plugins in order to
decide how best to handle them.
The internals of the .terraform directory are always considered
implementation detail for a particular Terraform version and so switching
to a new directory for the _actual_ cache directory fits within our usual
set of guarantees. It's definitely non-ideal in isolation, but
acceptable in the broader context of this problem, where the
alternative would be silent misbehavior when upgrading.
DecoderSpec may be called many times, and deeply recursive calls are
expensive. Since we cannot synchronize the Blocks themselves (they
are copied in parts of the code), we use a separate cache to store
the generated Specs.
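A minimal sketch of the separate-cache idea, with a hypothetical
buildSpec standing in for the recursive construction (the real cache
keying and types may differ):

    package configschema

    import "sync"

    type Block struct{ /* attribute and nested block definitions */ }
    type Spec interface{} // stands in for hcldec.Spec in this sketch

    // specCache lives outside the Blocks themselves: Blocks are copied
    // in parts of the code, so they cannot safely carry a mutex, but a
    // pointer-keyed cache still avoids rebuilding specs for any Block
    // that is reused by reference.
    var specCache sync.Map // map[*Block]Spec

    func decoderSpec(b *Block) Spec {
        if s, ok := specCache.Load(b); ok {
            return s
        }
        s := buildSpec(b) // hypothetical: the expensive recursive walk
        specCache.Store(b, s)
        return s
    }

    func buildSpec(b *Block) Spec { return struct{}{} }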
A few tests were inadvertently renamed, causing them to be skipped.
For some reason this is not caught by the `vet` pass that happens during
normal testing.
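For illustration, a hypothetical rename like the one below is enough to
hide a test: "go test" only runs functions whose names begin with
"Test", and vet has nothing to flag because the result is just an
ordinary function that happens to take *testing.T:

    package example

    import "testing"

    // Discovered and run by "go test" as expected.
    func TestResourceFoo(t *testing.T) {}

    // Silently never run: the name no longer begins with "Test".
    func TetsResourceFoo(t *testing.T) {}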
When an attribute's sensitivity changes, but its value remains the same,
we consider this an update operation for the plan. This commit updates
the diff renderer to match this, detecting and displaying the change in
sensitivity.
Previously, the renderer would detect no changes to the value of the
attribute and consider it a no-op action. This resulted in the
attribute being suppressed when the plan was rendered in concise mode.
This is achieved with a new helper function, ctyEqualValueAndMarks. We
call this function whenever we want to check that two values are equal
in order to determine whether the action is update or no-op.
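A minimal sketch of what such a helper can look like with the cty API;
the real implementation may handle unknowns and mark comparison
differently:

    package renderer

    import (
        "reflect"

        "github.com/zclconf/go-cty/cty"
    )

    // ctyEqualValueAndMarks reports whether two values are equal in
    // both their underlying content and their deeply-collected marks,
    // so that a change in sensitivity alone still reads as an update.
    func ctyEqualValueAndMarks(old, new cty.Value) bool {
        oldVal, oldMarks := old.UnmarkDeep()
        newVal, newMarks := new.UnmarkDeep()
        // RawEquals is a strict comparison that copes with unknowns.
        return oldVal.RawEquals(newVal) &&
            reflect.DeepEqual(oldMarks, newMarks)
    }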
The ProviderConfigTransformer was using only the provider FQN to attach
a provider configuration to the provider, but what it needs to do is
find the local name for the given provider FQN (which may not match the
type name) and use that when searching for matching provider
configuration.
Fixes #26556
This will also be backported to the v0.13 branch.
The legacy tests never had to account for outputs in the plan. This
path is not used outside of the old builtin test provider, so just
work around the output changes until we remove this completely.
The upgrade requirements for this release are considerably more modest
than for Terraform v0.13, so this time we just have some notes about a
few changes in behavior that may be impactful to some users.
This first pass is intended to be included as part of a forthcoming beta
testers' guide as we begin the v0.14 beta testing period. We will make
further changes to this upgrade guide based on feedback from those who
participate in the beta process.
Note that this upgrade guide is not intended as release marketing material
and so its presentation is focused on addressing concerns users might
encounter while upgrading. We'll share highlights from the release in
other contexts, such as the changelog and in the product blog.
The state is not loaded here with any marks, so we cannot rely on marks
alone for equality comparison. Compare both the state and the
configuration sensitivity before creating the OutputChange.
Since root outputs can now use the planned changes, we can directly
insert the correct applyable or destroyable node into the graph during
plan and apply, and it will remove the outputs if they are being
destroyed.
This builds on an experimental feature in the underlying cty library which
allows marking specific attributes of an object type constraint as
optional, which in turn modifies how the cty conversion package handles
missing attributes in a source value: it will silently substitute a null
value of the appropriate type rather than returning an error.
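A minimal sketch of that underlying cty behavior, using the
experimental API as it existed in go-cty at the time (names may have
changed since):

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
        "github.com/zclconf/go-cty/cty/convert"
    )

    func main() {
        // An object type constraint where "port" is marked optional.
        ty := cty.ObjectWithOptionalAttrs(map[string]cty.Type{
            "host": cty.String,
            "port": cty.Number,
        }, []string{"port"})

        // A source value that omits the optional attribute entirely.
        v := cty.ObjectVal(map[string]cty.Value{
            "host": cty.StringVal("example.com"),
        })

        // Conversion substitutes a null of the appropriate type for
        // the missing optional attribute instead of returning an error.
        got, err := convert.Convert(v, ty)
        if err != nil {
            panic(err)
        }
        fmt.Printf("%#v\n", got.GetAttr("port")) // cty.NullVal(cty.Number)
    }

In the extended typeexpr syntax this corresponds to a constraint like
object({ host = string, port = optional(number) }).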
In order to implement the experiment this commit temporarily forks the
HCL typeexpr extension package into a local internal/typeexpr package,
where I've extended the type constraint syntax to allow annotating object
type attributes as being optional using the HCL function call syntax.
If the experiment is successful -- both at the Terraform layer and in
the underlying cty library -- we'll likely send these modifications to
upstream HCL so that other HCL-based languages can potentially benefit
from this new capability.
Because it's experimental, the optional attribute modifier is allowed only
with an explicit opt-in to the module_variable_optional_attrs experiment.
We record output changes in the plan, but don't currently use them for
anything other than display. If we have a wholly known output value
stored in the plan, we should prefer that for apply in order to ensure
consistency with the planned values. This also avoids cases where
evaluation during apply cannot happen correctly, like when all resources
are being removed or we are executing a destroy.
We also need to record output Delete changes when the plan is for a
destroy operation. Otherwise, without a change, the apply step will
attempt to evaluate the outputs, causing errors or leaving them in
the state with stale values.