The original EvalReadState node is used only by `NodeAbstractResource`s,
so I've created a new method on NodeAbstractResource which does the same
thing as EvalReadState. When the EvalNode refactor project is complete,
EvalReadState will be removed entirely.
In order to ensure all the starting values agree, and since
ignore_changes is only meant to apply to the configuration, we need to
process the ignore_changes values on the config itself rather than the
proposed value.
This ensures the proposed new value and the config value seen by
providers are coordinated, and still allows us to use the rules laid out
by objchange.AssertPlanValid to compare the result to the configuration.
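As a rough sketch of that ordering (using the cty library directly and a
hypothetical applyIgnoreChanges helper, not Terraform's actual plan code),
processing ignore_changes on the config amounts to copying the ignored
attributes from the prior state over the config value before the proposed
new value is computed:

```go
// Hypothetical sketch only: copy ignored attributes from the prior state
// over the config value so the config and the proposed new value agree.
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func applyIgnoreChanges(prior, config cty.Value, ignore []string) cty.Value {
	attrs := config.AsValueMap()
	for _, name := range ignore {
		if prior.Type().HasAttribute(name) {
			attrs[name] = prior.GetAttr(name)
		}
	}
	return cty.ObjectVal(attrs)
}

func main() {
	prior := cty.ObjectVal(map[string]cty.Value{
		"ami":  cty.StringVal("ami-111"),
		"tags": cty.StringVal("old"),
	})
	config := cty.ObjectVal(map[string]cty.Value{
		"ami":  cty.StringVal("ami-222"),
		"tags": cty.StringVal("new"),
	})
	// With ignore_changes = [ami], the config used for planning keeps ami-111.
	fmt.Printf("%#v\n", applyIgnoreChanges(prior, config, []string{"ami"}))
}
```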
ignore_changes should only exclude changes to the resource arguments,
and not alter the returned value from PlanResourceChange. This would
affect very few providers, since most current providers don't actively
create their plan, and those that do should be generating computed
values here rather than modifying existing ones.
In earlier commits we started to make the installation codepath
context-aware so that it could be canceled in the event of a SIGINT, but
we didn't complete wiring that through the API of the getproviders
package.
Here we make the getproviders.Source interface methods, along with some
other functions that can make network requests, take a context.Context
argument and act appropriately if that context is cancelled.
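As a generic illustration of the pattern (fetchJSON is a made-up helper,
not part of getproviders), a network request built from the context aborts
promptly when that context is cancelled:

```go
// Generic sketch of a context-aware network call, not Terraform's code:
// building the request with http.NewRequestWithContext means cancelling the
// context aborts the request in flight.
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"
)

func fetchJSON(ctx context.Context, url string) ([]byte, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err // wraps context.Canceled if ctx was cancelled mid-request
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	body, err := fetchJSON(ctx, "https://registry.terraform.io/.well-known/terraform.json")
	fmt.Println(len(body), err)
}
```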
The main providercache.Installer.EnsureProviderVersions method now also
has some context-awareness so that it can abort its work early if its
context reports any sort of error. That avoids waiting for the process
to wind through all of the remaining iterations of the various loops,
logging each request failure separately, and instead returns just
a single aggregate "canceled" error.
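The shape of that early abort is roughly as follows; installAll and
installOne are hypothetical stand-ins rather than the real installer code:

```go
// Sketch of the early-abort shape: checking ctx.Err() once per iteration
// turns many per-request failures into a single aggregate "canceled" error.
package main

import (
	"context"
	"fmt"
)

// installOne is a hypothetical placeholder for installing a single provider.
func installOne(ctx context.Context, provider string) error { return nil }

func installAll(ctx context.Context, providers []string) error {
	for _, p := range providers {
		if err := ctx.Err(); err != nil {
			return fmt.Errorf("provider installation was canceled: %w", err)
		}
		if err := installOne(ctx, p); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	cancel() // simulate an interrupt arriving before the loop starts
	fmt.Println(installAll(ctx, []string{"hashicorp/null", "hashicorp/random"}))
}
```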
We can then set things up in the "terraform init" and
"terraform providers mirror" commands so that the context will be
cancelled if we get an interrupt signal, allowing provider installation
to abort early while still atomically completing any local side effects
that may have started.
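As a standard-library sketch of that wiring (the real commands plug into
their existing interrupt handling rather than this exact code):

```go
// Generic sketch, not the actual command wiring: derive a context that is
// cancelled when the user sends SIGINT, then pass it into the installer.
package main

import (
	"context"
	"fmt"
	"os/signal"
	"syscall"
)

func main() {
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT)
	defer stop()

	// Any installation work given this ctx can abort early on interrupt,
	// while still finishing whatever local side effects it already started.
	<-ctx.Done()
	fmt.Println("provider installation canceled:", ctx.Err())
}
```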
We only preserve these major-version upgrade commands for one major version after
they are added, because our upgrade path assumes moving forward only one
major version at a time. Now that our main branch is tracking towards
Terraform 0.14, we no longer need the 0.13upgrade subcommand.
This also includes some minor adjustments to the 0.12upgrade command to
align the terminology used in the output of both commands. We usually
use the word "deprecated" to mean that something is still available but
no longer recommended, but neither of these commands is actually available
anymore, so "removed" is clearer.
We could in principle have removed even the removal notice for 0.12upgrade
here, but it's relatively little code and not a big deal to keep around
to help prompt those who might try to upgrade directly from 0.11 to 0.14.
We may still remove the historical configuration upgrade commands prior to
releasing Terraform 1.0, though.
* Put link to tutorial in its own section, call it a tutorial instead of a guide, and use the new canonical URL.
* Mention limitations of using import with a remote backend
* Typo fix
Co-authored-by: Nick Fagerlund <nick.fagerlund@gmail.com>
* Fix taint and untaint commands when in a workspace
Fixes #22157. Removes DefaultStateFilepath as the default for the
-state flag, allowing workspaces to be used properly.
* Update test with modern state types
Co-authored-by: Kristin Laemmert <mildwonkey@users.noreply.github.com>
In Terraform 0.11 and earlier, the "terraform fmt" command was very
opinionated in the interests of consistency. While that remains its goal,
for pragmatic reasons Terraform 0.12 significantly reduced the number
of formatting behaviors in the fmt command. We've held off on introducing
0.12-and-later-flavored cleanups out of concern it would make it harder
to maintain modules that are cross-compatible with both Terraform 0.11
and 0.12, but with this change aimed to land in 0.14 -- two major releases
later -- our new goal is to help those who find older Terraform language
examples learn about the more modern idiom.
More rules may follow later, now that the implementation is set up to
allow modifications to tokens as well as modifications to whitespace, but
for this initial pass the command will now apply the following formatting
conventions:
- 0.11-style quoted variable type constraints will be replaced with their
0.12 syntax equivalents. For example, "string" becomes just string.
(This change quiets a deprecation warning.)
- Collection type constraints that don't specify an element type will
be rewritten to specify the "any" element type explicitly, so
list becomes list(any).
- Arguments whose expressions consist of a quoted string template with
only a single interpolation sequence inside will be "unwrapped" to be
the naked expression instead, which is functionally equivalent.
(This change quiets a deprecation warning.)
- Block labels are given in quotes.
Two of the rules above come from a secondary motivation: continuing down
the deprecation path for two existing warnings, so authors can have both
warnings quieted automatically by "terraform fmt" without needing to run
any third-party tools.
All of these rules match with current documented idiom as shown in the
Terraform documentation, so anyone who follows the documented style should
see no changes as a result of this. Those who have adopted other local
style will see their configuration files rewritten to the standard
Terraform style, but it should not make any changes that affect the
functionality of the configuration.
There are some further similar rewriting rules that could be added in
future, such as removing 0.11-style quotes around various keyword or
static reference arguments, but this initial pass focused only on some
rules that have been proven out in the third-party tool
terraform-clean-syntax, from which much of this commit is a direct port.
For now this doesn't attempt to re-introduce any rules about vertical
whitespace, even though the 0.11 "terraform fmt" would previously apply
such changes. We'll be more cautious about those because the results of
those rules in Terraform 0.11 were often sub-optimal and so we'd prefer
to re-introduce those with some care to the implications for those who
may be using vertical formatting differences for some semantic purpose,
like grouping together related arguments.
Previously we were just using hclwrite.Format, a token-only formatting
pass. Now we'll do that via the full hclwrite parser, getting the
formatting as a side-effect of the parsing and re-serialization.
This should have no change in observable behavior as-is, but in a future
commit we'll add some additional processing rules that modify the syntax
tree before re-serializing it.
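A simplified sketch of that flow using the hclwrite API (error handling and
the Terraform-specific cleanups are omitted):

```go
// Sketch of formatting via the full hclwrite parser rather than the
// token-only hclwrite.Format pass.
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclwrite"
)

func format(src []byte, filename string) ([]byte, error) {
	// Parsing builds a syntax tree we could also modify (for example, to
	// rewrite 0.11-style type constraints) before re-serializing.
	f, diags := hclwrite.ParseConfig(src, filename, hcl.InitialPos)
	if diags.HasErrors() {
		return nil, diags
	}
	// Bytes re-serializes the tree, with formatting applied on the way out.
	return f.Bytes(), nil
}

func main() {
	out, err := format([]byte(`variable "example" {
type = "string"
}
`), "example.tf")
	fmt.Println(string(out), err)
}
```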
Previously formatting was just a simple wrapper around hclwrite.Format.
That remains true here, but the call is factored out into a separate
method in preparation for making it also do some Terraform-specific
cleanups in a future commit.
Sensitive values may not be used in outputs which are not also marked
as sensitive. This includes values nested within complex structures.
Note that sensitive values are unmarked before writing to state. This
means that sensitive values used in module outputs will have the
sensitive mark removed. At the moment, we have not implemented
sensitivity propagation from module outputs back to value marks.
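The mechanism can be illustrated with cty value marks directly; this is
only a sketch, not the output node implementation, and the "sensitive" mark
value here is arbitrary:

```go
// Illustration using cty marks directly, not Terraform's output evaluation
// code.
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	val := cty.ObjectVal(map[string]cty.Value{
		"user":     cty.StringVal("admin"),
		"password": cty.StringVal("hunter2").Mark("sensitive"),
	})

	// A sensitive leaf anywhere inside the structure means the whole output
	// must be declared sensitive.
	fmt.Println("contains sensitive:", val.ContainsMarked())

	// Marks are stripped before the value is written to state, so the
	// sensitivity is not preserved there.
	unmarked, _ := val.UnmarkDeep()
	fmt.Println("still marked after unmarking:", unmarked.ContainsMarked())
}
```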
This commit also reworks the tests for NodeApplyableOutput to cover
more existing behaviour, as well as this change.
Add a written bug triage process and link to it in README.md
Bug process
Remove goals, edit for brevity, and move how to write a good issue report to bug report template
Link HashiCorp GPG key in bug report template
Add summary links for triage process
A data source referencing another data source through depends_on should
not be forced to defer until apply. Data sources have no side effects,
so nothing should need to be applied. If the dependency has a
planned change due to a managed resource, the original data source will
also encounter that further down the list of dependencies.
This prevents a data source that is read during plan, for any reason, from
causing other data sources to be deferred until apply. It does not
change the behavior noticeably in 0.14, but because 0.13 still had
separate refresh and plan phases which could read the data source, the
deferral could cause many things downstream to become unexpectedly
unknown until apply.
When a sensitive variable has a complex type, any traversal of the
variable should still result in a sensitive value. This test uses a
sensitive `map(string)` and verifies that both plan and state output
include the appropriate sensitive marks for the resource attribute.
NodePlannableResource and NodeApplyableResource EvalTree()s have been
replaced with Execute() nodes and straight-through code. Both called
EvalWriteResourceState and were the only functions to use it, so I chose
to replace EvalWriteResourceState entirely with straight-through code
(by copying the contents into the two locations).
As we continue iterating towards saving valid hashes for a package in a
depsfile lock file after installation and verifying them on future
installation, this prepares getproviders for the possibility of having
multiple valid hashes per package.
This will arise in future commits for two reasons:
- We will need to support both the legacy "zip hash" hashing scheme and
the new-style content-based hashing scheme because currently the
registry protocol is only able to produce the legacy scheme, but our
other installation sources prefer the content-based scheme. Therefore
packages will typically have a mixture of hashes of both types.
- Installing from an upstream registry will save the hashes for the
packages across all supported platforms, rather than just the current
platform, and we'll consider all of those valid for future installation
if we see both successful matching of the current platform checksum and
a signature verification for the checksums file as a whole.
This also includes some more preparation for the second case above in that
signatureAuthentication now supports AcceptableHashes and returns all of
the zip-based hashes it can find in the checksums file. This is a bit of
an abstraction leak because previously that authenticator considered its
"document" to just be opaque bytes, but we want to make sure that we can
only end up trusting _all_ of the hashes if we've verified that the
document is signed. Hopefully we'll make this better in a future commit
with some refactoring, but that's deferred for now in order to minimize
disruption to existing codepaths while we work towards a provider locking
MVP.
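The idea behind AcceptableHashes can be sketched as follows; the helper
name and the fake, truncated digests are made up for illustration, and the
real method works on the checksums document the authenticator already holds:

```go
// Rough sketch: parse a SHA256SUMS-style checksums document and return every
// zip hash in it, treating them as trustworthy only once the document's
// signature has been verified.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func acceptableHashes(shaSumsDoc string) []string {
	var hashes []string
	sc := bufio.NewScanner(strings.NewReader(shaSumsDoc))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) != 2 {
			continue
		}
		// "zh:" marks the legacy scheme: a raw SHA-256 of the zip archive.
		hashes = append(hashes, "zh:"+fields[0])
	}
	return hashes
}

func main() {
	// Fake digests, truncated for readability.
	doc := "0123456789abcdef  terraform-provider-null_3.0.0_linux_amd64.zip\n" +
		"fedcba9876543210  terraform-provider-null_3.0.0_darwin_amd64.zip\n"
	fmt.Println(acceptableHashes(doc))
}
```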
The logic for what constitutes a valid hash and how different hash schemes
are represented was starting to sprawl across many different files and
packages.
Consistent with other cases where we've used named types to gather the
definition of a particular string into a single place and have the Go
compiler help us use it properly, this introduces both getproviders.Hash
representing a hash value and getproviders.HashScheme representing the
idea of a particular hash scheme.
Most of this changeset is updating existing uses of primitive strings to
uses of getproviders.Hash. The new type definitions are in
internal/getproviders/hash.go.
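A condensed sketch of those types follows; the scheme constants and method
names here are illustrative, and the real definitions differ in detail:

```go
// Sketch of the named-types idea behind getproviders.Hash and
// getproviders.HashScheme.
package main

import (
	"fmt"
	"strings"
)

// Hash is a scheme-prefixed hash string, such as "h1:..." or "zh:...".
type Hash string

// HashScheme is the prefix identifying how a Hash was produced.
type HashScheme string

const (
	HashSchemeZip     HashScheme = "zh:" // legacy hash of the distribution zip
	HashSchemeContent HashScheme = "h1:" // content-based hash of the unpacked package
)

// HasScheme reports whether the hash uses the given scheme prefix.
func (h Hash) HasScheme(want HashScheme) bool {
	return strings.HasPrefix(string(h), string(want))
}

// New returns a hash of the receiving scheme with the given raw value.
func (s HashScheme) New(value string) Hash {
	return Hash(string(s) + value)
}

func main() {
	h := HashSchemeContent.New("c0ffee")
	fmt.Println(h, h.HasScheme(HashSchemeZip), h.HasScheme(HashSchemeContent))
}
```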
Although origin registries return specific [filename, hash] pairs, our
various installation methods can't produce a structured mapping
from platform to hash without breaking changes.
Therefore, as a compromise, we'll continue to do platform-specific checks
against upstream data in the cases where that's possible (installation
from origin registry or network mirror) but we'll treat the lock file as
just a flat set of equally-valid hashes, at least one of which must match
after we've completed whatever checks we've made against the
upstream-provided checksums/signatures.
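That flat-set rule reduces to a simple membership check, sketched here with
a hypothetical helper rather than the real depsfile code:

```go
// Sketch of the "flat set" check: a locked package is acceptable when at
// least one of its recorded hashes matches a hash we verified during this
// installation.
package main

import "fmt"

func anyHashMatches(locked, verified []string) bool {
	want := make(map[string]bool, len(locked))
	for _, h := range locked {
		want[h] = true
	}
	for _, h := range verified {
		if want[h] {
			return true
		}
	}
	return false
}

func main() {
	locked := []string{"zh:aaa", "zh:bbb", "h1:ccc"}
	verified := []string{"h1:ccc"}
	fmt.Println(anyHashMatches(locked, verified)) // true
}
```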
This includes only the minimal internal/getproviders updates required to
make this compile. A subsequent commit will update that package to
actually support the idea of verifying against multiple hashes.