Because the destroy plan only creates the necessary changes for apply to
remove all the resources, it does no reading of resources or data
sources, leading to stale data in the state. In most cases this is not a
problem, but when a provider configuration is using resource values, the
provider may not be able to run correctly during apply. In prior
versions of Terraform, the implicit refresh that happened during
`terraform destroy` would update the data sources and remove missing
resources from state as required.
The destroy plan graph has a minimal amount of information, so it is not
feasible to work the reading of resources into that operation without
completely replicating the normal plan graph, and updating the plan
graph and all of the destroy node implementations would also require a
considerable amount of refactoring. Instead, we can run a normal plan
and use it to refresh the state before creating the destroy plan. This
restores behavior similar to core versions prior to 0.14, and the
refresh can still be skipped using the `-refresh=false` CLI flag.
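A minimal sketch of that control flow, with illustrative stand-in types rather than the actual core implementation:

```go
package terraform

// State and Plan are illustrative stand-ins for Terraform core's real
// types; this sketches the control flow described above, not the code.
type State struct{ /* resource instance data */ }
type Plan struct{ PriorState *State }

type Context struct{ prevRunState *State }

// plan stands in for the normal plan walk, which reads resources and
// data sources and therefore yields a refreshed state.
func (c *Context) plan(s *State) (*Plan, error) { return &Plan{PriorState: s}, nil }

// destroyPlan stands in for building the destroy plan from a given state.
func (c *Context) destroyPlan(s *State) (*Plan, error) { return &Plan{PriorState: s}, nil }

func (c *Context) planDestroy(skipRefresh bool) (*Plan, error) {
	state := c.prevRunState
	if !skipRefresh { // -refresh=false would set skipRefresh
		// Run a normal plan first purely to refresh the state.
		refreshPlan, err := c.plan(state)
		if err != nil {
			return nil, err
		}
		state = refreshPlan.PriorState
	}
	return c.destroyPlan(state)
}
```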
The context used for Stop is more appropriately tied to the lifetime of
the provisioner rather than a call to the ProvisionResource method. In
some cases Stop can be called before ProvisionResource, causing a panic
in the provisioner. Rather than adding nil checks to the CancelFunc call
for Stop, create a base context to use for cancellation with both Stop
and Close methods.
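The shape of that fix, sketched with an illustrative wrapper type (not the actual plugin client code):

```go
package plugin

import "context"

// provisioner is an illustrative stand-in for the real provisioner client.
type provisioner struct {
	// ctx lives for the provisioner's lifetime rather than for a single
	// ProvisionResource call, so cancel is always non-nil.
	ctx    context.Context
	cancel context.CancelFunc
}

func newProvisioner() *provisioner {
	ctx, cancel := context.WithCancel(context.Background())
	return &provisioner{ctx: ctx, cancel: cancel}
}

// ProvisionResource derives its cancellation from the base context.
func (p *provisioner) ProvisionResource() error {
	select {
	case <-p.ctx.Done():
		return p.ctx.Err()
	default:
		// ... run the provisioner, honoring p.ctx ...
		return nil
	}
}

// Stop is safe to call at any time, even before ProvisionResource,
// because the CancelFunc exists from construction.
func (p *provisioner) Stop() error {
	p.cancel()
	return nil
}

func (p *provisioner) Close() error {
	p.cancel()
	return nil
}
```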
Recent changes to the human-readable rendering of outputs and console
values led to long integer values being presented in scientific
notation (e.g. 1.2345678e+07). While technically correct, this is an
unusual way to present integer values.
This commit changes the formatting mode to 'f', which never uses
scientific notation and displays integer values as a sequence of digits
instead (e.g. 12345678).
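The difference between the two modes can be seen with Go's math/big (whether the formatter uses big.Float directly is an assumption here; the mode semantics match strconv's):

```go
package main

import (
	"fmt"
	"math/big"
)

func main() {
	n := new(big.Float).SetInt64(12345678)

	fmt.Println(n.Text('g', -1)) // "1.2345678e+07" - scientific notation
	fmt.Println(n.Text('f', -1)) // "12345678"      - plain digits
}
```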
This variable mechanism was replaced long ago with an explicit `Allow
destroy plans` setting on a Terraform Cloud workspace, and no longer
does anything: https://www.terraform.io/docs/cloud/workspaces/settings.html#destruction-and-deletion
Rather than mention this new mechanism at all, though, I've removed the
requirement from here entirely - the reason being that a consideration
like this is no different from other permission concerns (e.g. "You must
have Apply permission on a workspace to `apply`"), and without
enumerating _all_ of these here - which doesn't seem appropriate - we
just remove this concern entirely.
Binding a sensitive value to a variable with custom validation rules
would cause a panic, as the validation expression carries the sensitive
mark when it is evaluated for truthiness. This commit drops the marks
before testing, which fixes the issue.
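The shape of the fix in cty terms (the helper name here is hypothetical):

```go
package validation

import "github.com/zclconf/go-cty/cty"

// conditionPasses reports whether an evaluated validation condition
// holds. The result may carry the sensitive mark when the rule refers
// to a marked variable value, and cty panics when testing a marked
// value directly, so the marks are dropped first.
func conditionPasses(result cty.Value) bool {
	unmarked, _ := result.Unmark() // discard marks before testing truthiness
	return unmarked.True()
}
```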
If the remote backend is connected to a Terraform Cloud workspace in
local operations mode, we disable the version check, as the remote
Terraform version is meaningless.
...and also shrink the explanation for alternate sharing approaches, a bit.
Actually, it looks like I already half-adopted it by accident. 😬 But this
commit adds it to the sidebar under "State", so users can browse to it. I'm
leaving the URL alone, because it's not urgent and we'll need to adjust a large
swath of URLs at some point anyway.
This change effectively stops presenting `terraform` as a provider in the normal
sense, and reduces /docs/providers/terraform/index.html to a ghost page in the
language section (to avoid breaking links for the time being). The message a
reader should get is that Terraform has one special built-in data source where
you don't need to think about the provider or its version.
As of December 18, 2020, we've redirected nearly all of the provider
documentation that used to live on terraform.io:
- For providers that got published on the Registry, we redirected each docs page
to the corresponding Registry docs page.
- For providers that never got adopted by a new publisher, we archived the
GitHub repository and redirected each docs page to the corresponding Markdown
source file on github.com. (For an example of these redirects, see
https://www.terraform.io/docs/providers/telefonicaopencloud/r/s3_bucket.html)
There are ten providers left that we haven't redirected. These ones got adopted
by new publishers and _will_ end up on the Registry, but they aren't quite ready
to ship and get their permanent redirects, and we don't want to sabotage their
SEO by 301ing to a temporary destination.
Isolate the test schema expansion, because having NestingSet
in the schema actually necessitates [] values in the AttrsJson.
While this didn't fail any tests when it was added, that
is scary, so isolate it to the one test using it.
These links largely still go somewhere useful, but they have some kind of issue
revealed by our new link checker:
- Some of them point to a stale URL that redirects, and can be updated to the
new destination.
- Some of them point to anchors that don't exist (anymore?) in the destination.
- Some of them end up redirecting unnecessarily due to how the server handles
directory URLs without trailing slashes. Sorry, I know that's pointless, just,
humor me for the time being so we can get our CI green. 😭
In a couple cases, I've added invisible anchors to destination pages, either to
preserve an old habit or because the current anchors kind of suck due to being
particularly long or meandering.
Terraform remote version conflicts are not a concern for operations. We
are in one of three states:
- Running remotely, in which case the local version is irrelevant;
- Workspace configured for local operations, in which case the remote
version is meaningless;
- Forcing local operations with a remote backend, which should only
happen in the Terraform Cloud worker, in which case the Terraform
versions by definition match.
This commit therefore disables the version check for operations (plan
and apply), which has the consequence of disabling it in Terraform Cloud
and Enterprise runs. In turn this enables Terraform Enterprise runs with
bundles which have a version that doesn't exactly match the bundled
Terraform version.
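A hedged sketch of the resulting gate (names are illustrative, not the actual remote backend code):

```go
package remote

// Diagnostics stands in for tfdiags.Diagnostics.
type Diagnostics []string

type Remote struct{}

// checkConstraints stands in for the remote version compatibility check.
func (b *Remote) checkConstraints() Diagnostics {
	// ... compare local and remote Terraform versions ...
	return nil
}

// verifyVersion skips the check entirely for plan and apply: in every
// operational state described above, a version mismatch is either
// irrelevant, meaningless, or impossible.
func (b *Remote) verifyVersion(isOperation bool) Diagnostics {
	if isOperation {
		return nil
	}
	return b.checkConstraints()
}
```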
* Add test for existing behavior, when a value contains a marked value
* Allow some marked values as for_each arguments
Rather than disallow values that have any marks
as for_each arguments, this makes the check more
nuanced: it disallows cases where the whole value
is marked (a whole map, or any set), but allows
cases where a user passes a map that has marked
element values while the keys are not sensitive
(see the sketch after this list).
* Add limitations section to for_each
Move limitations from a note to their own section,
to allow for expansion on disallowing sensitive values
in for_each
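That nuanced check, sketched in cty terms (the helper name is hypothetical):

```go
package plans

import "github.com/zclconf/go-cty/cty"

// forEachValueAllowed reports whether a for_each value passes the
// nuanced rule: the collection itself must not be marked, though map
// element values may be, since only the keys drive expansion. cty
// moves marks on set elements up to the set itself, so a set with
// sensitive elements is still rejected here.
func forEachValueAllowed(v cty.Value) bool {
	return !v.IsMarked()
}
```

For example, a map whose element values are marked as sensitive is allowed, while a wholly-marked map, or any set containing sensitive elements, is rejected.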