The documentation for [`terraform 0.13upgrade`](/docs/commands/0.13upgrade.html)
includes an example of running the upgrade process across all directories under
a particular prefix that contain `.tf` files using some common Unix command line
tools, which may be useful if you want to upgrade all modules in a single
repository at once.
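For illustration, one such pipeline might look like the following. This is only
a sketch: it assumes standard `find` and `xargs` behavior, and a `-yes` option
to suppress the tool's interactive confirmation prompt.

```
find . -name '*.tf' | xargs -n1 dirname | sort -u | xargs -n1 terraform 0.13upgrade -yes
```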
After you've added explicit provider source addresses to your configuration,
run `terraform init` again to re-run the provider installer.
-> **Action:** Either run [`terraform 0.13upgrade`](/docs/commands/0.13upgrade.html) for each of your modules, or manually update the provider declarations to use explicit source addresses.
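If you update the declarations by hand, the new syntax in each module looks like
the following; the provider and version constraint here are only an example:

```
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 2.70"
    }
  }
}
```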
See [The UI- and VCS-driven Run Workflow](/docs/cloud/run/ui.html) to learn how
to manually start a run after you select a Terraform v0.13 release for your
workspace.
If you remove a `resource` block (or a `module` block for a module that
contains `resource` blocks) before the first `terraform apply`, you may see
a message like this reflecting that Terraform cannot determine which provider
configuration the existing object ought to be managed by:
```
Error: Provider configuration not present
To work with aws_instance.example its original provider configuration at
provider["registry.terraform.io/-/aws"] is required, but it has been removed.
This occurs when a provider configuration is removed while objects created by
that provider still exist in the state. Re-add the provider configuration to
destroy aws_instance.example, after which you can remove the provider
configuration again.
```
In this specific upgrade situation the problem is actually the missing
`resource` block rather than the missing `provider` block: Terraform would
normally refer to the configuration to see if this resource has an explicit
`provider` argument that would override the default strategy for selecting
a provider. If you see the above after upgrading, re-add the resource mentioned
in the error message until you've completed the upgrade.
-> **Action:** After updating all modules in your configuration to use the new provider requirements syntax, run `terraform apply` to create a new state snapshot containing the new-style provider source addresses that are now specified in your configuration.
Terraform v0.13 includes a new `terraform providers mirror` subcommand, which
you can use to automatically populate a local directory based on the
requirements of the current configuration:
```
terraform providers mirror ~/.terraform.d/plugins
```
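If you instead reorganize existing files by hand, the directory layout nests
the registry hostname, namespace, provider type, version, and target platform.
For example (the provider, version, and platform shown here are placeholders):

```
~/.terraform.d/plugins/registry.terraform.io/hashicorp/aws/2.70.0/linux_amd64/terraform-provider-aws_v2.70.0_x4
```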
-> **Action:** If you use local copies of official providers rather than installing them automatically from Terraform Registry, adopt the new expected directory structure for your local directory either by running `terraform providers mirror` or by manually reorganizing the existing files.
### In-house Providers
If you use an in-house provider that is not available from an upstream registry
at all, after upgrading you will see an error similar to the following:
```
- Finding latest version of hashicorp/happycloud...
Error: Failed to install provider
Error while installing hashicorp/happycloud: provider registry
registry.terraform.io does not have a provider named
registry.terraform.io/hashicorp/happycloud
```
Terraform assumes that a provider without an explicit source address belongs
to the "hashicorp" namespace on `registry.terraform.io`, which is not true
for your in-house provider. Instead, you can use any domain name under your
control to establish a _virtual_ source registry to serve as a separate
namespace for your in-house providers.
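For example, if you control the hypothetical domain `example.com`, you might
choose `terraform.example.com` as the virtual registry hostname and declare the
provider in each module that uses it along these lines (the namespace and
version shown are placeholders):

```
terraform {
  required_providers {
    happycloud = {
      source  = "terraform.example.com/awesomecorp/happycloud"
      version = "1.0.0"
    }
  }
}
```

You can then place the provider's executable in a local plugin directory whose
path mirrors that source address, for example
`~/.terraform.d/plugins/terraform.example.com/awesomecorp/happycloud/1.0.0/linux_amd64/`
(where `linux_amd64` stands for your actual platform), so that `terraform init`
can install it without contacting any registry.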
If you wish, you can later run your own Terraform provider registry at the
specified hostname as an alternative to local installation, without any further
modifications to the above configuration. However, we recommend tackling that
only after your initial upgrade using the new local filesystem layout.
-> **Action:** If you use in-house providers that are not installable from a provider registry, assign them a new source address under a domain name you control and update your modules to specify that new source address.
If a configuration that uses one or more in-house providers has existing state
snapshots that include resources belonging to those providers, you'll also need
to perform a one-time migration of the provider references in the state, so
Terraform can understand them as belonging to your in-house providers rather
than to providers in the public Terraform Registry. If you are in this
situation, `terraform init` will produce the following error message after
you complete the configuration changes described above:
```
Error: Failed to query available provider packages
Could not retrieve the list of available versions for provider -/happycloud:
provider registry registry.terraform.io does not have a provider named
registry.terraform.io/-/happycloud
```
Provider source addresses starting with `registry.terraform.io/-/` are a special
way Terraform marks legacy addresses where the true namespace is unknown.
For providers that were automatically installable in Terraform 0.12, Terraform
0.13 can determine the new addresses automatically using a lookup table in the
public Terraform Registry, but for in-house providers you will need to provide
the appropriate mapping manually.
The `terraform state replace-provider` subcommand allows re-assigning provider
source addresses recorded in the Terraform state, and so we can use this
command to tell Terraform how to reinterpret the "legacy" provider addresses
as properly-namespaced providers that match with the provider source addresses
in the configuration.
~> **Warning:** The `terraform state replace-provider` subcommand, like all of the `terraform state` subcommands, will create a new state snapshot and write it to the configured backend. After the command succeeds the latest state snapshot will use syntax that Terraform v0.12 cannot understand, so you should perform this step only when you are ready to permanently upgrade to Terraform v0.13.
```
terraform state replace-provider 'registry.terraform.io/-/happycloud' 'terraform.example.com/awesomecorp/happycloud'
```
The command above asks Terraform to update any resource instance in the state
that belongs to a legacy (non-namespaced) provider called "happycloud" to
instead belong to the fully-qualified source address
`terraform.example.com/awesomecorp/happycloud`.
Whereas the configuration changes for provider requirements are made on a
per-module basis, the Terraform state captures data from throughout the
configuration (all of the existing module instances) and so you only need to
run `terraform state replace-provider` once per configuration.
Running `terraform init` again after completing this step should cause
Terraform to attempt to install `terraform.example.com/awesomecorp/happycloud`
and to find it in the local filesystem directory you populated in an earlier
step.
-> **Action:** If you use in-house providers that are not installable from a provider registry and your existing state contains resource instances that were created with any of those providers, use the `terraform state replace-provider` command to update the state to use the new source addressing scheme only once you are ready to commit to your v0.13 upgrade. (Terraform v0.12 cannot parse a state snapshot that was created by this command.)
## Destroy-time provisioners may not refer to other resources
Destroy-time provisioners allow introducing arbitrary additional actions into
the destroy phase of the resource lifecycle, but in practice the design of this
feature was flawed because it created the possibility for a destroy action
of one resource to depend on a create or update action of another resource,
which often leads either to dependency cycles or to incorrect behavior due to
unsuitable operation ordering.
In order to retain as many destroy-time provisioner capabilities as possible
while addressing those design flaws, Terraform v0.12.18 began reporting
deprecation warnings for any `provisioner` block setting `when = destroy` whose
configuration refers to any objects other than `self`, `count`, and `each`.
Addressing the flaws in the destroy-time provisioner design was a prerequisite
for new features in v0.13 such as module `depends_on`, so Terraform v0.13
concludes the deprecation cycle by making such references fatal errors:
```
Error: Invalid reference from destroy provisioner
Destroy-time provisioners and their connection configurations may only
reference attributes of the related resource, via 'self', 'count.index',
or 'each.key'.
References to other resources during the destroy phase can cause dependency
cycles and interact poorly with create_before_destroy.
```
Some existing modules using resource or other references inside destroy-time
provisioners can be updated by placing the destroy-time provisioner inside a
`null_resource` resource and copying any data needed at destroy time into that
resource's `triggers` map.
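For example, here is a sketch that assumes a hypothetical `aws_instance.example`
whose private IP address the provisioner needs at destroy time:

```
resource "null_resource" "example" {
  # Copy the value needed at destroy time into this resource, so it
  # remains available even when aws_instance.example is unavailable.
  triggers = {
    instance_private_ip = aws_instance.example.private_ip
  }

  provisioner "remote-exec" {
    when = destroy

    connection {
      type = "ssh"
      host = self.triggers.instance_private_ip
      # ... other connection settings ...
    }

    # Hypothetical cleanup command, for illustration only.
    inline = [
      "/usr/local/bin/decommission-node",
    ]
  }
}
```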
In the above example, the `null_resource.example.triggers` map is effectively
acting as a temporary "cache" for the instance's private IP address to
guarantee that a value will be available when the provisioner runs, even if
the `aws_instance.example` object itself isn't currently available.
The provisioner's `connection` configuration can refer to that value via
`self`, whereas referring directly to `aws_instance.example.private_ip` in that
context is forbidden.
[Provisioners are a last resort](/docs/provisioners/#provisioners-are-a-last-resort),
so we recommend avoiding both create-time and destroy-time provisioners wherever
possible. Other options for destroy-time actions include using `systemd` to
run commands within your virtual machines during shutdown or using virtual
machine lifecycle hooks provided by your chosen cloud computing platform,
both of which can help ensure that the shutdown actions are taken even if the
virtual machine is terminated in an unusual way.
-> **Action:** If you encounter the "Invalid reference from destroy provisioner" error message after upgrading, reorganize your destroy-time provisioners to depend only on self-references, and consider other approaches if possible to avoid using destroy-time provisioners at all.
## Data resource reads can no longer be disabled by `-refresh=false`
In Terraform v0.12 and earlier, Terraform would read the data for data
resources during the "refresh" phase of `terraform plan`, which is the same
phase where Terraform synchronizes its state with any changes made to
remote objects.
An important prerequisite for properly supporting `depends_on` for both
data resources and modules containing data resources was to change the data
resource lifecycle so that data is now read during the _plan_ phase, allowing
dependencies on managed resources to be respected properly.
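For example, a data resource can now use `depends_on` to wait for a managed
resource defined elsewhere in the same module; the resource type and names
below are only illustrative:

```
data "aws_s3_bucket_object" "example" {
  bucket = "example-bucket"
  key    = "example.txt"

  # Because data is now read during the plan phase, Terraform can respect
  # this dependency on a managed resource defined elsewhere in this module.
  depends_on = [aws_s3_bucket_object.example]
}
```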
If you were previously using `terraform plan -refresh=false` or
`terraform apply -refresh=false` to disable the refresh phase, you will find
that under Terraform 0.13 this will continue to disable synchronization of
managed resources (declared with `resource` blocks) but will no longer
disable the reading of data resources (declared with `data` blocks).
~> Updating the data associated with data resources is crucial to producing an
accurate plan, and so there is no replacement mechanism in Terraform v0.13 to
disable this behavior.