Start fixing plan tests that don't expect data sources to be in the
plan. A few were just checking that Read was never called, and some
expected the data source to be nil.
In order to update data sources correctly when their configuration
changes, they need to be evaluated during plan. Since the plan working
state isn't saved, store any data source reads as plan changes to be
applied later. This is currently abusing the Update plan action to
indicate that the data source was read and needs to be applied to state.
We can possibly add a Store action for data sources if this approach
works out. The Read action still indicates that the data source was
deferred to the Apply phase.
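A rough sketch of that decision, assuming the package layout of the time
(github.com/hashicorp/terraform/plans); the helper name and surrounding
wiring are illustrative, and only the plans.Update/plans.Read actions are
the real constants:

```go
package example

import (
	"github.com/hashicorp/terraform/plans"
	"github.com/zclconf/go-cty/cty"
)

// planDataSourceChange is a hypothetical helper showing the action choice
// described above: Update means "already read during plan, just store the
// result in state at apply", Read means "reading is deferred to apply".
func planDataSourceChange(mustDefer bool, prior, planned cty.Value) plans.Change {
	action := plans.Update
	if mustDefer {
		action = plans.Read
	}
	return plans.Change{
		Action: action,
		Before: prior,
		After:  planned,
	}
}
```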
We also fully handle any data source depends_on changes. Now that all
the transitive resource dependencies are known at the time of
evaluation, we can check the plan to determine if there are any changes
in the dependencies and selectively defer reading the data source.
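Conceptually the deferral check looks something like this sketch; the
address matching is left to a caller-supplied function, since the exact
address types involved are beside the point here:

```go
package example

import "github.com/hashicorp/terraform/plans"

// dependenciesHavePlannedChanges reports whether any planned resource change
// belongs to one of the data source's transitive depends_on dependencies.
// If it returns true, the read is deferred until apply.
func dependenciesHavePlannedChanges(changes *plans.Changes, isDependency func(*plans.ResourceInstanceChangeSrc) bool) bool {
	for _, ch := range changes.Resources {
		if ch.Action == plans.NoOp {
			continue // no change planned for this resource instance
		}
		if isDependency(ch) {
			return true
		}
	}
	return false
}
```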
We need to load the state during refresh, so that even if the data
source can't be read due to `depends_on`, the state can be saved back
again to prevent it from being lost altogether.
This is a step towards having data sources refresh like resources,
reading from their saved state value.
This transformer is what will provide the data sources with the
transitive dependencies needed to determine whether they can be read
during plan or must be deferred.
* internal/getproviders: fix panic with invalid path parts
If the search path is missing a directory, the provider installer would
try to create an addrs.Provider with the wrong parts. For example, if the
hostname was missing (as in the test case), it would call
addrs.NewProvider with (namespace, typename, version). This adds a
validation step for each part before calling addrs.NewProvider to avoid
the panic.
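The validation step is roughly the following sketch; the function name and
error messages are illustrative, but addrs.ParseProviderPart and
svchost.ForComparison are the existing helpers for checking each part
before addrs.NewProvider (which panics on invalid input):

```go
package example

import (
	"fmt"

	svchost "github.com/hashicorp/terraform-svchost"
	"github.com/hashicorp/terraform/addrs"
)

// providerFromPathParts validates each inferred path segment before building
// the provider address, returning an error for malformed cache paths instead
// of panicking inside addrs.NewProvider.
func providerFromPathParts(hostname, namespace, typeName string) (addrs.Provider, error) {
	host, err := svchost.ForComparison(hostname)
	if err != nil {
		return addrs.Provider{}, fmt.Errorf("invalid provider registry hostname %q: %s", hostname, err)
	}
	ns, err := addrs.ParseProviderPart(namespace)
	if err != nil {
		return addrs.Provider{}, fmt.Errorf("invalid provider namespace %q: %s", namespace, err)
	}
	typ, err := addrs.ParseProviderPart(typeName)
	if err != nil {
		return addrs.Provider{}, fmt.Errorf("invalid provider type %q: %s", typeName, err)
	}
	return addrs.NewProvider(host, ns, typ), nil
}
```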
This is a port of the retry/timeout logic added in #24260 and #24259,
using the same environment variables to configure the retry and timeout
settings.
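A sketch of that configuration, assuming the environment variable names
introduced by those PRs (TF_REGISTRY_DISCOVERY_RETRY and
TF_REGISTRY_CLIENT_TIMEOUT) and the go-retryablehttp client; the defaults
and wiring shown here are illustrative:

```go
package example

import (
	"net/http"
	"os"
	"strconv"
	"time"

	retryablehttp "github.com/hashicorp/go-retryablehttp"
)

// newRegistryHTTPClient builds an HTTP client whose retry count and request
// timeout can be overridden from the environment, mirroring the registry
// client behaviour being ported here.
func newRegistryHTTPClient() *http.Client {
	retryMax := 1               // illustrative default
	timeout := 10 * time.Second // illustrative default

	if v := os.Getenv("TF_REGISTRY_DISCOVERY_RETRY"); v != "" {
		if n, err := strconv.Atoi(v); err == nil {
			retryMax = n
		}
	}
	if v := os.Getenv("TF_REGISTRY_CLIENT_TIMEOUT"); v != "" {
		if n, err := strconv.Atoi(v); err == nil {
			timeout = time.Duration(n) * time.Second
		}
	}

	rc := retryablehttp.NewClient()
	rc.RetryMax = retryMax
	rc.HTTPClient.Timeout = timeout
	return rc.StandardClient()
}
```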
That name tag was left in only to reduce the diff during
implementation. Fix the naming for these nodes now so it is correct, and
prevent any possible name collision between types.
Previously the diagnostics from the config loaders (earlyconfig and
regular) were only appended to the overall diags if an error was found.
This adds all diagnostics from the regular config loader so that any
generated warnings will be displayed, even if there are no errors.
I did not add the `earlyconfig` warnings since they will be displayed if
there is an error and are likely to be duplicated by the config loader.
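In terms of the loader call, the change amounts to appending the
diagnostics unconditionally rather than only on error; a minimal sketch
with an illustrative function name:

```go
package example

import (
	"github.com/hashicorp/terraform/configs"
	"github.com/hashicorp/terraform/configs/configload"
	"github.com/hashicorp/terraform/tfdiags"
)

// loadConfigKeepingWarnings sketches the new behaviour: the loader's
// diagnostics are always appended, so warnings are reported even when the
// load itself succeeds.
func loadConfigKeepingWarnings(loader *configload.Loader, rootDir string) (*configs.Config, tfdiags.Diagnostics) {
	var diags tfdiags.Diagnostics
	config, hclDiags := loader.LoadConfig(rootDir)
	diags = diags.Append(hclDiags) // previously appended only when hclDiags.HasErrors()
	if hclDiags.HasErrors() {
		return nil, diags
	}
	return config, diags
}
```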
* internal/registry source: return error if requested provider version protocols are not supported
* getproviders: move responsibility for protocol compatibility checks into the registry client
The original implementation had the providercache checking the provider
metadata for protocol compatibility, but this is only relevant for the
registry source so it made more sense to move the logic into
getproviders.
This also addresses an issue where we were pulling the metadata for
every provider version until we found one that was supported. I've
extended the registry client to unmarshal the protocols in
`ProviderVersions` so we can filter through that list, instead of
pulling each version's metadata.
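The filtering works along these lines; the types below are simplified
stand-ins for the real registry response structs:

```go
package example

import (
	"strconv"
	"strings"
)

// providerVersion is a simplified stand-in for one entry in the registry's
// ProviderVersions response, which now carries the protocol versions that
// each release supports.
type providerVersion struct {
	Version   string
	Protocols []string // e.g. ["4.0", "5.1"]
}

// filterSupportedVersions keeps only the versions that speak a protocol
// major version this Terraform release supports, so no per-version metadata
// request is needed just to discover compatibility.
func filterSupportedVersions(all []providerVersion, supportedMajor map[int]bool) []providerVersion {
	var compatible []providerVersion
	for _, v := range all {
		for _, proto := range v.Protocols {
			major, err := strconv.Atoi(strings.SplitN(proto, ".", 2)[0])
			if err == nil && supportedMajor[major] {
				compatible = append(compatible, v)
				break
			}
		}
	}
	return compatible
}
```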
When looking up the namespace for a legacy provider source, we need to
use the /v1/providers/-/{name}/versions endpoint. For non-HashiCorp
providers, the /v1/providers/-/{name} endpoint returns a 404.
This commit updates the LegacyProviderDefaultNamespace method and the
mock registry servers accordingly.
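For reference, building the request path looks like this sketch (response
handling omitted; the helper name is illustrative):

```go
package example

import (
	"fmt"
	"net/url"
	"strings"
)

// legacyVersionsURL builds the versions endpoint used for the legacy
// namespace lookup; the plain /v1/providers/-/{name} endpoint 404s for
// non-HashiCorp providers, so it can't be used here.
func legacyVersionsURL(baseURL, name string) string {
	return fmt.Sprintf("%s/v1/providers/-/%s/versions",
		strings.TrimSuffix(baseURL, "/"), url.PathEscape(name))
}
```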
Adding a transformer to attach any transitive DependsOn references to
data sources during plan. Refactored the ReferenceMap out of the
ReferenceTransformer so it can be reused by both transformers.
GraphNodeAttachDependsOn gives us a method for attaching all transitive
resource dependencies found through depends_on references, so that data
sources can determine whether they can be read during plan. This is done
by inspecting the changes of all dependency resources, and delaying the
read until apply if any changes are planned.
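A sketch of the interface, with the caveat that the real method set in the
transform code may differ:

```go
package example

import "github.com/hashicorp/terraform/addrs"

// GraphNodeAttachDependsOn is sketched here as described above: the
// transformer calls AttachDependsOn with every transitive resource
// reachable through the node's depends_on references, and the node later
// compares those addresses against the planned changes to decide whether
// its read must wait until apply.
type GraphNodeAttachDependsOn interface {
	// AttachDependsOn receives the transitive set of resource dependencies.
	AttachDependsOn([]addrs.ConfigResource)
}
```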
Validate providers in expanding modules. Expanding modules cannot have
provider configurations with non-empty configs, which includes having a
version configured. If an empty or alias-only block is passed, the
provider must be passed through the providers argument on the module
call.
If a configuration had multiple blocks in the versions.tf file, that
file would be added to the `rewritePaths` list multiple times. We would
then remove
it from this slice, but only once, and so the output file would later be
rewritten to remove the required providers block.
This commit uses a set instead of a list to prevent this case, and adds
a regression test.
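The fix amounts to collecting the paths in a set, so multiple blocks in
one file collapse into a single entry; a minimal sketch with illustrative
names:

```go
package example

// filesToRewrite returns the set of files that still need rewriting. Using
// a map as a set means a file containing several relevant blocks is
// recorded once and removed once, rather than lingering in a slice after a
// single removal.
func filesToRewrite(blockFilenames []string, chosenFile string) map[string]bool {
	paths := make(map[string]bool)
	for _, fn := range blockFilenames {
		paths[fn] = true
	}
	delete(paths, chosenFile) // the chosen output file is handled separately
	return paths
}
```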
Instead of using providers.tf as the default output file for the
upgrader, we now default to versions.tf. This means that if the
configuration has no `required_providers` blocks at all, or has
multiple, the provider version requirements will be stored in the
versions.tf file.
We now also update the versions.tf file to set a `required_version`
attribute in the first `terraform` block, with value ">= 0.13". This
is similar to the behaviour of the 0.12upgrade command, and signals that
the configuration should not be used with older versions of Terraform.
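For illustration only (not the upgrader's actual code), generating that
attribute with hclwrite produces the kind of versions.tf content described
above:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2/hclwrite"
	"github.com/zclconf/go-cty/cty"
)

func main() {
	f := hclwrite.NewEmptyFile()
	tf := f.Body().AppendNewBlock("terraform", nil)
	tf.Body().SetAttributeValue("required_version", cty.StringVal(">= 0.13"))
	// The upgrader would also place the required_providers block in here.
	fmt.Print(string(f.Bytes()))
}
```

Running this prints a terraform block containing only the
required_version attribute set to ">= 0.13".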
Validation is supposed to be a local-only operation, but Configure implementations
are allowed to make outgoing requests to remote APIs to validate settings.