We need to run the force CBD transformer during apply too, both to
ensure we can rely on the `CreateBeforeDestroy()` status for dependants
during apply, and to ensure that the correct status is stored into
state.
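As a rough sketch of that change, assuming the same transformer type the
plan graph builder already uses (the surrounding transformer list is
abbreviated and illustrative):

```go
// Sketch: include the force-CBD transformer in the apply graph builder's
// steps as well as plan's, so the CreateBeforeDestroy() status is
// consistent for dependants during apply and is persisted to state.
steps := []GraphTransformer{
	// ...

	// Propagate forced create-before-destroy before the apply walk.
	&ForcedCBDTransformer{},

	// ...
}
```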
* addrs: replace NewLegacyProvider with NewDefaultProvider in ParseProviderSourceString
ParseProviderSourceString was still defaulting to NewLegacyProvider when
encountering single-part strings. This has been fixed.
This commit also adds a new function, IsProviderPartNormalized, which
returns a bool indicating whether the given string matches its
normalized form (as normalized by ParseProviderPart), or an error.
This is intended for use by the configs package when decoding provider
configurations.
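Roughly, the new helper has this shape (a sketch; details of the real
implementation may differ):

```go
// IsProviderPartNormalized reports whether the given string is already in
// the normalized form produced by ParseProviderPart; it returns an error
// if the string is not a valid provider part at all.
func IsProviderPartNormalized(str string) (bool, error) {
	normalized, err := ParseProviderPart(str)
	if err != nil {
		return false, err
	}
	return str == normalized, nil
}
```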
* terraform: fix provider local names in tests
* configs: validate that all provider names are normalized
The addrs package normalizes all source strings, but not the local
names. This caused very odd behavior if, for example, a provider local
name was capitalized in one place and not another. We considered enabling
case-sensitivity for provider local names, but decided that since this
was not something that worked in previous versions of terraform (and we
have yet to encounter any use cases for this feature) we could generate
an error if the provider local name is not normalized. This error also
provides instructions on how to fix it.
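A simplified sketch of what that check looks like when decoding a
provider block (the diagnostic wording and surrounding variable names
here are illustrative):

```go
// Sketch: reject a provider local name that is not already normalized and
// point the user at the normalized (lowercase) spelling as the fix.
if ok, err := addrs.IsProviderPartNormalized(name); err != nil || !ok {
	diags = append(diags, &hcl.Diagnostic{
		Severity: hcl.DiagError,
		Summary:  "Invalid provider local name",
		Detail: fmt.Sprintf("%q is not a normalized provider local name; write it as %q instead.",
			name, strings.ToLower(name)),
		Subject: declRange.Ptr(), // declRange: the name's source range (illustrative)
	})
}
```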
* configs: refactor decodeProviderRequirements to consistently not set an FQN when there are errors
HashiBot labels issues as "crash" and "bug" when they contain "panic:". This causes issues to bypass human triage, which means that provider-specific panics are put in our issue list rather than being labeled correctly. This removes that rule to allow for human labeling.
The new data source planning logic no longer needs a separate action,
and the apply status can be determined from whether the After value is
complete or not.
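Sketch of the idea, with illustrative variable names:

```go
// Sketch: during apply, a data source change whose planned new value is
// wholly known was already read at plan time and only needs to be written
// to state; otherwise the read was deferred and must happen now.
if plannedNewVal.IsWhollyKnown() {
	// Read completed during plan; commit the planned value to state.
} else {
	// Config referenced unknown values; perform the read during apply.
}
```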
Ensure that a data source with depends_on not only plans to update
during refresh, but also evaluates correctly in the plan, ensuring its
dependencies are planned accordingly.
The state was not being set, so the change was not evaluated correctly
for dependant resources.
Remove the use of cty.NilVal in readDataSource; only one place was
using it, so the code could just be moved out.
Fix a bunch of places where warnings would be lost.
Rather than re-read the data source during every plan cycle, apply the
config to the prior state, and skip reading if there is no change.
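A sketch of that short-circuit, assuming the objchange helpers from the
plans package (surrounding plumbing and variable names are illustrative):

```go
// Sketch: merge the configuration into the prior state and skip
// re-reading the data source when nothing has changed.
proposed := objchange.ProposedNew(schema, priorVal, configVal)
if proposed.RawEquals(priorVal) {
	// No configuration change; keep the prior state and skip the read.
	return priorVal, diags
}
// Otherwise plan to (re)read the data source.
```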
Remove the TODOs, as we're going to accept that data-only changes will
still not be plannable for the time being.
Fix the null data source test resource, as it had no computed fields at
all, not even the id.
The logic for refresh, plan, and apply is subtly different in each
case, so rather than trying to manage that complex flow through a giant
300-line method, break it up somewhat into three different types that
can share common types and a few helpers.
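The rough shape of that split (type names are illustrative, not the
actual ones):

```go
// Sketch: one struct holds the fields and helpers shared by all three
// walks, and a thin type per walk embeds it to implement its own flow.
type evalReadDataCommon struct {
	// fields shared by refresh, plan, and apply
}

type evalReadDataRefresh struct{ evalReadDataCommon }
type evalReadDataPlan struct{ evalReadDataCommon }
type evalReadDataApply struct{ evalReadDataCommon }
```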
Start fixing plan tests that don't expect data sources to be in the
plan. A few were just checking that Read was never called, and some
expected the data source to be nil.
In order to update data sources correctly when their configuration
changes, they need to be evaluated during plan. Since the plan working
state isn't saved, store any data source reads as plan changes to be
applied later. This is currently abusing the Update plan action to
indicate that the data source was read and needs to be applied to state.
We can possibly add a Store action for data sources if this approach
works out. The Read action still indicates that the data source was
deferred to the Apply phase.
We also fully handle any data source depends_on changes. Now that all
the transitive resource dependencies are known at the time of
evaluation, we can check the plan to determine if there are any changes
in the dependencies and selectively defer reading the data source.
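Sketch of that check, assuming the transitive dependencies are available
as resource instance addresses and the plan's change set can be queried
(names are illustrative):

```go
// Sketch: defer reading the data source if any transitive dependency has
// a pending (non-NoOp) change in the plan.
deferRead := false
for _, dep := range transitiveDeps {
	change := plannedChanges.GetResourceInstanceChange(dep, states.CurrentGen)
	if change != nil && change.Action != plans.NoOp {
		deferRead = true
		break
	}
}
if deferRead {
	// Plan a Read with an unknown value; the actual read happens at apply.
}
```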
We need to load the state during refresh, so that even if the data
source can't be read due to `depends_on`, the state can be saved back
again to prevent it from being lost altogether.
This is a step towards having data sources refresh like resources,
which will refresh from their saved state value.
This transformer is what will provide the data sources with the
transitive dependencies needed to determine if they can read during plan
or must be deferred.
* internal/getproviders: fix panic with invalid path parts
If the search path is missing a directory, the provider installer would
try to create an addrs.Provider with the wrong parts. For example, if the
hostname was missing (as in the test case), it would call
addrs.NewProvider with (namespace, typename, version). This adds a
validation step for each part before calling addrs.NewProvider to avoid
the panic.
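A sketch of that validation (variable names are illustrative):

```go
// Sketch: validate each inferred part before constructing the provider
// address, so a malformed search path surfaces as an error rather than a
// panic inside addrs.NewProvider.
host, err := svchost.ForComparison(hostnamePart)
if err != nil {
	return fmt.Errorf("invalid provider registry hostname %q: %s", hostnamePart, err)
}
if _, err := addrs.ParseProviderPart(namespacePart); err != nil {
	return fmt.Errorf("invalid provider namespace %q: %s", namespacePart, err)
}
if _, err := addrs.ParseProviderPart(typePart); err != nil {
	return fmt.Errorf("invalid provider type %q: %s", typePart, err)
}
provider := addrs.NewProvider(host, namespacePart, typePart)
```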
This is a port of the retry/timeout logic added in #24260 and #24259,
using the same environment variables to configure the retry and timeout
settings.
That name tag was left in only to reduce the diff during
implementation. Fix the naming now for these nodes so it is correct, and
prevent any possible name collision between types.
Previously the diagnostics from the config loaders (earlyconfig and
regular) were only appended to the overall diags if an error was found.
This adds all diagnostics from the regular config loader so that any
generated warnings will be displayed, even if there are no errors.
I did not add the `earlyconfig` warnings since they will be displayed if
there is an error and are likely to be duplicated by the config loader.
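In code terms the change is roughly this (names are illustrative):

```go
// Sketch: append the loader's diagnostics unconditionally so that
// warnings survive, instead of only appending them on error.
config, configDiags := loader.LoadConfig(rootDir)
diags = diags.Append(configDiags)
if configDiags.HasErrors() {
	return nil, diags
}
```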
* internal/registry source: return error if requested provider version protocols are not supported
* getproviders: move responsibility for protocol compatibility checks into the registry client
The original implementation had the providercache checking the provider
metadata for protocol compatibility, but this is only relevant for the
registry source, so it made more sense to move the logic into
getproviders.
This also addresses an issue where we were pulling the metadata for
every provider version until we found one that was supported. I've
extended the registry client to unmarshal the protocols in
`ProviderVersions` so we can filter through that list, instead of
pulling each version's metadata.
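Sketch of the relevant part of the versions listing, with the protocols
included (field names follow the registry JSON; the struct names here
are illustrative):

```go
// Sketch: the versions listing now carries each version's supported
// protocol versions, so incompatible versions can be filtered out without
// fetching per-version metadata.
type providerVersion struct {
	Version   string   `json:"version"`
	Protocols []string `json:"protocols"`
}

type providerVersionList struct {
	Versions []providerVersion `json:"versions"`
}
```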