Start fixing plan tests that don't expect data sources to be in the
plan. A few were just checking that Read was never called, and some
expected the data source to be nil.
In order to update data sources correctly when their configuration
changes, they need to be evaluated during plan. Since the plan working
state isn't saved, store any data source reads as plan changes to be
applied later. This is currently abusing the Update plan action to
indicate that the data source was read and needs to be applied to state.
We can possibly add a Store action for data sources if this approach
works out. The Read action still indicates that the data source was
deferred to the Apply phase.
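As a rough illustration of that bookkeeping (Update and Read are the
real plan actions; everything else here is an illustrative stand-in,
not Terraform's internals):

    package terraform

    // planAction stands in for the plans.Action type.
    type planAction string

    const (
        // Read during plan; the result is stored in the plan and
        // applied to state later.
        actionUpdate planAction = "Update"
        // Read deferred until the apply phase.
        actionRead planAction = "Read"
    )

    // dataSourceAction sketches the rule described above: a read that
    // completed during plan is recorded as an Update-style change so
    // the working state need not be saved; otherwise the Read action
    // marks the read as deferred.
    func dataSourceAction(readDuringPlan bool) planAction {
        if readDuringPlan {
            return actionUpdate
        }
        return actionRead
    }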
We also fully handle any data source depends_on changes. Now that all
the transitive resource dependencies are known at the time of
evaluation, we can check the plan to determine if there are any changes
in the dependencies and selectively defer reading the data source.
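A minimal sketch of that check, assuming a simple map of planned
actions keyed by resource address (all names are illustrative):

    package terraform

    // mustDeferRead reports whether a data source read has to wait
    // until apply: deps holds the data source's transitive depends_on
    // resource dependencies, and plannedActions maps each resource
    // address to its planned action.
    func mustDeferRead(plannedActions map[string]string, deps []string) bool {
        for _, dep := range deps {
            if action, ok := plannedActions[dep]; ok && action != "NoOp" {
                return true // a dependency has pending changes; defer
            }
        }
        return false // no pending changes; safe to read during plan
    }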
We need to load the state during refresh, so that even if the data
source can't be read due to `depends_on`, the state can be saved back
again to prevent it from being lost altogether.
This is a step towards having data sources refresh like resources,
reading from their saved state value.
This transformer is what will provide the data sources with the
transitive dependencies needed to determine whether they can be read
during plan or must be deferred.
Adding a transformer to attach any transitive DependsOn references to
data sources during plan. The ReferenceMap was refactored out of the
ReferenceTransformer so it can be reused by both.
GraphNodeAttachDependsOn gives us a method for adding all transitive
resource dependencies found through depends_on references, so that data
sources can determine whether they can be read during plan. This will
be done by inspecting the planned changes of all dependency resources,
and delaying the read until apply if any changes are planned.
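A sketch of what that contract might look like; the exact method set
in Terraform may differ, and plain strings stand in for the real
address types:

    package terraform

    // GraphNodeAttachDependsOn is implemented by data source nodes
    // that need the full transitive set of resource dependencies
    // implied by their depends_on references.
    type GraphNodeAttachDependsOn interface {
        // DependsOn returns the node's direct depends_on references.
        DependsOn() []string

        // AttachDependsOn is called by the transformer with all
        // transitive resource dependencies resolved from those
        // references.
        AttachDependsOn(deps []string)
    }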
Validate providers in expanding modules. Expanding modules cannot have
provider configurations with non-empty configs, which includes having a
version configured. If an empty or alias-only block is present, the
provider must instead be passed through the providers argument on the
module call.
If a configuration had multiple blocks in the versions.tf file, it would
be added to the `rewritePaths` list multiple times. We would then remove
it from this slice, but only once, and so the output file would later be
rewritten to remove the `required_providers` block.
This commit uses a set instead of a list to prevent this case, and adds
a regression test.
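A minimal sketch of the fix, assuming blockPaths lists the file of
every `required_providers` block found, so a file containing multiple
blocks appears more than once:

    package upgrade

    // dedupeRewritePaths collects the paths as a set, so a file with
    // multiple required_providers blocks is recorded once and can
    // later be removed once, completely.
    func dedupeRewritePaths(blockPaths []string) map[string]bool {
        rewritePaths := make(map[string]bool)
        for _, p := range blockPaths {
            rewritePaths[p] = true // duplicates collapse into one entry
        }
        return rewritePaths
    }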
Instead of using providers.tf as the default output file for the
upgrader, we now default to versions.tf. This means that if the
configuration has no `required_providers` blocks at all, or has
multiple, the provider version requirements will be stored in the
versions.tf file.
We now also update the versions.tf file to set a `required_version`
attribute in the first `terraform` block, with value ">= 0.13". This
is similar to the behaviour of the 0.12upgrade command, and signals that
the configuration should not be used with older versions of Terraform.
Validation is supposed to be a local-only operation, but Configure implementations
are allowed to make outgoing requests to remote APIs to validate settings.
This commit implements most of the intended functionality of the upgrade
command for rewriting configurations.
For a given module, it makes a list of all providers in use. Then it
attempts to detect the source address for providers without an explicit
source.
Once this step is complete, the tool rewrites the relevant configuration
files. This results in a single "required_providers" block for the
module, with a source for each provider.
Any providers for which the source cannot be detected (for example,
unofficial providers) will need a source to be defined by the user. The
tool writes an explanatory comment to the configuration to help with
this.
Both the differing serial and lineage protections should be bypassed
with the -force flag (in addition to the resource checks).
Compared to other backends, we aren't just shipping the state bytes
over in a simple payload during the persistence phase of the push
command, and the force flag added to the Go TFE client needs to be
specified at that time.
To avoid changing every PersistState method signature on the remote
client, I added an optional interface that provides a hook to flag the
Client as operating in a force-push context. Changing the method
signature would be more explicit, at the cost of a parameter that is
not currently used anywhere else. Alternatively, the optional interface
pattern could be applied to the state itself, so that it could be
upgraded to support PersistState(force bool) only when needed.
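Roughly, the optional interface has this shape (ClientForcePusher is
the type exercised by the tests below; the method name here is
illustrative):

    package remote

    // ClientForcePusher is an optional interface for remote clients
    // (such as the TFE client) that must be told, before persistence,
    // that the next push is forced.
    type ClientForcePusher interface {
        EnableForcePush()
    }

    // markForcePush flags a client as operating in a force-push
    // context when it supports that, leaving all other clients
    // untouched.
    func markForcePush(client interface{}) {
        if fp, ok := client.(ClientForcePusher); ok {
            fp.EnableForcePush()
        }
    }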
Prior to this, only the resources of the state were checked for
changes, not the lineage or the serial. To bring this in line with the
documented behavior noted above, those attributes now also have a
"read" counterpart, just as the state has. These are checked along with
the state to determine whether the state as a whole is unchanged.
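A rough sketch of the expanded check, using stand-in fields (the real
state manager compares state values rather than raw bytes):

    package remote

    import "bytes"

    // stateSnapshot stands in for the remote state manager's fields;
    // lineage and serial now have "read" counterparts captured when
    // the state was last read.
    type stateSnapshot struct {
        lineage, readLineage string
        serial, readSerial   uint64
        state, readState     []byte // stand-in for serialized state
    }

    // unchanged reports whether a persist can be skipped: lineage,
    // serial, and state contents must all match what was last read.
    func (s stateSnapshot) unchanged() bool {
        return s.lineage == s.readLineage &&
            s.serial == s.readSerial &&
            bytes.Equal(s.state, s.readState)
    }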
Tests were altered to a table-driven format, and testing was expanded
to include WriteStateForMigration and its interaction with a
ClientForcePusher type.
Providers like Okta and AWS Cognito expect that the PKCE challenge
uses base64 URL encoding without any padding (base64.RawURLEncoding).
Additionally, Okta strictly adheres to section 4.2 of RFC 7636 and
requires that the unencoded key for the PKCE data is at least 43
characters in length.
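For illustration, a self-contained sketch of generating a compliant
verifier and challenge; base64.RawURLEncoding is the real encoder named
above, while the helper itself is an assumption:

    package pkce

    import (
        "crypto/rand"
        "crypto/sha256"
        "encoding/base64"
    )

    // newPKCE returns a code verifier and its S256 challenge. 32
    // random bytes encode to exactly 43 base64url characters, the
    // minimum verifier length, and both values use URL-safe encoding
    // with no padding, as Okta and AWS Cognito expect.
    func newPKCE() (verifier, challenge string, err error) {
        buf := make([]byte, 32)
        if _, err = rand.Read(buf); err != nil {
            return "", "", err
        }
        verifier = base64.RawURLEncoding.EncodeToString(buf)

        sum := sha256.Sum256([]byte(verifier))
        challenge = base64.RawURLEncoding.EncodeToString(sum[:])
        return verifier, challenge, nil
    }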
We only persist a new state if the actual state contents have
changed. This test demonstrates that behavior by calling the write and
persist methods when either the lineage or serial has changed.
The only situation where `state mv` needs to understand the each mode
is when working with resource addresses that may reference either a
single instance or a group of for_each or count instances. In this case
we can differentiate the two by checking for the existence of the NoKey
instance key.
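To illustrate the distinction (instanceKey and noKey are stand-ins for
Terraform's addrs.InstanceKey and addrs.NoKey):

    package states

    // instanceKey stands in for addrs.InstanceKey; noKey plays the
    // role of addrs.NoKey, the key of an instance created without
    // count or for_each.
    type instanceKey interface{}

    var noKey instanceKey = nil

    // isSingleInstance reports whether the resource holds a NoKey
    // instance; if not, its instances must come from count or
    // for_each.
    func isSingleInstance(instances map[instanceKey]struct{}) bool {
        _, ok := instances[noKey]
        return ok
    }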
Because resources can transition between each modes, trying to track
the mode for a resource as a whole in state doesn't work: there may be
instances with a mode different from the resource as a whole. This is
difficult for core to track, as this metadata being changed as a side
effect from multiple places often causes core to see the incorrect mode
when evaluating instances.
Since core can always determine the correct mode to evaluate from the
configuration, we don't need to interrogate the state to know the mode.
Once core no longer needs to reference EachMode from states, the
resource state can simply be a container for instances, and doesn't need
to try to track the "current" mode.
* Include eval in output walk
This allows outputs to be evaluated in the evalwalk,
impacting terraform console. Outputs are still not evaluated
for terraform console in the root module, so this has
no impact on writing to state (as child module outputs are not
written to state). Also adds test coverage to the console command,
including for evaluating locals (another use of the evalwalk).
Previously, resources without explicit provider configuration (i.e. a
`provider =` attribute) would be assigned a default provider based upon
the resource type. For example, a resource `foo_bar` would be assigned
provider `hashicorp/foo`.
This behaviour did not work well with community or partner providers,
with sources configured in `terraform.required_providers` blocks. With
the following configuration:
    terraform {
      required_providers {
        foo = {
          source = "acme/foo"
        }
      }
    }

    resource foo_bar "a" { }
the resource would be configured with the `hashicorp/foo` provider.
This commit fixes this implied provider behaviour. First we look for a
provider whose local name matches the resource type's provider prefix
(e.g. foo for foo_bar) in the module's required providers map. If one
is found, this provider is assigned to the resource. Otherwise, we
still fall back to a default provider.
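A hedged sketch of that lookup, assuming a simple map from the
module's local provider names to source addresses (the helper and its
signature are illustrative, not Terraform's actual code):

    package configs

    import "strings"

    // impliedProvider resolves the provider for a resource without an
    // explicit provider attribute. The implied local name is the
    // resource type's prefix: "foo" for "foo_bar".
    func impliedProvider(requiredProviders map[string]string, resourceType string) string {
        localName := strings.SplitN(resourceType, "_", 2)[0]

        // Prefer a matching entry from the module's required_providers.
        if source, ok := requiredProviders[localName]; ok {
            return source // e.g. "acme/foo"
        }
        // Otherwise fall back to the default provider namespace.
        return "hashicorp/" + localName
    }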