We now fetch all of the necessary schemas during context creation, so we
can just thread that repository of schemas through into EvalContext and
Evaluator and access the schemas as needed without any further fetching.
This requires updating a few tests to have a valid Provider address in
their state objects, because we need that in order to trigger the loading
of the relevant schema.
This test depends on having a correct schema, so we'll specify the minimum
schema for its fixture inline here rather than using the superset schema
returned by testProvider.
While we're planning, we must always update the state with the proposed new
data resulting from the plan. In this case, we must record that the
orphan instance doesn't exist at all in the proposed new state by storing
its state as nil.
This in turn allows references to the containing resource to evaluate
properly, using the new updated resource count. This fixes
TestContext2Apply_multiVarCountDec.
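For illustration, a minimal sketch in the spirit of that fixture (resource
and output names are hypothetical), where count is decreased while another
expression still refers to the resource:

```
resource "aws_instance" "foo" {
  # previously count = 2; the removed instance becomes an orphan that is
  # recorded as nil in the proposed new state
  count = 1
}

output "ids" {
  # evaluates against the updated count, yielding a one-element list
  value = aws_instance.foo.*.id
}
```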
This also includes a number of changes to the test output of
TestContext2Apply_multiVarCountDec that make it easier to debug failures.
An aliased provider should not be automatically inherited, nor
implicitly instantiated in a module. This test should not have
previously passed.
Add a proxy provider block to the module and update the provider to
match the schema.
The Provider field in ResourceState is now required, whereas before it
could be omitted and have Terraform try to discover a fitting provider
configuration automatically.
The automatic behavior was a compatibility shim added in v0.11 to support
states from prior versions without an explicit migration, but for v0.12
we will have a migration to our new state format anyway and so we will
fix this up during that migration pass.
This comprehensive test was covering a few different behaviors that are
intentionally different for v0.12:
- Applying the splat operator to a list of resource instances that haven't
been created yet produces a list of unknown values rather than a single
unknown list as before. This is important because it allows that list
to be passed into length().
- Wrapping a splat expression in another round of brackets now produces
a list of lists, whereas before we had a special case (for compatibility
with prior to v0.10) that would flatten this away in the schema layer.
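A minimal sketch of both behaviors, using a hypothetical resource:

```
resource "aws_instance" "a" {
  count = 3
}

output "how_many" {
  # each element is unknown before creation, but the list itself is known
  # to have three elements, so length() can evaluate during plan
  value = length(aws_instance.a.*.id)
}

output "nested" {
  # the extra brackets now produce a list containing one list, rather
  # than being flattened away
  value = [aws_instance.a.*.id]
}
```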
The adaptation of ModuleState.RemovedOutputs for the new config types
was incorrect because it took the absence of any output map as "nothing to
do", rather than "everything has been removed" as expected.
Now it treats a nil map like an empty map, detecting _all_ of the outputs
as having been removed if the output map is nil.
We no longer have this merge behavior, because it is inconsistent with how
variables behave in all other contexts and similar behavior can now be
achieved by merging the user's input with a predefined map in a local
value expression.
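For example, a sketch of the replacement pattern (names are hypothetical):

```
variable "overrides" {
  type    = map(string)
  default = {}
}

locals {
  # the old merge behavior, reproduced explicitly: the user's input
  # merged over a predefined map
  settings = merge({ region = "us-west-2" }, var.overrides)
}
```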
This test seems to have been buggy before our current work, with the test
fixture containing a reference to a resource that doesn't exist.
This both fixes the fixture and adds a mock schema for it, though this
just revealed another error which isn't fixed here, where the a_ids value
seems to come through as unknown after apply. That will be fixed in a
subsequent commit.
We no longer support using "self.count" in a provisioner to access the
resolved count meta-argument value of the associated resource.
This was only possible before because of a special exception in how
Terraform resolved variables, and in new HCL that exception isn't possible
because resource instances are real values in the scope and we don't want
to add this implied "count" attribute to all of them.
"count" is a property of the resource config rather than of the resource
instances, and since "self" is a resource _instance_ it doesn't make sense
to expose it there.
There is no replacement for this feature. In the rare case where it is
needed, the user must factor the count out into a named local value and
refer to that both in the count meta-argument and in the provisioner.
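A sketch of that workaround, with hypothetical names:

```
locals {
  instance_count = 3
}

resource "aws_instance" "example" {
  count = local.instance_count

  provisioner "local-exec" {
    # previously this might have used self.count; refer to the shared
    # local value instead
    command = "echo ${local.instance_count}"
  }
}
```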
Most of these changes are just adding schema to describe the expectations
of the existing test fixtures. However, some of them require the fixtures
themselves to be changed due to changing assumptions in the language.
This was previously an apply-time failure due to our inability to
type-check unknowns in 0.11, but we now retain type information for
unknown values and so this check now fails during plan instead.
The fixtures for this test assume some atypical arguments to the resources
and also need a provisioner schema.
This doesn't actually fix the test, but by fixing the schema/fixture this
exposes a problem that seems to exist in the main code, which will be
fixed in a subsequent commit.
Update test fixtures to work in our new world.
This is mostly changing out attribute names for those in the schema,
adding Providers to states, and updating the test-fixture
configurations.
After the refactoring to integrate HCL2 many of the tests were no longer
using correct types, attribute names, etc.
This is a bulk update of all of the tests to make them compile again, with
minimal changes otherwise. Although the tests now compile, many of them
do not yet pass. The tests will be gradually repaired in subsequent
commits, as we continue to complete the refactoring and retrofit work.
When computing the count value, make sure to include walkDestroy with
walkApply, as the former is only a special case of the latter.
When applying a saved plan, the computed count values are lost and we
can no longer query the state for those values. The apply walk was
already considered in the `resourceCountMax` function, but the destroy
walk was not. This worked when destroying in a single operation
("terraform destroy"), since the state would still be updated with the
latest counts from the plan.
During a full destroy when outputs are removed, the
NodeDestroyableOutput was preventing its sibling output from being
destroyed. Prune the output node if it only has its destroy node as a
dependent.
The destroy output test is simply run a second time with no state, which
would cause the output interpolation to fail if it remained in the
graph.
If an existing resource is scaled back to 0, locals and outputs will
still have a multi-variable reference to evaluate, which should return
an empty list. Due to how the resource is removed, the resource will
still exist in the state but with no primary instance, which needs to be
ignored in the instance count.
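For example (hypothetical config), after scaling back to zero:

```
resource "aws_instance" "a" {
  count = 0
}

output "ids" {
  # with the instance-less resource ignored in the count, this evaluates
  # to an empty list instead of failing
  value = ["${aws_instance.a.*.id}"]
}
```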
The provisionerFail_createBeforeDestroy test was verifying the incorrect
output. The create_before_destroy instance in the state has an ID of
"bar" with require_new="abc", and a new instance would get an ID of
"foo" with require_new="xyz". The existing test was expecting the
following state:
```
aws_instance.bar: (1 deposed)
  ID = bar
  provider = provider.aws
  require_new = abc
  Deposed ID 1 = foo (tainted)
```
Which showed "bar" still the primary instance in the state, with the new
instance "foo" as being the deposed instance, though properly tainted.
The new output is:
```
aws_instance.bar: (tainted) (1 deposed)
  ID = foo
  provider = provider.aws
  require_new = xyz
  type = aws_instance
  Deposed ID 1 = bar
```
Showing the new "foo instance as being the primary instance in the
state, with "bar" as the deposed instance.
An interpolated count value that is determined during plan is lost
during plan serialization, causing apply to fail when the interpolation
string can't be evaluated.
Similar to NodeApplyableOutput, NodeDestroyableOutputs also need to stay
in the graph if any ancestor nodes remain.
Use the same GraphNodeTargetDownstream method to keep them from being
pruned, since they are dependent on the output node and all its
descendants.
Since outputs and local nodes are always evaluated, if they reference a
resource from the configuration that isn't in the state, the
interpolation could fail.
Prune any local or output values that have no references in the graph.
Now that outputs are always evaluated, we still need a way to remove
them from state when they are destroyed.
Previously, outputs were removed during destroy from the same
"Applyable" node type that evaluates them. Now that we need to possibly
both evaluate and remove an output during an apply, we add a new node:
NodeDestroyableOutput.
This new node is added to the graph by the DestroyOutputTransformer,
which makes the new destroy node depend on all descendants of the output
node. This ensures that the output remains in the state as long as
everything which may interpolate the output still exists.
Add a complex destroy provisioner testcase using locals, outputs and
variables.
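A simplified sketch of such a fixture (names are illustrative):

```
variable "name" {
  default = "example"
}

locals {
  command = "echo destroy ${var.name}"
}

resource "null_resource" "example" {
  provisioner "local-exec" {
    when    = "destroy"
    command = "${local.command}"
  }
}

output "name" {
  value = "${var.name}"
}
```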
Add that pesky "id" attribute to the instance states for interpolation.
Destroy-time provisioners require us to re-evaluate during destroy.
Rather than destroying local values, which doesn't do much since they
aren't persisted to state, we always evaluate them regardless of the
type of apply. Since the destroy-time local node is no longer a
"destroy" operation, the order of evaluation need to be reversed. Take
the existing DestroyValueReferenceTransformer and change it to reverse
the outgoing edges, rather than in incoming edges. This makes it so that
any dependencies of a local or output node are destroyed after
evaluation.
Having locals evaluated during destroy failed one other test, but that
was the odd case where we need `id` to exist as an attribute as well as
a field.
Previously we would interpolate the count config (ResourceConfig.RawCount)
only while preparing to dynamic-expand aggregate resource nodes. This is
problematic because we do not dynamic-expand any resource nodes during the
apply walk, and so previously the count value was not available for
interpolation during apply and would result in an error.
Now we interpolate RawCount once for each resource we visit during the
apply walk -- even though that redundantly interpolates the same config
multiple times when count > 1 -- to ensure that it's available by the
time we interpolate any remaining expressions in the config and any
expressions within "connection" and "provisioner" blocks.
This error was masked by us sharing a single RawConfig instance between
the plan and apply walks when "terraform apply" is run with no explicit
plan file argument, but was exposed by the workflow where the plan is
written first to disk since in that case the interpolation result from
during the plan phase is not present in the deflated plan object. For
this reason, the new context test serializes the plan into an in-memory
buffer and reloads it in order to simulate the effect of the two-step
workflow.
Validation is the best time to return detailed diagnostics
to the user since we're much more likely to have source
location information, etc. than we are in later operations.
This change doesn't actually add any detail to the messages
yet, but it changes the interface so that we can gradually
introduce more detailed diagnostics over time.
While here there are some minor adjustments to some of the
messages to improve their consistency with terminology we
use elsewhere.
The provider name coming from ProvidedBy may be resolved if it only
exists in the state. Make sure to strip the module and provider
prefixes for the provider name when adding missing providers.
Remove the module entry from the state if a module is no longer in the
configuration. Modules are not removed if there are any existing
resources with the module path as a prefix. The only time this should be
the case is if a module was removed in the config, but the apply didn't
target that module.
Create a NodeModuleRemoved and an associated EvalDeleteModule to track
the module in the graph then remove it from the state. The
NodeModuleRemoved dependencies are simply any other node which contains
the module path as a prefix in its path.
This could have probably been done much easier as a step in pruning the
state, but modules are going to have to be promoted to full graph nodes
anyway in order to support count.
Now that the resolved provider is always stored in state, we need to
update all the test data to match. There will probably be some more
breakage once the provider field is properly diffed.
We have a few pesky functions that don't act like proper functions and
instead return different values on each call. These are tricky because
we need to make sure we don't trip over ourselves by re-generating these
between plan and apply.
Here we add a context test to verify correct behavior in the presence
of such functions.
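For example, uuid() and timestamp() each return a new value on every call;
a config like this hypothetical one must not re-evaluate them between plan
and apply:

```
resource "null_resource" "example" {
  triggers {
    # both values are generated fresh on each evaluation, so the values
    # recorded in the plan must be reused during apply
    run_id = "${uuid()}"
    time   = "${timestamp()}"
  }
}
```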
There's actually a pre-existing bug which this test caught as originally
written: we re-evaluate the interpolation expressions during apply,
causing these unstable functions to produce new values, and so the
applied value ends up not exactly matching the plan. This is a bug that
needs fixing, but it's been around at least since v0.7.6 (random old
version I tried this with to see) so we'll put it on the list and address
it separately. For now, this part of the test is commented out with a
TODO attached.
This turned out to be a big messy commit, since the way providers are
referenced is tightly coupled throughout the code. This starts to unify
how providers are referenced, using the format output by the node Name method.
Add a new field to the internal resource data types called
ResolvedProvider. This is set by a new setter method SetProvider when a
resource is connected to a provider during graph creation. This allows
us to later look up the provider instance a resource is connected to,
without requiring it to have the same module path.
The InitProvider context method now takes two arguments: the first is the
provider type and the second is the full name of the provider. While the
provider type could still be parsed from the full name, this makes it
more explicit, and changes to the name format won't affect this
code.
Use the configured providers directly, rather than looking for inherited
provider configuration during graph evaluation.
First remove the provider config cache, and the associated
SetProviderConfig and ParentProviderConfig methods on the eval context.
Every provider must be configured, so there's no need to look for
configuration from other provider instances.
The config.ProviderConfig struct now has a Scope field which stores the
proper path for the interpolation scope. To get this metadata to the
interpolator, we add an EvalInterpolateProvider node which can carry the
ProviderConfig, and an InterpolateProvider context method to carry the
ProviderConfig.Scope into the InterpolationScope.
Some of the tests could be adjusted to account for the new inheritance
behavior, and some were simply no longer valid and will be removed.
The remaining tests have questions on how they should work in practice.
This mostly concerns orphaned modules where there is no longer a way to
obtain a provider. In some cases we may require that a minimal provider
config be present to handle the destroy process, but we need further
testing.
All disabled code was commented out in this commit to record any
additional comments. The following commit will be a cleanup pass.
Locals don't need to be evaluated during destroy. Rather than simply
skipping them, remove them from the state as they are encountered. Even
though they are not persisted in the state, it keeps the state up to
date as the destroy happens, and we reduce the chance of other
inconsistencies later on.
We stash the locals in the module state in a map that is ignored for JSON
serialization. We don't include locals in the persisted state because they
can be trivially recomputed and this allows us to assume that they will
pass through verbatim, without any normalization or other transforms
caused by the JSON serialization.
From a user standpoint a local is just a named alias for an expression,
so it's desirable that the result passes through here in as raw a form
as possible, so it behaves as closely as possible to simply using the
given expression directly.
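For example (hypothetical names), the local should behave exactly as if
its expression were written inline:

```
locals {
  instance_ids = "${aws_instance.example.*.id}"
}

output "ids" {
  # equivalent to writing the splat expression here directly
  value = "${local.instance_ids}"
}
```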
Previously the behavior for -target when given a module address was to
target only resources directly within that module, ignoring any resources
defined in child modules.
This behavior turned out to be counter-intuitive, since users expected
the -target address to be interpreted hierarchically.
We'll now use the new "Contains" function for addresses, which provides
a hierarchical "containment" concept that is more consistent with user
expectations. In particular, it allows module.foo to match
module.foo.module.bar.aws_instance.baz, where before that would not have
been true.
Since Contains isn't commutative (unlike Equals) this requires some
special handling for targeting specific indices. When given an argument
like -target=aws_instance.foo[0], the initial graph construction (for
both plan and refresh) is for the resource nodes from configuration, which
have not yet been expanded to separate indexed instances. Thus we need
to do the first pass of TargetsTransformer in a mode where indices are
ignored, with the work then completed by the DynamicExpand method which
re-applies the TargetsTransformer in index-sensitive mode.
This is a breaking change for anyone depending on the previous behavior
of -target, since it will now select more resources than before. There is
no way provided to obtain the previous behavior. Eventually we may support
negative targeting, which could then combine with positive targets to
regain the previous behavior as an explicit choice.
Rather than providing an already-resolved map of plugins to core, we now
provide a "provider resolver" which knows how to resolve a set of provider
dependencies, to be determined later, and produce that map.
This requires the context to be instantiated in a different way, so this
very noisy diff is a mostly-mechanical update of all of the existing
places where contexts get created for testing, using some adapted versions
of the pre-existing utilities for passing in mock providers.
Previously the Type of a ResourceState was generally ignored, but we're
now starting to use it to figure out which providers are needed to
support the resources in state so our tests need to set it accurately
in order to get the expected result.
Prior to Terraform 0.7, lists in Terraform were just a shallow abstraction
on top of strings with a magic delimiter between items. Wrapping a single
string in brackets in the configuration was Terraform's prompt that it
needed to split the string on that delimiter during interpolation.
In 0.7, when first-class lists were added, this convention was preserved
by flattening lists-of-lists by one level when they were encountered in
configuration. However, there was an oversight in that change where it
did not correctly handle the case where the inner list was unknown.
In #14135 we removed some code that was flattening partially-unknown lists
into fully-unknown (untyped) values. This inadvertently exposed the missed
case from the previous paragraph, causing issues for list-wrapped splat
expressions with unknown members. While this worked fine for resources,
due to some fixup done inside helper/schema, this did not work for other
interpolation contexts such as module blocks.
Various attempts to fix this up and restore the flattening behavior
selectively were unsuccessful, due to a proliferation of assumptions all
over the core code that would be too risky to change just to fix this bug.
This change, then, takes the different approach of removing the
requirement that splats be presented inside list brackets. This
requirement didn't make much sense anymore anyway, since no other
list-returning expression had this constraint and so the rest of Terraform
was already successfully dealing with both cases.
This leaves us with two different scenarios:
- For resource arguments, existing normalization code in helper/schema
does its own flattening that preserves compatibility with the common
practice of using bracketed splats. This change proves this with a test
within the "test" provider that exercises the whole Terraform core and
helper/schema stack that assigns bracketed splats to list and set
attributes.
- For arguments in other blocks, such as in module callsites, the
interpolator's own flattening behavior applies to known lists,
preserving compatibility with configurations from before
partially-computed splats were possible, but those wishing to use
partially-computed splats are required to drop the surrounding brackets.
This is less concerning because this scenario was introduced only in
0.9.5, so the scope for breakage is limited to those who adopted this
new feature quickly after upgrading.
As of this commit, the recommendation is to stop using brackets around
splats but the old form continues to be supported for backward
compatibility. In a future _major_ version of Terraform we will probably
phase out this legacy form to improve consistency, but for now both
forms are acceptable at the expense of some (pre-existing) weird behavior
when _actual_ lists-of-lists are used.
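To sketch the two forms in a module block (hypothetical config):

```
module "consumer" {
  source = "./child"

  # recommended: no surrounding brackets, works even when the list is
  # partially unknown
  ids = "${aws_instance.a.*.id}"

  # legacy form, still supported for known lists:
  # ids = ["${aws_instance.a.*.id}"]
}
```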
This addresses #14521 by officially adopting the suggested workaround of
dropping the brackets around the splat. However, it doesn't yet allow
passing of a partially-unknown list between modules: that still violates
assumptions in Terraform's core, so for the moment partially-unknown lists
work only within a _single_ interpolation expression, and cannot be
passed around between expressions. Until more holistic work is done to
improve Terraform's type handling, passing a partially-unknown splat
through to a module will result in a fully-unknown list emerging on
the other side, just as was the case before #14135; this change just
addresses the fact that this was failing with an error in 0.9.5.
For child modules, a ModuleState isn't allocated until the first time a
module instance is inserted into the state under the module's path.
Normally interpolations of resource attributes are delayed until at least
one resource has been created due to the nature of the dependency graph,
but if the interpolation value is a multi-var (splat) then it is possible
that the referenced resource has count=0 and thus created _no_ resource
states when it was visited.
Previously we would crash when trying to access the resource map for the
nil module in order to count how many instances are present. Since we know
there can't be any instances present in a nil module, we now preempt
this crash by returning zero early.
This edge-case does not apply to the root module because its ModuleState
is allocated as part of initializing the main State instance.
This fixes #14438.
The previous behavior of targets was that targeting a particular node
would implicitly target everything it depends on. This makes sense when
the dependencies in question are between resources, since we need to
make sure all of a resource's dependencies are in place before we can
create or update it.
However, it had the undesirable side-effect that targeting a resource
would _exclude_ any outputs referring to it, since the dependency edge
goes from output to resource. This then causes the output to be "stale",
which is problematic when outputs are being consumed by downstream
configs using terraform_remote_state.
GraphNodeTargetDownstream allows nodes to opt-in to a new behavior where
they can be targeted by _inverted_ dependency edges. That is, it allows
outputs to be considered targeted if anything they directly depend on
is targeted.
This is different than the implied targeting behavior in the other
direction because transitive dependencies are not considered unless the
intermediate nodes themselves have TargetDownstream. This means that
an output1→output2→resource chain can implicitly target both outputs, but
an output→resource1→resource2 chain _won't_ target the output if only
resource2 is targeted.
This behavior creates a scenario where an output can be visited before
all of its dependencies are ready, since it may have a mixture of both
targeted and untargeted dependencies. This is fine for outputs because
they silently ignore any errors encountered during interpolation anyway,
but other hypothetical future implementers of this interface may need to
be more careful.
This fixes #14186.
Make sure duplicate depends_on entries are pruned from existing states
on read.
Make sure new state built from configs with multiple references to the
same resource adds it only once to the Dependencies.
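For example (the attribute name is hypothetical), both references below
resolve to the same dependency and should be recorded once:

```
resource "aws_instance" "a" {}

resource "aws_instance" "b" {
  depends_on = ["aws_instance.a"]

  # this implicit reference is the same dependency as the explicit
  # depends_on entry above
  some_attr = "${aws_instance.a.id}"
}
```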
Starting with Go 1.8 betas, we've periodically received SIGQUITs on our
tests in Travis. The stack trace looks like this:
https://gist.github.com/mitchellh/abf09b0980f8ea01269f8d9d6133884d
The tests are timing out! This is a test that hasn't been touched really
in a very long time and has always passed. I've **reproduced this
locally** by setting `GOMAXPROCS=1` and running the test. By yielding
the scheduler in the hot loop, it now passes almost instantly every
time.
Perhaps the test can be written in a different way, but this gets tests
passing and I think will fix our periodic errors.
Fixes #10911
Outputs that aren't targeted shouldn't be included in the graph.
This requires passing targets to the apply graph. This is unfortunate
but long term should be removable since I'd like to move output changes
to the diff as well.
Fixes #11749
I'm **really** surprised this didn't come up earlier.
When only the state is available for a node, the advertised
referenceable name (the name used for dependency connections) included
the module path. This module path is automatically prepended to the
name. This means that probably every non-root resource for state-only
operations (destroys) didn't order properly.
This fixes that by omitting the path properly.
Multiple tests added to verify both graph correctness as well as a
higher level context test.
Will backport to 0.8.x
This switches to the Go "context" package for cancellation and threads
the context through all the way to evaluation to allow behavior based on
stopping deep within graph execution.
This also adds the Stop API to provisioners so they can quickly exit
when stop is called.
Fixes #10680
This moves TargetsTransformer to run after the transforms that add
module variables is run. This makes targeting work across modules (test
added).
This is a bug that only exists in the new graph, but was caught by a
shadow error in #10680. Tests were added to protect against regressions.
If a data source has explicit dependencies in `depends_on`, we can
assume the user has added those because of a dependency not tracked
directly in the config. If there are any entries in `depends_on`, don't
apply the data source early during Refresh.
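A sketch of the pattern (the data source name is illustrative):

```
resource "null_resource" "setup" {}

data "example_thing" "config" {
  # the dependency isn't expressed through any interpolation, so the
  # read must be deferred from refresh until apply
  depends_on = ["null_resource.setup"]
}
```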
Fixes #4645
This is something that never worked (even in legacy graphs), but as we
push forward towards encouraging multi-provider usage especially with
things like the Vault data source, I want to make sure we have this
right for 0.8.
When you have a config like this:
```
resource "foo_type" "name" {}

provider "bar" {
  attr = "${foo_type.name.value}"
}

resource "bar_type" "name" {}
```
Then the destruction ordering MUST be:
1. `bar_type`
2. `foo_type`
Since configuring the client for `bar_type` requires accessing data from
`foo_type`. Prior to this PR, these two would be done in parallel. This
properly pushes forward the dependency.
There are more cases I want to test but this is a basic case that is
fixed.
Fixes #10440
This updates the behavior of "apply" resources to depend on the
destroy versions of their dependencies.
We make an exception to this behavior when the "apply" resource is CBD.
This is odd and not 100% correct, but it mimics the behavior of the
legacy graphs and avoids us having to do major core work to support the
100% correct solution.
I'll explain this in examples...
Given the following configuration:
resource "null_resource" "a" {
count = "${var.count}"
}
resource "null_resource" "b" {
triggers { key = "${join(",", null_resource.a.*.id)}" }
}
Assume we've successfully created this configuration with count = 2.
When going from count = 2 to count = 1, `null_resource.b` should wait
for `null_resource.a.1` to destroy.
If it doesn't, then it is a race: depending when we interpolate the
`triggers.key` attribute of `null_resource.b`, we may get 1 value or 2.
If `null_resource.a.1` is destroyed, we'll get 1. Otherwise, we'll get
2. This was the root cause of #10440
In the legacy graphs, `null_resource.b` would depend on the destruction
of any `null_resource.a` (orphans, tainted, anything!). This would
ensure proper ordering. We mimic that behavior here.
The difference is CBD. If `null_resource.b` has CBD enabled, then the
ordering **in the legacy graph** becomes:
1. null_resource.b (create)
2. null_resource.b (destroy)
3. null_resource.a (destroy)
In this case, the update would always have 2 values for `triggers.key`,
even though we were destroying a resource later! This scenario required
two `terraform apply` operations.
This is what the CBD check is for in this PR. We do this to mimic the
behavior of the legacy graph.
The correct solution to do one day is to allow splat references
(`null_resource.a.*.id`) to happen in parallel and only read up to
the `count` amount in the state. This requires some fairly significant
work close to the 0.8 release date, so we can defer this to later and
adopt the 0.7.x behavior for now.
Fixes #10439
When a CBD resource depends on a non-CBD resource, the non-CBD resource
is auto-promoted to CBD. This was done in
cf3a259. This PR makes it so that we
also set the config CBD to true. This causes the proper runtime
execution behavior to occur where we depose state and so on.
So in addition to simple graph edge tricks we also treat the non-CBD
resources as CBD resources.
Fixes #10313
The new graph wasn't properly recording resource dependencies to a
specific index of itself. For example: `foo.bar.2` depending on
`foo.bar.0` wasn't shown in the state when it should've been.
This adds a test to verify this and fixes it.
terraform: more specific resource references
terraform: outputs need to know about the new reference format
terraform: resources w/o a config still have a referencable name
This makes the old graph also prune orphan outputs in modules.
This will fix shadow graph errors such as #9905 since the old graph will
also behave correctly in these scenarios.
Luckily, because orphan outputs don't rely on anything, we were able to
simply use the same transformer!
Fixes #9920
This was an issue caught with the shadow graph. Self references in
provisioners were causing a self-edge on destroy apply graphs.
We need to explicitly check that we're not creating an edge to ourself.
This is also how the reference transformer works.
Fixes a shadow graph error found during usage.
The new apply graph was only adding module variables that referenced
data that existed _in the graph_. This isn't a valid optimization since
the data it is referencing may be in the state with no diff, and
therefore available but not in the graph.
This just removes that optimization logic, which causes no failing
tests. It also adds a test that exposes the bug if we had the pruning
logic.
Found via a shadow graph failure:
Provider aliases weren't being configured by the new apply graph.
This was caused by the transform that attaches configs to provider nodes
not being able to handle aliases and therefore not attaching a config.
Added a test to this and fixed it.
Fixes #9444
This appears to be a regression from 0.7.0, but there were no tests
covering it so we missed it and changed the behavior at some point! Oh
no.
This PR makes the ordering of multi-var access: `resource.name.*.attr`
consistent: it is the ordering of the count, not the lexical ordering of
the value. This allows behavior where two lists are indexed by count
index and can be assumed to be related (for example user data for an aws
instance, as shown in the above referenced issue).
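For example (hypothetical config), two lists related by count index:

```
variable "user_data" {
  type = "list"
}

resource "aws_instance" "web" {
  count     = "${length(var.user_data)}"
  user_data = "${element(var.user_data, count.index)}"
}

output "ids" {
  # ordered by count index, so ids[0] corresponds to user_data[0]
  value = "${aws_instance.web.*.id}"
}
```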
Two new context tests added to cover this case.
Fixes #9840
The new apply graph wasn't properly nesting provisioners. This resulted
in the provisioners being nil on apply in the shadow graph, which
caused the crash in the above issue.
The actual cause of this is that the new graphs we're moving towards do
not have any "flattening" (they are flat to begin with): all modules are
in the root graph from the beginning of construction versus building a
number of different graphs and flattening them. The transform that adds
the provisioners wasn't modified to handle already-flat graphs and so
was only adding provisioners to the root module, not children.
The change modifies the `MissingProvisionerTransformer` (primarily) to
support already-flat graphs and add provisioners for all module levels.
Tests are there to cover this as well.
**NOTE:** This PR focuses on fixing that specific issue. I'm going to follow up
this PR with another PR that is more focused on being robust against
crashing (more nil checks, recover() for shadow graph, etc.). In the
interest of focus and keeping a PR reviewable this focuses only on the
issue itself.
Fixes #6327
Deposed instances weren't calling PostApply which was causing the counts
for what happened during `apply` to be wrong. This was a simple fix to
ensure we call that hook.
This reverts commit c3a4cff133, reversing
changes made to 791a02e6e4.
This change requires plugin recompilation and we should hold off until a
minor release for that.
This enables the shadow graph since all tests pass!
We also change the destroy node to check the resource type using the
addr since that is always available and reliable. The configuration can
be nil for orphans.
It appears data sources have always been coded to work during apply, as
can be verified with this test (no impl. changes were necessary to make
it pass).
This test should be added to ensure our apply graph always works with
data sources as well.
This adds the proper logic for "disabling" providers to the new apply
graph: interpolating and storing the config for inheritance but not
actually initializing and configuring the provider.
This is important since parent modules will often contain incomplete
provider configurations for the purpose of inheritance that would error
if they were actually attempted to be configured (since they're
incomplete). If the provider is not used, it should be "disabled".
This doesn't explicitly set `rs.Provider` on destroy nodes.
To be honest, I'm not sure why this was done in the first place (git
blame points to 6fda7bb5483a155b8ae1e1e4e4b7b7c4073bc1d9). Tests always
passed without it, and adding it causes other tests to fail. I
should've never changed those other tests.
Removing it now to get tests passing, this also reverts the test changes
made in 8213824962f085279810f04b60b95d1176a3a3f2.