Our new resource-to-provider matching is stricter about explicitly
matching aliases when config is present (aliases are no longer
automatically inherited) and about locating providers to destroy removed
resources. With this in mind, this is an attempt to expand slightly on
this error message now that users are more likely to see it.
In the future it would be nice to do some explicit validation of this a
bit closer to the UI, so we have room for more explanatory text, but this
additional messaging is intended to help users understand why they might
be seeing this message after removing a provider configuration block from
configuration, whether directly or as a side effect of removing a module.
Remove the module entry from the state if a module is no longer in the
configuration. Modules are not removed if there are any existing
resources with the module path as a prefix. The only time this should be
the case is if a module was removed in the config, but the apply didn't
target that module.
Create a NodeModuleRemoved and an associated EvalDeleteModule to track
the module in the graph, then remove it from the state. The
NodeModuleRemoved dependencies are simply any other nodes that contain
the module path as a prefix in their path.
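As a minimal sketch of the prefix check this implies (assuming module
paths are represented as string slices, e.g. []string{"root", "network"};
the real node and state types are omitted):

    // hasModulePrefix reports whether path begins with modulePath, so a
    // node with this path either blocks the module's removal or becomes
    // a dependency of its NodeModuleRemoved.
    func hasModulePrefix(path, modulePath []string) bool {
        if len(path) < len(modulePath) {
            return false
        }
        for i, part := range modulePath {
            if path[i] != part {
                return false
            }
        }
        return true
    }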
This could probably have been done much more easily as a step in pruning
the state, but modules are going to have to be promoted to full graph
nodes anyway in order to support count.
You can't find orphans by walking the config, because by definition
orphans aren't in the config.
The broken test is left in place for when empty modules are removed from
the state as well.
Now that the resolved provider is always stored in state, we need to
update all the test data to match. There will probably be some more
breakage once the provider field is properly diffed.
Use the ResourceState.Provider field to store the full name of the
provider used during apply. This field is only used when a resource is
removed from the config, and will allow that resource to be removed by
the exact same provider with which it was created.
Modify the locations that might accept the value of the
ResourceState.Provider field to detect that the name is resolved.
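As a rough illustration of what "detect that the name is resolved" could
look like (the address format here is an assumption for the sketch, not
taken from this change): a resolved name carries a full provider address
rather than a bare type name.

    import "strings"

    // providerIsResolved is a hypothetical helper. It assumes a resolved
    // name looks like "provider.aws" or "module.child.provider.aws",
    // while an unresolved name is a bare type such as "aws".
    func providerIsResolved(name string) bool {
        return strings.HasPrefix(name, "provider.") ||
            strings.Contains(name, ".provider.")
    }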
Here we complete the passing of providers between modules via the
module/providers configuration, add another test and update broken test
outputs.
The DisabledProviderTransformer is being removed, since it was really
only for provider configuration inheritance. Since configuration is no
longer inherited, there's no need to keep around unused providers. There
actually shouldn't be any unused providers going into the graph any
longer, but verifying that condition is put off for later. Replace its
usage with the PruneProviderTransformer, and use that to also remove the
unneeded proxy provider nodes.
Implement adding providers through the module/providers map in the
configuration.
The way this works is that we start walking the module tree from the
top, and for any instance of a provider that can accept a configuration
through the parent's module/provider map, we add a proxy node that
provides the real name and a pointer to the actual parent provider node.
Multiple proxies can be chained back to the original provider. When
connecting resources to providers, if that provider is a proxy, we can
then connect the resource directly to the proxied node. The proxies are
later removed by the DisabledProviderTransformer.
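A simplified sketch of the proxy idea (the type and function names here
are illustrative, not the real graph types): each proxy records the node
it stands in for, and connecting a resource means following the chain
until a concrete provider is reached.

    type graphNode interface {
        Name() string
    }

    // proxyProviderNode stands in for a provider passed down from a
    // parent module; target is either the concrete provider node or
    // another proxy further up the chain.
    type proxyProviderNode struct {
        name   string
        target graphNode
    }

    func (p *proxyProviderNode) Name() string { return p.name }

    // resolveProvider follows proxy links until it reaches a non-proxy
    // node, so a resource can connect directly to the real provider.
    func resolveProvider(n graphNode) graphNode {
        for {
            p, ok := n.(*proxyProviderNode)
            if !ok {
                return n
            }
            n = p.target
        }
    }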
This should reinstate the 0.11 beta inheritance behavior, but will
allow us to later store the actual concrete provider used by a resource,
so that it can be re-connected if it's orphaned by removing its module
configuration.
Now that resources can be connected to providers with different paths in
the core graph, handling the inheritance in config makes less sense.
Removing this to make room for core to walk the Tree and connect
resources directly to the proper provider instance.
A missing provider alias should not be implicitly added to the graph.
Run the AttachProviderConfigTransformer immediately after adding the
providers, since the ProviderConfigTransformer should have just added
these nodes.
There are situations where one may need to write to a set, list, or map
more than once per single TF operation (apply/refresh/etc). In these
cases, further writes using Set (example: d.Set("some_set", newSet))
currently create unstable results in the set writer (the name of the
writer layer that holds the data set by these calls) because old keys
are not being cleared out first.
This bug is most visible when using sets. Example: the first write to a
set writes elements that have been hashed at 10 and 20, and the second
write writes elements that have been hashed at 30 and 40. While the set
length has been correctly set at 2, since a set is basically a map (as is
the entire map writer) and map iteration is non-deterministic, reads of
this set will now deliver unstable results: sometimes you may correctly
get 30 and 40, but sometimes you may get 10 and 20, or even 10 and 30,
and so on.
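To make the failure mode concrete, here is a toy model of the writer as a
flat map keyed by field paths (an assumption for illustration only; this
is not the real schema writer):

    package main

    import "fmt"

    func main() {
        writer := map[string]interface{}{}

        // First write: two elements hashed at 10 and 20.
        writer["some_set.10"] = "a"
        writer["some_set.20"] = "b"
        writer["some_set.#"] = 2

        // Second write without clearing: elements hashed at 30 and 40.
        // The count still says 2, but four element keys now exist, so a
        // read that picks elements via map iteration is unstable.
        writer["some_set.30"] = "c"
        writer["some_set.40"] = "d"
        writer["some_set.#"] = 2

        fmt.Println(len(writer) - 1) // 4 element keys despite a count of 2
    }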
This problem propagates to state which is even more damaging as unstable
results are set to state where they become part of the permanent data
set going forward.
The problem also applies to lists and maps. It is probably more of an
issue with maps, since a map can contain any key/value combination and
hence there is no predictable pattern where keys would be overwritten
with default or zero values. Complex lists have this problem as well, but
since lists are deterministic and the length of a list is properly
updated during the overwrite, the problem is masked by the fact that a
read will only read up to the boundary of the list, skipping any bad data
that may still be present past the list boundary.
This update clears the child contents of any set, list, or map before
beginning a new write to address this issue. Tests are included for all
three data types.
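In terms of the toy model above, the fix amounts to something like the
following before each new write (again a sketch, not the actual writer
code):

    import "strings"

    // clearChildren deletes every existing child key under the field so
    // a fresh write cannot leave stale hashed keys behind.
    func clearChildren(writer map[string]interface{}, field string) {
        for k := range writer {
            if strings.HasPrefix(k, field+".") {
                delete(writer, k)
            }
        }
    }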
We have a few pesky functions that don't act like proper functions and
instead return different values on each call. These are tricky because
we need to make sure we don't trip over ourselves by re-generating these
between plan and apply.
Here we add a context test to verify correct behavior in the presence
of such functions.
There's actually a pre-existing bug which this test caught as originally
written: we re-evaluate the interpolation expressions during apply,
causing these unstable functions to produce new values, and so the
applied value ends up not exactly matching the plan. This is a bug that
needs fixing, but it's been around at least since v0.7.6 (random old
version I tried this with to see) so we'll put it on the list and address
it separately. For now, this part of the test is commented out with a
TODO attached.
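The shape of the problem, as a hedged Go sketch (github.com/google/uuid
is used purely as a stand-in for an impure interpolation function like
uuid()):

    package main

    import (
        "fmt"

        "github.com/google/uuid" // stand-in impure function, for illustration
    )

    func main() {
        planned := uuid.New().String() // value produced at plan time
        applied := uuid.New().String() // value produced again at apply time

        // Because the function is not pure, re-evaluating it during
        // apply yields a value that no longer matches the plan.
        fmt.Println(planned == applied) // false
    }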
We previously didn't compare values but had a TODO to start doing so,
which we then recently did. Unfortunately it turns out that we _depend_
on not comparing values here, because when we use EvalCompareDiff (a key
user of Diff.Same) we pass in a diff made from a fresh re-interpolation
of the configuration and so any non-pure function results (timestamp,
uuid) have produced different values.
As part of the 0.10 core/provider split we moved this provider, along with
all the others, out into its own repository.
In retrospect, the "terraform" provider doesn't really make sense to be
separated since it's just a thin wrapper around some core code anyway,
and so re-integrating it into core avoids the confusion that results when
Terraform Core and the terraform provider have inconsistent versions of
the backend code and dependencies.
There is no good reason to use a different version of the backend code
in the provider than in core, so this new "internal provider" mechanism
is stricter than the old one: it's not possible to use an external build
of this provider at all, and version constraints for it are rejected as
a result.
This provider is also run in-process rather than in a child process, since
again it's just a very thin wrapper around code that's already running
in Terraform core anyway, and so the process barrier between the two does
not create enough advantage to warrant the additional complexity.