Use the ResourceState.Provider field to store the full name of the
provider used during apply. This field is only used when a resource is
removed from the config, and will allow that resource to be removed by
the exact same provider with which it was created.
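For reference, a trimmed sketch of the field in question, in the spirit of
the state structs in terraform/state.go. The resolved-address format shown
in the example is an assumption, not a guaranteed format:

    package main

    import "fmt"

    // Trimmed sketch of the state shape (the full struct lives in
    // terraform/state.go).
    type ResourceState struct {
        Type     string `json:"type"`
        Provider string `json:"provider"` // resolved provider name, consulted for orphaned resources
    }

    func main() {
        rs := ResourceState{
            Type:     "aws_instance",
            Provider: "module.child.provider.aws", // illustrative resolved name
        }
        fmt.Println(rs.Provider)
    }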
Modify the locations which might accept the value of the
ResourceState.Provider field to detect that the name is resolved.
Here we complete the passing of providers between modules via the
module/providers configuration, add another test and update broken test
outputs.
The DisableProviderTransformer is being removed, since it was really
only for provider configuration inheritance. Since configuration is no
longer inherited, there's no need to keep around unused providers. There
actually shouldn't be any unused providers going into the graph any
longer, but put off verifying that condition for later. Replace its
usage with the PruneProviderTransformer, and use that to also remove the
unneeded proxy provider nodes.
Implement adding providers through the module/providers map in the
configuration.
The way this works is that we start walking the module tree from the
top, and for any instance of a provider that can accept a configuration
through the parent's module/providers map, we add a proxy node that
provides the real name and a pointer to the actual parent provider node.
Multiple proxies can be chained back to the original provider. When
connecting resources to providers, if that provider is a proxy, we can
then connect the resource directly to the proxied node. The proxies are
later removed by the DisableProviderTransformer.
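A minimal self-contained sketch of the proxy mechanism, using illustrative
type names rather than the exact core symbols:

    package main

    import "fmt"

    // GraphNodeProvider is a stand-in for the provider node interface.
    type GraphNodeProvider interface {
        ProviderName() string
    }

    // concreteProvider plays the role of a real, configured provider node.
    type concreteProvider struct{ name string }

    func (p *concreteProvider) ProviderName() string { return p.name }

    // graphNodeProxyProvider represents a provider passed into a module
    // through the module/providers map: it records the name used inside the
    // module and a pointer to the node it forwards to. Proxies can chain
    // across multiple module levels.
    type graphNodeProxyProvider struct {
        nameValue string
        target    GraphNodeProvider
    }

    func (n *graphNodeProxyProvider) ProviderName() string { return n.nameValue }

    // Target follows chained proxies until it reaches the concrete provider
    // node, so a resource can be connected directly to the proxied node.
    func (n *graphNodeProxyProvider) Target() GraphNodeProvider {
        if p, ok := n.target.(*graphNodeProxyProvider); ok {
            return p.Target()
        }
        return n.target
    }

    func main() {
        root := &concreteProvider{name: "aws"}
        child := &graphNodeProxyProvider{nameValue: "aws", target: root}
        grandchild := &graphNodeProxyProvider{nameValue: "aws", target: child}
        fmt.Println(grandchild.Target().ProviderName()) // "aws": the root provider node
    }

Chained proxies collapse to a single concrete target, which is what lets a
resource in a grandchild module attach directly to a provider configured at
the root.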
This should reinstate the 0.11 beta inheritance behavior, but will
allow us to later store the actual concrete provider used by a resource,
so that it can be re-connected if it's orphaned by removing its module
configuration.
Now that resources can be connected to providers with different paths in
the core graph, handling the inheritance in config makes less sense.
Removing this to make room for core to walk the Tree and connect
resources directly to the proper provider instance.
A missing provider alias should not be implicitly added to the graph.
Run the AttachProviderConfigTransformer immediately after adding the
providers, since the ProviderConfigTransformer should have just added
these nodes.
There are situations where one may need to write to a set, list, or map
more than once per single TF operation (apply/refresh/etc). In these
cases, further writes using Set (example: d.Set("some_set", newSet))
currently create unstable results in the set writer (the name of the
writer layer that holds the data set by these calls) because old keys
are not being cleared out first.
This bug is most visible when using sets. Example: the first write to a
set writes elements that have been hashed at 10 and 20, and the second
write writes elements that have been hashed at 30 and 40. While the set
length has been correctly set at 2, since a set is basically a map (as
is the entire map writer) and map iteration order is non-deterministic,
reads of this set will now deliver unstable results: sometimes you may
correctly get 30 and 40, but sometimes you may get 10 and 20, or even 10
and 30, etc.
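To make the failure mode concrete, here is a hedged illustration of the
flattened keys the map-based writer could hold after the second write. The
key layout mimics the flatmap style; it is not the writer's exact internals:

    package main

    import "fmt"

    func main() {
        // Flattened writer state after two writes to "some_set" without
        // clearing: stale element keys 10 and 20 survive alongside 30 and
        // 40, while the count key still says 2.
        w := map[string]string{
            "some_set.#":  "2",
            "some_set.10": "a", // stale, from the first write
            "some_set.20": "b", // stale, from the first write
            "some_set.30": "c",
            "some_set.40": "d",
        }
        // A reader that walks the map in Go's non-deterministic iteration
        // order and stops after the declared count can return any two of
        // the four element keys.
        remaining := 2
        for k, v := range w {
            if k == "some_set.#" {
                continue
            }
            fmt.Println(k, "=", v)
            remaining--
            if remaining == 0 {
                break
            }
        }
    }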
This problem propagates to state which is even more damaging as unstable
results are set to state where they become part of the permanent data
set going forward.
The problem also applies to lists and maps. This is probably more of an
issue with maps, as a map can contain any key/value combination and
hence there is no predictable pattern where keys would be overwritten
with default or zero values. This is contrary to complex lists, which
have this problem as well, but since lists are deterministic and the
length of a list properly gets updated during the overwrite, the problem
is masked by the fact that a read will only read to the boundary of the
list, skipping any bad data that may still be present past the list
boundary.
This update clears the child contents of any set, list, or map before
beginning a new write to address this issue. Tests are included for all
three data types.
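A minimal sketch of the clearing step, assuming the flatmap-style layout
from the illustration above. The function name and signature are
illustrative; the real writer lives in helper/schema:

    package main

    import (
        "fmt"
        "strings"
    )

    // clearTree deletes every existing child key under the given address so
    // a fresh write cannot leave stale elements behind.
    func clearTree(result map[string]string, addr ...string) {
        prefix := strings.Join(addr, ".") + "."
        for k := range result {
            if strings.HasPrefix(k, prefix) {
                delete(result, k)
            }
        }
    }

    func main() {
        w := map[string]string{
            "some_set.#":  "2",
            "some_set.10": "a",
            "some_set.20": "b",
        }
        clearTree(w, "some_set") // run before the second write begins
        w["some_set.#"] = "2"
        w["some_set.30"] = "c"
        w["some_set.40"] = "d"
        fmt.Println(len(w)) // 3: the count key plus exactly the two new elements
    }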
We have a few pesky functions that don't act like proper functions and
instead return different values on each call. These are tricky because
we need to make sure we don't trip over ourselves by re-generating these
between plan and apply.
Here we add a context test to verify correct behavior in the presence
of such functions.
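As a tiny standalone illustration of the impurity involved, using time in
place of an interpolation function like timestamp():

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Two evaluations of the "same" expression produce different
        // values, so a value computed at plan time won't match one
        // recomputed at apply time.
        a := time.Now().UTC().Format(time.RFC3339Nano)
        time.Sleep(time.Millisecond)
        b := time.Now().UTC().Format(time.RFC3339Nano)
        fmt.Println(a == b) // false
    }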
There's actually a pre-existing bug which this test caught as originally
written: we re-evaluate the interpolation expressions during apply,
causing these unstable functions to produce new values, and so the
applied value ends up not exactly matching the plan. This is a bug that
needs fixing, but it's been around at least since v0.7.6 (random old
version I tried this with to see) so we'll put it on the list and address
it separately. For now, this part of the test is commented out with a
TODO attached.
We previously didn't compare values but had a TODO to start doing so,
which we then recently did. Unfortunately it turns out that we _depend_
on not comparing values here, because when we use EvalCompareDiff (a key
user of Diff.Same) we pass in a diff made from a fresh re-interpolation
of the configuration and so any non-pure function results (timestamp,
uuid) have produced different values.
As part of the 0.10 core/provider split we moved this provider, along with
all the others, out into its own repository.
In retrospect, the "terraform" provider doesn't really make sense to be
separated since it's just a thin wrapper around some core code anyway,
and so re-integrating it into core avoids the confusion that results when
Terraform Core and the terraform provider have inconsistent versions of
the backend code and dependencies.
There is no good reason to use a different version of the backend code
in the provider than in core, so this new "internal provider" mechanism
is stricter than the old one: it's not possible to use an external build
of this provider at all, and version constraints for it are rejected as
a result.
This provider is also run in-process rather than in a child process, since
again it's just a very thin wrapper around code that's already running
in Terraform core anyway, and so the process barrier between the two does
not create enough advantage to warrant the additional complexity.
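A hedged sketch of the mechanism: a table of in-process factories is
consulted before any external plugin discovery. All names below are
assumptions, not the exact symbols in core:

    package main

    import (
        "errors"
        "fmt"
    )

    // ResourceProvider stands in for Terraform's provider interface; the
    // real interface is far larger.
    type ResourceProvider interface {
        Name() string
    }

    type terraformProvider struct{}

    func (terraformProvider) Name() string { return "terraform" }

    // internalProviders maps names to in-process factories. Because the
    // "terraform" provider is compiled into the binary, external builds are
    // impossible and version constraints for it can be rejected outright.
    var internalProviders = map[string]func() (ResourceProvider, error){
        "terraform": func() (ResourceProvider, error) { return terraformProvider{}, nil },
    }

    // providerFactory prefers the in-process table; only unknown names
    // would fall through to external plugin discovery (elided here).
    func providerFactory(name string) (func() (ResourceProvider, error), error) {
        if f, ok := internalProviders[name]; ok {
            return f, nil // in-process: no plugin handshake, no child process
        }
        return nil, errors.New("external plugin discovery elided in this sketch")
    }

    func main() {
        f, _ := providerFactory("terraform")
        p, _ := f()
        fmt.Println(p.Name())
    }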
Change "Downloading" to 'Initializing" to match the provider loading
dialog.
List each module being loaded.
If a registry module is being downloaded, list the registry host and the
version discovered.
Show the source string from the config that is being fetched, rather
than the go-getter url. The full source can be found in the logs for
debugging.
Add much more extensive logging.
We can remove the AllowAny option, which is no longer used, and providers
don't need to be connected to their resources at this stage, since that
will happen in the ProviderTransformer.
Now that providers in the graph can adopt resources without an explicit
provider, there's no need to add the implicit configs to the module.Tree
when loading.
Simplify the MissingProviderTransformer so that it only adds missing
providers at the root level. There's no need for the multiple providers
added at every level of the path.
ParentProviderTransformer then only needs to connect providers with the
equivalent type at the root level.
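A rough sketch of the simplified root-only behavior, with illustrative
names rather than the transformer's exact symbols:

    package main

    import "fmt"

    // providerAddr locates a provider node in the graph.
    type providerAddr struct {
        Path []string // module path; nil means the root module
        Type string   // e.g. "aws"
    }

    // addMissingProviders creates a node for each needed provider type that
    // is absent from the graph, always at the root path, never per module.
    func addMissingProviders(existing map[string]bool, needed []string) []providerAddr {
        var added []providerAddr
        for _, typ := range needed {
            if !existing[typ] {
                added = append(added, providerAddr{Path: nil, Type: typ})
                existing[typ] = true
            }
        }
        return added
    }

    func main() {
        existing := map[string]bool{"aws": true}
        fmt.Println(addMissingProviders(existing, []string{"aws", "null"}))
        // => [{[] null}]: only "null" is added, and only at the root
    }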
The grandChild missing test has a provider declared in a child module
which is missing in a grandchild module. Verify that the grandchild gets
connected to the child provider, and that they are all connected to the
root providers.
Update some test outputs to match the expected behavior of only adding
missing providers at the root level.