We've historically been somewhat inconsistent in how we refer to the
type of object defined by "variable" blocks in configuration. Parts of
our documentation refer to them as "input variables" or just "variables",
while our implementation refers to them as "user variables".
Since Terraform Registry is now also referring to these as "Inputs", here
we standardize on "Input Variable" as the fully-qualified name for this
concept, with "variable" being a shorthand for this where context is
obvious. Outside of this context, anything that can be referred to in
an interpolation expression is generically known as a "variable", with
Input Variables being just one kind, specified by the "var." prefix.
While this terminology shift is not critical yet, it will become more
important as we start to document the new version of the configuration
language so we can use the generic meaning of "variable" there.
The bulk of the text on this page hasn't been revised for some time and
so parts of it were using non-idiomatic terminology or not defining terms
at all.
The main goal of this revision is to standardize on the following terms:
- "provider configuration" refers to a specific provider block in config,
as a distinct idea from the provider _itself_, which is a singleton.
- "Default" vs. "additional" provider configurations, distinguishing
those without and with "alias" arguments respectively. These are named
here so that we can use this terminology to describe the different
behaviors of each for the purposes of provider inheritance between
modules.
There was a bug where all references would be discarded when a
self-reference was encountered. Since a module references all of its
descendants by its own path, it returns a self-reference by definition.
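As a rough sketch of the fix (a hypothetical helper, not the actual
Terraform code): the self-reference is skipped and the rest are kept,
rather than the whole list being dropped the moment one is found.

```go
package main

import "fmt"

// filterSelfReferences is a hypothetical helper illustrating the fix:
// a reference back to the node itself is skipped, instead of the whole
// reference list being discarded the moment one is encountered.
func filterSelfReferences(self string, refs []string) []string {
	var out []string
	for _, ref := range refs {
		if ref == self {
			// The buggy version effectively returned nil here,
			// losing every remaining reference.
			continue
		}
		out = append(out, ref)
	}
	return out
}

func main() {
	refs := []string{"module.child", "module.child.aws_instance.web"}
	fmt.Println(filterSelfReferences("module.child", refs))
	// Output: [module.child.aws_instance.web]
}
```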
Our new resource-to-provider matching is stricter about explicitly
matching aliases when config is present (no longer automatically
inherited) and about locating providers to destroy removed resources.
With this in mind, this is an attempt to expand slightly on this error
message, now that users are more likely to see it.
In future it would be nice to do some explicit validation of this a bit
closer to the UI, so we can have room for more explanatory text, but this
additional messaging is intended to help users understand why they might
be seeing this message after removing a provider configuration block from
configuration, whether directly or as a side-effect of removing a module.
Remove the module entry from the state if a module is no longer in the
configuration. Modules are not removed if there are any existing
resources with the module path as a prefix. The only time this should be
the case is if a module was removed in the config, but the apply didn't
target that module.
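A minimal sketch of that guard, using hypothetical types rather than
the real state structures: the module entry may only be dropped when no
remaining resource path has the module path as a prefix.

```go
package main

import "fmt"

// hasPathPrefix reports whether path starts with prefix, e.g.
// ["root", "app", "db"] has the prefix ["root", "app"].
func hasPathPrefix(path, prefix []string) bool {
	if len(path) < len(prefix) {
		return false
	}
	for i, p := range prefix {
		if path[i] != p {
			return false
		}
	}
	return true
}

// canRemoveModule is a hypothetical check: the module entry may only be
// removed from the state when no resource still lives at or below its
// path.
func canRemoveModule(modulePath []string, resourcePaths [][]string) bool {
	for _, rp := range resourcePaths {
		if hasPathPrefix(rp, modulePath) {
			return false
		}
	}
	return true
}

func main() {
	resources := [][]string{{"root", "app", "db"}}
	fmt.Println(canRemoveModule([]string{"root", "app"}, resources)) // false
	fmt.Println(canRemoveModule([]string{"root", "web"}, resources)) // true
}
```

The same prefix test is what the dependency wiring described next
relies on.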
Create a NodeModuleRemoved and an associated EvalDeleteModule to track
the module in the graph, then remove it from the state. The
NodeModuleRemoved dependencies are simply any other nodes whose paths
contain the module path as a prefix.
This probably could have been done much more easily as a step in pruning the
state, but modules are going to have to be promoted to full graph nodes
anyway in order to support count.
You can't find orphans by walking the config, because by definition
orphans aren't in the config.
Leaving the broken test for when empty modules are removed from the
state as well.
Now that the resolved provider is always stored in state, we need to
update all the test data to match. There will probably be some more
breakage once the provider field is properly diffed.
Use the ResourceState.Provider field to store the full name of the
provider used during apply. This field is only used when a resource is
removed from the config, and will allow that resource to be removed by
the exact same provider with which it was created.
Modify the locations that might accept the value of the
ResourceState.Provider field to detect that the name is resolved.
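To illustrate the intent with a rough sketch (a hypothetical helper and
an assumed provider-name format, not the actual Terraform code): a
removed resource keeps using the exact provider recorded for it, and
only falls back to deriving a provider from the resource type when
nothing was recorded.

```go
package main

import (
	"fmt"
	"strings"
)

// providerForRemovedResource is a hypothetical helper: prefer the
// provider name recorded in state at apply time, and only fall back to
// deriving a default provider from the resource type when nothing was
// recorded. The "provider.aws.west" string below is an assumed format.
func providerForRemovedResource(storedProvider, resourceType string) string {
	if storedProvider != "" {
		return storedProvider
	}
	// Fall back: derive the default provider from the type prefix,
	// e.g. "aws_instance" -> "aws".
	return strings.SplitN(resourceType, "_", 2)[0]
}

func main() {
	fmt.Println(providerForRemovedResource("provider.aws.west", "aws_instance"))
	fmt.Println(providerForRemovedResource("", "aws_instance"))
}
```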
Here we complete the passing of providers between modules via the
module/providers configuration, add another test and update broken test
outputs.
The DisableProviderTransformer is being removed, since it was really
only for provider configuration inheritance. Since configuration is no
longer inherited, there's no need to keep around unused providers.
There actually shouldn't be any unused providers going into the graph
any longer, but verifying that condition is put off for later. Replace
its usage with the PruneProviderTransformer, and use that to also
remove the unneeded proxy provider nodes.
Implement adding providers through the module/providers map in the
configuration.
The way this works is that we start walking the module tree from the
top, and for any instance of a provider that can accept a configuration
through the parent's module/providers map, we add a proxy node that
provides the real name and a pointer to the actual parent provider node.
Multiple proxies can be chained back to the original provider. When
connecting resources to providers, if that provider is a proxy, we can
then connect the resource directly to the proxied node. The proxies are
later removed by the DisableProviderTransformer.
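A simplified sketch of the proxy-following step, with hypothetical
types standing in for the real graph nodes: when connecting a resource,
proxies are followed back until a concrete provider node is reached.

```go
package main

import "fmt"

// providerNode is a hypothetical stand-in for a provider graph node: a
// concrete configuration, or a proxy pointing back at its parent's node.
type providerNode struct {
	name   string
	target *providerNode // non-nil when this node is only a proxy
}

// resolve follows a chain of proxies until it reaches the concrete
// provider node that actually holds configuration.
func resolve(p *providerNode) *providerNode {
	for p.target != nil {
		p = p.target
	}
	return p
}

func main() {
	root := &providerNode{name: "provider.aws"}
	childProxy := &providerNode{name: "module.child.provider.aws", target: root}
	grandchildProxy := &providerNode{name: "module.child.module.grandchild.provider.aws", target: childProxy}

	// A resource in the grandchild module connects straight to the
	// concrete provider at the root.
	fmt.Println(resolve(grandchildProxy).name) // provider.aws
}
```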
This should reinstate the 0.11 beta inheritance behavior, but will
allow us to later store the actual concrete provider used by a resource,
so that it can be re-connected if it's orphaned by removing its module
configuration.
Now that resources can be connected to providers with different paths in
the core graph, handling the inheritance in config makes less sense.
Removing this to make room for core to walk the Tree and connect
resources directly to the proper provider instance.
A missing provider alias should not be implicitly added to the graph.
Run the AttachProviderConfigTransformer immediately after adding the
providers, since the ProviderConfigTransformer should have just added
these nodes.
There are situations where one may need to write to a set, list, or map
more than once per single TF operation (apply/refresh/etc). In these
cases, further writes using Set (example: d.Set("some_set", newSet))
currently create unstable results in the set writer (the name of the
writer layer that holds the data set by these calls) because old keys
are not being cleared out first.
This bug is most visible when using sets. Example: the first write to
the set writes elements that have been hashed at 10 and 20, and the
second write writes elements that have been hashed at 30 and 40. While
the set length has been correctly set at 2, a set is basically a map
(as is the entire map writer), and map results are delivered to the
caller non-deterministically, so reads of this set will now deliver
unstable results - sometimes you may correctly get 30 and 40, but
sometimes you may get 10 and 20, or even 10 and 30, etc.
This problem propagates to state which is even more damaging as unstable
results are set to state where they become part of the permanent data
set going forward.
The problem also applies to lists and maps. It is probably more of an
issue with maps, as a map can contain any key/value combination and
hence there is no predictable pattern where keys would be overwritten
with default or zero values. Complex lists have this problem as well,
but since lists are deterministic and the length of a list is properly
updated during the overwrite, the problem is masked by the fact that a
read will only read up to the boundary of the list, skipping any bad
data that may still be present past the list boundary.
This update clears the child contents of any set, list, or map before
beginning a new write to address this issue. Tests are included for all
three data types.
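A minimal illustration of the failure mode and the fix, using a plain
map keyed by flattened addresses instead of the real field writer
(writeSet is a hypothetical stand-in, not Terraform's actual writer):
without clearing the old child keys first, a second Set leaves stale
entries behind.

```go
package main

import (
	"fmt"
	"strings"
)

// writeSet stores each element under "<field>.<hash>" plus a count key,
// roughly mimicking a flattened, map-backed field writer.
func writeSet(w map[string]string, field string, elems map[int]string, clearFirst bool) {
	if clearFirst {
		// The fix: drop every existing child key before writing new data.
		for k := range w {
			if strings.HasPrefix(k, field+".") {
				delete(w, k)
			}
		}
	}
	for hash, v := range elems {
		w[fmt.Sprintf("%s.%d", field, hash)] = v
	}
	w[field+".#"] = fmt.Sprintf("%d", len(elems))
}

func main() {
	w := map[string]string{}
	writeSet(w, "some_set", map[int]string{10: "a", 20: "b"}, false)
	writeSet(w, "some_set", map[int]string{30: "c", 40: "d"}, false)
	// Count says 2, but the stale keys hashed at 10 and 20 are still
	// present, so a read may return a mix of old and new elements.
	fmt.Println(w)

	w = map[string]string{}
	writeSet(w, "some_set", map[int]string{10: "a", 20: "b"}, true)
	writeSet(w, "some_set", map[int]string{30: "c", 40: "d"}, true)
	fmt.Println(w) // only the elements hashed at 30 and 40 remain
}
```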