To avoid a massively disruptive change to how EvalNode works, we're now
"smuggling" warnings through the error return value for these, but this
depends on all of the Eval machinery correctly handling this special case
and continuing evaluation when only warnings are returned.
Previous changes missed EvalSequence as a place where execution halts on
error. Now it accumulates diagnostics itself, aborting only if any of
them are error diagnostics, and then wraps its own result in an error to
be returned by the main Eval function. That function already treated
non-fatal errors as a special case, and now also produces an explicit
log message about that situation to make it easier to spot in trace logs.
This also includes a more detailed message for the warning about provider
input being disabled. While that warning should be removed before we
release anyway, having the additional detail is helpful in debugging
tests where it's being returned.
Since outputs now rely on providers to ensure that a schema is available
for evaluation, we need to exclude providers from the TargetDownstream
check.
A provider's schema is the same regardless of its address in the config.
Key the cached schemas by provider type so that an evaluation referencing
a provider from an address not included in the graph can still find the
schema.
We no longer have this merge behavior, because it is inconsistent with how
variables behave in all other contexts and similar behavior can now be
achieved by merging the user's input with a predefined map in a local
value expression.
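For reference, a rough sketch of that replacement pattern (the variable,
local, and key names here are made up rather than taken from any fixture):

    variable "region_amis" {
      type    = map(string)
      default = {}
    }

    locals {
      base_amis = {
        "us-east-1" = "ami-0123456"
      }

      # The user's input takes precedence over the predefined entries,
      # approximating the old automatic merge for map variables.
      amis = merge(local.base_amis, var.region_amis)
    }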
Previously we'd create the stub provider in any case where we didn't need
a configured provider, but we also need to skip creating it if there's
already a provider node present, or else we can end up with multiple
stub nodes in the graph.
Since ProviderTransformer now needs the schema in order to infer indirect
references to providers, we must run AttachSchemaTransformer before the
provider transformers in order to calculate the correct ordering of
operations.
The provider schema cache is keyed by provider configuration address
rather than provider type, so we need to apply the same inheritance logic
when resolving providers needed because of references as we do for
providers needed for direct use.
This allows resources that override "provider" or resources in child
modules that have their own provider configurations to be associated
with the provider config they will eventually get their schema from, rather
than (as before) always the default configuration for the provider in
the root module.
Eventually it'd probably be better to switch to a provider schema cache
that is keyed by provider _type_ rather than provider config, but since
schemas are currently fetched by visiting the individual provider graph
nodes, we visit each provider configuration separately and fetch a
schema for each.
Any non-resource (outputs, variables, locals) that references a resource
type must also be connected to that resource's provider. This is required
during apply, because the graph built from the diff may not include the
referenced resources, as they are being evaluated from the state.
If the provider isn't present already, add a NodeEvalableProvider to
fetch the provider schema.
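As a sketch of the situation (hypothetical names, not taken from the
fixtures), an output like the one below can only be evaluated once the
aws_instance schema is available, so its node must also be connected to
the provider that supplies that schema:

    resource "aws_instance" "web" {
      ami           = "ami-0123456"
      instance_type = "t2.micro"
    }

    output "web_private_ip" {
      # Resolving this reference requires the aws_instance schema even when
      # the resource is being read from state rather than from the diff.
      value = aws_instance.web.private_ip
    }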
The provider transformers now need to happen after the outputs, locals,
and variables are transformed.
This is important in particular for shimming the "providers" map in module
blocks:
    providers = {
      "aws" = "aws.foo"
    }
We call this shim for both the key and the value here, and the value would
previously have worked. However, the key is wrapped up by the parser in
an ObjectConsKeyExpr container, which deals with the fact that in normal
use an object constructor key that is just a bare identifier is actually
interpreted as a string. We don't care about that interpretation for our
shimming purposes, and so we can just unwrap it here.
This test seems to have been buggy before our current work, with the test
fixture containing a reference to a resource that doesn't exist.
This both fixes the fixture and adds a mock schema for it, though doing
so revealed another error which isn't fixed here: the a_ids value seems
to come through as unknown after apply. That will be fixed in a
subsequent commit.
We no longer support using "self.count" in a provisioner to access the
resolved count meta-argument value of the associated resource.
This was only possible before because of a special exception in how
Terraform resolved variables, and in new HCL that exception isn't possible
because resource instances are real values in the scope and we don't want
to add this implied "count" attribute to all of them.
"count" is a property of the resource config rather than of the resource
instances, and since "self" is a resource _instance_ it doesn't make sense
to expose it there.
There is no replacement for this feature. In the rare case where it is
needed, the user must factor the count out into a named local value and
refer to that both in the count meta-argument and in the provisioner.
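A sketch of that replacement pattern (the resource and provisioner details
are illustrative only):

    locals {
      instance_count = 3
    }

    resource "aws_instance" "example" {
      count = local.instance_count

      provisioner "local-exec" {
        # Refer to the local value rather than the removed self.count.
        command = "echo one of ${local.instance_count} instances"
      }
    }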
Most of these changes are just adding schema to describe the expectations
of the existing test fixtures. However, some of them require the fixtures
themselves to be changed due to changing assumptions in the language.
This was previously an apply-time failure due to our inability to
type-check unknowns in 0.11, but we now retain type information for
unknown values and so this check now fails during plan instead.
The fixtures for this test assume some atypical arguments to the resources
and also need a provisioner schema.
This doesn't actually fix the test, but by fixing the schema/fixture this
exposes a problem that seems to exist in the main code, which will be
fixed in a subsequent commit.
On the initial pass here I reached a faulty conclusion about which value
from the new world should be shimmed into a NewInstanceInfo, based on a
poor read of the existing code.
It actually _should've_ been based on an absolute instance after all,
as evidenced by the expected result of TestContext2Refresh_targetedCount.
Therefore the signature is changed here, and all of the callers (which,
in retrospect, were all holding a full instance address anyway!) are
updated to that new signature.
References can't be connected directly to the instances, because the
resources are expanded when ReferenceTransformer is run. Look up
references by the resource type instead.
Although there isn't really a good reason why there should be no schema
in practice, it's better for us not to crash right now while we're still
updating all of the callers (mostly tests) to make schema available.
This was trying to test gathering input from a default and an alias
provider configuration, but it was incorrectly setting the provider ref
on the instance that was supposed to belong to the alias. This was working
before because Terraform would gather input from the aliased provider
anyway, but now the invalid "alias" argument in the resource is producing
a validation error.
This doesn't actually make the test work again yet, because we still have
provider input disabled at this time, pending a forthcoming change to
how provider input is handled.
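For reference, the corrected shape of such a fixture: the aliased
configuration is selected with the provider meta-argument on the resource,
not an "alias" argument (the names here are illustrative):

    provider "aws" {
      region = "us-west-2"
    }

    provider "aws" {
      alias  = "east"
      region = "us-east-1"
    }

    resource "aws_instance" "from_alias" {
      # The provider meta-argument selects the aliased configuration;
      # "alias" is not a valid argument inside a resource block.
      provider = aws.east
    }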
This test was previously not setting InputModeVarUnset, causing us to
overwrite the "amis" map that _is_ already set. This worked before because
we used to treat the empty result as an empty map and then merge it with
the given value, but since we no longer do that merging we were ending up
with an empty map after input.
Since the intent of this test is to see that the "foo" variable gets
populated by input, here we add InputModeVarUnset which then matches how
the input walk is triggered by the "real" codepath in the local backend.
This also includes some updates to make the test fixture v0.12-idiomatic
(applied after it was seen to work with the old fixture) and to properly
handle the "diags" return value from the various context methods.
This attribute is referenced in order to include a computed value into
another resource, and so it must be present in the schema so that it can
be properly resolved.
This requires making the "components" object available to the resource
node so it can be used during DynamicExpand. It also involved splitting
the provisioner schema attachment into a separate interface from
GraphNodeProvisionerConsumer so that it can now be handled within
AttachSchemaTransformer, along with all of the other schema attachment
steps.
The initial rework of this function to support traversals didn't correctly
handle the "all" case, due to a logic error where the ignoreAll branch
could be visited only if ignoreChanges was non-empty, even though the two
are mutually exclusive in practice.
Now we process ignoreAll separately from ignoreChanges, and invert the
two loops so that we will visit all attributes regardless of what is
in the ignoreChanges slice.
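In configuration terms, the two cases that now get handled separately look
roughly like this (the resource details are illustrative):

    resource "aws_instance" "example" {
      ami           = "ami-0123456"
      instance_type = "t2.micro"

      lifecycle {
        # Either ignore specific attributes...
        ignore_changes = [ami]

        # ...or ignore all of them; the two forms are mutually exclusive:
        # ignore_changes = all
      }
    }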
Prior to our v0.12 changes this test was confusingly using an attribute
named "set", but assigning a map to it. The expected test result suggested
that it was actually expecting legacy HCL's weird interpretation of a
single map as a list of maps, and so to retain the intent of the test here
(in spite of the contrary name) we type "set" as list of map of string,
update the fixture to _actually_ be a list of maps, and then we get the
expected test result.
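The reworked fixture therefore assigns something of this shape (a sketch;
the real fixture's attribute names and values may differ):

    resource "test_resource" "foo" {
      # "set" is typed as list of map of string in the mock schema, so the
      # fixture now assigns an actual list of maps rather than a single map.
      set = [
        {
          a = "1"
          b = "2"
        },
      ]
    }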
Update test fixtures to work in our new world.
This is mostly swapping out attribute names for ones that are in the
schema, adding Providers to states, and updating the test fixture
configurations.