This test was intended to exercise gathering input from both a default and an
aliased provider configuration, but it was incorrectly setting the provider
reference on the instance that was supposed to belong to the alias. This was working
before because Terraform would gather input from the aliased provider
anyway, but now the invalid "alias" argument in the resource is producing
a validation error.
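For illustration only, a fixture of roughly the intended shape might look like
this when embedded as a Go raw string (provider and resource names here are
made up, not taken from the real fixture):

    // Sketch of the corrected fixture: the aliased configuration is chosen
    // with the "provider" meta-argument on the resource, not an "alias"
    // argument of its own.
    const providerAliasInputConfig = `
    provider "aws" {
      # default configuration; input gathered here
    }

    provider "aws" {
      alias = "alternate"  # aliased configuration; input gathered here too
    }

    resource "aws_instance" "default" {
      # implicitly uses the default "aws" configuration
    }

    resource "aws_instance" "alternate" {
      provider = "aws.alternate"
    }
    `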
This doesn't actually make the test work again yet, because we still have
provider input disabled at this time, pending a forthcoming change to
how provider input is handled.
This test was previously not setting InputModeVarUnset, causing us to
overwrite the "amis" map that _is_ already set. This worked before because
we used to treat the empty result as an empty map and then merge it with
the given value, but since we no longer do that merging we ended up with an
empty map after input.
Since the intent of this test is to see that the "foo" variable gets
populated by input, here we add InputModeVarUnset which then matches how
the input walk is triggered by the "real" codepath in the local backend.
This also includes some updates to make the test fixture v0.12-idiomatic
(applied after it was seen to work with the old fixture) and to properly
handle the "diags" return value from the various context methods.
This attribute is referenced in order to include a computed value in another
resource, and so it must be present in the schema so that the reference can be
properly resolved.
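For example, in a configschema-based mock the attribute would be declared
something like this (the attribute name is hypothetical; uses the configschema
and cty packages):

    schema := &configschema.Block{
        Attributes: map[string]*configschema.Attribute{
            // The attribute must exist in the schema so that a reference to
            // it from another resource can resolve to a computed value.
            "compute_value": {Type: cty.String, Computed: true},
        },
    }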
This requires making the "components" object available to the resource
node so it can be used during DynamicExpand. It also involves splitting
the provisioner schema attachment into a separate interface from
GraphNodeProvisionerConsumer so that it can now be handled within
AttachSchemaTransformer, along with all of the other schema attachment
steps.
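A rough sketch of the split described above; the interface and method names
are written from memory and should be treated as approximate:

    // The consumer interface stays focused on declaring which provisioners
    // a node uses...
    type GraphNodeProvisionerConsumer interface {
        ProvisionedBy() []string
    }

    // ...while schema attachment lives in its own interface, so that
    // AttachSchemaTransformer can handle it alongside the provider and
    // resource type schema attachment steps.
    type GraphNodeAttachProvisionerSchema interface {
        AttachProvisionerSchema(name string, schema *configschema.Block)
    }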
The initial rework of this function to support traversals didn't correctly
handle the "all" case, due to a logic error where the ignoreAll branch
could be visited only if ignoreChanges was non-empty, even though the two
are mutually exclusive in practice.
Now we process ignoreAll separately from ignoreChanges, and invert the
two loops so that we will visit all attributes regardless of what is
in the ignoreChanges slice.
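A simplified sketch of the corrected control flow over flatmap-style attribute
maps; function and variable names are mine, and it assumes the standard
strings package:

    // ignoreAll is handled up front; otherwise the outer loop visits every
    // attribute and the inner loop consults ignoreChanges, so all attributes
    // are visited even when ignoreChanges is empty.
    func processIgnores(prior, proposed map[string]string, ignoreAll bool, ignoreChanges []string) map[string]string {
        result := make(map[string]string, len(proposed))
        if ignoreAll {
            for k, v := range prior {
                result[k] = v
            }
            return result
        }
        for k, v := range proposed {
            keep := v
            for _, prefix := range ignoreChanges {
                if strings.HasPrefix(k, prefix) {
                    if priorV, ok := prior[k]; ok {
                        keep = priorV // retain the prior value for ignored attributes
                    }
                    break
                }
            }
            result[k] = keep
        }
        return result
    }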
Prior to our v0.12 changes this test was confusingly using an attribute
named "set", but assigning a map to it. The expected test result suggested
that it was actually expecting legacy HCL's weird interpretation of a
single map as a list of maps, and so to retain the intent of the test here
(in spite of the contrary name) we type "set" as list of map of string,
update the fixture to _actually_ be a list of maps, and then we get the
expected test result.
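In schema terms that means an attribute entry roughly like this (a sketch of
the mock schema, using cty types):

    // Despite the name, "set" is typed as a list of maps of string, which
    // matches the updated fixture value, e.g. set = [{ key = "value" }]
    "set": {
        Type:     cty.List(cty.Map(cty.String)),
        Optional: true,
    },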
Update test fixtures to work in our new world.
This is mostly changing out attribute names for those in the schema,
adding Providers to states, and updating the test-fixture
configurations.
This includes an upstream fix to the hcldec.Variables function, correcting its
behavior when dealing with specs that contain DefaultSpec and other similar
wrapper specs.
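For context, the corrected behavior matters for specs shaped like this (a
sketch using the hcldec API, with an invented attribute and default, and
assuming some hcl.Body named body):

    // Variables must descend through wrapper specs such as DefaultSpec so
    // that references inside the wrapped specs are reported.
    spec := &hcldec.DefaultSpec{
        Primary: &hcldec.AttrSpec{Name: "ami", Type: cty.String},
        Default: &hcldec.LiteralSpec{Value: cty.StringVal("ami-12345")},
    }
    traversals := hcldec.Variables(body, spec)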
This includes a number of upstream fixes, but in particular fixes a race
on evaluating the same splat expression concurrently for multiple separate
EvalContexts.
Previously this test's fixture was depending on the fact that attribute
access of an unknown value would always succeed and return another unknown
value, but under the new language interpreter an unknown value still
retains type information and so accessing this "bar" attribute would fail
the semantic check.
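A small illustration of the difference using cty directly (the object type is
invented):

    // An unknown value still carries its type, so attribute access is
    // checked against that type instead of silently returning unknown.
    objTy := cty.Object(map[string]cty.Type{"foo": cty.String})
    v := cty.UnknownVal(objTy)

    _ = v.GetAttr("foo") // cty.UnknownVal(cty.String); "foo" exists in the type
    // v.GetAttr("bar")  // would panic: the object type has no attribute "bar"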
We also have to fuss a bit here to work around the limitations of the
testDiffFn implementation, which doesn't have enough context to understand
that "list" in the ResourceConfig is the same as "list.#" in its result.
Since this part of the provider API will change soon to use cty values
directly, this change just accepts a slightly odd-looking diff in the meantime,
with both "list" and "list.#" populated.
Previously we only handled the "count cannot be computed" check during
validate, leaving other walks to just report "a number is required"
(because "unknown" was represented as a special string) but now we have
unknown as first-class we handle it during all walks, and so this error
message is now the more appropriate one saying that the value is not
yet known.
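A minimal sketch of the idea; the function and message wording are
illustrative, not the real implementation (assumes the cty and errors
packages):

    // With unknown values first-class, the same check can run on every walk
    // rather than only during validate.
    func checkCountKnown(count cty.Value) error {
        if !count.IsKnown() {
            return errors.New("invalid count: value is not yet known")
        }
        return nil
    }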
The behavior of the "count" meta-argument has changed so that its presence
(rather than its value) chooses whether the associated resource has
indexes. As a consequence, these tests which set count = 1 now produce
a single indexed instance with index 0.
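In terms of the addrs package (my illustration, not code from these tests),
the difference shows up in the instance address:

    r := addrs.Resource{
        Mode: addrs.ManagedResourceMode,
        Type: "aws_instance",
        Name: "foo",
    }
    fmt.Println(r.Instance(addrs.NoKey))     // aws_instance.foo     (no count set)
    fmt.Println(r.Instance(addrs.IntKey(0))) // aws_instance.foo[0]  (count = 1)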
Previously Terraform Core was unaware of the structure of a resource type
schema and so the strange behavior of our testDiffFn caused some
attributes to not appear at all in the result. With core now more aware,
it "fills in" these missing items before calling, and as a result they
now appear in the diff even with the testDiffFn.
In real code, where helper/schema is constructing diffs, this situation
doesn't arise because the framework always produces schema-compatible
diffs.
The diff stringer now uses the standard serialization of a module address,
so we need to update the golden representations to restore their
associated tests to passing.
This doesn't yet include test updates, since there are problems in core
currently blocking these tests from running. The tests will therefore be
updated in a subsequent commit.
The old testDiffFn used the raw config to dynamically set computed values
in the diff. Since the schema now defines what values should be there,
all test diffs end up with unknown computed values. Filter these out by
looking for a value set to "compute".
Previously unknown values were round-tripping through flatmap and coming
out as known strings containing the UnknownVariableValue. (The classic bug
that, ironically, was one of the big reasons to write cty!)
Now we properly handle unknown values in both directions: going into
flatmap we write UnknownVariableValue at the appropriate key (as the count
for sequences or maps) and then coming out of flatmap we turn
UnknownVariableValue back into a cty unknown value of the requested type.
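Conceptually (the sentinel value is the real one; the attribute names and
helper are invented for illustration):

    // An unknown collection is written into flatmap form by putting the
    // sentinel at its count key; reading back out, that sentinel becomes a
    // cty unknown value of the requested type rather than a known string.
    func exampleFlatmap() map[string]string {
        const UnknownVariableValue = "74D93920-ED26-11E3-AC10-0800200C9A66"
        return map[string]string{
            "name":              "example",
            "security_groups.#": UnknownVariableValue, // unknown list: unknown count
            "tags.%":            UnknownVariableValue, // unknown map: unknown count
        }
    }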
This was assuming our old practice of a slice starting with the string
"root". We'll normalize here and then stringify the result to ensure that
we get a string consistent with what's used elsewhere.
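A hedged sketch of that normalization using the addrs package (the helper name
is mine):

    // Strip the legacy leading "root" element and rebuild the path as an
    // addrs.ModuleInstance, whose String result (e.g. "module.child") matches
    // the representation used elsewhere.
    func normalizeLegacyModulePath(legacy []string) addrs.ModuleInstance {
        if len(legacy) > 0 && legacy[0] == "root" {
            legacy = legacy[1:]
        }
        path := addrs.RootModuleInstance
        for _, name := range legacy {
            path = path.Child(name, addrs.NoKey)
        }
        return path
    }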
This is primarily aimed at fixing some of the context plan tests.
Most changes here just introduce some custom schema into the test mocks.
In some cases, a fixture is lightly updated to reflect more modern assumptions.
The test for accessing count.index in a resource block without count set
is removed, because that is no longer valid under the new language
implementation.
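The mock schema additions typically take this shape (field names recalled from
the test helpers of this era, so treat them as approximate):

    p := testProvider("aws")
    p.GetSchemaReturn = &ProviderSchema{
        ResourceTypes: map[string]*configschema.Block{
            "aws_instance": {
                Attributes: map[string]*configschema.Attribute{
                    "ami": {Type: cty.String, Optional: true},
                    "id":  {Type: cty.String, Computed: true},
                },
            },
        },
    }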