One quirky aspect of our import feature is that we allow the importer to
produce additional resources alongside the one that was imported, such as
creating separate rules for each rule of an imported security group.
Providers need to be able to set the types of these other resources since
they may not match the "main" resource type. They do this by calling
ResourceData.SetType, which in turn sets InstanceState.Ephemeral.Type.
In our shims here we therefore need to copy that out into our new TypeName
field so that the new core import code can see it and create the right
type in the state.
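A minimal sketch of that shim step, assuming illustrative names for the surrounding import-response plumbing (the loop shape and the shimInstanceState helper here are hypothetical):

```go
for _, is := range importedStates {
	typeName := req.TypeName // default to the "main" resource type
	if is.Ephemeral.Type != "" {
		// The provider called ResourceData.SetType for this resource,
		// so surface that type for the new core import code.
		typeName = is.Ephemeral.Type
	}
	resp.ImportedResources = append(resp.ImportedResources, providers.ImportedResource{
		TypeName: typeName,
		State:    shimInstanceState(is), // hypothetical conversion helper
	})
}
```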
Testing this required a minor change to the test harness to allow the
ImportStateCheck function to see the resource type.
Historically helper/schema did not support non-primitive map attributes
because they cannot be represented unambiguously in flatmap. When we
initially implemented CoreConfigSchema here we mapped that situation to
a nested block of mode NestingMap, even though that'd never worked until
now, assuming that it'd be harmless because providers wouldn't be using
it.
It turns out that some providers are, in fact, incorrectly populating
a TypeMap schema with Elem: &schema.Resource, apparently under the false
assumption that it would constrain the keys allowed in the map. In
practice, helper/schema has just been ignoring this and treating such
attributes as map of string. (#20076)
In order to preserve the behavior of these existing incorrectly-specified
attribute definitions, here we mimic the helper/schema behavior by
presenting as an attribute of type map(string).
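For illustration, a hypothetical attribute using the incorrect pattern looks like this; helper/schema ignores the Elem and treats it as a plain map of string:

```go
"tags": {
	Type:     schema.TypeMap,
	Optional: true,
	// Incorrect: a *schema.Resource Elem does not constrain map keys.
	Elem: &schema.Resource{
		Schema: map[string]*schema.Schema{
			"name": {Type: schema.TypeString, Optional: true},
		},
	},
},
```

The conventional spelling for a map of strings is `Elem: &schema.Schema{Type: schema.TypeString}`, which matches how such attributes now present in the core schema.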
These attributes have also been shown in some documentation as nested
blocks (with no equals sign), so that'll need to be fixed in user
configurations as they upgrade to Terraform 0.12. However, the existing
upgrade tool rules will take care of that as a natural consequence of the
name being indicated as an attribute in the schema, rather than as a block
type.
This fixes #20076.
Our new diff handling no longer requires stripping the empty diffs out,
and providers may be relying on some of the empty-value quirks in
helper/schema.
Due to various imprecisions in the old SDK implementation, applying the
generated diff can potentially make changes to the data structure that
have no real effect, such as replacing an empty list with a null list or
vice-versa.
Although we can't totally eliminate such diff noise, here we attempt to
avoid it in situations where there are _only_ meaningless changes -- where
the prior state and planned state are equivalent -- by just echoing back
the prior state verbatim to ensure that Terraform will treat it as a noop
change.
If there _are_ some legitimate changes then the result may still contain
meaningless changes alongside it, but that is just a cosmetic problem for
the diff renderer, because the meaningless changes will be ignored
altogether during a subsequent apply anyway. The primary goal here is just
to ensure we can converge on a fixpoint when there are no explicit changes
in the configuration.
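A sketch of that short-circuit, where valuesEquivalent stands in for an equivalence check that treats null and empty containers as interchangeable (both helper names are illustrative):

```go
plannedVal, err := applyDiff(priorVal, diff) // illustrative helper
if err == nil && valuesEquivalent(priorVal, plannedVal) {
	// Only meaningless changes (e.g. empty list vs. null list):
	// echo back the prior state verbatim so core sees a no-op.
	plannedVal = priorVal
}
```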
This adds unexpected values in some cases, and since the case this
handles is only within set objects, we'll deal with this when tackling
the sets themselves.
Cycle through the shim operations after Apply, to ensure that we can
converge on a stable value for Plan. While the shims produce valid
values in both directions, helper/schema sometimes does not agree on
which containers should be empty or null.
There are a few constructs from 0.11 and prior that cause 0.12 parsing to
fail altogether, which previously created a chicken/egg problem because
we need to install the providers in order to run "terraform 0.12upgrade"
and thus fix the problem.
This changes "terraform init" to use the new "early configuration" loader
for module and provider installation. This is built on the more permissive
parser in the terraform-config-inspect package, and so it allows us to
read out the top-level blocks from the configuration while accepting
legacy HCL syntax.
In the long run this will let us do version compatibility detection before
attempting a "real" config load, giving us better error messages for any
future syntax additions, but in the short term the key thing is that it
allows us to install the dependencies even if the configuration isn't
fully valid.
Because backend init still requires full configuration, this introduces a
new mode of terraform init where it detects heuristically if it seems like
we need to do a configuration upgrade and does a partial init if so,
before finally directing the user to run "terraform 0.12upgrade" before
running any other commands.
The heuristic here is based on two assumptions:
- If the "early" loader finds no errors but the normal loader does, the
configuration is likely to be valid for Terraform 0.11 but not 0.12.
- If there's already a version constraint in the configuration that
excludes Terraform versions prior to v0.12 then the configuration is
probably _already_ upgraded and so it's just a normal syntax error,
even if the early loader didn't detect it.
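In code, the heuristic amounts to roughly the following (all names illustrative):

```go
earlyOK := !earlyDiags.HasErrors() // permissive terraform-config-inspect load
fullOK := !fullDiags.HasErrors()   // strict 0.12 loader
if earlyOK && !fullOK && !requiresAtLeast012(versionConstraints) {
	// Probably a valid 0.11 configuration: do a partial init, then
	// direct the user to run "terraform 0.12upgrade".
}
```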
Once the upgrade process is removed in 0.13.0 (users will be required to
go stepwise 0.11 -> 0.12 -> 0.13 to upgrade after that), some of this can
be simplified to remove that special mode, but the idea of doing the
dependency version checks against the liberal parser will remain valuable
to increase our chances of reporting version-based incompatibilities
rather than syntax errors as we add new features in future.
Provider tests often rely on checking values contained within sets, by
directly accessing their flatmapped representation. In order to provide
the test harness with the expected set hashes, the sets must be
generated by the schema.Resource itself.
During the test we now build a fixed map of the providers, which should
only contain schema.Provider instances, and pass them into each
TestStep. The individual schema.Resource instances can then be pulled
from the providers, and used to recreate the state from the cty.Value
returned by the core operations.
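A sketch of that recreation; ShimInstanceStateFromValue is the helper/schema hook for this, though the surrounding lookup is simplified here:

```go
p := testProviders["example"].(*schema.Provider) // the fixed map built for the test
res := p.ResourcesMap[resourceType]
// Rebuild the flatmap state using the resource's own schema so the
// set hashes match what the test's flatmap assertions expect.
is, err := res.ShimInstanceStateFromValue(newVal)
```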
Stricter type handling in the new shims may add empty containers into
the state where they were previously elided. Since the detection of
missing and empty containers in the legacy state was never reliable,
allow TestCheckNoResourceAttr to succeed if the key is a container count
index and the value is "0".
Missing containers were often erroneously kept in the state, but since
the addition of the new provider shims, they can often be correctly
eliminated. There are however many tests that check for a "0" count in
the flatmap state when there shouldn't be a key at all. This addition
looks for a container count key and "0" pair, and allows for the key to
be missing.
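A sketch of the relaxed check; flatmap uses ".#" for list/set counts and ".%" for map counts:

```go
if v, ok := attrs[key]; ok {
	isCount := strings.HasSuffix(key, ".#") || strings.HasSuffix(key, ".%")
	if isCount && v == "0" {
		return nil // a zero count is equivalent to a missing key
	}
	return fmt.Errorf("attribute %q found when not expected", key)
}
return nil
```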
There may be some tests negatively affected by this which were
legitimately checking for empty containers, but those were also not
reliably detected, and there should be far fewer tests involved.
Zero values and empty containers can be lost during the shimming
process, and during the provider's Apply step.
If we have known zero value containers and primitives in the source,
which appear as null values in the destination, we copy over the zero
value. Sets (and lists to an extent) are more difficult, since their
before and after indexes may not correlate. In that case we take the
entire container if it's wholly known, expecting the provider to have
correctly handled the value.
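A sketch of the copy-over rule in terms of cty values (src and dst are illustrative):

```go
// The provider returned null where the source had a known zero value:
// restore the empty container or zero-value primitive.
if dst.IsNull() && src.IsKnown() && !src.IsNull() {
	if src.Type().IsPrimitiveType() ||
		(src.CanIterateElements() && src.LengthInt() == 0) {
		dst = src
	}
}
```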
Due to incorrect use of a loop iterator variable inside a closure, all of
the given providers were ending up with the same factory function.
Now we copy the factory function to a local within the loop first so that
each iteration has its own variable.
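The pattern, sketched (with Go's pre-1.22 loop semantics the closure otherwise captures a single shared variable; surrounding names are illustrative):

```go
for name, factory := range providerFactories {
	factory := factory // the fix: give each closure its own copy
	resolved[name] = func() (terraform.ResourceProvider, error) {
		// Without the copy above, every entry returned the factory
		// from the final loop iteration.
		return factory()
	}
}
```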
This is the second round of similar bugs in this function, so we'll also
add a test case for it to reduce the risk of future regressions given that
most real callers don't exercise this with multiple providers in practice.
We use a shim to convert from the new state model back to the old because
the provider test API is still using the old API throughout. However, the
shim was not preserving the schema version recorded in the new-style state
and so a round-trip through this shim would cause the schema versions to
all revert to zero.
This can cause trouble with the destroy phase of provider tests because
(for legacy API reasons) we round-trip from old state back to new again
before the destroy phase, thus causing the providers to try to upgrade
from state version zero even though the data was already at the latest
version, which can cause errors because state upgrades are generally not
idempotent.
With the introduction of explicit "null" in 0.12 it's possible for a value
that is unknown during plan to become a known null during apply, so we
need to slightly weaken our validation rules to accommodate that, in
particular skipping the validation of conflicting attributes if the result
could potentially be valid after the unknown values become known.
This change is in the codepath that is common to both 0.12 and 0.11
callers, but that's safe because 0.11 re-runs validation during the apply
step and so will still catch problems here, albeit in the apply step
rather than in the plan step, thus matching the 0.12 behavior. This new
behavior is a superset of the old in the sense that everything that was
valid before is still valid.
The implementation here also causes us to skip all other validation for
an attribute whose value is unknown. Most of the downstream validation
functions handle this directly anyway, but again this doesn't add any new
failure cases, and should clean up some of the rough edges we've seen with
unknown values in 0.11 once people upgrade to 0.12-compatible providers.
Any issues we now short-circuit during planning will still be caught
during apply.
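A sketch of the short-circuit; in helper/schema's shimmed config an unknown is represented by the hcl2shim sentinel value (the return shape here is approximate):

```go
// The value may become a known null at apply time, which could make
// an otherwise-conflicting combination valid, so defer all further
// validation of this attribute to the apply step.
if raw == hcl2shim.UnknownVariableValue {
	return nil, nil
}
```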
While working on this I found that the existing "Not a list" test was not
actually testing the correct behavior, so this also includes a tweak to
that to ensure that it really is checking the "should be a list" path
rather than the "cannot be set" codepath it was inadvertently testing
before.
This causes the output to include additional helpful context such as
the values of variables referenced in the config, etc. The output is in
the same format as normal Terraform CLI error output, though we don't
retain a source code cache in this codepath so it will not include a
source code snippet.
Previously the test harness was preloading schemas from the providers
before running any test steps.
Since terraform.NewContext already deals with loading provider schemas,
we can instead just use the schemas it loaded for our shimming needs,
avoiding the need to reimplement the schema lookup behavior and thus
the need to create a throwaway provider instance with which to do it.
Previously we were running the factory function only once when
constructing the provider resolver, which means that all contexts created
from that resolver share the same provider instance.
Instead now we will call the given factory function once for each
instantiation, ensuring that each caller ends up with a separate object
as would be the case in real-world use.
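Schematically (both resolver helpers here are illustrative):

```go
// Before (buggy): the factory ran once while building the resolver,
// so every context shared one provider instance.
//   shared, _ := factory()
//   resolver := staticResolver(shared)

// After: defer the call so each instantiation gets its own instance.
resolver := resolverFunc(func() (terraform.ResourceProvider, error) {
	return factory()
})
```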
The added test in this commit, without the fix, will make d.Set return
the following error:
`Invalid address to set: []string{"ports", "0", "set"}`
This was due to the fact that setSet in field_writer_map tried to
convert a slice into a set by creating a temporary set schema and calling
writeField on that with the address (`[]string{"ports", "0", "set"}` in
this case). However, the temporary schema described only the set and not
the whole schema implied by the address, so the address should have been
`[]string{"set"}` in order to align with the schema.
This commit adds another variable there (tempAddr) which contains only
the last entry of the address, i.e. the set key, so that it matches the
created schema.
This commit potentially fixes the problem described in #16331.
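A simplified sketch of the fix inside setSet (surrounding details elided and names approximate):

```go
// The temporary schema describes only the set itself, so trim the
// address to its final element to match, e.g.
// []string{"ports", "0", "set"} -> []string{"set"}.
tempAddr := addr[len(addr)-1:]
if err := writeTo.WriteField(tempAddr, tempValue, &tempSchema); err != nil {
	return err
}
```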
Any state modifying functions can only be run once during the plan-apply
cycle. When regenerating the Diff during ApplyResourceChange, strip out
all StateFunc and CustomizeDiff functions from the schema.
The NewExtra diff field was where config data that was modified by a
StateFunc was stored, and it needs to be maintained between plan and apply.
During PlanResourceChange, store any NewExtra data from the Diff in the
PlannedPrivate data, and re-insert the NewExtra data into the Diff
generated during ApplyResourceChange.
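A sketch of the round-trip (the JSON encoding and key layout are illustrative, not necessarily what the shim actually uses):

```go
// PlanResourceChange: stash each attribute's NewExtra in the private
// blob that core hands back to us at apply time.
newExtra := map[string]interface{}{}
for k, attr := range diff.Attributes {
	if attr.NewExtra != nil {
		newExtra[k] = attr.NewExtra
	}
}
plannedPrivate, _ := json.Marshal(newExtra)

// ApplyResourceChange: re-insert into the regenerated diff.
_ = json.Unmarshal(plannedPrivate, &newExtra)
for k, v := range newExtra {
	if attr, ok := diff.Attributes[k]; ok {
		attr.NewExtra = v
	}
}
```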
Errors were being ignored with the intention that they would be caught
later in validation, but it turns out we need to catch those earlier.
The legacy schemas also allowed providers to set an empty string for a
bool value, which we need to handle here, since it's not being handled
from user input like a normal config value.
The rest of Terraform is still using uint64 for this in various spots, but
we'll update that gradually later. We use int64 here because that matches
what's used in our protobuf definition, and unsigned integers are not
portable across all of the protobuf target languages anyway.
When normalizing flatmapped containers, compare the attributes to the
prior state and preserve pre-existing zero-length or unknown values. A
zero-length value that was previously unknown is preserved as a
zero-length value, as that may have been computed as such by the
provider.
Since the SDK's schema system conflates attributes and nested blocks, it's
possible to state some nonsensical schema situations such as:
- A nested block is optional but has MinItems > 0
- A nested block is entirely computed but has MinItems or MaxItems set
Both of these weird situations are handled here in the same way that the
existing helper/schema validation code would've handled them: by
effectively disabling the MinItems/MaxItems checks where they would've
been ignored before.
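Concretely, the conversion zeroes out the limits in those cases (a sketch; field names approximate the converted block schema):

```go
if s.Optional {
	// helper/schema never enforced MinItems on optional blocks.
	block.MinItems = 0
}
if s.Computed && !s.Optional {
	// Wholly-computed blocks aren't counted against user config.
	block.MinItems = 0
	block.MaxItems = 0
}
```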
The SDK has a mechanism that effectively makes it possible to declare an
attribute as being _conditionally_ required, which is not a concept that
Terraform Core is aware of.
Since this mechanism is in practice only used for a small UX improvement
in prompting for these values interactively when the environment variable
is not set, we avoid here introducing all of this complexity into the
plugin protocol by just having the provider selectively modify its schema
if it detects that such an attribute might be set dynamically.
This then prevents Terraform Core from validating the presence of the
argument or prompting for a new value for it, allowing the null value to
pass through into the provider so that the default value can be generated
again dynamically.
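A sketch of that selective modification; in helper/schema the signal is a Required attribute that also carries a DefaultFunc (the loop shape is illustrative):

```go
// Served-schema adjustment (sketch): demote the attribute so core
// neither validates its presence nor prompts for it, letting null
// flow through for the provider to default dynamically.
for _, s := range providerSchema {
	if s.Required && s.DefaultFunc != nil {
		s.Required = false
		s.Optional = true
	}
}
```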
This is a kinda-kludgey solution which we're accepting here because the
alternative would be a much-more-complex two-pass decode operation within
Core itself, and that doesn't seem worth it.
This fixes #19139.
The main significant change here is that the package name for the proto
definition is "tfplugin5", which is important because this name is part
of the wire protocol for references to types defined in our package.
Along with that, we also move the generated package into "internal" to
make it explicit that importing the generated Go package from elsewhere is
not the right approach for externally-implemented SDKs, which should
instead vendor the proto definition they are using and generate their
own stubs to ensure that the wire protocol is the only hard dependency
between Terraform Core and plugins.
After this is merged, any provider binaries built against our
helper/schema package will need to be rebuilt so that they use the new
"tfplugin5" package name instead of "proto".
In a future commit we will include more elaborate and organized
documentation on how an external codebase might make use of our RPC
interface definition to implement an SDK, but the primary concern here
is to ensure we have the right wire package name before release.