This makes some slight adjustments to the shape of the schema we
present to Terraform Core without affecting how it is consumed by the
SDK, and thus by the provider. The mechanism is designed specifically to
leave the SDK's own interpretation of the schema untouched, so that
prior behavior is preserved when running in Terraform v0.11 mode.
This also includes a new rule that Computed-only (i.e. not also Optional)
schemas _always_ map to attributes, because that is a better mapping of
the intent: they are object values to be used in expressions. Nested
blocks conceptually represent nested objects that are in some sense
independent of what they are embedded in, and so they cannot themselves be
computed.
This allows a provider developer slightly more control over how an SDK
schema is mapped into the Terraform configuration language, overriding
some default assumptions.
ConfigMode overrides the default assumption that a schema with
an Elem of type *Resource is to be mapped to configuration as a nested
block, allowing mapping as an attribute containing an object type instead.
These behaviors only apply when a provider is being used with Terraform
v0.12 or later. They are ignored altogether in Terraform v0.11 mode, to
preserve compatibility. We are adding these primarily to allow the v0.12
version of a resource type schema to be specified to match the prevailing
usage of it in existing configurations, in situations where the default
mapping to v0.12 concepts is not appropriate.
This commit adds only the fields themselves and the InternalValidate rules
for them. A subsequent commit for Terraform v0.12 will add the behavior
as part of the protocol version 5 shim layer.
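A minimal sketch of how a provider might opt in, as a fragment of a hypothetical resource's schema map (the "setting" element and its fields are made up for illustration):

```go
// By default an Elem of *schema.Resource maps to a nested block, but
// ConfigMode forces mapping as an attribute with an object type.
"setting": {
	Type:       schema.TypeList,
	Optional:   true,
	Computed:   true,
	ConfigMode: schema.SchemaConfigModeAttr, // attribute, not nested block
	Elem: &schema.Resource{
		Schema: map[string]*schema.Schema{
			"name":  {Type: schema.TypeString, Required: true},
			"value": {Type: schema.TypeString, Required: true},
		},
	},
},
```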
When the user aborts input, it may end up as an unknown value, which
needs to be converted to null for PrepareConfig.
Allow PrepareConfig to accept null config values in order to fill in
missing defaults.
This mirrors the change made for providers, so that default values can
be inserted into the config by the backend implementation. This is only
the interface and method name changes, it does not yet add any default
values.
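A simplified sketch of the shape this gives the backend interface (only the configuration-related methods are shown; the full interface lives in the backend package):

```go
// PrepareConfig mirrors the provider-side change: it may receive a null
// or partial config value and is expected to fill in defaults rather
// than reject it, before the value is passed to Configure.
type Backend interface {
	ConfigSchema() *configschema.Block
	PrepareConfig(cty.Value) (cty.Value, tfdiags.Diagnostics)
	Configure(cty.Value) tfdiags.Diagnostics
}
```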
This comment seems to imply that you can put CRUD functions on nested schema.Resource objects.
The comment goes back to the first commit to this file, but AFAICT this functionality has never been implemented.
If set elements are computed, we can't be certain that they are actually
equal. Catch identical computed set hashes when they are added to the
set, and alter the set key slightly to keep the set counts correct.
In previous versions the interpolation string would be included in the
set, and different string values would cause the set to hash
differently, so this change is only activated for the new protocol.
Sets rely on diffs being complete for all elements, even when they are
unchanged. When encountering a DiffSuppressFunc inside a set the diffs
were being dropped entirely, possibly causing set elements to be lost.
The new decoder is more precise, and unpacks the timeout block into a
single map, which ResourceTimeout.ConfigDecode was updated to handle.
However, we still need to work with legacy versions of Terraform, which
use the old decoder.
Historically helper/schema did not support non-primitive map attributes
because they cannot be represented unambiguously in flatmap. When we
initially implemented CoreConfigSchema here we mapped that situation to
a nested block of mode NestingMap, even though that'd never worked until
now, assuming that it'd be harmless because providers wouldn't be using
it.
It turns out that some providers are, in fact, incorrectly populating
a TypeMap schema with Elem: &schema.Resource, apparently under the false
assumption that it would constrain the keys allowed in the map. In
practice, helper/schema has just been ignoring this and treating such
attributes as map of string. (#20076)
In order to preserve the behavior of these existing incorrectly-specified
attribute definitions, here we mimic the helper/schema behavior by
presenting as an attribute of type map(string).
These attributes have also been shown in some documentation as nested
blocks (with no equals sign), so that'll need to be fixed in user
configurations as they upgrade to Terraform 0.12. However, the existing
upgrade tool rules will take care of that as a natural consequence of the
name being indicated as an attribute in the schema, rather than as a block
type.
This fixes #20076.
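The pattern being preserved looks roughly like this (a hypothetical "tags" attribute for illustration):

```go
// Incorrectly-specified map attribute: the nested Resource does not
// constrain the map's keys; helper/schema has always ignored it and
// treated the attribute as a plain map of strings, which is the
// behavior mimicked here.
"tags": {
	Type:     schema.TypeMap,
	Optional: true,
	Elem: &schema.Resource{ // effectively ignored
		Schema: map[string]*schema.Schema{
			"name": {Type: schema.TypeString, Optional: true},
		},
	},
},
```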
With the introduction of explicit "null" in 0.12 it's possible for a value
that is unknown during plan to become a known null during apply, so we
need to slightly weaken our validation rules to accommodate that, in
particular skipping the validation of conflicting attributes if the result
could potentially be valid after the unknown values become known.
This change is in the codepath that is common to both 0.12 and 0.11
callers, but that's safe because 0.11 re-runs validation during the apply
step and so will still catch problems here, albeit in the apply step
rather than in the plan step, thus matching the 0.12 behavior. This new
behavior is a superset of the old in the sense that everything that was
valid before is still valid.
The implementation here also causes us to skip all other validation for
an attribute whose value is unknown. Most of the downstream validation
functions handle this directly anyway, but again this doesn't add any new
failure cases, and should clean up some of the rough edges we've seen with
unknown values in 0.11 once people upgrade to 0.12-compatible providers.
Any issues we now short-circuit during planning will still be caught
during apply.
While working on this I found that the existing "Not a list" test was not
actually testing the correct behavior, so this also includes a tweak to
that to ensure that it really is checking the "should be a list" path
rather than the "cannot be set" codepath it was inadvertently testing
before.
The added test in this commit, without the fix, will make d.Set return
the following error:
`Invalid address to set: []string{"ports", "0", "set"}`
This was due to the fact that setSet in field_writer_map tried to
convert a slice into a set by creating a temp set schema and calling
writeField on that with the address (`[]string{"ports", "0", "set"}` in
this case). However, the temp schema was only for the set and not the
whole schema as seen in the address, so it should have been
`[]string{"set"}` to align with the schema.
This commit adds another variable (tempAddr) which contains only the
last entry of the address, i.e. the set key, matching the created
schema.
This commit potentially fixes the problem described in #16331
The rest of Terraform is still using uint64 for this in various spots, but
we'll update that gradually later. We use int64 here because that matches
what's used in our protobuf definition, and unsigned integers are not
portable across all of the protobuf target languages anyway.
Since the SDK's schema system conflates attributes and nested blocks, it's
possible to state some nonsensical schema situations such as:
- A nested block is both optional but has MinItems > 0
- A nested block is entirely computed but has MinItems or MaxItems set
Both of these weird situations are handled here in the same way that the
existing helper/schema validation code would've handled them: by
effectively disabling the MinItems/MaxItems checks where they would've
been ignored before.
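Hypothetical schema fragments showing the two situations described above:

```go
// An optional nested block claiming at least one item; if the block is
// omitted entirely this cannot be enforced, so the check is disabled.
"rule": {
	Type:     schema.TypeList,
	Optional: true,
	MinItems: 1,
	Elem:     &schema.Resource{Schema: map[string]*schema.Schema{}},
},
// A wholly-computed nested block with an item-count constraint that the
// provider alone controls; the constraint is likewise ignored.
"endpoint": {
	Type:     schema.TypeList,
	Computed: true,
	MaxItems: 1,
	Elem:     &schema.Resource{Schema: map[string]*schema.Schema{}},
},
```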
The SDK has a mechanism that effectively makes it possible to declare an
attribute as being _conditionally_ required, which is not a concept that
Terraform Core is aware of.
Since this mechanism is in practice only used for a small UX improvement
in prompting for these values interactively when the environment variable
is not set, we avoid here introducing all of this complexity into the
plugin protocol by just having the provider selectively modify its schema
if it detects that such an attribute might be set dynamically.
This then prevents Terraform Core from validating the presence of the
argument or prompting for a new value for it, allowing the null value to
pass through into the provider so that the default value can be generated
again dynamically.
This is a kinda-kludgey solution which we're accepting here because the
alternative would be a much-more-complex two-pass decode operation within
Core itself, and that doesn't seem worth it.
This fixes #19139.
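The mechanism in question is an attribute declared as Required but carrying a DefaultFunc, as in this hypothetical fragment of a provider schema:

```go
// A "conditionally required" attribute: Terraform Core would normally
// insist on a value, but the DefaultFunc may supply one dynamically, so
// the provider presents the attribute as optional whenever the
// environment variable might provide a value.
"token": {
	Type:        schema.TypeString,
	Required:    true,
	DefaultFunc: schema.EnvDefaultFunc("EXAMPLE_TOKEN", nil),
},
```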
This makes sure the diff is generated with the matching set ids from
helper/schema.
Update the tests to add ID fields to the state, which will exist in
practice, since any state passing through the shims will have the ID
inserted.
Resource timeouts were a separate config block, but did not exist in the
resource schema. Insert any defined timeouts when generating the
configschema.Block so that the fields can be accepted and validated by
core.
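For example, timeouts declared like this on a hypothetical resource now surface in the schema sent to core:

```go
// Declaring operation timeouts; the corresponding "timeouts" block is
// now inserted into the generated configschema.Block so that core can
// accept and validate it.
&schema.Resource{
	Timeouts: &schema.ResourceTimeout{
		Create: schema.DefaultTimeout(10 * time.Minute),
		Delete: schema.DefaultTimeout(30 * time.Minute),
	},
	// ...
}
```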
Use the new SimpleDiff method of the provider so that the diff isn't
altered by ForceNew attributes.
Always set an "id" as RequiresReplace so core knows an instance will be
replaced, even if all ForceNew attributes are filtered out due to
ignore_changes.
Terraform now handles any actual "diffing" of resources, and the existing
Diff functions are only used to shim the schema.Provider to the new
methods. Since Terraform is handling what used to be the Diff, the
provider should no longer modify the diff based on RequiresNew, as that
interferes with the ignore_changes handling.
The "id" field is assumed to always exist, and must have a valid value.
Set "id" to unknown when planning new resource changes to indicate that
it will be computed.
In order to not require state migrations to be supported in both
MigrateState and StateUpgraders, the legacy provider codepath needs to
handle the StateUpgraders transparently during Refresh.
This adds some of the required shim functions to the schema package.
While this further bloats the already huge package, adding the helpers
here was significantly less disruptive than refactoring types into
separate packages to prevent import cycles.
The majority of tests here are directly adapted from existing schema
tests to provide as many known good values to the shims as possible.
It turns out that state upgrades need to be handled differently since
providers are going to be backwards compatible. This means that new
state upgrades may still be stored in the flatmap format when used with
Terraform 0.11. Because we can't account for the specific version which
could produce a legacy state, all future state upgrades need to record
the schema types for decoding.
Rather than defining a single Upgrade function for states, we now have a
list of functions, each of which handle upgrading a specific version to
the next. In practice this isn't much different from the way many
resources implement upgrades themselves, with a separate function for
each version dispatched from the MigrateState function. The only added
burden is the recording of the schema type, and we intend to supply
tools and helper functions to avoid the need to copy the entire
existing schema in all cases.
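A sketch of what a resource using this might look like (the resource, its attributes, and the upgrade logic are hypothetical):

```go
// Upgrades version 1 state to the current version 2. The recorded Type
// allows a legacy flatmap state to be decoded unambiguously first.
&schema.Resource{
	SchemaVersion: 2,
	StateUpgraders: []schema.StateUpgrader{
		{
			Version: 1, // handles states stored at version 1
			Type: cty.Object(map[string]cty.Type{
				"id":   cty.String,
				"name": cty.String,
			}),
			Upgrade: func(rawState map[string]interface{}, meta interface{}) (map[string]interface{}, error) {
				// Example transformation for illustration only.
				rawState["name"] = strings.ToLower(rawState["name"].(string))
				return rawState, nil
			},
		},
	},
}
```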
This is the provider-side UpgradeState implementation for a particular
resource. This new function will be called to upgrade a saved state with
an old schema version to the current schema.
UpgradeState also requires a record of the last schema and version that
could have been stored as a flatmapped state. If the stored state is in
the legacy flatmap format, this will allow the provider to properly
decode the flatmapped state into the expected structure for the new json
encoded state. If the stored state's version is below that of the
LegacySchema.Version value, it will first be processed by the legacy
MigrateState function.
The update protocol shims will also check for this, but eventually
"id" will only be a normal attribute, and we shouldn't have to special
case this.
When converting a legacy schemaMap to a configschema, we need to add
"id" as a required attribute to top-level resources if it's not
declared.
The "id" field will be required to interoperate with the legacy helper
schema, since the presence of an id was used to indicate the existence
of a resource.
The "config" package is no longer used and will be removed as part
of the 0.12 release cleanup. Since configschema is part of the
"new world" of configuration modelling, it makes more sense for
it to live as a subdirectory of the newer "configs" package.
Due to how deeply the configuration types go into Terraform Core, there
isn't a great way to switch out to HCL2 gradually. As a consequence, this
huge commit gets us from the old state to a _compilable_ new state, but
does not yet attempt to fix any tests and has a number of known missing
parts and bugs. We will continue to iterate on this in forthcoming
commits, heading back towards passing tests and making Terraform
fully-functional again.
The three main goals here are:
- Use the configuration models from the "configs" package instead of the
older models in the "config" package, which is now deprecated and
preserved only to help us write our migration tool.
- Do expression inspection and evaluation using the functionality of the
new "lang" package, instead of the Interpolator type and related
functionality in the main "terraform" package.
- Represent addresses of various objects using types in the addrs package,
rather than hand-constructed strings. This is not critical to support
the above, but was a big help during the implementation of these other
points since it made it much more explicit what kind of address is
expected in each context.
Since our new packages are built to accommodate some future planned
features that are not yet implemented (e.g. the "for_each" argument on
resources, "count"/"for_each" on modules), and since there's still a fair
amount of functionality still using old-style APIs, there is a moderate
amount of shimming here to connect new assumptions with old, hopefully in
a way that makes it easier to find and eliminate these shims later.
I apologize in advance to the person who inevitably just found this huge
commit while spelunking through the commit history.
The new config loader requires some steps to happen in a different
order, particularly in regard to knowing the schema in order to
decode the configuration.
Here we lean directly on the configschema package, rather than
on helper/schema.Backend as before, because it's generally
sufficient for our needs here and this prepares us for the
helper/schema package later moving out into its own repository
to seed a "plugin SDK".
We will need access to this information in order to render interactive
input prompts, and it will also be useful in returning schema information
to external tools such as text editors that have autocomplete-like
functionality.
When checking if a ResourceDiff key is valid, allow for keys that exist
on sub-blocks. This allows things like diff.Clear to be called on
sub-block fields, instead of just on top-level fields.
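For instance, within a CustomizeDiff function (the field names here are hypothetical):

```go
CustomizeDiff: func(d *schema.ResourceDiff, meta interface{}) error {
	// Previously only top-level keys such as "ports" were considered
	// valid; keys addressing fields inside nested blocks now pass the
	// key check as well.
	return d.Clear("ports.0.protocol")
},
```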
The `remote` backend config contains an attribute that is defined as a `*schema.Set`, but currently only `string` values are accepted as the `config` attribute is defined as a `schema.TypeMap`.
Additionally the `b.Validate()` method wasn’t called to prevent a possible panic in case of unexpected configurations being passed to `b.Configure()`.
This commit is a bit of a hack to be able to support this in the 0.11 series. The 0.12 series will have proper support, so when merging 0.12 this should be reverted again.
We already had the functionality to make resources deprecated, which was
used when migrating resources to data sources, but the functionality was
unexported, so only the schema package could do it. Now it's exported,
meaning providers can mark entire resources as deprecated. I also added
a test in hopefully-the-right place?
A couple of bugs have been discovered in ResourceDiff.ForceNew:
* NewRemoved is not preserved when a diff for a key is already present.
This is because the second diff that happens after customization
performs a second getChange on not just state and config, but also on
the pre-existing diff. This results in Exists == true, meaning nil is
never returned as a new value.
* ForceNew was doing the work of adding the key to the list of changed
keys by doing a full SetNew on the existing value. This has a side
effect of fetching zero values from what were otherwise undefined values
and creating diffs for these values where there should not have been any
(example: "" => "0").
This update fixes these scenarios by:
* Adding a new private function to check the existing diff for
NewRemoved keys. This is included in the check on new values in
diffChange.
* Keys that have been flagged as ForceNew (or parent keys of lists and
sets that have been flagged as ForceNew) are now maintained in a
separate map. UpdatedKeys now returns the results of both of these maps,
but otherwise these keys are ignored by ResourceDiff.
* Pursuant to the above, values are no longer pushed into the newDiff
writer by ForceNew. This prevents the zero value problem, and makes for
a cleaner implementation where the provider has to "manually" SetNew to
update the appropriate values in the writer. It also prevents
non-computed keys from winding up in the diff, which ResourceDiff
normally blocks by design.
There are also a couple of tests for cases that should never come up
right now involving Optional/Computed values and NewRemoved, for which
explanations are given in annotations of each test. These are here to
guard against future regressions.
Added some more context to GetOkExists, moved Computed to NewValueKnown
to accommodate some changes that will be coming up in HCL2 that may make
"Computed" less intuitive of a function name, and updated the docs for
NewValueKnown as well.
This adds a new method to ResourceDiff: Computed, which exposes the
computed read result field to ResourceDiff. In the context of
customizing the diff, this is important as interpolated and otherwise
computed values will show up in the diff as blank, with no way of
determining if the value is actually blank or if it's a computed value
not available at diff customization time. Currently assumptions need to
be made on this, but this does not help in validation scenarios where
one needs to differentiate between an actual blank value and a value
that will be available later.
This is exposed for the most part via NewComputed in the diff, but the
tests cover both the config reader as well (with no diff, even though
this should not come up in normal operation) and also the newDiff reader
when someone sets a new value using SetNew and SetNewComputed.
This commit also exposes GetOkExists. The tests were mostly pulled from
ResourceData but a few were added to ensure that config was being
properly covered as well, in addition to covering SetNew and
SetNewComputed.
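A sketch of how these might be used together in a CustomizeDiff function (the field names are hypothetical):

```go
CustomizeDiff: func(d *schema.ResourceDiff, meta interface{}) error {
	if !d.NewValueKnown("address") {
		// The new value is interpolated from something not yet known;
		// don't mistake it for a genuinely blank value.
		return nil
	}
	if v, ok := d.GetOkExists("enabled"); ok && !v.(bool) {
		// The key was actually set to its zero value in config, as
		// opposed to being absent; validate accordingly.
	}
	return nil
},
```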
Return the global default timeout if the ResourceData timeouts are nil.
Set the timeouts from the Resource when calling Resource.Data, so that
the config values are always available.
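Inside a CRUD function this means a call like the following always yields a usable duration, a minimal sketch assuming d is the *schema.ResourceData passed in:

```go
// Returns the configured timeout if set, otherwise falls back to the
// resource's declared default or the global default.
timeout := d.Timeout(schema.TimeoutCreate)
ctx, cancel := context.WithTimeout(context.Background(), timeout)
defer cancel()
```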
For historical reasons, the handling of element types for maps is inconsistent with other collection types.
Here we begin a multi-step process to make it consistent, starting by supporting both the "consistent" form of using a schema.Schema and an existing erroneous form of using a schema.Type directly. In subsequent commits we will phase out the erroneous form and require the schema.Schema approach, the same as we do for TypeList and TypeSet.
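The two forms side by side, with hypothetical attribute names:

```go
// Consistent form, matching how TypeList and TypeSet declare elements:
"labels": {
	Type: schema.TypeMap,
	Elem: &schema.Schema{Type: schema.TypeString},
},
// Erroneous form using a schema.ValueType directly, still tolerated
// while it is phased out:
"labels_legacy": {
	Type: schema.TypeMap,
	Elem: schema.TypeString,
},
```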
This new codepath with the getDiff "customized" return value, along with
the associated test, needs to be removed as soon as we can support unset
fields from the config, so we don't continue to carry this broken
behavior forward any longer than needed.
This extends the internal diffChange method so that ResourceDiff's
implementation of it can report back whether or not the value came from
a customized diff.
This is an effort to preserve the pre-ResourceDiff behaviour that
ignores the diff for computed keys when the old value was populated
but the new value wasn't. That behaviour is actually depended on by
users who exploit it to pass zero values through modules. This should
allow both scenarios to co-exist by shifting the NewComputed exemption
over to exempting values that come from diff customization.
This reverts one of the changes from 6a4f7b0, which broke empty strings
being seen as unset for computed values.
This breaks a number of other tests, and is only an intermediate change
for evaluating other solutions.
This case should be expected to fail with the current diff algorithm,
but the existing behavior was widely relied upon so we need to roll this
back until there is a representable nil value.
There are situations where one may need to write to a set, list, or map
more than once per single TF operation (apply/refresh/etc). In these
cases, further writes using Set (example: d.Set("some_set", newSet))
currently create unstable results in the set writer (the name of the
writer layer that holds the data set by these calls) because old keys
are not being cleared out first.
This bug is most visible when using sets. Example: the first write to a
set writes elements that have been hashed at 10 and 20, and the second
write writes elements hashed at 30 and 40. While the set length has
been correctly set at 2, since a set is basically a map (as is the
entire map writer) and map contents are returned to the caller
non-deterministically, reads of this set will now deliver unstable
results in a random fashion: sometimes you may correctly get 30 and 40,
but sometimes you may get 10 and 20, or even 10 and 30, etc.
This problem propagates to state which is even more damaging as unstable
results are set to state where they become part of the permanent data
set going forward.
The problem also applies to lists and maps. This is probably more of an
issue with maps, as a map can contain any key/value combination and
hence there is no predictable pattern where keys would be overwritten
with default or zero values. This is contrary to complex lists, which
have this problem as well, but since lists are deterministic and the
length of a list properly gets updated during the overwrite, the
problem is masked by the fact that a read will only read up to the
boundary of the list, skipping any stale data that may still be present
past the list boundary.
This update clears the child contents of any set, list, or map before
beginning a new write to address this issue. Tests are included for all
three data types.
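A minimal illustration of the triggering pattern, assuming a hypothetical "some_set" key inside a CRUD function:

```go
// Two writes to the same set within a single operation. Before this
// fix, the hashed keys from the first write lingered in the set
// writer, so reads could return a mixture of old and new elements.
d.Set("some_set", []interface{}{"a", "b"}) // hashes at, say, 10 and 20
d.Set("some_set", []interface{}{"c", "d"}) // hashes at, say, 30 and 40
```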
This keeps CustomizeDiff from being defined on data sources, where it
would be useless. We just catch this in InternalValidate like the rest
of the CRUD functions that are not used in data sources.
Added more detail to CustomizeDiff's doc comments. The new comments
describe how CustomizeDiff will be called in different scenarios:
creating a new resource, diffing an existing one, diffing an existing
resource that has a change requiring a new resource, and
destroyed/tainted resources.
Also added similar detail to ForceNew in ResourceDiff.
This should help mitigate any confusion that may come up when using
CustomizeDiff, especially in the ForceNew scenario when the second run
happens with no state.
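For reference, the hook has this shape; a skeletal sketch:

```go
&schema.Resource{
	CustomizeDiff: func(d *schema.ResourceDiff, meta interface{}) error {
		// Runs when creating (no prior state), when diffing an existing
		// resource, and again with empty state on the second pass of a
		// ForceNew replacement, as the updated comments describe.
		return nil
	},
	// ...
}
```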
setDiff does not make use of its new parameter anymore, so it has been
removed. Also, as there is no more SetDiff (exported) function, mentions
of that have been removed from comments too.
This fixes nil pointer issues that could come up if an invalid key was
referenced (i.e. not one in the schema). Also ships a helper validation
function to streamline things.
Restoring the naming of this field in the resource back to
CustomizeDiff, as this is generally more descriptive of the process
that's happening, despite the lengthy name.
The old comments said that this interface was API compatible with
terraform.ResourceProvider's Diff method - with the addition of passing
down meta to it, this is no longer the case.
Not too sure if this is really a big deal - schema.Resource never fully
implemented terraform.ResourceProvider, from what I can see, and the
path from Provider.Diff to Resource.Diff is still pretty clear. Just
wanted to remove an outdated comment.
This could panic if we sent a parent that was actually longer than the
child (should almost never come up, but the guard will make it safe
anyway).
Also fixed a slice style lint warning in UpdatedKeys.