We already had the functionality to mark resources as deprecated, which was
used when migrating resources to data sources, but the functionality was
unexported, so only the schema package could use it. Now it's exported,
meaning providers can mark entire resources as deprecated. I also added
a test in (hopefully) the right place.
This adds the Taint field to the acceptance testing framework, allowing
resources to be pre-tainted at the beginning of a particular TestStep.
This can be useful when an explicit ForceNew is required for a specific
resource while troubleshooting things like diff mismatches, etc.
The field accepts resource addresses as a list of strings. To keep
things simple for the time being, only addresses in the root module are
accepted. If we ever want to expand past that, I'd almost be inclined to
add some facilities to the core terraform package to help translate
actual module resource addresses (e.g.
module.foo.module.bar.some_resource.baz) into the correct state, versus
the current convention in some acceptance testing facilities of taking
the module address as a list of strings (e.g. []string{"root", "foo",
"bar"}).
A couple of bugs have been discovered in ResourceDiff.ForceNew:
* NewRemoved is not preserved when a diff for a key is already present.
This is because the second diff that happens after customization
performs a second getChange on not just state and config, but also on
the pre-existing diff. This results in Exists == true, meaning nil is
never returned as a new value.
* ForceNew was doing the work of adding the key to the list of changed
keys by doing a full SetNew on the existing value. This has a side
effect of fetching zero values from what were otherwise undefined values
and creating diffs for these values where there should not have been any
(example: "" => "0").
This update fixes these scenarios by:
* Adding a new private function to check the existing diff for
NewRemoved keys. This is included in the check on new values in
diffChange.
* Keys that have been flagged as ForceNew (or parent keys of lists and
sets that have been flagged as ForceNew) are now maintained in a
separate map. UpdatedKeys now returns the results of both of these maps,
but otherwise these keys are ignored by ResourceDiff.
* Pursuant to the above, values are no longer pushed into the newDiff
writer by ForceNew. This prevents the zero value problem, and makes for
a cleaner implementation where the provider has to "manually" SetNew to
update the appropriate values in the writer, as sketched below. It also
prevents non-computed keys from winding up in the diff, which
ResourceDiff normally blocks by design.
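A minimal sketch of the resulting calling pattern, assuming a
hypothetical Optional+Computed "storage_type" attribute:

CustomizeDiff: func(d *schema.ResourceDiff, meta interface{}) error {
    if d.HasChange("storage_type") {
        // Flag the key as requiring a new resource. ForceNew no
        // longer writes a value into the diff writer itself...
        if err := d.ForceNew("storage_type"); err != nil {
            return err
        }
        // ...so set the new value explicitly if it should also
        // appear in the diff.
        if err := d.SetNew("storage_type", "gp2"); err != nil {
            return err
        }
    }
    return nil
},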
There are also a couple of tests for cases that should never come up
right now, involving Optional/Computed values and NewRemoved, with
explanations given in annotations on each test. These are here to guard
against future regressions.
Added some more context to GetOkExists, renamed Computed to
NewValueKnown to accommodate some upcoming changes in HCL2 that may make
"Computed" a less intuitive function name, and updated the docs for
NewValueKnown as well.
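A minimal sketch of how the two might be used together in
CustomizeDiff, assuming a hypothetical "endpoint" attribute:

CustomizeDiff: func(d *schema.ResourceDiff, meta interface{}) error {
    // NewValueKnown reports whether the new value for the key is
    // known at diff time, or will only be computed during apply.
    if !d.NewValueKnown("endpoint") {
        // The value is interpolated from something not yet known,
        // so skip any validation that depends on it.
        return nil
    }
    // GetOkExists reports existence separately from the zero value,
    // so an explicitly configured empty string can be told apart
    // from an unset attribute.
    if v, ok := d.GetOkExists("endpoint"); ok && v.(string) == "" {
        return fmt.Errorf("endpoint may not be empty")
    }
    return nil
},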
This adds a new method to ResourceDiff: Computed, which exposes the
computed read result field to ResourceDiff. In the context of
customizing the diff, this is important as interpolated and otherwise
computed values will show up in the diff as blank, with no way of
determining whether the value is actually blank or is a computed value
not available at diff customization time. Currently, assumptions need to
be made about this, which does not help in validation scenarios where
one needs to differentiate between an actual blank value and a value
that will be available later.
This is exposed for the most part via NewComputed in the diff, but the
tests also cover the config reader (with no diff, even though this
should not come up in normal operation) and the newDiff reader when
someone sets a new value using SetNew or SetNewComputed.
This commit also exposes GetOkExists. The tests were mostly pulled from
ResourceData, but a few were added to ensure that config was properly
covered as well, in addition to covering SetNew and SetNewComputed.
Return the global default timeout if the ResourceData timeouts are nil.
Set the timeouts from the Resource when calling Resource.Data, so that
the config values are always available.
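A minimal sketch of why this matters, assuming a hypothetical create
function using the resource.Retry helper:

func resourceExampleCreate(d *schema.ResourceData, meta interface{}) error {
    // With the Resource's timeouts now always attached to the
    // ResourceData, Timeout returns the configured create timeout,
    // falling back to the global default when none is set.
    return resource.Retry(d.Timeout(schema.TimeoutCreate), func() *resource.RetryError {
        // ... poll the upstream API here ...
        return nil
    })
}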
For historical reasons, the handling of element types for maps is inconsistent with other collection types.
Here we begin a multi-step process to make it consistent, starting by supporting both the "consistent" form of using a schema.Schema and an existing erroneous form of using a schema.Type directly. In subsequent commits we will phase out the erroneous form and require the schema.Schema approach, the same as we do for TypeList and TypeSet.
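As a sketch, assuming a hypothetical "tags" attribute, the consistent
form looks like this, while the erroneous form being phased out assigns
a bare schema.ValueType (e.g. Elem: schema.TypeString) directly:

"tags": &schema.Schema{
    Type:     schema.TypeMap,
    Optional: true,
    // The consistent form: the element type is described by a
    // nested *schema.Schema, the same as for TypeList and TypeSet.
    Elem: &schema.Schema{
        Type: schema.TypeString,
    },
},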
This is rarely needed, but sometimes tests need to create temporary files as part of their operation. This should be used sparingly, since it prevents the proactive cleanup of the temporary working directory.
In terraform-providers/terraform-provider-aws#2935, we have been cleaning up
code duplication by making use of the "NormalizeJsonString" function in the
"structure" helper. The tests in the AWS provider cover more use cases,
and those are added in this work.
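For reference, a minimal sketch of the helper in use (the surrounding
error handling is illustrative):

// NormalizeJsonString re-marshals its input, producing a compact
// rendering with stable key order, so semantically equal JSON
// documents (e.g. IAM policy strings) compare as equal.
normalized, err := structure.NormalizeJsonString(`{"b": 1, "a": 2}`)
if err != nil {
    return fmt.Errorf("invalid JSON: %s", err)
}
// normalized == `{"a":2,"b":1}`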
This new codepath with the getDiff "customized" return value, along with
the associated test, needs to be removed as soon as we can support unset
fields from the config, so we don't continue to carry this broken
behavior forward any longer than needed.
This extends the internal diffChange method so that ResourceDiff's
implementation of it can report back whether or not the value came from
a customized diff.
This is an effort to preserve the pre-ResourceDiff behaviour of ignoring
the diff for computed keys when the old value was populated but the new
value wasn't - users are actually depending on this behaviour to pass
zero values through modules. This should allow both scenarios to
co-exist by shifting the NewComputed exemption over to exempting values
that come from diff customization.
This reverts one of the changes from 6a4f7b0, which broke empty strings
being seen as unset for computed values.
This breaks a number of other tests, and is only an intermediate change
for evaluating other solutions.
This case should be expected to fail with the current diff algorithm,
but the existing behavior was widely relied upon so we need to roll this
back until there is a representable nil value.
The CustomizeDiff functionality in helper/schema is powerful, but directly
writing single CustomizeDiff functions can obscure the intent when a
number of different, orthogonal diff-customization behaviors are required.
This new library provides some building blocks that aim to allow a more
declarative form of CustomizeDiff implementation, by composing a number of
smaller operations. For example:
&schema.Resource{
// ...
CustomizeDiff: customdiff.All(
customdiff.ValidateChange("size", func (old, new, meta interface{}) error {
// If we are increasing "size" then the new value must be
// a multiple of the old value.
if new.(int) <= old.(int) {
return nil
}
if (new.(int) % old.(int)) != 0 {
return fmt.Errorf("new size value must be an integer multiple of old value %d", old.(int))
}
return nil
}),
customdiff.ForceNewIfChange("size", func (old, new, meta interface{}) bool {
// "size" can only increase in-place, so we must create a new resource
// if it is decreased.
return new.(int) < old.(int)
}),
customdiff.ComputedIf("version_id", func (d *schema.ResourceDiff, meta interface{}) bool {
// Any change to "content" causes a new "version_id" to be allocated.
return d.HasChange("content")
}),
),
}
The goal is to allow the various separate operations to be quickly seen
and to ensure that each of them runs independently of the others. These
functions all create closures on the call parameters, so the result is
still just a normal CustomizeDiffFunc and so the helpers in this package
can be combined with hand-written functions as needed.
As we get more experience writing CustomizeDiff functions we may wish to
expand the repertoire of functions here in future; this initial set
attempts to cover some common cases we've seen so far. We may also
investigate some helper functions that are entirely declarative and so
don't take callback functions at all, but want to learn what the relevant
use-cases are before going in too deep here.
Looks like while we were checking errors correctly when ExpectError was
set, we weren't checking for the *absence* of an error, which should be
checked as well (no error is still not the error we are looking for).
Added a few more tests for ExpectError as well.
Validation is the best time to return detailed diagnostics
to the user, since we're much more likely to have source
location information, etc. than we are in later operations.
This change doesn't actually add any detail to the messages
yet, but it changes the interface so that we can gradually
introduce more detailed diagnostics over time.
While here there are some minor adjustments to some of the
messages to improve their consistency with terminology we
use elsewhere.
StringMatch returns a validation function that can be used to match a
string against a regular expression. This can be used for simple
substring validations or more complex validation scenarios. Optionally,
an error message can be supplied so that the user receives something
more helpful than a report that their field did not match a regular
expression they may not be able to understand.
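A minimal sketch, assuming a hypothetical "name" attribute:

"name": &schema.Schema{
    Type:     schema.TypeString,
    Required: true,
    ValidateFunc: validation.StringMatch(
        regexp.MustCompile(`^[a-z][a-z0-9-]*$`),
        "name must start with a lowercase letter and may contain only lowercase letters, digits, and hyphens",
    ),
},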
There are situations where one may need to write to a set, list, or map
more than once per single TF operation (apply/refresh/etc). In these
cases, further writes using Set (example: d.Set("some_set", newSet))
currently create unstable results in the set writer (the name of the
writer layer that holds the data set by these calls) because old keys
are not being cleared out first.
This bug is most visible when using sets. Example: the first write to a
set writes elements that have been hashed at 10 and 20, and the second
write writes elements hashed at 30 and 40. While the set length has been
correctly set at 2, since a set is basically a map (as is the entire map
writer) and map iteration order is non-deterministic, reads of this set
will now deliver unstable results in an unpredictable fashion -
sometimes you may correctly get 30 and 40, but sometimes you may get
10 and 20, or even 10 and 30, etc.
This problem propagates to state which is even more damaging as unstable
results are set to state where they become part of the permanent data
set going forward.
The problem also applies to lists and maps. This is probably more of an
issue with maps, as a map can contain any key/value combination and
hence there is no predictable pattern where keys would be overwritten
with default or zero values. Complex lists have this problem as well,
but since lists are deterministic and the length of a list is properly
updated during the overwrite, the problem is masked by the fact that a
read will only read to the boundary of the list, skipping any bad data
that may still linger past the list boundary.
This update clears the child contents of any set, list, or map before
beginning a new write to address this issue. Tests are included for all
three data types.
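A minimal sketch of the pattern this fixes, assuming a hypothetical
"some_set" attribute written twice from two hypothetical API reads:

// Each Set call now clears the key's previous child contents first,
// so the second write no longer leaves stale hashed entries behind
// in the set writer.
if err := d.Set("some_set", firstRead); err != nil {
    return err
}
// ... later within the same apply/refresh ...
if err := d.Set("some_set", secondRead); err != nil {
    return err
}
// Reads of "some_set" now see only secondRead's elements.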
This keeps CustomizeDiff from being defined on data sources, where it
would be useless. We just catch this in InternalValidate like the rest
of the CRUD functions that are not used in data sources.
Added some more detail to CustomizeDiff's comments. The new comments
describe how CustomizeDiff will be called in different scenarios:
creating a new resource, diffing an existing one, diffing an existing
resource that has a change requiring a new resource, and
destroy/tainted resources.
Also added similar detail to ForceNew in ResourceDiff.
This should help mitigate any confusion that may come up when using
CustomizeDiff, especially in the ForceNew scenario when the second run
happens with no state.
setDiff does not make use of its new parameter anymore, so it has been
removed. Also, as there is no longer an exported SetDiff function,
mentions of it have been removed from comments too.
This fixes nil pointer issues that could come up if an invalid key was
referenced (i.e. not one in the schema). Also ships a helper validation
function to streamline things.
Restoring the naming of this field in the resource back to
CustomizeDiff, as this is generally more descriptive of the process
that's happening, despite the lengthy name.
The old comments said that this interface was API compatible with
terraform.ResourceProvider's Diff method - with the addition of passing
down meta to it, this is no longer the case.
Not too sure if this is really a big deal - schema.Resource never fully
implemented terraform.ResourceProvider, from what I can see, and the
path from Provider.Diff to Resource.Diff is still pretty clear. Just
wanted to remove an outdated comment.
This could panic if we sent a parent that was actually longer than the
child (should almost never come up, but the guard will make it safe
anyway).
Also fixed a slice style lint warning in UpdatedKeys.
The consensus is that it's generally a better idea to move the setting
of old values completely to the refresh/read process, hence it's moot to
have this function around anymore. This also means we don't need the old
value reader/writer anymore, which simplifies things.
When working on this initially, I assumed that since NewComputed values
in the diff were empty strings, the zero value was being used. After
review, it doesn't seem like this is the case - so I have adjusted
NewComputed to pass nil values. There is also a guard now that keeps the
new value writer from accepting computed fields with non-nil values.
To keep with the current convention of most other schema.Resource
functional fields being fairly short, CustomizeDiff has been changed to
"Review". It would be "Diff", however it is already used by existing
functions in schema.Provider and schema.Resource.
Neither Destroy nor DestroyDeposed is propagated down the diff stack,
meaning that there is no way we can tell at this point whether an
instance is being destroyed or deposed, so this check would never be
used. In this regard, Destroy never runs a diff down the stack at all,
and a deposition check is not run until *after* the provider's diff
function is called. To answer this question and close it off, we could
either determine whether a resource is deposed earlier and propagate
that down, or treat deposed resources like full destroy nodes and not
diff them at all (instead making a diff whose only content is the
DestroyDeposed flag).
Added a few more test cases for CustomizeDiff, caught another error in
the process. I think this is ready for review now, and possibly some
real-world testing of the waters by way of porting some resources that
would benefit from the feature.
It's alive! CustomizeDiff logic now has been inserted into the diff
process. The test_resource_with_custom_diff resource provides some basic
testing and a reference implementation.
There should now be plenty of test coverage for this feature via the
tests added for ResourceDiff, the basic test added to the schemaMap.Diff
tests, and the test resource, but more can be added to cover any
specific case that comes up otherwise.
As the interface has been specced out for ResourceDiff, the only diff
operation that can function on a non-computed key is ForceNew, and that
is only if a diff exists for that key. As such, allowing the user to
clear keys that cannot be operated on afterwards does not really make
much sense.
Will re-add this function if it is determined it's needed after all on
review.
Final set of coverage for ResourceDiff and bug fixes to correct issues
the test cases brought up.
Will be dropping ClearAll in next commit, as with how this interface is
intended to be used, it does not really make much sense.
This provides a deep copy of a schemaMap, which will be needed for the
diff customization process as ResourceDiff will be able to flag fields
as ForceNew - we don't want to affect the source schema.
In diffString, removed values are marked if the old value is non-nil and
the new value is nil, regardless of whether the new value is marked as
computed. With the new diff override behaviour, this can lead to issues
where a nil attribute may get inserted into the diff if the new value
has been marked as computed (as the value will be marked as missing, but
computed).
This propagates into finalizeDiff in some respects because even if
NewComputed is manually set in the diff that gets passed in, it is
ignored, and the schema becomes the only source of truth. Since our new
diff behaviour is mainly designed to be supported on computed keys only,
it stands to reason that there will be a time when we want to set a diff
as NewComputed on a key and see that change in the diff.
These two small changes make that happen. No regressions in tests have
been observed via this change, so it seems safe and non-invasive.
The beginning of tests for SetNew. Noticed that removed values are not
logged for computed keys in a diff - not too sure if that is going to be
a problem (possibly not, as GetChange in ResourceData should still
reference state for the old value). I came across this most notably in
the "TestSetNew/complex-ish_set_diff" test, for reference.
More tests are pending for the other functions and test cases as
defined in #8769.
This ensures that when we hook this into the main diff logic, that we
can just re-diff the keys that are modified, ensuring that the diff is
not re-run on keys that were not touched, or on keys that were
intentionally removed.
This should complete the feature set of the prototype. This function
removes a specific key from the existing diff, preventing conflicts.
Further functionality might be needed to make this behave as expected,
namely ensuring that we don't re-process the whole diff after the
CustomizeDiffFunc runs.
This new prototype removes the dependence on an underlying ResourceData,
moving several items up to ensure an approach that more closely matches
ResourceData. Namely, we create our own MultiLevelFieldReader that
also layers updated diff values on top. New values use a small
re-implementation of the MapFieldWriter/Reader that allows computed
values to be set.
The assumption here now is that a second diff will happen after the
first one is done, processing the new values set in the two new
reader/writer levels and updating the diff as necessary.
This adds a new object, ResourceDiff, to the schema package. This
object, in conjunction with a function defined in CustomizeDiff in the
resource schema, allows for the in-flight customization of a Terraform
diff. This helps support use cases where there are necessary changes to
a resource that cannot be detected in config, such as via computed
fields (most of the utility in this object works on computed fields
only). It also allows for a wholesale wipe of the diff, so that diff
logic can be offloaded entirely to an external API if that is a better
fit for a specific provider.
As part of this work, many internal diff functions have been moved to
use a special resourceDiffer interface, to allow for shared
functionality between ResourceDiff and ResourceData. This may be
extended to the DiffSuppressFunc as well which would restrict use of
ResourceData in DiffSuppressFunc to generally read-only fields.
This work is not yet in its final state - CustomizeDiff is not yet
implemented in the general diff workflow, new functions may be added
(notably Clear() for a single key), and functionality may be altered.
Tests will follow as well.
Update the command package to use the new module storage. Move the old
command output strings into the module storage itself. This could be
moved back later either by using ui callbacks, or designing a module
storage interface once we know what the final requirements will look
like.
In order to parse provider, resource and data source configuration from
HCL2 config files, we need to know the relevant configuration schema.
This new method allows Terraform Core to request these from a provider.
This is a breaking change to this interface, so all of its implementers
in this package are updated too. This includes concrete implementations
of the new method in helper/schema that use the schema conversion code
added in an earlier commit to produce a configschema.Block automatically.
Plugins compiled against prior versions of helper/schema will not have
support for this method, and so calls to them will fail. Callers of
this new method will therefore need to sniff for support using the
SchemaAvailable field added to both ResourceType and DataSource.
This careful handling will need to persist until next time we increment
the plugin protocol version, at which point we can make the breaking
change of requiring this information to be available.
Uses Levenshtein distance to decide if the input is similar enough to one
of the given suggestions, and returns that suggestion if so.
The distance threshold of three was arrived at experimentally, and has
no objective basis.
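A minimal sketch of the idea, assuming the github.com/agext/levenshtein
package and a hypothetical standalone helper:

// nameSuggestion returns the first of the given suggestions whose
// Levenshtein distance from the input is under the (experimentally
// chosen) threshold of three, or "" if none is close enough.
func nameSuggestion(given string, suggestions []string) string {
    for _, suggestion := range suggestions {
        if levenshtein.Distance(given, suggestion, nil) < 3 {
            return suggestion
        }
    }
    return ""
}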
We don't currently have any need for this information, but we're
propagating it out of helper/schema here pre-emptively so that once we
later have a use for it we will not need to rebuild the providers to gain
access to it.
The long-term expected use-case for this is to have Terraform Core use
static analysis techniques to trace the path of sensitive data through
interpolations so that intermediate results can be flagged as sensitive
too, but we have a lot more work to do before such a thing would actually
be possible.
As part of moving to the next-generation HCL implementation,
Terraform Core is getting its own representation of configuration schema
that is tailored for configuration-processing use-cases. The capabilities
of this are a subset of the helper/schema model primarily concerned with
the configuration structure and value types, leaving detailed validation
and defaults for helper/schema to still solve.
These new methods allow mechanical creation of a schema in the new Core
schema model from a schema expressed in the helper/schema model. This is
not yet used as of this commit, but will be used later to implement some
new ResourceProvider methods that will allow core to obtain the schema
for provider, resource and data source configuration while remaining
source-compatible with existing provider implementations.
This adds NoZeroValues, a small SchemaValidateFunc that checks that a
defined value is not a zero value. It's useful for situations where you
want to keep someone from explicitly entering a zero value (i.e.
literally "0", or an empty string) on a required field, and want to
catch it in the validation stage, versus during apply using GetOk.
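A minimal sketch, assuming a hypothetical "name" attribute:

"name": &schema.Schema{
    Type:     schema.TypeString,
    Required: true,
    // Rejects "" (and would reject 0 on a numeric type) at
    // validation time, rather than surprising the user during apply.
    ValidateFunc: validation.NoZeroValues,
},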
Add an ImportStateIdFunc field to the ImportState testing functionality.
This will allow for more powerful generation of complex import state IDs
that can't be accomplished by ImportStateId or ImportStateIdPrefix
themselves.
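A minimal sketch, assuming a hypothetical example_thing resource whose
import ID combines two attributes:

resource.TestStep{
    ResourceName:      "example_thing.foo",
    ImportState:       true,
    ImportStateVerify: true,
    // Build a composite import ID from attributes already in state,
    // which ImportStateId and ImportStateIdPrefix can't express.
    ImportStateIdFunc: func(s *terraform.State) (string, error) {
        rs, ok := s.RootModule().Resources["example_thing.foo"]
        if !ok {
            return "", fmt.Errorf("example_thing.foo not found in state")
        }
        return fmt.Sprintf("%s/%s", rs.Primary.Attributes["cluster"], rs.Primary.ID), nil
    },
}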