Since the SDK's schema system conflates attributes and nested blocks, it's
possible to express some nonsensical schema situations, such as:
- A nested block is optional, but has MinItems > 0
- A nested block is entirely computed, but has MinItems or MaxItems set
Both of these weird situations are handled here in the same way that the
existing helper/schema validation code would've handled them: by
effectively disabling the MinItems/MaxItems checks where they would've
been ignored before.
The SDK has a mechanism that effectively makes it possible to declare an
attribute as being _conditionally_ required, which is not a concept that
Terraform Core is aware of.
Since this mechanism is in practice only used for a small UX improvement
in prompting for these values interactively when the environment variable
is not set, we avoid here introducing all of this complexity into the
plugin protocol by just having the provider selectively modify its schema
if it detects that such an attribute might be set dynamically.
This then prevents Terraform Core from validating the presence of the
argument or prompting for a new value for it, allowing the null value to
pass through into the provider so that the default value can be generated
again dynamically.
This is a kinda-kludgey solution which we're accepting here because the
alternative would be a much-more-complex two-pass decode operation within
Core itself, and that doesn't seem worth it.
This fixes #19139.
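To illustrate the pattern being special-cased, a hedged sketch (the attribute
and environment variable names here are hypothetical):
```go
"token": {
    Type:     schema.TypeString,
    Required: true,
    // When EXAMPLE_TOKEN is set in the environment the value is filled
    // in dynamically, making the attribute effectively optional; when
    // it is not set, the SDK would previously prompt interactively.
    DefaultFunc: schema.EnvDefaultFunc("EXAMPLE_TOKEN", nil),
},
```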
The main significant change here is that the package name for the proto
definition is "tfplugin5", which is important because this name is part
of the wire protocol for references to types defined in our package.
Along with that, we also move the generated package into "internal" to
make it explicit that importing the generated Go package from elsewhere is
not the right approach for externally-implemented SDKs, which should
instead vendor the proto definition they are using and generate their
own stubs to ensure that the wire protocol is the only hard dependency
between Terraform Core and plugins.
After this is merged, any provider binaries built against our
helper/schema package will need to be rebuilt so that they use the new
"tfplugin5" package name instead of "proto".
In a future commit we will include more elaborate and organized
documentation on how an external codebase might make use of our RPC
interface definition to implement an SDK, but the primary concern here
is to ensure we have the right wire package name before release.
In order to prevent mismatched states between read/plan/apply, we need
to ensure that the attributes are generated consistently each time.
Because of the various ways in which helper/schema and the hcl2 shims
interpret empty values, the only way to ensure consistency is to always
remove them altogether.
This makes sure the diff is generated with the matching set ids from
helper/schema.
Update the tests to add ID fields to the state, which will exist in
practice, since any state traversing through the shims will have the ID
inserted.
helper/schema will remove "timeouts" from the config, and stash them in
the diff.Meta map. Terraform sees "timeouts" as a regular config block,
so it needs them to be present in the state in order to not show a diff.
Have the GRPCProviderServer shim copy all timeout values into any state
it returns to provide consistent diffs in core.
Resource timeouts were a separate config block, but did not exist in the
resource schema. Insert any defined timeouts when generating the
configschema.Block so that the fields can be accepted and validated by
core.
`Any()` combines multiple `SchemaValidateFunc`s and passes validation if any single one of them passes. This covers cases where no single standard validation function provides the required behavior, or where it makes error messaging simpler.
Example provider usage:
```go
ValidateFunc: validation.Any(
validation.IntAtLeast(42),
validation.IntAtMost(5),
),
```
`All()` combines multiple `SchemaValidateFunc`s, aggregating their outputs, to reduce the need for custom validation functions that merely reimplement combinations of the standard ones.
Example provider usage:
```go
ValidateFunc: validation.All(
validation.StringLenBetween(5, 42),
validation.StringMatch(regexp.MustCompile(`[a-zA-Z0-9]+`), "value must be alphanumeric"),
),
```
`IntInSlice()` is the `int` equivalent of `StringInSlice()`
Example provider usage:
```go
ValidateFunc: validation.IntInSlice([]int{30, 60, 120})
```
Output from unit testing:
```
$ make test TEST=./helper/validation
==> Checking that code complies with gofmt requirements...
go generate ./...
2018/10/17 14:16:03 Generated command/internal_plugin_list.go
go list ./helper/validation | xargs -t -n4 go test -timeout=2m -parallel=4
go test -timeout=2m -parallel=4 github.com/hashicorp/terraform/helper/validation
ok github.com/hashicorp/terraform/helper/validation 1.106s
```
The helper/resource unit tests will panic, because they were using the
legacy terraform.MockResourceProvider, which doesn't have the same
internals required by the new GRPC shims.
Fail these tests for now, and a new test provider will need to be made
out of a schema.Provider instance.
Use the new SimpleDiff method of the provider so that the diff isn't
altered by ForceNew attributes.
Always set an "id" as RequiresReplace so core knows an instance will be
replaced, even if all ForceNew attributes are filtered out due to
ignore_changes.
Terraform now handles any actual "diffing" of resources, and the existing
Diff functions are only used to shim the schema.Provider to the new
methods. Since terraform is handling what used to be the Diff, the
provider should no longer modify the diff based on RequiresNew, because
doing so interferes with the ignore_changes handling.
While the schema Diff function returns a nil diff when creating an empty
(except for id) resource, the Apply function expects the diff to be
initialized and empty.
PlanResourceChange isn't returning the diff, but rather the desired
state. If the proposed state results in a nil diff, then the proposed
state is what should be returned.
Make sure Meta fields are not nil, as the schema package expects those
to be initialised.
An earlier change introduced a new function testConfig to the main code
for this package, which conflicted with a function of the same name in
the test code.
Here we rename the function from the test code, allowing for the more
generally-named testConfig to be the one in the main code.
The "id" field is assumed to always exist, and must have a valid value.
Set "id" to unknown when planning new resource changes to indicate that
it will be computed.
Previously we just left these out of the plan altogether, but in the new
plan types we intentionally include change information for every resource
instance, even if no changes are actually planned, to allow alternative
plan file viewers to show what isn't changing as well as what is.
Managing which functions need to be shared between the terraform plugin
and the helper plugin without creating cycles was becoming difficult.
Move all functions related to converting between terraform and proto
types into plugin/convert.
Due to how often the state and plan types are referenced throughout
Terraform, there isn't a great way to switch them out gradually. As a
consequence, this huge commit gets us from the old world to a _compilable_
new world, but still has a large number of known test failures due to
key functionality being stubbed out.
The stubs here are for anything that interacts with providers, since we
now need to do the follow-up work to similarly replace the old
terraform.ResourceProvider interface with its replacement in the new
"providers" package. That work, along with work to fix the remaining
failing tests, will follow in subsequent commits.
The aim here was to replace all references to terraform.State and its
downstream types with states.State, terraform.Plan with plans.Plan,
state.State with statemgr.State, and switch to the new implementations of
the state and plan file formats. However, due to the number of times those
types are used, this also ended up affecting numerous other parts of core
such as terraform.Hook, the backend.Backend interface, and most of the CLI
commands.
Just as with 5861dbf3fc49b19587a31816eb06f511ab861bb4 before, I apologize
in advance to the person who inevitably just found this huge commit while
spelunking through the commit history.
The new helper/plugin package contains the grpc servers for handling the
new plugin protocol.
The GRPCProviderServer and GRPCProvisionerServer handle the grpc plugin
protocol, and convert the requests to the legacy schema.Provider and
schema.Provisioner methods.
In order to not require state migrations to be supported in both
MigrateState and StateUpgraders, the legacy provider codepath needs to
handle the StateUpgraders transparently during Refresh.
This adds some of the required shim functions to the schema package.
While this further bloats the already huge package, adding the helpers
here was significantly less disruptive than refactoring types into
separate packages to prevent import cycles.
The majority of tests here are directly adapted from existing schema
tests to provide as many known good values to the shims as possible.
It turns out that state upgrades need to be handled differently since
providers are going to be backwards compatible. This means that new
state upgrades may still be stored in the flatmap format when used with
terraform 0.11. Because we can't account for the specific version which
could produce a legacy state, all future state upgrades need to record
the schema types for decoding.
Rather than defining a single Upgrade function for states, we now have a
list of functions, each of which handle upgrading a specific version to
the next. In practice this isn't much different from the way many
resources implement upgrades themselves, with a separate function for
each version dispatched from the MigrateState function. The only added
burden is the recording of the schema type, and we intend to supply
tools and helper functions to prevent the need to copy the entire
existing schema in all cases.
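A hedged sketch of the resulting shape; the per-version schema helpers and
upgrade functions are hypothetical, while the field names follow the
helper/schema API described above:
```go
&schema.Resource{
    SchemaVersion: 2,
    StateUpgraders: []schema.StateUpgrader{
        {
            Version: 0,
            Type:    resourceExampleV0().CoreConfigSchema().ImpliedType(),
            Upgrade: resourceExampleUpgradeV0, // upgrades version 0 to 1
        },
        {
            Version: 1,
            Type:    resourceExampleV1().CoreConfigSchema().ImpliedType(),
            Upgrade: resourceExampleUpgradeV1, // upgrades version 1 to 2
        },
    },
}
```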
This is the provider-side UpgradeState implementation for a particular
resource. This new function will be called to upgrade a saved state with
an old schema version to the current schema.
UpgradeState also requires a record of the last schema and version that
could have been stored as a flatmapped state. If the stored state is in
the legacy flatmap format, this will allow the provider to properly
decode the flatmapped state into the expected structure for the new json
encoded state. If the stored state's version is below that of the
LegacySchema.Version value, it will first be processed by the legacy
MigrateState function.
The update protocol shims will also check for this, but eventually
"id" will only be a normal attribute, and we shouldn't have to special
case this.
When converting a legacy schemaMap to a configschema, we need to add
"id" as a required attribute to top-level resources if it's not
declared.
The "id" field will be required to interoperate with the legacy helper
schema, since the presence of an id was used to indicate the existence
of a resource.
The "config" package is no longer used and will be removed as part
of the 0.12 release cleanup. Since configschema is part of the
"new world" of configuration modelling, it makes more sense for
it to live as a subdirectory of the newer "configs" package.
Due to how deeply the configuration types go into Terraform Core, there
isn't a great way to switch out to HCL2 gradually. As a consequence, this
huge commit gets us from the old state to a _compilable_ new state, but
does not yet attempt to fix any tests and has a number of known missing
parts and bugs. We will continue to iterate on this in forthcoming
commits, heading back towards passing tests and making Terraform
fully-functional again.
The three main goals here are:
- Use the configuration models from the "configs" package instead of the
older models in the "config" package, which is now deprecated and
preserved only to help us write our migration tool.
- Do expression inspection and evaluation using the functionality of the
new "lang" package, instead of the Interpolator type and related
functionality in the main "terraform" package.
- Represent addresses of various objects using types in the addrs package,
rather than hand-constructed strings. This is not critical to support
the above, but was a big help during the implementation of these other
points since it made it much more explicit what kind of address is
expected in each context.
Since our new packages are built to accommodate some future planned
features that are not yet implemented (e.g. the "for_each" argument on
resources, "count"/"for_each" on modules), and since there's still a fair
amount of functionality still using old-style APIs, there is a moderate
amount of shimming here to connect new assumptions with old, hopefully in
a way that makes it easier to find and eliminate these shims later.
I apologize in advance to the person who inevitably just found this huge
commit while spelunking through the commit history.
The new config loader requires some steps to happen in a different
order, particularly in regard to knowing the schema in order to
decode the configuration.
Here we lean directly on the configschema package, rather than
on helper/schema.Backend as before, because it's generally
sufficient for our needs here and this prepares us for the
helper/schema package later moving out into its own repository
to seed a "plugin SDK".
We will need access to this information in order to render interactive
input prompts, and it will also be useful in returning schema information
to external tools such as text editors that have autocomplete-like
functionality.
One of the tests in this package doesn't work when the tests are run as
root - like inside of a Docker container. The test is still useful to
specify `pathorcontents` behavior when a file is not readable, so it's
better to skip than just delete it. See linked issue for further
discussion.
Closes #7707
When checking if a ResourceDiff key is valid, allow for keys that exist
on sub-blocks. This allows things like diff.Clear to be called on
sub-block fields, instead of just on top-level fields.
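For example, something along these lines is now possible (the block and
attribute names are hypothetical):
```go
CustomizeDiff: func(d *schema.ResourceDiff, meta interface{}) error {
    // Clear a diff on a field nested inside a sub-block, which
    // previously only worked for top-level keys.
    return d.Clear("settings.0.retention_days")
},
```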
The `remote` backend config contains an attribute that is defined as a `*schema.Set`, but currently only `string` values are accepted as the `config` attribute is defined as a `schema.TypeMap`.
Additionally the `b.Validate()` method wasn’t called to prevent a possible panic in case of unexpected configurations being passed to `b.Configure()`.
This commit is a bit of a hack to be able to support this in the 0.11 series. The 0.12 series will have proper support, so when merging 0.12 this should be reverted again.
While this initial implementation is a very simple wrapper function, implementing this in the helper/resource package provides some downstream benefits:
* Provides a standard interface for plugin developers to enable parallel acceptance testing
* Existing plugins can simply convert resource.Test to resource.ParallelTest references (as appropriate) to enable the functionality, rather than worrying about additional line(s) to each acceptance test function or TestCase
* Potential enhancements to ParallelTest (e.g. adding an environment variable to skip enabling the behavior) are consistently propagated
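A hedged usage sketch (the test, provider map, and config names are
hypothetical):
```go
func TestAccExampleThing_basic(t *testing.T) {
    // ParallelTest behaves like resource.Test, but also marks the
    // test as safe to run in parallel with other parallel tests.
    resource.ParallelTest(t, resource.TestCase{
        Providers: testAccProviders,
        Steps: []resource.TestStep{
            {Config: testAccExampleThingConfig},
        },
    })
}
```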
We already had the functionality to make resources deprecated, which was
used when migrating resources to data sources, but the functionality was
unexported, so only the schema package could do it. Now it's exported,
meaning providers can mark entire resources as deprecated. I also added
a test in hopefully-the-right place?
ImportStateVerify does not respect DiffSuppressFunc or CustomizeDiff, which is worth documenting (and maybe, possibly, worth changing?). ImportStateVerifyIgnore is a list of prefixes, rather than a list of fields.
This adds the Taint field to the acceptance testing framework, allowing
the ability to pre-taint resources at the beginning of a particular
TestStep. This can be useful for when an explicit ForceNew is required
for a specific resource for troubleshooting things like diff mismatches,
etc.
The field accepts resource addresses as a list of strings. To keep
things simple for the time being, only addresses in the root module are
accepted. If we ever want to expand this past that, I'd be almost
inclined to add some facilities to the core terraform package to help
translate actual module resource addresses (ie:
module.foo.module.bar.some_resource.baz) into the correct state, versus
the current convention in some acceptance testing facilities that take
the module address as a list of strings (ie: []string{"root", "foo",
"bar"}).
A couple of bugs have been discovered in ResourceDiff.ForceNew:
* NewRemoved is not preserved when a diff for a key is already present.
This is because the second diff that happens after customization
performs a second getChange on not just state and config, but also on
the pre-existing diff. This results in Exists == true, meaning nil is
never returned as a new value.
* ForceNew was doing the work of adding the key to the list of changed
keys by doing a full SetNew on the existing value. This had the side
effect of fetching zero values for what were otherwise undefined values,
creating diffs for these values where there should have been none
(example: "" => "0").
This update fixes these scenarios by:
* Adding a new private function to check the existing diff for
NewRemoved keys. This is included in the check on new values in
diffChange.
* Keys that have been flagged as ForceNew (or parent keys of lists and
sets that have been flagged as ForceNew) are now maintained in a
separate map. UpdatedKeys now returns the results of both of these maps,
but otherwise these keys are ignored by ResourceDiff.
* Pursuant to the above, values are no longer pushed into the newDiff
writer by ForceNew. This prevents the zero value problem, and makes for
a cleaner implementation where the provider has to "manually" SetNew to
update the appropriate values in the writer. It also prevents
non-computed keys from winding up in the diff, which ResourceDiff
normally blocks by design.
There are also a couple of tests for cases that should never come up
right now involving Optional/Computed values and NewRemoved, for which
explanations are given in annotations of each test. These are here to
guard against future regressions.
Added some more context to GetOkExists, moved Computed to NewValueKnown
to accommodate some changes that will be coming up in HCL2 that may make
"Computed" less intuitive of a function name, and updated the docs for
NewValueKnown as well.
This adds a new method to ResourceDiff: Computed, which exposes the
computed read result field to ResourceDiff. In the context of
customizing the diff, this is important as interpolated and otherwise
computed values will show up in the diff as blank, with no way of
determining if the value is actually blank or if it's a computed value
not available at diff customization time. Currently assumptions need to
be made on this, but this does not help in validation scenarios where
one needs to differentiate between an actual blank value and a value
that will be available later.
This is exposed for the most part via NewComputed in the diff, but the
tests cover both the config reader (with no diff, even though
this should not come up in normal operation) and the newDiff reader
when someone sets a new value using SetNew and SetNewComputed.
This commit also exposes GetOkExists. The tests were mostly pulled from
ResourceData but a few were added to ensure that config was being
properly covered as well, in addition to covering SetNew and
SetNewComputed.
Return the global default timeout if the ResourceData timeouts are nil.
Set the timeouts from the Resource when calling Resource.Data, so that
the config values are always available.
For historical reasons, the handling of element types for maps is inconsistent with other collection types.
Here we begin a multi-step process to make it consistent, starting by supporting both the "consistent" form of using a schema.Schema and an existing erroneous form of using a schema.Type directly. In subsequent commits we will phase out the erroneous form and require the schema.Schema approach, the same as we do for TypeList and TypeSet.
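A hedged sketch of the two forms, both accepted during the transition (the
attribute names are hypothetical):
```go
"labels": {
    Type: schema.TypeMap,
    // Consistent form, matching how TypeList and TypeSet declare Elem:
    Elem: &schema.Schema{Type: schema.TypeString},
},
"tags": {
    Type: schema.TypeMap,
    // Pre-existing erroneous form, to be phased out in later commits:
    Elem: schema.TypeString,
},
```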
This is rarely needed, but sometimes tests need to create temporary files as part of their operation. This should be used sparingly, since it prevents the pro-active cleanup of the temporary working directory.
In terraform-providers/terraform-provider-aws#2935, we have been cleaning up code
duplication by making use of the "NormalizeJsonString" function present in the "structure" helper.
It appears that tests in the AWS provider are covering more use-cases,
which are added in this work.
This new codepath with the getDiff "customized" return value, along with
the associated test need to be removed as soon as we can support unset
fields from the config, so we don't continue to carry this broken
behavior forward any longer than needed.
This extends the internal diffChange method so that ResourceDiff's
implementation of it can report back whether or not the value came from
a customized diff.
This is an effort to preserve the pre-ResourceDiff behaviour
that ignores the diff for computed keys when the old value was populated
but the new value wasn't - this behaviour is actually being depended on
by users who exploit it to use zero values in modules. This
should allow both scenarios to co-exist by shifting the NewComputed
exemption over to exempting values that come from diff customization.
This reverts one of the changes from 6a4f7b0, which broke empty strings
being seen as unset for computed values.
This breaks a number of other tests, and is only an intermediate change
for evaluating other solutions.
This case should be expected to fail with the current diff algorithm,
but the existing behavior was widely relied upon so we need to roll this
back until there is a representable nil value.
The CustomizeDiff functionality in helper/schema is powerful, but directly
writing single CustomizeDiff functions can obscure the intent when a
number of different, orthogonal diff-customization behaviors are required.
This new library provides some building blocks that aim to allow a more
declarative form of CustomizeDiff implementation, by composing a number of
smaller operations. For example:
&schema.Resource{
    // ...
    CustomizeDiff: customdiff.All(
        customdiff.ValidateChange("size", func(old, new, meta interface{}) error {
            // If we are increasing "size" then the new value must be
            // a multiple of the old value.
            if new.(int) <= old.(int) {
                return nil
            }
            if (new.(int) % old.(int)) != 0 {
                return fmt.Errorf("new size value must be an integer multiple of old value %d", old.(int))
            }
            return nil
        }),
        customdiff.ForceNewIfChange("size", func(old, new, meta interface{}) bool {
            // "size" can only increase in-place, so we must create a new resource
            // if it is decreased.
            return new.(int) < old.(int)
        }),
        customdiff.ComputedIf("version_id", func(d *schema.ResourceDiff, meta interface{}) bool {
            // Any change to "content" causes a new "version_id" to be allocated.
            return d.HasChange("content")
        }),
    ),
}
The goal is to allow the various separate operations to be quickly seen
and to ensure that each of them runs independently of the others. These
functions all create closures on the call parameters, so the result is
still just a normal CustomizeDiffFunc and so the helpers in this package
can be combined with hand-written functions as needed.
As we get more experience writing CustomizeDiff functions we may wish to
expand the repertoire of functions here in future; this initial set
attempts to cover some common cases we've seen so far. We may also
investigate some helper functions that are entirely declarative and so
don't take callback functions at all, but want to learn what the relevant
use-cases are before going in too deep here.
Looks like while we were checking errors correctly when ExpectError was
set, we weren't checking for the *absence* of an error, which should
be checked as well (no error is still not the error we are looking for).
Added a few more tests for ExpectError as well.
Validation is the best time to return detailed diagnostics
to the user since we're much more likely to have source
location information, etc than we are in later operations.
This change doesn't actually add any detail to the messages
yet, but it changes the interface so that we can gradually
introduce more detailed diagnostics over time.
While here there are some minor adjustments to some of the
messages to improve their consistency with terminology we
use elsewhere.
StringMatch returns a validation function that can be used to match a
string against a regular expression. This can be used for simple
substring validations or more complex validation scenarios. Optionally,
a custom error message can be supplied so that the user receives
something more helpful than a raw regular expression that they might
not be able to understand.
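Example provider usage (a hedged sketch; the pattern and message are
illustrative):
```go
ValidateFunc: validation.StringMatch(
    regexp.MustCompile(`^[a-z][a-z0-9-]*$`),
    "must start with a lowercase letter and may contain only lowercase letters, digits, and hyphens",
),
```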
There are situations where one may need to write to a set, list, or map
more than once per single TF operation (apply/refresh/etc). In these
cases, further writes using Set (example: d.Set("some_set", newSet))
currently create unstable results in the set writer (the name of the
writer layer that holds the data set by these calls) because old keys
are not being cleared out first.
This bug is most visible when using sets. Example: First write to set
writes elements that have been hashed at 10 and 20, and the second write
writes elements that have been hashed at 30 and 40. While the set length
has been correctly set at 2, a set is basically a map (as is the
entire map writer) and map iteration order is non-deterministic, so
reads of this set will now deliver unstable results:
sometimes you may correctly get 30 and 40, but sometimes you may get
10 and 20, or even 10 and 30, etc.
This problem propagates to state which is even more damaging as unstable
results are set to state where they become part of the permanent data
set going forward.
The problem also applies to lists and maps. This is probably more of an
issue with maps as a map can contain any key/value combination and hence
there is no predictable pattern where keys would be overwritten with
default or zero values. Complex lists have this problem as well, but
since lists are deterministic and the length of a list properly gets
updated during the overwrite, the problem is masked by the fact that a
read will only read to the boundary of the list, skipping any bad data
that may still be present past the list boundary.
This update clears the child contents of any set, list, or map before
beginning a new write to address this issue. Tests are included for all
three data types.
This keeps CustomizeDiff from being defined on data sources, where it
would be useless. We just catch this in InternalValidate like the rest
of the CRUD functions that are not used in data sources.
Added some more detailed comments to CustomizeDiff's comments. The new
comments detail how CustomizeDiff will be called in the event of
different scenarios like creating a new resource, diffing an existing
one, diffing an existing resource that has a change that requires a new
resource, and destroy/tainted resources.
Also added similar detail to ForceNew in ResourceDiff.
This should help mitigate any confusion that may come up when using
CustomizeDiff, especially in the ForceNew scenario when the second run
happens with no state.
setDiff does not make use of its new parameter anymore, so it has been
removed. Also, as there is no more SetDiff (exported) function, mentions
of that have been removed from comments too.
This fixes nil pointer issues that could come up if an invalid key was
referenced (ie: not one in the schema). Also ships a helper validation
function to streamline things.
Restoring the naming of this field in the resource back to
CustomizeDiff, as this is generally more descriptive of the process
that's happening, despite the lengthy name.
The old comments said that this interface was API compatible with
terraform.ResourceProvider's Diff method - with the addition of passing
down meta to it, this is no longer the case.
Not too sure if this is really a big deal - schema.Resource never fully
implemented terraform.ResourceProvider, from what I can see, and the
path from Provider.Diff to Resource.Diff is still pretty clear. Just
wanted to remove an outdated comment.
This could panic if we sent a parent that was actually longer than the
child (should almost never come up, but the guard will make it safe
anyway).
Also fixed a slice style lint warning in UpdatedKeys.
The consensus is that it's a generally better idea to move setting the
functionality of old values completely to the refresh/read process,
hence it's moot to have this function around anymore. This also means we
don't need the old value reader/writer anymore, which simplifies things.
When working on this initially, I think I thought that since NewComputed
values in the diff were empty strings, that it was using the zero value.
After review, it doesn't seem like this is the case - so I have adjusted
NewComputed to pass nil values. There is also a guard now that keeps the
new value writer from accepting computed fields with non-nil values.
To keep with the current convention of most other schema.Resource
functional fields being fairly short, CustomizeDiff has been changed to
"Review". It would be "Diff", however it is already used by existing
functions in schema.Provider and schema.Resource.
Neither Destroy nor DestroyDeposed is propagated down the diff stack,
meaning that there is no way we can tell at this point if an instance is
being destroyed or deposed, so this check would never be used.
In this regard, Destroy never runs a diff down the stack at all, and a
deposition check is not run until *after* the provider's diff function
is called. To answer this question and close it off, we could either
determine if a resource is deposed earlier, and propagate that down, or
treat deposed resources like full destroy nodes, and not diff them at
all (but rather making a diff with the only thing in it being
DestroyDeposed flagged).
Added a few more test cases for CustomizeDiff, caught another error in
the process. I think this is ready for review now, and possibly some
real-world testing of the waters by way of porting some resources that
would benefit from the feature.
It's alive! CustomizeDiff logic now has been inserted into the diff
process. The test_resource_with_custom_diff resource provides some basic
testing and a reference implementation.
There should now be plenty of test coverage for this feature via the
tests added for ResourceDiff, and the basic test added to the
schemaMap.Diff test, and the test resource, but more can be added to
test any specific case that comes up otherwise.
As the interface has been specced out for ResourceDiff, the only diff
operation that can function on a non-computed key is ForceNew, and that
is only if a diff exists for that key. As such, allowing the user to
clear keys that cannot be operated on afterwards does not really make
much sense.
Will re-add this function if it is determined it's needed after all on
review.
Final set of coverage for ResourceDiff and bug fixes to correct issues
the test cases brought up.
Will be dropping ClearAll in next commit, as with how this interface is
intended to be used, it does not really make much sense.
This provides a deep copy of a schemaMap, which will be needed for the
diff customization process as ResourceDiff will be able to flag fields
as ForceNew - we don't want to affect the source schema.
In diffString, removed values are marked if the old value is non-nil and
the new value is nil, regardless of whether the new value is marked as a
computed value. With the new diff override behaviour, this can lead to
issues where a nil attribute may get inserted into the diff if the new
value has been marked as computed (as the value will be marked as
missing, but computed).
This propagates into finalizeDiff in some respects because even if
NewComputed is manually set in the diff that gets passed in, it is
ignored, and the schema becomes the only source of truth. Since our new
diff behaviour is mainly designed to be supported on computed keys only,
it stands to reason that there will be a time when we want to set a
diff as NewComputed on a key, and see that change in the diff.
These two small changes makes that happen. No regressions in tests have
been observed via this change, so it seems safe and non-invasive.
The beginning of tests for SetNew. Noticed that removed values are not
logged for computed keys in a diff - not too sure if that is going to be
a problem (possibly not as GetChange in ResourceData should still
reference state for the old value). I came on this most notably in the
"TestSetNew/complex-ish_set_diff" test, for reference.
More tests pending for other functions for coverage of other functions
and test cases as defined in #8769.
This ensures that when we hook this into the main diff logic, that we
can just re-diff the keys that are modified, ensuring that the diff is
not re-run on keys that were not touched, or on keys that were
intentionally removed.
This should complete the feature set of the prototype. This function
removes a specific key from the existing diff, preventing conflicts.
Further functionality might be needed to make this behave as expected,
namely ensuring that we don't re-process the whole diff after the
CustomizeDiffFunc runs.
This new prototype removes the dependence on an underlying ResourceData,
moving several items up to ensure an approach that more matches
ResourceData. Namely, we create our own MultiLevelFieldReader that
also layers updated diff values on top. New values use a small
re-implementation of the MapFieldWriter/Reader that allows computed
values to be set.
The assumption here now is that a second diff will happen after the
first one is done, processing the new values set in the 2 new
reader/writer levels and updating the diff as necessary.
This adds a new object, ResourceDiff, to the schema package. This
object, in conjunction with a function defined in CustomizeDiff in the
resource schema, allows for the in-flight customization of a Terraform
diff. This helps support use cases such as when there are necessary
changes to a resource that cannot be detected in config, such as via
computed fields (most of the utility in this object works on computed
fields only). It also allows for a wholesale wipe of the diff to allow
for diff logic completely offloaded to an external API, if it is a
better use case for a specific provider.
As part of this work, many internal diff functions have been moved to
use a special resourceDiffer interface, to allow for shared
functionality between ResourceDiff and ResourceData. This may be
extended to the DiffSuppressFunc as well which would restrict use of
ResourceData in DiffSuppressFunc to generally read-only fields.
This work is not yet in its final state - CustomizeDiff is not yet
implemented in the general diff workflow, new functions may be added
(notably Clear() for a single key), and functionality may be altered.
Tests will follow as well.
Update the command package to use the new module storage. Move the old
command output strings into the module storage itself. This could be
moved back later either by using ui callbacks, or designing a module
storage interface once we know what the final requirements will look
like.
In order to parse provider, resource and data source configuration from
HCL2 config files, we need to know the relevant configuration schema.
This new method allows Terraform Core to request these from a provider.
This is a breaking change to this interface, so all of its implementers
in this package are updated too. This includes concrete implementations
of the new method in helper/schema that use the schema conversion code
added in an earlier commit to produce a configschema.Block automatically.
Plugins compiled against prior versions of helper/schema will not have
support for this method, and so calls to them will fail. Callers of
this new method will therefore need to sniff for support using the
SchemaAvailable field added to both ResourceType and DataSource.
This careful handling will need to persist until next time we increment
the plugin protocol version, at which point we can make the breaking
change of requiring this information to be available.
Uses Levenshtein distance to decide if the input is similar enough to one
of the given suggestions, and returns that suggestion if so.
The distance threshold of three was arrived at experimentally, and has
no objective basis.
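A hedged usage sketch, assuming a signature along the lines of
`NameSuggestion(given string, suggestions []string) string` that returns an
empty string when nothing is within the distance threshold:
```go
// NameSuggestion and the names used here are assumptions for illustration.
if suggestion := NameSuggestion("istance", []string{"instance", "image"}); suggestion != "" {
    fmt.Printf("Unknown name; did you mean %q?\n", suggestion)
}
```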
We don't currently have any need for this information, but we're
propagating it out of helper/schema here pre-emptively so that once we
later have a use for it we will not need to rebuild the providers to gain
access to it.
The long-term expected use-case for this is to have Terraform Core use
static analysis techniques to trace the path of sensitive data through
interpolations so that intermediate results can be flagged as sensitive
too, but we have a lot more work to do before such a thing would actually
be possible.
As part of moving to the next-generation HCL implementation,
Terraform Core is getting its own representation of configuration schema
that is tailored for configuration-processing use-cases. The capabilities
of this are a subset of the helper/schema model primarily concerned with
the configuration structure and value types, leaving detailed validation
and defaults for helper/schema to still solve.
These new methods allow mechanical creation of a schema in the new Core
schema model from a schema expressed in the helper/schema model. This is
not yet used as of this commit, but will be used later to implement some
new ResourceProvider methods that will allow core to obtain the schema
for provider, resource and data source configuration while remaining
source-compatible with existing provider implementations.
This adds NoZeroValues, a small SchemaValidateFunc that checks that a
defined value is not a zero value. It's useful for situations where you
want to keep someone from explicitly entering a zero value (ie:
literally "0", or an empty string) on a required field, and want to
catch it in the validation stage, versus during apply using GetOk.
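Example provider usage (a hedged sketch; the attribute name is hypothetical):
```go
"name": {
    Type:     schema.TypeString,
    Required: true,
    // Rejects the type's zero value (here, "") at validation time.
    ValidateFunc: validation.NoZeroValues,
},
```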
Add an ImportStateIdFunc field to the ImportState testing functionality.
This will allow for more powerful generation of complex import state IDs
that can't be accomplished by ImportStateId or ImportStateIdPrefix
themselves.
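A hedged sketch of a TestStep using the new field (the resource address and
composed ID format are hypothetical):
```go
resource.TestStep{
    ResourceName: "example_thing.foo",
    ImportState:  true,
    ImportStateIdFunc: func(s *terraform.State) (string, error) {
        rs, ok := s.RootModule().Resources["example_thing.foo"]
        if !ok {
            return "", fmt.Errorf("example_thing.foo not found in state")
        }
        // Compose an import ID from several attributes, which a static
        // ImportStateId or ImportStateIdPrefix cannot express.
        return fmt.Sprintf("%s/%s", rs.Primary.Attributes["cluster"], rs.Primary.ID), nil
    },
},
```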
Go 1.9 adds this new function which, when called, marks the caller as
being a "helper function". Helper function stack frames are then skipped
when trying to find a line of test code to blame for a test failure, so
that the code in the main test function appears in the test failure output
rather than a line within the helper function itself.
This covers many -- but probably not all -- of our test helpers across
various packages.
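A minimal sketch of the pattern (the helper itself is hypothetical):
```go
func testCheckEqual(t *testing.T, got, want string) {
    t.Helper() // failures are attributed to the caller's line, not this one
    if got != want {
        t.Fatalf("wrong value: got %q, want %q", got, want)
    }
}
```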
Add the ValidateListUniqueStrings function, which is a ValidateFunc that
ensures a list has no duplicate items in it. It's useful for when a list
is needed over a set because order matters, yet the items still need to
be unique.
This adds a ValidateRegexp validation helper to check to see if a field
has a valid regular expression.
Includes test for good regexp, and test for bad regexp.
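Example provider usage (a hedged sketch; the attribute name is hypothetical):
```go
"name_pattern": {
    Type:     schema.TypeString,
    Optional: true,
    // Validates that the supplied value compiles as a regular expression.
    ValidateFunc: validation.ValidateRegexp,
},
```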
Equality of schema.Sets gets tricky when dealing with nested sets -
Set.Equal only superficially compares the underlying maps and hence any
sets nested under the root sets cause issues.
This adds a simple method, HashEqual, that does a top-level hash
comparison, helping to work around this without any complex re-invention
of things like reflect.DeepEqual.
Of course, in order to make effective use of this function, the user
needs to make sure they are properly hashing their nested sets, however
this is trivial with things like HashResource.
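A hedged usage sketch (the variable names are hypothetical):
```go
// oldSet and newSet are *schema.Set values that may contain nested sets.
if !oldSet.HashEqual(newSet) {
    // The top-level hashes differ, so treat the sets as changed.
}
```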
Adds `GetOkRaw` as a schema function. This should only be used to verify
boolean attributes are either set or not set, regardless of the zero
value for their type. There are a few small use cases outside of the boolean
type where this will be helpful as well.
Overall, this shouldn't detract from the zero-value checks that `GetOk()`
currently has, and should only be used when absolutely needed. However,
there are enough use-cases for this addition without checking for the
zero-value of the type, that this is needed.
Primary use case is for a boolean attribute that is `Optional` and `Computed`,
without a default value. There's currently no way to verify that the boolean
attribute was explicitly set to the zero-value literal with the current
`GetOk()` function. This new function allows for that check, keeping the
`Computed` check for the returned `exists` boolean.
```
$ make test TEST=./helper/schema TESTARGS="-run=TestResourceDataGetOkRaw"
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/08/02 11:17:32 Generated command/internal_plugin_list.go
go test -i ./helper/schema || exit 1
echo ./helper/schema | \
xargs -t -n4 go test -run=TestResourceDataGetOkRaw -timeout=60s -parallel=4
go test -run=TestResourceDataGetOkRaw -timeout=60s -parallel=4 ./helper/schema
ok github.com/hashicorp/terraform/helper/schema 0.005s
```
The field reader code path is extremely inefficient, but refactoring
it all is much too invasive a change at the moment.
Have DiffFieldReader internally cache results for ReadField.
Provider import tests previously didn't have to supply a config, but
terraform now requires the provider to be declared for discovery.
testProviderConfig returns a stub config with provider blocks based
on the TestCase Providers. This allows basic import tests in providers
to remain unchanged.
The Close methods on shadow.Values require pointer receivers because
they contain a sync.Mutex, but that value was being copied through
Value.Interface by the closeWalker. Because reflectwalk passes the
struct fields to the StructField method as they are defined in the
struct, and they may have been read as a value, we can't immediately
call Interface() to check the method set without possibly copying the
internal mutex values. Use the Implements method to first check if we
need to call Interface, and if it's not, then we can check if the value
is addressable.
Because of this use of reflection, we can't vet for the copying of these
locks. The minimal amount of code in the Close method left us only with
a race detected within the mutex itself, which leads to a stacktrace
pointing to the runtime rather than our code.
It turns out that `d.GetOk` also returns `false` when the user _did_ actually supply a value for it in the config, but the value itself needs to be evaluated before it can be used.
So instead of passing a `ResourceData` we now pass a `ResourceConfig`
which makes much more sense for doing the validation anyway.
The timestamp prefix added in #8249 was removed in #10152 to ensure that
returned IDs really are properly ordered. However, this meant that IDs were no
longer ordered over multiple invocations of terraform, which was the main
motivation for adding the timestamp in the first place. This commit does a
hybrid: timestamp-plus-incrementing-counter instead of just incrementing counter
or timestamp-plus-random.
Rather than providing an already-resolved map of plugins to core, we now
provide a "provider resolver" which knows how to resolve a set of provider
dependencies, to be determined later, and produce that map.
This requires the context to be instantiated in a different way, so this
very noisy diff is a mostly-mechanical update of all of the existing
places where contexts get created for testing, using some adapted versions
of the pre-existing utilities for passing in mock providers.
Previously having a config was mutually exclusive with running an import,
but we need to provide a config so that the provider is declared, or else
we can't actually complete the import in the future world where providers
are installed dynamically based on their declarations.
* provider/aws: Add Sweeper setup, Sweepers for DB Option Group, Key Pair
* provider/google: Add sweeper for any leaked databases
* more recursion and added LC sweeper, to test out the Dependency path
* implement a dependency example
* implement sweep-run flag to filter runs
* stub a test for TestMain
* test for multiple -sweep-run list
GH-14784 allowed nested structures to be validated, rather than relying
on the raw value. However this still returns the same validation error
if the structures contain a computed value, since Get will return the
raw string in that case.
This simply skips the validation in the IsComputed case, since there's
nothing that can be checked.
When interpreting a nested object, we were validating against the "raw"
value, and not the interpolated value, causing incorrect errors.
This affects structures such as:
```tf
tags = "${list(map("foo", "bar"))}"
```
Prior to this, this would produce a complaint of "expected object, got
string", since the raw value is obviously a string, even though the
interpolated value has the correct shape.
The tests did pass, but that was because they only tested part of the changes. By using the `schema.TestResourceDataRaw` function the schema and config are better tested and so they pointed out a problem with the schema of the Chef provisioner.
The `Elem` fields did not have a `*schema.Schema` but a `schema.Schema` and in an `Elem` schema only the `Type` field may (and must) be set. Any other fields like `Optional` are not allowed here.
Next to fixing that problem I also did a little refactoring and cleaning up. Mainly making the `ProvisionerS` private (`provisioner`) and removing the deprecated fields.
1. Migrate `chef` provisioner to `schema.Provisioner`:
* The `chef.Provisioner` structure was renamed to `ProvisionerS` and is now decoded from `schema.ResourceData` instead of `terraform.ResourceConfig`, using a simple copy-paste-based solution;
* Added simple schema without any validation yet.
2. Support a `ValidateFunc` validation function: implemented in the `file` and `chef` provisioners.
Besides the support for DO certificates themselves, this commit also
includes:
1) A new `RandTLSCert` function that generates a valid, self-signed TLS
certificate to be used in the test
2) A fix for the PEM encoding of the private key generated in
`RandSSHKeyPair`: the PEM was always empty
If a schema.TypeList had a Schema with ForceNew, and if that list was
NewComputed, the diff would not have RequiresNew set. This causes apply
to fail when the diffs didn't match because of the change to
RequiresNew.
Set the RequiresNew field on the list's ResourceAttrDiff based on the
Schema value.
stringer has changed the boilerplate it generates in a recent version.
We'd previously updated to the new format but accidentally rolled back
to the old while merging a long-running feature branch.
This restores us back to the new format again.
A couple tests require lowering the grace period to keep the test from
taking the full 30s timeout.
The Retry_hang test also needed to be removed from the Parallel group,
because it modifies the global refreshGracePeriod variable.
Refresh calls may have side effects that need to be recorded if it
succeeds, which is especially common when WaitForState is called from
resource.Retry.
If the WaitForState timeout is reached and there is a Refresh call
in-flight, wait up to refreshGracePeriod (set to 30s) for it to
complete.
This test unfortunately relies on the timing of the loops in
WaitForState, and the text of the error message. Adjust the timing so
the timeout isn't an even multiple of the poll interval, and make sure
we reach a minimum number of retries.
Make sure that we can cancel the WaitForState refresh loop when reaching
a timeout, otherwise it may run indefinitely. There's no need to try and
store and read the Result concurrently, just pass the value over a
channel.
* Revert #11245, #11321, #11498 and #11757
These PRs are all related to issue #11170, for which I would like to propose a different solution than the one currently implemented.
* A different approach to solve #11170
This approach has (IMHO) a few advantages with regards to the solution currently implemented. I will elaborate on this in the PR.
* provider/aws: Add failing test for EMR Bootstrap Actions
* aws_emr_cluster: Fix bootstrap action parameter ordering
* provider/aws: Fix EMR Bootstrap arguments
* provider/aws: Args needs to be ForceNew, because we can't update them
Discussion in #9512 revealed that some of the comments here were
inaccurate, and that they did not paint a complete enough
picture of the behavior and expectations of Default and DefaultFunc.
This is a comments-only change that aims to clarify the situation and
call attention to the fact that the defaults only affect the handling of
the configuration and that changes to defaults may require migration of
existing resource states.
This closes #9512.
Adds the `ImportStateIdPrefix` field for import acceptance tests. There are (albeit fairly rare) import cases where a resource needs to be imported with a combination of the resource's ID and a known string prefix. This allows the developer to specify the known prefix, and omit the `ImportStateId` field.
```
$ make test TEST=./helper/resource TESTARGS="-run=TestTest_importStateIdPrefix"
==> Checking that code complies with gofmt requirements...
==> Checking AWS provider for unchecked errors...
==> NOTE: at this time we only look for uncheck errors in the AWS package
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/03/30 18:08:36 Generated command/internal_plugin_list.go
go test -i ./helper/resource || exit 1
echo ./helper/resource | \
xargs -t -n4 go test -run=TestTest_importStateIdPrefix -timeout=60s -parallel=4
go test -run=TestTest_importStateIdPrefix -timeout=60s -parallel=4 ./helper/resource
ok github.com/hashicorp/terraform/helper/resource 0.025s
```