Provider tests often rely on checking values contained within sets by
directly accessing their flatmapped representation. In order to provide
the test harness with the expected set hashes, the sets must be
generated by the schema.Resource itself.
During the test we now build a fixed map of the providers, which should
only contain schema.Provider instances, and pass them into each
TestStep. The individual schema.Resource instances can then be pulled
from the providers, and used to recreate the state from the cty.Value
returned by the core operations.
Stricter type handling in the new shims may add empty containers into
the state where they were previously elided. Since the detection of
missing and empty containers in the legacy state was never reliable,
allow TestCheckNoResourceAttr to succeed if the key is a container count
index, and the value is "0".
Missing containers were often erroneously kept in the state, but since
the addition of the new provider shims, they can often be correctly
eliminated. There are however many tests that check for a "0" count in
the flatmap state when there shouldn't be a key at all. This addition
looks for a container count key and "0" pair, and allows for the key to
be missing.
There may be some tests negatively affected by this which were
legitimately checking for empty containers, but those were also not
reliably detected before, and far fewer tests should be involved.
Zero values and empty containers can be lost during the shimming
process, and during the provider's Apply step.
If we have known zero value containers and primitives in the source,
which appear as null values in the destination, we copy over the zero
value. Sets (and lists to an extent) are more difficult, since the
before and after indexes may not correlate. In that case we take the
entire container if it's wholly known, expecting the provider to have
correctly handled the value.
Due to incorrect use of a loop iterator variable inside a closure, all of
the given providers were ending up with the same factory function.
Now we copy the factory function to a local within the loop first so that
each iteration has its own variable.
This is the second round of similar bugs in this function, so we'll also
add a test case for it to reduce the risk of future regressions given that
most real callers don't exercise this with multiple providers in practice.
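A minimal sketch of the pattern and the fix (hypothetical names; the real code wraps provider factory functions rather than plain string-returning closures):

```go
package main

import "fmt"

func main() {
	// Stand-ins for provider factory functions, keyed by provider name.
	factories := map[string]func() string{
		"aws":    func() string { return "aws-provider" },
		"google": func() string { return "google-provider" },
	}

	wrapped := map[string]func() string{}
	for name, factory := range factories {
		factory := factory // copy the iteration variable so each closure captures its own value
		wrapped[name] = func() string { return factory() }
	}

	// Without the copy above, every closure would call whichever factory the
	// loop variable held last, so all entries would share one factory.
	for name, f := range wrapped {
		fmt.Println(name, "->", f())
	}
}
```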
We use a shim to convert from the new state model back to the old because
the provider test API is still using the old API throughout. However, the
shim was not preserving the schema version recorded in the new-style state
and so a round-trip through this shim would cause the schema versions to
all revert to zero.
This can cause trouble with the destroy phase of provider tests because
(for legacy API reasons) we round-trip from old state back to new again
before the destroy phase, causing the providers to try to upgrade from
state version zero even though the data was already at the latest
version. This can cause errors because state upgrades are generally not
idempotent.
With the introduction of explicit "null" in 0.12 it's possible for a value
that is unknown during plan to become a known null during apply, so we
need to slightly weaken our validation rules to accommodate that, in
particular skipping the validation of conflicting attributes if the result
could potentially be valid after the unknown values become known.
This change is in the codepath that is common to both 0.12 and 0.11
callers, but that's safe because 0.11 re-runs validation during the apply
step and so will still catch problems here, albeit in the apply step
rather than in the plan step, thus matching the 0.12 behavior. This new
behavior is a superset of the old in the sense that everything that was
valid before is still valid.
The implementation here also causes us to skip all other validation for
an attribute whose value is unknown. Most of the downstream validation
functions handle this directly anyway, but again this doesn't add any new
failure cases, and should clean up some of the rough edges we've seen with
unknown values in 0.11 once people upgrade to 0.12-compatible providers.
Any issues we now short-circuit during planning will still be caught
during apply.
While working on this I found that the existing "Not a list" test was not
actually testing the correct behavior, so this also includes a tweak to
that to ensure that it really is checking the "should be a list" path
rather than the "cannot be set" codepath it was inadvertently testing
before.
This causes the output to include additional helpful context such as
the values of variables referenced in the config, etc. The output is in
the same format as normal Terraform CLI error output, though we don't
retain a source code cache in this codepath so it will not include a
source code snippet.
Previously the test harness was preloading schemas from the providers
before running any test steps.
Since terraform.NewContext already deals with loading provider schemas,
we can instead just use the schemas it loaded for our shimming needs,
avoiding the need to reimplement the schema lookup behavior and thus
the need to create a throwaway provider instance with which to do it.
Previously we were running the factory function only once when
constructing the provider resolver, which means that all contexts created
from that resolver share the same provider instance.
Instead now we will call the given factory function once for each
instantiation, ensuring that each caller ends up with a separate object
as would be the case in real-world use.
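A rough illustration of the difference, using generic stand-in types rather than the actual resolver code:

```go
package main

import "fmt"

type provider struct{ id int }

func main() {
	calls := 0
	factory := func() *provider {
		calls++
		return &provider{id: calls}
	}

	// Old behavior: the factory ran once up front, so every context created
	// from the resolver received the same instance.
	shared := factory()
	first, second := shared, shared
	fmt.Println(first == second) // true: one object reused everywhere

	// New behavior: invoke the factory at each instantiation so every caller
	// gets a separate object, as in real-world use.
	a, b := factory(), factory()
	fmt.Println(a == b) // false: distinct instances
}
```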
The added test in this commit, without the fix, will make d.Set return
the following error:
`Invalid address to set: []string{"ports", "0", "set"}`
This was due to the fact that setSet in field_writer_map tried to
convert a slice into a set by creating a temp set schema and calling
writeField on that with the address (`[]string{"ports", "0", "set"}` in
this case). However, the temp schema was only for the set and not the
whole schema as seen in the address, so the address should have been
`[]string{"set"}` in order to align with the schema.
This commit adds another variable (tempAddr) which contains only the
last entry of the address (the set key), so that it matches the created
schema.
This commit potentially fixes the problem described in #16331.
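A hypothetical schema shape that reproduces the error; calling d.Set("ports", ...) against it writes through the address `[]string{"ports", "0", "set"}`:

```go
package example

import "github.com/hashicorp/terraform/helper/schema"

// portsSchema is illustrative only; the field names are not taken from the
// actual test added in this commit.
func portsSchema() map[string]*schema.Schema {
	return map[string]*schema.Schema{
		"ports": {
			Type:     schema.TypeList,
			Optional: true,
			Elem: &schema.Resource{
				Schema: map[string]*schema.Schema{
					"set": {
						Type:     schema.TypeSet,
						Optional: true,
						Elem:     &schema.Schema{Type: schema.TypeInt},
					},
				},
			},
		},
	}
}
```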
Any state modifying functions can only be run once during the plan-apply
cycle. When regenerating the Diff during ApplyResourceChange, strip out
all StateFunc and CustomizeDiff functions from the schema.
The NewExtra diff field was where config data that was modified by a
StateFunc was stored, and needs to be maintained between plan and apply.
During PlanResourceChange, store any NewExtra data from the Diff in the
PlannedPrivate data, and re-insert the NewExtra data into the Diff
generated during ApplyResourceChange.
Errors were being ignored with the intention that they would be caught
later in validation, but it turns out we need to catch those earlier.
The legacy schemas also allowed providers to set an empty string for a
bool value, which we need to handle here, since it's not being handled
from user input like a normal config value.
The rest of Terraform is still using uint64 for this in various spots, but
we'll update that gradually later. We use int64 here because that matches
what's used in our protobuf definition, and unsigned integers are not
portable across all of the protobuf target languages anyway.
When normalizing flatmapped containers, compare the attributes to the
prior state and preserve pre-existing zero-length or unknown values. A
zero-length value that was previously unknown is preserved as a
zero-length value, as that may have been computed as such by the
provider.
Since the SDK's schema system conflates attributes and nested blocks, it's
possible to state some nonsensical schema situations such as:
- A nested block is optional but has MinItems > 0
- A nested block is entirely computed but has MinItems or MaxItems set
Both of these weird situations are handled here in the same way that the
existing helper/schema validation code would've handled them: by
effectively disabling the MinItems/MaxItems checks where they would've
been ignored before.
The SDK has a mechanism that effectively makes it possible to declare an
attribute as being _conditionally_ required, which is not a concept that
Terraform Core is aware of.
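In SDK terms the mechanism is, roughly, an attribute declared Required whose value can also be produced by a DefaultFunc, along these lines (attribute and environment variable names are hypothetical):

```go
package example

import "github.com/hashicorp/terraform/helper/schema"

func providerSchema() map[string]*schema.Schema {
	return map[string]*schema.Schema{
		"region": {
			Type:     schema.TypeString,
			Required: true,
			// When EXAMPLE_REGION is set, the value can be generated at
			// runtime, so the attribute is only "conditionally" required.
			DefaultFunc: schema.EnvDefaultFunc("EXAMPLE_REGION", nil),
		},
	}
}
```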
Since this mechanism is in practice only used for a small UX improvement
in prompting for these values interactively when the environment variable
is not set, we avoid here introducing all of this complexity into the
plugin protocol by just having the provider selectively modify its schema
if it detects that such an attribute might be set dynamically.
This then prevents Terraform Core from validating the presence of the
argument or prompting for a new value for it, allowing the null value to
pass through into the provider so that the default value can be generated
again dynamically.
This is a kinda-kludgey solution which we're accepting here because the
alternative would be a much-more-complex two-pass decode operation within
Core itself, and that doesn't seem worth it.
This fixes #19139.
The main significant change here is that the package name for the proto
definition is "tfplugin5", which is important because this name is part
of the wire protocol for references to types defined in our package.
Along with that, we also move the generated package into "internal" to
make it explicit that importing the generated Go package from elsewhere is
not the right approach for externally-implemented SDKs, which should
instead vendor the proto definition they are using and generate their
own stubs to ensure that the wire protocol is the only hard dependency
between Terraform Core and plugins.
After this is merged, any provider binaries built against our
helper/schema package will need to be rebuilt so that they use the new
"tfplugin5" package name instead of "proto".
In a future commit we will include more elaborate and organized
documentation on how an external codebase might make use of our RPC
interface definition to implement an SDK, but the primary concern here
is to ensure we have the right wire package name before release.
In order to prevent mismatched states between read/plan/apply, we need
to ensure that the attributes are generated consistently each time.
Because of the various ways in which helper/schema and the hcl2 shims
interpret empty values, the only way to ensure consistency is to always
remove them altogether.
This makes sure the diff is generated with the matching set ids from
helper/schema.
Update the tests to add ID fields to the state, which will exist in
practice, since any state passing through the shims will have the ID
inserted.
helper/schema will remove "timeouts" from the config, and stash them in
the diff.Meta map. Terraform sees "timeouts" as a regular config block,
so needs them to be present in the state in order to not show a diff.
Have the GRPCProviderServer shim copy all timeout values into any state
it returns to provide consistent diffs in core.
Resource timeouts were a separate config block, but did not exist in the
resource schema. Insert any defined timeouts when generating the
configschema.Block so that the fields can be accepted and validated by
core.
`Any()` returns a `SchemaValidateFunc` that passes if any one of the given `SchemaValidateFunc` passes. This covers cases where no single standard validation function provides the needed behavior, or where it can make error messaging simpler.
Example provider usage:
```go
ValidateFunc: validation.Any(
validation.IntAtLeast(42),
validation.IntAtMost(5),
),
```
`All()` combines multiple `SchemaValidateFunc` so that every one must pass, reducing the need for custom validation functions that simply chain standard ones.
Example provider usage:
```go
ValidateFunc: validation.All(
StringLenBetween(5, 42),
StringMatch(regexp.MustCompile(`[a-zA-Z0-9]+`), "value must be alphanumeric"),
),
```
`IntInSlice()` is the `int` equivalent of `StringInSlice()`.
Example provider usage:
```go
ValidateFunc: validation.IntInSlice([]int{30, 60, 120})
```
Output from unit testing:
```
$ make test TEST=./helper/validation
==> Checking that code complies with gofmt requirements...
go generate ./...
2018/10/17 14:16:03 Generated command/internal_plugin_list.go
go list ./helper/validation | xargs -t -n4 go test -timeout=2m -parallel=4
go test -timeout=2m -parallel=4 github.com/hashicorp/terraform/helper/validation
ok github.com/hashicorp/terraform/helper/validation 1.106s
```
The helper/resource unit tests will panic, because they were using the
legacy terraform.MockResourceProvider, which doesn't have the same
internals required by the new GRPC shims.
Fail these tests for now, and a new test provider will need to be made
out of a schema.Provider instance.
Use the new SimpleDiff method of the provider so that the diff isn't
altered by ForceNew attributes.
Always set an "id" as RequiresReplace so core knows an instance will be
replaced, even if all ForceNew attributes are filtered out due to
ignore_changes.
Terraform now handles any actual "diffing" of resource, and the existing
Diff functions are only used to shim the schema.Provider to the new
methods. Since terraform is handling what used to be the Diff, the
provider now should not modify the diff based on RequiresNew due to it
interfering with the ignore_changes handling.
While the schema Diff function returns a nil diff when creating an empty
(except for id) resource, the Apply function expects the diff to be
initialized and empty.
PlanResourceChange isn't returning the diff, but rather the desired
state. If the proposed state results in a nil diff, then the proposed
state is what should be returned.
Make sure Meta fields are not nil, as the schema package expects those
to be initialized.
An earlier change introduced a new function testConfig to the main code
for this package, which conflicted with a function of the same name in
the test code.
Here we rename the function from the test code, allowing for the more
generally-named testConfig to be the one in the main code.
The "id" field is assumed to always exist, and must have a valid value.
Set "id" to unknown when planning new resource changes to indicate that
it will be computed.
Previously we just left these out of the plan altogether, but in the new
plan types we intentionally include change information for every resource
instance, even if no changes are actually planned, to allow alternative
plan file viewers to show what isn't changing as well as what is.
Managing which functions need to be shared between the terraform plugin
and the helper plugin without creating cycles was becoming difficult.
Move all functions related to converting between terraform and proto
types into plugin/convert.
Due to how often the state and plan types are referenced throughout
Terraform, there isn't a great way to switch them out gradually. As a
consequence, this huge commit gets us from the old world to a _compilable_
new world, but still has a large number of known test failures due to
key functionality being stubbed out.
The stubs here are for anything that interacts with providers, since we
now need to do the follow-up work to similarly replace the old
terraform.ResourceProvider interface with its replacement in the new
"providers" package. That work, along with work to fix the remaining
failing tests, will follow in subsequent commits.
The aim here was to replace all references to terraform.State and its
downstream types with states.State, terraform.Plan with plans.Plan,
state.State with statemgr.State, and switch to the new implementations of
the state and plan file formats. However, due to the number of times those
types are used, this also ended up affecting numerous other parts of core
such as terraform.Hook, the backend.Backend interface, and most of the CLI
commands.
Just as with 5861dbf3fc49b19587a31816eb06f511ab861bb4 before, I apologize
in advance to the person who inevitably just found this huge commit while
spelunking through the commit history.
The new helper/plugin package contains the gRPC servers for handling the
new plugin protocol.
The GRPCProviderServer and GRPCProvisionerServer handle the gRPC plugin
protocol, and convert the requests to the legacy schema.Provider and
schema.Provisioner methods.
In order to not require state migrations to be supported in both
MigrateState and StateUpgraders, the legacy provider codepath needs to
handle the StateUpgraders transparently during Refresh.
This adds some of the required shim functions to the schema package.
While this further bloats the already huge package, adding the helpers
here was significantly less disruptive than refactoring types into
separate packages to prevent import cycles.
The majority of tests here are directly adapted from existing schema
tests to provide as many known good values to the shims as possible.
It turns out that state upgrades need to be handled differently since
providers are going to be backwards compatible. This means that new
state upgrades may still be stored in the flatmap format when used with
terraform 0.11. Because we can't account for the specific version which
could produce a legacy state, all future state upgrades need to record
the schema types for decoding.
Rather than defining a single Upgrade function for states, we now have a
list of functions, each of which handle upgrading a specific version to
the next. In practice this isn't much different from the way many
resources implement upgrades themselves, with a separate function for
each version dispatched from the MigrateState function. The only added
burden is the recording of the schema type, and we intend to supply
tools and helper functions to prevent the need to copy the entire
existing schema in all cases.
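A rough sketch of the resulting shape (resource and field names are hypothetical), with one upgrader per version and the prior schema's type recorded so a legacy flatmap state can still be decoded:

```go
package example

import "github.com/hashicorp/terraform/helper/schema"

func exampleResource() *schema.Resource {
	return &schema.Resource{
		SchemaVersion: 2,
		StateUpgraders: []schema.StateUpgrader{
			{
				Version: 1,
				// Record the version-1 schema's type so an old (possibly
				// flatmap-encoded) state can be decoded before upgrading.
				Type: resourceExampleV1().CoreConfigSchema().ImpliedType(),
				Upgrade: func(rawState map[string]interface{}, meta interface{}) (map[string]interface{}, error) {
					rawState["name"] = rawState["legacy_name"]
					delete(rawState, "legacy_name")
					return rawState, nil
				},
			},
		},
		Schema: map[string]*schema.Schema{
			"name": {Type: schema.TypeString, Optional: true},
		},
	}
}

// resourceExampleV1 preserves the schema as it looked at version 1.
func resourceExampleV1() *schema.Resource {
	return &schema.Resource{
		Schema: map[string]*schema.Schema{
			"legacy_name": {Type: schema.TypeString, Optional: true},
		},
	}
}
```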
This is the provider-side UpgradeState implementation for a particular
resource. This new function will be called to upgrade a saved state with
an old schema version to the current schema.
UpgradeState also requires a record of the last schema and version that
could have been stored as a flatmapped state. If the stored state is in
the legacy flatmap format, this will allow the provider to properly
decode the flatmapped state into the expected structure for the new json
encoded state. If the stored state's version is below that of the
LegacySchema.Version value, it will first be processed by the legacy
MigrateState function.
The update protocol shims will also check for this, but eventually
"id" will only be a normal attribute, and we shouldn't have to special
case this.
When converting a legacy schemaMap to a configschema, we need to add
"id" as a required attribute to top-level resources if it's not
declared.
The "id" field will be required to interoperate with the legacy helper
schema, since the presence of an id was used to indicate the existence
of a resource.
The "config" package is no longer used and will be removed as part
of the 0.12 release cleanup. Since configschema is part of the
"new world" of configuration modelling, it makes more sense for
it to live as a subdirectory of the newer "configs" package.
Due to how deeply the configuration types go into Terraform Core, there
isn't a great way to switch out to HCL2 gradually. As a consequence, this
huge commit gets us from the old state to a _compilable_ new state, but
does not yet attempt to fix any tests and has a number of known missing
parts and bugs. We will continue to iterate on this in forthcoming
commits, heading back towards passing tests and making Terraform
fully-functional again.
The three main goals here are:
- Use the configuration models from the "configs" package instead of the
older models in the "config" package, which is now deprecated and
preserved only to help us write our migration tool.
- Do expression inspection and evaluation using the functionality of the
new "lang" package, instead of the Interpolator type and related
functionality in the main "terraform" package.
- Represent addresses of various objects using types in the addrs package,
rather than hand-constructed strings. This is not critical to support
the above, but was a big help during the implementation of these other
points since it made it much more explicit what kind of address is
expected in each context.
Since our new packages are built to accommodate some future planned
features that are not yet implemented (e.g. the "for_each" argument on
resources, "count"/"for_each" on modules), and since there's still a fair
amount of functionality still using old-style APIs, there is a moderate
amount of shimming here to connect new assumptions with old, hopefully in
a way that makes it easier to find and eliminate these shims later.
I apologize in advance to the person who inevitably just found this huge
commit while spelunking through the commit history.
The new config loader requires some steps to happen in a different
order, particularly in regard to knowing the schema in order to
decode the configuration.
Here we lean directly on the configschema package, rather than
on helper/schema.Backend as before, because it's generally
sufficient for our needs here and this prepares us for the
helper/schema package later moving out into its own repository
to seed a "plugin SDK".
We will need access to this information in order to render interactive
input prompts, and it will also be useful in returning schema information
to external tools such as text editors that have autocomplete-like
functionality.
One of the tests in this package doesn't work when the tests are run as
root - like inside of a Docker container. The test is still useful to
specify `pathorcontents` behavior when a file is not readable, so it's
better to skip it than to delete it. See the linked issue for further
discussion.
Closes #7707
When checking if a ResourceDiff key is valid, allow for keys that exist
on sub-blocks. This allows things like diff.Clear to be called on
sub-block fields, instead of just on top-level fields.
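For example, a CustomizeDiff along these lines (hypothetical block and field names; the nested field is computed so it can be cleared) is now accepted:

```go
package example

import "github.com/hashicorp/terraform/helper/schema"

func exampleResource() *schema.Resource {
	return &schema.Resource{
		Schema: map[string]*schema.Schema{
			"rule": {
				Type:     schema.TypeList,
				Optional: true,
				Elem: &schema.Resource{
					Schema: map[string]*schema.Schema{
						"priority": {Type: schema.TypeInt, Optional: true, Computed: true},
					},
				},
			},
		},
		CustomizeDiff: func(d *schema.ResourceDiff, meta interface{}) error {
			// Clearing a planned change on a field inside a nested block
			// previously failed the key-validity check.
			return d.Clear("rule.0.priority")
		},
	}
}
```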
The `remote` backend config contains an attribute that is defined as a `*schema.Set`, but currently only `string` values are accepted, as the `config` attribute is defined as a `schema.TypeMap`.
Additionally, the `b.Validate()` method wasn't being called, which could allow a panic if an unexpected configuration is passed to `b.Configure()`.
This commit is a bit of a hack to be able to support this in the 0.11 series. The 0.12 series will have proper support, so when merging 0.12 this should be reverted again.
While this initial implementation is a very simple wrapper function, implementing this in the helper/resource package provides some downstream benefits:
* Provides a standard interface for plugin developers to enable parallel acceptance testing
* Existing plugins can simply convert resource.Test to resource.ParallelTest references (as appropriate) to enable the functionality, rather than worrying about additional line(s) to each acceptance test function or TestCase
* Potential enhancements to ParallelTest (e.g. adding an environment variable to skip enabling the behavior) are consistently propagated
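A sketch of the conversion (test name, provider map, and config are placeholder values):

```go
package example

import (
	"testing"

	"github.com/hashicorp/terraform/helper/resource"
	"github.com/hashicorp/terraform/terraform"
)

// Placeholders standing in for a provider's usual acceptance test setup.
var testAccProviders = map[string]terraform.ResourceProvider{}

const testAccExampleThingConfig = `resource "example_thing" "foo" { name = "foo" }`

func TestAccExampleThing_basic(t *testing.T) {
	// ParallelTest behaves like resource.Test but marks the test as safe to
	// run in parallel with other parallel acceptance tests.
	resource.ParallelTest(t, resource.TestCase{
		Providers: testAccProviders,
		Steps: []resource.TestStep{
			{
				Config: testAccExampleThingConfig,
				Check:  resource.TestCheckResourceAttr("example_thing.foo", "name", "foo"),
			},
		},
	})
}
```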
We already had the functionality to make resources deprecated, which was
used when migrating resources to data sources, but the functionality was
unexported, so only the schema package could do it. Now it's exported,
meaning providers can mark entire resources as deprecated. I also added
a test in (hopefully) the right place.
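A minimal sketch of what this enables (resource and message are hypothetical):

```go
package example

import "github.com/hashicorp/terraform/helper/schema"

func resourceExampleThing() *schema.Resource {
	return &schema.Resource{
		// The exported field lets a provider flag the whole resource as deprecated.
		DeprecationMessage: "example_thing is deprecated; use example_other_thing instead",
		Schema: map[string]*schema.Schema{
			"name": {Type: schema.TypeString, Required: true},
		},
	}
}
```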
This adds the Taint field to the acceptance testing framework, allowing
the ability to pre-taint resources at the beginning of a particular
TestStep. This can be useful for when an explicit ForceNew is required
for a specific resource for troubleshooting things like diff mismatches,
etc.
The field accepts resource addresses as a list of strings. To keep
things simple for the time being, only addresses in the root module are
accepted. If we ever want to expand this past that, I'd be almost
inclined to add some facilities to the core terraform package to help
translate actual module resource addresses (ie:
module.foo.module.bar.some_resource.baz) into the correct state, versus
the current convention in some acceptance testing facilities that take
the module address as a list of strings (ie: []string{"root", "foo",
"bar"}).
A couple of bugs have been discovered in ResourceDiff.ForceNew:
* NewRemoved is not preserved when a diff for a key is already present.
This is because the second diff that happens after customization
performs a second getChange on not just state and config, but also on
the pre-existing diff. This results in Exists == true, meaning nil is
never returned as a new value.
* ForceNew was doing the work of adding the key to the list of changed
keys by doing a full SetNew on the existing value. This has a side
effect of fetching zero values from what were otherwise undefined values
and creating diffs for these values where there should not have been
(example: "" => "0").
This update fixes these scenarios by:
* Adding a new private function to check the existing diff for
NewRemoved keys. This is included in the check on new values in
diffChange.
* Keys that have been flagged as ForceNew (or parent keys of lists and
sets that have been flagged as ForceNew) are now maintained in a
separate map. UpdatedKeys now returns the results of both of these maps,
but otherwise these keys are ignored by ResourceDiff.
* Pursuant to the above, values are no longer pushed into the newDiff
writer by ForceNew. This prevents the zero value problem, and makes for
a cleaner implementation where the provider has to "manually" SetNew to
update the appropriate values in the writer. It also prevents
non-computed keys from winding up in the diff, which ResourceDiff
normally blocks by design.
There are also a couple of tests for cases that should never come up
right now involving Optional/Computed values and NewRemoved, for which
explanations are given in annotations of each test. These are here to
guard against future regressions.
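After this change, a CustomizeDiff that wants both a replacement and an updated planned value does each step explicitly (field name and value are hypothetical; the key is computed so SetNew is permitted):

```go
package example

import "github.com/hashicorp/terraform/helper/schema"

func exampleResource() *schema.Resource {
	return &schema.Resource{
		Schema: map[string]*schema.Schema{
			"engine_version": {Type: schema.TypeString, Optional: true, Computed: true},
		},
		CustomizeDiff: func(d *schema.ResourceDiff, meta interface{}) error {
			if d.HasChange("engine_version") {
				// ForceNew now only flags the key for replacement; it no
				// longer writes values into the diff writer, so diffs like
				// "" => "0" are not invented for otherwise-unset keys.
				if err := d.ForceNew("engine_version"); err != nil {
					return err
				}
				// SetNew must be called explicitly when the provider also
				// wants to adjust the planned value.
				return d.SetNew("engine_version", "2.0")
			}
			return nil
		},
	}
}
```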
Added some more context to GetOkExists, moved Computed to NewValueKnown
to accommodate some changes that will be coming up in HCL2 that may make
"Computed" less intuitive of a function name, and updated the docs for
NewValueKnown as well.
This adds a new method to ResourceDiff: Computed, which exposes the
computed read result field to ResourceDiff. In the context of
customizing the diff, this is important as interpolated and otherwise
computed values will show up in the diff as blank, with no way of
determining if the value is actually blank or if it's a computed value
not available at diff customization time. Currently assumptions need to
be made on this, but this does not help in validation scenarios where
one needs to differentiate between an actual blank value and a value
that will be available later.
This is exposed for the most part via NewComputed in the diff, but the
tests cover both the config reader as well (with no diff, even though
this should not come up in normal operation) and also the newDiff reader
when someone sets a new value using SetNew and SetNewComputed.
This commit also exposes GetOkExists. The tests were mostly pulled from
ResourceData but a few were added to ensure that config was being
properly covered as well, in addition to covering SetNew and
SetNewComputed.
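A hedged sketch of how a CustomizeDiff can use this, written against the NewValueKnown name adopted above (field name and check are hypothetical):

```go
package example

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/schema"
)

func exampleCustomizeDiff(d *schema.ResourceDiff, meta interface{}) error {
	// NewValueKnown reports whether the post-diff value is already known, so
	// a genuinely blank value can be told apart from a computed one.
	if !d.NewValueKnown("endpoint") {
		return nil // interpolated/computed; defer validation until apply
	}
	if v, ok := d.GetOkExists("endpoint"); ok && v.(string) == "" {
		return fmt.Errorf("endpoint must not be empty when set")
	}
	return nil
}
```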
Return the global default timeout if the ResourceData timeouts are nil.
Set the timeouts from the Resource when calling Resource.Data, so that
the config values are always available.
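A brief sketch of the interaction (resource shape is hypothetical):

```go
package example

import (
	"time"

	"github.com/hashicorp/terraform/helper/schema"
)

func exampleResource() *schema.Resource {
	return &schema.Resource{
		Timeouts: &schema.ResourceTimeout{
			Create: schema.DefaultTimeout(10 * time.Minute),
		},
		Schema: map[string]*schema.Schema{
			"name": {Type: schema.TypeString, Required: true},
		},
		Create: func(d *schema.ResourceData, meta interface{}) error {
			// With the timeouts copied in by Resource.Data, this returns the
			// configured value, falling back to the global default when the
			// ResourceData timeouts are nil.
			_ = d.Timeout(schema.TimeoutCreate)
			d.SetId("example")
			return nil
		},
	}
}
```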
For historical reasons, the handling of element types for maps is inconsistent with other collection types.
Here we begin a multi-step process to make it consistent, starting by supporting both the "consistent" form of using a schema.Schema and an existing erroneous form of using a schema.Type directly. In subsequent commits we will phase out the erroneous form and require the schema.Schema approach, the same as we do for TypeList and TypeSet.
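Concretely, both of these forms are accepted for now; the schema.Schema form is the consistent one being standardized on (attribute names hypothetical):

```go
package example

import "github.com/hashicorp/terraform/helper/schema"

func mapAttributes() map[string]*schema.Schema {
	return map[string]*schema.Schema{
		"tags": {
			Type:     schema.TypeMap,
			Optional: true,
			// Consistent form, matching how TypeList and TypeSet declare Elem.
			Elem: &schema.Schema{Type: schema.TypeString},
		},
		"labels": {
			Type:     schema.TypeMap,
			Optional: true,
			// Existing erroneous form that will be phased out.
			Elem: schema.TypeString,
		},
	}
}
```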
This is rarely needed, but sometimes tests need to create temporary files as part of their operation. This should be used sparingly, since it prevents the pro-active cleanup of the temporary working directory.
In terraform-providers/terraform-provider-aws#2935, we have been removing code
duplication by using the "NormalizeJsonString" helper from the "structure" package.
It appears that tests in the AWS provider are covering more use-cases,
which are added in this work.
This new codepath with the getDiff "customzed" return value, along with
the associated test, needs to be removed as soon as we can support unset
fields from the config, so we don't continue to carry this broken
behavior forward any longer than needed.