Managing which functions need to be shared between the terraform plugin
and the helper plugin without creating cycles was becoming difficult.
Move all functions related to converting between terraform and proto
types into plugin/convert.
Due to how often the state and plan types are referenced throughout
Terraform, there isn't a great way to switch them out gradually. As a
consequence, this huge commit gets us from the old world to a _compilable_
new world, but still has a large number of known test failures due to
key functionality being stubbed out.
The stubs here are for anything that interacts with providers, since we
now need to do the follow-up work to similarly replace the old
terraform.ResourceProvider interface with its replacement in the new
"providers" package. That work, along with work to fix the remaining
failing tests, will follow in subsequent commits.
The aim here was to replace all references to terraform.State and its
downstream types with states.State, terraform.Plan with plans.Plan,
state.State with statemgr.State, and switch to the new implementations of
the state and plan file formats. However, due to the number of times those
types are used, this also ended up affecting numerous other parts of core
such as terraform.Hook, the backend.Backend interface, and most of the CLI
commands.
Just as with 5861dbf3fc49b19587a31816eb06f511ab861bb4 before, I apologize
in advance to the person who inevitably just found this huge commit while
spelunking through the commit history.
The new helper/plugin package contains the gRPC servers for handling the
new plugin protocol.
The GRPCProviderServer and GRPCProvisionerServer handle the gRPC plugin
protocol, and convert the requests into calls to the legacy
schema.Provider and schema.Provisioner methods.
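As a rough sketch of the wrapping pattern (not the actual server code; the request/response types here are made-up stand-ins for the generated protocol messages):

```go
package helperplugin

import (
	"github.com/hashicorp/terraform/helper/schema"
	"github.com/hashicorp/terraform/terraform"
)

// Hypothetical stand-ins for the generated protocol request/response types.
type validateResourceRequest struct {
	TypeName string
	Config   *terraform.ResourceConfig
}

type validateResourceResponse struct {
	Warnings []string
	Errors   []error
}

// GRPCProviderServer wraps a legacy *schema.Provider and translates each
// protocol request into a call on its legacy methods.
type GRPCProviderServer struct {
	provider *schema.Provider
}

func (s *GRPCProviderServer) ValidateResourceTypeConfig(req validateResourceRequest) validateResourceResponse {
	warns, errs := s.provider.ValidateResource(req.TypeName, req.Config)
	return validateResourceResponse{Warnings: warns, Errors: errs}
}
```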
In order to not require state migrations to be supported in both
MigrateState and StateUpgraders, the legacy provider codepath needs to
handle the StateUpgraders transparently during Refresh.
This adds some of the required shim functions to the schema package.
While this further bloats the already huge package, adding the helpers
here was significantly less disruptive than refactoring types into
separate packages to prevent import cycles.
The majority of tests here are directly adapted from existing schema
tests to provide as many known good values to the shims as possible.
It turns out that state upgrades need to be handled differently since
providers are going to be backwards compatible. This means that new
state upgrades may still be stored in the flatmap format when used with
Terraform 0.11. Because we can't account for the specific version which
could produce a legacy state, all future state upgrades need to record
the schema types for decoding.
Rather than defining a single Upgrade function for states, we now have a
list of functions, each of which handle upgrading a specific version to
the next. In practice this isn't much different from the way many
resources implement upgrades themselves, with a separate function for
each version dispatched from the MigrateState function. The only added
burden is the recording of the schema type, and we intend to supply
tools and helper functions to avoid the need to copy the entire
existing schema in all cases.
This is the provider-side UpgradeState implementation for a particular
resource. This new function will be called to upgrade a saved state with
an old schema version to the current schema.
UpgradeState also requires a record of the last schema and version that
could have been stored as a flatmapped state. If the stored state is in
the legacy flatmap format, this will allow the provider to properly
decode the flatmapped state into the expected structure for the new json
encoded state. If the stored state's version is below that of the
LegacySchema.Version value, it will first be processed by the legacy
MigrateState function.
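As a sketch of how a resource might wire this up (the resource, its attributes, and the V0 schema helper are made up; the StateUpgrader fields follow the shape described above):

```go
package example

import (
	"github.com/hashicorp/terraform/helper/schema"
)

// resourceExampleV0 returns the schema as it looked at version 0, so its
// implied type can be recorded for decoding any legacy flatmapped state.
func resourceExampleV0() *schema.Resource {
	return &schema.Resource{
		Schema: map[string]*schema.Schema{
			"name": {Type: schema.TypeString, Required: true},
		},
	}
}

func resourceExample() *schema.Resource {
	return &schema.Resource{
		SchemaVersion: 1,
		Schema: map[string]*schema.Schema{
			"name": {Type: schema.TypeString, Required: true},
			"tags": {Type: schema.TypeMap, Optional: true, Elem: &schema.Schema{Type: schema.TypeString}},
		},
		StateUpgraders: []schema.StateUpgrader{
			{
				// Upgrades a version 0 state to version 1. The recorded
				// type allows an old flatmap state to be decoded into the
				// expected structure before this function runs.
				Version: 0,
				Type:    resourceExampleV0().CoreConfigSchema().ImpliedType(),
				Upgrade: func(rawState map[string]interface{}, meta interface{}) (map[string]interface{}, error) {
					rawState["tags"] = map[string]interface{}{}
					return rawState, nil
				},
			},
		},
	}
}
```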
The update protocol shims will also check for this, but eventually
"id" will only be a normal attribute, and we shouldn't have to special
case this.
When converting a legacy schemaMap to a configschema, we need to add
"id" as a required attribute to top-level resources if it's not
declared.
The "id" field will be required to interoperate with the legacy helper
schema, since the presence of an id was used to indicate the existence
of a resource.
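A hypothetical helper showing that rule in isolation (the real logic lives in the schemaMap-to-configschema conversion, and the exact attribute flags used here are an assumption):

```go
package example

import (
	"github.com/hashicorp/terraform/config/configschema"
	"github.com/zclconf/go-cty/cty"
)

// ensureID illustrates the rule: a top-level resource block always ends up
// with an "id" attribute unless the legacy schema already declared one.
func ensureID(block *configschema.Block) *configschema.Block {
	if block.Attributes == nil {
		block.Attributes = map[string]*configschema.Attribute{}
	}
	if _, declared := block.Attributes["id"]; !declared {
		// The flags used here are an assumption for illustration; the
		// point is only that "id" is always present in the block.
		block.Attributes["id"] = &configschema.Attribute{
			Type:     cty.String,
			Optional: true,
			Computed: true,
		}
	}
	return block
}
```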
The "config" package is no longer used and will be removed as part
of the 0.12 release cleanup. Since configschema is part of the
"new world" of configuration modelling, it makes more sense for
it to live as a subdirectory of the newer "configs" package.
Due to how deeply the configuration types go into Terraform Core, there
isn't a great way to switch out to HCL2 gradually. As a consequence, this
huge commit gets us from the old state to a _compilable_ new state, but
does not yet attempt to fix any tests and has a number of known missing
parts and bugs. We will continue to iterate on this in forthcoming
commits, heading back towards passing tests and making Terraform
fully-functional again.
The three main goals here are:
- Use the configuration models from the "configs" package instead of the
older models in the "config" package, which is now deprecated and
preserved only to help us write our migration tool.
- Do expression inspection and evaluation using the functionality of the
new "lang" package, instead of the Interpolator type and related
functionality in the main "terraform" package.
- Represent addresses of various objects using types in the addrs package,
rather than hand-constructed strings. This is not critical to support
the above, but was a big help during the implementation of these other
points since it made it much more explicit what kind of address is
expected in each context (see the sketch after this list).
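For example (a sketch with made-up names), where code previously built and parsed strings like "module.network.aws_subnet.a", it now passes around structured values:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/addrs"
)

func main() {
	// A managed resource address, built as data rather than as a string.
	subnet := addrs.Resource{
		Mode: addrs.ManagedResourceMode,
		Type: "aws_subnet",
		Name: "a",
	}
	network := addrs.RootModuleInstance.Child("network", addrs.NoKey)
	abs := subnet.Instance(addrs.NoKey).Absolute(network)
	fmt.Println(abs) // module.network.aws_subnet.a
}
```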
Since our new packages are built to accommodate some future planned
features that are not yet implemented (e.g. the "for_each" argument on
resources, "count"/"for_each" on modules), and since there's still a fair
amount of functionality still using old-style APIs, there is a moderate
amount of shimming here to connect new assumptions with old, hopefully in
a way that makes it easier to find and eliminate these shims later.
I apologize in advance to the person who inevitably just found this huge
commit while spelunking through the commit history.
The new config loader requires some steps to happen in a different
order, particularly in regard to knowing the schema in order to
decode the configuration.
Here we lean directly on the configschema package, rather than
on helper/schema.Backend as before, because it's generally
sufficient for our needs here and this prepares us for the
helper/schema package later moving out into its own repository
to seed a "plugin SDK".
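For illustration, a hypothetical backend's configuration can be described directly as a configschema.Block (the attributes shown are made up):

```go
package example

import (
	"github.com/hashicorp/terraform/configs/configschema"
	"github.com/zclconf/go-cty/cty"
)

// The schema for a hypothetical backend's config block, expressed with
// configschema rather than helper/schema.Backend.
var backendConfigSchema = &configschema.Block{
	Attributes: map[string]*configschema.Attribute{
		"path": {
			Type:     cty.String,
			Optional: true,
		},
		"workspace_dir": {
			Type:     cty.String,
			Optional: true,
		},
	},
}
```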
We will need access to this information in order to render interactive
input prompts, and it will also be useful in returning schema information
to external tools such as text editors that have autocomplete-like
functionality.
One of the tests in this package doesn't work when the tests are run as
root, such as inside a Docker container. The test is still useful to
specify `pathorcontents` behavior when a file is not readable, so it's
better to skip it than to delete it. See the linked issue for further
discussion.
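A sketch of the guard (the test name is made up): root can read a file regardless of its mode, so the unreadable-file case is skipped rather than failed.

```go
package pathorcontents

import (
	"os"
	"testing"
)

func TestRead_unreadableFile(t *testing.T) {
	if os.Geteuid() == 0 {
		t.Skip("skipping: file permissions are not enforced for root")
	}
	// ... create a file with mode 0000 and assert that reading it
	// returns an error ...
}
```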
Closes #7707
When checking if a ResourceDiff key is valid, allow for keys that exist
on sub-blocks. This allows things like diff.Clear to be called on
sub-block fields, instead of just on top-level fields.
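For example (a sketch with a made-up schema; the exact key form for addressing sub-block fields is an assumption here):

```go
package example

import (
	"github.com/hashicorp/terraform/helper/schema"
)

func resourceExample() *schema.Resource {
	return &schema.Resource{
		Schema: map[string]*schema.Schema{
			"settings": {
				Type:     schema.TypeList,
				Optional: true,
				Elem: &schema.Resource{
					Schema: map[string]*schema.Schema{
						"retention": {Type: schema.TypeInt, Optional: true},
					},
				},
			},
		},
		CustomizeDiff: func(d *schema.ResourceDiff, meta interface{}) error {
			// With the relaxed key check, sub-block fields can now be
			// cleared from the diff, not just top-level keys.
			return d.Clear("settings.0.retention")
		},
	}
}
```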
The `remote` backend config contains an attribute that is defined as a `*schema.Set`, but only `string` values were accepted because the `config` attribute is defined as a `schema.TypeMap`.
Additionally, the `b.Validate()` method wasn't being called, which could allow a panic when unexpected configurations were passed to `b.Configure()`.
This commit is a bit of a hack to be able to support this in the 0.11 series. The 0.12 series will have proper support, so when merging 0.12 this should be reverted again.
While this initial implementation is a very simple wrapper function, implementing this in the helper/resource package provides some downstream benefits:
* Provides a standard interface for plugin developers to enable parallel acceptance testing
* Existing plugins can simply convert resource.Test to resource.ParallelTest references (as appropriate) to enable the functionality (see the sketch below), rather than worrying about additional line(s) to each acceptance test function or TestCase
* Potential enhancements to ParallelTest (e.g. adding an environment variable to skip enabling the behavior) are consistently propagated
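A sketch of that conversion in an acceptance test (provider, resource, and config names are made up):

```go
package example

import (
	"testing"

	"github.com/hashicorp/terraform/helper/resource"
	"github.com/hashicorp/terraform/terraform"
)

// Normally populated in the provider's test setup; left nil here.
var testAccProviders map[string]terraform.ResourceProvider

const testAccExampleThingConfigBasic = `
resource "example_thing" "foo" {
  name = "foo"
}
`

func TestAccExampleThing_basic(t *testing.T) {
	// Previously: resource.Test(t, resource.TestCase{...})
	resource.ParallelTest(t, resource.TestCase{
		Providers: testAccProviders,
		Steps: []resource.TestStep{
			{
				Config: testAccExampleThingConfigBasic,
				Check: resource.ComposeTestCheckFunc(
					resource.TestCheckResourceAttr("example_thing.foo", "name", "foo"),
				),
			},
		},
	})
}
```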
We already had the functionality to make resources deprecated, which was
used when migrating resources to data sources, but the functionality was
unexported, so only the schema package could do it. Now it's exported,
meaning providers can mark entire resources as deprecated. I also added
a test in (hopefully) the right place.
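A sketch of what that looks like in a provider (the resource and message are made up; the exported field is assumed to be DeprecationMessage):

```go
package example

import (
	"github.com/hashicorp/terraform/helper/schema"
)

func resourceExampleOldThing() *schema.Resource {
	return &schema.Resource{
		// Marks the whole resource as deprecated; Terraform surfaces this
		// as a warning when the resource is used.
		DeprecationMessage: "example_old_thing is deprecated; use the example_thing data source instead",
		Schema: map[string]*schema.Schema{
			"name": {Type: schema.TypeString, Required: true},
		},
	}
}
```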
This adds the Taint field to the acceptance testing framework, allowing
the ability to pre-taint resources at the beginning of a particular
TestStep. This can be useful when an explicit ForceNew is required
for a specific resource for troubleshooting things like diff mismatches,
etc.
The field accepts resource addresses as a list of strings. To keep
things simple for the time being, only addresses in the root module are
accepted. If we ever want to expand this past that, I'd be almost
inclined to add some facilities to the core terraform package to help
translate actual module resource addresses (i.e.
module.foo.module.bar.some_resource.baz) into the correct state, versus
the current convention in some acceptance testing facilities that take
the module address as a list of strings (i.e. []string{"root", "foo",
"bar"}).
A couple of bugs have been discovered in ResourceDiff.ForceNew:
* NewRemoved is not preserved when a diff for a key is already present.
This is because the second diff that happens after customization
performs a second getChange on not just state and config, but also on
the pre-existing diff. This results in Exists == true, meaning nil is
never returned as a new value.
* ForceNew was doing the work of adding the key to the list of changed
keys by doing a full SetNew on the existing value. This has a side
effect of fetching zero values from what were otherwise undefined values
and creating diffs for these values where there should not have been any
(example: "" => "0").
This update fixes these scenarios by:
* Adding a new private function to check the existing diff for
NewRemoved keys. This is included in the check on new values in
diffChange.
* Keys that have been flagged as ForceNew (or parent keys of lists and
sets that have been flagged as ForceNew) are now maintained in a
separate map. UpdatedKeys now returns the results of both of these maps,
but otherwise these keys are ignored by ResourceDiff.
* Pursuant to the above, values are no longer pushed into the newDiff
writer by ForceNew. This prevents the zero value problem, and makes for
a cleaner implementation where the provider has to "manually" SetNew to
update the appropriate values in the writer, as sketched below. It also
prevents non-computed keys from winding up in the diff, which
ResourceDiff normally blocks by design.
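A sketch of the resulting pattern in CustomizeDiff (the resource and attribute are made up):

```go
package example

import (
	"github.com/hashicorp/terraform/helper/schema"
)

func resourceExample() *schema.Resource {
	return &schema.Resource{
		Schema: map[string]*schema.Schema{
			"engine_version": {Type: schema.TypeString, Optional: true},
		},
		CustomizeDiff: func(d *schema.ResourceDiff, meta interface{}) error {
			if d.HasChange("engine_version") {
				// ForceNew now only flags the key for replacement; it no
				// longer pushes values into the diff writer as a side
				// effect.
				if err := d.ForceNew("engine_version"); err != nil {
					return err
				}
				// When the diff should also carry a specific new value,
				// it must now be set explicitly.
				if err := d.SetNew("engine_version", d.Get("engine_version")); err != nil {
					return err
				}
			}
			return nil
		},
	}
}
```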
There are also a couple of tests for cases that should never come up
right now involving Optional/Computed values and NewRemoved, for which
explanations are given in annotations of each test. These are here to
guard against future regressions.
Added some more context to GetOkExists, moved Computed to NewValueKnown
to accommodate some upcoming changes in HCL2 that may make "Computed" a
less intuitive function name, and updated the docs for NewValueKnown as
well.
This adds a new method to ResourceDiff: Computed, which exposes the
computed read result field to ResourceDiff. In the context of
customizing the diff, this is important as interpolated and otherwise
computed values will show up in the diff as blank, with no way of
determining if the value is actually blank or if it's a computed value
not available at diff customization time. Currently assumptions need to
be made on this, but this does not help in validation scenarios where
one needs to differentiate between an actual blank value and a value
that will be available later.
This is exposed for the most part via NewComputed in the diff, but the
tests cover both the config reader (with no diff, even though this
should not come up in normal operation) and the newDiff reader when
someone sets a new value using SetNew or SetNewComputed.
This commit also exposes GetOkExists. The tests were mostly pulled from
ResourceData but a few were added to ensure that config was being
properly covered as well, in addition to covering SetNew and
SetNewComputed.
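A sketch of these in diff customization (the resource and attribute are made up; the sketch uses the NewValueKnown name from the rename noted above, which was originally introduced as Computed):

```go
package example

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/schema"
)

func resourceExample() *schema.Resource {
	return &schema.Resource{
		Schema: map[string]*schema.Schema{
			"size": {Type: schema.TypeInt, Optional: true},
		},
		CustomizeDiff: func(d *schema.ResourceDiff, meta interface{}) error {
			// An interpolated value that can't be resolved until apply
			// shows up as not-yet-known rather than as a blank value.
			if !d.NewValueKnown("size") {
				return nil
			}
			// GetOkExists distinguishes "explicitly set to the zero
			// value" from "not set at all".
			if v, exists := d.GetOkExists("size"); exists && v.(int) < 0 {
				return fmt.Errorf("size must be non-negative")
			}
			return nil
		},
	}
}
```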
Return the global default timeout if the ResourceData timeouts are nil.
Set the timeouts from the Resource when calling Resource.Data, so that
the config values are always available.
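A sketch of the effect from a resource's perspective (the resource function is made up):

```go
package example

import (
	"github.com/hashicorp/terraform/helper/schema"
)

func resourceExampleCreate(d *schema.ResourceData, meta interface{}) error {
	// Returns the configured create timeout when one is set, and falls
	// back to the global default when the timeouts on the ResourceData
	// are nil.
	timeout := d.Timeout(schema.TimeoutCreate)
	_ = timeout // would be passed to a retry/wait helper in a real resource
	return nil
}
```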
For historical reasons, the handling of element types for maps is inconsistent with other collection types.
Here we begin a multi-step process to make it consistent, starting by supporting both the "consistent" form of using a schema.Schema and the existing erroneous form of using a schema.ValueType directly. In subsequent commits we will phase out the erroneous form and require the schema.Schema approach, the same as we do for TypeList and TypeSet.
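A sketch of the two forms for a map attribute (the attribute itself is made up):

```go
package example

import (
	"github.com/hashicorp/terraform/helper/schema"
)

// The consistent form now supported: Elem is a *schema.Schema, just as
// for TypeList and TypeSet.
var tagsSchema = &schema.Schema{
	Type:     schema.TypeMap,
	Optional: true,
	Elem: &schema.Schema{
		Type: schema.TypeString,
	},
}

// The erroneous legacy form, still tolerated during the transition: a
// bare schema.ValueType assigned directly to Elem.
var tagsSchemaLegacy = &schema.Schema{
	Type:     schema.TypeMap,
	Optional: true,
	Elem:     schema.TypeString,
}
```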