Now that we've improved the cty.Value normalization, we need to remove the
corresponding normalization procedures from the flatmap handling. Keeping the
empty containers in the flatmap prevents unexpected nils from being added
to some schema configurations.
Providers were not strict (and were not forced to be) about customizing
the diff when a computed attribute needed to be updated during apply.
The fix we have in place to prevent loss of information during the
helper/schema apply process would add a single missing value back in.
The first place this was caught was when we attempted to fix up the
flatmapped attributes. The 1->0 count error is now better handled by our
cty.Value normalization step, so we can remove the special apply case
here altogether.
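For context, a rough illustration of the flatmap shape involved (attribute
names here are made up): container lengths live under a "#" key (or "%" for
maps) next to the element keys, and the 1->0 case is that count key dropping
to zero against a prior state that had an element.

```go
package main

import "fmt"

func main() {
	// How a one-element list attribute looks in flatmap form.
	prior := map[string]string{
		"security_groups.#": "1",
		"security_groups.0": "sg-12345",
	}

	// The 1->0 case: the count drops to zero during apply. This used to
	// need a special fix-up here and is now handled by the cty.Value
	// normalization step instead.
	applied := map[string]string{
		"security_groups.#": "0",
	}

	fmt.Println(prior, applied)
}
```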
The next place is in normalizeNullValues, and since the intent was to
re-insert missing zero-value lists and sets, adding a check for a length
of 0 protects us from adding in extra elements.
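A rough sketch of that guard (the name and shape are illustrative, not the
actual normalizeNullValues code):

```go
package shims

import "github.com/zclconf/go-cty/cty"

// maybeRestoreEmptyContainer sketches the guard described above: an empty
// list or set from the source is only copied back when the destination lost
// it entirely, so containers that already have elements never gain extras.
func maybeRestoreEmptyContainer(dst, src cty.Value) cty.Value {
	if !dst.IsNull() {
		return dst // nothing was lost
	}
	if src.IsNull() || !src.IsKnown() || !src.CanIterateElements() {
		return dst
	}
	if src.LengthInt() == 0 {
		return src // re-insert only the missing zero-value container
	}
	return dst
}
```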
The new test fixture emulated common provider behavior of re-computing
values without customizing the diff. Since we can work around it, and
core will provide appropriate warnings, the shims should try to
maintain the legacy behavior.
The NewExtra values are stored outside the diff from plan, and the
original keys may not contain the ~ prefix. Adding the NewExtra back
into the diff with the mismatched key was causing an entire new set
element to be populated. Since this symbol isn't used to apply the diff
in helper/schema, we can simply strip the prefix out.
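A minimal sketch of that stripping step (the function name is illustrative):

```go
package shims

import "strings"

// stripComputedKeyMarkers sketches the idea above: NewExtra values keyed with
// the "~" computed-set marker don't line up with the keys in the diff, and
// helper/schema doesn't use the marker when applying the diff, so it can be
// dropped before the values are merged back in.
func stripComputedKeyMarkers(extra map[string]interface{}) map[string]interface{} {
	out := make(map[string]interface{}, len(extra))
	for k, v := range extra {
		out[strings.Replace(k, "~", "", -1)] = v
	}
	return out
}
```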
This mirrors the change made for providers, so that default values can
be inserted into the config by the backend implementation. This covers only
the interface and method name changes; it does not yet add any default
values.
The helper/schema handling of lists loses empty string values, but
retains the correct count. Only re-count the values if the count is
missing entirely, and allow our shims to re-populate the zero values.
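A sketch of that rule using the flatmap conventions (a flat list of
primitives is assumed; names are illustrative):

```go
package shims

import (
	"strconv"
	"strings"
)

// recountIfMissing only recomputes a list's "<name>.#" count when the count
// key is absent. An existing count is trusted even when empty-string elements
// were dropped, so the shims can re-populate the zero values later.
func recountIfMissing(attrs map[string]string, name string) {
	countKey := name + ".#"
	if _, ok := attrs[countKey]; ok {
		return // keep the recorded count
	}

	n := 0
	prefix := name + "."
	for k := range attrs {
		if strings.HasPrefix(k, prefix) && k != countKey {
			n++
		}
	}
	attrs[countKey] = strconv.Itoa(n)
}
```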
Terraform core expects a sane state even when the provider returns an
error. Make sure that the prior state is always the default value to
return, and then always attempt to process any state returned by
provider.Apply.
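A sketch of that rule (this is not the real shim code; the wrapper shape is
illustrative):

```go
package shims

import "github.com/hashicorp/terraform/terraform"

// applyWithSaneState defaults to returning the prior state, and still uses
// any state handed back by the provider's apply even when the error is
// non-nil, so core never receives a nil state alongside the error.
func applyWithSaneState(
	prior *terraform.InstanceState,
	diff *terraform.InstanceDiff,
	apply func(*terraform.InstanceState, *terraform.InstanceDiff) (*terraform.InstanceState, error),
) (*terraform.InstanceState, error) {
	newState := prior // sane default if apply fails outright

	applied, err := apply(prior, diff)
	if applied != nil {
		newState = applied // always attempt to process the returned state
	}
	return newState, err
}
```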
Previously we were using the type name requested in the import to select
the schema, but a provider is free to return additional objects of other
types as part of an import result, and so it's important that we perform
schema selection separately for each returned object.
If we don't do this, we get confusing downstream errors where the
resulting object decodes to the wrong type and breaks various invariants
expected by Terraform Core.
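A sketch of the per-object selection (the types and lookup here are
hypothetical stand-ins; the point is only that the schema is chosen per
returned object rather than once from the requested type):

```go
package importshim

import "fmt"

// importedObject stands in for an entry in the provider's import result.
type importedObject struct {
	TypeName string
	// ... object value elided
}

// validateImported looks up the schema for each returned object by its own
// type name instead of the type name originally requested in the import.
func validateImported(imported []importedObject, knownTypes map[string]struct{}) error {
	for _, obj := range imported {
		if _, ok := knownTypes[obj.TypeName]; !ok {
			return fmt.Errorf("provider returned unsupported resource type %q", obj.TypeName)
		}
		// decode obj against the schema selected for obj.TypeName here
	}
	return nil
}
```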
The testResourceImportOther test in the test provider didn't catch this
previously because it happened to have an identical schema to the other
resource type being imported. Now the schema is changed and also there's
a computed attribute we can set as part of the refresh phase to make sure
we're completing the Read call properly during import. Refresh was working
correctly, but we didn't have any tests for it as part of the import flow.
With the new diff.Apply we can keep the diff mostly intact, but we need to
turn off all RequiresNew flags so that the prior state is not removed
from the apply.
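A minimal sketch of that adjustment (the function name is illustrative):

```go
package shims

import "github.com/hashicorp/terraform/terraform"

// clearRequiresNew keeps the legacy diff mostly intact for the new
// diff.Apply, but clears RequiresNew flags that would otherwise cause the
// prior state to be discarded during apply.
func clearRequiresNew(d *terraform.InstanceDiff) {
	if d == nil {
		return
	}
	for _, attrDiff := range d.Attributes {
		if attrDiff != nil {
			attrDiff.RequiresNew = false
		}
	}
}
```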
One quirky aspect of our import feature is that we allow the importer to
produce additional resources alongside the one that was imported, such as
to create separate rules for each rule of an imported security group.
Providers need to be able to set the types of these other resources since
they may not match the "main" resource type. They do this by calling
ResourceData.SetType, which in turn sets InstanceState.Ephemeral.Type.
In our shims here we therefore need to copy that out into our new TypeName
field so that the new core import code can see it and create the right
type in the state.
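A sketch of that copy (the surrounding function is illustrative, not the
exact shim code):

```go
package shims

import (
	"github.com/hashicorp/terraform/providers"
	"github.com/hashicorp/terraform/terraform"
)

// shimImportedType carries the type a provider set via ResourceData.SetType
// (which lands in InstanceState.Ephemeral.Type) over to the new TypeName
// field, falling back to the type requested in the import.
func shimImportedType(requested string, is *terraform.InstanceState, out *providers.ImportedResource) {
	out.TypeName = requested
	if is != nil && is.Ephemeral.Type != "" {
		out.TypeName = is.Ephemeral.Type
	}
}
```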
Testing this required a minor change to the test harness to allow the
ImportStateCheck function to see the resource type.
If there were no matching keys, and there was no diff at all, don't set
a zero count for the container. Normally providers can't reliably detect
empty vs unset here, but there are some cases that worked.
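A sketch of the condition (names are illustrative): a zero count is only
written when there is some evidence the container exists, i.e. matching keys
or a diff that touches it.

```go
package shims

// setContainerCount only writes a "<name>.#" count of zero when there is a
// diff for the container; with no matching keys and no diff at all, empty
// can't be distinguished from unset, so no count is invented.
func setContainerCount(attrs map[string]string, name string, matching int, hasDiff bool) {
	if matching == 0 && !hasDiff {
		return // leave the attribute unset
	}
	if matching == 0 {
		attrs[name+".#"] = "0"
	}
}
```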
This is an HCL feature rather than a Terraform feature really, but we want
to make sure it keeps working consistently in future versions of Terraform
so this is a Terraform-flavored test for the block expansion behavior.
In particular, it tests that a nested dynamic block can access the parent
iterator, so that we won't regress #19543 in future.
In prior versions of Terraform we permitted inconsistent use of indexes
in resource references, but as of 0.12 the index usage must correlate
properly with whether "count" is set on the resource.
Since users are likely to have existing configurations with incorrect
usage, here we introduce some specialized error messages for situations
where we can detect such issues statically. This seems to cover all of the
common patterns we've seen in practice.
Some usage patterns will fall back on a less-helpful dynamic error here,
but no configurations coming from 0.11 can end up that way because 0.11
did not permit forms such as aws_instance.no_count[count.index].bar that
this validation would not be able to "see".
Our configuration upgrade tool also contains a fix for this already, but
it takes a more conservative approach of adding the index [1] rather than
[count.index] because it can't be sure (without human help) if correlation
of indices is what was intended.
Terraform used to provide empty diffs to the provider when calculating
`ignore_changes`, which would cause some DiffSuppressFunc to fail, as
can be seen in #18209.
Verify that this is no longer the case in 0.12.
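For illustration, this is the kind of DiffSuppressFunc that misbehaves when
handed the empty old/new values from such a diff; it is an example pattern,
not code from the linked issue.

```go
package example

import (
	"encoding/json"
	"reflect"

	"github.com/hashicorp/terraform/helper/schema"
)

// suppressEquivalentJSON suppresses diffs between semantically equal JSON
// documents. When Terraform handed the provider an empty diff, old and new
// arrived as "", json.Unmarshal failed, and the suppression misfired.
func suppressEquivalentJSON(k, old, new string, d *schema.ResourceData) bool {
	var oldVal, newVal interface{}
	if err := json.Unmarshal([]byte(old), &oldVal); err != nil {
		return false
	}
	if err := json.Unmarshal([]byte(new), &newVal); err != nil {
		return false
	}
	return reflect.DeepEqual(oldVal, newVal)
}
```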
Booleans in the legacy form were stored as strings, and can appear as
the incorrect type in the new type system.
Unset fields in sets also might show up erroneously in diffs, with
equal old and new values.
This work was done against APIs that were already changed in the branch
before work began, and so it doesn't apply to the v0.12 development work.
To allow v0.12 to merge down to master, we'll revert this work out for now
and then re-introduce equivalent functionality in later commits that works
against the new APIs.
Various things drifted since these tests were originally written. This
catches them up to the latest implementations of state decoding,
upgrading, etc.
Due to how often the state and plan types are referenced throughout
Terraform, there isn't a great way to switch them out gradually. As a
consequence, this huge commit gets us from the old world to a _compilable_
new world, but still has a large number of known test failures due to
key functionality being stubbed out.
The stubs here are for anything that interacts with providers, since we
now need to do the follow-up work to similarly replace the old
terraform.ResourceProvider interface with its replacement in the new
"providers" package. That work, along with work to fix the remaining
failing tests, will follow in subsequent commits.
The aim here was to replace all references to terraform.State and its
downstream types with states.State, terraform.Plan with plans.Plan,
state.State with statemgr.State, and switch to the new implementations of
the state and plan file formats. However, due to the number of times those
types are used, this also ended up affecting numerous other parts of core
such as terraform.Hook, the backend.Backend interface, and most of the CLI
commands.
Just as with 5861dbf3fc49b19587a31816eb06f511ab861bb4 before, I apologize
in advance to the person who inevitably just found this huge commit while
spelunking through the commit history.
* builtin/providers: implement terraform remote state datasource as providers.Interface
* append and return diags separately (to match the idiomatic usage
elsewhere in Terraform)
* diagnostic summary style improvements
* update tests to pass config to schema.CoerceValue
* trust that the schema will be enforced and there is no need to check
that a given attribute exists
* added dataSourceRemoteStateGetSchema() (effectively replacing a
function that was inappropriately removed) for consistency with other
Terraform providers
* builtin/provider terraform test: added InternalValidate() test for dataSourceRemoteStateGetSchema
The new config loader requires some steps to happen in a different
order, particularly in regard to knowing the schema in order to
decode the configuration.
Here we lean directly on the configschema package, rather than
on helper/schema.Backend as before, because it's generally
sufficient for our needs here and this prepares us for the
helper/schema package later moving out into its own repository
to seed a "plugin SDK".
The `remote` backend config contains an attribute that is defined as a `*schema.Set`, but currently only `string` values are accepted as the `config` attribute is defined as a `schema.TypeMap`.
Additionally, the `b.Validate()` method wasn't being called, which is needed to prevent a possible panic in case of unexpected configurations being passed to `b.Configure()`.
This commit is a bit of a hack to be able to support this in the 0.11 series. The 0.12 series will have proper support, so when merging 0.12 this should be reverted again.
Due to an incorrect slice allocation, the environment variable list was created with an empty string
element for each real element added.
It appears that this was silently ignored on Unix, but caused the following environment settings
to be ignored altogether on Windows.
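The bug is most likely the familiar Go allocation mistake shown below (the
variable names are made up): make() with a length pre-fills the slice with
empty strings, and append then adds the real values after them.

```go
package main

import "fmt"

func main() {
	extra := []string{"FOO=bar", "BAZ=qux"}

	// Bug pattern: length allocation plus append leaves one empty string
	// per real element at the front of the slice.
	env := make([]string, len(extra))
	for _, e := range extra {
		env = append(env, e)
	}
	fmt.Printf("%q\n", env) // ["" "" "FOO=bar" "BAZ=qux"]

	// Fix: allocate capacity only, so append fills from the start.
	env = make([]string, 0, len(extra))
	for _, e := range extra {
		env = append(env, e)
	}
	fmt.Printf("%q\n", env) // ["FOO=bar" "BAZ=qux"]
}
```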
Add a test to remote-exec to make sure the proper timeout is honored
during apply.
TODO: we need some test helpers for provisioners, so they can all be
verified.
The timeout for a provisioner is expected to only apply to the initial
connection. Keep the context for the communicator.Retry separate from
the global cancellation context.
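A sketch of that separation (the surrounding function is illustrative,
assuming a Retry helper that accepts a context as described above): the
configured timeout wraps only the initial connection retry, while the
command itself runs under the caller's context.

```go
package provisionerexample

import (
	"context"

	"github.com/hashicorp/terraform/communicator"
	"github.com/hashicorp/terraform/communicator/remote"
)

// connectAndRun scopes the connection timeout to the retry loop only, so a
// long-running command isn't cut off once the connection is established.
func connectAndRun(ctx context.Context, comm communicator.Communicator, cmd *remote.Cmd) error {
	connCtx, cancel := context.WithTimeout(ctx, comm.Timeout())
	defer cancel()

	err := communicator.Retry(connCtx, func() error {
		return comm.Connect(nil)
	})
	if err != nil {
		return err
	}

	// The command runs under ctx, not the connection-scoped connCtx.
	if err := comm.Start(cmd); err != nil {
		return err
	}
	return cmd.Wait()
}
```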
* Updates the chef provisioner to allow specifying a channel
This also updates the omnitruck URL to the current one.
Signed-off-by: Scott Hain <shain@chef.io>
* Update omnitruck URL
Signed-off-by: Scott Hain <shain@chef.io>
Combine the ExitStatus and Err values from remote.Cmd into an error
returned by Wait, better matching the behavior of the os/exec package.
Non-zero exit codes are returned from Wait as a remote.ExitError.
Communicator related errors are returned directly.
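A sketch of how a provisioner consumes the new behavior (the function is
illustrative):

```go
package provisionerexample

import (
	"fmt"

	"github.com/hashicorp/terraform/communicator"
	"github.com/hashicorp/terraform/communicator/remote"
)

// runRemote treats a *remote.ExitError from Wait as a non-zero exit status,
// and passes any other (communicator) error through directly, mirroring the
// os/exec Wait/ExitError pattern.
func runRemote(comm communicator.Communicator, command string) error {
	cmd := &remote.Cmd{Command: command}
	if err := comm.Start(cmd); err != nil {
		return err
	}
	if err := cmd.Wait(); err != nil {
		if exitErr, ok := err.(*remote.ExitError); ok {
			return fmt.Errorf("remote command failed: %s", exitErr)
		}
		return err // communicator error, returned directly
	}
	return nil
}
```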
Clean up all the error handling in the provisioners using a
communicator. Also remove the extra copyOutput synchronization that was
copied from package to package.
Use the new ExitStatus method, and also check the cmd.Err() method for
errors.
Remove leaks from the output goroutines in both provisioners by
deferring their cleanup, and returning early on all error conditions.
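A sketch of that cleanup pattern (helper names are illustrative): the
command's output is wired through a pipe to a single copy goroutine, and the
teardown is deferred so every early error return still closes the pipe and
lets the goroutine finish.

```go
package provisionerexample

import (
	"bufio"
	"io"

	"github.com/hashicorp/terraform/communicator"
	"github.com/hashicorp/terraform/communicator/remote"
	"github.com/hashicorp/terraform/terraform"
)

func runWithOutput(comm communicator.Communicator, command string, o terraform.UIOutput) error {
	outR, outW := io.Pipe()
	go copyOutput(o, outR)
	defer outW.Close() // closing the write side ends the copy goroutine

	cmd := &remote.Cmd{
		Command: command,
		Stdout:  outW,
		Stderr:  outW,
	}
	if err := comm.Start(cmd); err != nil {
		return err
	}
	return cmd.Wait()
}

func copyOutput(o terraform.UIOutput, r io.Reader) {
	scanner := bufio.NewScanner(r)
	for scanner.Scan() {
		o.Output(scanner.Text())
	}
}
```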
The timeout for the remote command was taken from the wrong config
field, and the connection timeout (5 minutes) was being used instead. Any
remote command taking more than 5 minutes would be terminated by
disconnecting the communicator. Remove the timeout from the context, and
rely on the global timeout provided by Terraform.
There was no way to get the error from the communicator previously, so
the broken connection was silently ignored and the provisioner returned
successfully. Now we can use the new cmd.Err() method to retrieve any
errors encountered during execution.
provisioner. Also fixes an issue where channels and URLs are
not honored in the initial package install.
Signed-off-by: Rob Campbell <rcampbell@chef.io>
This new argument allows overriding of the working directory of the child process, with the default still being the working directory of Terraform itself.
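For illustration, the effect on the child process is the standard library's
working-directory override shown below; the hard-coded path stands in for
whatever the new argument is set to.

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Without Dir, the child inherits Terraform's own working directory.
	cmd := exec.Command("pwd") // Unix-only example command
	cmd.Dir = "/tmp"           // hypothetical value of the new argument

	out, err := cmd.Output()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // prints /tmp
}
```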
There is no reason to retry around the execution of remote scripts. We've
already established a connection, so the only thing that could happen here is
continually retrying the upload or execution of a script that can't succeed.
This also simplifies the streaming output from the command, which
doesn't need such explicit synchronization. Closing the output pipes is
sufficient to stop the copyOutput functions, and they don't close over
any values that are accessed again after the command executes.