This is rarely needed, but sometimes tests need to create temporary files as part of their operation. This should be used sparingly, since it prevents the pro-active cleanup of the temporary working directory.
In terraform-providers/terraform-provider-aws#2935, we have been cleaning up
code duplication by using the "NormalizeJsonString" function from the
"structure" helper.
It appears that the tests in the AWS provider cover more use cases,
which are added in this work.
This new codepath with the getDiff "customized" return value, along with
the associated test, needs to be removed as soon as we can support unset
fields from the config, so we don't continue to carry this broken
behavior forward any longer than needed.
This extends the internal diffChange method so that ResourceDiff's
implementation of it can report back whether or not the value came from
a customized diff.
This is an effort to preserve the pre-ResourceDiff behaviour that
ignores the diff for computed keys when the old value was populated but
the new value wasn't - this behaviour is actually being depended on by
users who exploit it to use zero values in modules. This should allow
both scenarios to co-exist by shifting the NewComputed exemption over to
exempting values that come from diff customization.
This reverts one of the changes from 6a4f7b0, which broke empty strings
being seen as unset for computed values.
This breaks a number of other tests, and is only an intermediate change
for evaluating other solutions.
This case should be expected to fail with the current diff algorithm,
but the existing behavior was widely relied upon so we need to roll this
back until there is a representable nil value.
The CustomizeDiff functionality in helper/schema is powerful, but directly
writing single CustomizeDiff functions can obscure the intent when a
number of different, orthogonal diff-customization behaviors are required.
This new library provides some building blocks that aim to allow a more
declarative form of CustomizeDiff implementation, by composing a number of
smaller operations. For example:
&schema.Resource{
    // ...
    CustomizeDiff: customdiff.All(
        customdiff.ValidateChange("size", func(old, new, meta interface{}) error {
            // If we are increasing "size" then the new value must be
            // a multiple of the old value.
            if new.(int) <= old.(int) {
                return nil
            }
            if (new.(int) % old.(int)) != 0 {
                return fmt.Errorf("new size value must be an integer multiple of old value %d", old.(int))
            }
            return nil
        }),
        customdiff.ForceNewIfChange("size", func(old, new, meta interface{}) bool {
            // "size" can only increase in-place, so we must create a new resource
            // if it is decreased.
            return new.(int) < old.(int)
        }),
        customdiff.ComputedIf("version_id", func(d *schema.ResourceDiff, meta interface{}) bool {
            // Any change to "content" causes a new "version_id" to be allocated.
            return d.HasChange("content")
        }),
    ),
}
The goal is to allow the various separate operations to be quickly seen
and to ensure that each of them runs independently of the others. These
functions all create closures on the call parameters, so the result is
still just a normal CustomizeDiffFunc and so the helpers in this package
can be combined with hand-written functions as needed.
As we get more experience writing CustomizeDiff functions we may wish to
expand the repertoire of functions here in future; this initial set
attempts to cover some common cases we've seen so far. We may also
investigate some helper functions that are entirely declarative and so
don't take callback functions at all, but want to learn what the relevant
use-cases are before going in too deep here.
Looks like while we were checking errors correctly when ExpectError was
set, we weren't checking for the *absence* of an error, which should be
checked as well (no error is still not the error we are looking for).
Added a few more tests for ExpectError as well.
Validation is the best time to return detailed diagnostics
to the user since we're much more likely to have source
location information, etc than we are in later operations.
This change doesn't actually add any detail to the messages
yet, but it changes the interface so that we can gradually
introduce more detailed diagnostics over time.
While here there are some minor adjustments to some of the
messages to improve their consistency with terminology we
use elsewhere.
StringMatch returns a validation function that can be used to match a
string against a regular expression. This can be used for simple
substring validations or more complex validation scenarios. Optionally,
an error message can be supplied so that the user receives something
more helpful than a raw regular expression they might not be able to
understand.
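As a rough sketch (assuming the usual helper/schema SchemaValidateFunc shape; the exact signature and messages may differ):
```go
package validation

import (
	"fmt"
	"regexp"

	"github.com/hashicorp/terraform/helper/schema"
)

// StringMatch returns a SchemaValidateFunc that matches the value against
// the supplied regular expression. If message is non-empty it is used in
// the error text instead of the raw regular expression.
func StringMatch(r *regexp.Regexp, message string) schema.SchemaValidateFunc {
	return func(i interface{}, k string) ([]string, []error) {
		v, ok := i.(string)
		if !ok {
			return nil, []error{fmt.Errorf("expected type of %q to be string", k)}
		}
		if !r.MatchString(v) {
			if message != "" {
				return nil, []error{fmt.Errorf("invalid value for %q (%s)", k, message)}
			}
			return nil, []error{fmt.Errorf("expected %q to match regular expression %q", k, r)}
		}
		return nil, nil
	}
}
```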
There are situations where one may need to write to a set, list, or map
more than once per single TF operation (apply/refresh/etc). In these
cases, further writes using Set (example: d.Set("some_set", newSet))
currently create unstable results in the set writer (the name of the
writer layer that holds the data set by these calls) because old keys
are not being cleared out first.
This bug is most visible when using sets. Example: First write to set
writes elements that have been hashed at 10 and 20, and the second write
writes elements that have been hashed at 30 and 40. While the set length
has been correctly set at 2, since a set is basically a map (as is the
entire map writer) and map results are non-deterministic, reads to this
set will now deliver unstable results, as the map contents are returned
to the caller non-deterministically - sometimes you may correctly get 30
and 40, but sometimes you may get 10 and 20, or even 10 and 30, etc.
This problem propagates to state which is even more damaging as unstable
results are set to state where they become part of the permanent data
set going forward.
The problem also applies to lists and maps. It is probably more of an
issue with maps, since a map can contain any key/value combination and
there is no predictable pattern by which keys would be overwritten with
default or zero values. Complex lists have this problem as well, but
because lists are deterministic and the length of a list is properly
updated during the overwrite, the problem is masked: a read will only
read up to the boundary of the list, skipping any stale data that may
still be present past the list boundary.
This update clears the child contents of any set, list, or map before
beginning a new write to address this issue. Tests are included for all
three data types.
This keeps CustomizeDiff from being defined on data sources, where it
would be useless. We just catch this in InternalValidate like the rest
of the CRUD functions that are not used in data sources.
Added more detail to CustomizeDiff's documentation comments. The new
comments detail how CustomizeDiff will be called in the event of
different scenarios like creating a new resource, diffing an existing
one, diffing an existing resource that has a change that requires a new
resource, and destroy/tainted resources.
Also added similar detail to ForceNew in ResourceDiff.
This should help mitigate any confusion that may come up when using
CustomizeDiff, especially in the ForceNew scenario when the second run
happens with no state.
setDiff does not make use of its new parameter anymore, so it has been
removed. Also, as there is no more SetDiff (exported) function, mentions
of that have been removed from comments too.
This fixes nil pointer issues that could come up if an invalid key was
referenced (ie: not one in the schema). Also ships a helper validation
function to streamline things.
Restoring the naming of this field in the resource back to
CustomizeDiff, as this is generally more descriptive of the process
that's happening, despite the lengthy name.
The old comments said that this interface was API compatible with
terraform.ResourceProvider's Diff method - with the addition of passing
down meta to it, this is no longer the case.
Not too sure if this is really a big deal - schema.Resource never fully
implemented terraform.ResourceProvider, from what I can see, and the
path from Provider.Diff to Resource.Diff is still pretty clear. Just
wanted to remove an outdated comment.
This could panic if we sent a parent that was actually longer than the
child (should almost never come up, but the guard will make it safe
anyway).
Also fixed a slice style lint warning in UpdatedKeys.
The consensus is that it's generally a better idea to move the setting
of old values entirely to the refresh/read process, hence it's moot to
have this function around anymore. This also means we
don't need the old value reader/writer anymore, which simplifies things.
When working on this initially, I think I thought that since NewComputed
values in the diff were empty strings, that it was using the zero value.
After review, it doesn't seem like this is the case - so I have adjusted
NewComputed to pass nil values. There is also a guard now that keeps the
new value writer from accepting computed fields with non-nil values.
To keep with the current convention of most other schema.Resource
functional fields being fairly short, CustomizeDiff has been changed to
"Review". It would be "Diff", however it is already used by existing
functions in schema.Provider and schema.Resource.
Both Destroy and DestroyDeposed are not propagated down the diff stack,
meaning that there is no way we can tell at this point if an instance is
being destroyed or deposed, so this check would never be used.
In this regard, Destroy never runs a diff down the stack at all, and a
deposition check is not run until *after* the provider's diff function
is called. To answer this question and close it off, we could either
determine if a resource is deposed earlier, and propagate that down, or
treat deposed resources like full destroy nodes and not diff them at
all (instead producing a diff whose only content is the DestroyDeposed
flag).
Added a few more test cases for CustomizeDiff, caught another error in
the process. I think this is ready for review now, and possibly some
real-world testing of the waters by way of porting some resources that
would benefit from the feature.
It's alive! CustomizeDiff logic now has been inserted into the diff
process. The test_resource_with_custom_diff resource provides some basic
testing and a reference implementation.
There should now be plenty of test coverage for this feature via the
tests added for ResourceDiff, and the basic test added to the
schemaMap.Diff test, and the test resource, but more can be added to
test any specific case that comes up otherwise.
As the interface has been specced out for ResourceDiff, the only diff
operation that can function on a non-computed key is ForceNew, and that
is only if a diff exists for that key. As such, allowing the user to
clear keys that cannot be operated on afterwards does not really make
much sense.
Will re-add this function if it is determined it's needed after all on
review.
Final set of coverage for ResourceDiff and bug fixes to correct issues
the test cases brought up.
Will be dropping ClearAll in next commit, as with how this interface is
intended to be used, it does not really make much sense.
This provides a deep copy of a schemaMap, which will be needed for the
diff customization process as ResourceDiff will be able to flag fields
as ForceNew - we don't want to affect the source schema.
In diffString, removed values are marked if the old value is non-nil and
the new value is nil, regardless of if the new value is marked as a
computed value. With the new diff override behaviour, this can lead to
issues where a nil attribute may get inserted into the diff if the new
value has been marked as computed (as the value will be marked as
missing, but computed).
This propagates into finalizeDiff in some respects because even if
NewComputed is manually set in the diff that gets passed in, it is
ignored, and the schema becomes the only source of truth. Since our new
diff behaviour is mainly designed to be supported on computed keys only,
it stands to reason that there will be a time where we want to set a
diff as NewComputed on a key, and see that change in the diff.
These two small changes make that happen. No regressions in tests have
been observed via this change, so it seems safe and non-invasive.
The beginning of tests for SetNew. Noticed that removed values are not
logged for computed keys in a diff - not too sure if that is going to be
a problem (possibly not as GetChange in ResourceData should still
reference state for the old value). I came on this most notably in the
"TestSetNew/complex-ish_set_diff" test, for reference.
More tests pending for other functions for coverage of other functions
and test cases as defined in #8769.
This ensures that when we hook this into the main diff logic, that we
can just re-diff the keys that are modified, ensuring that the diff is
not re-run on keys that were not touched, or on keys that were
intentionally removed.
This should complete the feature set of the prototype. This function
removes a specific key from the existing diff, preventing conflicts.
Further functionality might be needed to make this behave as expected,
namely ensuring that we don't re-process the whole diff after the
CustomizeDiffFunc runs.
This new prototype removes the dependence on an underlying ResourceData,
moving several items up to ensure an approach that more matches
ResourceData. Namely, we create our own MultiLevelFieldReader that
also layers updated diff values on top. New values use a small
re-implementation of the MapFieldWriter/Reader that allows computed
values to be set.
The assumption here now is that a second diff will happen after the
first one is done, processing the new values set in the 2 new
reader/writer levels and updating the diff as necessary.
This adds a new object, ResourceDiff, to the schema package. This
object, in conjunction with a function defined in CustomizeDiff in the
resource schema, allows for the in-flight customization of a Terraform
diff. This helps support use cases such as when there are necessary
changes to a resource that cannot be detected in config, such as via
computed fields (most of the utility in this object works on computed
fields only). It also allows for a wholesale wipe of the diff to allow
for diff logic completely offloaded to an external API, if it is a
better use case for a specific provider.
As part of this work, many internal diff functions have been moved to
use a special resourceDiffer interface, to allow for shared
functionality between ResourceDiff and ResourceData. This may be
extended to the DiffSuppressFunc as well which would restrict use of
ResourceData in DiffSuppressFunc to generally read-only fields.
This work is not yet in its final state - CustomizeDiff is not yet
implemented in the general diff workflow, new functions may be added
(notably Clear() for a single key), and functionality may be altered.
Tests will follow as well.
Update the command package to use the new module storage. Move the old
command output strings into the module storage itself. This could be
moved back later either by using ui callbacks, or designing a module
storage interface once we know what the final requirements will look
like.
In order to parse provider, resource and data source configuration from
HCL2 config files, we need to know the relevant configuration schema.
This new method allows Terraform Core to request these from a provider.
This is a breaking change to this interface, so all of its implementers
in this package are updated too. This includes concrete implementations
of the new method in helper/schema that use the schema conversion code
added in an earlier commit to produce a configschema.Block automatically.
Plugins compiled against prior versions of helper/schema will not have
support for this method, and so calls to them will fail. Callers of
this new method will therefore need to sniff for support using the
SchemaAvailable field added to both ResourceType and DataSource.
This careful handling will need to persist until next time we increment
the plugin protocol version, at which point we can make the breaking
change of requiring this information to be available.
Uses Levenshtein distance to decide if the input is similar enough to one
of the given suggestions, and returns that suggestion if so.
The distance threshold of three was arrived at experimentally, and has
no objective basis.
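The commit doesn't include the helper itself here, so the following is only an illustrative sketch of the approach (the names are made up):
```go
package didyoumean

// NameSuggestion returns the suggestion closest to the given input, or ""
// if nothing is within the (experimentally chosen) distance threshold of 3.
func NameSuggestion(given string, suggestions []string) string {
	for _, s := range suggestions {
		if levenshtein(given, s) <= 3 {
			return s
		}
	}
	return ""
}

// levenshtein is a simple two-row dynamic-programming edit distance.
func levenshtein(a, b string) int {
	ar, br := []rune(a), []rune(b)
	prev := make([]int, len(br)+1)
	cur := make([]int, len(br)+1)
	for j := range prev {
		prev[j] = j
	}
	for i := 1; i <= len(ar); i++ {
		cur[0] = i
		for j := 1; j <= len(br); j++ {
			cost := 1
			if ar[i-1] == br[j-1] {
				cost = 0
			}
			d := cur[j-1] + 1 // insertion
			if prev[j]+1 < d {
				d = prev[j] + 1 // deletion
			}
			if prev[j-1]+cost < d {
				d = prev[j-1] + cost // substitution
			}
			cur[j] = d
		}
		prev, cur = cur, prev
	}
	return prev[len(br)]
}
```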
We don't currently have any need for this information, but we're
propagating it out of helper/schema here pre-emptively so that once we
later have a use for it we will not need to rebuild the providers to gain
access to it.
The long-term expected use-case for this is to have Terraform Core use
static analysis techniques to trace the path of sensitive data through
interpolations so that intermediate results can be flagged as sensitive
too, but we have a lot more work to do before such a thing would actually
be possible.
As part of moving to the next-generation HCL implementation,
Terraform Core is getting its own representation of configuration schema
that is tailored for configuration-processing use-cases. The capabilities
of this are a subset of the helper/schema model primarily concerned with
the configuration structure and value types, leaving detailed validation
and defaults for helper/schema to still solve.
These new methods allow mechanical creation of a schema in the new Core
schema model from a schema expressed in the helper/schema model. This is
not yet used as of this commit, but will be used later to implement some
new ResourceProvider methods that will allow core to obtain the schema
for provider, resource and data source configuration while remaining
source-compatible with existing provider implementations.
This adds NoZeroValues, a small SchemaValidateFunc that checks that a
defined value is not a zero value. It's useful for situations where you
want to keep someone from explicitly entering a zero value (ie:
literally "0", or an empty string) on a required field, and want to
catch it in the validation stage, versus during apply using GetOk.
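Conceptually the check is as simple as this sketch (the actual error wording may differ):
```go
package validation

import "fmt"

// NoZeroValues is a SchemaValidateFunc that rejects the zero value for the
// field's type: "" for strings, 0 for ints and floats.
func NoZeroValues(i interface{}, k string) (ws []string, errs []error) {
	switch v := i.(type) {
	case string:
		if v == "" {
			errs = append(errs, fmt.Errorf("%s must not be empty", k))
		}
	case int:
		if v == 0 {
			errs = append(errs, fmt.Errorf("%s must not be zero", k))
		}
	case float64:
		if v == 0 {
			errs = append(errs, fmt.Errorf("%s must not be zero", k))
		}
	}
	return
}
```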
Add an ImportStateIdFunc field to the ImportState testing functionality.
This will allow for more powerful generation of complex import state IDs
that can't be accomplished by ImportStateId or ImportStateIdPrefix
themselves.
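Usage looks roughly like this inside a TestStep (the resource and attribute names here are made up for illustration):
```go
resource.TestStep{
	ResourceName:      "example_thing.foo",
	ImportState:       true,
	ImportStateVerify: true,
	// Compose an import ID from other values in state, which a static
	// ImportStateId or ImportStateIdPrefix cannot express.
	ImportStateIdFunc: func(s *terraform.State) (string, error) {
		rs, ok := s.RootModule().Resources["example_thing.foo"]
		if !ok {
			return "", fmt.Errorf("resource not found in state")
		}
		return rs.Primary.Attributes["cluster_id"] + "/" + rs.Primary.ID, nil
	},
},
```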
Go 1.9 adds this new function which, when called, marks the caller as
being a "helper function". Helper function stack frames are then skipped
when trying to find a line of test code to blame for a test failure, so
that the code in the main test function appears in the test failure output
rather than a line within the helper function itself.
This covers many -- but probably not all -- of our test helpers across
various packages.
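The pattern in each helper is just a one-line addition at the top; for example (the helper name is illustrative):
```go
package example

import "testing"

// testCheckEqual is a test helper; calling t.Helper() makes a failure report
// the caller's line rather than the Fatalf below.
func testCheckEqual(t *testing.T, got, want string) {
	t.Helper()
	if got != want {
		t.Fatalf("wrong value: got %q, want %q", got, want)
	}
}
```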
Add the ValidateListUniqueStrings function, which is a ValidateFunc that
ensures a list has no duplicate items in it. It's useful for when a list
is needed over a set because order matters, yet the items still need to
be unique.
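A sketch of the check, assuming the whole list is handed to the ValidateFunc as a []interface{} of strings:
```go
package schema

import "fmt"

// ValidateListUniqueStrings errors if the given list contains the same
// string value more than once.
func ValidateListUniqueStrings(v interface{}, k string) (ws []string, errs []error) {
	seen := map[string]bool{}
	for _, raw := range v.([]interface{}) {
		s, ok := raw.(string)
		if !ok {
			errs = append(errs, fmt.Errorf("%q: all items must be strings", k))
			continue
		}
		if seen[s] {
			errs = append(errs, fmt.Errorf("%q: duplicate entry - %s", k, s))
			continue
		}
		seen[s] = true
	}
	return
}
```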
This adds a ValidateRegexp validation helper to check to see if a field
has a valid regular expression.
Includes test for good regexp, and test for bad regexp.
Equality of schema.Sets gets tricky when dealing with nested sets -
Set.Equal only superficially compares the underlying maps and hence any
sets nested under the root sets cause issues.
This adds a simple method, HashEqual, that does a top-level hash
comparison, helping to work around this without any complex re-invention
of things like reflect.DeepEqual.
Of course, in order to make effective use of this function, the user
needs to make sure they are properly hashing their nested sets, however
this is trivial with things like HashResource.
Adds `GetOkRaw` as a schema function. This should only be used to verify
that boolean attributes are either set or not set, regardless of whether
their value is the zero value for their type. There are a few small use
cases outside of the boolean type where this will be helpful as well.
Overall, this shouldn't detract from the zero-value checks that `GetOk()`
currently has, and should only be used when absolutely needed. However,
there are enough use cases that need to check existence without checking
for the zero value of the type that this addition is needed.
Primary use case is for a boolean attribute that is `Optional` and `Computed`,
without a default value. There's currently no way to verify that the boolean
attribute was explicitly set to the zero-value literal with the current
`GetOk()` function. This new function allows for that check, keeping the
`Computed` check for the returned `exists` boolean.
```
$ make test TEST=./helper/schema TESTARGS="-run=TestResourceDataGetOkRaw"
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/08/02 11:17:32 Generated command/internal_plugin_list.go
go test -i ./helper/schema || exit 1
echo ./helper/schema | \
xargs -t -n4 go test -run=TestResourceDataGetOkRaw -timeout=60s -parallel=4
go test -run=TestResourceDataGetOkRaw -timeout=60s -parallel=4 ./helper/schema
ok github.com/hashicorp/terraform/helper/schema 0.005s
```
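The intended call pattern, sketched (the attribute name and surrounding helper are illustrative):
```go
// "enabled" is Optional + Computed with no default. GetOk cannot tell an
// explicit `enabled = false` apart from the attribute being omitted, but
// GetOkRaw reports whether the value was actually set.
if v, set := d.GetOkRaw("enabled"); set {
	applyEnabledSetting(v.(bool)) // hypothetical helper
}
```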
The field reader code path is extremely inefficient, but refactoring
it all is much too invasive a change at the moment.
Have DiffFieldReader internally cache results for ReadField.
Provider import tests previously didn't have to supply a config, but
terraform now requires the provider to be declared for discovery.
testProviderConfig returns a stub config with provider blocks based
on the TestCase Providers. This allows basic import tests in providers
to remain unchanged.
The Close methods on shadow.Values require pointer receivers because
they contain a sync.Mutex, but that value was being copied through
Value.Interface by the closeWalker. Because reflectwalk passes the
struct fields to the StructField method as they are defined in the
struct, and they may have been read as a value, we can't immediately
call Interface() to check the method set without possibly copying the
internal mutex values. Use the Implements method to first check if we
need to call Interface, and if it's not, then we can check if the value
is addressable.
Because of this use of reflection, we can't vet for the copying of these
locks. The minimal amount of code in the Close method left us only with
a race detected within the mutex itself, which leads to a stacktrace
pointing to the runtime rather than our code.
It turns out that `d.GetOk` also returns `false` when the user _did_ actually supply a value for it in the config, but the value itself needs to be evaluated before it can be used.
So instead of passing a `ResourceData` we now pass a `ResourceConfig`
which makes much more sense for doing the validation anyway.
The timestamp prefix added in #8249 was removed in #10152 to ensure that
returned IDs really are properly ordered. However, this meant that IDs were no
longer ordered over multiple invocations of terraform, which was the main
motivation for adding the timestamp in the first place. This commit does a
hybrid: timestamp-plus-incrementing-counter instead of just incrementing counter
or timestamp-plus-random.
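A sketch of the hybrid scheme (illustrative only - the real helper/resource implementation differs in formatting and counter width):
```go
package resource

import (
	"fmt"
	"sync"
	"time"
)

var (
	idMutex   sync.Mutex
	idCounter uint64
	lastTime  time.Time
)

// prefixedUniqueId returns IDs that sort in creation order both within a
// single run (via the counter) and across runs (via the timestamp), even if
// the wall clock momentarily steps backwards.
func prefixedUniqueId(prefix string) string {
	idMutex.Lock()
	defer idMutex.Unlock()

	now := time.Now().UTC().Truncate(time.Second)
	if now.After(lastTime) {
		lastTime = now
		idCounter = 0
	} else {
		idCounter++ // same second (or clock went backwards): bump the counter
	}
	return fmt.Sprintf("%s%s-%08d", prefix, lastTime.Format("20060102150405"), idCounter)
}
```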
Rather than providing an already-resolved map of plugins to core, we now
provide a "provider resolver" which knows how to resolve a set of provider
dependencies, to be determined later, and produce that map.
This requires the context to be instantiated in a different way, so this
very noisy diff is a mostly-mechanical update of all of the existing
places where contexts get created for testing, using some adapted versions
of the pre-existing utilities for passing in mock providers.
Previously having a config was mutually exclusive with running an import,
but we need to provide a config so that the provider is declared, or else
we can't actually complete the import in the future world where providers
are installed dynamically based on their declarations.
* provider/aws: Add Sweeper setup, Sweepers for DB Option Group, Key Pair
* provider/google: Add sweeper for any leaked databases
* more recursion and added LC sweeper, to test out the Dependency path
* implement a dependency example
* implement sweep-run flag to filter runs
* stub a test for TestMain
* test for multiple -sweep-run list
GH-14784 allowed nested structures to be validated, rather than relying
on the raw value. However this still returns the same validation error
if the structures contain a computed value, since Get will return the
raw string in that case.
This simply skips the validation in the IsComputed case, since there's
nothing that can be checked.
When interpreting a nested object, we were validating against the "raw"
value, and not the interpolated value, causing incorrect errors.
This affects structures such as:
```tf
tags = "${list(map("foo", "bar"))}"
```
Prior to this, the user got a complaint about "expected object, got
string", since the raw value is obviously a string even though the
interpolated value is the correct shape.
The tests did pass, but that was because they only tested part of the changes. By using the `schema.TestResourceDataRaw` function the schema and config are better tested and so they pointed out a problem with the schema of the Chef provisioner.
The `Elem` fields did not have a `*schema.Schema` but a `schema.Schema` and in an `Elem` schema only the `Type` field may (and must) be set. Any other fields like `Optional` are not allowed here.
Next to fixing that problem I also did a little refactoring and cleaning up. Mainly making the `ProvisionerS` private (`provisioner`) and removing the deprecated fields.
1. Migrate `chef` provisioner to `schema.Provisioner`:
* `chef.Provisioner` structure was renamed to `ProvisionerS` and now it's decoded from `schema.ResourceData` instead of `terraform.ResourceConfig` using a simple copy-paste-based solution;
* Added simple schema without any validation yet.
2. Support `ValidateFunc` validate function : implemented in `file` and `chef` provisioners.
Besides the support for DO certificates themselves, this commit also
includes:
1) A new `RandTLSCert` function that generates a valid, self-signed TLS
certificate to be used in the test
2) A fix for the PEM encoding of the private key generated in
`RandSSHKeyPair`: the PEM was always empty
If a schema.TypeList had a Schema with ForceNew, and if that list was
NewComputed, the diff would not have RequiresNew set. This caused apply
to fail with a "diffs didn't match" error because of the change to
RequiresNew.
Set the RequiresNew field on the list's ResourceAttrDiff based on the
Schema value.
stringer has changed the boilerplate it generates in a recent version.
We'd previously updated to the new format but accidentally rolled back
to the old while merging a long-running feature branch.
This restores us back to the new format again.
A couple tests require lowering the grace period to keep the test from
taking the full 30s timeout.
The Retry_hang test also needed to be removed from the Parallel group,
because it modifies the global refreshGracePeriod variable.
Refresh calls may have side effects that need to be recorded if it
succeeds, which is especially common when WaitForState is called from
resource.Retry.
If the WaitForState timeout is reached and there is a Refresh call
in-flight, wait up to refreshGracePeriod (set to 30s) for it to
complete.
This test unfortunately relies on the timing of the loops in
WaitForState, and the text of the error message. Adjust the timing so
the timeout isn't an even multiple of the poll interval, and make sure
we reach a minimum number of retries.
Make sure that we can cancel the WaitForState refresh loop when reaching
a timeout, otherwise it may run indefinitely. There's no need to try and
store and read the Result concurrently, just pass the value over a
channel.
* Revert #11245, #11321, #11498 and #11757
These PRs are all related to issue #11170, for which I would like to propose a different solution than the one currently implemented.
* A different approach to solve #11170
This approach has (IMHO) a few advantages with regards to the solution currently implemented. I will elaborate on this in the PR.
* provider/aws: Add failing test for EMR Bootstrap Actions
* aws_emr_cluster: Fix bootstrap action parameter ordering
* provider/aws: Fix EMR Bootstrap arguments
* provider/aws: Args needs to be ForceNew, because we can't update them
Discussion in #9512 revealed that some of the comments here were
inaccurate and that the comments here did not paint a complete enough
picture of the behavior and expectations of Default and DefaultFunc.
This is a comments-only change that aims to clarify the situation and
call attention to the fact that the defaults only affect the handling of
the configuration and that changes to defaults may require migration of
existing resource states.
This closes #9512.
Adds the `ImportStateIdPrefix` field for import acceptance tests. There are (albeit fairly rare) import cases where a resource needs to be imported with a combination of the resource's ID and a known string prefix. This allows the developer to specify the known prefix, and omit the `ImportStateId` field.
```
$ make test TEST=./helper/resource TESTARGS="-run=TestTest_importStateIdPrefix"
==> Checking that code complies with gofmt requirements...
==> Checking AWS provider for unchecked errors...
==> NOTE: at this time we only look for uncheck errors in the AWS package
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/03/30 18:08:36 Generated command/internal_plugin_list.go
go test -i ./helper/resource || exit 1
echo ./helper/resource | \
xargs -t -n4 go test -run=TestTest_importStateIdPrefix -timeout=60s -parallel=4
go test -run=TestTest_importStateIdPrefix -timeout=60s -parallel=4 ./helper/resource
ok github.com/hashicorp/terraform/helper/resource 0.025s
```
golang/tools commit 23ca8a263 changed the format of the leading comment
to comply with some new standards discussed here:
https://golang.org/issue/13560
This is the result of running generate with the latest version of
stringer. Everyone working on Terraform will need to update stringer
after this is merged, to avoid reverting this:
go get -u golang.org/x/tools/cmd/stringer
Sometimes when waiting on a target state, the set of valid states
through which a value will transition is unknown. This commit adds
support for an empty Pending slice and will treat any states that are not
the target as valid provided the timeouts are not exceeded.
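For example, a StateChangeConf where the intermediate states aren't known up front might now look like this (the API client here is hypothetical):
```go
conf := &resource.StateChangeConf{
	Pending: []string{},            // intermediate states are unknown
	Target:  []string{"available"}, // only the target state matters
	Timeout: 10 * time.Minute,
	Refresh: func() (interface{}, string, error) {
		thing, err := client.GetThing(id) // hypothetical API call
		if err != nil {
			return nil, "", err
		}
		return thing, thing.Status, nil
	},
}
if _, err := conf.WaitForState(); err != nil {
	return err
}
```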
This commit adds a new RandIntRange function to the helper/acctest
package for generating random integers in a given range. This is useful
when randomizing test spaces with a constrained range (e.g. 0-4095).
If the list was marked as computed, all values will be raw config
values. Fetch the individual keys from the config to get any known
values before validating.
Many cloud services prevent duplicate key pairs with different names.
Among them are Digital Ocean, Joyent Triton and Packet. Consequently, if
tests leave dangling resources it is not enough to simply randomise the
name, the entire key material must be regenerated.
This commit adds a helper method that returns a new randomly generated
key pair, where the public key material is formatted in OpenSSH
"authorized keys" format, and the private key material is PEM encoded.
The Required||Optional logic in schemaMap.Input was incorrect, causing
it to always request input. Fix the logic, and the associated tests
which were passing "just because".
This augments backend-config to also accept key=value pairs.
This should make Terraform easier to script rather than having to
generate a JSON file.
You must still specify a minimal configuration with the backend type,
for example:
```
terraform { backend "consul" {} }
```
This is required because Terraform needs to be able to detect the
_absence_ of that value for unsetting, if that is necessary at some
point.
helper/schema: Rename Timeout resource block to Timeouts
- Pluralize configuration argument name to better represent that there is
one block for many timeouts
- use a const for the configuration timeouts key
- update docs
the terraform package doesn't know about TestProvider, so don't put the
hooks in terraform.MockResourceProvider. Wrap the mock in the test where
we need to check the TestProvider functionality.
Call all ResourceProviderFactories and reset the providers before tests.
Store the provider and/or the error in a fixed factory function to be
returned again later.
Provider.TestReset resets the internal state of the Provider at the
start of a test. This also adds a MetaReset function field to
schema.Provider, which is called by TestReset and can be used to reset
any other state stored in the provider metadata.
This is currently used to reset the internal Context returned by
StopContext between tests, and should be implemented by a provider if
it stores a Context from a previous test.
* helper/schema: Add custom Timeout block for resources
* refactor DefaultTimeout to support multiple types. Load meta in Refresh from Instance State
* update vpc but it probably wont last anyway
* refactor test into table test for more cases
* rename constant keys
* refactor configdecode
* remove VPC demo
* remove comments
* remove more comments
* refactor some
* rename timeKeys to timeoutKeys
* remove note
* documentation/resources: Document the Timeout block
* document timeouts
* have a test case that covers 'hours'
* restore a System default timeout of 20 minutes, instead of 0
* restore system default timeout of 20 minutes, refactor tests, add test method to handle system default
* rename timeout key constants
* test applying timeout to state
* refactor test
* Add resource Diff test
* clarify docs
* update to use constants
This changes the type of values in Meta for InstanceState to
`interface{}`. They were `string` before.
This will allow richer structures to be persisted to this without
flatmapping them (down with flatmap!). The documentation clearly states
that only primitives/collections are allowed here.
The only thing using this was helper/schema for schema versioning.
Appropriate type checking was added to make this change safe.
The timeout work @catsby is doing will use this for a richer structure.
Fixes #12183
The fix is in flatmap for this but the entire issue is a bit more
complex. Given a schema with a computed set, if you reference it like
this:
lookup(attr[0], "field")
And "attr" contains a computed set within it, it would panic even though
"field" is available. There were a couple avenues I could've taken to
fix this:
1.) Any complex value containing any unknown value at any point is
entirely unknown.
2.) Only the specific part of the complex value is unknown.
I took route 2 so that the above works without any computed (since
"name" is not computed but something else is). This may actually have an
effect on other parts of Terraform configs, however those similar
configs would've simply crashed previously so it shouldn't break any
pre-existing configs.
To avoid chasing down issues like #11635 I'm proposing we disable the
shadow graph for end users now that we have merged in all the new
graphs. I've kept it around and default-on for tests so that we can use
it to test new features as we build them. I think it'll still have value
going forward but I don't want to hold us for making it work 100% with
all of Terraform at all times.
I propose backporting this to 0-8-stable, too.
Before this patch it was not possible to test for a key in a map where
the value is an empty string. With this patch, however, it is now
possible to write a check like:
```
resource.TestCheckResourceAttr("res.name", "mymap.KeyWithEmptyValue", ""),
```
To test that `KeyWithEmptyValue` is a valid key in `mymap`.
Fixes #10716
This trims whitespace around the key in the `-var` flag.
This is a regression from 0.7.x.
The value is whitespace sensitive unless double-quoted. This is the same
behavior as 0.7.x. I considered rejecting whitespace around the '='
completely but I don't want to introduce a backwards-compatibility break, and the behavior
seems quite obvious to me.
This commit extracts the GPG code used for aws_iam_user_login_profile
into a library that can be reused for other resources, and updates the
call sites appropriately.
Accessing an interpolated value in a map through ConfigFieldReader can
fail, because GetRaw can't access interpolated values, so check if the
value exists at all by looking in the config. If the config has a value,
assume our map's value is interpolated and proceed as such.
We also need to lookup the correct schema to properly read a field from
a nested structure.
- Maps previously always defaulted to TypeString. Now check if Elem is a
ValueType and use that if applicable
- Lists now return the schema for nested element types, defaulting to a
TypeString like maps.
This only allows maps and lists to be nested one level deep, and the
inner map or list must only contain string values.
Fixes #10266
panicwrap was using Extrafiles to get the original standard streams for
`terraform console`. This doesn't work on Windows. Instead, we must use
the Win32 APIs to get the exact handles.
Needed due to work done in 95d37ea, we may need to adjust
hasComputedSubKeys to propagate NewComputed in the same way that we
have added "~", however will wait for comment from @mitchellh.
This covers:
* Complex sets with computed fields in a set
* Complex lists with computed fields in a set
Adding a test to test basic lists with computed fields seemed to fail,
but possibly for an unrelated reason (the list returned as nil). The fix
to this particular case may be out of the scope of this specific
issue.
Reference gist and details in hashicorp/terraform#9171.
This fixes some edge-ish cases where a set in a config has a set or list
in it that contains computed values, but non-set or list values in the
parent do not.
This can cause "diffs didn't match during apply" errors in a scenario
such as when a set's hash is calculated off of child items (including
any sub-lists or sets, as it should be), and the hash changes between
the plan and apply diffs due to the computed values present in the
sub-list or set items. These will be marked as computed, but due to the
fact that the function was not iterating over the list or set items
properly (ie: not adding the item number to the address, so
set.0.set.foo was being yielded instead of set.0.set.0.foo), these
computed values were not being properly propagated to the parent set to
be marked as computed.
Fixes hashicorp/terraform#6527.
Fixes hashicorp/terraform#8271.
This possibly fixes other non-CloudFront related issues too.
UniqueId attempted to provide an ordered unique id by using a nanosecond
timestamp, but doesn't take into account that time is not monotonic
increasing. This provides an implementation that will always be
increasing.
Fixes #10125
If the elements are computed and the field is ForceNew, then we should
mark the computed count as potentially forcing a new operation.
Example, assuming `groups` forces new...
**Step 1:**
groups = ["1", "2", "3"]
At this point, the resource isn't created, so this should result in a
diff like:
CREATE resource:
groups: "" => ["1", "2", "3"]
**Step 2:**
groups = ["${computedvar}"]
The OLD behavior was:
UPDATE resource
groups.#: "3" => "computed"
This would cause a diff mismatch because if `${computedvar}` was
different then it should force new. The NEW behavior is:
DESTROY/CREATE resource:
groups.#: "3" => "computed" (forces new)
Fixes #10122
The simple fix was that we forgot to close `ReadDataApply` for the
provider. But I've always felt that this section of the code was brittle
and I wanted to put in a more robust solution. The `shadow.Close` method
uses reflection to automatically close all values.
This turns the new graphs on by default and puts the old graphs behind a
flag `-Xlegacy-graph`. This effectively inverts the current 0.7.x
behavior with the new graphs.
We've incubated most of these for a few weeks now. We've found issues
and we've fixed them and we've been using these graphs internally for
awhile without any major issue. It's time to default them on and get them
part of a beta.
Fixes #2748
This changes the diff to only mark "forces new resource" on the fields
that actually caused the new resource, not every field that changed.
This makes diffs much more accurate.
I'd like to request a review but I'm going to defer merging until
Terraform 0.8. Changes like this are very possible to cause "diffs
didn't match" errors and I want some real world testing in a beta before
we hit prod with this.
Fixes #7715
If a bool field was computed and the raw value was not convertable to a
boolean, helper/schema would crash. The correct behavior is to try not
to read the raw value when the value is computed and to simply mark that
it is computed. This does that (and matches the behavior of the other
primitives).
Fixes #5138
If an item is optional and is removed completely from the configuration,
it should still trigger a destroy/create if the field itself was marked
as "ForceNew".
See the example in #5138.
This creates a standard package and interface for defining, querying,
and setting experiments (`-X` flags).
I expect we'll want to continue to introduce various features behind
experimental flags. I want to make doing this as easy as possible and I
want to make _removing_ experiments as easy as possible as well.
The goal with this package has been to rely on the compiler enforcing our
experiment references as much as possible. This means that every
experiment is a global variable that must be referenced directly, so
when it is removed you'll get compiler errors where the experiment is
referenced.
This also unifies and makes it easy to grab CLI flags to enable/disable
experiments as well as env vars! This way defining an experiment is just
a couple lines of code (documented on the package).
Fixes #3309
There are two primary changes, one to how helper/schema creates diffs
and one to how Terraform compares diffs. Both require careful
understanding.
== 1. helper/schema Changes
helper/schema, given any primitive field (string, int, bool, etc.)
_used to_ create a basic diff when given a computed new value (i.e. from
an unknown interpolation). This would put in the plan that the old value
is whatever the old value was, and the new value was the actual
interpolation. For example, from #3309, the diff showed the following:
```
~ module.test.aws_eip.test-instance.0
instance: "<INSTANCE ID>" => "${element(aws_instance.test-instance.*.id, count.index)}"
```
Then, when running `apply`, the diff would be realized and you would get
a diff mismatch error because it would realize the final value is the
same and remove it from the diff.
**The change:** `helper/schema` now marks unknown primitive values with
`NewComputed` set to true. Semantically this is correct for the diff to
have this information.
== 2. Terraform Diff.Same Changes
Next, the way Terraform compares diffs needed to be updated.
Specifically, the case where the diff from the plan had a NewComputed
primitive and the diff from the apply _no longer has that value_. This
is possible if the computed value ended up being the same as the old
value. This is allowed to pass through.
Together, these fix #3309.
This reverts commit c3a4cff133, reversing
changes made to 791a02e6e4.
This change requires plugin recompilation and we should hold off until a
minor release for that.
This commit implements reusable functions for when resources have no
need to implement a particular operation:
- Noop - does nothing and returns no error.
- RemoveFromState - sets the resource ID to empty string (removing it
from state) and returns no error.
This is required for the times when a resource cannot be left with an
empty configuration. An example would be in AzureRM: when you create a
LoadBalancer with a configuration, you can delete *all* but one of these
configurations.
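The two functions are essentially one-liners; roughly:
```go
package schema

// Noop is a convenience implementation that can be used for any CRUD
// operation that has nothing to do.
func Noop(*ResourceData, interface{}) error {
	return nil
}

// RemoveFromState clears the resource's ID, which removes it from state,
// and returns no error.
func RemoveFromState(d *ResourceData, _ interface{}) error {
	d.SetId("")
	return nil
}
```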
This changes the key for the storage to be the _raw_ source from the
module, not the fully expanded source. Example: it'll be a relative path
instead of an absolute path.
This allows the ".terraform/modules" directory to be portable when
moving to other machines. This was a behavior that existed in <= 0.7.2
and was broken with #8398. This amends that and adds a test to verify.
This commit adds a new callback, DiffSuppressFunc, to the schema.Schema
structure. If set for a given schema, a callback to the user-supplied
function will be made for each attribute for which the default
type-based diff mechanism produces an attribute diff. Returning `true`
from the callback will suppress the diff (i.e. pretend there was no
diff), and returning false will retain it as part of the plan.
There are a number of motivating examples for this - one of which is
included as an example:
1. On SSH public keys, trailing whitespace does not matter in many
cases - and in some cases it is added by provider APIs. For
digitalocean_ssh_key resources we previously had a StateFunc that
trimmed the whitespace - we now have a DiffSuppressFunc which
verifies whether the trimmed strings are equivalent.
2. IAM policy equivalence for AWS. A good proportion of AWS issues
relate to IAM policies which have been "normalized" (used loosely)
by the IAM API endpoints. This can make the JSON strings differ
from those generated by iam_policy_document resources or template
files, even though the semantics are the same (for example,
reordering of `bucket-prefix/` and `bucket-prefix/*` in an S3
bucket policy). DiffSuppressFunc can be used to test for semantic
equivalence rather than pure text equivalence, but without having to
deal with the complexity associated with a full "provider-land" diff
implementation without helper/schema.
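For the SSH key example, the usage inside a resource schema looks roughly like this:
```go
"public_key": {
	Type:     schema.TypeString,
	Required: true,
	// Some provider APIs append trailing whitespace to the stored key;
	// treat values that differ only by surrounding whitespace as equal.
	DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool {
		return strings.TrimSpace(old) == strings.TrimSpace(new)
	},
},
```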
The WaitForState method can't read the result values in a timeout
because they are still owned by the running goroutine. Keep all values
scoped inside the goroutine, and save them into an atomic.Value to be
returned.
Fixes race introduced in #8510
This means that two resources created by the same rule will get names
which sort in the order they are created.
The rest of the ID is still random base32 characters; we no longer set
the bit values that denote UUIDv4.
The length of the ID returned by PrefixedUniqueId is not changed by this
commit; that way we don't break any resources where the underlying
resource has a name length limit.
Fixes #8143.
This commit adds a function which composes a series of TestFuncs, but
will run all tests before returning an error, unlike ComposeTestFunc.
This is useful when verifying contents of state in acceptance tests and
it is desirable to see all the failing cases in one run for slow
resources.
This commit adds a TestCheckFunc which ensures that a value is set for a
given name/key combination. It is primarily useful for ensuring that
computed values are set where it is not possible to know the expected
value ahead of time.
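Usage is a single line inside a test step's Check (the resource and attribute names are illustrative):
```go
Check: resource.ComposeTestCheckFunc(
	// passes as long as the attribute exists and is non-empty in state,
	// without needing to know the computed value ahead of time
	resource.TestCheckResourceAttrSet("example_thing.foo", "arn"),
),
```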
Fixes issue where a resource marked as tainted with no other attribute
diffs would never show up in the plan or apply as needing to be
replaced.
One unrelated test needed updating due to a quirk in the testDiffFn
logic - it adds a "type" field diff if the diff is non-Empty. NBD
Set the default log package output to ioutil.Discard during tests if the
`-v` flag isn't set. If we are verbose, then apply the filter according
to the TF_LOG env variable.
In #7170 we found two scenarios where the type checking done during the
`context.Validate()` graph walk was circumvented, and the subsequent
assumption of type safety in the provider's `Diff()` implementation
caused panics.
Both scenarios have to do with interpolations that reference Computed
values. The sentinel we use to indicate that a value is Computed does
not carry any type information with it yet.
That means that an incorrect reference to a list or a map in a string
attribute can "sneak through" validation only to crop up...
1. ...during Plan for Data Source References
2. ...during Apply for Resource references
In order to address this, we:
* add high-level tests for each of these two scenarios in `provider/test`
* add context-level tests for the same two scenarios in `terraform`
(these tests proved _really_ tricky to write!)
* place an `EvalValidateResource` just before `EvalDiff` and `EvalApply` to
catch these errors
* add some plumbing to `Plan()` and `Apply()` to return validation
errors, which were previously only generated during `Validate()`
* wrap unit-tests around `EvalValidateResource`
* add an `IgnoreWarnings` option to `EvalValidateResource` to prevent
active warnings from halting execution on the second-pass validation
Eventually, we might be able to attach type information to Computed
values, which would allow for these errors to be caught earlier. For
now, this solution keeps us safe from panics and raises the proper
errors to the user.
Fixes #7170
I noticed we had two mechanisms for unit test override. One that dropped
a sentinel into the env var, and another with a struct member on
TestCase. This consolidates the two, using the cleaner struct member
internal mechanism and the nicer `resource.UnitTest()` entry point.
Although DiffFieldReader was the one mostly responsible for the buggy
behaviour, more tests were added throughout the debugging process, most
of which would fail without the bugfix.
- ResourceData
- MultiLevelFieldReader
- MapFieldReader
- DiffFieldReader
The helper/schema framework for building providers previously validated
in all cases that each field being set in state was in the schema.
However, in order to support remote state in a usable fashion, the need
has arisen for the top level attributes of the resource to be created
dynamically. In order to still be able to use helper/schema, this commit
adds the capability to assign additional fields.
Though I do not foresee this being used by providers other than remote
state (and that eventually may move into Terraform Core rather than
being a provider), the usage and semantics are:
To opt into dynamic attributes, add a schema attribute named
"__has_dynamic_attributes", and make it an optional string with no
default value, in order that it does not appear in diffs:
"__has_dynamic_attributes": {
Type: schema.TypeString
Optional: true
}
In the read callback, use the d.UnsafeSetFieldRaw(key, value) function
to set the dynamic attributes.
Note that other fields in the schema _are_ copied into state, and that
the names of the schema fields cannot currently be used as dynamic
attribute names, as we check to ensure a value is not already set for a
given key.
This commit adds a flag to acceptance tests in order to make
appropriately named tests work during `make test` irrespective of the
TF_ACC environment variable. This should only be used on tests which are
known to be fast.
The serializeCollectionMemberForHash helper can't be called for the
MapType values, because MapType doesn't have a schema.Elem. Instead, we
can write the key/value pairs directly to the buffer. This still doesn't
allow for nested maps or lists, but we need to define that use case
before committing to it here.
The flatmapped representation of state prior to this commit encoded maps
and lists (and therefore by extension, sets) with a key corresponding to
the number of elements, or the unknown variable indicator under a .# key
and then individual items. For example, the list ["a", "b", "c"] would
have been encoded as:
listname.# = 3
listname.0 = "a"
listname.1 = "b"
listname.2 = "c"
And the map {"key1": "value1", "key2", "value2"} would have been encoded
as:
mapname.# = 2
mapname.key1 = "value1"
mapname.key2 = "value2"
Sets use the hash code as the key - for example a set with a (fictional)
hashcode calculation may look like:
setname.# = 2
setname.12312512 = "value1"
setname.56345233 = "value2"
Prior to the work done to extend the type system, this was sufficient
since the internal representation of these was effectively the same.
However, following the separation of maps and lists into distinct
first-class types, this encoding presents a problem: given a state file,
it is impossible to tell the encoding of an empty list and an empty map
apart. This presents problems for the type checker during interpolation,
as many interpolation functions will operate on only one of these two
structures.
This commit therefore changes the representation in state of maps to use
a "%" as the key for the number of elements. Consequently the map above
will now be encoded as:
mapname.% = 2
mapname.key1 = "value1"
mapname.key2 = "value2"
This has the effect of an empty list (or set) now being encoded as:
listname.# = 0
And an empty map now being encoded as:
mapname.% = 0
Therefore we can eliminate some nasty guessing logic from the resource
variable supplier for interpolation, at the cost of having to migrate
state up front (to follow in a subsequent commit).
In order to reduce the number of potential situations in which resources
would be "forced new", we continue to accept "#" as the count key when
reading maps via helper/schema. There is no situation under which we can
allow "#" as an actual map key in any case, as it would not be
distinguishable from a list or set in state.
The mapstructure library has a regrettable backward compatibility
concern whereby a WeakDecode of []interface{}{} into a target of
map[string]interface{} yields an empty map rather than an error. One
possibility is to switch to using Decode instead of WeakDecode, but this
loses the nice handling of type conversion, requiring a large volume of
code to be added to Terraform or HIL in order to retain that behaviour.
Instead we add a DecodeHook to our usage of the mapstructure library
which checks for decoding []interface{}{} or []string{} into a map and
returns an error instead.
This has the effect of defeating the code added to retain backwards
compatibility in mapstructure, giving us the correct (for our
circumstances) behaviour of Decode for empty structures and the type
conversion of WeakDecode.
The code is identical to that in the HIL library, and packaged into a
helper.
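Roughly the shape of that hook, assuming the github.com/mitchellh/mapstructure API (package name here is illustrative):
```go
package weakdecode

import (
	"errors"
	"reflect"

	"github.com/mitchellh/mapstructure"
)

// decodeHook refuses to decode any slice (including the empty one) into a
// map, which WeakDecode would otherwise silently turn into an empty map.
func decodeHook(from reflect.Kind, to reflect.Kind, data interface{}) (interface{}, error) {
	if from == reflect.Slice && to == reflect.Map {
		return nil, errors.New("cannot decode a list into a map")
	}
	return data, nil
}

// WeakDecode behaves like mapstructure.WeakDecode but with the hook above.
func WeakDecode(in, out interface{}) error {
	d, err := mapstructure.NewDecoder(&mapstructure.DecoderConfig{
		DecodeHook:       decodeHook,
		WeaklyTypedInput: true,
		Result:           out,
	})
	if err != nil {
		return err
	}
	return d.Decode(in)
}
```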
This is an effort to address hashicorp/terraform#516.
Adding the Sensitive attribute to the resource schema, opening up the
ability for resource maintainers to mark some fields as sensitive.
Sensitive fields are hidden in the output, and, possibly in the future,
could be encrypted.
This means it’s shown correctly in a plan and takes into account any
actions that are dependent on the tainted resource and, vice versa, any
actions that the tainted resource depends on.
So this changes the behaviour from saying this resource is tainted so
just forget about it and make sure it gets deleted in the background,
to saying I want that resource to be recreated (taking into account the
existing resource and its place in the graph).
When testing destroy, the test harness calls Refresh followed by Plan,
with the expectation that the resulting diff will be empty.
Data resources challenge this expectation, because they will always be
instantiated during refresh if their configuration isn't computed, and so
the subsequent diff will want to destroy what was instantiated.
To work around this, we make an exception that data resource destroy
diffs may appear in the plan but nothing else.
This fixes #6713.
This commit forward ports the changes made for 0.6.17, in order to store
the type and sensitive flag against outputs.
It also refactors the logic of the import for V0 to V1 state, and
fixes up the call sites of the new format for outputs in V2 state.
Finally we fix up tests which did not previously set a state version
where one is required.
For backward compatibility we will continue to support using the data
sources that were formerly logical resources as resources for the moment,
but we want to warn the user about it since this support is likely to
be removed in future.
This is done by adding a new "deprecation message" feature to
schema.Resource, but for the moment this is done as an internal feature
(not usable directly by plugins) so that we can collect additional
use-cases and design a more general interface before creating a
compatibility constraint.
Historically we've had some "read-only" and "logical" resources. With the
addition of the data source concept these will gradually become data
sources, but we need to retain backward compatibility with existing
configurations that use the now-deprecated resources.
This shim is intended to allow us to easily create a resource from a
data source implementation. It adjusts the schema as needed and adds
stub Create and Delete implementations.
This would ideally also produce a deprecation warning whenever such a
shimmed resource is used, but the schema system doesn't currently have
a mechanism for resource-specific validation, so that remains just a TODO
for the moment.
In the "schema" layer a Resource is just any "thing" that has a schema
and supports some or all of the CRUD operations. Data sources introduce
a new use of Resource to represent read-only resources, which require
some different InternalValidate logic.
This is a breaking change to the ResourceProvider interface that adds the
new operations relating to data sources.
DataSources, ValidateDataSource, ReadDataDiff and ReadDataApply are the
data source equivalents of Resources, Validate, Diff and Apply (respectively)
for managed resources.
The diff/apply model seems at first glance a rather strange workflow for
read-only resources, but implementing data resources in this way allows them
to fit cleanly into the standard plan/apply lifecycle in cases where the
configuration contains computed arguments and thus the read must be deferred
until apply time.
Along with breaking the interface, we also fix up the plugin client/server
and helper/schema implementations of it, which are all of the callers
used when provider plugins use helper/schema. This would be a breaking
change for any provider plugin that directly implements the provider
interface, but no known plugins do this and it is not recommended.
At the helper/schema layer the implementer sees ReadDataApply as a "Read",
as opposed to "Create" or "Update" as in the managed resource Apply
implementation. The planning mechanics are handled entirely within
helper/schema, so that complexity is hidden from the provider implementation
itself.
This adds a test and the support necessary to read from native maps
passed as variables via interpolation - for example:
```
resource ...... {
  mapValue = "${var.map}"
}
```
We also add support for interpolating maps from the flat-mapped resource
config, which is necessary to support assignment of computed maps, which
is now valid.
Unfortunately there is no good way to distinguish between a list and a
map in the flatmap. In lieu of changing that representation (which is
risky), we assume that if all the keys are numeric, this is intended to
be a list, and if not it is intended to be a map. This does preclude
maps which have purely numeric keys, which should be noted as a
backwards compatibility concern.
This commit adds support for native list variables and outputs, building
up on the previous change to state. Interpolation functions now return
native lists in preference to StringList.
List variables are defined like this:
variable "test" {
# This can also be inferred
type = "list"
default = ["Hello", "World"]
}
output "test_out" {
value = "${var.a_list}"
}
This results in the following state:
```
...
"outputs": {
"test_out": [
"hello",
"world"
]
},
...
```
And the result of terraform output is as follows:
```
$ terraform output
test_out = [
hello
world
]
```
Using the output name, an xargs-friendly representation is output:
```
$ terraform output test_out
hello
world
```
The output command also supports indexing into the list (with
appropriate range checking and no wrapping):
```
$ terraform output test_out 1
world
```
Along with maps, list outputs from one module may be passed as variables
into another, removing the need for the `join(",", var.list_as_string)`
and `split(",", var.list_as_string)` which was previously necessary in
Terraform configuration.
This commit also updates the tests and implementations of built-in
interpolation functions to take and return native lists where
appropriate.
A backwards compatibility note: previously the concat interpolation
function was capable of concatenating either strings or lists. The
strings use case was deprecated a long time ago but still remained.
Because we cannot return `ast.TypeAny` from an interpolation function,
this use case is no longer supported for strings - `concat` is only
capable of concatenating lists. This should not be a huge issue - the
type checker picks up incorrect parameters, and the native HIL string
concatenation - or the `join` function - can be used to replicate the
missing behaviour.
This adds a field terraform_version to the state that represents the
Terraform version that wrote that state. If Terraform encounters a state
written by a future version, it will error. You must use at least the
version that wrote that state.
Internally we have fields to override this behavior (StateFutureAllowed),
but I chose not to expose them as CLI flags, since the user can just
modify the state directly. This is tricky, but should be tricky to
represent the horrible disaster that can happen by enabling it.
We didn't have to bump the state format version since the absence of the
field means it was written by version "0.0.0" which will always be
older. In effect though this change will always apply to version 2 of
the state since it appears in 0.7 which bumped the version for other
purposes.
For a long time now, the diff logic has relied on the behavior of
`mapstructure.WeakDecode` to determine how various primitives are
converted into strings. The `schema.DiffString` function is used for
all primitive field types: TypeBool, TypeInt, TypeFloat, and TypeString.
The `mapstructure` library's string representation of booleans is "0"
and "1", which differs from `strconv.FormatBool`'s "false" and "true"
(which is used in writing out boolean fields to the state).
Because of this difference, diffs have long had the potential for
cosmetically odd but semantically neutral output like:
"true" => "1"
"false" => "0"
So long as `mapstructure.Decode` or `strconv.ParseBool` are used to
interpret these strings, there's no functional problem.
We had our first clear functional problem with #6005 and friends, where
users noticed diffs like the above showing up unexpectedly and causing
troubles when `ignore_changes` was in play.
This particular bug occurs down in Terraform core's EvalIgnoreChanges.
There, the diff is modified to account for ignored attributes, and
special logic attempts to handle properly the situation where the
ignored attribute was going to trigger a resource replacement. That
logic relies on the string representations of the Old and New fields in
the diff to be the same so that it filters properly.
So therefore, we now get a bug when a diff includes `Old: "0", New:
"false"` since the strings do not match, and `ignore_changes` is not
properly handled.
Here, we introduce `TypeBool`-specific normalizing into `finalizeDiff`.
I spiked out a full `diffBool` function, but figuring out which pieces
of `diffString` to duplicate there got hairy. This seemed like a simpler
and more direct solution.
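The normalization itself is tiny; conceptually it is something like this sketch, applied to the Old/New values of TypeBool attributes in finalizeDiff:
```go
// normalizeBoolString maps mapstructure's "0"/"1" representation onto the
// "false"/"true" form used when writing state, so Old and New compare
// consistently for TypeBool attributes.
func normalizeBoolString(s string) string {
	switch s {
	case "0":
		return "false"
	case "1":
		return "true"
	}
	return s
}
```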
Fixes #6005 (and potentially others!)
As I've been working through the resources, I'm finding that a lot are
going to need some serious work. Given we have hundreds, I think it
might be prudent to make this opt-in for now and we can revisit
automatic/opt-out at some future point.
Importability will likely be opt-in it appears so this will match up
with that.