When converting a legacy schemaMap to a configschema, we need to add
"id" as a required attribute to top-level resources if it's not
declared.
The "id" field will be required to interoperate with the legacy helper
schema, since the presence of an id was used to indicate the existence
of a resource.
The "config" package is no longer used and will be removed as part
of the 0.12 release cleanup. Since configschema is part of the
"new world" of configuration modelling, it makes more sense for
it to live as a subdirectory of the newer "configs" package.
Due to how deeply the configuration types go into Terraform Core, there
isn't a great way to switch out to HCL2 gradually. As a consequence, this
huge commit gets us from the old state to a _compilable_ new state, but
does not yet attempt to fix any tests and has a number of known missing
parts and bugs. We will continue to iterate on this in forthcoming
commits, heading back towards passing tests and making Terraform
fully-functional again.
The three main goals here are:
- Use the configuration models from the "configs" package instead of the
older models in the "config" package, which is now deprecated and
preserved only to help us write our migration tool.
- Do expression inspection and evaluation using the functionality of the
new "lang" package, instead of the Interpolator type and related
functionality in the main "terraform" package.
- Represent addresses of various objects using types in the addrs package,
rather than hand-constructed strings. This is not critical to support
the above, but was a big help during the implementation of these other
points since it made it much more explicit what kind of address is
expected in each context.
Since our new packages are built to accommodate some future planned
features that are not yet implemented (e.g. the "for_each" argument on
resources, "count"/"for_each" on modules), and since there's still a fair
amount of functionality still using old-style APIs, there is a moderate
amount of shimming here to connect new assumptions with old, hopefully in
a way that makes it easier to find and eliminate these shims later.
I apologize in advance to the person who inevitably just found this huge
commit while spelunking through the commit history.
The new config loader requires some steps to happen in a different
order, particularly in regard to knowing the schema in order to
decode the configuration.
Here we lean directly on the configschema package, rather than
on helper/schema.Backend as before, because it's generally
sufficient for our needs here and this prepares us for the
helper/schema package later moving out into its own repository
to seed a "plugin SDK".
We will need access to this information in order to render interactive
input prompts, and it will also be useful in returning schema information
to external tools such as text editors that have autocomplete-like
functionality.
When checking if a ResourceDiff key is valid, allow for keys that exist
on sub-blocks. This allows things like diff.Clear to be called on
sub-block fields, instead of just on top-level fields.
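For illustration, a minimal sketch (the resource and field names are hypothetical) of what this enables in a CustomizeDiff function:

```go
package example

import "github.com/hashicorp/terraform/helper/schema"

// With sub-block keys now accepted as valid, a CustomizeDiff function
// can clear the diff on a nested field, not just a top-level one.
func resourceExample() *schema.Resource {
	return &schema.Resource{
		// ... Schema elided ...
		CustomizeDiff: func(d *schema.ResourceDiff, meta interface{}) error {
			// "network_interface.0.name" addresses a field inside a sub-block.
			return d.Clear("network_interface.0.name")
		},
	}
}
```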
The `remote` backend config contains an attribute that is defined as a `*schema.Set`, but currently only `string` values are accepted, as the `config` attribute is defined as a `schema.TypeMap`.
Additionally, the `b.Validate()` method wasn't being called, leaving open a possible panic when unexpected configurations were passed to `b.Configure()`.
This commit is a bit of a hack to be able to support this in the 0.11 series. The 0.12 series will have proper support, so this should be reverted again when merging 0.12.
We already had the functionality to make resources deprecated, which was
used when migrating resources to data sources, but the functionality was
unexported, so only the schema package could do it. Now it's exported,
meaning providers can mark entire resources as deprecated. I also added
a test, hopefully in the right place.
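A minimal sketch of what this enables, assuming the newly-exported knob is the DeprecationMessage field on schema.Resource (the resource names here are hypothetical):

```go
package example

import "github.com/hashicorp/terraform/helper/schema"

// Marking an entire resource as deprecated from provider code.
func resourceOldWidget() *schema.Resource {
	return &schema.Resource{
		DeprecationMessage: "example_old_widget is deprecated; use example_widget instead",
		Schema: map[string]*schema.Schema{
			"name": {Type: schema.TypeString, Required: true},
		},
	}
}
```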
A couple of bugs have been discovered in ResourceDiff.ForceNew:
* NewRemoved is not preserved when a diff for a key is already present.
This is because the second diff that happens after customization
performs a second getChange on not just state and config, but also on
the pre-existing diff. This results in Exists == true, meaning nil is
never returned as a new value.
* ForceNew was doing the work of adding the key to the list of changed
keys by doing a full SetNew on the existing value. This had the side
effect of fetching zero values for otherwise undefined values and
creating diffs for these values where none should exist
(example: "" => "0").
This update fixes these scenarios by:
* Adding a new private function to check the existing diff for
NewRemoved keys. This is included in the check on new values in
diffChange.
* Keys that have been flagged as ForceNew (or parent keys of lists and
sets that have been flagged as ForceNew) are now maintained in a
separate map. UpdatedKeys now returns the results of both of these maps,
but otherwise these keys are ignored by ResourceDiff.
* Pursuant the above, values are no longer pushed into the newDiff
writer by ForceNew. This prevents the zero value problem, and makes for
a cleaner implementation where the provider has to "manually" SetNew to
update the appropriate values in the writer. It also prevents
non-computed keys from winding up in the diff, which ResourceDiff
normally blocks by design.
There are also a couple of tests for cases that should never come up
right now involving Optional/Computed values and NewRemoved, for which
explanations are given in annotations of each test. These are here to
guard against future regressions.
Added some more context to GetOkExists, moved Computed to NewValueKnown
to accommodate some changes that will be coming up in HCL2 that may make
"Computed" less intuitive of a function name, and updated the docs for
NewValueKnown as well.
This adds a new method to ResourceDiff: Computed, which exposes the
computed read result field to ResourceDiff. In the context of
customizing the diff, this is important as interpolated and otherwise
computed values will show up in the diff as blank, with no way of
determining if the value is actually blank or if it's a computed value
not available at diff customization time. Currently assumptions need to
be made on this, but this does not help in validation scenarios where
one needs to differentiate between an actual blank value and a value
that will be available later.
This is exposed for the most part via NewComputed in the diff, but the
tests also cover the config reader (with no diff, even though this
should not come up in normal operation) and the newDiff reader when
someone sets a new value using SetNew or SetNewComputed.
This commit also exposes GetOkExists. The tests were mostly pulled from
ResourceData but a few were added to ensure that config was being
properly covered as well, in addition to covering SetNew and
SetNewComputed.
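A minimal sketch of the distinction in practice, using a hypothetical "address" attribute inside a CustomizeDiff function:

```go
package example

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/schema"
)

// NewValueKnown distinguishes a genuinely blank value from one that is
// computed and not yet available; GetOkExists reports whether the value
// was set at all, even to its zero value.
func validateAddress(d *schema.ResourceDiff, meta interface{}) error {
	if !d.NewValueKnown("address") {
		// Interpolated from elsewhere; nothing can be validated yet.
		return nil
	}
	if v, ok := d.GetOkExists("address"); ok && v.(string) == "" {
		return fmt.Errorf("address must not be empty")
	}
	return nil
}
```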
Return the global default timeout if the ResourceData timeouts are nil.
Set the timeouts from the Resource when calling Resource.Data, so that
the config values are always available.
For historical reasons, the handling of element types for maps is inconsistent with other collection types.
Here we begin a multi-step process to make it consistent, starting by supporting both the "consistent" form of using a schema.Schema and an existing erroneous form of using a schema.Type directly. In subsequent commits we will phase out the erroneous form and require the schema.Schema approach, the same as we do for TypeList and TypeSet.
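A sketch of the two forms accepted during the transition (the attribute names are illustrative only):

```go
package example

import "github.com/hashicorp/terraform/helper/schema"

func exampleSchema() map[string]*schema.Schema {
	return map[string]*schema.Schema{
		// Existing erroneous form: a schema.ValueType used directly.
		"tags_legacy": {
			Type: schema.TypeMap,
			Elem: schema.TypeString,
		},
		// Consistent form, matching what TypeList and TypeSet require.
		"tags": {
			Type: schema.TypeMap,
			Elem: &schema.Schema{Type: schema.TypeString},
		},
	}
}
```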
This new codepath with the getDiff "customized" return value, along with
the associated test, needs to be removed as soon as we can support unset
fields from the config, so that we don't continue to carry this broken
behavior forward any longer than needed.
This extends the internal diffChange method so that ResourceDiff's
implementation of it can report back whether or not the value came from
a customized diff.
This is an effort to preserve the pre-ResourceDiff behaviour that
ignores the diff for computed keys when the old value was populated but
the new value wasn't - users actually depend on this behaviour to make
use of zero values in modules. This should allow both scenarios to
co-exist by shifting the NewComputed exemption over to exempting values
that come from diff customization.
This reverts one of the changes from 6a4f7b0, which broke empty strings
being seen as unset for computed values.
This breaks a number of other tests, and is only an intermediate change
for evaluating other solutions.
This case should be expected to fail with the current diff algorithm,
but the existing behavior was widely relied upon so we need to roll this
back until there is a representable nil value.
There are situations where one may need to write to a set, list, or map
more than once per single TF operation (apply/refresh/etc). In these
cases, further writes using Set (example: d.Set("some_set", newSet))
currently create unstable results in the set writer (the name of the
writer layer that holds the data set by these calls) because old keys
are not being cleared out first.
This bug is most visible when using sets. Example: the first write to a
set writes elements that have been hashed at 10 and 20, and the second
write writes elements that have been hashed at 30 and 40. While the set
length has been correctly set at 2, a set is basically a map (as is the
entire map writer), and map iteration order is non-deterministic, so
reads of this set will now deliver unstable results - sometimes you may
correctly get 30 and 40, but sometimes you may get 10 and 20, or even
10 and 30, etc.
This problem propagates to state which is even more damaging as unstable
results are set to state where they become part of the permanent data
set going forward.
The problem also applies to lists and maps. This is probably more of an
issue with maps, as a map can contain any key/value combination and
hence there is no predictable pattern where keys would be overwritten
with default or zero values. Complex lists have this problem as well,
but since lists are deterministic and the length of a list is properly
updated during the overwrite, the problem is masked by the fact that a
read will only read to the boundary of the list, skipping any stale
data that may still linger past the list boundary.
This update clears the child contents of any set, list, or map before
beginning a new write to address this issue. Tests are included for all
three data types.
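A sketch of the failure mode being fixed, assuming a set hashed with schema.HashString:

```go
package example

import "github.com/hashicorp/terraform/helper/schema"

// Two writes to the same set key within one operation. Previously,
// hashed keys left over from the first write could surface in reads of
// the second; the set writer now clears child contents first.
func writeTwice(d *schema.ResourceData) error {
	if err := d.Set("some_set", schema.NewSet(schema.HashString, []interface{}{"a", "b"})); err != nil {
		return err
	}
	// Reads after this point should see only {"c", "d"}.
	return d.Set("some_set", schema.NewSet(schema.HashString, []interface{}{"c", "d"}))
}
```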
This keeps CustomizeDiff from being defined on data sources, where it
would be useless. We just catch this in InternalValidate like the rest
of the CRUD functions that are not used in data sources.
Added more detail to CustomizeDiff's doc comments. The new comments
detail how CustomizeDiff will be called in the event of
different scenarios like creating a new resource, diffing an existing
one, diffing an existing resource that has a change that requires a new
resource, and destroy/tainted resources.
Also added similar detail to ForceNew in ResourceDiff.
This should help mitigate any confusion that may come up when using
CustomizeDiff, especially in the ForceNew scenario when the second run
happens with no state.
setDiff does not make use of its new parameter anymore, so it has been
removed. Also, as there is no more SetDiff (exported) function, mentions
of that have been removed from comments too.
This fixes nil pointer issues that could come up if an invalid key was
referenced (ie: not one in the schema). Also ships a helper validation
function to streamline things.
Restoring the naming of this field in the resource back to
CustomizeDiff, as this is generally more descriptive of the process
that's happening, despite the lengthy name.
The old comments said that this interface was API compatible with
terraform.ResourceProvider's Diff method - with the addition of passing
down meta to it, this is no longer the case.
Not too sure if this is really a big deal - schema.Resource never fully
implemented terraform.ResourceProvider, from what I can see, and the
path from Provider.Diff to Resource.Diff is still pretty clear. Just
wanted to remove an outdated comment.
This could panic if we sent a parent that was actually longer than the
child (should almost never come up, but the guard will make it safe
anyway).
Also fixed a slice style lint warning in UpdatedKeys.
The consensus is that it's a generally better idea to move setting the
functionality of old values completely to the refresh/read process,
hence it's moot to have this function around anymore. This also means we
don't need the old value reader/writer anymore, which simplifies things.
When working on this initially, I think I thought that since NewComputed
values in the diff were empty strings, that it was using the zero value.
After review, it doesn't seem like this is the case - so I have adjusted
NewComputed to pass nil values. There is also a guard now that keeps the
new value writer from accepting computed fields with non-nil values.
To keep with the current convention of most other schema.Resource
functional fields being fairly short, CustomizeDiff has been changed to
"Review". It would be "Diff", however it is already used by existing
functions in schema.Provider and schema.Resource.
Both Destroy and DestroyDeposed are not propagated down the diff stack,
meaning that there is no way we can tell at this point if an instance is
being destroyed or deposed, so this check would never be used.
In this regard, Destroy never runs a diff down the stack at all, and a
deposition check is not run until *after* the provider's diff function
is called. To answer this question and close it off, we could either
determine if a resource is deposed earlier, and propagate that down, or
treat deposed resources like full destroy nodes, and not diff them at
all (instead producing a diff whose only content is the DestroyDeposed
flag).
Added a few more test cases for CustomizeDiff, caught another error in
the process. I think this is ready for review now, and possibly some
real-world testing of the waters by way of porting some resources that
would benefit from the feature.
It's alive! CustomizeDiff logic now has been inserted into the diff
process. The test_resource_with_custom_diff resource provides some basic
testing and a reference implementation.
There should now be plenty of test coverage for this feature via the
tests added for ResourceDiff, and the basic test added to the
schemaMap.Diff test, and the test resource, but more can be added to
test any specific case that comes up otherwise.
As the interface has been specced out for ResourceDiff, the only diff
operation that can function on a non-computed key is ForceNew, and that
is only if a diff exists for that key. As such, allowing the user to
clear keys that cannot be operated on afterwards does not really make
much sense.
Will re-add this function if it is determined it's needed after all on
review.
Final set of coverage for ResourceDiff and bug fixes to correct issues
the test cases brought up.
Will be dropping ClearAll in next commit, as with how this interface is
intended to be used, it does not really make much sense.
This provides a deep copy of a schemaMap, which will be needed for the
diff customization process as ResourceDiff will be able to flag fields
as ForceNew - we don't want to affect the source schema.
In diffString, removed values are marked if the old value is non-nil and
the new value is nil, regardless of whether the new value is marked as a
computed value. With the new diff override behaviour, this can lead to
issues where a nil attribute may get inserted into the diff if the new
value has been marked as computed (as the value will be marked as
missing, but computed).
This propagates into finalizeDiff in some respects because even if
NewComputed is manually set in the diff that gets passed in, it is
ignored, and the schema becomes the only source of truth. Since our new
diff behaviour is mainly designed to be supported on computed keys only,
it stands to reason that there will be times when we want to set a
diff as NewComputed on a key, and see that change in the diff.
These two small changes makes that happen. No regressions in tests have
been observed via this change, so it seems safe and non-invasive.
The beginning of tests for SetNew. Noticed that removed values are not
logged for computed keys in a diff - not too sure if that is going to be
a problem (possibly not as GetChange in ResourceData should still
reference state for the old value). I came on this most notably in the
"TestSetNew/complex-ish_set_diff" test, for reference.
More tests are pending for the other functions, covering the test cases
defined in #8769.
This ensures that when we hook this into the main diff logic, that we
can just re-diff the keys that are modified, ensuring that the diff is
not re-run on keys that were not touched, or on keys that were
intentionally removed.
This should complete the feature set of the prototype. This function
removes a specific key from the existing diff, preventing conflicts.
Further functionality might be needed to make this behave as expected,
namely ensuring that we don't re-process the whole diff after the
CustomizeDiffFunc runs.
This new prototype removes the dependence on an underlying ResourceData,
moving several items up to ensure an approach that more matches
ResourceData. Namely, we create our own MultiLevelFieldReader that
also layers updated diff values on top. New values use a small
re-implementation of the MapFieldWriter/Reader that allows computed
values to be set.
The assumption here now is that a second diff will happen after the
first one is done, processing the new values set in the 2 new
reader/writer levels and updating the diff as necessary.
This adds a new object, ResourceDiff, to the schema package. This
object, in conjunction with a function defined in CustomizeDiff in the
resource schema, allows for the in-flight customization of a Terraform
diff. This helps support use cases such as when there are necessary
changes to a resource that cannot be detected in config, such as via
computed fields (most of the utility in this object works on computed
fields only). It also allows for a wholesale wipe of the diff to allow
for diff logic completely offloaded to an external API, if it is a
better use case for a specific provider.
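A minimal sketch of the hook in use (the attribute names are hypothetical):

```go
package example

import "github.com/hashicorp/terraform/helper/schema"

// CustomizeDiff lets the provider adjust the in-flight diff; here a
// change to one attribute marks a computed attribute as changing too.
func resourceExample() *schema.Resource {
	return &schema.Resource{
		// ... Schema elided ...
		CustomizeDiff: func(d *schema.ResourceDiff, meta interface{}) error {
			if d.HasChange("size") {
				// "version" is computed server-side; a size change rolls it.
				return d.SetNewComputed("version")
			}
			return nil
		},
	}
}
```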
As part of this work, many internal diff functions have been moved to
use a special resourceDiffer interface, to allow for shared
functionality between ResourceDiff and ResourceData. This may be
extended to the DiffSuppressFunc as well which would restrict use of
ResourceData in DiffSuppressFunc to generally read-only fields.
This work is not yet in its final state - CustomizeDiff is not yet
implemented in the general diff workflow, new functions may be added
(notably Clear() for a single key), and functionality may be altered.
Tests will follow as well.
In order to parse provider, resource and data source configuration from
HCL2 config files, we need to know the relevant configuration schema.
This new method allows Terraform Core to request these from a provider.
This is a breaking change to this interface, so all of its implementers
in this package are updated too. This includes concrete implementations
of the new method in helper/schema that use the schema conversion code
added in an earlier commit to produce a configschema.Block automatically.
Plugins compiled against prior versions of helper/schema will not have
support for this method, and so calls to them will fail. Callers of
this new method will therefore need to sniff for support using the
SchemaAvailable field added to both ResourceType and DataSource.
This careful handling will need to persist until next time we increment
the plugin protocol version, at which point we can make the breaking
change of requiring this information to be available.
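A sketch of the sniffing a caller would do, using only the fields described above:

```go
package example

import "github.com/hashicorp/terraform/terraform"

// Collect the resource types for which it is safe to request schema;
// plugins compiled against older helper/schema report SchemaAvailable
// as false and must be skipped.
func schemaCapableResources(p terraform.ResourceProvider) []string {
	var names []string
	for _, rt := range p.Resources() {
		if !rt.SchemaAvailable {
			continue // plugin predates the new method; a call would fail
		}
		names = append(names, rt.Name)
	}
	return names
}
```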
We don't currently have any need for this information, but we're
propagating it out of helper/schema here pre-emptively so that once we
later have a use for it we will not need to rebuild the providers to gain
access to it.
The long-term expected use-case for this is to have Terraform Core use
static analysis techniques to trace the path of sensitive data through
interpolations so that intermediate results can be flagged as sensitive
too, but we have a lot more work to do before such a thing would actually
be possible.
As part of moving to the next-generation HCL implementation,
Terraform Core is getting its own representation of configuration schema
that is tailored for configuration-processing use-cases. The capabilities
of this are a subset of the helper/schema model primarily concerned with
the configuration structure and value types, leaving detailed validation
and defaults for helper/schema to still solve.
These new methods allow mechanical creation of a schema in the new Core
schema model from a schema expressed in the helper/schema model. This is
not yet used as of this commit, but will be used later to implement some
new ResourceProvider methods that will allow core to obtain the schema
for provider, resource and data source configuration while remaining
source-compatible with existing provider implementations.
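A sketch of the mechanical conversion; the method name CoreConfigSchema and the import path are assumptions about this change rather than confirmed by it:

```go
package example

import (
	"github.com/hashicorp/terraform/config/configschema"
	"github.com/hashicorp/terraform/helper/schema"
)

// Derive the Core schema model from a helper/schema resource. The
// result describes structure and value types only; detailed validation
// and defaults remain helper/schema's concern.
func coreSchemaFor(r *schema.Resource) *configschema.Block {
	return r.CoreConfigSchema()
}
```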
Go 1.9 adds this new function which, when called, marks the caller as
being a "helper function". Helper function stack frames are then skipped
when trying to find a line of test code to blame for a test failure, so
that the code in the main test function appears in the test failure output
rather than a line within the helper function itself.
This covers many -- but probably not all -- of our test helpers across
various packages.
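The pattern in each updated helper looks like this:

```go
package example

import "testing"

// t.Helper() marks this function as a test helper, so a failure is
// reported against the calling test's line rather than a line here.
func assertNoErr(t *testing.T, err error) {
	t.Helper()
	if err != nil {
		t.Fatalf("unexpected error: %s", err)
	}
}
```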
Equality of schema.Sets gets tricky when dealing with nested sets -
Set.Equal only superficially compares the underlying maps and hence any
sets nested under the root sets cause issues.
This adds a simple method, HashEqual, that does a top-level hash
comparison, helping to work around this without any complex re-invention
of things like reflect.DeepEqual.
Of course, in order to make effective use of this function, the user
needs to make sure they are properly hashing their nested sets, however
this is trivial with things like HashResource.
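A minimal sketch of the comparison, assuming both sets were built with the same hash function (e.g. one produced by HashResource):

```go
package example

import "github.com/hashicorp/terraform/helper/schema"

// HashEqual compares only the top-level hashes of the two sets, so
// properly-hashed nested sets are accounted for without reinventing
// reflect.DeepEqual.
func nestedSetChanged(a, b *schema.Set) bool {
	return !a.HashEqual(b)
}
```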
Adds `GetOkRaw` as a schema function. This should only be used to verify
that boolean attributes are either set or not set, regardless of whether
the value equals the type's zero value. There are a few small use cases
outside of the boolean type where this will be helpful as well.
Overall, this shouldn't detract from the zero-value checks that `GetOk()`
currently has, and should only be used when absolutely needed. However,
there are enough use cases that need to bypass the zero-value check to
justify the addition.
Primary use case is for a boolean attribute that is `Optional` and `Computed`,
without a default value. There's currently no way to verify that the boolean
attribute was explicitly set to the zero-value literal with the current
`GetOk()` function. This new function allows for that check, keeping the
`Computed` check for the returned `exists` boolean.
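A minimal sketch with a hypothetical Optional+Computed boolean attribute "enabled":

```go
package example

import "github.com/hashicorp/terraform/helper/schema"

// GetOkRaw reports whether the attribute was set at all, so an explicit
// `enabled = false` is distinguishable from the attribute being unset.
func enabledWasSet(d *schema.ResourceData) (value bool, set bool) {
	v, exists := d.GetOkRaw("enabled")
	if !exists {
		return false, false // never set in config
	}
	return v.(bool), true // may legitimately be the zero value, false
}
```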
```
$ make test TEST=./helper/schema TESTARGS="-run=TestResourceDataGetOkRaw"
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/08/02 11:17:32 Generated command/internal_plugin_list.go
go test -i ./helper/schema || exit 1
echo ./helper/schema | \
xargs -t -n4 go test -run=TestResourceDataGetOkRaw -timeout=60s -parallel=4
go test -run=TestResourceDataGetOkRaw -timeout=60s -parallel=4 ./helper/schema
ok github.com/hashicorp/terraform/helper/schema 0.005s
```
The field reader code path is extremely inefficient, but refactoring
it all is much too invasive a change at the moment.
Have DiffFieldReader internally cache results for ReadField.
It turns out that `d.GetOk` also returns `false` when the user _did_ actually supply a value for it in the config, but the value itself needs to be evaluated before it can be used.
So instead of passing a `ResourceData` we now pass a `ResourceConfig`
which makes much more sense for doing the validation anyway.
GH-14784 allowed nested structures to be validated, rather than relying
on the raw value. However this still returns the same validation error
if the structures contain a computed value, since Get will return the
raw string in that case.
This simply skips the validation in the IsComputed case, since there's
nothing that can be checked.
When interpreting a nested object, we were validating against the "raw"
value, and not the interpolated value, causing incorrect errors.
This affects structures such as:
```tf
tags = "${list(map("foo", "bar"))}"
```
Prior to this, validation would complain with "expected object, got
string", since the raw value is obviously a string even though the
interpolated value has the correct shape.
The tests did pass, but that was because they only tested part of the changes. By using the `schema.TestResourceDataRaw` function the schema and config are better tested and so they pointed out a problem with the schema of the Chef provisioner.
The `Elem` fields did not have a `*schema.Schema` but a `schema.Schema` and in an `Elem` schema only the `Type` field may (and must) be set. Any other fields like `Optional` are not allowed here.
Next to fixing that problem I also did a little refactoring and cleaning up. Mainly making the `ProvisionerS` private (`provisioner`) and removing the deprecated fields.
1. Migrate `chef` provisioner to `schema.Provisioner`:
* `chef.Provisioner` structure was renamed to `ProvisionerS` and is now decoded from `schema.ResourceData` instead of `terraform.ResourceConfig` using a simple copy-paste-based solution;
* Added a simple schema without any validation yet.
2. Support `ValidateFunc` validation functions: implemented in the `file` and `chef` provisioners.
If a schema.TypeList had a Schema with ForceNew, and if that list was
NewComputed, the diff would not have RequiresNew set. This causes apply
to fail when the diffs didn't match because of the change to
RequiresNew.
Set the RequiresNew field on the list's ResourceAttrDiff based on the
Schema value.
stringer has changed the boilerplate it generates in a recent version.
We'd previously updated to the new format but accidentally rolled back
to the old while merging a long-running feature branch.
This restores us back to the new format again.
* Revert #11245, #11321, #11498 and #11757
These PRs are all related to issue #11170, for which I would like to propose a different solution than the one currently implemented.
* A different approach to solve #11170
This approach has (IMHO) a few advantages over the solution currently implemented. I will elaborate on this in the PR.
Discussion in #9512 revealed that some of the comments here were
inaccurate and that the comments here did not paint a complete enough
picture of the behavior and expectations of Default and DefaultFunc.
This is a comments-only change that aims to clarify the situation and
call attention to the fact that the defaults only affect the handling of
the configuration and that changes to defaults may require migration of
existing resource states.
This closes #9512.
golang/tools commit 23ca8a263 changed the format of the leading comment
to comply with some new standards discussed here:
https://golang.org/issue/13560
This is the result of running generate with the latest version of
stringer. Everyone working on Terraform will need to update stringer
after this is merged, to avoid reverting this:
go get -u golang.org/x/tools/cmd/stringer
If the list was marked as computed, all values will be raw config
values. Fetch the individual keys from the config to get any known
values before validating.
The Required||Optional logic in schemaMap.Input was incorrect, causing
it to always request input. Fix the logic, and the associated tests
which were passing "just because".
helper/schema: Rename Timeout resource block to Timeouts
- Pluralize configuration argument name to better represent that there is
one block for many timeouts
- use a const for the configuration timeouts key
- update docs
Provider.TestReset resets the internal state of the Provider at the
start of a test. This also adds a MetaReset function field to
schema.Provider, which is called by TestReset and can be used to reset
any other state stored in the provider metadata.
This is currently used to reset the internal Context returned by
StopContext between tests, and should be implemented by a provider if
it stores a Context from a previous test.
* helper/schema: Add custom Timeout block for resources (see the sketch after this list)
* refactor DefaultTimeout to support multiple types. Load meta in Refresh from Instance State
* update vpc but it probably wont last anyway
* refactor test into table test for more cases
* rename constant keys
* refactor configdecode
* remove VPC demo
* remove comments
* remove more comments
* refactor some
* rename timeKeys to timeoutKeys
* remove note
* documentation/resources: Document the Timeout block
* document timeouts
* have a test case that covers 'hours'
* restore a System default timeout of 20 minutes, instead of 0
* restore system default timeout of 20 minutes, refactor tests, add test method to handle system default
* rename timeout key constants
* test applying timeout to state
* refactor test
* Add resource Diff test
* clarify docs
* update to use constants
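A minimal sketch of the resulting resource-level timeouts support (the resource itself is hypothetical):

```go
package example

import (
	"time"

	"github.com/hashicorp/terraform/helper/schema"
)

// Per-operation timeouts declared on the resource; DefaultTimeout wraps
// a duration for use as the schema-level default.
func resourceExample() *schema.Resource {
	return &schema.Resource{
		// ... Schema elided ...
		Timeouts: &schema.ResourceTimeout{
			Create: schema.DefaultTimeout(10 * time.Minute),
			Delete: schema.DefaultTimeout(30 * time.Minute),
		},
	}
}
```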
This changes the type of values in Meta for InstanceState to
`interface{}`. They were `string` before.
This will allow richer structures to be persisted to this without
flatmapping them (down with flatmap!). The documentation clearly states
that only primitives/collections are allowed here.
The only thing using this was helper/schema for schema versioning.
Appropriate type checking was added to make this change safe.
The timeout work @catsby is doing will use this for a richer structure.
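A sketch of what the looser typing permits (the keys shown are illustrative):

```go
package example

import "github.com/hashicorp/terraform/terraform"

// Meta is now map[string]interface{} rather than map[string]string, so
// collections can be stored directly instead of being flatmapped.
func storeMeta(is *terraform.InstanceState) {
	is.Meta["schema_version"] = "2" // primitives still work as before
	is.Meta["timeouts"] = map[string]interface{}{
		"create": "40m",
	}
}
```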
Fixes #12183
The fix is in flatmap for this but the entire issue is a bit more
complex. Given a schema with a computed set, if you reference it like
this:
lookup(attr[0], "field")
And "attr" contains a computed set within it, it would panic even though
"field" is available. There were a couple avenues I could've taken to
fix this:
1.) Any complex value containing any unknown value at any point is
entirely unknown.
2.) Only the specific part of the complex value is unknown.
I took route 2 so that the above works even when something is computed
(since "field" itself is not computed but something else is). This may actually have an
effect on other parts of Terraform configs, however those similar
configs would've simply crashed previously so it shouldn't break any
pre-existing configs.
Accessing an interpolated value in a map through ConfigFieldReader can
fail, because GetRaw can't access interpolated values, so check if the
value exists at all by looking in the config. If the config has a value,
assume our map's value is interpolated and proceed as such.
We also need to lookup the correct schema to properly read a field from
a nested structure.
- Maps previously always defaulted to TypeString. Now check if Elem is a
ValueType and use that if applicable
- Lists now return the schema for nested element types, defaulting to a
TypeString like maps.
This only allows maps and lists to be nested one level deep, and the
inner map or list must only contain string values.
Needed due to work done in 95d37ea. We may need to adjust
hasComputedSubKeys to propagate NewComputed in the same way that we
added "~"; however, we will wait for comment from @mitchellh.
This covers:
* Complex sets with computed fields in a set
* Complex lists with computed fields in a set
Adding a test for basic lists with computed fields seemed to fail,
but possibly for an unrelated reason (the list returned as nil). The fix
for this particular case may be out of the scope of this specific
issue.
Reference gist and details in hashicorp/terraform#9171.
This fixes some edge-ish cases where a set in a config has a set or list
in it that contains computed values, but non-set or list values in the
parent do not.
This can cause "diffs didn't match during apply" errors in a scenario
such as when a set's hash is calculated off of child items (including
any sub-lists or sets, as it should be), and the hash changes between
the plan and apply diffs due to the computed values present in the
sub-list or set items. These will be marked as computed, but due to the
fact that the function was not iterating over the list or set items
properly (ie: not adding the item number to the address, so
set.0.set.foo was being yielded instead of set.0.set.0.foo), these
computed values were not being properly propagated to the parent set to
be marked as computed.
Fixes hashicorp/terraform#6527.
Fixes hashicorp/terraform#8271.
This possibly fixes other non-CloudFront related issues too.
Fixes #10125
If the elements are computed and the field is ForceNew, then we should
mark the computed count as potentially forcing a new operation.
Example, assuming `groups` forces new...
**Step 1:**
groups = ["1", "2", "3"]
At this point, the resource isn't created yet, so this should result in a
diff like:
CREATE resource:
groups: "" => ["1", "2", "3"]
**Step 2:**
groups = ["${computedvar}"]
The OLD behavior was:
UPDATE resource
groups.#: "3" => "computed"
This would cause a diff mismatch because if `${computedvar}` was
different then it should force new. The NEW behavior is:
DESTROY/CREATE resource:
groups.#: "3" => "computed" (forces new)
Fixes#2748
This changes the diff to only mark "forces new resource" on the fields
that actually caused the new resource, not every field that changed.
This makes diffs much more accurate.
I'd like to request a review but I'm going to defer merging until
Terraform 0.8. Changes like this are very possible to cause "diffs
didn't match" errors and I want some real world testing in a beta before
we hit prod with this.
Fixes#7715
If a bool field was computed and the raw value was not convertible to a
boolean, helper/schema would crash. The correct behavior is to try not
to read the raw value when the value is computed and to simply mark that
it is computed. This does that (and matches the behavior of the other
primitives).