If a schema.TypeList had a Schema with ForceNew, and if that list was
NewComputed, the diff would not have RequiresNew set. This caused apply
to fail with a "diffs didn't match" error, because the plan and apply
diffs differed in RequiresNew.
Set the RequiresNew field on the list's ResourceAttrDiff based on the
Schema value.
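A hedged sketch of the shape of the fix, using the terraform.ResourceAttrDiff type (the helper function and its surrounding diff logic are illustrative, not the actual implementation):

```
package example

import "github.com/hashicorp/terraform/terraform"

// countDiff sketches the corrected count diff for a NewComputed list:
// RequiresNew is now copied from the schema's ForceNew flag instead of
// being left false, so the plan and apply diffs agree.
func countDiff(oldCount string, forceNew bool) *terraform.ResourceAttrDiff {
	return &terraform.ResourceAttrDiff{
		Old:         oldCount,
		NewComputed: true,
		RequiresNew: forceNew,
	}
}
```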
stringer has changed the boilerplate it generates in a recent version.
We'd previously updated to the new format but accidentally rolled back
to the old one while merging a long-running feature branch.
This restores us back to the new format again.
* Revert #11245, #11321, #11498 and #11757
These PRs are all related to issue #11170, for which I would like to propose a different solution than the one currently implemented.
* A different approach to solve #11170
This approach has (IMHO) a few advantages with regards to the solution currently implemented. I will elaborate on this in the PR.
Discussion in #9512 revealed that some of the comments here were
inaccurate and did not paint a complete enough picture of the behavior
and expectations of Default and DefaultFunc.
This is a comments-only change that aims to clarify the situation and
call attention to the fact that the defaults only affect the handling of
the configuration and that changes to defaults may require migration of
existing resource states.
This closes #9512.
golang/tools commit 23ca8a263 changed the format of the leading comment
to comply with some new standards discussed here:
https://golang.org/issue/13560
This is the result of running generate with the latest version of
stringer. Everyone working on Terraform will need to update stringer
after this is merged, to avoid reverting this:
go get -u golang.org/x/tools/cmd/stringer
If the list was marked as computed, all values will be raw config
values. Fetch the individual keys from the config to get any known
values before validating.
The Required||Optional logic in schemaMap.Input was incorrect, causing
it to always request input. Fix the logic and the associated tests,
which were passing "just because".
helper/schema: Rename Timeout resource block to Timeouts
- Pluralize configuration argument name to better represent that there is
one block for many timeouts
- use a const for the configuration timeouts key
- update docs
Provider.TestReset resets the internal state of the Provider at the
start of a test. This also adds a MetaReset function field to
schema.Provider, which is called by TestReset and can be used to reset
any other state stored in the provider metadata.
This is currently used to reset the internal Context returned by
StopContext between tests, and should be implemented by a provider if
it stores a Context from a previous test.
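A minimal sketch of a provider using MetaReset to clear such state (the cached variable is hypothetical):

```
package example

import (
	"context"

	"github.com/hashicorp/terraform/helper/schema"
)

// cachedStopContext stands in for any state a provider might carry over
// between tests, such as a Context captured from StopContext.
var cachedStopContext context.Context

func Provider() *schema.Provider {
	return &schema.Provider{
		// MetaReset is called by Provider.TestReset at the start of a test.
		MetaReset: func() error {
			cachedStopContext = nil
			return nil
		},
	}
}
```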
* helper/schema: Add custom Timeout block for resources (see the sketch after this list)
* refactor DefaultTimeout to support multiple types. Load meta in Refresh from Instance State
* update vpc but it probably won't last anyway
* refactor test into table test for more cases
* rename constant keys
* refactor configdecode
* remove VPC demo
* remove comments
* remove more comments
* refactor some
* rename timeKeys to timeoutKeys
* remove note
* documentation/resources: Document the Timeout block
* document timeouts
* have a test case that covers 'hours'
* restore a System default timeout of 20 minutes, instead of 0
* restore system default timeout of 20 minutes, refactor tests, add test method to handle system default
* rename timeout key constants
* test applying timeout to state
* refactor test
* Add resource Diff test
* clarify docs
* update to use constants
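A minimal sketch of a resource opting into the Timeouts block described above (resource name and durations are illustrative):

```
package example

import (
	"time"

	"github.com/hashicorp/terraform/helper/schema"
)

func resourceExample() *schema.Resource {
	return &schema.Resource{
		// CRUD functions omitted for brevity.
		Timeouts: &schema.ResourceTimeout{
			Create: schema.DefaultTimeout(10 * time.Minute),
			Delete: schema.DefaultTimeout(20 * time.Minute),
		},
	}
}
```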
This changes the type of values in Meta for InstanceState to
`interface{}`. They were `string` before.
This will allow richer structures to be persisted here without
flatmapping them (down with flatmap!). The documentation clearly states
that only primitives/collections are allowed here.
The only thing using this was helper/schema for schema versioning.
Appropriate type checking was added to make this change safe.
The timeout work @catsby is doing will use this for a richer structure.
Fixes #12183
The fix is in flatmap for this but the entire issue is a bit more
complex. Given a schema with a computed set, if you reference it like
this:
lookup(attr[0], "field")
And "attr" contains a computed set within it, it would panic even though
"field" is available. There were a couple avenues I could've taken to
fix this:
1.) Any complex value containing any unknown value at any point is
entirely unknown.
2.) Only the specific part of the complex value is unknown.
I took route 2 so that the above works without any computed (since
"field" is not computed but something else is). This may actually have an
effect on other parts of Terraform configs, however those similar
configs would've simply crashed previously so it shouldn't break any
pre-existing configs.
Accessing an interpolated value in a map through ConfigFieldReader can
fail, because GetRaw can't access interpolated values, so check if the
value exists at all by looking in the config. If the config has a value,
assume our map's value is interpolated and proceed as such.
We also need to lookup the correct schema to properly read a field from
a nested structure.
- Maps previously always defaulted to TypeString. Now check if Elem is a
ValueType and use that if applicable
- Lists now return the schema for nested element types, defaulting to a
TypeString like maps.
This only allows maps and lists to be nested one level deep, and the
inner map or list must only contain string values.
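A minimal sketch of the shapes this supports, written in the conventional Elem form (field names hypothetical): one level of nesting, with the inner map or list holding only strings.

```
package example

import "github.com/hashicorp/terraform/helper/schema"

var nestedSchema = map[string]*schema.Schema{
	"labels": {
		Type: schema.TypeMap,
		Elem: &schema.Schema{Type: schema.TypeString},
	},
	"names": {
		Type: schema.TypeList,
		Elem: &schema.Schema{Type: schema.TypeString},
	},
}
```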
Needed due to work done in 95d37ea. We may need to adjust
hasComputedSubKeys to propagate NewComputed in the same way that we
have added "~"; however, we will wait for comment from @mitchellh.
This covers:
* Complex sets with computed fields in a set
* Complex lists with computed fields in a set
Adding a test for basic lists with computed fields seemed to fail,
but possibly for an unrelated reason (the list was returned as nil). The
fix for this particular case may be out of the scope of this specific
issue.
Reference gist and details in hashicorp/terraform#9171.
This fixes some edge-ish cases where a set in a config has a set or list
in it that contains computed values, but non-set or list values in the
parent do not.
This can cause "diffs didn't match during apply" errors in a scenario
such as when a set's hash is calculated off of child items (including
any sub-lists or sets, as it should be), and the hash changes between
the plan and apply diffs due to the computed values present in the
sub-list or set items. These will be marked as computed, but due to the
fact that the function was not iterating over the list or set items
properly (ie: not adding the item number to the address, so
set.0.set.foo was being yielded instead of set.0.set.0.foo), these
computed values were not being properly propagated to the parent set to
be marked as computed.
Fixes hashicorp/terraform#6527.
Fixes hashicorp/terraform#8271.
This possibly fixes other non-CloudFront related issues too.
Fixes #10125
If the elements are computed and the field is ForceNew, then we should
mark the computed count as potentially forcing a new operation.
Example, assuming `groups` forces new...
**Step 1:**
groups = ["1", "2", "3"]
At this point, the resource isn't created, so this should result in a
diff like:
CREATE resource:
groups: "" => ["1", "2", "3"]
**Step 2:**
groups = ["${computedvar}"]
The OLD behavior was:
UPDATE resource
groups.#: "3" => "computed"
This would cause a diff mismatch because if `${computedvar}` was
different then it should force new. The NEW behavior is:
DESTROY/CREATE resource:
groups.#: "3" => "computed" (forces new)
Fixes #2748
This changes the diff to only mark "forces new resource" on the fields
that actually caused the new resource, not every field that changed.
This makes diffs much more accurate.
I'd like to request a review but I'm going to defer merging until
Terraform 0.8. Changes like this are very likely to cause "diffs
didn't match" errors, and I want some real-world testing in a beta before
we hit prod with this.
Fixes #7715
If a bool field was computed and the raw value was not convertible to a
boolean, helper/schema would crash. The correct behavior is to try not
to read the raw value when the value is computed and to simply mark that
it is computed. This does that (and matches the behavior of the other
primitives).
Fixes #5138
If an item is optional and is removed completely from the configuration,
it should still trigger a destroy/create if the field itself was marked
as "ForceNew".
See the example in #5138.
Fixes #3309
There are two primary changes, one to how helper/schema creates diffs
and one to how Terraform compares diffs. Both require careful
understanding.
== 1. helper/schema Changes
helper/schema, given any primitive field (string, int, bool, etc.)
_used to_ create a basic diff when given a computed new value (i.e. from
an unknown interpolation). This would put in the plan that the old value
is whatever the old value was, and the new value was the actual
interpolation. For example, from #3309, the diff showed the following:
```
~ module.test.aws_eip.test-instance.0
instance: "<INSTANCE ID>" => "${element(aws_instance.test-instance.*.id, count.index)}"
```
Then, when running `apply`, the diff would be realized and you would get
a diff mismatch error because it would realize the final value is the
same and remove it from the diff.
**The change:** `helper/schema` now marks unknown primitive values with
`NewComputed` set to true. Semantically this is correct for the diff to
have this information.
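A sketch of what such an attribute diff now carries, using the values from the example above and the terraform.ResourceAttrDiff type:

```
package example

import "github.com/hashicorp/terraform/terraform"

// Instead of recording the raw interpolation string as New, the
// attribute is marked NewComputed.
var attrDiff = &terraform.ResourceAttrDiff{
	Old:         "<INSTANCE ID>",
	New:         "",
	NewComputed: true,
}
```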
== 2. Terraform Diff.Same Changes
Next, the way Terraform compares diffs needed to be updated.
Specifically, the case where the diff from the plan had a NewComputed
primitive and the diff from the apply _no longer has that value_. This
is possible if the computed value ended up being the same as the old
value. This is allowed to pass through.
Together, these fix #3309.
This reverts commit c3a4cff133, reversing
changes made to 791a02e6e4.
This change requires plugin recompilation and we should hold off until a
minor release for that.
This commit implements reusable functions for when resources have no
need to implement a particular operation:
- Noop - does nothing and returns no error.
- RemoveFromState - sets the resource ID to empty string (removing it
from state) and returns no error.
This is required for the times when a resource cannot be left with an
empty configuration. An example would be AzureRM: when you create a
LoadBalancer with a set of configurations, you can delete *all* but one
of those configurations.
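A minimal sketch of a hypothetical resource wiring these helpers in:

```
package example

import "github.com/hashicorp/terraform/helper/schema"

func resourceExample() *schema.Resource {
	return &schema.Resource{
		Create: resourceExampleCreate,
		Read:   resourceExampleRead,
		Update: schema.Noop,            // nothing to do, returns no error
		Delete: schema.RemoveFromState, // sets the ID to "", removing it from state
		Schema: map[string]*schema.Schema{
			"name": {Type: schema.TypeString, Required: true},
		},
	}
}

func resourceExampleCreate(d *schema.ResourceData, meta interface{}) error {
	d.SetId("example")
	return nil
}

func resourceExampleRead(d *schema.ResourceData, meta interface{}) error {
	return nil
}
```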
This commit adds a new callback, DiffSuppressFunc, to the schema.Schema
structure. If set for a given schema, a callback to the user-supplied
function will be made for each attribute for which the default
type-based diff mechanism produces an attribute diff. Returning `true`
from the callback will suppress the diff (i.e. pretend there was no
diff), and returning false will retain it as part of the plan.
There are a number of motivating examples for this - one of which is
included as an example:
1. On SSH public keys, trailing whitespace does not matter in many
cases - and in some cases it is added by provider APIs. For
digitalocean_ssh_key resources we previously had a StateFunc that
trimmed the whitespace - we now have a DiffSuppressFunc which
verifies whether the trimmed strings are equivalent.
2. IAM policy equivalence for AWS. A good proportion of AWS issues
relate to IAM policies which have been "normalized" (used loosely)
by the IAM API endpoints. This can make the JSON strings differ
from those generated by iam_policy_document resources or template
files, even though the semantics are the same (for example,
reordering of `bucket-prefix/` and `bucket-prefix/*` in an S3
bucket policy). DiffSuppressFunc can be used to test for semantic
equivalence rather than pure text equivalence, but without having to
deal with the complexity associated with a full "provider-land" diff
implementation outside helper/schema.
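A minimal sketch of the SSH-key case (the field name is hypothetical): the diff is suppressed when old and new values differ only by surrounding whitespace.

```
package example

import (
	"strings"

	"github.com/hashicorp/terraform/helper/schema"
)

var publicKeySchema = &schema.Schema{
	Type:     schema.TypeString,
	Required: true,
	// Returning true suppresses the diff for this attribute.
	DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool {
		return strings.TrimSpace(old) == strings.TrimSpace(new)
	},
}
```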
Fixes issue where a resource marked as tainted with no other attribute
diffs would never show up in the plan or apply as needing to be
replaced.
One unrelated test needed updating due to a quirk in the testDiffFn
logic - it adds a "type" field diff if the diff is non-Empty. NBD
Although DiffFieldReader was the one mostly responsible for the buggy
behaviour, more tests were added throughout the debugging process, most
of which would fail without the bugfix:
- ResourceData
- MultiLevelFieldReader
- MapFieldReader
- DiffFieldReader
The helper/schema framework for building providers previously validated
in all cases that each field being set in state was in the schema.
However, in order to support remote state in a usable fashion, the need
has arisen for the top level attributes of the resource to be created
dynamically. In order to still be able to use helper/schema, this commit
adds the capability to assign additional fields.
Though I do not foresee this being used by providers other than remote
state (and that eventually may move into Terraform Core rather than
being a provider), the usage and semantics are:
To opt into dynamic attributes, add a schema attribute named
"__has_dynamic_attributes", and make it an optional string with no
default value, in order that it does not appear in diffs:
"__has_dynamic_attributes": {
Type: schema.TypeString
Optional: true
}
In the read callback, use the d.UnsafeSetFieldRaw(key, value) function
to set the dynamic attributes.
Note that other fields in the schema _are_ copied into state, and that
the names of the schema fields cannot currently be used as dynamic
attribute names, as we check to ensure a value is not already set for a
given key.
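A minimal sketch of such a read callback (the outputs map is hypothetical):

```
package example

import "github.com/hashicorp/terraform/helper/schema"

func resourceRemoteStateRead(d *schema.ResourceData, meta interface{}) error {
	// Values discovered dynamically, e.g. outputs from a remote state backend.
	outputs := map[string]string{"dynamic_key": "dynamic_value"}
	for k, v := range outputs {
		d.UnsafeSetFieldRaw(k, v)
	}
	return nil
}
```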
The serializeCollectionMemberForHash helper can't be called for the
MapType values, because MapType doesn't have a schema.Elem. Instead, we
can write the key/value pairs directly to the buffer. This still doesn't
allow for nested maps or lists, but we need to define that use case
before committing to it here.
The flatmapped representation of state prior to this commit encoded maps
and lists (and therefore by extension, sets) with a key corresponding to
the number of elements, or the unknown variable indicator under a .# key
and then individual items. For example, the list ["a", "b", "c"] would
have been encoded as:
listname.# = 3
listname.0 = "a"
listname.1 = "b"
listname.2 = "c"
And the map {"key1": "value1", "key2": "value2"} would have been encoded
as:
mapname.# = 2
mapname.key1 = "value1"
mapname.key2 = "value2"
Sets use the hash code as the key - for example a set with a (fictional)
hashcode calculation may look like:
setname.# = 2
setname.12312512 = "value1"
setname.56345233 = "value2"
Prior to the work done to extend the type system, this was sufficient
since the internal representation of these was effectively the same.
However, following the separation of maps and lists into distinct
first-class types, this encoding presents a problem: given a state file,
it is impossible to tell the encoding of an empty list and an empty map
apart. This presents problems for the type checker during interpolation,
as many interpolation functions will operate on only one of these two
structures.
This commit therefore changes the representation in state of maps to use
a "%" as the key for the number of elements. Consequently the map above
will now be encoded as:
mapname.% = 2
mapname.key1 = "value1"
mapname.key2 = "value2"
This has the effect of an empty list (or set) now being encoded as:
listname.# = 0
And an empty map now being encoded as:
mapname.% = 0
Therefore we can eliminate some nasty guessing logic from the resource
variable supplier for interpolation, at the cost of having to migrate
state up front (to follow in a subsequent commit).
In order to reduce the number of potential situations in which resources
would be "forced new", we continue to accept "#" as the count key when
reading maps via helper/schema. There is no situation under which we can
allow "#" as an actual map key in any case, as it would not be
distinguishable from a list or set in state.
This is an effort to address hashicorp/terraform#516.
Adding the Sensitive attribute to the resource schema, opening up the
ability for resource maintainers to mark some fields as sensitive.
Sensitive fields are hidden in the output, and, possibly in the future,
could be encrypted.
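A minimal sketch of marking a field as sensitive (the field name is hypothetical):

```
package example

import "github.com/hashicorp/terraform/helper/schema"

// The value of "password" will be hidden in plan and apply output.
var sensitiveSchema = map[string]*schema.Schema{
	"password": {
		Type:      schema.TypeString,
		Required:  true,
		Sensitive: true,
	},
}
```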
This means it’s shown correctly in a plan and takes into account any
actions that are dependent on the tainted resource and, vice versa, any
actions that the tainted resource depends on.
So this changes the behaviour from saying "this resource is tainted, so
just forget about it and make sure it gets deleted in the background"
to saying "I want that resource to be recreated (taking into account the
existing resource and its place in the graph)".
For backward compatibility we will continue to support using the data
sources that were formerly logical resources as resources for the moment,
but we want to warn the user about it since this support is likely to
be removed in future.
This is done by adding a new "deprecation message" feature to
schema.Resource, but for the moment this is done as an internal feature
(not usable directly by plugins) so that we can collect additional
use-cases and design a more general interface before creating a
compatibility constraint.
Historically we've had some "read-only" and "logical" resources. With the
addition of the data source concept these will gradually become data
sources, but we need to retain backward compatibility with existing
configurations that use the now-deprecated resources.
This shim is intended to allow us to easily create a resource from a
data source implementation. It adjusts the schema as needed and adds
stub Create and Delete implementations.
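A minimal sketch of the shim in use, assuming the helper is schema.DataSourceResourceShim and with hypothetical names:

```
package example

import "github.com/hashicorp/terraform/helper/schema"

func dataSourceExampleThing() *schema.Resource {
	return &schema.Resource{
		Read: func(d *schema.ResourceData, meta interface{}) error {
			d.SetId("example")
			return nil
		},
	}
}

func Provider() *schema.Provider {
	return &schema.Provider{
		DataSourcesMap: map[string]*schema.Resource{
			"example_thing": dataSourceExampleThing(),
		},
		ResourcesMap: map[string]*schema.Resource{
			// Deprecated: kept only for backward compatibility; the shim
			// adds stub Create and Delete implementations.
			"example_thing": schema.DataSourceResourceShim(
				"example_thing",
				dataSourceExampleThing(),
			),
		},
	}
}
```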
This would ideally also produce a deprecation warning whenever such a
shimmed resource is used, but the schema system doesn't currently have
a mechanism for resource-specific validation, so that remains just a TODO
for the moment.
In the "schema" layer a Resource is just any "thing" that has a schema
and supports some or all of the CRUD operations. Data sources introduce
a new use of Resource to represent read-only resources, which require
some different InternalValidate logic.
This is a breaking change to the ResourceProvider interface that adds the
new operations relating to data sources.
DataSources, ValidateDataSource, ReadDataDiff and ReadDataApply are the
data source equivalents of Resources, Validate, Diff and Apply (respectively)
for managed resources.
The diff/apply model seems at first glance a rather strange workflow for
read-only resources, but implementing data resources in this way allows them
to fit cleanly into the standard plan/apply lifecycle in cases where the
configuration contains computed arguments and thus the read must be deferred
until apply time.
Along with breaking the interface, we also fix up the plugin client/server
and helper/schema implementations of it, which are all of the callers
used when provider plugins use helper/schema. This would be a breaking
change for any provider plugin that directly implements the provider
interface, but no known plugins do this and it is not recommended.
At the helper/schema layer the implementer sees ReadDataApply as a "Read",
as opposed to "Create" or "Update" as in the managed resource Apply
implementation. The planning mechanics are handled entirely within
helper/schema, so that complexity is hidden from the provider implementation
itself.
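A minimal sketch of the implementer-side view (names hypothetical): a data source supplies only a Read function, and helper/schema handles the diff/apply mechanics around it.

```
package example

import "github.com/hashicorp/terraform/helper/schema"

func dataSourceExample() *schema.Resource {
	return &schema.Resource{
		Read: dataSourceExampleRead,
		Schema: map[string]*schema.Schema{
			"name":   {Type: schema.TypeString, Required: true},
			"output": {Type: schema.TypeString, Computed: true},
		},
	}
}

func dataSourceExampleRead(d *schema.ResourceData, meta interface{}) error {
	d.SetId(d.Get("name").(string))
	return d.Set("output", "computed-value")
}
```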