Discussion in #9512 revealed that some of the comments here were
inaccurate and did not paint a complete enough picture of the behavior
and expectations of Default and DefaultFunc.
This is a comments-only change that aims to clarify the situation and
call attention to the fact that the defaults only affect the handling of
the configuration and that changes to defaults may require migration of
existing resource states.
This closes #9512.
If the list is marked as computed, all values will be raw config
values. Fetch the individual keys from the config to get any known
values before validating.
The Required||Optional logic in schemaMap.Input was incorrect, causing
it to always request input. Fix the logic and the associated tests,
which were passing "just because".
helper/schema: Rename Timeout resource block to Timeouts
- Pluralize the configuration argument name to better represent that there
is one block for many timeouts (a sketch of the block follows the commit
list below)
- Use a const for the configuration timeouts key
- Update docs
* helper/schema: Add custom Timeout block for resources
* refactor DefaultTimeout to support multiple types. Load meta in Refresh from Instance State
* update VPC, but it probably won't last anyway
* refactor test into table test for more cases
* rename constant keys
* refactor configdecode
* remove VPC demo
* remove comments
* remove more comments
* refactor some
* rename timeKeys to timeoutKeys
* remove note
* documentation/resources: Document the Timeout block
* document timeouts
* have a test case that covers 'hours'
* restore a System default timeout of 20 minutes, instead of 0
* restore system default timeout of 20 minutes, refactor tests, add test method to handle system default
* rename timeout key constants
* test applying timeout to state
* refactor test
* Add resource Diff test
* clarify docs
* update to use constants
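For context, a minimal sketch of how the renamed block is wired up on the
Go side (the resource and durations are illustrative, not taken from this
PR):

```
package example

import (
	"time"

	"github.com/hashicorp/terraform/helper/schema"
)

func resourceExample() *schema.Resource {
	return &schema.Resource{
		// One Timeouts (plural) block configures all of the
		// per-operation timeouts for this resource.
		Timeouts: &schema.ResourceTimeout{
			Create: schema.DefaultTimeout(10 * time.Minute),
			Delete: schema.DefaultTimeout(20 * time.Minute),
		},
	}
}
```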
Fixes #10125
If the elements are computed and the field is ForceNew, then we should
mark the computed count as potentially forcing a new operation.
Example, assuming `groups` forces new...
**Step 1:**
groups = ["1", "2", "3"]
At this point, the resource isn't created yet, so this should result in a
diff like:
CREATE resource:
groups: "" => ["1", "2", "3"]
**Step 2:**
groups = ["${computedvar}"]
The OLD behavior was:
UPDATE resource
groups.#: "3" => "computed"
This would cause a diff mismatch, because if `${computedvar}` turned out
to be different it should force a new resource. The NEW behavior is:
DESTROY/CREATE resource:
groups.#: "3" => "computed" (forces new)
Fixes #2748
This changes the diff to only mark "forces new resource" on the fields
that actually caused the new resource, not every field that changed.
This makes diffs much more accurate.
I'd like to request a review, but I'm going to defer merging until
Terraform 0.8. Changes like this can easily cause "diffs didn't match"
errors, and I want some real-world testing in a beta before we hit prod
with this.
Fixes #5138
If an item is optional and is removed completely from the configuration,
it should still trigger a destroy/create if the field itself was marked
as "ForceNew".
See the example in #5138.
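The shape of field in question, as a hedged sketch (the attribute name is
hypothetical):

```
package example

import "github.com/hashicorp/terraform/helper/schema"

// Hypothetical schema: the field is Optional but ForceNew, so removing
// it from the configuration entirely must still trigger destroy/create.
var exampleSchema = map[string]*schema.Schema{
	"availability_zone": {
		Type:     schema.TypeString,
		Optional: true,
		ForceNew: true,
	},
}
```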
Fixes#3309
There are two primary changes, one to how helper/schema creates diffs
and one to how Terraform compares diffs. Both require careful
understanding.
== 1. helper/schema Changes
helper/schema, given any primitive field (string, int, bool, etc.),
_used to_ create a basic diff when given a computed new value (i.e. from
an unknown interpolation). This would put in the plan that the old value
was whatever it had been, and that the new value was the literal
interpolation string. For example, from #3309, the diff showed the following:
```
~ module.test.aws_eip.test-instance.0
instance: "<INSTANCE ID>" => "${element(aws_instance.test-instance.*.id, count.index)}"
```
Then, when running `apply`, the diff would be re-evaluated, and you would
get a diff mismatch error because Terraform would see that the final value
was the same as the old one and remove it from the diff.
**The change:** `helper/schema` now marks unknown primitive values with
`NewComputed` set to true. Semantically this is correct for the diff to
have this information.
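In terms of the diff data structure, a hedged sketch of what an unknown
primitive now produces (the old value here is hypothetical):

```
package example

import "github.com/hashicorp/terraform/terraform"

// Instead of recording the interpolation string as New, the diff now
// just marks the value as computed.
var attrDiff = &terraform.ResourceAttrDiff{
	Old:         "i-abc123", // hypothetical old instance ID
	New:         "",         // no concrete new value yet
	NewComputed: true,       // value is unknown until apply
}
```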
== 2. Terraform Diff.Same Changes
Next, the way Terraform compares diffs needed to be updated: specifically,
the case where the diff from the plan had a NewComputed primitive and the
diff from the apply _no longer has that value_. This is possible if the
computed value ended up being the same as the old value, and it is now
allowed to pass through.
Together, these fix #3309.
This is required for the times when a resource cannot have an empty
configuration block. An example would be AzureRM: when you create a
LoadBalancer with a configuration, you can delete *all* but one of these
configurations.
This commit adds a new callback, DiffSuppressFunc, to the schema.Schema
structure. If set for a given schema, a callback to the user-supplied
function will be made for each attribute for which the default
type-based diff mechanism produces an attribute diff. Returning `true`
from the callback will suppress the diff (i.e. pretend there was no
diff), and returning `false` will retain it as part of the plan.
There are a number of motivating examples for this - one of which is
included as an example:
1. On SSH public keys, trailing whitespace does not matter in many
cases - and in some cases it is added by provider APIs. For
digitalocean_ssh_key resources we previously had a StateFunc that
trimmed the whitespace - we now have a DiffSuppressFunc which
verifies whether the trimmed strings are equivalent.
2. IAM policy equivalence for AWS. A good proportion of AWS issues
relate to IAM policies which have been "normalized" (used loosely)
by the IAM API endpoints. This can make the JSON strings differ
from those generated by iam_policy_document resources or template
files, even though the semantics are the same (for example,
reordering of `bucket-prefix/` and `bucket-prefix/*` in an S3
bucket policy). DiffSuppressFunc can be used to test for semantic
equivalence rather than pure text equivalence, without having to
deal with the complexity of a full "provider-land" diff
implementation outside helper/schema.
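A minimal sketch of the first case, comparing trimmed strings (the
attribute name is hypothetical; the callback signature is the real
DiffSuppressFunc shape):

```
package example

import (
	"strings"

	"github.com/hashicorp/terraform/helper/schema"
)

var sshKeySchema = map[string]*schema.Schema{
	"public_key": {
		Type:     schema.TypeString,
		Required: true,
		// Suppress diffs that differ only by surrounding whitespace,
		// which some provider APIs add to SSH public keys.
		DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool {
			return strings.TrimSpace(old) == strings.TrimSpace(new)
		},
	},
}
```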
The flatmapped representation of state prior to this commit encoded maps
and lists (and therefore, by extension, sets) with the number of elements,
or the unknown variable indicator, under a .# key, followed by the
individual items. For example, the list ["a", "b", "c"] would
have been encoded as:
listname.# = 3
listname.0 = "a"
listname.1 = "b"
listname.2 = "c"
And the map {"key1": "value1", "key2": "value2"} would have been encoded
as:
mapname.# = 2
mapname.key1 = "value1"
mapname.key2 = "value2"
Sets use the hash code as the key - for example a set with a (fictional)
hashcode calculation may look like:
setname.# = 2
setname.12312512 = "value1"
setname.56345233 = "value2"
Prior to the work done to extend the type system, this was sufficient
since the internal representation of these was effectively the same.
However, following the separation of maps and lists into distinct
first-class types, this encoding presents a problem: given a state file,
it is impossible to tell the encoding of an empty list and an empty map
apart. This presents problems for the type checker during interpolation,
as many interpolation functions will operate on only one of these two
structures.
This commit therefore changes the representation in state of maps to use
a "%" as the key for the number of elements. Consequently the map above
will now be encoded as:
mapname.% = 2
mapname.key1 = "value1"
mapname.key2 = "value2"
This has the effect of an empty list (or set) now being encoded as:
listname.# = 0
And an empty map now being encoded as:
mapname.% = 0
Therefore we can eliminate some nasty guessing logic from the resource
variable supplier for interpolation, at the cost of having to migrate
state up front (to follow in a subsequent commit).
In order to reduce the number of potential situations in which resources
would be "forced new", we continue to accept "#" as the count key when
reading maps via helper/schema. There is no situation under which we can
allow "#" as an actual map key in any case, as it would not be
distinguishable from a list or set in state.
This is an effort to address hashicorp/terraform#516.
Adding the Sensitive attribute to the resource schema opens up the
ability for resource maintainers to mark some fields as sensitive.
Sensitive fields are hidden in the output and, possibly in the future,
could be encrypted.
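A hedged sketch of a field using the new attribute (names hypothetical):

```
package example

import "github.com/hashicorp/terraform/helper/schema"

var dbSchema = map[string]*schema.Schema{
	"password": {
		Type:      schema.TypeString,
		Required:  true,
		Sensitive: true, // value is redacted in plan/apply output
	},
}
```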
This means it's shown correctly in a plan and takes into account any
actions that are dependent on the tainted resource and, vice versa, any
actions that the tainted resource depends on.
So this changes the behaviour from saying "this resource is tainted, so
just forget about it and make sure it gets deleted in the background" to
saying "I want that resource to be recreated" (taking into account the
existing resource and its place in the graph).
In the "schema" layer a Resource is just any "thing" that has a schema
and supports some or all of the CRUD operations. Data sources introduce
a new use of Resource to represent read-only resources, which require
some different InternalValidate logic.
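As a sketch, a read-only use of Resource implements only Read (all names
here are hypothetical):

```
package example

import "github.com/hashicorp/terraform/helper/schema"

func dataSourceExample() *schema.Resource {
	return &schema.Resource{
		// A data source only implements Read; InternalValidate must
		// therefore not require Create/Update/Delete for this use.
		Read: dataSourceExampleRead,
		Schema: map[string]*schema.Schema{
			"name": {Type: schema.TypeString, Required: true},
		},
	}
}

func dataSourceExampleRead(d *schema.ResourceData, meta interface{}) error {
	d.SetId("example")
	return nil
}
```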
This adds a test and the support necessary to read from native maps
passed as variables via interpolation - for example:
```
resource ...... {
mapValue = "${var.map}"
}
```
We also add support for interpolating maps from the flat-mapped resource
config, which is necessary to support assignment of computed maps, which
is now valid.
Unfortunately there is no good way to distinguish between a list and a
map in the flatmap. In lieu of changing that representation (which is
risky), we assume that if all the keys are numeric, this is intended to
be a list, and if not it is intended to be a map. This does preclude
maps which have purely numeric keys, which should be noted as a
backwards compatibility concern.
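A self-contained sketch of that heuristic (the function name is
hypothetical):

```
package example

import "strconv"

// looksLikeList reports whether a set of flatmap sub-keys should be
// interpreted as a list: true only when every key parses as an index.
// Maps whose keys are all numeric are therefore misread as lists,
// which is the backwards compatibility caveat noted above.
func looksLikeList(keys []string) bool {
	for _, k := range keys {
		if _, err := strconv.Atoi(k); err != nil {
			return false
		}
	}
	return true
}
```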
For a long time now, the diff logic has relied on the behavior of
`mapstructure.WeakDecode` to determine how various primitives are
converted into strings. The `schema.DiffString` function is used for
all primitive field types: TypeBool, TypeInt, TypeFloat, and TypeString.
The `mapstructure` library's string representation of booleans is "0"
and "1", which differs from `strconv.FormatBool`'s "false" and "true"
(which is used in writing out boolean fields to the state).
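The divergence is easy to demonstrate (a hedged sketch):

```
package example

import (
	"strconv"

	"github.com/mitchellh/mapstructure"
)

func boolStrings() (weak, strict string) {
	// mapstructure's weak decoding renders booleans as "1"/"0"...
	mapstructure.WeakDecode(true, &weak) // weak == "1"
	// ...while state files are written with strconv.FormatBool.
	strict = strconv.FormatBool(true) // strict == "true"
	return weak, strict
}
```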
Because of this difference, diffs have long had the potential for
cosmetically odd but semantically neutral output like:
"true" => "1"
"false" => "0"
So long as `mapstructure.Decode` or `strconv.ParseBool` is used to
interpret these strings, there's no functional problem.
We had our first clear functional problem with #6005 and friends, where
users noticed diffs like the above showing up unexpectedly and causing
trouble when `ignore_changes` was in play.
This particular bug occurs down in Terraform core's EvalIgnoreChanges.
There, the diff is modified to account for ignored attributes, and
special logic attempts to properly handle the situation where the
ignored attribute was going to trigger a resource replacement. That
logic relies on the string representations of the Old and New fields in
the diff being the same so that it filters properly.
So we now get a bug when a diff includes `Old: "0", New: "false"`: since
the strings do not match, `ignore_changes` is not properly handled.
Here, we introduce `TypeBool`-specific normalizing into `finalizeDiff`.
I spiked out a full `diffBool` function, but figuring out which pieces
of `diffString` to duplicate there got hairy. This seemed like a simpler
and more direct solution.
Fixes #6005 (and potentially others!)
* MaxItems defines the maximum number of items that can exist within a
TypeSet or TypeList. A specific use case is when a TypeSet is used to
wrap a complex structure and more than one instance would cause
instability.
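A sketch of that wrapping use case (attribute names hypothetical):

```
package example

import "github.com/hashicorp/terraform/helper/schema"

var exampleSchema = map[string]*schema.Schema{
	// A complex block wrapped in a list, constrained to one instance.
	"settings": {
		Type:     schema.TypeList,
		Optional: true,
		MaxItems: 1,
		Elem: &schema.Resource{
			Schema: map[string]*schema.Schema{
				"enabled": {Type: schema.TypeBool, Optional: true},
			},
		},
	},
}
```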
Changing the Set internals makes a lot of sense as it saves doing
conversions in multiple places and gives a central place to alter
the key when an item is computed.
This will have no side effects other than that the ordering is now
based on strings instead of integers, so the order will be different.
This will however have no effect on existing configs, as these use
the individual codes/keys and not the ordering to determine if
there is a diff or not.
Lastly (but I think also most importantly) there is a fix in this PR
that makes diffing sets dramatically more performant. Before, a full diff
required reading the complete Set for every single parameter/attribute
you wanted to diff, while now it only gets that specific parameter.
We have a use case where we have a Set that has 18 parameters and the
set consists of about 600 items (don't ask 😉). When doing a diff
it would take 100% CPU of all cores and stay that way for almost an
hour before being able to complete the diff.
Debugging this, we learned that retrieving every single parameter
made over 52,000 calls to `func (c *ResourceConfig) get(..)`. In
this function a slice is created and used only for the duration of the
call, so the time needed to create all the needed slices, plus the
time the garbage collector needed to clean them up again, caused
the system to cripple itself. Next to that, there are also some expensive
reflect calls in this function which claimed a fair amount of CPU
time.
After this fix the number of calls needed to get a single parameter
dropped from 52,000+ to only 2! 😃
Previously this would return the following sort of error:
expected type 'string', got unconvertible type '[]interface {}'
This is the raw error returned by the underlying mapstructure library.
This is not a helpful error message for anyone who doesn't know Go's
type system, and it exposes Terraform's internals to the UI.
Instead we'll catch these cases before we try to use mapstructure and
return a more straightforward message.
By checking the type before the IsComputed exception, this also avoids
a crash when the assigned value is a computed list. Otherwise the
list of interpolations is allowed through here and then crashes later
during Diff, when the value is not a primitive as expected.
A common issue with new resource implementations is not considering parts
of a complex structure that's used inside a set, which causes quirky
behavior.
The schema helper has enough information to provide a default reasonable
implementation of a set function that includes all non-computed attributes
in a deterministic way. Here we implement such a function and use it
when no explicit hashing function is provided.
In order to achieve this we encapsulate the construction of the zero
value for a schema in a new method schema.ZeroValue, which allows us to
put the fallback logic to the new default function in a single spot.
It is no longer valid to use &Set{F: schema.Set} and all uses of that
construct should be replaced with schema.ZeroValue().(*Set) .
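A sketch of the replacement pattern:

```
package example

import "github.com/hashicorp/terraform/helper/schema"

func emptySetFor(s *schema.Schema) *schema.Set {
	// Instead of constructing a Set by hand, ask the schema for its
	// zero value, which falls back to the new default hash function
	// when no explicit one is set.
	return s.ZeroValue().(*schema.Set)
}
```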
I couldn't see a simple path to get this working for Maps, Sets,
and Lists, so let's land it as a primitive-only schema feature.
I think validation on primitives covers 80% of the use cases anyway.
Guarantees that the `interface{}` arg to ValidateFunc is the proper
type, allowing implementations to be simpler.
Finish the docstring on `ValidateFunc` to call this out.
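A hedged sketch against that contract, assuming the value-plus-key
signature returning warnings and errors (attribute name and bounds
hypothetical):

```
package example

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/schema"
)

var exampleSchema = map[string]*schema.Schema{
	"port": {
		Type:     schema.TypeInt,
		Required: true,
		// The helper guarantees v is already an int here, so the
		// type assertion cannot panic.
		ValidateFunc: func(v interface{}, k string) (ws []string, es []error) {
			port := v.(int)
			if port < 1 || port > 65535 {
				es = append(es, fmt.Errorf("%q must be between 1 and 65535, got %d", k, port))
			}
			return
		},
	},
}
```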
/cc @mitchellh