The WaitForState method can't read the result values when a timeout
occurs, because they are still owned by the running goroutine. Keep all
values
scoped inside the goroutine, and save them into an atomic.Value to be
returned.
Fixes race introduced in #8510
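A minimal sketch of the WaitForState pattern, assuming a hypothetical
refresh function (the names here are illustrative, not the actual
helper code):

import (
    "errors"
    "sync/atomic"
    "time"
)

type waitResult struct {
    value interface{}
    err   error
}

func waitWithTimeout(refresh func() (interface{}, error), timeout time.Duration) (interface{}, error) {
    var result atomic.Value
    done := make(chan struct{})
    go func() {
        defer close(done)
        // all working values stay scoped to this goroutine
        v, err := refresh()
        // publish the value and error together in a single Store
        result.Store(waitResult{value: v, err: err})
    }()
    select {
    case <-done:
        r := result.Load().(waitResult)
        return r.value, r.err
    case <-time.After(timeout):
        // on timeout, return without touching the goroutine's values
        return nil, errors.New("timed out waiting for state")
    }
}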
This means that two resources created by the same rule will get names
which sort in the order they are created.
The rest of the ID is still random base32 characters; we no longer set
the bit values that denote UUIDv4.
The length of the ID returned by PrefixedUniqueId is not changed by this
commit; that way we don't break any resources where the underlying
resource has a name length limit.
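An illustrative sketch of the resulting ID shape (not the exact
implementation; randomBase32 is a hypothetical helper):

import (
    "math/rand"
    "time"
)

func prefixedUniqueId(prefix string) string {
    // fixed-width UTC timestamp, so lexical order matches creation order
    ts := time.Now().UTC().Format("20060102150405")
    // the remainder stays random base32; total length matches the old IDs
    return prefix + ts + randomBase32(12)
}

func randomBase32(n int) string {
    const alphabet = "abcdefghijklmnopqrstuvwxyz234567"
    b := make([]byte, n)
    for i := range b {
        b[i] = alphabet[rand.Intn(len(alphabet))]
    }
    return string(b)
}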
Fixes #8143.
This commit adds a function which composes a series of TestCheckFuncs,
but runs all checks before returning an error, unlike ComposeTestCheckFunc.
This is useful when verifying contents of state in acceptance tests and
it is desirable to see all the failing cases in one run for slow
resources.
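A sketch of the aggregating composer (simplified; error messages are
just concatenated here rather than using a multierror type):

import (
    "errors"
    "fmt"
    "strings"

    "github.com/hashicorp/terraform/helper/resource"
    "github.com/hashicorp/terraform/terraform"
)

func composeAggregateTestCheckFunc(fs ...resource.TestCheckFunc) resource.TestCheckFunc {
    return func(s *terraform.State) error {
        var errs []string
        for i, f := range fs {
            if err := f(s); err != nil {
                // keep going so one run reports every failing check
                errs = append(errs, fmt.Sprintf("check %d/%d error: %s", i+1, len(fs), err))
            }
        }
        if len(errs) > 0 {
            return errors.New(strings.Join(errs, "\n"))
        }
        return nil
    }
}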
This commit adds a TestCheckFunc which ensures that a value is set for a
given name/key combination. It is primarily useful for ensuring that
computed values are set where it is not possible to know the expected
value ahead of time.
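For example (the resource and attribute names here are hypothetical),
checking that a computed attribute was populated without knowing its
value:

Check: resource.ComposeTestCheckFunc(
    // fails if "private_ip" is missing or empty in state
    resource.TestCheckResourceAttrSet("aws_instance.foo", "private_ip"),
),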
Fixes issue where a resource marked as tainted with no other attribute
diffs would never show up in the plan or apply as needing to be
replaced.
One unrelated test needed updating due to a quirk in the testDiffFn
logic - it adds a "type" field diff if the diff is non-Empty. NBD
Set the default log package output to ioutil.Discard during tests if the
`-v` flag isn't set. If we are verbose, then apply the filter according
to the TF_LOG env variable.
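A sketch of the approach (simplified; the real code applies the TF_LOG
filter via the existing logging setup rather than this inline check):

import (
    "flag"
    "io/ioutil"
    "log"
    "os"
    "testing"
)

func TestMain(m *testing.M) {
    flag.Parse()
    if !testing.Verbose() {
        // quiet runs: drop all standard log output
        log.SetOutput(ioutil.Discard)
    }
    os.Exit(m.Run())
}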
In #7170 we found two scenarios where the type checking done during the
`context.Validate()` graph walk was circumvented, and the subsequent
assumption of type safety in the provider's `Diff()` implementation
caused panics.
Both scenarios have to do with interpolations that reference Computed
values. The sentinel we use to indicate that a value is Computed does
not carry any type information with it yet.
That means that an incorrect reference to a list or a map in a string
attribute can "sneak through" validation only to crop up...
1. ...during Plan for Data Source References
2. ...during Apply for Resource references
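For illustration, scenario 2 in configuration form (hypothetical
resource type and attribute names). Until apply, the computed list is
represented only by the untyped sentinel, so this invalid reference
passes Validate:

const testConfig = `
resource "example_thing" "a" {}

resource "example_thing" "b" {
  # a string attribute incorrectly referencing an entire computed list
  name = "${example_thing.a.security_groups}"
}
`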
In order to address this, we:
* add high-level tests for each of these two scenarios in `provider/test`
* add context-level tests for the same two scenarios in `terraform`
(these tests proved _really_ tricky to write!)
* place an `EvalValidateResource` just before `EvalDiff` and `EvalApply` to
catch these errors
* add some plumbing to `Plan()` and `Apply()` to return validation
errors, which were previously only generated during `Validate()`
* wrap unit-tests around `EvalValidateResource`
* add an `IgnoreWarnings` option to `EvalValidateResource` to prevent
active warnings from halting execution on the second-pass validation
Eventually, we might be able to attach type information to Computed
values, which would allow for these errors to be caught earlier. For
now, this solution keeps us safe from panics and raises the proper
errors to the user.
Fixes #7170
I noticed we had two mechanisms for unit test override. One that dropped
a sentinel into the env var, and another with a struct member on
TestCase. This consolidates the two, using the cleaner struct member
internal mechanism and the nicer `resource.UnitTest()` entry point.
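Usage after the consolidation (the test body here is a hypothetical
sketch):

func TestSomethingFast(t *testing.T) {
    resource.UnitTest(t, resource.TestCase{
        Providers: testProviders, // hypothetical provider map
        Steps: []resource.TestStep{
            {
                Config: testConfig, // hypothetical config
                Check:  testCheck,  // hypothetical check func
            },
        },
    })
}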
Although DiffFieldReader was mostly responsible for the buggy behaviour,
more tests were added throughout the debugging process, most of which
would fail without the bugfix:
- ResourceData
- MultiLevelFieldReader
- MapFieldReader
- DiffFieldReader
The helper/schema framework for building providers previously validated
in all cases that each field being set in state was in the schema.
However, in order to support remote state in a usable fashion, the need
has arisen for the top level attributes of the resource to be created
dynamically. In order to still be able to use helper/schema, this commit
adds the capability to assign additional fields.
Though I do not foresee this being used by providers other than remote
state (and that eventually may move into Terraform Core rather than
being a provider), the usage and semantics are:
To opt into dynamic attributes, add a schema attribute named
"__has_dynamic_attributes", and make it an optional string with no
default value, in order that it does not appear in diffs:
"__has_dynamic_attributes": {
Type: schema.TypeString
Optional: true
}
In the read callback, use the d.UnsafeSetFieldRaw(key, value) function
to set the dynamic attributes.
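A sketch of a read callback doing this (the function and field names
are illustrative):

func resourceRemoteStateRead(d *schema.ResourceData, meta interface{}) error {
    // attributes fetched dynamically; keys unknown to the schema
    outputs := map[string]string{"vpc_id": "vpc-123456"}
    for k, v := range outputs {
        d.UnsafeSetFieldRaw(k, v)
    }
    return nil
}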
Note that other fields in the schema _are_ copied into state, and that
the names of the schema fields cannot currently be used as dynamic
attribute names, as we check to ensure a value is not already set for a
given key.
This commit adds a flag to acceptance tests in order to make
appropriately named tests work during `make test` irrespective of the
TF_ACC environment variable. This should only be used on tests which are
known to be fast.
The serializeCollectionMemberForHash helper can't be called for the
MapType values, because MapType doesn't have a schema.Elem. Instead, we
can write the key/value pairs directly to the buffer. This still doesn't
allow for nested maps or lists, but we need to define that use case
before committing to it here.
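A simplified sketch of writing a map's pairs straight to the hash
buffer (not the exact helper code):

import (
    "bytes"
    "fmt"
    "sort"
)

func serializeMapForHash(buf *bytes.Buffer, m map[string]interface{}) {
    // sort keys so the resulting hash is deterministic
    keys := make([]string, 0, len(m))
    for k := range m {
        keys = append(keys, k)
    }
    sort.Strings(keys)
    for _, k := range keys {
        fmt.Fprintf(buf, "%s:%v;", k, m[k])
    }
}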
The flatmapped representation of state prior to this commit encoded maps
and lists (and therefore by extension, sets) with a key corresponding to
the number of elements, or the unknown variable indicator under a .# key
and then individual items. For example, the list ["a", "b", "c"] would
have been encoded as:
listname.# = 3
listname.0 = "a"
listname.1 = "b"
listname.2 = "c"
And the map {"key1": "value1", "key2": "value2"} would have been encoded
as:
mapname.# = 2
mapname.key1 = "value1"
mapname.key2 = "value2"
Sets use the hash code as the key - for example a set with a (fictional)
hashcode calculation may look like:
setname.# = 2
setname.12312512 = "value1"
setname.56345233 = "value2"
Prior to the work done to extend the type system, this was sufficient
since the internal representation of these was effectively the same.
However, following the separation of maps and lists into distinct
first-class types, this encoding presents a problem: given a state file,
it is impossible to tell the encoding of an empty list and an empty map
apart. This presents problems for the type checker during interpolation,
as many interpolation functions will operate on only one of these two
structures.
This commit therefore changes the representation in state of maps to use
a "%" as the key for the number of elements. Consequently the map above
will now be encoded as:
mapname.% = 2
mapname.key1 = "value1"
mapname.key2 = "value2"
This has the effect of an empty list (or set) now being encoded as:
listname.# = 0
And an empty map now being encoded as:
mapname.% = 0
Therefore we can eliminate some nasty guessing logic from the resource
variable supplier for interpolation, at the cost of having to migrate
state up front (to follow in a subsequent commit).
In order to reduce the number of potential situations in which resources
would be "forced new", we continue to accept "#" as the count key when
reading maps via helper/schema. There is no situation under which we can
allow "#" as an actual map key in any case, as it would not be
distinguishable from a list or set in state.
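A sketch of the backwards-compatible count lookup (simplified):

func mapCount(attrs map[string]string, name string) (string, bool) {
    // prefer the new "%" count key for maps...
    if v, ok := attrs[name+".%"]; ok {
        return v, true
    }
    // ...but fall back to "#" so existing state doesn't force new
    v, ok := attrs[name+".#"]
    return v, ok
}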
The mapstructure library has a regrettable backward compatibility
concern whereby a WeakDecode of []interface{}{} into a target of
map[string]interface{} yields an empty map rather than an error. One
possibility is to switch to using Decode instead of WeakDecode, but this
loses the nice handling of type conversion, requiring a large volume of
code to be added to Terraform or HIL in order to retain that behaviour.
Instead we add a DecodeHook to our usage of the mapstructure library
which checks for decoding []interface{}{} or []string{} into a map and
returns an error instead.
This has the effect of defeating the code added to retain backwards
compatibility in mapstructure, giving us the correct (for our
circumstances) behaviour of Decode for empty structures and the type
conversion of WeakDecode.
The code is identical to that in the HIL library, and packaged into a
helper.
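A sketch of the hook and its wiring (variable and function names
illustrative):

import (
    "errors"
    "reflect"

    "github.com/mitchellh/mapstructure"
)

func errSliceToMapHook(from reflect.Kind, to reflect.Kind, data interface{}) (interface{}, error) {
    // defeat the slice-to-empty-map conversion described above
    if from == reflect.Slice && to == reflect.Map {
        return nil, errors.New("cannot decode a list into a map")
    }
    return data, nil
}

func weakDecode(input, target interface{}) error {
    decoder, err := mapstructure.NewDecoder(&mapstructure.DecoderConfig{
        DecodeHook:       errSliceToMapHook,
        WeaklyTypedInput: true, // keep WeakDecode's type conversion
        Result:           target,
    })
    if err != nil {
        return err
    }
    return decoder.Decode(input)
}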
This is an effort to address hashicorp/terraform#516.
Adding the Sensitive attribute to the resource schema, opening up the
ability for resource maintainers to mark some fields as sensitive.
Sensitive fields are hidden in the output, and, possibly in the future,
could be encrypted.
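For example, a provider might mark a field like this (the field name is
illustrative):

"password": {
    Type:      schema.TypeString,
    Required:  true,
    Sensitive: true, // hidden in plan/apply output
},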
This means it’s shown correctly in a plan and takes into account any
actions that are dependent on the tainted resource and, vice versa, any
actions that the tainted resource depends on.
So this changes the behaviour from saying “this resource is tainted, so
just forget about it and make sure it gets deleted in the background”
to saying “I want that resource to be recreated” (taking into account
the existing resource and its place in the graph).
When testing destroy, the test harness calls Refresh followed by Plan,
with the expectation that the resulting diff will be empty.
Data resources challenge this expectation, because they will always be
instantiated during refresh if their configuration isn't computed, and so
the subsequent diff will want to destroy what was instantiated.
To work around this, we make an exception that data resource destroy
diffs may appear in the plan but nothing else.
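A sketch of that exception in the harness's empty-diff check
(simplified; the real logic lives in the test harness):

import (
    "fmt"
    "strings"

    "github.com/hashicorp/terraform/terraform"
)

func checkDestroyPlanEmpty(plan *terraform.Plan) error {
    for _, mod := range plan.Diff.Modules {
        for name, d := range mod.Resources {
            if d.Destroy && strings.HasPrefix(name, "data.") {
                // data resources are re-read during refresh, so their
                // destroy diffs are expected here
                continue
            }
            if !d.Empty() {
                return fmt.Errorf("unexpected diff for %s", name)
            }
        }
    }
    return nil
}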
This fixes #6713.