Return the global default timeout if the ResourceData timeouts are nil.
Set the timeouts from the Resource when calling Resource.Data, so that
the config values are always available.
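As a provider-side sketch (not part of this change), this is how the resolved timeout would be consumed; `d.Timeout` and `schema.TimeoutCreate` are helper/schema names, while the create function and the bounded API call are hypothetical:
```
package example

import (
	"context"

	"github.com/hashicorp/terraform/helper/schema"
)

func resourceExampleCreate(d *schema.ResourceData, meta interface{}) error {
	// d.Timeout resolves the per-operation timeout; with this change it
	// falls back to the default when no custom timeouts were configured.
	ctx, cancel := context.WithTimeout(context.Background(), d.Timeout(schema.TimeoutCreate))
	defer cancel()

	// ... perform the create call bounded by ctx ...
	_ = ctx
	return nil
}
```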
Adds `GetOkRaw` as a schema function. This should only be used to verify
that boolean attributes are either set or not set, regardless of the zero
value for their type. There are a few small use cases outside of the boolean
type where this will be helpful as well.
Overall, this shouldn't detract from the zero-value checks that `GetOk()`
currently performs, and it should only be used when absolutely needed. However,
there are enough use cases that need to check whether a value was set without
comparing it against the type's zero value to justify the addition.
The primary use case is a boolean attribute that is `Optional` and `Computed`,
without a default value. With the current `GetOk()` function there is no way
to verify that the boolean attribute was explicitly set to its zero-value
literal. This new function allows for that check, while keeping the
`Computed` check for the returned `exists` boolean.
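A short sketch of that primary use case, assuming a hypothetical `enabled` attribute; the function name and signature follow this commit and may differ in later releases:
```
package example

import "github.com/hashicorp/terraform/helper/schema"

func resourceExampleUpdate(d *schema.ResourceData, meta interface{}) error {
	// GetOk reports ok=false for an explicit `false`, so it cannot tell
	// "unset" apart from "set to the zero value"; GetOkRaw can.
	if raw, exists := d.GetOkRaw("enabled"); exists {
		enabled := raw.(bool) // explicitly set in config, possibly to false
		_ = enabled           // act on the explicit value here
	}
	return nil
}
```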
```
$ make test TEST=./helper/schema TESTARGS="-run=TestResourceDataGetOkRaw"
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/08/02 11:17:32 Generated command/internal_plugin_list.go
go test -i ./helper/schema || exit 1
echo ./helper/schema | \
xargs -t -n4 go test -run=TestResourceDataGetOkRaw -timeout=60s -parallel=4
go test -run=TestResourceDataGetOkRaw -timeout=60s -parallel=4 ./helper/schema
ok github.com/hashicorp/terraform/helper/schema 0.005s
```
* helper/schema: Add custom Timeout block for resources
* refactor DefaultTimeout to support multiple types. Load meta in Refresh from Instance State
* update vpc but it probably won't last anyway
* refactor test into table test for more cases
* rename constant keys
* refactor configdecode
* remove VPC demo
* remove comments
* remove more comments
* refactor some
* rename timeKeys to timeoutKeys
* remove note
* documentation/resources: Document the Timeout block
* document timeouts
* have a test case that covers 'hours'
* restore a System default timeout of 20 minutes, instead of 0
* restore system default timeout of 20 minutes, refactor tests, add test method to handle system default
* rename timeout key constants
* test applying timeout to state
* refactor test
* Add resource Diff test
* clarify docs
* update to use constants
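For reference, a sketch of what the new block looks like on a hypothetical resource; `ResourceTimeout`, `DefaultTimeout`, and the timeout keys come from this PR, the durations are arbitrary:
```
package example

import (
	"time"

	"github.com/hashicorp/terraform/helper/schema"
)

// resourceExample declares per-operation timeouts; users can override them
// in configuration with a `timeouts { create = "60m" }` block.
func resourceExample() *schema.Resource {
	return &schema.Resource{
		Timeouts: &schema.ResourceTimeout{
			Create: schema.DefaultTimeout(10 * time.Minute),
			Update: schema.DefaultTimeout(10 * time.Minute),
			Delete: schema.DefaultTimeout(20 * time.Minute),
		},
		Schema: map[string]*schema.Schema{},
	}
}
```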
Although DiffFieldReader was mostly responsible for the buggy behaviour,
more tests were added throughout the debugging process, most of which
would fail without the bugfix:
- ResourceData
- MultiLevelFieldReader
- MapFieldReader
- DiffFieldReader
The helper/schema framework for building providers previously validated
in all cases that each field being set in state was in the schema.
However, in order to support remote state in a usable fashion, the need
has arisen for the top level attributes of the resource to be created
dynamically. In order to still be able to use helper/schema, this commit
adds the capability to assign additional fields.
Though I do not foresee this being used by providers other than remote
state (which may eventually move into Terraform Core rather than being a
provider), the usage and semantics are:
To opt into dynamic attributes, add a schema attribute named
"__has_dynamic_attributes" and make it an optional string with no
default value, so that it does not appear in diffs:
    "__has_dynamic_attributes": {
        Type:     schema.TypeString,
        Optional: true,
    },
In the read callback, use the d.UnsafeSetFieldRaw(key, value) function
to set the dynamic attributes.
Note that other fields in the schema _are_ copied into state, and that
the names of the schema fields cannot currently be used as dynamic
attribute names, as we check to ensure a value is not already set for a
given key.
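A minimal sketch of such a read callback; the data source, keys, and values are made up, only `UnsafeSetFieldRaw` is the real helper/schema call:
```
package example

import "github.com/hashicorp/terraform/helper/schema"

// Hypothetical read callback: anything not declared in the schema is
// assigned with UnsafeSetFieldRaw.
func resourceRemoteStateRead(d *schema.ResourceData, meta interface{}) error {
	// Values discovered at read time; these keys are not in the schema.
	outputs := map[string]string{
		"address": "10.0.0.5",
		"port":    "8500",
	}
	for k, v := range outputs {
		d.UnsafeSetFieldRaw(k, v)
	}
	return nil
}
```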
The flatmapped representation of state prior to this commit encoded maps
and lists (and therefore by extension, sets) with a key corresponding to
the number of elements, or the unknown variable indicator under a .# key
and then individual items. For example, the list ["a", "b", "c"] would
have been encoded as:
listname.# = 3
listname.0 = "a"
listname.1 = "b"
listname.2 = "c"
And the map {"key1": "value1", "key2", "value2"} would have been encoded
as:
mapname.# = 2
mapname.key1 = "value1"
mapname.key2 = "value2"
Sets use the hash code as the key - for example a set with a (fictional)
hashcode calculation may look like:
setname.# = 2
setname.12312512 = "value1"
setname.56345233 = "value2"
Prior to the work done to extend the type system, this was sufficient
since the internal representation of these was effectively the same.
However, following the separation of maps and lists into distinct
first-class types, this encoding presents a problem: given a state file,
it is impossible to tell the encoding of an empty list and an empty map
apart. This presents problems for the type checker during interpolation,
as many interpolation functions will operate on only one of these two
structures.
This commit therefore changes the representation in state of maps to use
a "%" as the key for the number of elements. Consequently the map above
will now be encoded as:
mapname.% = 2
mapname.key1 = "value1"
mapname.key2 = "value2"
This has the effect of an empty list (or set) now being encoded as:
listname.# = 0
And an empty map now being encoded as:
mapname.% = 0
Therefore we can eliminate some nasty guessing logic from the resource
variable supplier for interpolation, at the cost of having to migrate
state up front (to follow in a subsequent commit).
In order to reduce the number of potential situations in which resources
would be "forced new", we continue to accept "#" as the count key when
reading maps via helper/schema. There is no situation under which we can
allow "#" as an actual map key in any case, as it would not be
distinguishable from a list or set in state.
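As a small sketch of that read-side rule (an assumed helper, not the actual reader code):
```
package example

// mapElementCount illustrates the compatibility rule above: prefer the new
// "%" count key for maps, but keep accepting the legacy "#" key so existing
// state does not force resources to be recreated.
func mapElementCount(flat map[string]string, name string) (string, bool) {
	if c, ok := flat[name+".%"]; ok {
		return c, true
	}
	if c, ok := flat[name+".#"]; ok { // legacy encoding
		return c, true
	}
	return "", false
}
```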
Changing the Set internals makes a lot of sense as it saves doing
conversions in multiple places and gives a central place to alter
the key when an item is computed.
This will have no side effects other than that the ordering is now
based on strings instead of integers, so the order will be different.
This will however have no effect on existing configs as these will
use the individual codes/keys and not the ordering to determine if
there is a diff or not.
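A trivial standalone illustration of the ordering note, using only the standard library:
```
package main

import (
	"fmt"
	"sort"
)

func main() {
	// With string keys, "10" sorts before "9"; existing configs are not
	// affected because diffs compare individual keys, not their order.
	keys := []string{"9", "10", "1234567890"}
	sort.Strings(keys)
	fmt.Println(keys) // [10 1234567890 9]
}
```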
Lastly (but I think also most importantly) there is a fix in this PR
that makes diffing sets far more performant. Before, a full diff
required reading the complete Set for every single parameter/attribute
you wanted to diff, while now it only reads that specific parameter.
We have a use case where we have a Set that has 18 parameters and the
set consists of about 600 items (don't ask 😉). So when doing a diff
it would take 100% CPU of all cores and stay that way for almost an
hour before being able to complete the diff.
Debugging this, we learned that retrieving every single parameter made
over 52,000 calls to `func (c *ResourceConfig) get(..)`. In this function
a slice is created and used only for the duration of the call, so the
time needed to create all those slices, plus the time the garbage
collector needed to clean them up again, caused the system to cripple
itself. On top of that there are also some expensive reflect calls in
this function which claimed a fair amount of CPU time.
After this fix the number of calls needed to get a single parameter
dropped from 52,000+ to only 2! 😃
This fixes some perpetual diffs I saw in Atlas AccTests where an empty
map (`map[string]interface{}{}`) was being passed to `d.Set` for "metadata_full".
Because the MapFieldWriter was not distinguishing between empty and nil,
this triggered the "map delete" logic and no count was written to the
state. This caused subsequent plans to improperly report a diff.
Here we redefine the map delete functionality to explicitly trigger only
on `nil`, so we catch the `.#` field for empty maps.
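A standalone illustration of the distinction the fix relies on (not the MapFieldWriter code itself):
```
package main

import "fmt"

func main() {
	var nilMap map[string]interface{}    // nil: should take the "map delete" path
	emptyMap := map[string]interface{}{} // empty but non-nil: keep it, write count 0
	fmt.Println(nilMap == nil, emptyMap == nil) // true false
}
```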
This was actually quite nasty as the first bug covered the second one…
The first bug is with HasChange. This function uses reflect.DeepEqual
to check if two instances are the same/have the same content. This
works fine for all types except for Sets, as they contain a function.
And reflect.DeepEqual will only say the functions are equal if they are
both nil (which they aren't in a Set). So in effect it means that
currently HasChange will always say true for Sets, even when they are
actually equal.
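A standalone illustration of why reflect.DeepEqual misfires here; the struct below stands in for a real Set, which carries its hash function:
```
package main

import (
	"fmt"
	"reflect"
)

// fakeSet stands in for a schema Set: it carries a hash function.
type fakeSet struct {
	F func(interface{}) int
	M map[string]interface{}
}

func main() {
	f := func(v interface{}) int { return 0 }
	a := fakeSet{F: f, M: map[string]interface{}{"x": 1}}
	b := fakeSet{F: f, M: map[string]interface{}{"x": 1}}
	// Func values are only deeply equal when both are nil, so this prints
	// false even though both sets hold identical contents.
	fmt.Println(reflect.DeepEqual(a, b))
}
```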
As soon as you fix this problem, you will notice the second one (which
the added test is written for). Without explicitly requesting the exact
diff, you will end up with a merged value which will (in most cases) be
the same.
Ran all unit tests and a good part of the acceptance tests to verify this
works as expected; all look good.
This is a refactored solution for PR #616. Functionally this is still
the same change, but it's implemented a lot cleaner with less code and
fewer changes to existing parts of TF.
This fixes a seemingly minor issue (GH-255) around plans showing changes
when in fact there are none. But in reality this turned out to uncover a
really terrible bug.
The effect of what was happening was that multiple items in a set were
being merged. Now, they were being merged in the right order, so if you
didn't have rich types (lists in a set) then you never saw the effect
since the later value would overwrite the earlier. But with lists (such
as in security groups), you would end up with the lists merging. So, if
you had one ingress rule with CIDR blocks and one with SGs, then after
the merge both ingress rules would have BOTH CIDR and SGs, resulting in
an incorrect plan (GH-255).
This fixes the issue by introducing a `getSourceExact` bitflag to the
ResourceData source. When this is set, ALL data must come from this
level, instead of merging lower levels. In the case of sets and diffs,
this is exactly what you want: "Get me the set 'foo' from the config and
the config ONLY (not the state or diff or w/e)".
Andddddd it's fixed.
GH-255
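A hedged sketch of the flag's effect; the constant and helper names below are illustrative, not the actual ResourceData internals:
```
package example

// Illustrative source levels plus the "exact" modifier bit; these names
// mirror the description above, not the real implementation.
const (
	getSourceState uint8 = 1 << iota
	getSourceConfig
	getSourceDiff
	getSourceExact // modifier: read from the named level ONLY, no merging
)

// levelsToRead sketches the lookup order: normally lower levels are merged,
// but when the exact bit is set only the requested level is consulted.
func levelsToRead(source uint8) []uint8 {
	if source&getSourceExact != 0 {
		return []uint8{source &^ getSourceExact}
	}
	return []uint8{getSourceState, getSourceDiff, getSourceConfig}
}
```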