Many cloud services prevent duplicate key pairs with different names.
Among them are DigitalOcean, Joyent Triton and Packet. Consequently, if
tests leave dangling resources, it is not enough to simply randomise the
name; the entire key material must be regenerated.
This commit adds a helper method that returns a new randomly generated
key pair, where the public key material is formatted in OpenSSH
"authorized keys" format, and the private key material is PEM encoded.
The Required||Optional logic in schemaMap.Input was incorrect, causing
it to always request input. Fix the logic, and the associated tests
which were passing "just because".
This augments backend-config to also accept key=value pairs.
This should make Terraform easier to script, rather than having to
generate a JSON file.
You must still specify the backend type as a minimal configuration,
for example:
```
terraform { backend "consul" {} }
```
This is required because Terraform needs to be able to detect the
_absence_ of that value for unsetting, if that is necessary at some
point.
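The remaining settings can then be supplied on the command line as
key=value pairs, for example (the address and path values here are
illustrative):
```
terraform init \
  -backend-config="address=demo.consul.io" \
  -backend-config="path=example/terraform_state"
```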
helper/schema: Rename Timeout resource block to Timeouts
- Pluralize configuration argument name to better represent that there is
one block for many timeouts
- Use a const for the configuration timeouts key
- Update docs
The terraform package doesn't know about TestProvider, so don't put the
hooks in terraform.MockResourceProvider. Wrap the mock in the test where
we need to check the TestProvider functionality.
Call all ResourceProviderFactories and reset the providers before tests.
Store the provider and/or the error in a fixed factory function to be
returned again later.
Provider.TestReset resets the internal state of the Provider at the
start of a test. This also adds a MetaReset function field to
schema.Provider, which is called by TestReset and can be used to reset
any other state stored in the provider metadata.
This is currently used to reset the internal Context returned by
StopContext between tests, and should be implemented by a provider if
it stores a Context from a previous test.
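A provider might wire this up roughly as follows (a sketch; the stored
context variable is hypothetical):
```
package testprovider

import (
	"context"

	"github.com/hashicorp/terraform/helper/schema"
)

// storedStopCtx stands in for a Context a provider may have captured
// from StopContext during a previous test run.
var storedStopCtx context.Context

func Provider() *schema.Provider {
	return &schema.Provider{
		// ... schema and resources elided ...
		MetaReset: func() error {
			storedStopCtx = nil // forget the Context from the last test
			return nil
		},
	}
}
```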
* helper/schema: Add custom Timeout block for resources
* refactor DefaultTimeout to support multiple types. Load meta in Refresh from Instance State
* update vpc but it probably won't last anyway
* refactor test into table test for more cases
* rename constant keys
* refactor configdecode
* remove VPC demo
* remove comments
* remove more comments
* refactor some
* rename timeKeys to timeoutKeys
* remove note
* documentation/resources: Document the Timeout block
* document timeouts
* have a test case that covers 'hours'
* restore a System default timeout of 20 minutes, instead of 0
* restore system default timeout of 20 minutes, refactor tests, add test method to handle system default
* rename timeout key constants
* test applying timeout to state
* refactor test
* Add resource Diff test
* clarify docs
* update to use constants
This changes the type of values in Meta for InstanceState to
`interface{}`. They were `string` before.
This will allow richer structures to be persisted to this without
flatmapping them (down with flatmap!). The documentation clearly states
that only primitives/collections are allowed here.
The only thing using this was helper/schema for schema versioning.
Appropriate type checking was added to make this change safe.
The timeout work @catsby is doing will use this for a richer structure.
Fixes #12183
The fix is in flatmap for this but the entire issue is a bit more
complex. Given a schema with a computed set, if you reference it like
this:
lookup(attr[0], "field")
And "attr" contains a computed set within it, it would panic even though
"field" is available. There were a couple avenues I could've taken to
fix this:
1.) Any complex value containing any unknown value at any point is
entirely unknown.
2.) Only the specific part of the complex value is unknown.
I took route 2 so that the above works (since "field" itself is not
computed, even though something else in the set is). This may actually
have an effect on other parts of Terraform configs; however, those
similar configs would've simply crashed previously, so it shouldn't
break any pre-existing configs.
To avoid chasing down issues like #11635 I'm proposing we disable the
shadow graph for end users now that we have merged in all the new
graphs. I've kept it around and default-on for tests so that we can use
it to test new features as we build them. I think it'll still have value
going forward, but I don't want to hold us to making it work 100% with
all of Terraform at all times.
I propose backporting this to 0-8-stable, too.
Before this patch it was not possible to test for a key in a map where
the value is an empty string. With this patch, however, it is now
possible to write a check like:
```
resource.TestCheckResourceAttr("res.name", "mymap.KeyWithEmptyValue", ""),
```
To test that `KeyWithEmptyValue` is a valid key in `mymap`.
Fixes #10716
This trims whitespace around the key in the `-var` flag.
This is a regression from 0.7.x.
The value is whitespace-sensitive unless double-quoted. This is the same
behavior as 0.7.x. I considered rejecting whitespace around the '='
completely, but I don't want to introduce a backwards-compatibility
break, and the behavior actually seems quite obvious to me.
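A sketch of the rule (not the actual parser):
```
package main

import (
	"fmt"
	"strings"
)

// parseVarFlag trims whitespace around the key only; the value keeps
// its whitespace unless the user double-quoted it away in the shell.
// Error handling for a missing '=' is elided here.
func parseVarFlag(raw string) (key, value string) {
	idx := strings.Index(raw, "=")
	return strings.TrimSpace(raw[:idx]), raw[idx+1:]
}

func main() {
	k, v := parseVarFlag("  foo = bar")
	fmt.Printf("%q %q\n", k, v) // "foo" " bar"
}
```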
This commit extracts the GPG code used for aws_iam_user_login_profile
into a library that can be reused for other resources, and updates the
call sites appropriately.
Accessing an interpolated value in a map through ConfigFieldReader can
fail, because GetRaw can't access interpolated values, so check if the
value exists at all by looking in the config. If the config has a value,
assume our map's value is interpolated and proceed as such.
We also need to lookup the correct schema to properly read a field from
a nested structure.
- Maps previously always defaulted to TypeString. Now check if Elem is a
ValueType and use that if applicable
- Lists now return the schema for nested element types, defaulting to a
TypeString like maps.
This only allows maps and lists to be nested one level deep, and the
inner map or list must only contain string values.
Fixes #10266
panicwrap was using ExtraFiles to get the original standard streams for
`terraform console`. This doesn't work on Windows. Instead, we must use
the Win32 APIs to get the exact handles.
This is needed due to work done in 95d37ea. We may need to adjust
hasComputedSubKeys to propagate NewComputed in the same way that we
have added "~"; however, we will wait for comment from @mitchellh.
This covers:
* Complex sets with computed fields in a set
* Complex lists with computed fields in a set
Adding a test for basic lists with computed fields seemed to fail, but
possibly for an unrelated reason (the list returned as nil). The fix to
this particular case may be out of the scope of this specific issue.
Reference gist and details in hashicorp/terraform#9171.
This fixes some edge-ish cases where a set in a config has a set or list
in it that contains computed values, but non-set or list values in the
parent do not.
This can cause "diffs didn't match during apply" errors in a scenario
such as when a set's hash is calculated off of child items (including
any sub-lists or sets, as it should be), and the hash changes between
the plan and apply diffs due to the computed values present in the
sub-list or set items. These will be marked as computed, but due to the
fact that the function was not iterating over the list or set items
properly (ie: not adding the item number to the address, so
set.0.set.foo was being yielded instead of set.0.set.0.foo), these
computed values were not being properly propagated to the parent set to
be marked as computed.
Fixes hashicorp/terraform#6527.
Fixes hashicorp/terraform#8271.
This possibly fixes other non-CloudFront related issues too.
UniqueId attempted to provide an ordered unique id by using a nanosecond
timestamp, but doesn't take into account that time is not monotonically
increasing. This provides an implementation that will always be
increasing.
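One way to guarantee this is to remember the last value issued and bump
past it whenever the clock fails to move forward (a sketch, not
necessarily the shipped implementation):
```
package main

import (
	"fmt"
	"sync"
	"time"
)

var (
	idMu   sync.Mutex
	idLast uint64
)

// uniqueID is strictly increasing even if the clock repeats a
// nanosecond or steps backwards.
func uniqueID() string {
	idMu.Lock()
	defer idMu.Unlock()
	now := uint64(time.Now().UTC().UnixNano())
	if now <= idLast {
		now = idLast + 1
	}
	idLast = now
	return fmt.Sprintf("terraform-%026d", now)
}
```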
Fixes #10125
If the elements are computed and the field is ForceNew, then we should
mark the computed count as potentially forcing a new operation.
Example, assuming `groups` forces new...
**Step 1:**
groups = ["1", "2", "3"]
At this point, the resource isn't created, so this should result in a
diff like:
CREATE resource:
groups: "" => ["1", "2", "3"]
**Step 2:**
groups = ["${computedvar}"]
The OLD behavior was:
UPDATE resource
groups.#: "3" => "computed"
This would cause a diff mismatch because if `${computedvar}` was
different then it should force new. The NEW behavior is:
DESTROY/CREATE resource:
groups.#: "3" => "computed" (forces new)
Fixes #10122
The simple fix was that we forgot to close `ReadDataApply` for the
provider. But I've always felt that this section of the code was brittle
and I wanted to put in a more robust solution. The `shadow.Close` method
uses reflection to automatically close all values.
This turns the new graphs on by default and puts the old graphs behind a
flag `-Xlegacy-graph`. This effectively inverts the current 0.7.x
behavior with the new graphs.
We've incubated most of these for a few weeks now. We've found issues
and we've fixed them, and we've been using these graphs internally for
a while without any major issue. It's time to default them on and get them
part of a beta.
Fixes #2748
This changes the diff to only mark "forces new resource" on the fields
that actually caused the new resource, not every field that changed.
This makes diffs much more accurate.
I'd like to request a review but I'm going to defer merging until
Terraform 0.8. Changes like this can easily cause "diffs
didn't match" errors and I want some real-world testing in a beta before
we hit prod with this.
Fixes #7715
If a bool field was computed and the raw value was not convertible to a
boolean, helper/schema would crash. The correct behavior is to try not
to read the raw value when the value is computed and to simply mark that
it is computed. This does that (and matches the behavior of the other
primitives).
Fixes #5138
If an item is optional and is removed completely from the configuration,
it should still trigger a destroy/create if the field itself was marked
as "ForceNew".
See the example in #5138.
This creates a standard package and interface for defining, querying,
and setting experiments (`-X` flags).
I expect we'll want to continue to introduce various features behind
experimental flags. I want to make doing this as easy as possible and I
want to make _removing_ experiments as easy as possible as well.
The goal with this package has been to rely on the compiler enforcing our
experiment references as much as possible. This means that every
experiment is a global variable that must be referenced directly, so
when it is removed you'll get compiler errors where the experiment is
referenced.
This also unifies and makes it easy to grab CLI flags to enable/disable
experiments as well as env vars! This way defining an experiment is just
a couple lines of code (documented on the package).
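A sketch of the shape (not the actual API):
```
package experiment

// Each experiment is a package-level variable that must be referenced
// directly, so deleting one breaks every reference at compile time.
type ID struct {
	Name    string // also used to derive the CLI flag and env var names
	Enabled bool
}

// XHypothetical is an illustrative experiment definition; real ones are
// declared the same way and referenced directly from the code they gate.
var XHypothetical = &ID{Name: "hypothetical"}
```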
Fixes #3309
There are two primary changes, one to how helper/schema creates diffs
and one to how Terraform compares diffs. Both require careful
understanding.
== 1. helper/schema Changes
helper/schema, given any primitive field (string, int, bool, etc.)
_used to_ create a basic diff when given a computed new value (i.e. from
an unknown interpolation). This would put in the plan that the old value
is whatever the old value was, and the new value was the actual
interpolation. For example, from #3309, the diff showed the following:
```
~ module.test.aws_eip.test-instance.0
instance: "<INSTANCE ID>" => "${element(aws_instance.test-instance.*.id, count.index)}"
```
Then, when running `apply`, the diff would be realized and you would get
a diff mismatch error because it would realize the final value is the
same and remove it from the diff.
**The change:** `helper/schema` now marks unknown primitive values with
`NewComputed` set to true. Semantically this is correct for the diff to
have this information.
== 2. Terraform Diff.Same Changes
Next, the way Terraform compares diffs needed to be updated.
Specifically, the case where the diff from the plan had a NewComputed
primitive and the diff from the apply _no longer has that value_. This
is possible if the computed value ended up being the same as the old
value. This is allowed to pass through.
Together, these fix #3309.
This reverts commit c3a4cff133, reversing
changes made to 791a02e6e4.
This change requires plugin recompilation and we should hold off until a
minor release for that.
This commit implements reusable functions for when resources have no
need to implement a particular operation:
- Noop - does nothing and returns no error.
- RemoveFromState - sets the resource ID to empty string (removing it
from state) and returns no error.
This is required for the times when a resource cannot have an empty
configuration. An example would be AzureRM: when you create a
LoadBalancer with configurations, you can delete *all* but one of those
configurations.
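Roughly, the two helpers look like this:
```
package schema

// Noop does nothing and returns no error.
func Noop(*ResourceData, interface{}) error {
	return nil
}

// RemoveFromState clears the ID, which removes the resource from state.
func RemoveFromState(d *ResourceData, _ interface{}) error {
	d.SetId("")
	return nil
}
```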
This changes the key for the storage to be the _raw_ source from the
module, not the fully expanded source. Example: it'll be a relative path
instead of an absolute path.
This allows the ".terraform/modules" directory to be portable when
moving to other machines. This was a behavior that existed in <= 0.7.2
and was broken with #8398. This amends that and adds a test to verify.
This commit adds a new callback, DiffSuppressFunc, to the schema.Schema
structure. If set for a given schema, a callback to the user-supplied
function will be made for each attribute for which the default
type-based diff mechanism produces an attribute diff. Returning `true`
from the callback will suppress the diff (i.e. pretend there was no
diff), and returning `false` will retain it as part of the plan.
There are a number of motivating examples for this - one of which is
included as an example:
1. On SSH public keys, trailing whitespace does not matter in many
cases - and in some cases it is added by provider APIs. For
digitalocean_ssh_key resources we previously had a StateFunc that
trimmed the whitespace - we now have a DiffSuppressFunc which
verifies whether the trimmed strings are equivalent.
2. IAM policy equivalence for AWS. A good proportion of AWS issues
relate to IAM policies which have been "normalized" (used loosely)
by the IAM API endpoints. This can make the JSON strings differ
from those generated by iam_policy_document resources or template
files, even though the semantics are the same (for example,
reordering of `bucket-prefix/` and `bucket-prefix/*` in an S3
bucket policy). DiffSuppressFunc can be used to test for semantic
equivalence rather than pure text equivalence, but without having to
deal with the complexity associated with a full "provider-land" diff
implementation without helper/schema.
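A sketch of the first case (field names illustrative, not
DigitalOcean's actual schema):
```
package myprovider

import (
	"strings"

	"github.com/hashicorp/terraform/helper/schema"
)

// sshPublicKey suppresses diffs that differ only by surrounding
// whitespace, instead of trimming the value with a StateFunc.
var sshPublicKey = &schema.Schema{
	Type:     schema.TypeString,
	Required: true,
	DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool {
		return strings.TrimSpace(old) == strings.TrimSpace(new)
	},
}
```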
The WaitForState method can't read the result values in a timeout
because they are still owned by the running goroutine. Keep all values
scoped inside the goroutine, and save them into an atomic.Value to be
returned.
Fixes race introduced in #8510
This means that two resources created by the same rule will get names
which sort in the order they are created.
The rest of the ID is still random base32 characters; we no longer set
the bit values that denote UUIDv4.
The length of the ID returned by PrefixedUniqueId is not changed by this
commit; that way we don't break any resources where the underlying
resource has a name length limit.
Fixes #8143.
This commit adds a function which composes a series of TestFuncs, but
will run all tests before returning an error, unlike ComposeTestCheckFunc.
This is useful when verifying contents of state in acceptance tests and
it is desirable to see all the failing cases in one run for slow
resources.
This commit adds a TestCheckFunc which ensures that a value is set for a
given name/key combination. It is primarily useful for ensuring that
computed values are set where it is not possible to know the expected
value ahead of time.
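Used inside a test step's Check, it looks something like this (a
fragment; assuming the check is helper/resource's
TestCheckResourceAttrSet, with resource and attribute names
illustrative):
```
Check: resource.ComposeTestCheckFunc(
	// we only know private_ip is computed, not what it will be
	resource.TestCheckResourceAttrSet("aws_instance.foo", "private_ip"),
),
```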
Fixes issue where a resource marked as tainted with no other attribute
diffs would never show up in the plan or apply as needing to be
replaced.
One unrelated test needed updating due to a quirk in the testDiffFn
logic - it adds a "type" field diff if the diff is non-Empty. NBD
Set the default log package output to ioutil.Discard during tests if the
`-v` flag isn't set. If we are verbose, then apply the filter according
to the TF_LOG env variable.
In #7170 we found two scenarios where the type checking done during the
`context.Validate()` graph walk was circumvented, and the subsequent
assumption of type safety in the provider's `Diff()` implementation
caused panics.
Both scenarios have to do with interpolations that reference Computed
values. The sentinel we use to indicate that a value is Computed does
not carry any type information with it yet.
That means that an incorrect reference to a list or a map in a string
attribute can "sneak through" validation only to crop up...
1. ...during Plan for Data Source References
2. ...during Apply for Resource references
In order to address this, we:
* add high-level tests for each of these two scenarios in `provider/test`
* add context-level tests for the same two scenarios in `terraform`
(these tests proved _really_ tricky to write!)
* place an `EvalValidateResource` just before `EvalDiff` and `EvalApply` to
catch these errors
* add some plumbing to `Plan()` and `Apply()` to return validation
errors, which were previously only generated during `Validate()`
* wrap unit-tests around `EvalValidateResource`
* add an `IgnoreWarnings` option to `EvalValidateResource` to prevent
active warnings from halting execution on the second-pass validation
Eventually, we might be able to attach type information to Computed
values, which would allow for these errors to be caught earlier. For
now, this solution keeps us safe from panics and raises the proper
errors to the user.
Fixes #7170
I noticed we had two mechanisms for unit test override. One that dropped
a sentinel into the env var, and another with a struct member on
TestCase. This consolidates the two, using the cleaner struct member
internal mechanism and the nicer `resource.UnitTest()` entry point.
Although DiffFieldReader was the one mostly responsible for the buggy
behaviour, more tests were added throughout the debugging process, most
of which would fail without the bugfix:
- ResourceData
- MultiLevelFieldReader
- MapFieldReader
- DiffFieldReader
The helper/schema framework for building providers previously validated
in all cases that each field being set in state was in the schema.
However, in order to support remote state in a usable fashion, the need
has arisen for the top level attributes of the resource to be created
dynamically. In order to still be able to use helper/schema, this commit
adds the capability to assign additional fields.
Though I do not foresee this being used by providers other than remote
state (and that eventually may move into Terraform Core rather than
being a provider), the usage and semantics are:
To opt into dynamic attributes, add a schema attribute named
"__has_dynamic_attributes", and make it an optional string with no
default value, in order that it does not appear in diffs:
"__has_dynamic_attributes": {
Type: schema.TypeString
Optional: true
}
In the read callback, use the d.UnsafeSetFieldRaw(key, value) function
to set the dynamic attributes.
Note that other fields in the schema _are_ copied into state, and that
the names of the schema fields cannot currently be used as dynamic
attribute names, as we check to ensure a value is not already set for a
given key.
This commit adds a flag to acceptance tests in order to make
appropriately named tests work during `make test` irrespective of the
TF_ACC environment variable. This should only be used on tests which are
known to be fast.
The serializeCollectionMemberForHash helper can't be called for the
MapType values, because MapType doesn't have a schema.Elem. Instead, we
can write the key/value pairs directly to the buffer. This still doesn't
allow for nested maps or lists, but we need to define that use case
before committing to it here.
The flatmapped representation of state prior to this commit encoded maps
and lists (and therefore by extension, sets) with a key corresponding to
the number of elements, or the unknown variable indicator under a .# key
and then individual items. For example, the list ["a", "b", "c"] would
have been encoded as:
listname.# = 3
listname.0 = "a"
listname.1 = "b"
listname.2 = "c"
And the map {"key1": "value1", "key2", "value2"} would have been encoded
as:
mapname.# = 2
mapname.key1 = "value1"
mapname.key2 = "value2"
Sets use the hash code as the key - for example a set with a (fictional)
hashcode calculation may look like:
setname.# = 2
setname.12312512 = "value1"
setname.56345233 = "value2"
Prior to the work done to extend the type system, this was sufficient
since the internal representation of these was effectively the same.
However, following the separation of maps and lists into distinct
first-class types, this encoding presents a problem: given a state file,
it is impossible to tell the encoding of an empty list and an empty map
apart. This presents problems for the type checker during interpolation,
as many interpolation functions will operate on only one of these two
structures.
This commit therefore changes the representation in state of maps to use
a "%" as the key for the number of elements. Consequently the map above
will now be encoded as:
mapname.% = 2
mapname.key1 = "value1"
mapname.key2 = "value2"
This has the effect of an empty list (or set) now being encoded as:
listname.# = 0
And an empty map now being encoded as:
mapname.% = 0
Therefore we can eliminate some nasty guessing logic from the resource
variable supplier for interpolation, at the cost of having to migrate
state up front (to follow in a subsequent commit).
In order to reduce the number of potential situations in which resources
would be "forced new", we continue to accept "#" as the count key when
reading maps via helper/schema. There is no situation under which we can
allow "#" as an actual map key in any case, as it would not be
distinguishable from a list or set in state.
The mapstructure library has a regrettable backward compatibility
concern whereby a WeakDecode of []interface{}{} into a target of
map[string]interface{} yields an empty map rather than an error. One
possibility is to switch to using Decode instead of WeakDecode, but this
loses the nice handling of type conversion, requiring a large volume of
code to be added to Terraform or HIL in order to retain that behaviour.
Instead we add a DecodeHook to our usage of the mapstructure library
which checks for decoding []interface{}{} or []string{} into a map and
returns an error instead.
This has the effect of defeating the code added to retain backwards
compatibility in mapstructure, giving us the correct (for our
circumstances) behaviour of Decode for empty structures and the type
conversion of WeakDecode.
The code is identical to that in the HIL library, and packaged into a
helper.
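A sketch of the hook and its installation (helper names illustrative):
```
package config

import (
	"fmt"
	"reflect"

	"github.com/mitchellh/mapstructure"
)

// mapDecodeHook rejects weakly decoding a slice into a map, which
// mapstructure would otherwise silently turn into an empty map.
func mapDecodeHook(from, to reflect.Kind, data interface{}) (interface{}, error) {
	if from == reflect.Slice && to == reflect.Map {
		return nil, fmt.Errorf("cannot decode a list into a map")
	}
	return data, nil
}

// weakDecode is WeakDecode plus the hook above.
func weakDecode(in, out interface{}) error {
	d, err := mapstructure.NewDecoder(&mapstructure.DecoderConfig{
		DecodeHook:       mapDecodeHook,
		WeaklyTypedInput: true,
		Result:           out,
	})
	if err != nil {
		return err
	}
	return d.Decode(in)
}
```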
This is an effort to address hashicorp/terraform#516.
Adding the Sensitive attribute to the resource schema, opening up the
ability for resource maintainers to mark some fields as sensitive.
Sensitive fields are hidden in the output, and, possibly in the future,
could be encrypted.
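In a resource schema this is a one-line flag (a fragment; the field name
is illustrative):
```
var resourceSchema = map[string]*schema.Schema{
	"password": {
		Type:      schema.TypeString,
		Required:  true,
		Sensitive: true, // hidden in plan/apply output
	},
}
```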
This means it’s shown correctly in a plan and takes into account any
actions that are dependent on the tainted resource and, vice versa, any
actions that the tainted resource depends on.
So this changes the behaviour from saying this resource is tainted so
just forget about it and make sure it gets deleted in the background,
to saying I want that resource to be recreated (taking into account the
existing resource and its place in the graph).
When testing destroy, the test harness calls Refresh followed by Plan,
with the expectation that the resulting diff will be empty.
Data resources challenge this expectation, because they will always be
instantiated during refresh if their configuration isn't computed, and so
the subsequent diff will want to destroy what was instantiated.
To work around this, we make an exception that data resource destroy
diffs may appear in the plan but nothing else.
This fixes #6713.
This commit forward ports the changes made for 0.6.17, in order to store
the type and sensitive flag against outputs.
It also refactors the logic of the import for V0 to V1 state, and
fixes up the call sites of the new format for outputs in V2 state.
Finally we fix up tests which did not previously set a state version
where one is required.
For backward compatibility we will continue to support using the data
sources that were formerly logical resources as resources for the moment,
but we want to warn the user about it since this support is likely to
be removed in future.
This is done by adding a new "deprecation message" feature to
schema.Resource, but for the moment this is done as an internal feature
(not usable directly by plugins) so that we can collect additional
use-cases and design a more general interface before creating a
compatibility constraint.
Historically we've had some "read-only" and "logical" resources. With the
addition of the data source concept these will gradually become data
sources, but we need to retain backward compatibility with existing
configurations that use the now-deprecated resources.
This shim is intended to allow us to easily create a resource from a
data source implementation. It adjusts the schema as needed and adds
stub Create and Delete implementations.
This would ideally also produce a deprecation warning whenever such a
shimmed resource is used, but the schema system doesn't currently have
a mechanism for resource-specific validation, so that remains just a TODO
for the moment.
In the "schema" layer a Resource is just any "thing" that has a schema
and supports some or all of the CRUD operations. Data sources introduce
a new use of Resource to represent read-only resources, which require
some different InternalValidate logic.
This is a breaking change to the ResourceProvider interface that adds the
new operations relating to data sources.
DataSources, ValidateDataSource, ReadDataDiff and ReadDataApply are the
data source equivalents of Resources, Validate, Diff and Apply (respectively)
for managed resources.
The diff/apply model seems at first glance a rather strange workflow for
read-only resources, but implementing data resources in this way allows them
to fit cleanly into the standard plan/apply lifecycle in cases where the
configuration contains computed arguments and thus the read must be deferred
until apply time.
Along with breaking the interface, we also fix up the plugin client/server
and helper/schema implementations of it, which are all of the callers
used when provider plugins use helper/schema. This would be a breaking
change for any provider plugin that directly implements the provider
interface, but no known plugins do this and it is not recommended.
At the helper/schema layer the implementer sees ReadDataApply as a "Read",
as opposed to "Create" or "Update" as in the managed resource Apply
implementation. The planning mechanics are handled entirely within
helper/schema, so that complexity is hidden from the provider implementation
itself.
This adds a test and the support necessary to read from native maps
passed as variables via interpolation - for example:
```
resource ...... {
mapValue = "${var.map}"
}
```
We also add support for interpolating maps from the flat-mapped resource
config, which is necessary to support assignment of computed maps, which
is now valid.
Unfortunately there is no good way to distinguish between a list and a
map in the flatmap. In lieu of changing that representation (which is
risky), we assume that if all the keys are numeric, this is intended to
be a list, and if not it is intended to be a map. This does preclude
maps which have purely numeric keys, which should be noted as a
backwards compatibility concern.
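A sketch of the heuristic:
```
package flatmap

import "strconv"

// looksLikeList reports whether every key under a flatmapped prefix is
// numeric; if so we treat the value as a list, otherwise as a map.
func looksLikeList(keys []string) bool {
	for _, k := range keys {
		if _, err := strconv.Atoi(k); err != nil {
			return false
		}
	}
	return true
}
```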
This commit adds support for native list variables and outputs, building
up on the previous change to state. Interpolation functions now return
native lists in preference to StringList.
List variables are defined like this:
variable "test" {
# This can also be inferred
type = "list"
default = ["Hello", "World"]
}
output "test_out" {
value = "${var.a_list}"
}
This results in the following state:
```
...
"outputs": {
"test_out": [
"hello",
"world"
]
},
...
```
And the result of terraform output is as follows:
```
$ terraform output
test_out = [
hello
world
]
```
Using the output name, an xargs-friendly representation is output:
```
$ terraform output test_out
hello
world
```
The output command also supports indexing into the list (with
appropriate range checking and no wrapping):
```
$ terraform output test_out 1
world
```
Along with maps, list outputs from one module may be passed as variables
into another, removing the need for the `join(",", var.list_as_string)`
and `split(",", var.list_as_string)` which was previously necessary in
Terraform configuration.
This commit also updates the tests and implementations of built-in
interpolation functions to take and return native lists where
appropriate.
A backwards compatibility note: previously the concat interpolation
function was capable of concatenating either strings or lists. The
strings use case was deprecated a long time ago but still remained.
Because we cannot return `ast.TypeAny` from an interpolation function,
this use case is no longer supported for strings - `concat` is only
capable of concatenating lists. This should not be a huge issue - the
type checker picks up incorrect parameters, and the native HIL string
concatenation - or the `join` function - can be used to replicate the
missing behaviour.
This adds a field terraform_version to the state that represents the
Terraform version that wrote that state. If Terraform encounters a state
written by a future version, it will error. You must use at least the
version that wrote that state.
Internally we have fields to override this behavior (StateFutureAllowed),
but I chose not to expose them as CLI flags, since the user can just
modify the state directly. This is tricky, but it should be tricky, to
reflect the horrible disaster that can happen by enabling it.
We didn't have to bump the state format version since the absence of the
field means it was written by version "0.0.0" which will always be
older. In effect though this change will always apply to version 2 of
the state since it appears in 0.7 which bumped the version for other
purposes.
For a long time now, the diff logic has relied on the behavior of
`mapstructure.WeakDecode` to determine how various primitives are
converted into strings. The `schema.DiffString` function is used for
all primitive field types: TypeBool, TypeInt, TypeFloat, and TypeString.
The `mapstructure` library's string representation of booleans is "0"
and "1", which differs from `strconv.FormatBool`'s "false" and "true"
(which is used in writing out boolean fields to the state).
Because of this difference, diffs have long had the potential for
cosmetically odd but semantically neutral output like:
"true" => "1"
"false" => "0"
So long as `mapstructure.Decode` or `strconv.ParseBool` are used to
interpret these strings, there's no functional problem.
We had our first clear functional problem with #6005 and friends, where
users noticed diffs like the above showing up unexpectedly and causing
troubles when `ignore_changes` was in play.
This particular bug occurs down in Terraform core's EvalIgnoreChanges.
There, the diff is modified to account for ignored attributes, and
special logic attempts to handle properly the situation where the
ignored attribute was going to trigger a resource replacement. That
logic relies on the string representations of the Old and New fields in
the diff to be the same so that it filters properly.
So therefore, we now get a bug when a diff includes `Old: "0", New:
"false"` since the strings do not match, and `ignore_changes` is not
properly handled.
Here, we introduce `TypeBool`-specific normalizing into `finalizeDiff`.
I spiked out a full `diffBool` function, but figuring out which pieces
of `diffString` to duplicate there got hairy. This seemed like a simpler
and more direct solution.
Fixes #6005 (and potentially others!)
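A sketch of the normalization idea, rather than the exact finalizeDiff
code:
```
package schema

import "strconv"

// normalizeBool collapses mapstructure's "0"/"1" and the state's
// "false"/"true" into a single form so Old/New compare reliably.
func normalizeBool(s string) string {
	if b, err := strconv.ParseBool(s); err == nil {
		return strconv.FormatBool(b)
	}
	return s // leave non-boolean strings (e.g. computed sentinels) alone
}
```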
As I've been working through the resources, I'm finding that a lot are
going to need some serious work. Given we have hundreds, I think it
might be prudent to make this opt-in for now and we can revisit
automatic/opt-out at some future point.
Importability will likely be opt-in it appears so this will match up
with that.
Originally I used an empty config module. This caused problems since
important provider configurations weren't available. Instead, I now set
it to use the full config. This isn't an issue since the attributes
themselves aren't available to Refresh anyways.
Here we also introduce a `test` provider meant as an aid to exposing
via automated tests issues involving interactions between
`helper/schema` and Terraform core.
This has been helpful so far in diagnosing `ignore_changes` problems,
and I imagine it will be helpful in other contexts as well.
We'll have to be careful to prevent the `test` provider from becoming a
dumping ground for poorly specified tests that have a clear home
elsewhere. But for bug exposure I think it's useful to have.
This should be quite helpful in debugging aws-sdk-go operations.
Required some tweaking around the `helper/logging` functions to expose an
`IsDebugOrHigher()` helper for us to use.
Change the `RetryFunc` from a plain `error` return type to a
specialized `RetryError` which must decide whether it is
retryable or not.
Add `RetryableError` / `NonRetryableError` factory functions that
callers are meant to use to build up these errors.
This makes it eminently clear whether or not a given error is
retryable from inside the client code.
Goal here is to _not_ change any behavior, simply reflect the
existing behavior with the new, clearer, API.
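Call sites then read something like this (a fragment, assuming
helper/resource; the client call and predicate are hypothetical):
```
err := resource.Retry(2*time.Minute, func() *resource.RetryError {
	resp, err := doRequest() // hypothetical API call
	if err != nil {
		if isThrottled(err) { // hypothetical predicate
			return resource.RetryableError(err)
		}
		return resource.NonRetryableError(err)
	}
	_ = resp
	return nil
})
```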
In #4700 while fixing a data race I made an incorrect assumption about
the return value of StateChangeConf, and ended up changing the behavior
in the timeout case to always return:
> timeout while waiting for state to become '[success]'
When it used to capture the "most recent error" from the function
itself.
It's much more useful to see that error bubbling up, so here we revert
to pulling it out of the function as we did before, and we protect
against the data race with a good old fashioned mutex.
Helpful when iterating on a drift test.
Eventually I think this assertion could be fanned out to something much
more targeted like:
ExpectAttributeDiff(resource, attr, oldval, newval)
But this is a step in the right direction.
* MaxItems defines the maximum number of items that can exist within a
TypeSet or TypeList. A specific use case is when a TypeSet is being
used to wrap a complex structure, but more than one instance would
cause instability.
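For example (a schema fragment; field names illustrative):
```
"listener": {
	Type:     schema.TypeSet,
	Optional: true,
	MaxItems: 1, // the wrapped structure only makes sense once
	Elem: &schema.Resource{
		Schema: map[string]*schema.Schema{
			"port": {Type: schema.TypeInt, Required: true},
		},
	},
},
```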
It was a mistake to switch fully to `==` when activating waiting for
capacity on updates in #3947. Users that didn't set `min_elb_capacity ==
desired_capacity` and instead treated it as an actual "minimum" would
see timeouts for every create, since their target numbers would never be
reached exactly.
Here, we fix that regression by restoring the minimum waiting behavior
during creates.
In order to preserve all the stated behavior, I had to split out
different criteria for create and update, criteria which are now
exhaustively unit tested.
The set of fields that affect capacity waiting behavior has become a bit
of a mess. Next major release I'd like to rework all of these into a
more consistently named block of config. For now, just getting the
behavior correct and documented.
(Also removes all the fixed names from the ASG tests as I was hitting
collision issues running them over here.)
Fixes #4792
Implementation notes:
* The hash implementation was not considering key value, causing "diffs
did not match" errors when a value was updated. Switching to default
HashResource implementation fixes this
* Using HashResource as a default exposed a bug in helper/schema that
needed to be fixed so the Set function is picked up properly during
Read
* Stop writing back values into the `key` attribute; it triggers extra
diffs when `default` is used. Computed values all just go into `var`.
* Includes a state migration to prevent unnecessary diffs based on
"key" attribute hashcodes changing.
In the tests:
* Switch from leaning on the public demo Consul instance to requiring a
CONSUL_HTTP_ADDR variable be set pointing to a `consul agent -dev`
instance to be used only for testing.
* Add a test that exposes the updating issues and covers the fixes
Fixes #774. Fixes #1866. Fixes #3023.
Adds the `TF_SKIP_REMOTE_TESTS` env var to be used in cases where the
`http.Get()` smoke test passes but the network is not able to service
the needs of the tests.
Fixes #4421
The implementation was attempting to capture the error using a scoped
variable reference, but the error value is already exposed via the
return value of `WaitForState()`. Using that instead fixes the data
race.
Fixes the following data race:
```
==================
WARNING: DATA RACE
Read by goroutine 74:
github.com/hashicorp/terraform/helper/resource.Retry()
/Users/phinze/go/src/github.com/hashicorp/terraform/helper/resource/wait.go:35 +0x284
github.com/hashicorp/terraform/helper/resource.TestRetry_timeout()
/Users/phinze/go/src/github.com/hashicorp/terraform/helper/resource/wait_test.go:35 +0x60
testing.tRunner()
/private/var/folders/vd/7l9ys5k57l91x63sh28wl_kc0000gn/T/workdir/go/src/testing/testing.go:456 +0xdc
Previous write by goroutine 90:
github.com/hashicorp/terraform/helper/resource.Retry.func1()
/Users/phinze/go/src/github.com/hashicorp/terraform/helper/resource/wait.go:20 +0x87
github.com/hashicorp/terraform/helper/resource.(*StateChangeConf).WaitForState.func1()
/Users/phinze/go/src/github.com/hashicorp/terraform/helper/resource/state.go:83 +0x284
Goroutine 74 (running) created at:
testing.RunTests()
/private/var/folders/vd/7l9ys5k57l91x63sh28wl_kc0000gn/T/workdir/go/src/testing/testing.go:561 +0xaa3
testing.(*M).Run()
/private/var/folders/vd/7l9ys5k57l91x63sh28wl_kc0000gn/T/workdir/go/src/testing/testing.go:494 +0xe4
main.main()
github.com/hashicorp/terraform/helper/resource/_test/_testmain.go:84 +0x20f
Goroutine 90 (running) created at:
github.com/hashicorp/terraform/helper/resource.(*StateChangeConf).WaitForState()
/Users/phinze/go/src/github.com/hashicorp/terraform/helper/resource/state.go:127 +0x283
github.com/hashicorp/terraform/helper/resource.Retry()
/Users/phinze/go/src/github.com/hashicorp/terraform/helper/resource/wait.go:34 +0x276
github.com/hashicorp/terraform/helper/resource.TestRetry_timeout()
/Users/phinze/go/src/github.com/hashicorp/terraform/helper/resource/wait_test.go:35 +0x60
testing.tRunner()
/private/var/folders/vd/7l9ys5k57l91x63sh28wl_kc0000gn/T/workdir/go/src/testing/testing.go:456 +0xdc
==================
```
Generate bucket names and object names per test instead of once at the
top level. Should help avoid failures like this one:
https://travis-ci.org/hashicorp/terraform/jobs/100254008
All storage tests checked on this commit:
```
TF_ACC=1 go test -v ./builtin/providers/google -run TestAccGoogleStorage
=== RUN TestAccGoogleStorageBucketAcl_basic
--- PASS: TestAccGoogleStorageBucketAcl_basic (8.90s)
=== RUN TestAccGoogleStorageBucketAcl_upgrade
--- PASS: TestAccGoogleStorageBucketAcl_upgrade (14.18s)
=== RUN TestAccGoogleStorageBucketAcl_downgrade
--- PASS: TestAccGoogleStorageBucketAcl_downgrade (12.83s)
=== RUN TestAccGoogleStorageBucketAcl_predefined
--- PASS: TestAccGoogleStorageBucketAcl_predefined (4.51s)
=== RUN TestAccGoogleStorageObject_basic
--- PASS: TestAccGoogleStorageObject_basic (3.77s)
=== RUN TestAccGoogleStorageObjectAcl_basic
--- PASS: TestAccGoogleStorageObjectAcl_basic (4.85s)
=== RUN TestAccGoogleStorageObjectAcl_upgrade
--- PASS: TestAccGoogleStorageObjectAcl_upgrade (7.68s)
=== RUN TestAccGoogleStorageObjectAcl_downgrade
--- PASS: TestAccGoogleStorageObjectAcl_downgrade (7.37s)
=== RUN TestAccGoogleStorageObjectAcl_predefined
--- PASS: TestAccGoogleStorageObjectAcl_predefined (4.16s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/google 68.275s
```
* Add SSH Keys to all droplets in tests, this prevents acctests from
spamming account owner email with root password details
* Add a new helper/acctest package to be a home for random string / int
implementations used in tests.
* Insert some random details into record tests to prevent collisions
* Normalize config style in tests to hclfmt conventions
In the acceptance testing framework, it is necessary to provide a copy
of the state _before_ the destroy is applied to the check in order that
it can loop over resources to verify their destruction. This patch makes
a deep copy of the state prior to applying test steps which have the
Destroy option set and then passes that to the destroy check.
We weren't doing any log setup for acceptance tests, which made it
difficult to wrangle log output in CI.
This moves the log setup functions we use in `main` over into a helper
package so we can use them for acceptance tests as well.
This means that acceptance tests will by default be a _lot_ quieter,
only printing out actual test output. Setting `TF_LOG=trace` will
restore the full prior noise level.
Only minor behavior change is to make `ioutil.Discard` the default
return value rather than a `nil` that needs to be checked for.
Changing the Set internals makes a lot of sense as it saves doing
conversions in multiple places and gives a central place to alter
the key when an item is computed.
This will have no side effects other than that the ordering is now
based on strings instead of integers, so the order will be different.
This will however have no effect on existing configs as these will
use the individual codes/keys and not the ordering to determine if
there is a diff or not.
Lastly (but I think also most importantly) there is a fix in this PR
that makes diffing sets far more performant. Before, a full diff
required reading the complete Set for every single parameter/attribute
you wanted to diff, while now it only gets that specific parameter.
We have a use case where we have a Set that has 18 parameters and the
set consists of about 600 items (don't ask 😉). So when doing a diff
it would take 100% CPU of all cores and stay that way for almost an
hour before being able to complete the diff.
Debugging this we learned that for retrieving every single parameter
it made over 52,000 calls to `func (c *ResourceConfig) get(..)`. In
this function a slice is created and used only for the duration of the
call, so the time needed to create all needed slices and on the other
hand the time the garbage collector needed to clean them up again caused
the system to cripple itself. Next to that there are also some expensive
reflect calls in this function which also claimed a fair amount of CPU
time.
After this fix the number of calls needed to get a single parameter
dropped from 52,000+ to only 2! 😃
I promised myself that next time I jumped in this file I'd fix this up.
Now we don't have to manually index the file with comments, we can just
add descriptive names to the test cases!
Because `aws_security_group_rule` resources are an abstraction on top of
Security Groups, they must interact with the AWS Security Group APIs in
a pattern that often results in lots of parallel requests interacting
with the same security group.
We've found that this pattern can trigger race conditions resulting in
inconsistent behavior, including:
* Rules that report as created but don't actually exist on AWS's side
* Rules that show up in AWS but don't register as being created
locally, resulting in follow up attempts to authorize the rule
failing w/ Duplicate errors
Here, we introduce a per-SG mutex that must be held by any security
group before it is allowed to interact with AWS APIs. This protects the
space between `DescribeSecurityGroup` and `Authorize*` / `Revoke*`
calls, ensuring that no other rules interact with the SG during that
span.
The included test exposes the race by applying a security group with
lots of rules, which based on the dependency graph can all be handled in
parallel. This fails most of the time without the new locking behavior.
I've omitted the mutex from `Read`, since it is only called during the
Refresh walk when no changes are being made, meaning a bunch of parallel
`DescribeSecurityGroup` API calls should be consistent in that case.
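The locking pattern, roughly, using helper/mutexkv for keyed mutexes
(the function name is illustrative):
```
package aws

import "github.com/hashicorp/terraform/helper/mutexkv"

// sgMutexKV holds one mutex per security group ID.
var sgMutexKV = mutexkv.NewMutexKV()

func applyRule(sgID string) error {
	// Hold the per-SG lock across the Describe + Authorize/Revoke window
	// so no other rule can interleave with this security group.
	sgMutexKV.Lock(sgID)
	defer sgMutexKV.Unlock(sgID)
	// ... DescribeSecurityGroups, then Authorize*/Revoke* calls ...
	return nil
}
```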
We've been moving away from config fields expecting file paths that
Terraform will load, instead prefering fields that expect file contents,
leaning on `file()` to do loading from a path.
This helps with consistency and also flexibility - since this makes it
easier to shift sensitive files into environment variables.
Here we add a little helper package to manage the transitional period
for these fields where we support both behaviors.
Also included is the first of several fields being shifted over - SSH
private keys in provisioner connection config.
We're moving to new field names so the behavior is more intuitive, so
instead of `key_file` it's `private_key` now.
Additional field shifts will be included in follow up PRs so they can be
reviewed and discussed individually.
Previously this would return the following sort of error:
expected type 'string', got unconvertible type '[]interface {}'
This is the raw error returned by the underlying mapstructure library.
This is not a helpful error message for anyone who doesn't know Go's
type system, and it exposes Terraform's internals to the UI.
Instead we'll catch these cases before we try to use mapstructure and
return a more straightforward message.
By checking the type before the IsComputed exception this also avoids
a crash caused when the assigned value is a computed list. Otherwise
the list of interpolations is allowed through here and then crashes later
during Diff when the value is not a primitive as expected.
A common issue with new resource implementations is not considering parts
of a complex structure that's used inside a set, which causes quirky
behavior.
The schema helper has enough information to provide a reasonable default
implementation of a set function that includes all non-computed attributes
in a deterministic way. Here we implement such a function and use it
when no explicit hashing function is provided.
In order to achieve this we encapsulate the construction of the zero
value for a schema in a new method schema.ZeroValue, which allows us to
put the fallback logic to the new default function in a single spot.
It is no longer valid to use &Set{F: schema.Set} and all uses of that
construct should be replaced with schema.ZeroValue().(*Set).
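For example (a fragment; assuming ZeroValue is called on the schema
whose zero value is wanted):
```
s := &schema.Schema{
	Type: schema.TypeSet,
	Elem: &schema.Schema{Type: schema.TypeString},
}
// The zero value carries the default hash function when no explicit
// Set function was provided:
emptySet := s.ZeroValue().(*schema.Set)
```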
It seems there are 4 locations left that use the `helper/multierror`
package, while the rest of TF has settled on the `hashicorp/go-multierror`
package.
Functionally this doesn't change anything, so I suggest deleting the
builtin version, as it can only cause confusion (both packages have the
same name, but are still different types according to Go's type system).
We need to set the value to an empty value so the state file does
indeed change the value. Otherwise the obsolete value is still
intact and doesn't get changed at all. This means `terraform show`
still shows the obsolete value when the particular value no longer
exists. This is due to the AWS API, which returns a null
instead of an empty string.
This was just a missed exit from the resource.Apply function -
subsequent refreshes would add the SchemaVersion back into the state,
but having the state recorded once without the meta information can
cause problems with Atlas's remote state checksumming.
By prefixing them with `cmd /c` it will work with both `winrm` and
`ssh` connection types.
This PR also reverts some bad stringer changes made in PR #2673
In `helper/schema` we already makes a distinction between `Default`
which is always applied and `InputDefault` which is displayed to the
user for an empty field.
But for variables we just have `Default` which is treated like
`InputDefault`. This changes it to _not_ prompt the user for a value
when the variable declaration includes a default.
Treating this as a UX bugfix and the "don't prompt for variables w/
defaults set" behavior as the originally expected behavior we were
failing to honor.
Added an already-passing test to verify and cover the `helper/schema`
behavior.
Perhaps down the road we can add a `input_default` attribute to
variables to allow similar behavior to `helper/schema` in variables, but
for now just sticking with the fix.
Fixes #2592
This is the initial pure "all tests passing without a diff" stage. The
plan is to change the internal representation of StringList to include a
suffix delimiter, which will allow us to recognize empty and
single-element lists.
The Exists function can run in a context where the contents of the
template have changed, but it uses the old set of variables from the
state. This means that when the set of variables changes, rendering will
fail in Exists. This was returning an error, but really it just needs to
be treated as a scenario where the template needs re-rendering.
fixes #2344 and possibly a few other template issues floating around
I couldn't see a simple path to get this working for Maps, Sets,
and Lists, so let's land it as a primitive-only schema feature.
I think validation on primitives comprises 80% of the use cases anyways.
Guarantees that the `interface{}` arg to ValidateFunc is the proper
type, allowing implementations to be simpler.
Finish the docstring on `ValidateFunc` to call this out.
/cc @mitchellh
The runtime impl of ConflictsWith uses Resource.Get(), which makes it
work with any other attribute of the resource - the InternalValidate was
only checking against the local schemaMap though, preventing subResource
from using ConflictsWith properly.
It's a lot of wiring and it's a bit ugly, but it's not runtime code, so
I'm a bit less concerned about that aspect.
This should take care of the problem mentioned in #1909
This reworks the template lifecycle a bit such that we get nicer diff
behavior.
First, we tick ForceNew on for both filename and vars, so that the diff
indicates that the template will be "replaced" on change. This is mostly
cosmetic, but it also tracks conceptually with the fact that the
identifier we use is a hash of the contents, so any change essentially
makes a "new resource".
Second, we change the Exists implementation to only return `false` when
there has been a change in the rendered template. This lets descendent
resources see the computed value changing so that they'll properly
trigger in the plan.
Fixes #1898
Refs #1866 (but does not fix, there's another deeper issue there)