Commit Graph

16 Commits

Author SHA1 Message Date
Martin Atkins 410b60cb7f Stop requiring multi-vars (splats) to be in array brackets
Prior to Terraform 0.7, lists in Terraform were just a shallow abstraction
on top of strings with a magic delimiter between items. Wrapping a single
string in brackets in the configuration was Terraform's prompt that it
needed to split the string on that delimiter during interpolation.

In 0.7, when first-class lists were added, this convention was preserved
by flattening lists-of-lists by one level when they were encountered in
configuration. However, there was an oversight in that change where it
did not correctly handle the case where the inner list was unknown.

In #14135 we removed some code that was flattening partially-unknown lists
into fully-unknown (untyped) values. This inadvertently exposed the missed
case from the previous paragraph, causing issues for list-wrapped splat
expressions with unknown members. While this worked fine for resources,
due to some fixup done inside helper/schema, this did not work for other
interpolation contexts such as module blocks.

Various attempts to fix this up and restore the flattening behavior
selectively were unsuccessful, due to a proliferation of assumptions all
over the core code that would be too risky to change just to fix this bug.

This change, then, takes the different approach of removing the
requirement that splats be presented inside list brackets. This
requirement didn't make much sense anymore anyway, since no other
list-returning expression had this constraint and so the rest of Terraform
was already successfully dealing with both cases.

This leaves us with two different scenarios:

- For resource arguments, existing normalization code in helper/schema
  does its own flattening that preserves compatibility with the common
  practice of using bracketed splats. This change proves this with a test
  in the "test" provider that exercises the whole Terraform core and
  helper/schema stack, assigning bracketed splats to list and set
  attributes.

- For arguments in other blocks, such as in module callsites, the
  interpolator's own flattening behavior applies to known lists,
  preserving compatibility with configurations from before
  partially-computed splats were possible, but those wishing to use
  partially-computed splats are required to drop the surrounding brackets.
  This is less concerning because this scenario was introduced only in
  0.9.5, so the scope for breakage is limited to those who adopted this
  new feature quickly after upgrading.

As of this commit, the recommendation is to stop using brackets around
splats but the old form continues to be supported for backward
compatibility. In a future _major_ version of Terraform we will probably
phase out this legacy form to improve consistency, but for now both
forms are acceptable at the expense of some (pre-existing) weird behavior
when _actual_ lists-of-lists are used.
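
A minimal sketch of the two forms in a module block (the resource and
module names here are illustrative, not taken from this change):

    # Legacy form: splat wrapped in list brackets (still accepted for
    # backward compatibility)
    module "consumer_legacy" {
      source       = "./consumer"
      instance_ids = ["${aws_instance.example.*.id}"]
    }

    # Recommended form as of this commit: no surrounding brackets
    module "consumer" {
      source       = "./consumer"
      instance_ids = "${aws_instance.example.*.id}"
    }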

This addresses #14521 by officially adopting the suggested workaround of
dropping the brackets around the splat. However, it doesn't yet allow
passing of a partially-unknown list between modules: that still violates
assumptions in Terraform's core, so for the moment partially-unknown lists
work only within a _single_ interpolation expression, and cannot be
passed around between expressions. Until more holistic work is done to
improve Terraform's type handling, passing a partially-unknown splat
through to a module will result in a fully-unknown list emerging on
the other side, just as was the case before #14135; this change just
addresses the fact that this was failing with an error in 0.9.5.
2017-05-23 11:22:37 -07:00
Chris Marchesi f63ad1dbd1 providers/test: Add count resource <-> data source dep count scale tests
These tests cover the new refresh behaviour and would fail with "index
out of range" if the refresh graph is not expanded to take new resources
into account (scale out), or if it does not handle expanded count
orphans in a way that makes sure they don't get interpolated when walked
(scale in).
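
The shape of configuration these tests exercise is roughly the following
(resource and attribute names are illustrative of the test provider, not
exact):

    resource "test_resource" "foo" {
      # count is scaled out or in between runs, e.g. 3 -> 5 or 3 -> 1
      count = 3
    }

    data "test_data_source" "bar" {
      count = 3
      input = "${element(test_resource.foo.*.computed_value, count.index)}"
    }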
2017-05-12 15:45:06 -07:00
Martin Atkins e67c359b2d provider/test: allow assigning a label to each instance
When testing the behavior of multiple provider instances (either aliases
or child module overrides) it's convenient to be able to label the
individual instances to determine which one is actually being used for
the purpose of making test assertions.
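
A sketch of how this might look in a test configuration (the argument
name `label` is assumed from the commit title rather than verified):

    provider "test" {
      label = "primary"
    }

    provider "test" {
      alias = "secondary"
      label = "secondary"
    }

    resource "test_resource" "foo" {
      # assertions can then check which labelled instance handled this resource
      provider = "test.secondary"
    }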
2017-05-11 10:52:51 -07:00
Chris Marchesi d41b806789 core: Restore CountBoundaryTransformer to apply, add/adjust tests
Moving the transformer wholesale appears to have broken some tests, some
of which were actually doing legitimate work in normalizing singular
resources from foo.0 notation to just foo.

Adjusted TestPlanGraphBuilder to account for the extra
meta.count-boundary nodes now present in the graph output, and added
another context test covering this case. It appears the issue happens
during validate, as this is where the state can be altered to a broken
state if things are not properly transformed in the plan graph.
2017-04-19 22:23:52 -07:00
Chris Marchesi 2802d319d2 core: Move CountBoundaryTransformer to the plan graph builder
This fixes interpolation issues on grandchild data sources that have
multiple instances (ie: counts). For example, baz depends on bar, which
depends on foo.

In this instance, after an initial TF run is done and state is saved,
the next refresh/plan is not properly transformed, and instead of the
graph/state coming through as data.x.bar.0, it comes through as
data.x.bar.  This breaks interpolations that rely on splat operators -
ie: data.x.bar.*.out.
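
Sketched in configuration, the foo -> bar -> baz chain is roughly the
following (attribute names are illustrative):

    data "test_data_source" "foo" {
      count = 2
    }

    data "test_data_source" "bar" {
      count = 2
      input = "${element(data.test_data_source.foo.*.out, count.index)}"
    }

    data "test_data_source" "baz" {
      count = 2
      # this splat breaks if bar's instances end up in state as data.x.bar
      # rather than data.x.bar.0, data.x.bar.1, ...
      input = "${element(data.test_data_source.bar.*.out, count.index)}"
    }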
2017-04-19 16:56:54 -07:00
Mitchell Hashimoto c6d0333dc0 flatmap: mark computed list as a computed value in Expand
Fixes #12183

The fix for this is in flatmap, but the entire issue is a bit more
complex. Given a schema with a computed set, if you reference it like
this:

    lookup(attr[0], "field")

And "attr" contains a computed set within it, it would panic even though
"field" is available. There were a couple avenues I could've taken to
fix this:

1.) Any complex value containing any unknown value at any point is
entirely unknown.

2.) Only the specific part of the complex value is unknown.

I took route 2 so that the above works even though the value contains
something computed (since "field" itself is not computed but something
else within the set is). This may actually have an
effect on other parts of Terraform configs, however those similar
configs would've simply crashed previously so it shouldn't break any
pre-existing configs.
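
In configuration terms, the failing pattern was roughly the following,
where only part of the complex value is computed (resource and attribute
names are illustrative):

    resource "test_resource" "foo" {
      # "attr" is a list/set of maps; "field" is known at plan time while a
      # sibling value inside the same map is still computed
    }

    output "example" {
      value = "${lookup(test_resource.foo.attr[0], "field")}"
    }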
2017-02-23 10:03:59 -08:00
Mitchell Hashimoto dd8ee38ba8 providers/test: additional testing via integration tests 2017-01-28 11:09:24 -08:00
James Bardin d1d6907640 Add a provider test for a list of maps
Interpolation of a map from a list of maps was not working. Add a
provider example test to cover this.
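
The pattern being exercised is roughly this (variable and argument names
are illustrative):

    variable "map_list" {
      type = "list"

      default = [
        {
          a = "1"
        },
        {
          b = "2"
        },
      ]
    }

    resource "test_resource" "foo" {
      # hypothetical map-typed argument: interpolate one map out of the list
      map_attr = "${var.map_list[0]}"
    }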
2016-12-16 10:36:26 -05:00
James Bardin 68ba2d6ff0 ResourceConfig.get should never return (nil, true)
Fixes a case where ResourceConfig.get inadvertently returns a nil value.

Add an integration test where assigning a map to a list via
interpolation would panic.
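
The added integration test covers an interpolation of roughly this shape
(argument names are illustrative):

    variable "a_map" {
      type = "map"

      default = {
        key = "value"
      }
    }

    resource "test_resource" "foo" {
      # assigning a map where a list-typed argument is expected used to panic
      list_attr = "${var.a_map}"
    }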
2016-11-18 16:24:40 -05:00
James Nugent fb150ef72f provider/test: Add test of data source count.index
This adds a unit test to the test provider that verifies count.index
behaves correctly. Although not ideal, this is hard to implement as a
context test without reworking the (non-helper/schema) implementation of
the x_data_source.
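
The added test exercises roughly this pattern (the argument name is
illustrative):

    data "test_data_source" "foo" {
      count = 3

      # each instance should see its own index value
      input = "index-${count.index}"
    }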
2016-09-03 13:58:30 -07:00
Paul Hinze 14cea95e86 terraform: another set of ignore_changes fixes
This set of changes addresses two bug scenarios:

(1) When an ignored change canceled a resource replacement, any
downstream resources referencing computed attributes on that resource
would get "diffs didn't match" errors. This happened because the
`EvalDiff` implementation was calling `state.MergeDiff(diff)` on the
unfiltered diff. Generally this is what you want, so that downstream
references catch the "incoming" values. When there's a potential for the
diff to change, though, this results in problems with references.

Here we solve this by doing away with the separate `EvalNode` for
`ignore_changes` processing and integrating it into `EvalDiff`. This
allows us to only call `MergeDiff` with the final, filtered diff.

(2) When a resource had an ignored change but was still being replaced
anyway, the diff was being improperly filtered. This would cause
problems during apply when not all attributes were available to perform
the replacement.

We solve that by deferring actual attribute removal until after we've
decided that we do not have to replace the resource.
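
Sketched in configuration, scenario (1) looks roughly like this (attribute
names are illustrative; `required_new` stands in for a ForceNew attribute):

    resource "test_resource" "foo" {
      # a change to this ForceNew attribute is ignored, canceling the
      # replacement it would otherwise trigger
      required_new = "first"

      lifecycle {
        ignore_changes = ["required_new"]
      }
    }

    resource "test_resource" "bar" {
      # downstream reference to a computed attribute on "foo"; this is where
      # the "diffs didn't match" errors showed up before the fix
      input = "${test_resource.foo.computed_value}"
    }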
2016-07-08 16:48:23 -05:00
Paul Hinze 4a1b36ac0d core: rerun resource validation before plan and apply
In #7170 we found two scenarios where the type checking done during the
`context.Validate()` graph walk was circumvented, and the subsequent
assumption of type safety in the provider's `Diff()` implementation
caused panics.

Both scenarios have to do with interpolations that reference Computed
values. The sentinel we use to indicate that a value is Computed does
not carry any type information with it yet.

That means that an incorrect reference to a list or a map in a string
attribute can "sneak through" validation only to crop up...

 1. ...during Plan for Data Source References
 2. ...during Apply for Resource references
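
In configuration terms, the kind of reference that slipped past validation
looks roughly like this (names are illustrative):

    resource "test_resource" "foo" {
      # exposes a computed list attribute that is unknown until apply
    }

    resource "test_resource" "bar" {
      # a whole list interpolated into a string-typed argument: a type error
      # that previously surfaced as a panic in the provider's Diff
      string_attr = "${test_resource.foo.computed_list}"
    }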

In order to address this, we:

 * add high-level tests for each of these two scenarios in `provider/test`
 * add context-level tests for the same two scenarios in `terraform`
   (these tests proved _really_ tricky to write!)
 * place an `EvalValidateResource` just before `EvalDiff` and `EvalApply` to
   catch these errors
 * add some plumbing to `Plan()` and `Apply()` to return validation
   errors, which were previously only generated during `Validate()`
 * wrap unit-tests around `EvalValidateResource`
 * add an `IgnoreWarnings` option to `EvalValidateResource` to prevent
   active warnings from halting execution on the second-pass validation

Eventually, we might be able to attach type information to Computed
values, which would allow for these errors to be caught earlier. For
now, this solution keeps us safe from panics and raises the proper
errors to the user.

Fixes #7170
2016-07-01 13:12:57 -05:00
James Nugent 75ef7ab636 provider/test: Add more variants of maps
This commit adds a binary for the test provider, and adds a variety of
different types of map to the schema.
2016-06-09 10:49:49 +01:00
Paul Hinze b4df304b47 helper/schema: Normalize bools to "true"/"false" in diffs
For a long time now, the diff logic has relied on the behavior of
`mapstructure.WeakDecode` to determine how various primitives are
converted into strings.  The `schema.DiffString` function is used for
all primitive field types: TypeBool, TypeInt, TypeFloat, and TypeString.

The `mapstructure` library's string representation of booleans is "0"
and "1", which differs from `strconv.FormatBool`'s "false" and "true"
(which is used in writing out boolean fields to the state).

Because of this difference, diffs have long had the potential for
cosmetically odd but semantically neutral output like:

    "true" => "1"
    "false" => "0"

So long as `mapstructure.Decode` or `strconv.ParseBool` is used to
interpret these strings, there's no functional problem.

We had our first clear functional problem with #6005 and friends, where
users noticed diffs like the above showing up unexpectedly and causing
troubles when `ignore_changes` was in play.

This particular bug occurs down in Terraform core's EvalIgnoreChanges.
There, the diff is modified to account for ignored attributes, and
special logic attempts to handle properly the situation where the
ignored attribute was going to trigger a resource replacement. That
logic relies on the string representations of the Old and New fields in
the diff to be the same so that it filters properly.

Therefore, we now get a bug when a diff includes `Old: "0", New:
"false"` since the strings do not match, and `ignore_changes` is not
properly handled.
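
As a sketch of the user-facing shape of the problem (argument names are
illustrative), a configuration like the following could produce the
mismatched strings and defeat the `ignore_changes` filter:

    resource "test_resource" "foo" {
      # a TypeBool argument: the state stores "false"/"true" while the diff
      # produced "0"/"1", so the Old/New strings never matched
      opt_bool = false

      lifecycle {
        ignore_changes = ["opt_bool"]
      }
    }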

Here, we introduce `TypeBool`-specific normalizing into `finalizeDiff`.
I spiked out a full `diffBool` function, but figuring out which pieces
of `diffString` to duplicate there got hairy. This seemed like a simpler
and more direct solution.

Fixes #6005 (and potentially others!)
2016-05-05 09:00:58 -05:00
Paul Hinze f480ae3430 core: Fix issues with ignore_changes
The ignore_changes diff filter was stripping out attributes on Create
but the diff was still making it down to the provider, so Create would
end up missing attributes, causing a full failure if any required
attributes were being ignored.

In addition, any changes that required a replacement of the resource
were causing problems with `ignore_changes`, which didn't properly filter
out the replacement when the triggering attributes were filtered out.

Refs #5627
2016-03-21 14:20:36 -05:00
Paul Hinze c3e27b3e0a provider/test: a test provider
Here we also introduce a `test` provider meant as an aid to exposing,
via automated tests, issues involving interactions between
`helper/schema` and Terraform core.

This has been helpful so far in diagnosing `ignore_changes` problems,
and I imagine it will be helpful in other contexts as well.

We'll have to be careful to prevent the `test` provider from becoming a
dumping ground for poorly specified tests that have a clear home
elsewhere. But for bug exposure I think it's useful to have.
2016-03-21 08:59:54 -05:00