Commit Graph

24371 Commits

Author SHA1 Message Date
Martin Atkins 31299e688d core: Allow legacy SDK to opt out of plan-time safety checks
Due to the imprecision of our shimming from the legacy SDK type system to
the new Terraform Core type system, the legacy SDK produces a number of
inconsistencies that cause only minor quirky behavior or broken
edge cases. To retain compatibility with those existing weird behaviors,
the legacy SDK opts out of our safety checks.

The intent here is to allow existing providers to continue to do their
previous unsafe behaviors for now, accepting that this will allow certain
quirky bugs from previous releases to persist, and then gradually migrate
away from the legacy SDK and remove this opt-out on a per-resource basis
over time.

As with the apply-time safety check opt-out, this is reserved only for
the legacy SDK and must not be used in any new SDK implementations. We
still include any inconsistencies as warnings in the logs as an aid to
anyone debugging weird behavior, so that they can see situations where
blame may be misplaced in the user-visible error messages.
2019-02-11 17:26:49 -08:00
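A minimal sketch of how such an opt-out can be wired, assuming the provider response carries a boolean legacy-SDK flag and that the plan-time check is plans/objchange.AssertPlanValid as described later in this log; the wrapper function and its parameters are invented for illustration:

    import (
        "fmt"
        "log"

        "github.com/hashicorp/terraform/configs/configschema"
        "github.com/hashicorp/terraform/plans/objchange"
        "github.com/zclconf/go-cty/cty"
    )

    // Sketch only: run the plan-time safety check, but let the legacy SDK
    // downgrade failures to log warnings instead of error diagnostics.
    func checkPlannedValue(schema *configschema.Block, prior, config, planned cty.Value, legacyOptOut bool, provider string) []error {
        var errs []error
        for _, err := range objchange.AssertPlanValid(schema, prior, config, planned) {
            if legacyOptOut {
                // Legacy SDK opt-out: keep the plan, but leave a trail in the
                // logs so misplaced blame can be spotted while debugging.
                log.Printf("[WARN] provider %q produced an invalid plan: %s", provider, err)
                continue
            }
            errs = append(errs, fmt.Errorf("provider %q planned an invalid value: %s", provider, err))
        }
        return errs
    }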
Martin Atkins 5649ae6abf core: Improve warnings for legacy SDK apply-time inconsistencies
We've allowed the legacy SDK an opt-out from the post-apply safety checks,
but previously we produced only a generic warning message in that case.
Now instead we'll still run the safety checks, but report the results in
the logs instead of as error diagnostics.

This should allow developers who are debugging strange interactions with
buggy legacy providers to get better insight into what is happening
upstream, helping to explain the failures that inevitably occur when
other downstream safety checks catch these invalid results.
2019-02-11 17:26:49 -08:00
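A hedged sketch of that behavior for the apply phase, reusing the imports from the sketch above and the objchange.AssertObjectCompatible check that appears later in this log; the wrapper is illustrative, not the actual core code:

    // Sketch: still run the post-apply check for the legacy SDK, but report
    // each inconsistency in the logs rather than as an error diagnostic.
    func logApplyInconsistencies(schema *configschema.Block, planned, actual cty.Value, provider string) {
        for _, err := range objchange.AssertObjectCompatible(schema, planned, actual) {
            log.Printf("[WARN] provider %q applied an inconsistent result: %s", provider, err)
        }
    }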
Martin Atkins 419f5e58cd core: Enforce the validity of planned new objects
We've been gradually adding safety checks of this sort throughout the
lifecycle to help ensure that buggy providers can't introduce
hard-to-diagnose downstream failures and misbehavior. This completes the
set by verifying during plan time that the provider has produced a plan
that actually achieves the goals defined in the configuration.

In particular, this catches the situation where a provider incorrectly
overrides a value explicitly set in configuration, which would otherwise
betray the reasonable user expectation that referencing an
explicitly-defined attribute produces exactly the value shown in
configuration.
2019-02-11 17:26:49 -08:00
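For example, under a hypothetical one-attribute schema, a plan that overrides a value set explicitly in configuration would now fail the check; a sketch using the objchange helper, where the exact error text will differ:

    // Config says name = "foo" but the provider plans "bar"; AssertPlanValid
    // returns a non-empty error list and core reports the provider as buggy.
    schema := &configschema.Block{
        Attributes: map[string]*configschema.Attribute{
            "name": {Type: cty.String, Optional: true},
        },
    }
    config := cty.ObjectVal(map[string]cty.Value{"name": cty.StringVal("foo")})
    planned := cty.ObjectVal(map[string]cty.Value{"name": cty.StringVal("bar")})
    errs := objchange.AssertPlanValid(schema, cty.NullVal(schema.ImpliedType()), config, planned)
    // len(errs) > 0 here, blaming the provider rather than the configuration.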
cgriggs01 1565abfcaa add four community providers 2019-02-11 17:25:30 -08:00
James Bardin 1ca7531cc7 allow implicit empty strings in lists
The helper/schema handling of lists loses empty string values, but
retains the correct count. Only re-count the values if the count is
missing entirely, and allow our shims to re-populate the zero values.
2019-02-11 19:24:14 -05:00
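For context, helper/schema's flatmap encoding stores a list as a count key plus one key per element, so the stored count can be trusted even when an empty-string element was dropped. A rough sketch of the repopulation idea, assuming a flatmap held in attrs of type map[string]string and the standard fmt and strconv packages (not the actual shim code):

    // Flatmap form of a two-element list where the empty string was lost:
    //   "names.#" = "2"
    //   "names.1" = "b"
    // Trust the stored count and fill any missing index with the zero value.
    count, _ := strconv.Atoi(attrs["names.#"])
    for i := 0; i < count; i++ {
        key := fmt.Sprintf("names.%d", i)
        if _, ok := attrs[key]; !ok {
            attrs[key] = "" // re-populate the lost empty string
        }
    }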
Chris Doherty 9cfb8797ae First pass at adding CODEOWNERS to link remote-state backends with maintainers of the associated providers. 2019-02-11 15:52:19 -08:00
James Bardin 3cecacb660
Merge pull request #20292 from hashicorp/jbardin/sdk
allow 0 and unset to be equal in count tests
2019-02-11 17:01:57 -05:00
James Bardin d871ce63fc
Merge pull request #20295 from hashicorp/jbardin/apply-error
process state even after provider.Apply errors
2019-02-11 16:51:06 -05:00
Kristin Laemmert f2f35265bc
command/show: json output enhancements (#20291)
* command/jsonplan: 
- add variables to plan output
- print known planned values for resources

Previously, resource attribute values were only displayed if the values
were wholly known. Now we will filter the unknown values out of the
change and print the known values.

* command/jsonstate: added depends_on and tainted fields
* command/show: update tests to reflect added fields
2019-02-11 13:17:03 -08:00
James Bardin 1bfc27817e process state even after provider.Apply errors
Terraform core expects a sane state even when the provider returns an
error. Make sure that the prior state is always the default value to
return, and then always attempt to process any state returned by
provider.Apply.
2019-02-11 15:41:07 -05:00
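A minimal sketch of that ordering, assuming the response shape of the providers package; variable names are illustrative:

    // Default to the prior state, then prefer whatever the provider returned,
    // even when it also returned errors, so core always has a sane state.
    newVal := priorVal
    if resp.NewState != cty.NilVal && !resp.NewState.IsNull() {
        newVal = resp.NewState
    }
    // The normal post-apply processing of newVal runs from here regardless of
    // whether resp.Diagnostics contains errors.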
James Bardin c02f1d7256 allow 0 and unset to be equal in count tests
This was changed in the single attribute test cases, but the AttrPair
test is used a lot for data sources. As far as tests are concerned, 0 and
unset should be treated equally for flatmapped collections.
2019-02-11 11:35:19 -05:00
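The intended comparison rule, roughly: a missing flatmap count key and "0" mean the same thing. A sketch with an invented helper name:

    // For flatmapped collections, "0" and an absent ".#"/".%" key are equal.
    func countsEqual(a, b map[string]string, key string) bool {
        av, ok := a[key]
        if !ok {
            av = "0"
        }
        bv, ok := b[key]
        if !ok {
            bv = "0"
        }
        return av == bv
    }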
Yann DEGAT e70b8928e9 remote/backend/swift: Add support for workspaces & locking 2019-02-11 11:13:35 +01:00
Martin Atkins fec6e0328d plans/objchange: AssertPlanValid function
Completing our set of provider result safety-check functions,
AssertPlanValid checks a result from a provider's PlanResourceChange to
make sure it doesn't propose a change that is not valid within the user
model of Terraform.

Specifically, it forbids the provider from planning a value that
contradicts what the user gave in configuration, which is important to
ensure that making a reference to an attribute elsewhere in the
configuration will produce exactly the given result, as users reasonably
expect.

Providers _are_ allowed (and, in fact, required) to make changes to
any Computed attribute values declared in the schema in order to fill in
the default values that the provider has generated. Later checks during
the apply phase will ensure that the provider remains true to these
planned values, to ensure that Terraform can keep its promise of doing
exactly what was planned or presenting an error explaining why not.
2019-02-08 19:47:02 -08:00
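The per-attribute rule can be summarized roughly as follows; this is a simplification of what AssertPlanValid enforces, ignoring nested blocks and the prior state, with an invented helper name:

    // Simplified rule: the planned value must echo the config value, except
    // that a Computed attribute left null in config may be planned as unknown
    // or as a provider-chosen default.
    func plannedAttrOK(attr *configschema.Attribute, config, planned cty.Value) bool {
        if attr.Computed && config.IsNull() {
            return true // provider decides: unknown now, or a known default
        }
        return planned.RawEquals(config)
    }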
Kristin Laemmert 5f8916b4fd
configs/configupgrade: do not panic on HEREDOCs. (#20281)
Previously, configupgrade would panic if it encountered a HEREDOC. For
the time being, we will simply print out the HEREDOC as-is.

Unfortunately, we discovered that terraform 0.11's version of HCL
allowed for HEREDOCs with the termination delimiter inline (instead of
on a newline, which is technically correct). Since the 0.12 configupgrade
needs to be bug-compatible with terraform 0.11, we must roll back to the
same version of HCL used in terraform 0.11.
2019-02-08 15:51:53 -08:00
Martin Atkins 6eb7bfbdfb
Merge #20265: Don't presume unknown for values unset in config
This changes the contract for `PlanResourceChange` so that the provider is now responsible
for populating all default values during plan, including inserting any unknown values for
defaults it will fill in at apply time.
2019-02-08 13:54:55 -08:00
James Bardin c20164ab31 fix CoerceValue to handle changing dynamic types
Objects with DynamicPseudoType attributes can't be coerced within a map
if a concrete type is set. Change the Value type used to an Object when
there is a type mismatch.
2019-02-08 16:36:27 -05:00
Martin Atkins e3618f915b backend/local: Fix mock provider in tests
We've changed the contract for PlanResourceChange to now require the
provider to populate any default values (including unknowns) it wants to
set for computed arguments, so our mock provider here now needs to be a
little more complex to deal with that.

This fixes several of the tests in this package. A minor change to
TestLocal_applyEmptyDirDestroy was required to make it properly configure
the mock provider so PlanResourceChange can access the schema.
2019-02-08 12:48:32 -08:00
Martin Atkins 7e186f72d9 command: Update "terraform show -json" tests for changed provider contract
We now require a provider to populate all of its defaults -- including
unknown value placeholders -- during PlanResourceChange. That means the
mock provider for testing "terraform show -json" must now manage the
population of the computed "id" attribute during plan.

To make this logic a little easier, we also change the ApplyResourceChange
implementation to fill in a non-null id, since that makes it easier for
the mock PlanResourceChange to recognize when it needs to populate that
default value during an update.
2019-02-08 11:58:21 -08:00
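A sketch of the kind of mock logic this implies, assuming the terraform package's MockProvider exposes a PlanResourceChangeFn hook taking the providers package request and response types; details are illustrative:

    // Mock plan: echo the proposed state but mark "id" as unknown when it is
    // not set, since the provider is now responsible for that placeholder.
    p.PlanResourceChangeFn = func(req providers.PlanResourceChangeRequest) providers.PlanResourceChangeResponse {
        planned := req.ProposedNewState.AsValueMap()
        if v, ok := planned["id"]; !ok || v.IsNull() {
            planned["id"] = cty.UnknownVal(cty.String)
        }
        return providers.PlanResourceChangeResponse{
            PlannedState: cty.ObjectVal(planned),
        }
    }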
James Bardin 82588af892 switch blocks based on value type, and check attrs
Check attributes on null objects, and fill in unknowns. If we're
evaluating the object, it either means we are at the top level, or a
NestingSingle block was present, and in either case we need to treat the
attributes as null rather than the entire object.

Switch on the block types rather than Nesting, so we don't need to add any
logic to change between List/Tuple or Map/Object when DynamicPseudoType
is involved.
2019-02-08 14:46:29 -05:00
Radek Simko e6777105b7
Merge pull request #20275 from hashicorp/vendor-openstack-upgrade
vendor: Bump openstack provider to v1.15.0
2019-02-08 16:22:35 +00:00
Sander van Harmelen 8b22d5a8ce
Merge pull request #20279 from hashicorp/svh/b-test
backend/remote: update the test logic
2019-02-08 17:17:15 +01:00
Sander van Harmelen aefbec63b1 backend/remote: update the test logic 2019-02-08 16:56:37 +01:00
Sander van Harmelen 252fa702dc
Update CHANGELOG.md 2019-02-08 16:53:31 +01:00
Sander van Harmelen 3b80f69eec
Merge pull request #20242 from hashicorp/svh/b-scanner-buffer-v0.12
backend/remote: fix bufio.Scanner: token too long
2019-02-08 16:52:22 +01:00
Sander van Harmelen 593abdb635
Merge pull request #20240 from hashicorp/svh/b-backend-tests-v0.12
backend/remote: cleanup test connections
2019-02-08 16:52:08 +01:00
Radek Simko 5ffb106783
docs/upgrade-guide: Document changes in remote state referencing 2019-02-08 11:14:16 +00:00
Radek Simko a7f0722729
backend/swift: Fix interface after upgrade 2019-02-08 11:03:03 +00:00
Radek Simko a67e6e19b1
vendor: github.com/terraform-providers/terraform-provider-openstack@v1.15.0 2019-02-08 10:59:06 +00:00
Martin Atkins 312d798a89 core: Restore our EvalReadData behavior
In an earlier commit we changed objchange.ProposedNewObject so that the
task of populating unknown values for attributes not known until apply
is the responsibility of the provider's PlanResourceChange method, rather
than being handled automatically.

However, we were also using objchange.ProposedNewObject to construct the
placeholder new object for a deferred data resource read, and so we
inadvertently broke that deferral behavior. Here we restore the old
behavior by introducing a new function objchange.PlannedDataResourceObject
which is a specialized version of objchange.ProposedNewObject that
includes the forced behavior of populating unknown values, because the
provider gets no opportunity to customize a deferred read.

TestContext2Plan_createBeforeDestroy_depends_datasource required some
updates here because its implementation of PlanResourceChange was not
handling the insertion of the unknown value for attribute "computed".
The other changes here are just in an attempt to make the flow of this
test more obvious, by clarifying that it is simulating a -refresh=false
run, which effectively forces a deferred read since we skip the eager
read that would normally happen in the refresh step.
2019-02-07 18:33:14 -08:00
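In effect, the new helper behaves like ProposedNewObject but always forces unknowns for the values the read would provide. A hedged sketch of its use for a deferred read, with the wrapper name invented:

    // The provider gets no chance to customize a deferred data read, so the
    // placeholder must carry unknowns for everything the read would populate.
    func planDeferredRead(schema *configschema.Block, configVal cty.Value) cty.Value {
        return objchange.PlannedDataResourceObject(schema, configVal)
    }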
Martin Atkins 8882dcaf86 core: Fix TestContext2Plan_dataResourceBecomesComputed
Now that ProposedNewState uses null to represent Computed attributes not
set in the configuration, the provider must fill in the unknown value for
"computed" in its plan result.

It seems that this test was incorrectly updated during our bulk-fix after
integrating the HCL2 work, but it didn't really matter because the
ReadDataSource function isn't called in the happy path anyway. But to
make the intent clearer here, we also now make ReadDataSource return an
error if it is called, making it explicit that no call is expected.
2019-02-07 18:33:14 -08:00
Martin Atkins c3e7efec35 core: Reject unknown values after reading a data resource
Data resources do not have a plan/apply distinction, so it is never valid
for a data resource to produce unknown values in its result object.

Unknown values in the data resource _config_ cause us to postpone the read
altogether, so a data source never receives unknown values as input and
therefore may never produce unknown values as output.
2019-02-07 18:33:14 -08:00
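The check itself is small; a sketch assuming the read result is a cty.Value, with the error wording illustrative:

    // A data read happens in a single step, so an unknown in its result can
    // never be resolved later; treat it as a provider bug.
    func checkDataResult(newVal cty.Value, provider string) error {
        if !newVal.IsWhollyKnown() {
            return fmt.Errorf("provider %q returned a data object with unknown values", provider)
        }
        return nil
    }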
James Bardin 32671241e0 set unknowns during initial PlanResourceChange
If ID is not set, make sure it's unknown.

Use SetUnknowns to set the rest of the computed values to Unknown.
2019-02-07 20:29:24 -05:00
James Bardin d17ba647a8 add SetUnknowns
SetUnknowns walks through a resource and changes any unset (null) values
that are computed in the schema to Unknown.
2019-02-07 20:24:36 -05:00
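Behavior along these lines, sketched for top-level attributes only; the real helper also recurses into nested blocks, and the names here are illustrative rather than the exact helper:

    // Replace null values of computed attributes with unknowns of the right type.
    func setUnknowns(val cty.Value, schema *configschema.Block) cty.Value {
        if val.IsNull() || !val.IsKnown() {
            return val
        }
        vals := val.AsValueMap()
        for name, attr := range schema.Attributes {
            v, ok := vals[name]
            if attr.Computed && (!ok || v.IsNull()) {
                vals[name] = cty.UnknownVal(attr.Type)
            }
        }
        return cty.ObjectVal(vals)
    }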
James Bardin be127725cc Additional tests with interpolated values 2019-02-07 20:23:39 -05:00
Martin Atkins f8a6f66be4 lang/funcs: Fix panic in "join" when an element is null
A null element now produces a proper error message instead of a panic.
2019-02-07 14:35:13 -08:00
Martin Atkins c794bf5bcc plans/objchange: Don't presume unknown for values unset in config
Previously we would construct a proposed new state with unknown values in
place of any not-set-in-config computed attributes, trying to save the
provider a little work in specifying that itself.

Unfortunately that turns out to be problematic because it conflates two
distinct situations: an attribute explicitly set in configuration to an
unknown value, whose final result overrides any default value the provider
might normally populate, and an attribute not set in configuration at all.

In other words, this allows the provider to recognize in the proposed new
state the difference between an Optional+Computed attribute being set to
unknown in the config vs not being set in the config at all.

The provider now has the responsibility to replace these proposed null
values with unknown values during PlanResourceChange if it expects to
select a value during the apply step. It may also populate a known value
if the final result can be predicted at plan time, as is the case for
constant defaults specified in the provider code.

This change comes from a realization that, from core's perspective, the
helper/schema ideas of zero values, explicit default values, and
customizediff tweaks are all just examples of "defaults", and that by
allowing the provider to see during plan whether these attributes are
explicitly set in configuration, it can decide whether the default will
be provided immediately during plan or deferred until apply.
2019-02-07 14:01:39 -08:00
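Put another way, during plan the provider now sees distinct cases for an Optional+Computed attribute; a simplified sketch of the decision for a string attribute, with invented names:

    // Decide the planned value for one Optional+Computed string attribute.
    func planOptionalComputed(proposed cty.Value, constantDefault *string) cty.Value {
        switch {
        case !proposed.IsKnown():
            // Set to an unknown in config: its eventual value overrides any default.
            return proposed
        case proposed.IsNull() && constantDefault != nil:
            // Default known at plan time: fill it in now.
            return cty.StringVal(*constantDefault)
        case proposed.IsNull():
            // Default decided during apply: plan an unknown placeholder.
            return cty.UnknownVal(cty.String)
        default:
            // Explicitly set in config: echo it back unchanged.
            return proposed
        }
    }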
Sander van Harmelen 47a00ea34b backend/remote: cleanup test connections
Clean up test connections to prevent file descriptor issues when running the tests on a Mac.
2019-02-07 09:55:19 +01:00
Sander van Harmelen 5249d0fe83 backend/remote: fix bufio.Scanner: token too long 2019-02-07 09:54:43 +01:00
Sander van Harmelen 651ee113ac
Merge pull request #20239 from hashicorp/svh/b-build-script-v0.12
build.sh: only set git commit info for dev builds
2019-02-07 09:53:15 +01:00
Chris Griggs 8a964a4bc4
Merge pull request #20251 from cgriggs01/cgriggs01-community1
[Website]New community providers
2019-02-06 12:47:32 -08:00
Martin Atkins 1530fe52f7 core: Legacy SDK providers opt out of our new apply result check
The shim layer for the legacy SDK type system is not precise enough to
guarantee it will produce identical results between plan and apply. In
particular, values that are null during plan will often become zero-valued
during apply.

To avoid breaking those existing providers while still allowing us to
introduce this check in the future, we'll introduce a rather-hacky new
flag that allows the legacy SDK to signal that it is the legacy SDK and
thus disable the check.

Once we start phasing out the legacy SDK in favor of one that natively
understands our new type system, we can stop setting this flag and thus
get the additional safety of this check without breaking any
previously-released providers.

No other SDK is permitted to set this flag, and we will remove it if we
ever introduce protocol version 6 in future, assuming that any provider
supporting that protocol will always produce consistent results.
2019-02-06 11:40:30 -08:00
Martin Atkins a81bc23611 core: Verify that objects don't change unexpectedly during apply
Previously we would allow providers to change anything about the planned
object value during apply, possibly returning an entirely-unrelated object
of the same type. In practice this led to some subtle bugs where a single
planned attribute value would change during apply and cause a downstream
failure due to a dependent resource now seeing input other than what
_it_ expected during plan.

Now we'll produce an explicit error message for this case which places the
blame with the correct party: the upstream resource that changed. Without
this, unexpected changes would often lead to the downstream resource
implementation being blamed in the error message even though it was just
reacting to the change from upstream.

As with most errors during apply, we'll still save the updated value in
the state but we'll halt the walk to ensure that the unexpected value
cannot propagate further and cause the result to potentially diverge
greatly from the changeset shown in the plan.

Compared to Terraform 0.11, we expect to see this error in many of the
same cases we saw the "diffs didn't match during apply" error in earlier
versions, since it is likely that many errors of that sort were the result
of unexpected upstream changes being incorrectly blamed on the downstream
resource that then used the result.
2019-02-06 11:40:30 -08:00
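A hedged sketch of how the check and its blame placement fit together, using objchange.AssertObjectCompatible; the wrapper and error wording are simplified. The caller still records the new value in state before these errors halt the walk, so the divergence cannot propagate further.

    // Compare the applied result against the planned value and blame the
    // provider of *this* resource if they diverge.
    func checkApplyResult(schema *configschema.Block, plannedVal, newVal cty.Value, provider, addr string) []error {
        var errs []error
        for _, err := range objchange.AssertObjectCompatible(schema, plannedVal, newVal) {
            errs = append(errs, fmt.Errorf("provider %q produced an unexpected new value for %s: %s", provider, addr, err))
        }
        return errs
    }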
Martin Atkins 07930aa7fb core: Context apply tests should produce consistent apply results
Because Terraform Core has traditionally not checked that the final apply
result is consistent with what was planned, some of our apply tests were
producing inconsistent results.

Here we fix all of that so that they produce something compatible with
what they planned. This doesn't actually achieve anything in isolation,
but we're about to start enforcing this consistency in a subsequent
commit.
2019-02-06 11:40:30 -08:00
Sander van Harmelen 03e82687d6 build.sh: only set git commit info for dev builds
This commit fixes a problem where `make bin` would strip off any prerelease info.
2019-02-06 20:32:54 +01:00
Martin Atkins a9274beaca build: Run "go generate" in modules mode
It seems that all of the tools we run here are now sufficiently
modules-aware to run without problems in modules mode, and indeed running
_not_ in modules mode was causing problems with locating packages in
mockgen.
2019-02-06 11:19:44 -08:00
cgriggs01 41af6ce54b two new community providers 2019-02-06 11:01:22 -08:00
Radek Simko 5946fd898e
Merge pull request #20247 from surminus/update-outputs-docs
Update docs for 0.12 terraform_remote_state data source
2019-02-06 15:06:38 +01:00
Laura Martin 76dedfbf9d Update docs for 0.12 terraform_remote_state data source
In 0.12, the outputs of a terraform_remote_state data source are
nested under the 'outputs' attribute [1]. This updates the docs
to make this change clearer.

Worked with @radeksimko at Terraform hackday, who has submitted a
related upgrade guide [2]

[1] 1f4d2f4c50/builtin/providers/terraform/data_source_state.go (L16-L43)
[2] d8e00191b7
2019-02-06 13:53:51 +01:00
James Bardin 1f4d2f4c50
Merge pull request #20235 from hashicorp/jbardin/id
only force top-level id's back to unknown
2019-02-05 16:42:01 -05:00
James Bardin 411df99f33 only force top-level id's back to unknown
Nested structures may have "id" fields, which should be treated
normally.
2019-02-05 16:16:08 -05:00