Fixes: #13805
Before the fix:
```
Error refreshing state: 1 error(s) occurred:
* logentries_logset.logset: logentries_logset.logset: No such log set with key 278e7344-1201-43ba-9804-77b9a72fe7d6
```
After the fix:
```
% terraform plan ✚ ✭
[WARN] /Users/stacko/Code/go/bin/terraform-provider-logentries overrides an internal plugin for logentries-provider.
If you did not expect to see this message you will need to remove the old plugin.
See https://www.terraform.io/docs/internals/internal-plugins.html
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
logentries_logset.logset: Refreshing state... (ID: 278e7344-...a72fe7d6)
logentries_log.log: Refreshing state... (ID: 2ae1e8ae-...e932d25c)
The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.
Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.
+ logentries_log.log
logset_id: "${logentries_logset.logset.id}"
name: "test-log"
retention_period: "ACCOUNT_DEFAULT"
source: "token"
token: "<computed>"
+ logentries_logset.logset
location: "nonlocation"
name: "testing-terraform-destroy"
Plan: 2 to add, 0 to change, 0 to destroy.
```
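For context on the fix itself: the usual way a Terraform provider resolves this class of refresh failure is in the resource's Read function, by dropping the resource from state when the API reports that it no longer exists, rather than returning an error. The sketch below shows that common pattern; the client interface, the GetLogSet method, and the attribute handling are assumptions for illustration, not necessarily the exact change in this PR.
```
package logentries

import (
	"github.com/hashicorp/terraform/helper/schema"
)

// logSetClient is a stand-in for whatever client the provider actually uses;
// the method name and its not-found signalling are assumptions.
type logSetClient interface {
	GetLogSet(id string) (name string, exists bool, err error)
}

// resourceLogentriesLogSetRead sketches the common provider pattern for a
// resource deleted outside of Terraform: clear the ID so the entry is removed
// from state during refresh instead of failing the whole refresh.
func resourceLogentriesLogSetRead(d *schema.ResourceData, meta interface{}) error {
	client := meta.(logSetClient)

	name, exists, err := client.GetLogSet(d.Id())
	if err != nil {
		return err
	}
	if !exists {
		// "No such log set" is not a refresh failure; dropping the ID tells
		// Terraform the remote object is gone so it can be recreated.
		d.SetId("")
		return nil
	}

	return d.Set("name", name)
}
```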
Test Run:
```
% make testacc TEST=./builtin/providers/logentries ✚ ✭
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/04/20 20:36:20 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/logentries -v -timeout 120m
=== RUN TestProvider
--- PASS: TestProvider (0.00s)
=== RUN TestProvider_impl
--- PASS: TestProvider_impl (0.00s)
=== RUN TestAccLogentriesLog_Token
--- PASS: TestAccLogentriesLog_Token (39.03s)
=== RUN TestAccLogentriesLog_SourceApi
--- PASS: TestAccLogentriesLog_SourceApi (28.46s)
=== RUN TestAccLogentriesLog_SourceAgent
--- PASS: TestAccLogentriesLog_SourceAgent (6.19s)
=== RUN TestAccLogentriesLog_RetentionPeriod1M
--- PASS: TestAccLogentriesLog_RetentionPeriod1M (3.04s)
=== RUN TestAccLogentriesLog_RetentionPeriodAccountDefault
--- PASS: TestAccLogentriesLog_RetentionPeriodAccountDefault (2.71s)
=== RUN TestAccLogentriesLog_RetentionPeriodAccountUnlimited
--- PASS: TestAccLogentriesLog_RetentionPeriodAccountUnlimited (2.65s)
=== RUN TestAccLogentriesLogSet_Basic
--- PASS: TestAccLogentriesLogSet_Basic (1.54s)
=== RUN TestAccLogentriesLogSet_NoLocation
--- PASS: TestAccLogentriesLogSet_NoLocation (1.54s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/logentries 85.177s
```
Moving the transformer wholesale appears to have broken some tests; some of
those tests were doing legitimate work in normalizing singular resources from
foo.0 notation to plain foo.
Adjusted TestPlanGraphBuilder to account for the extra meta.count-boundary
nodes now present in the graph output, and added another context test covering
this case. The issue appears to happen during validate, since that is where
the state can be altered into a broken form if things are not properly
transformed in the plan graph.
This fixes interpolation issues on grandchild data sources that have
multiple instances (i.e. counts). For example: baz depends on bar, which
depends on foo.
In this situation, after an initial Terraform run has completed and state has
been saved, the next refresh/plan is not properly transformed: instead of the
graph/state coming through as data.x.bar.0, it comes through as data.x.bar.
This breaks interpolations that rely on splat operators, e.g.
data.x.bar.*.out.
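A rough illustration of the configuration shape being described, written as a Go string constant the way provider acceptance tests embed configs; the data source type x and its in/out attributes are hypothetical, chosen only to mirror the addresses mentioned above.
```
package example // placement is arbitrary; this is only an illustrative fixture

// testDataSourceCountConfig sketches the foo -> bar -> baz chain described
// above. What matters is that bar and baz use count plus splat references,
// so their instances must be addressed as data.x.bar.0, data.x.bar.1, etc.
const testDataSourceCountConfig = `
data "x" "foo" {
  count = 2
}

data "x" "bar" {
  count = 2
  in    = "${element(data.x.foo.*.out, count.index)}"
}

data "x" "baz" {
  count = 2
  in    = "${element(data.x.bar.*.out, count.index)}"
}
`
```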
A couple of tests require lowering the grace period to keep them from taking
the full 30s timeout.
The Retry_hang test also needed to be removed from the Parallel group,
because it modifies the global refreshGracePeriod variable.
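As a sketch of why the parallelism has to go (illustrative only; the real Retry_hang test does more than this): a test that temporarily rewrites a package-level variable would race with any test running concurrently.
```
package resource // stand-in; the real variable and test live alongside resource.Retry

import (
	"testing"
	"time"
)

// Stand-in for the real package-level grace period variable.
var refreshGracePeriod = 30 * time.Second

// Illustrative shape of the test: it must not call t.Parallel(), because it
// swaps the global grace period for the duration of the test.
func TestRetry_hang(t *testing.T) {
	old := refreshGracePeriod
	refreshGracePeriod = 50 * time.Millisecond
	defer func() { refreshGracePeriod = old }()

	// ... exercise a Retry call whose refresh hangs, relying on the
	// shortened grace period to keep the test well under the 30s default ...
}
```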
Refresh calls may have side effects that need to be recorded if the refresh
succeeds; this is especially common when WaitForState is called from
resource.Retry.
If the WaitForState timeout is reached and there is a Refresh call
in-flight, wait up to refreshGracePeriod (set to 30s) for it to
complete.
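A minimal sketch of that grace-period behaviour, under invented names rather than the actual WaitForState implementation: when the caller's timeout fires while a refresh is still in flight, the result is awaited for up to refreshGracePeriod instead of being thrown away.
```
package main

import (
	"errors"
	"fmt"
	"time"
)

// refreshGracePeriod mirrors the 30s default described above; tests lower it
// so a hung refresh doesn't hold a test for the full period.
var refreshGracePeriod = 30 * time.Second

// waitWithGrace is a hypothetical helper showing the idea: resCh delivers the
// result of an in-flight refresh, timeout is the caller's overall deadline.
func waitWithGrace(timeout time.Duration, resCh <-chan string) (string, error) {
	select {
	case res := <-resCh:
		return res, nil
	case <-time.After(timeout):
		// A refresh may still be running and may have had side effects that
		// need recording, so give it a bounded grace period to finish.
		select {
		case res := <-resCh:
			return res, nil
		case <-time.After(refreshGracePeriod):
			return "", errors.New("timeout while waiting for state refresh")
		}
	}
}

func main() {
	resCh := make(chan string, 1)
	go func() {
		time.Sleep(150 * time.Millisecond) // simulated slow refresh
		resCh <- "refreshed"
	}()

	res, err := waitWithGrace(100*time.Millisecond, resCh)
	fmt.Println(res, err) // the slow refresh still lands inside the grace period
}
```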
Previously we fixed this specifically for the Enterprise VCS integration,
but we also had some long-standing errors of this sort in the docs for
how to specify module sources on Bitbucket.
This test unfortunately relies on the timing of the loops in WaitForState and
on the text of the error message. Adjust the timing so the timeout isn't an
even multiple of the poll interval, and make sure we reach a minimum number of
retries.
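A toy illustration of that timing constraint (names and values invented here): a timeout that is an even multiple of the poll interval presumably lets the final poll and the deadline race, making the retry count and error text nondeterministic, while offsetting the timeout and counting polls keeps the assertion stable.
```
package main

import (
	"fmt"
	"time"
)

func main() {
	pollInterval := 50 * time.Millisecond
	timeout := 330 * time.Millisecond // deliberately not a multiple of 50ms

	deadline := time.After(timeout)
	ticker := time.NewTicker(pollInterval)
	defer ticker.Stop()

	polls := 0
loop:
	for {
		select {
		case <-ticker.C:
			polls++
		case <-deadline:
			break loop
		}
	}

	// A test built around this loop could now assert polls >= some minimum.
	fmt.Printf("completed %d polls before the %s timeout\n", polls, timeout)
}
```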
Make sure that we can cancel the WaitForState refresh loop when a timeout is
reached; otherwise it may run indefinitely. There's no need to try to store
and read the Result concurrently; just pass the value over a channel.
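A small sketch of that shape, using invented names rather than the actual WaitForState code: the worker selects on a cancellation channel so it stops promptly at timeout, and it hands its result back over a channel instead of writing to a shared Result field.
```
package main

import (
	"errors"
	"fmt"
	"time"
)

type result struct {
	value string
	err   error
}

// refreshLoop polls doRefresh until it succeeds or cancel is closed; the
// outcome travels over the returned channel, so no shared value is read and
// written concurrently.
func refreshLoop(doRefresh func() (string, error), cancel <-chan struct{}) <-chan result {
	resCh := make(chan result, 1)
	go func() {
		for {
			select {
			case <-cancel:
				resCh <- result{err: errors.New("refresh loop cancelled")}
				return
			default:
			}

			if v, err := doRefresh(); err == nil {
				resCh <- result{value: v}
				return
			}
			time.Sleep(10 * time.Millisecond) // poll interval for the sketch
		}
	}()
	return resCh
}

func main() {
	cancel := make(chan struct{})
	resCh := refreshLoop(func() (string, error) { return "", errors.New("not ready") }, cancel)

	select {
	case res := <-resCh:
		fmt.Println(res.value, res.err)
	case <-time.After(100 * time.Millisecond):
		close(cancel) // timeout reached: stop the loop instead of leaking it
		fmt.Println((<-resCh).err)
	}
}
```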