core: band-aid fix for tainted double destroy

After a lot of fun debugging with @mitchellh we finally have a diagnosis
for #1056.

I'm going to attempt to reproduce the diagnosis in prose to test out my
understanding.

----

The `DestroyTransformer` runs twice:

 * `DestroyPrimary` mode creates destroy nodes for normal resources
 * `DestroyTainted` mode creates destroy nodes for tainted resources

These destroy nodes are specializations of `GraphConfigNode`, which has
a `DynamicExpand` step.
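
For intuition, here's a minimal standalone sketch of that shape - the names are simplified stand-ins, not the real Terraform types:

    package main

    import "fmt"

    // Simplified stand-ins for the two modes the transformer runs in.
    type destroyMode string

    const (
        destroyPrimary destroyMode = "DestroyPrimary"
        destroyTainted destroyMode = "DestroyTainted"
    )

    // destroyNode stands in for the specialized GraphConfigNode: the
    // transformer creates one per mode, and each later expands into its
    // own subgraph.
    type destroyNode struct {
        resource string
        mode     destroyMode
    }

    // dynamicExpand is where each node builds its subgraph; in the real
    // code this is the step that runs transforms against shared state.
    func (n *destroyNode) dynamicExpand() string {
        return fmt.Sprintf("subgraph: destroy %s (%s)", n.resource, n.mode)
    }

    func main() {
        // Two passes over the same resource yield two independent nodes.
        for _, m := range []destroyMode{destroyPrimary, destroyTainted} {
            n := &destroyNode{resource: "aws_instance.foo", mode: m}
            fmt.Println(n.dynamicExpand())
        }
    }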

In `DynamicExpand` we have a race condition between the normal destroy
node and the tainted destroy node for a given resource when
`CreateBeforeDestroy` is off.

The `DestroyTainted` `GraphConfigNode` must run the `TaintedTransform`
to draw out tainted nodes, since it is responsible for destroying them
for replacement.

The `DestroyPrimary` `GraphConfigNode` _also_ runs `TaintedTransform` -
this is to catch `Deposed` nodes from CBD, which piggyback on the end of
the `Tainted` list.
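
A rough sketch of the state layout that makes this necessary - field
names here are assumptions, not the real terraform structs:

    package sketch

    // Hypothetical stand-in for a single instance in state.
    type instanceState struct {
        ID string
    }

    type resourceState struct {
        Primary *instanceState
        // create_before_destroy appends its deposed instances to the end
        // of this same slice instead of tracking them separately, which
        // is why the DestroyPrimary pass has to walk Tainted at all.
        Tainted []*instanceState
    }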

Here's the bug: when CBD is off and you start an apply with a tainted
resource in your state, both `DestroyPrimary` and `DestroyTainted` catch
it and create their own `graphNodeTaintedResource` in their respective
subgraphs.

If parallelism is disabled, this doesn't happen because the
`DestroyPrimary` subgraph resolves completely before the
`DestroyTainted` node does its `DynamicExpand`, so the `Tainted` list
has been cleared by the time `DestroyTainted` is expanding. With
parallelism, each of these two subgraphs plays out in its own goroutine
simultaneously, and each picks up the tainted resource(s) that the apply
starts with. So you get two `graphNodeTaintedResource` nodes, and two
destroys.
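
To make the race concrete, here's a standalone toy (plain Go, none of
the Terraform machinery): each "subgraph" snapshots the shared tainted
list while expanding, destroys what it found, and only clears the list
afterwards. Serially, the second pass finds nothing; concurrently, both
passes typically snapshot before either clears:

    package main

    import (
        "fmt"
        "sync"
        "sync/atomic"
    )

    func run(parallel bool) int32 {
        tainted := []string{"aws_instance.foo"}
        var mu sync.Mutex
        var destroys int32

        subgraph := func() {
            // "DynamicExpand": snapshot the tainted list as of right now.
            mu.Lock()
            found := make([]string, len(tainted))
            copy(found, tainted)
            mu.Unlock()

            // One destroy per tainted instance found in the snapshot.
            for range found {
                atomic.AddInt32(&destroys, 1)
            }

            // Destroy complete: the instance leaves the tainted list.
            mu.Lock()
            tainted = tainted[:0]
            mu.Unlock()
        }

        if parallel {
            // The DestroyPrimary and DestroyTainted subgraphs each play
            // out in their own goroutine.
            var wg sync.WaitGroup
            for i := 0; i < 2; i++ {
                wg.Add(1)
                go func() { defer wg.Done(); subgraph() }()
            }
            wg.Wait()
        } else {
            subgraph() // first subgraph resolves completely...
            subgraph() // ...so the second finds an empty tainted list
        }
        return destroys
    }

    func main() {
        fmt.Println("serial destroys:  ", run(false)) // 1
        fmt.Println("parallel destroys:", run(true))  // typically 2
    }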

This band-aid fixes that by skipping the TaintedTransform altogether in
the `DestroyPrimary` node if CBD is off.

A better fix will follow, which involves reworking the `Deposed` concept
so it no longer piggybacks on `Tainted`.

fixes #1056
Paul Hinze 2015-02-26 18:52:22 -06:00
parent f2e92cfeab
commit c03e44106f
2 changed files with 32 additions and 9 deletions


@@ -8,7 +8,9 @@ import (
 	"sort"
 	"strings"
 	"sync"
+	"sync/atomic"
 	"testing"
+	"time"
 )
 
 func TestContext2Plan(t *testing.T) {
@@ -4646,7 +4648,22 @@ func TestContext2Apply_outputMultiIndex(t *testing.T) {
 func TestContext2Apply_taint(t *testing.T) {
 	m := testModule(t, "apply-taint")
 	p := testProvider("aws")
-	p.ApplyFn = testApplyFn
+
+	// destroyCount tests against regression of
+	// https://github.com/hashicorp/terraform/issues/1056
+	var destroyCount = int32(0)
+	var once sync.Once
+	simulateProviderDelay := func() {
+		time.Sleep(10 * time.Millisecond)
+	}
+
+	p.ApplyFn = func(info *InstanceInfo, s *InstanceState, d *InstanceDiff) (*InstanceState, error) {
+		once.Do(simulateProviderDelay)
+		if d.Destroy {
+			atomic.AddInt32(&destroyCount, 1)
+		}
+		return testApplyFn(info, s, d)
+	}
 	p.DiffFn = testDiffFn
 	s := &State{
 		Modules: []*ModuleState{
@@ -4691,6 +4708,10 @@ func TestContext2Apply_taint(t *testing.T) {
 	if actual != expected {
 		t.Fatalf("bad:\n%s", actual)
 	}
+
+	if destroyCount != 1 {
+		t.Fatalf("Expected 1 destroy, got %d", destroyCount)
+	}
 }
 
 func TestContext2Apply_unknownAttribute(t *testing.T) {


@@ -293,14 +293,16 @@ func (n *GraphNodeConfigResource) DynamicExpand(ctx EvalContext) (*Graph, error)
 			View: n.Resource.Id(),
 		})
 
-		// If we're only destroying tainted resources, then we only
-		// want to find tainted resources and destroy them here.
-		steps = append(steps, &TaintedTransformer{
-			State:          state,
-			View:           n.Resource.Id(),
-			Deposed:        n.Resource.Lifecycle.CreateBeforeDestroy,
-			DeposedInclude: true,
-		})
+		if n.Resource.Lifecycle.CreateBeforeDestroy {
+			// If we're only destroying tainted resources, then we only
+			// want to find tainted resources and destroy them here.
+			steps = append(steps, &TaintedTransformer{
+				State:          state,
+				View:           n.Resource.Id(),
+				Deposed:        n.Resource.Lifecycle.CreateBeforeDestroy,
+				DeposedInclude: true,
+			})
+		}
 	case DestroyTainted:
 		// If we're only destroying tainted resources, then we only
 		// want to find tainted resources and destroy them here.