Because CBD (create-before-destroy) now runs after a RootTransformer, it
operates on a graph that _may_ have had a graphNodeRoot added to it (a
no-op node whose only purpose is to be a root).
CBD includes a step that tells the destroy node to depend on any parents
of the create node. When one of those parents was "root", the destroy
node ended up depending on "root", which made "root" cease to be an
actual root node.
Because graphNodeRoot is a singleton, the follow-up RootTransformer was
not sufficient to slap another root on top: the singleton wasn't seen as
a fresh node, so edges just accumulated on it, and we ended up in a
state with "no roots".
refs #1903 (not sure if this will fix all the "no root found" cases, or
just the one I bumped into)
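To make the failure mode concrete, here is a minimal toy sketch of the
step described above and one way to avoid it (skipping the synthetic
root). The graph type, the names, and wireDestroy are illustrative
stand-ins, not Terraform's actual transform code:

```go
package main

import "fmt"

// Toy dependency graph: deps[x] is the set of vertices x depends on.
// Illustration only, not Terraform's real graph types.
type graph struct {
	deps map[string]map[string]bool
}

func newGraph() *graph {
	return &graph{deps: map[string]map[string]bool{}}
}

func (g *graph) dependOn(x, y string) {
	if g.deps[x] == nil {
		g.deps[x] = map[string]bool{}
	}
	g.deps[x][y] = true
}

// parents returns the vertices that depend on v.
func (g *graph) parents(v string) []string {
	var out []string
	for x, ys := range g.deps {
		if ys[v] {
			out = append(out, x)
		}
	}
	return out
}

// isRoot reports whether nothing depends on v.
func (g *graph) isRoot(v string) bool {
	for _, ys := range g.deps {
		if ys[v] {
			return false
		}
	}
	return true
}

// wireDestroy is the CBD step in miniature: the destroy node takes on the
// create node's parents as dependencies, but skips the synthetic root so
// that nothing ends up depending on it.
func wireDestroy(g *graph, create, destroy, root string) {
	for _, p := range g.parents(create) {
		if p == root {
			continue
		}
		g.dependOn(destroy, p)
	}
}

func main() {
	g := newGraph()
	g.dependOn("root", "web (create)") // graphNodeRoot depends on everything
	wireDestroy(g, "web (create)", "web (destroy)", "root")
	fmt.Println(g.isRoot("root")) // true; without the skip it would be false
}
```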
Fixes #1947.
The root cause was a bad edge made by the CBD transform, going from the
flattened destroy node to the unflattened create node, which was no
longer in the graph. The destroy node therefore had a dependency that
could never be satisfied, which locked up the walk.
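For illustration, here is a toy sketch of why an edge to a missing
vertex locks up a dependency walk. This is a simplified sequential walk
with made-up vertex names, not Terraform's dag package:

```go
package main

import "fmt"

// walk runs each vertex once all of its dependencies have run. A
// dependency that is not in the vertex set can never run, so anything
// waiting on it (directly or transitively) is stuck forever.
func walk(vertices []string, deps map[string][]string) (ran, stuck []string) {
	done := map[string]bool{}
	for {
		progressed := false
		for _, v := range vertices {
			if done[v] {
				continue
			}
			ready := true
			for _, d := range deps[v] {
				if !done[d] {
					ready = false
					break
				}
			}
			if ready {
				done[v] = true
				ran = append(ran, v)
				progressed = true
			}
		}
		if !progressed {
			break
		}
	}
	for _, v := range vertices {
		if !done[v] {
			stuck = append(stuck, v)
		}
	}
	return ran, stuck
}

func main() {
	vertices := []string{"aws_instance.web (destroy)", "root"}
	deps := map[string][]string{
		// The bad edge: it points at the unflattened create node, which
		// is no longer in the graph after flattening.
		"aws_instance.web (destroy)": {"aws_instance.web (create)"},
		"root":                       {"aws_instance.web (destroy)"},
	}
	ran, stuck := walk(vertices, deps)
	fmt.Println("ran:", ran, "stuck:", stuck) // nothing runs; both are stuck
}
```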
formatlist distributes formatting over lists.
See the docs for details.
As a colleague commented:
"It happens all the time that we want a set of
outputs, but in a slightly different way than
just simple joining or concatting."
formatlist (combined with join)
makes it easy to satisfy those needs.
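As a rough illustration of what "distributes formatting over lists"
means, here is a minimal Go sketch of the behaviour. It is not the
actual interpolation implementation, and the function name and example
values are made up:

```go
package main

import (
	"fmt"
	"strings"
)

// formatList sketches the distribution behaviour: scalar arguments are
// repeated, list arguments contribute their i-th element, and every list
// argument must have the same length.
func formatList(format string, args ...interface{}) ([]string, error) {
	n := -1
	for _, a := range args {
		if s, ok := a.([]string); ok {
			if n == -1 {
				n = len(s)
			} else if len(s) != n {
				return nil, fmt.Errorf("list arguments must have the same length")
			}
		}
	}
	if n == -1 {
		n = 1 // no list arguments: behaves like a single Sprintf
	}
	out := make([]string, n)
	for i := 0; i < n; i++ {
		vals := make([]interface{}, len(args))
		for j, a := range args {
			if s, ok := a.([]string); ok {
				vals[j] = s[i]
			} else {
				vals[j] = a
			}
		}
		out[i] = fmt.Sprintf(format, vals...)
	}
	return out, nil
}

func main() {
	hosts := []string{"a.example.com", "b.example.com"}
	urls, err := formatList("https://%s:%d/", hosts, 8500)
	if err != nil {
		panic(err)
	}
	// Combined with a join, this turns the list into a single string.
	fmt.Println(strings.Join(urls, ", "))
	// https://a.example.com:8500/, https://b.example.com:8500/
}
```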
The runtime impl of ConflictsWith uses Resource.Get(), which makes it
work with any other attribute of the resource. InternalValidate, however,
was only checking against the local schemaMap, preventing subResource
from using ConflictsWith properly.
It's a lot of wiring and it's a bit ugly, but it's not runtime code, so
I'm a bit less concerned about that aspect.
This should take care of the problem mentioned in #1909.
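For context, this is the kind of (hypothetical) schema shape the
stricter validation was rejecting: an attribute inside a sub-resource
declaring a conflict with an attribute defined elsewhere on the
resource. The attribute names are made up; ConflictsWith, Elem, and the
nested schema.Resource are the real helper/schema pieces:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/schema"
)

// resourceExample is a hypothetical resource: an attribute inside a
// sub-resource (the nested Elem schema) declares a conflict with a
// top-level attribute. At runtime this is resolved through
// Resource.Get(), so the reference can point anywhere in the resource;
// the validation needs to look beyond the local schemaMap to accept it.
func resourceExample() *schema.Resource {
	return &schema.Resource{
		Schema: map[string]*schema.Schema{
			"address": {
				Type:     schema.TypeString,
				Optional: true,
			},
			"network": {
				Type:     schema.TypeList,
				Optional: true,
				Elem: &schema.Resource{
					Schema: map[string]*schema.Schema{
						"ip": {
							Type:     schema.TypeString,
							Optional: true,
							// References an attribute outside this
							// sub-resource's own schema map.
							ConflictsWith: []string{"address"},
						},
					},
				},
			},
		},
	}
}

func main() {
	r := resourceExample()
	fmt.Printf("top-level attributes: %d\n", len(r.Schema))
}
```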
Got this while playing around in a module:
> * unflattenable node: aws_security_group.internal (orphan)
> *terraform.graphNodeOrphanResource
Basically just copied the implementation from
d503cc2d82
This commit adds a server group resource. Users can create server
groups with different policies. If a server is launched in a certain
group, it will adhere to that group's policy. For example, servers can
be made to all launch on the same compute node, or on different compute
nodes.