This was never intended to be valid syntax, but it worked in the old HCL
parser, and we've found a decent number of examples of it in the wild.
Fixed in https://github.com/hashicorp/hcl/pull/62 and we'll keep this
test in Terraform to cover the behavior.
This commit adds State Change support to the LBaaS resources which should
help with clean terminations.
It also adds an acceptance test that builds out a 2-node load balancer
service.
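
For context, the usual shape of such a wait in a provider is the
helper's `resource.StateChangeConf`. Here's a minimal sketch, assuming
a hypothetical client with a `GetPoolStatus` method and made-up status
names and timeouts (the struct's field shapes have also shifted across
Terraform versions):

```go
package main

import (
	"fmt"
	"time"

	"github.com/hashicorp/terraform/helper/resource"
)

// poolStatusGetter is a stand-in for the provider's API client
// (hypothetical, for illustration only).
type poolStatusGetter interface {
	GetPoolStatus(id string) (string, error)
}

// waitForPoolActive blocks until the pool leaves PENDING_CREATE and
// reaches ACTIVE, or the timeout expires.
func waitForPoolActive(client poolStatusGetter, poolID string) error {
	stateConf := &resource.StateChangeConf{
		Pending:    []string{"PENDING_CREATE"},
		Target:     []string{"ACTIVE"},
		Timeout:    10 * time.Minute,
		Delay:      10 * time.Second,
		MinTimeout: 3 * time.Second,
		Refresh: func() (interface{}, string, error) {
			status, err := client.GetPoolStatus(poolID)
			if err != nil {
				return nil, "", err
			}
			// A nil result tells StateChangeConf the resource is
			// gone, so hand back the status string as the opaque
			// result as well.
			return status, status, nil
		},
	}
	if _, err := stateConf.WaitForState(); err != nil {
		return fmt.Errorf("error waiting for pool %s to become ACTIVE: %s", poolID, err)
	}
	return nil
}
```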
Because `aws_security_group_rule` resources are an abstraction on top of
Security Groups, they must interact with the AWS Security Group APIs in
a pattern that often results in lots of parallel requests interacting
with the same security group.
We've found that this pattern can trigger race conditions resulting in
inconsistent behavior, including:
* Rules that report as created but don't actually exist on AWS's side
* Rules that show up in AWS but don't register as being created
  locally, resulting in follow-up attempts to authorize the rule
  failing with Duplicate errors
Here, we introduce a per-SG mutex that must be held by any security
group rule before it is allowed to interact with AWS APIs. This protects the
space between `DescribeSecurityGroup` and `Authorize*` / `Revoke*`
calls, ensuring that no other rules interact with the SG during that
span.
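
A minimal sketch of what such a keyed mutex can look like, shaped after
Terraform's `helper/mutexkv` (the outer lock guards only the map, so
per-key critical sections never serialize against each other):

```go
package main

import "sync"

// MutexKV hands out one mutex per string key. The shape mirrors
// Terraform's helper/mutexkv; this is a sketch, not the vendored code.
type MutexKV struct {
	lock  sync.Mutex
	store map[string]*sync.Mutex
}

func NewMutexKV() *MutexKV {
	return &MutexKV{store: make(map[string]*sync.Mutex)}
}

// get returns the mutex for key, creating it on first use.
func (m *MutexKV) get(key string) *sync.Mutex {
	m.lock.Lock()
	defer m.lock.Unlock()
	mu, ok := m.store[key]
	if !ok {
		mu = &sync.Mutex{}
		m.store[key] = mu
	}
	return mu
}

func (m *MutexKV) Lock(key string)   { m.get(key).Lock() }
func (m *MutexKV) Unlock(key string) { m.get(key).Unlock() }
```

In the rule's create/delete paths, the `DescribeSecurityGroup` and
`Authorize*` / `Revoke*` sequence would then be bracketed with
`Lock(sgID)` / `defer Unlock(sgID)` on the owning security group's ID.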
The included test exposes the race by applying a security group with
many rules, which, based on the dependency graph, can all be handled in
parallel. Without the new locking behavior, this fails most of the time.
I've omitted the mutex from `Read`, since it is only called during the
Refresh walk when no changes are being made, so parallel
`DescribeSecurityGroup` API calls should be consistent in that case.
It's a bit confusing to have Terraform poll until instances come up on
ASG creation but not on update. This changes update to also poll if
`min_size` or `desired_capacity` changes.
This changes the waiting behavior to wait for precisely the desired
number of instances rather than treating that number as a minimum. I
believe this shouldn't have any undue side effects, and the behavior can
still be opted out of by setting `wait_for_capacity_timeout` to 0.
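
As a sketch of the exact-count wait against the AWS Go SDK (the
function name, the polling interval, and the zero-timeout short-circuit
are assumptions here, not the resource's actual code):

```go
package main

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/autoscaling"
)

// waitForASGCapacity polls until the group has exactly `want` InService
// instances, or the timeout expires.
func waitForASGCapacity(conn *autoscaling.AutoScaling, name string, want int, timeout time.Duration) error {
	if timeout == 0 {
		// Equivalent of wait_for_capacity_timeout = 0: don't wait.
		return nil
	}
	deadline := time.Now().Add(timeout)
	for {
		out, err := conn.DescribeAutoScalingGroups(&autoscaling.DescribeAutoScalingGroupsInput{
			AutoScalingGroupNames: []*string{aws.String(name)},
		})
		if err != nil {
			return err
		}
		if len(out.AutoScalingGroups) != 1 {
			return fmt.Errorf("expected 1 group named %q, got %d", name, len(out.AutoScalingGroups))
		}
		inService := 0
		for _, i := range out.AutoScalingGroups[0].Instances {
			if aws.StringValue(i.LifecycleState) == "InService" {
				inService++
			}
		}
		// Exact match, not a minimum.
		if inService == want {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("%q did not reach capacity %d (have %d)", name, want, inService)
		}
		time.Sleep(10 * time.Second)
	}
}
```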