Some error checking was omitted. Specifically, the `cloudTrailSetLogging`
call in the Create function ignored its return value, and
`cloudTrailGetLoggingStatus` could crash on a nil dereference when
returning its result. Fixed both.
Removed some needless casting in `cloudTrailGetLoggingStatus`.
Clarified error message in acceptance tests.
Removed needless option from example in docs.
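For reference, a minimal sketch of the corrected pattern, assuming the AWS
SDK for Go's cloudtrail client (the helper's exact signature here is an
assumption, not the code as committed):

```go
import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/cloudtrail"
)

// cloudTrailGetLoggingStatus now checks err before dereferencing the
// response, so a nil *GetTrailStatusOutput can no longer cause a crash.
func cloudTrailGetLoggingStatus(conn *cloudtrail.CloudTrail, id string) (bool, error) {
	res, err := conn.GetTrailStatus(&cloudtrail.GetTrailStatusInput{
		Name: aws.String(id),
	})
	if err != nil {
		return false, fmt.Errorf("error retrieving logging status of CloudTrail (%s): %s", id, err)
	}
	return *res.IsLogging, nil
}
```

On the Create side, the fix amounts to checking the previously ignored
return value, e.g. `if err := cloudTrailSetLogging(...); err != nil { return err }`.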
The default for `enable_logging`, which controls whether CloudTrail
actually logs events, was originally `false`, since that's how AWS
creates trails.
`true` is likely a better default for Terraform users.
Changed the default and updated the docs.
Changed the acceptance tests to verify new default behavior.
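The schema change itself is a one-line flip; roughly (surrounding schema
elided):

```go
"enable_logging": &schema.Schema{
	Type:     schema.TypeBool,
	Optional: true,
	// AWS itself creates trails with logging off; default it on here,
	// since that's what Terraform users will almost always want.
	Default:  true,
},
```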
Previously we assumed the existence of some default objects that most
Opsworks users have because the Opsworks console creates them by default
when a new stack is created.
However, that meant these tests wouldn't work for anyone who had either
never used Opsworks via the UI or never accepted the console's offer to
create some predefined IAM objects to use. It may also have led to odd
failures if a particular user had customized the settings of these
default objects.
Now the tests create suitable IAM roles, a policy and an instance profile
and use these when creating Opsworks stacks, avoiding any dependency on
pre-existing objects.
This fixes #3998.
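A sketch of the kind of shared fixture the tests now use; the names and
the (abridged) policy below are illustrative, not the exact configuration
added here:

```go
// Illustrative shared acceptance-test configuration. The stack under test
// can then reference these resources via interpolation instead of relying
// on console-created defaults.
var testAccAwsOpsworksStackIamConfig = `
resource "aws_iam_role" "opsworks_service" {
  name               = "opsworks_service_test"
  assume_role_policy = <<EOT
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "opsworks.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOT
}

resource "aws_iam_role_policy" "opsworks_service" {
  name   = "opsworks_service_test"
  role   = "${aws_iam_role.opsworks_service.id}"
  policy = <<EOT
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["ec2:*", "elasticloadbalancing:*", "cloudwatch:GetMetricStatistics", "iam:PassRole"],
    "Resource": ["*"]
  }]
}
EOT
}

resource "aws_iam_role" "opsworks_instance" {
  name               = "opsworks_instance_test"
  assume_role_policy = <<EOT
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOT
}

resource "aws_iam_instance_profile" "opsworks_instance" {
  name  = "opsworks_instance_test"
  roles = ["${aws_iam_role.opsworks_instance.name}"]
}
`
```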
The AWS CloudTrail resource can create trails, but AWS leaves logging on
new trails disabled by default, and Terraform has no way to enable
logging or monitor its status.
CloudTrail trails that are inactive aren't very useful, and it's a
surprise to discover they aren't logging on creation.
Added an `enable_logging` parameter to `resource_aws_cloudtrail` to enable
logging. This requires some extra API calls, which are wrapped in new
internal functions.
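A rough sketch of one of those helpers, assuming the AWS SDK for Go's
cloudtrail client (the helper's name comes from this change; its exact
signature is an assumption):

```go
import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/cloudtrail"
)

// cloudTrailSetLogging turns event capture on or off for the named trail
// using the StartLogging / StopLogging API calls.
func cloudTrailSetLogging(conn *cloudtrail.CloudTrail, enabled bool, name string) error {
	if enabled {
		_, err := conn.StartLogging(&cloudtrail.StartLoggingInput{
			Name: aws.String(name),
		})
		return err
	}
	_, err := conn.StopLogging(&cloudtrail.StopLoggingInput{
		Name: aws.String(name),
	})
	return err
}
```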
For compatibility with AWS, the default of `enable_logging` is set to
`false`.
Because `aws_security_group_rule` resources are an abstraction on top of
Security Groups, they must interact with the AWS Security Group APIs in a
pattern that often yields many parallel requests against the same
security group.
We've found that this pattern can trigger race conditions resulting in
inconsistent behavior, including:
* Rules that report as created but don't actually exist on AWS's side
* Rules that show up in AWS but don't register as being created
  locally, resulting in follow-up attempts to authorize the rule
  failing with `Duplicate` errors
Here, we introduce a per-SG mutex that must be held before any rule may
interact with the AWS APIs for its security group. This protects the
span between the `DescribeSecurityGroup` and `Authorize*` / `Revoke*`
calls, ensuring that no other rules touch the SG during that window.
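A minimal sketch of such a keyed mutex (Terraform carries a similar
helper; this standalone version and the names in it are illustrative):

```go
import "sync"

// mutexKV lazily creates one mutex per key, so that rules for unrelated
// security groups never contend with each other.
type mutexKV struct {
	lock  sync.Mutex
	store map[string]*sync.Mutex
}

func newMutexKV() *mutexKV {
	return &mutexKV{store: make(map[string]*sync.Mutex)}
}

// get returns the mutex for key, creating it on first use.
func (m *mutexKV) get(key string) *sync.Mutex {
	m.lock.Lock()
	defer m.lock.Unlock()
	mutex, ok := m.store[key]
	if !ok {
		mutex = &sync.Mutex{}
		m.store[key] = mutex
	}
	return mutex
}

func (m *mutexKV) Lock(key string)   { m.get(key).Lock() }
func (m *mutexKV) Unlock(key string) { m.get(key).Unlock() }
```

Create and Delete then bracket their work with something like
`sgMutexKV.Lock(sg_id)` / `defer sgMutexKV.Unlock(sg_id)` before the
initial `DescribeSecurityGroup` call (variable names illustrative).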
The included test exposes the race by applying a security group with lots
of rules which, based on the dependency graph, can all be handled in
parallel. Without the new locking behavior, this fails most of the time.
I've omitted the mutex from `Read`, since it is only called during the
Refresh walk when no changes are being made, meaning a bunch of parallel
`DescribeSecurityGroup` API calls should be consistent in that case.