Fixes #3635
This follows the suggestion of @apparentlymart in
https://github.com/hashicorp/terraform/issues/3635#issuecomment-151000068
to fix the issue of OpsWorks stacks always complaining about the custom
cookbooks SSH key needing to be changed.
Functional tests:
* Created a new stack and gave it an SSH key. The key was written to
OpsWorks properly.
* Ran "plan" again and terraform indicated it needed to change the SSH
key, which is expected since terraform cannot read what the existing
SSH key is.
* Removed the key from my resource and this time, "plan" did not have
any changes. The `tfstate` file indicated the SSH key was "" (empty
string).
* Changed an unrelated property of the stack. Previously this was not
working for me due to terraform attempting to change the SSH key.
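For reference, a minimal sketch of the resulting read behavior (the
function and field names here are illustrative, not the provider's exact
identifiers):

```go
// Sketch only: since the OpsWorks API never returns the custom cookbooks
// SSH key, Read mirrors that by storing the empty string. A key present in
// the config therefore always diffs (expected), while a key removed from
// the config produces no diff.
package sketch

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/opsworks"
	"github.com/hashicorp/terraform/helper/schema"
)

func flattenCustomCookbooksSource(d *schema.ResourceData, src *opsworks.Source) error {
	return d.Set("custom_cookbooks_source", []map[string]interface{}{
		{
			"type":    aws.StringValue(src.Type),
			"url":     aws.StringValue(src.Url),
			"ssh_key": "", // never readable via the API
		},
	})
}
```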
A DB replica can be of a different storage type than its source, but we
were skipping that part. Note that replicas are created with the default
storage type (or the primary's?) initially, and then modified to be of the
correct type.
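Roughly, the flow looks like the sketch below (standard RDS SDK calls; the
surrounding wiring in the provider differs):

```go
// Sketch: create the replica first, then issue a ModifyDBInstance to apply
// the requested storage type, since the create call did not apply it.
package sketch

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/rds"
)

func createReplica(conn *rds.RDS, id, source, storageType string) error {
	_, err := conn.CreateDBInstanceReadReplica(&rds.CreateDBInstanceReadReplicaInput{
		DBInstanceIdentifier:       aws.String(id),
		SourceDBInstanceIdentifier: aws.String(source),
	})
	if err != nil {
		return err
	}
	_, err = conn.ModifyDBInstance(&rds.ModifyDBInstanceInput{
		DBInstanceIdentifier: aws.String(id),
		StorageType:          aws.String(storageType),
		ApplyImmediately:     aws.Bool(true),
	})
	return err
}
```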
Added acceptance test for creation in folders
Added 'baseName' as computed schema attribute for convenience
Added 'base_name' computed attribute for convenience
Added new vsphere folder resource
Fixed folder behavior
Ensure test folders are properly removed
Avoid recreating search index in loop
Fix typo in vsphere.createFolder
Updated website documentation
Renamed test folders to be unique across tests
Fixes based on acc test findings; code cleanup
Added combined folder and vm acc test
Restored newline; fixed skipped acc tests
Marked 'existing_path' as computed only
Removed debug logging from tests
Changed folder read to return error
This commit prevents Terraform from erroring when an attempt is made
to delete a volume already in a "deleting" state. This can happen when
the volume is the root disk of an instance and the instance was
terminated.
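The guard is essentially the following (a sketch against the gophercloud
v1 blockstorage API the provider uses; error handling is simplified):

```go
// Sketch: treat a volume already in "deleting" as successfully deleted
// instead of issuing another delete call that would error.
package sketch

import (
	"github.com/rackspace/gophercloud"
	"github.com/rackspace/gophercloud/openstack/blockstorage/v1/volumes"
)

func deleteVolume(client *gophercloud.ServiceClient, id string) error {
	v, err := volumes.Get(client, id).Extract()
	if err != nil {
		return err
	}
	if v.Status == "deleting" {
		return nil // root disk of a terminated instance is already going away
	}
	return volumes.Delete(client, id).ExtractErr()
}
```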
The error string for 404s on DNS domains has (apparently)
changed, causing things to be a little sad when you modify
DNS domains from the DO console instead of terraform. This
is just the same fix as was applied to droplets around this
time last month.
While I was at it I just fixed this everywhere I saw it in the
DO provider source tree.
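The pattern applied in each spot is sketched below (the exact substring
matched is whatever the API now returns for a 404):

```go
// Sketch: a 404 on refresh means the domain was removed out-of-band (e.g.
// via the DO console), so clear the ID rather than failing.
package sketch

import (
	"fmt"
	"strings"

	"github.com/hashicorp/terraform/helper/schema"
)

func checkDomainErr(d *schema.ResourceData, err error) error {
	if err == nil {
		return nil
	}
	if strings.Contains(err.Error(), "404 Not Found") {
		d.SetId("") // gone upstream; let Terraform treat it as deleted
		return nil
	}
	return fmt.Errorf("Error retrieving domain: %s", err)
}
```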
The AWS provider was not checking whether DeleteMarkers were left in the S3
bucket, causing s3.DeleteObjectsInput to send empty XML, which resulted in
a 400 error with a MalformedXML message.
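In outline, the fix collects delete markers alongside object versions and
skips the call when there is nothing to delete (a sketch; the provider
iterates pages of ListObjectVersions):

```go
// Sketch: gather object versions and delete markers; sending DeleteObjects
// with an empty Objects list is what produced the MalformedXML 400.
package sketch

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

func emptyBucketPage(conn *s3.S3, bucket string, page *s3.ListObjectVersionsOutput) error {
	objects := make([]*s3.ObjectIdentifier, 0, len(page.Versions)+len(page.DeleteMarkers))
	for _, v := range page.Versions {
		objects = append(objects, &s3.ObjectIdentifier{Key: v.Key, VersionId: v.VersionId})
	}
	for _, m := range page.DeleteMarkers {
		objects = append(objects, &s3.ObjectIdentifier{Key: m.Key, VersionId: m.VersionId})
	}
	if len(objects) == 0 {
		return nil // nothing left to delete; an empty request would 400
	}
	_, err := conn.DeleteObjects(&s3.DeleteObjectsInput{
		Bucket: aws.String(bucket),
		Delete: &s3.Delete{Objects: objects},
	})
	return err
}
```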
This should make tests more stable going forward. Also switch out the
image used from Ubuntu to Alpine Linux to reduce required download size
during test runs.
Conflicts:
builtin/providers/google/provider.go
builtin/providers/google/resource_subscription.go
builtin/providers/google/resource_subscription_test.go
The golang pubsub SDK has been released; moved topics/subscriptions to use it.
rename files and add documentation files
remove typo'd merge and typo'd file move
add to index page as well
only need to define that once
remove topic_computed schema value
I think this was used at one point but no longer is, so away it goes.
clean up typo
adds a couple more config values (see the schema sketch after this list):
- ackDeadlineSeconds: number of seconds to wait for an ack
- pushAttributes: attributes of a push subscription
- pushEndpoint: target for a push subscription
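A hedged sketch of how these show up in the resource schema (the exact
attribute names and nesting in the provider may differ):

```go
// Sketch only: illustrative schema entries for the new config values.
package sketch

import "github.com/hashicorp/terraform/helper/schema"

var subscriptionFields = map[string]*schema.Schema{
	"ack_deadline_seconds": &schema.Schema{
		Type:     schema.TypeInt,
		Optional: true,
		ForceNew: true, // seconds to wait for an ack before redelivery
	},
	"push_attributes": &schema.Schema{
		Type:     schema.TypeMap,
		Optional: true,
		ForceNew: true, // attributes of a push subscription
	},
	"push_endpoint": &schema.Schema{
		Type:     schema.TypeString,
		Optional: true,
		ForceNew: true, // target URL for a push subscription
	},
}
```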
rearrange to better match current conventions
respond to all of the comments
PR #3896 added support for passing keys by content, but in this same PR
all references to `path.Join()` were changed to `filepath.Join()`.
There is, however, a significant difference between these two calls, and
using the latter now causes issues when running the Chef provisioner on
Windows (see issue #4039).
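The difference in one runnable example: `path.Join()` always uses forward
slashes (the slash-separated semantics needed for remote Unix paths), while
`filepath.Join()` uses the host OS separator:

```go
package main

import (
	"fmt"
	"path"
	"path/filepath"
)

func main() {
	fmt.Println(path.Join("etc", "chef", "client.pem"))     // "etc/chef/client.pem" everywhere
	fmt.Println(filepath.Join("etc", "chef", "client.pem")) // "etc\chef\client.pem" on Windows
}
```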
Some error-checking was omitted.
Specifically, the cloudTrailSetLogging call in the Create function was
ignoring the returned error, and cloudTrailGetLoggingStatus could crash on
a nil dereference during its return. Fixed both.
Fixed some needless casting in cloudTrailGetLoggingStatus.
Clarified error message in acceptance tests.
Removed needless option from example in docs.
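The status-read side of the fix, as a sketch (the function name follows
the commit; the signature and body are assumptions). The Create side is
simply checking the error that cloudTrailSetLogging returns:

```go
package sketch

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/cloudtrail"
)

func cloudTrailGetLoggingStatus(conn *cloudtrail.CloudTrail, name string) (bool, error) {
	res, err := conn.GetTrailStatus(&cloudtrail.GetTrailStatusInput{
		Name: aws.String(name),
	})
	if err != nil {
		return false, err
	}
	// Guard the pointer instead of dereferencing res.IsLogging blindly.
	return aws.BoolValue(res.IsLogging), nil
}
```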
The default for `enable_logging`, which defines whether CloudTrail
actually logs events, was originally written as defaulting to `false`,
since that's how AWS creates trails.
`true` is likely a better default for Terraform users.
Changed the default and updated the docs.
Changed the acceptance tests to verify new default behavior.
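The shape of the change, sketched (the attribute lives in
resource_aws_cloudtrail.go; details may differ):

```go
package sketch

import "github.com/hashicorp/terraform/helper/schema"

var enableLogging = &schema.Schema{
	Type:     schema.TypeBool,
	Optional: true,
	Default:  true, // was false to mirror AWS; true serves Terraform users better
}
```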
Previously we assumed the existence of some default objects that most
OpsWorks users have because the OpsWorks console creates them by default
when a new stack is created.
However, that meant that these tests wouldn't work correctly for anyone
who either had never used OpsWorks via the UI or who had never accepted
the default of having the console create some predefined IAM objects to
use. It may also have led to some weird failures if a particular user had
customized the settings for these default objects.
Now the tests create suitable IAM roles, a policy and an instance profile
and use these when creating OpsWorks stacks, avoiding any dependency
on any pre-existing objects.
This fixes #3998.
The AWS CloudTrail resource is capable of creating CloudTrail resources,
but AWS defaults the actual logging of the trails to `false`, and
Terraform has no method to enable or monitor the status of logging.
CloudTrail trails that are inactive aren't very useful, and it's a
surprise to discover they aren't logging on creation.
Added an `enable_logging` parameter to resource_aws_cloudtrail to enable
logging. This requires some extra API calls, which are wrapped in new
internal functions.
For compatibility with AWS, the default of `enable_logging` is set to
`false`.
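A hedged sketch of those wrapped calls (cloudTrailSetLogging is the
commit's name for the helper; its exact signature is an assumption):

```go
package sketch

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/cloudtrail"
)

func cloudTrailSetLogging(conn *cloudtrail.CloudTrail, name string, enabled bool) error {
	if enabled {
		_, err := conn.StartLogging(&cloudtrail.StartLoggingInput{Name: aws.String(name)})
		return err
	}
	_, err := conn.StopLogging(&cloudtrail.StopLoggingInput{Name: aws.String(name)})
	return err
}
```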
This commit adds State Change support to the LBaaS resources, which should
help with clean terminations.
It also adds an acceptance test that builds out a 2-node load balancing
service.
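The usual shape of such support, sketched with Terraform's
resource.StateChangeConf (the pending/target states and timeouts here are
assumptions):

```go
package sketch

import (
	"time"

	"github.com/hashicorp/terraform/helper/resource"
)

func waitForLBActive(refresh resource.StateRefreshFunc) error {
	stateConf := &resource.StateChangeConf{
		Pending:    []string{"PENDING_CREATE"},
		Target:     []string{"ACTIVE"}, // a plain string in older Terraform versions
		Refresh:    refresh,
		Timeout:    10 * time.Minute,
		Delay:      5 * time.Second,
		MinTimeout: 3 * time.Second,
	}
	_, err := stateConf.WaitForState()
	return err
}
```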
Because `aws_security_group_rule` resources are an abstraction on top of
Security Groups, they must interact with the AWS Security Group APIs in
a pattern that often results in lots of parallel requests interacting
with the same security group.
We've found that this pattern can trigger race conditions resulting in
inconsistent behavior, including:
* Rules that report as created but don't actually exist on AWS's side
* Rules that show up in AWS but don't register as being created
locally, resulting in follow up attempts to authorize the rule
failing w/ Duplicate errors
Here, we introduce a per-SG mutex that must be held by any security
group rule before it is allowed to interact with AWS APIs. This protects the
space between `DescribeSecurityGroup` and `Authorize*` / `Revoke*`
calls, ensuring that no other rules interact with the SG during that
span.
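The essence of the locking, sketched with a named-mutex map like
Terraform's helper/mutexkv (the key and call sites are illustrative):

```go
package sketch

import "github.com/hashicorp/terraform/helper/mutexkv"

var sgMutexKV = mutexkv.NewMutexKV()

func withSGLock(sgID string, fn func() error) error {
	// Hold the per-security-group lock across the Describe/Authorize (or
	// Revoke) span so concurrent rule resources cannot race each other.
	sgMutexKV.Lock(sgID)
	defer sgMutexKV.Unlock(sgID)
	return fn()
}
```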
The included test exposes the race by applying a security group with
lots of rules, which based on the dependency graph can all be handled in
parallel. This fails most of the time without the new locking behavior.
I've omitted the mutex from `Read`, since it is only called during the
Refresh walk when no changes are being made, meaning a bunch of parallel
`DescribeSecurityGroup` API calls should be consistent in that case.