Adds region-specific S3 bucket name validation. Currently, all regions except us-east-1 enforce a DNS-compliant naming convention, so we cannot use the standard `SchemaValidateFunc` interface to validate an S3 bucket name.
This change creates a helper function outside of the schema validation interface so we can validate S3 bucket names against both naming conventions. At a later date, when the us-east-1 region is updated to conform to the DNS-compliant naming scheme, we can refactor the `validateS3BucketName` function to fit the `SchemaValidateFunc` interface.
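A rough sketch of the shape this helper can take; the exact character rules below are illustrative, not the provider's authoritative ones:

```go
package aws

import (
	"fmt"
	"regexp"
)

// validateS3BucketName sketches the helper described above. The real rules
// are more detailed (e.g. no adjacent periods, no IP-address-shaped names).
func validateS3BucketName(value string, region string) error {
	if region != "us-east-1" {
		// DNS-compliant convention: 3-63 chars of lowercase letters,
		// numbers, periods, and hyphens.
		if !regexp.MustCompile(`^[0-9a-z.-]{3,63}$`).MatchString(value) {
			return fmt.Errorf("%q must contain only lowercase letters, numbers, periods, and hyphens and be between 3 and 63 characters long", value)
		}
		return nil
	}
	// Legacy us-east-1 convention: up to 255 chars, mixed case allowed.
	if len(value) > 255 || !regexp.MustCompile(`^[0-9a-zA-Z._-]+$`).MatchString(value) {
		return fmt.Errorf("%q must contain only letters, numbers, periods, hyphens, and underscores and be at most 255 characters long", value)
	}
	return nil
}
```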
This commit adds support for a new helper function which is used to
normalize and validate JSON strings.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
* Add normalizeJsonString and validateJsonString functions.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
* Add unit test for the normalizeJsonString helper function.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
* Fix. Remove incorrect format string.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
* Remove surplus type assertion.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
* Add unit test for the validateJsonString helper function.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
* Remove surplus whitespace.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
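Taken together, a minimal sketch of the two helpers, assuming the round-trip-through-`encoding/json` approach (the names match the commit; the bodies are illustrative):

```go
package aws

import (
	"encoding/json"
	"fmt"
)

// normalizeJsonString round-trips a JSON document through encoding/json so
// that two semantically identical strings normalize to the same bytes
// (Go marshals map keys in sorted order and strips whitespace).
func normalizeJsonString(jsonString string) (string, error) {
	var j interface{}
	if err := json.Unmarshal([]byte(jsonString), &j); err != nil {
		return jsonString, err
	}
	b, _ := json.Marshal(j)
	return string(b), nil
}

// validateJsonString has the schema.SchemaValidateFunc shape, flagging any
// value that does not parse as JSON.
func validateJsonString(v interface{}, k string) (ws []string, errors []error) {
	if _, err := normalizeJsonString(v.(string)); err != nil {
		errors = append(errors, fmt.Errorf("%q contains invalid JSON: %s", k, err))
	}
	return
}
```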
This commit adds a new "attachment"-style resource for setting the
policy of an AWS S3 bucket. This is desirable so that the ARN of the
bucket can be referenced in an IAM policy document.
In addition, we now suppress diffs on the (now-computed) policy in the
S3 bucket for structurally equivalent policies, which prevents flapping
because of whitespace and map ordering changes made by the S3 endpoint.
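One way to implement that suppression, sketched under the assumption that "structurally equivalent" means equal after parsing (`encoding/json` decodes objects into maps, so whitespace and key ordering drop out); the helper name and placement are illustrative:

```go
package aws

import (
	"encoding/json"
	"reflect"
)

// jsonPoliciesEquivalent reports whether two policy documents are
// structurally equal, ignoring whitespace and map ordering.
func jsonPoliciesEquivalent(a, b string) bool {
	var ja, jb interface{}
	if err := json.Unmarshal([]byte(a), &ja); err != nil {
		return false
	}
	if err := json.Unmarshal([]byte(b), &jb); err != nil {
		return false
	}
	return reflect.DeepEqual(ja, jb)
}
```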
* provider/aws: Add errcheck to Makefile, error on unchecked errors
* more exceptions
* updates for errcheck to pass
* reformat and split out the ignore statements
* narrow down ignores
* fix typo, only ignore Close and Write, instead of close or write
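For illustration, with `Close` and `Write` on the ignore list, passing errcheck means handling every other error while those two calls may remain unchecked (a sketch, not a line from the actual diff):

```go
package fileutil

import "os"

// writeMarker illustrates the errcheck rules described above.
func writeMarker(path string) error {
	f, err := os.Create(path) // must be checked: Create is not ignored
	if err != nil {
		return err
	}
	f.Write([]byte("ok")) // permitted unchecked: Write is on the ignore list
	f.Close()             // permitted unchecked: Close is on the ignore list
	return nil
}
```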
This commit fixes an issue where CORS rules would not be read, and thus not refreshed
correctly, should a change be introduced externally, e.g. when the CORS configuration
was edited outside of Terraform.
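A sketch of the Read-side fix with aws-sdk-go; the function name and error handling here are illustrative:

```go
package aws

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/hashicorp/terraform/helper/schema"
)

// readCorsRules sketches refreshing cors_rule from the live bucket
// configuration so out-of-band edits show up as diffs.
func readCorsRules(s3conn *s3.S3, d *schema.ResourceData) error {
	cors, err := s3conn.GetBucketCors(&s3.GetBucketCorsInput{
		Bucket: aws.String(d.Id()),
	})
	if err != nil {
		// A bucket with no CORS configuration returns an error rather than
		// an empty result; treat that as "no rules".
		cors = &s3.GetBucketCorsOutput{}
	}
	rules := make([]map[string]interface{}, 0, len(cors.CORSRules))
	for _, r := range cors.CORSRules {
		rule := map[string]interface{}{
			"allowed_headers": aws.StringValueSlice(r.AllowedHeaders),
			"allowed_methods": aws.StringValueSlice(r.AllowedMethods),
			"allowed_origins": aws.StringValueSlice(r.AllowedOrigins),
			"expose_headers":  aws.StringValueSlice(r.ExposeHeaders),
		}
		if r.MaxAgeSeconds != nil {
			rule["max_age_seconds"] = int(*r.MaxAgeSeconds)
		}
		rules = append(rules, rule)
	}
	return d.Set("cors_rule", rules)
}
```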
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
Any S3 bucket owner may wish to share data but not incur charges associated
with others accessing the data. This commit adds an optional "request_payer"
attribute to the aws_s3_bucket resource so that the owner of the S3 bucket can
specify who should bear the cost of Amazon S3 data transfer.
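Applying the attribute might look roughly like this with aws-sdk-go; the function name is illustrative, and the valid payer values are "BucketOwner" (the default) and "Requester":

```go
package aws

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/hashicorp/terraform/helper/schema"
)

// updateRequestPayer sketches pushing the request_payer attribute to S3.
func updateRequestPayer(s3conn *s3.S3, d *schema.ResourceData) error {
	_, err := s3conn.PutBucketRequestPayment(&s3.PutBucketRequestPaymentInput{
		Bucket: aws.String(d.Id()),
		RequestPaymentConfiguration: &s3.RequestPaymentConfiguration{
			Payer: aws.String(d.Get("request_payer").(string)),
		},
	})
	return err
}
```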
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
Fixes #7969
`acceleration_status` is not available in the China or US GovCloud regions.
Even querying for this will give the following:
```
Error refreshing state: 1 error(s) occurred:
2016/08/04 13:58:52 [DEBUG] plugin: waiting for all plugin processes to
complete...
* aws_s3_bucket.registry_cn: UnsupportedArgument: The request contained
* an unsupported argument.
status code: 400, request id: F74BA6AA0985B103
```
We are going to stop any Read calls for acceleration status from these
data centers.
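A sketch of the guard; the region-prefix checks below are an assumption about how the China and US-Gov partitions are detected:

```go
package aws

import (
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/hashicorp/terraform/helper/schema"
)

// readAccelerationStatus sketches skipping the unsupported API call; the
// prefix checks stand in for however restricted partitions are detected.
func readAccelerationStatus(s3conn *s3.S3, d *schema.ResourceData, region string) error {
	if strings.HasPrefix(region, "cn-") || strings.HasPrefix(region, "us-gov-") {
		return nil // unsupported in these partitions: skip the Read call
	}
	out, err := s3conn.GetBucketAccelerateConfiguration(
		&s3.GetBucketAccelerateConfigurationInput{Bucket: aws.String(d.Id())})
	if err != nil {
		return err
	}
	return d.Set("acceleration_status", out.Status)
}
```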
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSS3Bucket_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSS3Bucket_ -timeout 120m
=== RUN TestAccAWSS3Bucket_Notification
--- PASS: TestAccAWSS3Bucket_Notification (409.46s)
=== RUN TestAccAWSS3Bucket_NotificationWithoutFilter
--- PASS: TestAccAWSS3Bucket_NotificationWithoutFilter (166.84s)
=== RUN TestAccAWSS3Bucket_basic
--- PASS: TestAccAWSS3Bucket_basic (133.48s)
=== RUN TestAccAWSS3Bucket_acceleration
--- PASS: TestAccAWSS3Bucket_acceleration (282.06s)
=== RUN TestAccAWSS3Bucket_Policy
--- PASS: TestAccAWSS3Bucket_Policy (332.14s)
=== RUN TestAccAWSS3Bucket_UpdateAcl
--- PASS: TestAccAWSS3Bucket_UpdateAcl (225.96s)
=== RUN TestAccAWSS3Bucket_Website_Simple
--- PASS: TestAccAWSS3Bucket_Website_Simple (358.15s)
=== RUN TestAccAWSS3Bucket_WebsiteRedirect
--- PASS: TestAccAWSS3Bucket_WebsiteRedirect (380.38s)
=== RUN TestAccAWSS3Bucket_WebsiteRoutingRules
--- PASS: TestAccAWSS3Bucket_WebsiteRoutingRules (258.29s)
=== RUN TestAccAWSS3Bucket_shouldFailNotFound
--- PASS: TestAccAWSS3Bucket_shouldFailNotFound (92.24s)
=== RUN TestAccAWSS3Bucket_Versioning
--- PASS: TestAccAWSS3Bucket_Versioning (654.19s)
=== RUN TestAccAWSS3Bucket_Cors
--- PASS: TestAccAWSS3Bucket_Cors (143.58s)
=== RUN TestAccAWSS3Bucket_Logging
--- PASS: TestAccAWSS3Bucket_Logging (249.79s)
=== RUN TestAccAWSS3Bucket_Lifecycle
--- PASS: TestAccAWSS3Bucket_Lifecycle (259.87s)
PASS
ok      github.com/hashicorp/terraform/builtin/providers/aws   3946.464s
```
Thanks to @kwilczynski and @radeksimko for the research on how to handle the generic
errors here.
Running these over a 4G tethering connection has been painful :)
The S3 API has two parameters that can be passed to it (HostName
and Protocol) for the RedirectAllRequestsTo functionality.
HostName is somewhat poorly named, because it need not be only a
hostname (it can contain a path, too).
The Terraform code for this was treating the API as the parameter
name suggests and was truncating out any path that was passed.
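A sketch of the fix, preserving the path component when a scheme is present (the function name is illustrative):

```go
package aws

import (
	"net/url"
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// buildRedirectAllRequestsTo keeps both host and path in HostName instead of
// truncating to url.Parse's Host field alone.
func buildRedirectAllRequestsTo(target string) (*s3.RedirectAllRequestsTo, error) {
	redirect := &s3.RedirectAllRequestsTo{}
	if strings.Contains(target, "://") {
		u, err := url.Parse(target)
		if err != nil {
			return nil, err
		}
		redirect.Protocol = aws.String(u.Scheme)
		redirect.HostName = aws.String(u.Host + u.Path) // path preserved
		return redirect, nil
	}
	redirect.HostName = aws.String(target)
	return redirect, nil
}
```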
* Update website_endpoint_url_test.go
Allow ap-south-1 (Mumbai) as a valid region
* Update hosted_zones.go
Allow ap-south-1 (Mumbai) as a valid region
* Update website_endpoint_url_test.go
reformatting
* Update hosted_zones.go
reformatting
* Update resource_aws_s3_bucket.go
making changes for the ap-south-1 (Mumbai) region
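The change in hosted_zones.go amounts to registering the region in the lookup table; the map name here is illustrative, and the zone ID should be verified against AWS's published list of S3 website endpoint hosted zones:

```go
// s3HostedZoneIDs maps regions to the Route 53 hosted zone ID of their S3
// website endpoint (illustrative name; verify the ID against AWS docs).
var s3HostedZoneIDs = map[string]string{
	// ...existing regions...
	"ap-south-1": "Z11RGJOFQNVJUP",
}
```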
Change the `RetryFunc` from a plain `error` return type to a
specialized `RetryError`, which must decide whether it is
retryable or not.
Add `RetryableError` / `NonRetryableError` factory functions that
callers are meant to use to build up these errors.
This makes it eminently clear whether or not a given error is
retryable from inside the client code.
The goal here is to _not_ change any behavior, but simply to reflect the
existing behavior with the new, clearer API.
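A hedged usage sketch of the new API (the error code checked here is illustrative):

```go
package aws

import (
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/hashicorp/terraform/helper/resource"
)

// deleteBucketWithRetry sketches a caller of the new RetryError API.
func deleteBucketWithRetry(s3conn *s3.S3, bucket string) error {
	return resource.Retry(5*time.Minute, func() *resource.RetryError {
		_, err := s3conn.DeleteBucket(&s3.DeleteBucketInput{
			Bucket: aws.String(bucket),
		})
		if err == nil {
			return nil
		}
		if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "OperationAborted" {
			// Transient from our perspective: tell Retry to keep trying.
			return resource.RetryableError(err)
		}
		// Everything else fails immediately.
		return resource.NonRetryableError(err)
	})
}
```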
The AWS provider was not checking whether `DeleteMarkers` were left in the S3
bucket, causing s3.DeleteObjectsInput to send empty XML, which resulted in a
400 error with a MalformedXML message.
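A sketch of the fix: gather object versions and delete markers together, and skip DeleteObjects entirely when there is nothing to delete, so an empty XML body is never sent (function name illustrative; pagination omitted for brevity):

```go
package aws

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// deleteAllVersions removes every object version and delete marker, calling
// DeleteObjects only when there is actually something to delete.
func deleteAllVersions(s3conn *s3.S3, bucket string) error {
	out, err := s3conn.ListObjectVersions(&s3.ListObjectVersionsInput{
		Bucket: aws.String(bucket),
	})
	if err != nil {
		return err
	}
	objects := make([]*s3.ObjectIdentifier, 0, len(out.Versions)+len(out.DeleteMarkers))
	for _, v := range out.Versions {
		objects = append(objects, &s3.ObjectIdentifier{Key: v.Key, VersionId: v.VersionId})
	}
	for _, m := range out.DeleteMarkers {
		objects = append(objects, &s3.ObjectIdentifier{Key: m.Key, VersionId: m.VersionId})
	}
	if len(objects) == 0 {
		return nil // nothing left; skip the request so no empty XML is sent
	}
	_, err = s3conn.DeleteObjects(&s3.DeleteObjectsInput{
		Bucket: aws.String(bucket),
		Delete: &s3.Delete{Objects: objects},
	})
	return err
}
```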