* Fix crash when reading VPC Peering Connection options.
This resolves the issue introduced in #8310.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
* Do not de-reference values when using Set().
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
* provider/aws: Update VPC Peering connect accept/request attributes
* change from type list to type set
* provider/aws: Update VPC Peering accept/request options, tests
* errwrap some things
* provider/aws: Change Spot Fleet Request to allow a combination of
subnet_id and availability_zone
Also added a complete set of tests that reflect all of the use cases
that Amazon documents:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet-examples.html
It is important to note here that Terraform will be suggesting that
users create multiple launch configurations rather than AWS's approach of
combining values into CSV-based parameters. This will ensure that we are
able to enforce the correct state.
Also note that `associate_public_ip_address` now defaults to `false` - a migration has been
included in this PR to migrate users of this functionality. This needs
to be noted in the changelog. The last part of the changed functionality
here is waiting for the state of the request to become `active`. Until
we reach that state, we cannot guarantee that Amazon has accepted the
request; it could also have failed validation.
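As a rough illustration of the multiple launch configuration approach described above (expressed as `launch_specification` blocks), a request combining an availability zone and a subnet might look like the sketch below; the AMI, subnet and IAM role values are placeholders.
```
resource "aws_spot_fleet_request" "example" {
  iam_fleet_role  = "arn:aws:iam::123456789012:role/fleet-role" # placeholder ARN
  spot_price      = "0.03"
  target_capacity = 4
  valid_until     = "2019-11-04T20:44:20Z"

  # One launch specification pinned to an availability zone...
  launch_specification {
    instance_type     = "m3.medium"
    ami               = "ami-d06a90b0" # placeholder AMI
    availability_zone = "us-west-2a"
  }

  # ...and another pinned to a subnet, rather than combining the values
  # into CSV-based parameters.
  launch_specification {
    instance_type = "m3.medium"
    ami           = "ami-d06a90b0"    # placeholder AMI
    subnet_id     = "subnet-123456"   # placeholder subnet
  }
}
```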
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSSpotFleetRequest_'
==> Checking that code complies with gofmt requirements...
/Users/stacko/Code/go/bin/stringer
go generate $(go list ./... | grep -v /terraform/vendor/)
2016/08/22 15:44:21 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v
-run=TestAccAWSSpotFleetRequest_ -timeout 120m
=== RUN TestAccAWSSpotFleetRequest_changePriceForcesNewRequest
--- PASS: TestAccAWSSpotFleetRequest_changePriceForcesNewRequest (133.90s)
=== RUN TestAccAWSSpotFleetRequest_lowestPriceAzOrSubnetInRegion
--- PASS: TestAccAWSSpotFleetRequest_lowestPriceAzOrSubnetInRegion (76.67s)
=== RUN TestAccAWSSpotFleetRequest_lowestPriceAzInGivenList
--- PASS: TestAccAWSSpotFleetRequest_lowestPriceAzInGivenList (75.22s)
=== RUN TestAccAWSSpotFleetRequest_lowestPriceSubnetInGivenList
--- PASS: TestAccAWSSpotFleetRequest_lowestPriceSubnetInGivenList (96.95s)
=== RUN TestAccAWSSpotFleetRequest_multipleInstanceTypesInSameAz
--- PASS: TestAccAWSSpotFleetRequest_multipleInstanceTypesInSameAz (74.44s)
=== RUN TestAccAWSSpotFleetRequest_multipleInstanceTypesInSameSubnet
--- PASS: TestAccAWSSpotFleetRequest_multipleInstanceTypesInSameSubnet (97.82s)
=== RUN TestAccAWSSpotFleetRequest_overriddingSpotPrice
--- PASS: TestAccAWSSpotFleetRequest_overriddingSpotPrice (76.22s)
=== RUN TestAccAWSSpotFleetRequest_diversifiedAllocation
--- PASS: TestAccAWSSpotFleetRequest_diversifiedAllocation (79.81s)
=== RUN TestAccAWSSpotFleetRequest_withWeightedCapacity
--- PASS: TestAccAWSSpotFleetRequest_withWeightedCapacity (77.15s)
=== RUN TestAccAWSSpotFleetRequest_CannotUseEmptyKeyName
--- PASS: TestAccAWSSpotFleetRequest_CannotUseEmptyKeyName (0.00s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 788.184s
```
* Update resource_aws_spot_fleet_request.go
Replication Groups
In order to be able to restore a named snapshot as an ElastiCache Cluster
or a Replication Group, the `snapshot_name` parameter needs to be
passed. Changing the `snapshot_name` will force a new resource to be
created.
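A minimal sketch of restoring a cluster from a named snapshot; the snapshot and cluster names are placeholders.
```
resource "aws_elasticache_cluster" "restored" {
  cluster_id           = "tf-restored"
  engine               = "redis"
  node_type            = "cache.m1.small"
  num_cache_nodes      = 1
  port                 = 6379
  parameter_group_name = "default.redis2.8"

  # Restore the cluster from an existing snapshot; changing this forces
  # a new resource to be created.
  snapshot_name = "my-snapshot"
}
```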
resources
Fixes #8420
Adds the ability to update tags on the ALB resource as well as
supporting tags on `aws_alb_target_group`
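A rough sketch of tagging the ALB itself; the subnet and security group IDs are placeholders.
```
resource "aws_alb" "front_end" {
  name            = "front-end-alb"
  subnets         = ["subnet-abc123", "subnet-def456"] # placeholder subnets
  security_groups = ["sg-123456"]                      # placeholder security group

  # Tags can now be added and updated in place on the ALB resource.
  tags {
    Environment = "production"
  }
}
```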
```
ALB Tests:
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSALB_'
==> Checking that code complies with gofmt requirements...
/Users/stacko/Code/go/bin/stringer
go generate $(go list ./... | grep -v /terraform/vendor/)
2016/08/23 19:30:16 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSALB_ -timeout 120m
=== RUN TestAccAWSALB_basic
--- PASS: TestAccAWSALB_basic (67.18s)
=== RUN TestAccAWSALB_tags
--- PASS: TestAccAWSALB_tags (99.88s)
=== RUN TestAccAWSALB_noSecurityGroup
--- PASS: TestAccAWSALB_noSecurityGroup (62.49s)
=== RUN TestAccAWSALB_accesslogs
--- PASS: TestAccAWSALB_accesslogs (126.25s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 355.835s
```
```
ALB Target Group Tests:
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSALBTargetGroup_'
==> Checking that code complies with gofmt requirements...
/Users/stacko/Code/go/bin/stringer
go generate $(go list ./... | grep -v /terraform/vendor/)
2016/08/23 19:37:37 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSALBTargetGroup_ -timeout 120m
=== RUN TestAccAWSALBTargetGroup_basic
--- PASS: TestAccAWSALBTargetGroup_basic (47.26s)
=== RUN TestAccAWSALBTargetGroup_tags
--- PASS: TestAccAWSALBTargetGroup_tags (81.01s)
=== RUN TestAccAWSALBTargetGroup_updateHealthCheck
--- PASS: TestAccAWSALBTargetGroup_updateHealthCheck (78.74s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 207.025s
```
Renamed the local_name_filter attribute to name_regex and made it clear in the
docs that this runs locally and could have a performance impact on a large set
of AMIs returned from AWS.
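A sketch of the renamed attribute in use; the regex pattern is only an example.
```
data "aws_ami" "nat" {
  most_recent = true
  owners      = ["self"]

  # Applied client-side after the AWS API returns its results, so a broad
  # pattern against a large set of AMIs can have a performance impact.
  name_regex = "^my-nat-ami-\\d{8}$"
}
```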
`aws_elasticache_replication_group`
Fixes #8377
Now we can output the endpoint of the primary
```
resource "aws_elasticache_replication_group" "bar" {
  replication_group_id          = "tf-11111"
  replication_group_description = "test description"
  node_type                     = "cache.m1.small"
  number_cache_clusters         = 2
  port                          = 6379
  parameter_group_name          = "default.redis2.8"
  apply_immediately             = true
}

output "primary_endpoint_address" {
  value = "${aws_elasticache_replication_group.bar.primary_endpoint_address}"
}
```
This gives us:
```
% terraform apply
...................
aws_elasticache_replication_group.bar: Creation complete
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Outputs:
primary_endpoint_address = tf-11111.d5jx4z.ng.0001.use1.cache.amazonaws.com
```
This was the addition of a computed field only so the basic test still works as expected:
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSElasticacheReplicationGroup_basic'
==> Checking that code complies with gofmt requirements...
/Users/stacko/Code/go/bin/stringer
go generate $(go list ./... | grep -v /terraform/vendor/)
2016/08/22 17:11:13 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSElasticacheReplicationGroup_basic -timeout 120m
=== RUN TestAccAWSElasticacheReplicationGroup_basic
--- PASS: TestAccAWSElasticacheReplicationGroup_basic (741.71s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 741.735s
```
In cases where the filters provided by AWS against the name of an AMI are not
sufficient, allow adding a "local_name_filter", which is a regex that is used
to filter the AMIs returned by Amazon.
The Terraform documentation, rather correctly, refers to a list of options
you can pass to an Elastic Load Balancer from the AWS documentation. All but
one of these options works; 'Server Order Preference' doesn't work, because
the API refers to it as 'Server-Defined-Cipher-Order'.
Add a note to explain this, at least as a temporary solution.
Fixes #8340.
This commit adds two optional blocks called "accepter" and "requester" to the
resource allowing for setting desired VPC Peering Connection options for VPCs
that participate in the VPC peering.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
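A minimal sketch of the new blocks, assuming two aws_vpc resources named foo and bar exist in the same configuration.
```
resource "aws_vpc_peering_connection" "foo" {
  vpc_id      = "${aws_vpc.foo.id}"
  peer_vpc_id = "${aws_vpc.bar.id}"
  auto_accept = true

  # Options for the VPC that accepts the peering connection.
  accepter {
    allow_remote_vpc_dns_resolution = true
  }

  # Options for the VPC that requests the peering connection.
  requester {
    allow_remote_vpc_dns_resolution = true
  }
}
```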
This commit adds an `arn` field to the `aws_alb` and `aws_alb_target_group`
resources, in order to present a more coherent user experience to people
using resource variables in fields suffixed with "arn".
* provider/consul: first stab at adding prepared query support
* provider/consul: flatten pq resource
* provider/consul: implement updates for PQs
* provider/consul: implement PQ delete
* provider/consul: add acceptance tests for prepared queries
* provider/consul: add template support to PQs
* provider/consul: use substructures to express optional related components for PQs (see the sketch after this list)
* website: first pass at consul prepared query docs
* provider/consul: PQs support datacenter option and store_token option
* provider/consul: remove store_token on PQs for now
* provider/consul: allow specifying a separate stored_token
* website: update consul PQ docs
* website: add link to consul_prepared_query resource
* vendor: update github.com/hashicorp/consul/api
* provider/consul: handle 404s when reading prepared queries
* provider/consul: prepared query failover dcs is a list
* website: update consul PQ example usage
* website: re-order arguments for consul prepared queries
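A rough sketch of the substructure layout; the names, token and datacenters are illustrative only.
```
resource "consul_prepared_query" "myquery" {
  name         = "myquery"
  datacenter   = "dc1"
  stored_token = "wxyz"      # placeholder token
  service      = "myservice"
  near         = "_agent"
  only_passing = true
  tags         = ["active"]

  # Optional failover substructure.
  failover {
    nearest_n   = 3
    datacenters = ["dc2", "dc3"]
  }

  # Optional DNS substructure.
  dns {
    ttl = "30s"
  }
}
```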
This commit adds a resource, acceptance tests and documentation for the
Target Groups for Application Load Balancers.
This is the second in a series of commits to fully support the new
resources necessary for Application Load Balancers.
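A minimal target group sketch; the VPC ID is a placeholder.
```
resource "aws_alb_target_group" "test" {
  name     = "tf-example-alb-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = "vpc-123456" # placeholder VPC

  health_check {
    path     = "/health"
    protocol = "HTTP"
  }
}
```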
This commit adds a resource, acceptance tests and documentation for the
new Application Load Balancer (aws_alb). We chose to use the name alb
over the package name, elbv2, in order to avoid confusion.
This is the first in a series of commits to fully support the new
resources necessary for Application Load Balancers.
- Minor update to remove some possibly confusing discussion about
pre-0.7 separate binaries for plugins, and link to the internal
plugins section for more clarification of how plugins are
handled in 0.7+.
* provider/aws: Add failing ETC + notifications test
* tidy up the docs some
* provider/aws: Update ElasticTranscoder to allow empty notifications, removing notifications, etc
When you need to enable monitoring for Redshift, you need to create the
correct policy in the bucket for logging. This needs to have the
Redshift Account ID for a given region. This data source provides a
handy lookup for this
http://docs.aws.amazon.com/redshift/latest/mgmt/db-auditing.html#db-auditing-enable-logging
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSRedshiftAccountId_basic'
==> Checking that code complies with gofmt requirements...
/Users/stacko/Code/go/bin/stringer
go generate $(go list ./... | grep -v /terraform/vendor/)
2016/08/16 14:39:35 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v
-run=TestAccAWSRedshiftAccountId_basic -timeout 120m
=== RUN TestAccAWSRedshiftAccountId_basic
--- PASS: TestAccAWSRedshiftAccountId_basic (19.47s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 19.483s
The whitespace in the documentation for `aws_appautoscaling_policy` was very confusing. I believe it was due to a mix up of tabs and spaces. It's much prettier and easier to read now.
This commit adds the `state rm` command for removing an address from
state. It is the result of a rebase from pull-request #5953 which was
lost at some point during the Terraform 0.7 feature branch merges.
This data source provides access during configuration to the ID of the
AWS account for the connection to AWS. It is primarily useful for
interpolating into policy documents, for example when creating the
policy for an ELB or ALB access log bucket.
This will need revisiting and further testing once the work for
AssumeRole is integrated.
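A minimal sketch of reading the account ID back out; interpolating it into a bucket policy document works the same way.
```
data "aws_caller_identity" "current" {}

output "account_id" {
  value = "${data.aws_caller_identity.current.account_id}"
}
```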
* add dep for servicebus client from azure-sdk-for-node
* add servicebus namespaces support
* add docs for servicebus_namespaces
* add Microsoft.ServiceBus to providers list
AWS Lambda VPC config is an optional configuration and needs both subnet_ids and
security_group_ids to tie the Lambda function to a VPC. We should treat it as unset when
both subnet_ids and security_group_ids are empty, which would add better flexibility in the
creation of more useful modules, since there are no "if else" checks in Terraform. Without
this we are creating duplicate modules, one with VPC and one without VPC, resulting in
various anomalies.
An S3 Bucket owner may wish to select a different underlying storage class
for an object. This commit adds an optional "storage_class" attribute to the
aws_s3_bucket_object resource so that the owner of the S3 bucket can specify
an appropriate storage class to use when creating an object.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
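A minimal sketch of the new attribute; the bucket and object names are placeholders.
```
resource "aws_s3_bucket_object" "object" {
  bucket = "my-example-bucket" # placeholder bucket
  key    = "backups/archive.zip"
  source = "./archive.zip"

  # Store the object using a cheaper, lower-redundancy storage class.
  storage_class = "REDUCED_REDUNDANCY"
}
```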
- adds "source_uri" field
- "source_uri" expects the URI to an existing blob that you have access
to
- it can be in a different storage account, or in the Azure File service
- the docs have been updated to reflect the change
Signed-off-by: Dan Wendorf <dwendorf@pivotal.io>
* Overriding S3 endpoint - Enable specifying your own
S3 API endpoint to override the default one, under
endpoints (a sketch follows this list).
* Force S3 path style - Expose this option from the aws-sdk-go
configuration to the provider.
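A rough sketch of both options together, assuming an S3-compatible endpoint such as a local Minio instance; the credentials and endpoint are placeholders.
```
provider "aws" {
  region     = "us-east-1"
  access_key = "anaccesskey" # placeholder credentials
  secret_key = "asecretkey"

  # Use path-style addressing (https://HOST/BUCKET) instead of
  # virtual hosted-style addressing.
  s3_force_path_style = true

  endpoints {
    s3 = "http://localhost:9000" # placeholder S3-compatible endpoint
  }
}
```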
* providers/google: Add google_compute_image resource
This change introduces the google_compute_image resource, which allows
Terraform users to create a bootable VM image from a raw disk tarball
stored in Google Cloud Storage. The google_compute_image resource
may be referenced as a boot image for a google_compute_instance (a sketch follows this list).
* providers/google: Support family property in google_compute_image
* provider/google: Idiomatic checking for presence of config val
* vendor: Update Google client libraries
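A minimal sketch, assuming the raw disk tarball has already been uploaded to Google Cloud Storage; the bucket URL and names are placeholders.
```
resource "google_compute_image" "bootable" {
  name   = "my-custom-image"
  family = "my-custom-family"

  raw_disk {
    source = "https://storage.googleapis.com/my-bucket/my-disk.tar.gz" # placeholder URL
  }
}

resource "google_compute_instance" "vm" {
  name         = "image-test"
  machine_type = "n1-standard-1"
  zone         = "us-central1-a"

  # Boot from the image created above.
  disk {
    image = "${google_compute_image.bootable.name}"
  }

  network_interface {
    network = "default"
  }
}
```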
* #7013 add tls config support to consul provider
* #7013 add acceptance tests
* #7013 use GFM tables
* #7013 require one of {CONSUL_ADDRESS,CONSUL_HTTP_ADDR} when running consul acc tests
* Support setting datacenter when using consul remote state
Change-Id: I8c03f4058e9373f0de8fde7ce291ec552321cc60
* Add documentation for setting datacenter when using consul remote state
Change-Id: Ia62feea7a910a76308f0a5e7f9505c9a210e0339
* provider/aws: Re-implement api gateway parameter handling
This PR cleans up some leftovers from PR #4295, namely the parameter handling.
Now that GH-2143 is finally closed, this PR does away with the ugly
`request_parameters_in_json` and `response_parameters_in_json` hack.
* Add deprecation message and conflictsWith settings
Following @radeksimko's advice, keeping the old code around with a deprecation
warning.
This should be cleaned up in a few releases.
* provider/aws: fix missing append operation
* provider/aws: mark old parameters clearly as deprecated
* provider/aws work around #8104
following @radeksimko's lead
* provider/aws fix cnp error
An S3 Bucket owner may wish to set a canned ACL (as opposed to explicitly setting
grantees, etc.) for an object. This commit adds an optional "acl" attribute to
the aws_s3_bucket_object resource so that the owner of the S3 bucket can
specify an appropriate pre-defined ACL to use when creating an object.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
Any S3 Bucket owner may wish to share data but not incur charges associated
with others accessing the data. This commit adds an optional "request_payer"
attribute to the aws_s3_bucket resource so that the owner of the S3 bucket can
specify who should bear the cost of Amazon S3 data transfer.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
### Explanation for this change
Recently, I've been using Terraform to manage AWS API GWs with Lambda backends.
It appears that an explicit dependency is required. Not setting it would lead to this error:
```
[...] Error creating API Gateway Integration Response: NotFoundException: No integration defined for method
```
Thus, I found the thread below which exposes the problem too.
Relevant Terraform version: checked against 0.6.16
Thread issue: https://github.com/hashicorp/terraform/issues/6128
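A sketch of the explicit dependency; the resource names here are only illustrative.
```
resource "aws_api_gateway_integration_response" "example" {
  rest_api_id = "${aws_api_gateway_rest_api.example.id}"
  resource_id = "${aws_api_gateway_resource.example.id}"
  http_method = "${aws_api_gateway_method.example.http_method}"
  status_code = "200"

  # Without this explicit dependency, Terraform may try to create the
  # integration response before the integration exists, producing the
  # NotFoundException above.
  depends_on = ["aws_api_gateway_integration.example"]
}
```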
or us-gov
Fixes #7969
`acceleration_status` is not available in China or US-Gov data centers.
Even querying for this will give the following:
```
Error refreshing state: 1 error(s) occurred:
2016/08/04 13:58:52 [DEBUG] plugin: waiting for all plugin processes to
complete...
* aws_s3_bucket.registry_cn: UnsupportedArgument: The request contained
* an unsupported argument.
status code: 400, request id: F74BA6AA0985B103
```
We are going to stop any Read calls for acceleration status from these
data centers
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSS3Bucket_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSS3Bucket_
-timeout 120m
=== RUN TestAccAWSS3Bucket_Notification
--- PASS: TestAccAWSS3Bucket_Notification (409.46s)
=== RUN TestAccAWSS3Bucket_NotificationWithoutFilter
--- PASS: TestAccAWSS3Bucket_NotificationWithoutFilter (166.84s)
=== RUN TestAccAWSS3Bucket_basic
--- PASS: TestAccAWSS3Bucket_basic (133.48s)
=== RUN TestAccAWSS3Bucket_acceleration
--- PASS: TestAccAWSS3Bucket_acceleration (282.06s)
=== RUN TestAccAWSS3Bucket_Policy
--- PASS: TestAccAWSS3Bucket_Policy (332.14s)
=== RUN TestAccAWSS3Bucket_UpdateAcl
--- PASS: TestAccAWSS3Bucket_UpdateAcl (225.96s)
=== RUN TestAccAWSS3Bucket_Website_Simple
--- PASS: TestAccAWSS3Bucket_Website_Simple (358.15s)
=== RUN TestAccAWSS3Bucket_WebsiteRedirect
--- PASS: TestAccAWSS3Bucket_WebsiteRedirect (380.38s)
=== RUN TestAccAWSS3Bucket_WebsiteRoutingRules
--- PASS: TestAccAWSS3Bucket_WebsiteRoutingRules (258.29s)
=== RUN TestAccAWSS3Bucket_shouldFailNotFound
--- PASS: TestAccAWSS3Bucket_shouldFailNotFound (92.24s)
=== RUN TestAccAWSS3Bucket_Versioning
--- PASS: TestAccAWSS3Bucket_Versioning (654.19s)
=== RUN TestAccAWSS3Bucket_Cors
--- PASS: TestAccAWSS3Bucket_Cors (143.58s)
=== RUN TestAccAWSS3Bucket_Logging
--- PASS: TestAccAWSS3Bucket_Logging (249.79s)
=== RUN TestAccAWSS3Bucket_Lifecycle
--- PASS: TestAccAWSS3Bucket_Lifecycle (259.87s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws
3946.464s
```
thanks to @kwilczynski and @radeksimko for the research on how to handle the generic
errors here
Running these over a 4G tethering connection has been painful :)
* provider/google: Support static private IP addresses
The private address of an instance's network interface may now be specified.
If no value is provided, an address will be chosen by Google Compute Engine
and that value will be read into Terraform state.
* docs: GCE private static IP address information
Add firehose elasticsearch configuration documentation
Adding CRUD for elastic search as firehose destination
Updated the firehose stream documentation to add elastic search as destination example.
Adding testing for es as firehose destination
Update the test case for es
This commit adds a VPN Gateway attachment resource, along with initial tests and
documentation stubs.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
`elasticsearch_version` 2.3
Fixes #7836
This will allow ElasticSearch domains to be deployed with version 2.3 of
ElasticSearch
The other slight modifications are to stop dereferencing values before
passing to d.Set in the Read func. It is safer to pass the pointer to
d.Set and allow that to dereference if there is a value
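A minimal sketch of pinning the version; the domain name is a placeholder and other domain settings are left to their defaults.
```
resource "aws_elasticsearch_domain" "example" {
  domain_name           = "tf-test"
  elasticsearch_version = "2.3"
}
```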
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSElasticSearchDomain_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSElasticSearchDomain_ -timeout 120m
=== RUN TestAccAWSElasticSearchDomain_basic
--- PASS: TestAccAWSElasticSearchDomain_basic (1611.74s)
=== RUN TestAccAWSElasticSearchDomain_v23
--- PASS: TestAccAWSElasticSearchDomain_v23 (1898.80s)
=== RUN TestAccAWSElasticSearchDomain_complex
--- PASS: TestAccAWSElasticSearchDomain_complex (1802.44s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 5313.006s
```
Update resource_aws_elasticsearch_domain.go
* Improve influxdb provider
- reduce public funcs. We should not make things public that don't need to be public
- improve tests by verifying remote state
- add influxdb_user resource
allows you to manage influxdb users:
```
resource "influxdb_user" "admin" {
name = "administrator"
password = "super-secret"
admin = true
}
```
and also database specific grants:
```
resource "influxdb_user" "ro" {
name = "read-only"
password = "read-only"
grant {
database = "a"
privilege = "read"
}
}
```
* Grant/revoke admin access properly
* Add continuous_query resource
see
https://docs.influxdata.com/influxdb/v0.13/query_language/continuous_queries/
for the details about continuous queries:
```
resource "influxdb_database" "test" {
name = "terraform-test"
}
resource "influxdb_continuous_query" "minnie" {
name = "minnie"
database = "${influxdb_database.test.name}"
query = "SELECT min(mouse) INTO min_mouse FROM zoo GROUP BY time(30m)"
}
```
This commit allows an operator to specify the e-mail address of a service
account to use with a Google Compute Engine instance. If no service account
e-mail is provided, the default service account is used.
Closes #7985
* Add state filter to aws_availability_zones data source.
This commit adds an ability to filter Availability Zones based on state, where
by default it would only list available zones.
Be advised that this does not always work reliably for older accounts which
have been created in the pre-VPC era of EC2. These accounts tend to return
Availability Zones that are not VPC-enabled, thus creation of a custom subnet
within such an Availability Zone would result in a failure. A usage sketch
follows this commit list.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
* Update documentation for aws_availability_zones data source.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
* Do not filter on state by default.
This commit makes the state filter applicable only when set.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
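A sketch of the filter in use, assuming the zone names are exposed via the `names` attribute described elsewhere in this log.
```
data "aws_availability_zones" "available" {
  # Only applied when set; omit it to list zones regardless of state.
  state = "available"
}

output "zone_names" {
  value = "${data.aws_availability_zones.available.names}"
}
```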
* Enables copy of files within vSphere
* Can copy files between different datacenters and datastores
* Update can move uploaded or copied files between datacenters and datastores
* Preserves original functionality for backward compatibility
* Fix link to the remote state link post 0.7.x.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
* Correct "resource" to "data source".
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
As the page says:
> Over time, faces may appear from this list as contributors come and
> go.
Let's reflect the list of people working on Terraform nowadays! :)
Some spelling and wording fixes.
Added mention of deprecation warning to existing data sources as
resources.
Mention `~/.terraform.d` for old builtin plugin location
Added mention of `terraform.tfvars` and `-var-file` in map value
overrides.
This test overrides the AWS_DEFAULT_REGION parameter as the security
groups are created in us-east-1 (due to classic VPC requirements)
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSDBSecurityGroup_importBasic'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
TF_ACC=1 go test ./builtin/providers/aws -v
-run=TestAccAWSDBSecurityGroup_importBasic -timeout 120m
=== RUN TestAccAWSDBSecurityGroup_importBasic
--- PASS: TestAccAWSDBSecurityGroup_importBasic (49.46s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 49.487s
```
* Auto-detect the API version
and update the endpoint URL accordingly
* Typo fix
* Make client and resource work with the 4.X API
* Update documentation
* Fix typos
* 204 now counts as a "success" response
See
f0e76cee2c
for the change in the pdns repository.
* Add a note about a possible pitfall when defining some records
* Add ability to set Performance Mode in aws_efs_file_system.
The Elastic File System (EFS) allows for setting a Performance Mode during
creation, thus enabling anyone to choose the performance of the file system according
to their particular needs. This commit adds an optional "performance_mode"
attribute to the aws_efs_file_system resource so that an appropriate mode can be
set as needed (a sketch follows this list).
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
* Add test coverage for the ValidateFunc used.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
* Add "creation_token" and deprecate "reference_name".
Add the "creation_token" attribute so that the resource follows the API more
closely (as per the convention), thus deprecate the "reference_name" attribute.
Update tests and documentation accordingly.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
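A minimal sketch combining the new attributes; the token is a placeholder.
```
resource "aws_efs_file_system" "example" {
  # Replaces the deprecated "reference_name" attribute.
  creation_token = "my-efs-token"

  # Either "generalPurpose" (the default) or "maxIO".
  performance_mode = "generalPurpose"
}
```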
If you specify just a bare ID, then the initial application works but
subsequent applications may end up doing bad things, like:
```
-/+ aws_ebs_volume.vol_1
availability_zone: "us-east-1a" => "us-east-1a"
encrypted: "true" => "true"
iops: "" => "<computed>"
kms_key_id: "arn:aws:kms:us-east-1:123456789:key/59faf88b-0912-4cca-8b6c-bd107a6ba8c4" => "59faf88b-0912-4cca-8b6c-bd107a6ba8c4" (forces new resource)
size: "100" => "100"
snapshot_id: "" => "<computed>"
```
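The fix in practice is to always reference the key by its full ARN, as in this sketch reusing the ARN from the diff above.
```
resource "aws_ebs_volume" "vol_1" {
  availability_zone = "us-east-1a"
  size              = 100
  encrypted         = true

  # Use the full key ARN, not the bare key ID, to avoid the spurious
  # "forces new resource" diff shown above.
  kms_key_id = "arn:aws:kms:us-east-1:123456789:key/59faf88b-0912-4cca-8b6c-bd107a6ba8c4"
}
```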
Fixes #7423
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSRedshiftCluster_loggingEnabled'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
TF_ACC=1 go test ./builtin/providers/aws -v
-run=TestAccAWSRedshiftCluster_loggingEnabled -timeout 120m
=== RUN TestAccAWSRedshiftCluster_loggingEnabled
--- PASS: TestAccAWSRedshiftCluster_loggingEnabled (675.21s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 675.233s
```
* `map(key, value, ...)` - Returns a map consisting of the key/value pairs
specified as arguments. Every odd argument must be a string key, and every
even argument must have the same type as the other values specified.
Duplicate keys are not allowed. Examples:
* `map("hello", "world")`
* `map("us-east", list("a", "b", "c"), "us-west", list("b", "c", "d"))`
* provider/mysql: User Resource
This commit introduces a mysql_user resource. It includes basic
functionality of adding a user@host along with a password.
* provider/mysql: Grant Resource
This commit introduces a mysql_grant resource. It can grant a set
of privileges to a user against a whole database (see the sketch after this list).
* provider/mysql: Adding documentation for user and grant resources
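A rough sketch of the two resources together; the credentials and database name are placeholders.
```
resource "mysql_user" "app" {
  user     = "appuser"
  host     = "%"
  password = "changeme" # placeholder password
}

resource "mysql_grant" "app" {
  user       = "${mysql_user.app.user}"
  host       = "${mysql_user.app.host}"
  database   = "app_db"
  privileges = ["SELECT", "INSERT", "UPDATE"]
}
```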
Previously the consul_keys resource did double-duty as both a reader and
writer of values from the Consul key/value store, but that made its
interface rather confusing and complex, as well as having all of the other
general problems associated with read-only resources.
Here we split the functionality such that reading is done with the
consul_keys data source while writing is done with the consul_keys
resource.
The old read behavior of the resource is still supported, but it's no
longer documented (except as a deprecation note) and will generate
deprecation warnings when used.
In future it should be possible to simplify the consul_keys resource by
removing all of the read support, but that is deferred for now to give
users a chance to gracefully migrate to the new data source.
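A sketch of the read side after the split, with the data source feeding a resource; the key path and default are placeholders.
```
data "consul_keys" "app" {
  datacenter = "dc1"

  key {
    name    = "ami"
    path    = "service/app/launch_ami"
    default = "ami-1234"
  }
}

resource "aws_instance" "app" {
  # Values read from Consul are exposed under the "var" attribute.
  ami           = "${data.consul_keys.app.var.ami}"
  instance_type = "t2.micro"
}
```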
Expose the network interface ID that is created with a new instance.
This can be useful when associating an existing elastic IP to the
default interface on an instance that has multiple network interfaces.
* provider/scaleway: update api version
* provider/scaleway: expose ipv6 support, rename ip attributes
since it can be both ipv4 and ipv6, choose a more generic name.
* provider/scaleway: allow servers in different SGs
* provider/scaleway: update documentation
* provider/scaleway: Update docs with security group
* provider/scaleway: add testcase for server security groups
* provider/scaleway: make deleting of security rules more resilient
* provider/scaleway: make deletion of security group more resilient
* provider/scaleway: guard against missing server
* provider/aws: Delete access keys before deleting IAM user
* provider/aws: Put IAM key removal behind force_destroy option (a usage sketch follows this list)
* provider/aws: Move all access key deletion under force_destroy
* Add iam_user force_destroy to website
* provider/aws: Improve clarity of looping over pages in delete IAM user
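A usage sketch:
```
resource "aws_iam_user" "deployer" {
  name = "deployer"

  # Remove the user's access keys on destroy so the user itself can be
  # deleted, even when keys were created outside Terraform.
  force_destroy = true
}
```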
Sidebar:
- Rename "Azure (Resource Manager)" to "Microsoft Azure" and sort
accordingly
- Rename "Azure (Service Management)" to "Microsoft Azure (Legacy ASM)"
and sort accordingly
ARM provider docs:
- Name changes everywhere to Microsoft Azure Provider
- Mention and link to "legacy Azure Service Management Provider" in opening paragraph
- Sidebar gains link at bottom to Azure Service Management Provider
ASM provider docs:
- Name changes everywhere to Azure Service Management Provider
- Sidebar gains link at bottom to Microsoft Azure Provider
- Every page gets a header with the following
- "NOTE: The Azure Service Management provider is no longer being actively developed by HashiCorp employees. It continues to be supported by the community. We recommend using the Azure Resource Manager based [Microsoft Azure Provider] instead if possible."
* add opsworks permission resource
* add docs
* remove permission from state if the permission object could not be found
* remove nil validate function. validation is done in schema.Resource.
* add id to the list of exported values
* range over permissions to check that we have found the correct one
* removed comment
* removed set id
* fix unknown region us-east-1c
* add user_profile resource
* add docs
* add default value
* provider/aws: Support kms_key_id for `aws_rds_cluster`
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSRDSCluster_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSRDSCluster_
-timeout 120m
=== RUN TestAccAWSRDSCluster_basic
--- PASS: TestAccAWSRDSCluster_basic (127.57s)
=== RUN TestAccAWSRDSCluster_kmsKey
--- PASS: TestAccAWSRDSCluster_kmsKey (323.72s)
=== RUN TestAccAWSRDSCluster_encrypted
--- PASS: TestAccAWSRDSCluster_encrypted (173.25s)
=== RUN TestAccAWSRDSCluster_backupsUpdate
--- PASS: TestAccAWSRDSCluster_backupsUpdate (264.07s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 888.638s
```
* provider/aws: Add KMS Key ID to `aws_rds_cluster_instance`
Fixes #7299 where it was found that computer_name is not optional (as
the MSDN documentation states).
```
make testacc TEST=./builtin/providers/azurerm TESTARGS='-run=TestAccAzureRMVirtualMachine_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /vendor/)
TF_ACC=1 go test ./builtin/providers/azurerm -v -run=TestAccAzureRMVirtualMachine_ -timeout 120m
=== RUN TestAccAzureRMVirtualMachine_basicLinuxMachine
--- PASS: TestAccAzureRMVirtualMachine_basicLinuxMachine (403.53s)
=== RUN TestAccAzureRMVirtualMachine_tags
--- PASS: TestAccAzureRMVirtualMachine_tags (488.46s)
=== RUN TestAccAzureRMVirtualMachine_updateMachineSize
--- PASS: TestAccAzureRMVirtualMachine_updateMachineSize (601.82s)
=== RUN TestAccAzureRMVirtualMachine_basicWindowsMachine
--- PASS: TestAccAzureRMVirtualMachine_basicWindowsMachine (646.75s)
=== RUN TestAccAzureRMVirtualMachine_windowsUnattendedConfig
--- PASS: TestAccAzureRMVirtualMachine_windowsUnattendedConfig (891.42s)
=== RUN TestAccAzureRMVirtualMachine_winRMConfig
--- PASS: TestAccAzureRMVirtualMachine_winRMConfig (768.73s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm 3800.734s
```
Allow lists and maps within the list interpolation function via variable
interpolation. Since this requires setting the variadic type to TypeAny,
we check for non-heterogeneous lists in the callback.
* docs/digitalocean: Adding an import section to the bottom of the DO
importable resources
* docs/azurerm: Adding the Import sections for the AzureRM Importable resources
* docs/aws: Adding the import sections to the AWS provider pages
The list() interpolation function provides a way to add support for list
literals (of strings) to HIL without having to invent new syntax for it
and modify the HIL parser.
It presents as a function, thus:
- list() -> []
- list("a") -> ["a"]
- list("a", "b") -> ["a", "b"]
Thanks to @wr0ngway for the idea of this approach, fixes #7460.
* Add scaleway provider
this PR allows the entire scaleway stack to be managed with terraform
example usage looks like this:
```
provider "scaleway" {
api_key = "snap"
organization = "snip"
}
resource "scaleway_ip" "base" {
server = "${scaleway_server.base.id}"
}
resource "scaleway_server" "base" {
name = "test"
# ubuntu 14.04
image = "aecaed73-51a5-4439-a127-6d8229847145"
type = "C2S"
}
resource "scaleway_volume" "test" {
name = "test"
size_in_gb = 20
type = "l_ssd"
}
resource "scaleway_volume_attachment" "test" {
server = "${scaleway_server.base.id}"
volume = "${scaleway_volume.test.id}"
}
resource "scaleway_security_group" "base" {
name = "public"
description = "public gateway"
}
resource "scaleway_security_group_rule" "http-ingress" {
security_group = "${scaleway_security_group.base.id}"
action = "accept"
direction = "inbound"
ip_range = "0.0.0.0/0"
protocol = "TCP"
port = 80
}
resource "scaleway_security_group_rule" "http-egress" {
security_group = "${scaleway_security_group.base.id}"
action = "accept"
direction = "outbound"
ip_range = "0.0.0.0/0"
protocol = "TCP"
port = 80
}
```
Note that volume attachments require the server to be stopped, which can lead to
downtime if you attach new volumes to servers that are already in use.
* Update IP read to handle 404 gracefully
* Read back resource on update
* Ensure IP detachment works as expected
Sadly this is not part of the official scaleway api just yet
* Adjust detachIP helper
based on feedback from @QuentinPerez in
https://github.com/scaleway/scaleway-cli/pull/378
* Cleanup documentation
* Rename api_key to access_key
following @stack72 suggestion and rename the provider api_key for more clarity
* Make tests less chatty by using custom logger
We cannot use the "id" field to represent policy ID, because it is used
internally by Terraform. Also change the "id" field within a statement
to "sid" for consistency with the generated JSON.
This commit removes the ability to index into complex output types using
`terraform output a_list 1` (for example), and adds a `-json` flag to
the `terraform output` command, such that the output can be piped
through a post-processor such as jq or json. This removes the need to
allow arbitrary traversal of nested structures.
It also adds tests of human readable ("normal") output with nested lists
and maps, and of the new JSON output.
The template resources don't actually need to retain any state, so they
are good candidates to be data sources.
This includes a few tweaks to the acceptance tests -- now configured to
run as unit tests -- since it seems that they have been slightly broken
for a while now. In particular, the "update" cases are no longer tested
because updating is not a meaningful operation for a data source.
When adding multiple notifications from one S3 bucket to one SQS queue, it wasn't immediately intuitive how to do this.
At first I created two `aws_s3_bucket_notification` configs and it seemed to work fine; however, the config for one event
will overwrite the other. In order to have multiple events, you can define the `queue` key twice, or use an array if you're
working with the JSON syntax. I tried to make this more clear in the documentation.
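A sketch of what the docs now show, with two `queue` blocks in a single resource; it assumes an aws_s3_bucket named bucket and an aws_sqs_queue named queue.
```
resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket = "${aws_s3_bucket.bucket.id}"

  queue {
    queue_arn     = "${aws_sqs_queue.queue.arn}"
    events        = ["s3:ObjectCreated:*"]
    filter_suffix = ".log"
  }

  # A second event for the same bucket must live in the same resource;
  # a separate aws_s3_bucket_notification would overwrite it.
  queue {
    queue_arn = "${aws_sqs_queue.queue.arn}"
    events    = ["s3:ObjectRemoved:*"]
  }
}
```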
This fixes #7157. It doesn't change the way aws_ami works.
```
make testacc TEST=./builtin/providers/aws
TESTARGS='-run=TestAccAWSAMICopy'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /vendor/)
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSAMICopy
-timeout 120m
=== RUN TestAccAWSAMICopy
--- PASS: TestAccAWSAMICopy (479.75s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 479.769s
```
Allows load balancer policies and their assignment to backend servers or listeners to be configured independently.
This gives flexibility to configure additional policies on AWS Elastic Load Balancers aside from the already provided "convenience" wrappers for cookie stickiness.
This resource (unlike the others in this provider) isn't stateful, so it
is a good candidate to be a data source.
The old resource form is preserved via the standard shim in helper/schema,
which will generate a deprecation warning but will still allow the
resource to be used.
When applying or removing 2+ security groups from an instance, an EOF
error will be triggered even though the action was successful. This
patch accounts for and ignores the EOF error. It also adds a test
case.
Security Group and Port documentation are also updated in this
commit.
* small doc update
* provider/atlas: Add docs for Artifact Data Source
* provider/atlas: Remove a test method that isn't used
* provider/atlas: Add Data Source for Atlas Artifact
* provider/atlas: Show deprecation error on atlas_artifact resource
* Added support for redshift destination to firehose delivery streams
* Small documentation fix
* go fmt after rebase
* small fixes after rebase
* provider/aws: Firehose test cleanups
* provider/aws: Update docs
* Convert Redshift and S3 blocks to TypeList
* provider/aws: Add migration for S3 Configuration in Kinesis firehose
* providers/aws: Safety first when building Redshift config options
* restore commented out log statements in the migration
* provider/aws: use MaxItems in schema
* added additional error info for when memory swap assert fails.
related to https://github.com/hashicorp/terraform/pull/7392
* updated docker_container documentation
reflect recent changes to docker provider around tests, dns options and
dns search support.
* Grammar and punctuation changes
Docker container documentation.
* Spell checking, grammar and punctuation.
Docker container documentation.
* Markdown changes to docker container documentation
* Add SES resource
* Detect ReceiptRule deletion outside of Terraform
* Handle order of rule actions
* Add position field to docs
* Fix hashes, add log messages, and other small cleanup
* Fix rebase issue
* Fix formatting
In CloudStack you can dynamically start using an ACL and once you use
an ACL you can dynamically swap ACLs. But once you're using an ACL, you
can no longer stop using an ACL without rebuilding the network.
This change makes the `ForceNew` value dynamic so that it only returns
`true` if you are reverting from using an ACL to not using an ACL
anymore, making this functionally in line with the behaviour CloudStack
offers.
This data source allows Terraform to work with externally modified state, e.g.
when you're using an ECS service which is continuously updated by your CI via the
AWS CLI.
Right now you'd have to wrap Terraform in a shell script which looks up the
current image digest, so running Terraform won't change the updated service.
Using the aws_ecs_container_definition data source you can now leverage
Terraform directly, removing the wrapper entirely.
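A rough sketch; it assumes an aws_ecs_task_definition named app and a container named "app" within it.
```
data "aws_ecs_container_definition" "app" {
  task_definition = "${aws_ecs_task_definition.app.id}"
  container_name  = "app"
}

output "current_image_digest" {
  # Reflects the image digest currently deployed, even when the service
  # was last updated outside Terraform.
  value = "${data.aws_ecs_container_definition.app.image_digest}"
}
```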
Since this resource produces a list it feels more intuitive to give its
attribute a plural name, and since the noun "instance" already means
something specific in the AWS provider that doesn't apply here we use
"names" to indicate that these are availability zone names.
Also includes updating the docs to not show a dynamic count example for
now, since we don't support that yet.
false
Fixes#7035
A known issue in Terraform means that d.GetOk() on a bool which is false
will mean it doesn't get evaluated. Therefore, when people set
publicly_accessible to false, it will never get evaluated on the Create.
We are going to make it default to false now.
The documentation wording implies that in all cases you have to manually accept peering requests. This change is intended to clarify where this is required. The documentation also distinguishes between "basic usage" and "basic usage with tags", but the expanded usage didn't actually provide much additional useful information. Expanded a bit to show the use of auto_accept since both VPCs are created by the same configuration, and to show setting the Name tag for proper display in the console.
resize
When resizing a DO droplet, you can only increase the size, not
decrease it. If you try and go down in size, the API will return this
error:
```
* digitalocean_droplet.foobar: Error resizing droplet (17090364):
POST https://api.digitalocean.com/v2/droplets/17090364/actions:
422 Size can not decrease size of Droplet's disk image
```
Since the custom_configuration_parameters can't take dots, we cannot
set 'disk.EnableUUID'. This adds a parameter for this option that gets
added to a configSpec. This option causes the VM to mount disks by UUID
on the guest OS.
* Adding debug functionality to log debug api calls
* adding debug and refactoring tests
* more tweaks with tests
* updating documentation
* more refactoring of tests
* working through factor for testing
* removing logging that displays username and password
* more work on getting tests stable
The example is referencing a non-existent variable, `allocation_id`, within the `aws_eip` resource. I believe this should actually be `aws_eip.example.id` instead of `aws_eip.example.allocation_id`.
Add the iam_arn attribute to aws_cloudfront_origin_access_identity,
which computes the IAM ARN for a certain CloudFront origin access
identity.
This is necessary because S3 modifies the bucket policy if CanonicalUser
is sent, causing spurious diffs with aws_s3_bucket resources.
This brings over the work done by @apparentlymart and @radeksimko in
PR #3124, and converts it into a data source for the AWS provider:
This commit adds a helper to construct IAM policy documents using
familiar Terraform concepts. It makes Terraform-style interpolations
easier and resolves the syntax conflict between Terraform interpolations
and IAM policy variables by changing the latter to use &{...} for its
interpolations.
Its use is completely optional and users are free to go on using literal
heredocs, file interpolations or whatever else; this just adds another
option that fits more naturally into a Terraform config.
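A minimal sketch of the helper feeding a policy resource; the bucket name is a placeholder.
```
data "aws_iam_policy_document" "example" {
  statement {
    sid     = "AllowObjectRead"
    actions = ["s3:GetObject"]

    resources = [
      "arn:aws:s3:::my-example-bucket/*", # placeholder bucket
    ]
  }
}

resource "aws_iam_policy" "example" {
  name   = "example_policy"
  policy = "${data.aws_iam_policy_document.example.json}"
}
```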
...as this will hopefully clue people in that this function will indeed
work to manipulate ipv6 networks.
Not that I completely spaced on that for quite some time, or anything
like that.
Nope, not me. Not at all.
This data source allows one to look up the most recent AMI for a specific
set of parameters, much like aws ec2 describe-images in the AWS CLI.
Basically a refresh of hashicorp/terraform#4396, in data source form.
* Add per user, role and group policy attachment
* Add docs for new IAM policy attachment resources.
* Make policy attachment resources manage only 1 entity<->policy attachment
* provider/aws: Tidy up IAM Group/User/Role attachments
This commit adds a data source with a single list, `instance` for the
schema which gets populated with the availability zones to which an
account has access.
Allow a cloud admin to target a specific tenant in which to allocate
a floating IP. This is useful when the cloud admin does not want to
delegate network privileges to the tenants or various Q&A scenarios.
resource
We had a line on the Update func that said:
```
Hash key can only be specified at creation, you cannot modify it.
```
The resource has now been changed to ForceNew on the hashkey
```
aws_dynamodb_table.demo-user-table: Refreshing state... (ID: Users)
aws_dynamodb_table.demo-user-table: Destroying...
aws_dynamodb_table.demo-user-table: Destruction complete
aws_dynamodb_table.demo-user-table: Creating...
aws_dynamodb_table.demo-user-table: Creation complete
```
Changed schema type for disks to support dynamic non-ordered disk
swapping. All Disk attributes have been made non ForceNew since
any changes should be handled in the upgrade() function.
Added 'name' attribute to disks to act as a unique
identifier for when users request for new disks. It is also used as
the filename for the new disk. Templates are considered immutable.
The openstack_networking_subnet_v2 resource was originally designed
to have DHCP disabled by default; however, a bug in the original
implementation caused DHCP to always be enabled and never be
disabled. This bug was fixed in #6052.
Recent discussions have shown that users prefer if DHCP is enabled
by default. This commit makes that change.
When stage_name is not passed to the resource
aws_api_gateway_deployment a terraform apply will fail. This is
because the stage_name is required and not optional.
* Grafana provider
* grafana_data_source resource.
Allows data sources to be created in Grafana. Supports all data source
types that are accepted in the current version of Grafana, and will
support any future ones that fit into the existing structure.
* Vendoring of apparentlymart/go-grafana-api
This is in anticipation of adding a Grafana provider plugin.
* grafana_dashboard resource
* Website documentation for the Grafana provider.
* provider/datadog Update go-datadog-api.
* provider/datadog Add support for "require_full_window" and "locked".
* provider/datadog Update tests, update doco, gofmt.
* provider/datadog Add options to update resource.
* provider/datadog "require_full_window" defaults to True, "locked" to False. Use
those initial values as the starting configuration.
* provider/datadog Update notify_audit tests to use the default value for
testAccCheckDatadogMonitorConfig and a custom value for
testAccCheckDatadogMonitorConfigUpdated.
This catches a situation where the code ignores setting the option on creation,
and the update function merely asserts the default value, versus actually changing
the value.
`azurerm_storage_account` access keys
Please note that we do NOT have the ability to manage the access keys -
we are just getting the keys that the account creates for us. To manage
the keys, you would need to use the azure portal still
As a first example of a real-world data source, the pre-existing
terraform_remote_state resource is adapted to be a data source. The
original resource is shimmed to wrap the data source for backward
compatibility.
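A sketch of the new data source form; the bucket, key and the referenced remote output are placeholders.
```
data "terraform_remote_state" "network" {
  backend = "s3"

  config {
    bucket = "my-terraform-state" # placeholder bucket
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

output "network_subnet_id" {
  # Root-module outputs of the remote state are exposed as attributes;
  # "subnet_id" is assumed to exist in the remote configuration.
  value = "${data.terraform_remote_state.network.subnet_id}"
}
```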
As requested in #4822, add support for a KMS Key ID (ARN) for DB
Instance.
```
make testacc TEST=./builtin/providers/aws
TESTARGS='-run=TestAccAWSDBInstance_kmsKey' 2>~/tf.log
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /vendor/)
TF_ACC=1 go test ./builtin/providers/aws -v
-run=TestAccAWSDBInstance_kmsKey -timeout 120m
=== RUN TestAccAWSDBInstance_basic
--- PASS: TestAccAWSDBInstance_basic (587.37s)
=== RUN TestAccAWSDBInstance_kmsKey
--- PASS: TestAccAWSDBInstance_kmsKey (625.31s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 1212.684s
```
Auto-generating an Instance Template name (or just its suffix) allows the
create_before_destroy lifecycle option to function correctly on the
Instance Template resource. This in turn allows Instance Group Managers
to be updated without being destroyed.
This introduces the terraform state list command to list the resources
within a state. This is the first of many state management commands to
come into 0.7.
This is the first command of many to come that is considered a
"plumbing" command within Terraform (see "plumbing vs porcelain":
http://git.661346.n2.nabble.com/what-are-plumbing-and-porcelain-td2190639.html).
As such, this PR also introduces a bunch of groundwork to support
plumbing commands.
The main changes:
- Main command output is changed to split "common" and "uncommon"
commands.
- mitchellh/cli is updated to support nested subcommands, since
terraform state list is a nested subcommand.
- terraform.StateFilter is introduced as a way in core to filter/search
the state files. This is very basic currently but I expect to make it
more advanced as time goes on.
- terraform state list command is introduced to list resources in a
state. This can take a series of arguments to filter this down.
Known issues, or things that aren't done in this PR on purpose:
- Unit tests for terraform state list are on the way. Unit tests for the
core changes are all there.
This changes the representation of maps in the interpolator from the
dotted flatmap form of a string variable named "var.variablename.key"
per map element to use native HIL maps instead.
This involves porting some of the interpolation functions in order to
keep the tests green, and adding support for map outputs.
There is one backwards incompatibility: as a result of an implementation
detail of maps, one could access an indexed map variable using the
syntax "${var.variablename.key}".
This is no longer possible - instead HIL native syntax -
"${var.variablename["key"]}" must be used. This was previously
documented, (though not heavily used) so it must be noted as a backward
compatibility issue for Terraform 0.7.
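A sketch of the new access form; the map values and the consuming resource are placeholders.
```
variable "amis" {
  type = "map"

  default = {
    us-east-1 = "ami-123456"
    us-west-2 = "ami-654321"
  }
}

resource "aws_instance" "web" {
  # HIL native index syntax; the old dotted form "${var.amis.us-east-1}"
  # no longer works in 0.7.
  ami           = "${var.amis["us-east-1"]}"
  instance_type = "t2.micro"
}
```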
* core: Add support for marking outputs as sensitive
This commit allows an output to be marked "sensitive", in which case the
value is redacted in the post-refresh and post-apply list of outputs.
For example, the configuration:
```
variable "input" {
default = "Hello world"
}
output "notsensitive" {
value = "${var.input}"
}
output "sensitive" {
sensitive = true
value = "${var.input}"
}
```
Would result in the output:
```
terraform apply
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
notsensitive = Hello world
sensitive = <sensitive>
```
The `terraform output` command continues to display the value as before.
Limitations: Note that sensitivity is not tracked internally, so if the
output is interpolated in another module into a resource, the value will
be displayed. The value is still present in the state.
* provider/fastly: Add support for Conditions for Fastly Services
Docs here:
- https://docs.fastly.com/guides/conditions/
Also Bump go-fastly version for domain support in S3 Logging
* New top level AWS resource aws_eip_association
* Add documentation for aws_eip_association
* Add tests for aws_eip_association
* provider/aws: Change `aws_elastic_ip_association` to have computed
parameters
The AWS API was sending more parameters than we had set. Therefore,
Terraform was showing constant changes when plans were being formed. A usage
sketch follows.
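A sketch of the new top-level resource; the AMI is a placeholder and the EIP and instance are assumed to be managed in the same configuration.
```
resource "aws_eip" "example" {
  vpc = true
}

resource "aws_instance" "web" {
  ami           = "ami-21f78e11" # placeholder AMI
  instance_type = "t1.micro"
}

resource "aws_eip_association" "eip_assoc" {
  instance_id   = "${aws_instance.web.id}"
  allocation_id = "${aws_eip.example.id}"
}
```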
* Adding private ip address reference
* adding private ip address reference
* Updating the docs.
* Removing optional attrib from private_ip_address
Removing optional attribute from private_ip_address, this element is only being used in the read.
* Selecting the first element instead of using a loop for now.
Change this to a loop when https://github.com/Azure/azure-sdk-for-go/issues/259 is fixed
Added the hosted_zone_id attribute, which aliases to the Route 53
zone ID that can be used to route Alias Resource Record Sets to.
This fixes hashicorp/terraform#6489.
adminPassword
Reports from issues showed the following errors:
```
{
"error": {
"code": "InvalidParameter",
"target": "adminPassword",
"message": "The supplied password must be
between 6-72 characters long and must
satisfy at least 3 of password complexity
requirements from the following: \r\n1)
Contains an uppercase character\r\n2)
Contains a lowercase character\r\n3)
Contains a numeric digit\r\n4) Contains a
special character."
}
}
```
This commit adds some documentation for the adminPassword complexity
requirements
ssh_keys were throwing an error similar to this:
```
* azurerm_virtual_machine.test: [DEBUG] Error setting Virtual Machine
* Storage OS Profile Linux Configuration: &errors.errorString{s:"Invalid
* address to set: []string{\"os_profile_linux_config\", \"0\",
* \"ssh_keys\"}"}
```
This was because of nesting of Set within a Set in the schema. By
changing this to a List within a Set, the schema works as expected. This
means we can now set SSH Keys on VMs. This has been tested using a
remote-exec and a connection block with the ssh key
```
azurerm_virtual_machine.test: Still creating... (2m10s elapsed)
azurerm_virtual_machine.test (remote-exec): Connected!
azurerm_virtual_machine.test (remote-exec): CONNECTED!
```
Change the AWS DB Instance to now include the DB Option Group param. Adds a test to prove that it works
Add acceptance tests for the AWS DB Option Group work. This ensures that Options can be added and updated
Documentation for the AWS DB Option resource
automated_snapshot_retention_period
The default value for `automated_snapshot_retention_period` is 1.
Therefore, it can be included in the `CreateClusterInput` without
needing to check that it is set.
This was actually stopping people from setting the value to 0 (disabling
the snapshots) as there is an issue in `d.GetOk()` evaluating 0 for int
Just wanted to call out that the CLI prompts for values for unset variables instead of an error. Guessing that was an enhancement somewhere along the line and just didn't get updated in the docs.
Here is an example that will setup the following:
+ An SSH key resource.
+ A virtual server resource that uses an existing SSH key.
+ A virtual server resource using an existing SSH key and a Terraform managed SSH key (created as "test_key_1" in the example below).
(create this as sl.tf and run terraform commands from this directory):
```hcl
provider "softlayer" {
username = ""
api_key = ""
}
resource "softlayer_ssh_key" "test_key_1" {
name = "test_key_1"
public_key = "${file(\"~/.ssh/id_rsa_test_key_1.pub\")}"
# Windows Example:
# public_key = "${file(\"C:\ssh\keys\path\id_rsa_test_key_1.pub\")}"
}
resource "softlayer_virtual_guest" "my_server_1" {
name = "my_server_1"
domain = "example.com"
ssh_keys = ["123456"]
image = "DEBIAN_7_64"
region = "ams01"
public_network_speed = 10
cpu = 1
ram = 1024
}
resource "softlayer_virtual_guest" "my_server_2" {
name = "my_server_2"
domain = "example.com"
ssh_keys = ["123456", "${softlayer_ssh_key.test_key_1.id}"]
image = "CENTOS_6_64"
region = "ams01"
public_network_speed = 10
cpu = 1
ram = 1024
}
```
You'll need to provide your SoftLayer username and API key,
so that Terraform can connect. If you don't want to put
credentials in your configuration file, you can leave them
out:
```
provider "softlayer" {}
```
...and instead set these environment variables:
- **SOFTLAYER_USERNAME**: Your SoftLayer username
- **SOFTLAYER_API_KEY**: Your API key
IPv6 support added.
We support 1 IPv6 address per interface. It seems like the vSphere SDK supports more than one, since it's provided as a list.
I can change it to support more than one address. I decided to stick with one for now since that's how the configuration parameters
had been set up by other developers.
The global gateway configuration option has been removed. Instead the user should specify a gateway on NIC level (ipv4_gateway and ipv6_gateway).
For now, the global gateway will be used as a fallback for every NIC's ipv4_gateway.
The global gateway configuration option has been marked as deprecated.
this implements two new resource types:
* openstack_networking_secgroup_v2 - create a neutron security group
* openstack_networking_secgroup_rule_v2 - create a neutron security
group rule
Unlike their nova counterparts the neutron security groups allow a user
to specify the target tenant_id allowing a cloud admin to create per
tenant resources.
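A rough sketch of the pair, including the admin-only tenant targeting; the tenant ID is a placeholder.
```
resource "openstack_networking_secgroup_v2" "secgroup" {
  name        = "secgroup_web"
  description = "Neutron security group managed by Terraform"

  # Optional: a cloud admin may create the group in another tenant.
  tenant_id = "2f2d0b1d9a6c4b8f9b3f0d6a0f1e2c3d" # placeholder tenant ID
}

resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_http" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 80
  port_range_max    = 80
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = "${openstack_networking_secgroup_v2.secgroup.id}"
}
```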
* Adding File Resource for vSphere provider
Allows for file upload to vSphere at specified location. This also
includes update for moving or renaming of file resources.
* Ensuring required parameters are provided
This commit adds several example uses of the
openstack_compute_instance_v2 resource. It also makes a clarification
about booting from volumes and image ids/names.
* command/fmt: Document -diff doesn't disable -write
As noted in hashicorp/terraform#6343, this description misleadingly
suggested that the `-diff` option disables the `-write` option.
This isn't the case and because of the default options (described in
c753390) the behaviour of `terraform fmt -diff` is actually the same as
`terraform fmt -write -list -diff`.
Replace the "instead of rewriting" description to clarify that.
Documentation in hcl/fmtcmd is corrected in hashicorp/hcl#117 but it's not
really necessary to bump the dependency version.
* command/fmt: Show flag defaults in help text
These were documented on the website but not in the `-help` text. This
should help to clarify that you need to pass `-list=false -write=false
-diff` if you only want to see diffs.
Accordingly I've replaced the word "disabled" with "always false" in the
STDIN special cases so that it matches the terminology used in the defaults
and better indicates that it is overridden.
NB: The 3x duplicated defaults and documentation makes me feel uneasy once
again. I'm not sure how to solve that, though.
* Fix headers and header anchor tags
The markdown parser already generates unique ids for header elements by
downcasing all of the words and replacing spaces with hyphens. Knowing
this, we can take the code blocks out of the headers and use the
generated ids as the link targets.
Aside: I tried to see if there was a standard way of documenting
subresources, but couldn't really find one. Both the aws_elb and
aws_instance resources seem to just say "documented below" without a
link. Then the relevant section is just a new paragraph with a list of
arguments.
* Reformat long lines
I find 80 character lines and whitespaces make the lists much easier to
read :)
* Remove extraneous <a> tags for header anchor tags
Now that middleman generates anchor tags for headers automagically, we
don't need to have blank <a> tags for anchor links to use.
* provider/fastly: Add S3 Log Streaming to Fastly Service
Adds streaming logs to an S3 bucket to Fastly Service V1
* provider/fastly: Bump go-fastly version for domain support in S3 Logging
This change adds the support for the proxied configuration option for a
record which enables origin protection for CloudFlare records.
In order to do so the golang library needed to be changed, as the old one did
not support the option and was using an outdated API version.
Open issues which ask for this (#5049, #3805).
User may specify a vmdk in their disk definition.
The options size, template, and vmdk are considered
to be mutually exclusive. User may also set whether each disk
associated with the vm should try to boot after creation.
Todo: Enforce mutual exclusivity, validate the bootable_vmdk_path
Just saying `id` is ambiguous: it could be interpreted as the resource ID, which will fail with the following error: `CertificateNotFound: Server Certificate not found for the key: <id>`. The AWS documentation states that the ssl certificate id parameter must be the ARN.
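For example, a listener should reference the certificate ARN rather than its ID (a sketch; the `aws_iam_server_certificate.example` resource is assumed to be defined elsewhere):
```hcl
resource "aws_elb" "example" {
  name               = "example-elb"
  availability_zones = ["us-east-1a"]

  listener {
    instance_port     = 8000
    instance_protocol = "http"
    lb_port           = 443
    lb_protocol       = "https"

    # Must be the certificate ARN, not the certificate ID.
    ssl_certificate_id = "${aws_iam_server_certificate.example.arn}"
  }
}
```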
Previously we linked to the whole request body definition for valid values of `runtime`.
Now we link directly to the docs for `Runtime`, which will hopefully make it easier to find the valid values.
This fixes #4570.
Official OpenStack clients commonly support specifying a client
certificate/key to enable SSL client authentication when communicating
with OpenStack services. This patch enables such feature in Terraform
with new parameters and environment variables:
* 'cert' provider parameter or OS_CERT env variable to specify client
certificate file,
* 'key' provider parameter or OS_KEY env variable to specify client
certificate private key file.
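A sketch of a provider block using the new parameters (the auth values and file paths are placeholders):
```hcl
provider "openstack" {
  auth_url    = "https://identity.example.com:5000/v2.0"
  user_name   = "demo"
  tenant_name = "demo"
  password    = "password"

  # Client certificate and key for SSL client authentication
  # (can also be supplied via OS_CERT and OS_KEY).
  cert = "/etc/ssl/certs/tf-client.crt"
  key  = "/etc/ssl/private/tf-client.key"
}
```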
It can come in handy to be able to mount ISOs programmatically.
For instance if you're developing a custom appliance (that automatically installs itself on the hard drive volume)
that you want to automatically test on every successful build (given the ISO is uploaded to the vmware datastore).
There are probably lots of other reasons for using this functionality.
* provider/aws: Fix hashing on CloudFront certificate parameters
Adding necessary type assertion to values on the viewer_certificate hash
function to ensure that certain fields are indeed not zero string
values, versus simply zero interface{} values (aka nil, as is such for a
map[string]interface{}).
* provider/aws: CloudFront complex structure error handling
Handle errors better on calls to d.Set() in the
aws_cloudfront_distribution, namely in flattenDistributionConfig(). Also
caught a bug in the setting of the origin attribute, which was incorrectly
attempting to set origins.
* provider/aws: Pass pointers to set CloudFront primitives
Change a few d.Set() for primitives in aws_cloudfront_distribution and
aws_cloudfront_origin_access_identity to use the pointer versus a
dereference.
* docs: Fix CloudFront examples formatting
Ran each example thru terraform fmt to fix indentation.
* provider/aws: Remove delete retention on CloudFront tests
To play better with Travis and not bloat the test account with disabled
distributions.
Disable-only functionality has been retained - one can enable it with
the TF_TEST_CLOUDFRONT_RETAIN environment variable.
* provider/aws: CloudFront delete waiter error handling
The call to resourceAwsCloudFrontDistributionWaitUntilDeployed() on
deletion of CloudFront distributions was not trapping error messages,
causing issues with waiter failure.
* provider/fastly: Add support for managing Headers
Adds support for managing Headers in a Fastly configuration.
* update acc test
* update website with example of adding a header block
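A sketch of a header block inside a fastly_service_v1 resource, roughly following the shape of the provider docs (domain and backend values are placeholders):
```hcl
resource "fastly_service_v1" "example" {
  name = "example-service"

  domain {
    name = "demo.example.com"
  }

  backend {
    address = "backend.example.com"
    name    = "example backend"
    port    = 80
  }

  # Remove a header from cached responses.
  header {
    name        = "remove x-amz-request-id"
    destination = "http.x-amz-request-id"
    type        = "cache"
    action      = "delete"
  }

  force_destroy = true
}
```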
* provider/aws: Default Network ACL resource
Provides a resource to manage the default AWS Network ACL. VPC Only.
* Remove subnet_id update, mark as computed value. Remove extra tag update
* refactor default rule number to be a constant
* refactor revokeRulesForType to be revokeAllNetworkACLEntries
Refactor method to delete all network ACL entries, regardless of type. The
previous implementation was under the assumption that we may only eliminate some
rule types and possibly not others, so the split was necessary.
We're now removing them all, so the logic isn't necessary
Several doc and test cleanups are here as well
* smite subnet_id, improve docs
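A sketch of adopting a VPC's default Network ACL and defining its rules (CIDR and rule values are illustrative):
```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.1.0.0/16"
}

resource "aws_default_network_acl" "default" {
  # Adopt the ACL that AWS created with the VPC rather than creating a new one.
  default_network_acl_id = "${aws_vpc.main.default_network_acl_id}"

  ingress {
    protocol   = -1
    rule_no    = 100
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 0
    to_port    = 0
  }

  egress {
    protocol   = -1
    rule_no    = 100
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 0
    to_port    = 0
  }
}
```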
According to the libpq documentation, "prefer" is the default in the
underlying library and so setting a different default in the Terraform
layer would be a breaking change for existing users of this provider
whose servers do not have TLS correctly configured.
The docs now link to the libpq manual's discussion of the security
implications of each of the ssl_mode options, so the user can understand
the limitations of the "prefer" default and can make an informed decision
about which setting is appropriate for their situation.
This introduces a provider for Cobbler. Cobbler manages bare-metal
deployments and, to some extent, virtual machines. This initial
commit supports the following resources: distros, profiles, systems,
kickstart files, and snippets.
* CloudFront implementation v3
* Update tests
* Refactor - new resource: aws_cloudfront_distribution
* Includes a complete re-write of the old aws_cloudfront_web_distribution
resource to bring it to feature parity with API and CloudFormation.
* Also includes the aws_cloudfront_origin_access_identity resource to generate
origin access identities for use with S3.
* provider/aws: CodeDeploy Deployment Group Triggers
- Create a Trigger to Send Notifications for AWS CodeDeploy Events
- Update aws_codedeploy_deployment_group docs
* Refactor validateTriggerEvent function and test
- also rename TestAccAWSCodeDeployDeploymentGroup_triggerConfiguration test
* Enhance existing Deployment Group integration tests
- by using built in resource attribute helpers
- these can get quite verbose and repetitive, so passing the resource to a function might be better
- can't use these (yet) to assert trigger configuration state
* Unit tests for conversions between aws TriggerConfig and terraform resource schema
- buildTriggerConfigs
- triggerConfigsToMap
We have a courtesy function in place allowing you to specify either a
`name` or an `ID`. But in order for the graph to be built correctly when
you recreate or taint stuff that other resources depend on, we need to
reference the `ID` and *not* the `name`.
So in order to enforce this, and by that help people to not make this
mistake unknowingly, I deprecated all the parameters this applies to and
changed the logic, docs and tests accordingly.
Added the ability to set the "privacy" of a github_team resource so all teams won't automatically set to private.
* Added the privacy argument to github_team
* Refactored parameter validation to be general for any argument
* Updated testing
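A sketch of the new argument (assuming the values accepted by GitHub's API, "secret" or "closed"):
```hcl
resource "github_team" "ops" {
  name        = "ops"
  description = "Operations team"

  # "secret" or "closed"
  privacy = "closed"
}
```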
This commit enables the ability to authenticate to OpenStack by way
of a Keystone Token. Tokens can provide a way to use Terraform and
OpenStack with an expiring, temporary credential. The token will need
to be generated out of band from Terraform.
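A sketch of provider configuration with a pre-generated token (the token value is a placeholder and would normally come from an environment variable or a separate tool):
```hcl
provider "openstack" {
  auth_url = "https://identity.example.com:5000/v2.0"

  # Token generated out of band, e.g. via the openstack CLI.
  token = "0123456789abcdef"
}
```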
On creating CloudWatch metric alarms, I need to get the HealthCheckId dimension. A reference would be useful.
```
dimensions {
"HealthCheckId" = "${aws_route53_health_check.foo.id}"
}
```
This commit adds a no_gateway attribute. When set, the subnet will
not have a gateway. This is different than not specifying a
gateway_ip since that will cause a default gateway of .1 to be used.
This behavior mirrors the OpenStack Neutron command-line tool.
Fixes #6031
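A sketch of a subnet with no gateway (the network and CIDR are illustrative):
```hcl
resource "openstack_networking_network_v2" "example" {
  name           = "example"
  admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "example" {
  network_id = "${openstack_networking_network_v2.example.id}"
  cidr       = "192.168.199.0/24"
  ip_version = 4

  # Do not assign any gateway, rather than defaulting to .1.
  no_gateway = true
}
```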
When calling AssociateAddress, the PrivateIpAddress parameter must be
used to select which private IP the EIP should associate with, otherwise
the EIP always associates with the _first_ private IP.
Without this parameter, multiple EIPs couldn't be assigned to a single
ENI. Includes covering test and docs update.
Fixes #2997
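A sketch of assigning two EIPs to one ENI by selecting the private IP for each (the subnet reference is assumed to be defined elsewhere):
```hcl
resource "aws_network_interface" "multi_ip" {
  subnet_id   = "${aws_subnet.main.id}"
  private_ips = ["10.0.0.10", "10.0.0.11"]
}

resource "aws_eip" "first" {
  vpc                       = true
  network_interface         = "${aws_network_interface.multi_ip.id}"
  associate_with_private_ip = "10.0.0.10"
}

resource "aws_eip" "second" {
  vpc                       = true
  network_interface         = "${aws_network_interface.multi_ip.id}"
  associate_with_private_ip = "10.0.0.11"
}
```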
GitHub really doesn't like when you make the H lowercase, it violates
their brand guidelines and they won't help promote anything that doesn't
use the capital H.
* update docs on required parameter for api_gateway_integration
This parameter was required for lambda integration.
Otherwise,
` Error creating API Gateway Integration: BadRequestException: Enumeration value for HttpMethod must be non-empty`
* documentation: Including the AWS type on the api_gateway_integration docs
Documentation for `aws_cloudwatch_event_target` to warn that in order to be
able to have your AWS Lambda function or SNS topic invoked by a CloudWatch
Events rule, you must set up the right permissions
using `aws_lambda_permission` or `aws_sns_topic.policy`
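A sketch of granting CloudWatch Events permission to invoke a Lambda function (the rule and function resources are assumed to be defined elsewhere):
```hcl
resource "aws_lambda_permission" "allow_cloudwatch" {
  statement_id  = "AllowExecutionFromCloudWatchEvents"
  action        = "lambda:InvokeFunction"
  function_name = "${aws_lambda_function.example.function_name}"
  principal     = "events.amazonaws.com"
  source_arn    = "${aws_cloudwatch_event_rule.example.arn}"
}
```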
It turns out all other providers use `ip_address` where the CloudStack
provider uses `ipaddress`. To make this more consistent this PR
deprecates `ipaddress` and adds `ip_address` where needed…
This new resource is an alternative to consul_keys that manages all keys
under a given prefix, rather than arbitrary single keys across the entire
store.
The key advantage of this resource over consul_keys is that it is able to
detect and delete keys that are added outside of Terraform, whereas
consul_keys is only able to detect changes to keys it is explicitly
managing.
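A sketch of the new resource, consul_key_prefix, managing everything under one prefix (values are illustrative):
```hcl
resource "consul_key_prefix" "app_config" {
  path_prefix = "apps/example/"

  subkeys {
    "db/hostname" = "db.example.com"
    "db/port"     = "5432"
  }
}
```
Any key under `apps/example/` that is not listed in `subkeys` is detected and deleted on the next apply.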
This brings across the following resources for Triton from the
joyent/triton-terraform repository, and converts them to the canonical
Terraform style, introducing Terraform-style documentation and
acceptance tests which run against the live API rather than the local
APIs:
- triton_firewall_rule
- triton_machine
- triton_key
Needed to truncate the identifier for SQL Server engines to keep it at
max 15 chars per the docs. Not a full UUID going into it, but should be
"unique enough" to not matter in practice.
Modified the basic test to use the generated value. Other tests are
still working w/ explicitly specified identifiers.
Previously this resource managed the set of keys as a whole rather than
the individual keys, and so it was unable to recognize when a particular
managed key is removed and delete just that one key from Consul.
Here this is addressed by recognizing that each key actually has its own
lifecycle, and detecting when individual keys are added and removed
without replacing the entire consul_keys instance.
Additionally this restores the behavior of updating the "value" attribute
on read, but restricts it only to blocks that already had a value so as
to avoid the quirkiness seen previously when we updated blocks that were
intended to be read-only. Updating the value is important now, because we
rely on this to detect and repair discrepancies between values stored in
Consul and values given in the configuration.
This change produces a change in the handling of the "delete" attribute.
Before it was considered only when the entire consul_keys resource was
deleted, but now it is considered also when a particular key block is
removed from within a resource.
This adds support for Elastic Beanstalk Applications, Configuration Templates,
and Environments.
This is a combined work of @catsby, @dharrisio, @Bowbaq, and @jen20
These options don't make sense when passing STDIN. `-write` will raise an
error because there is no file to write to. `-list` will always say
`<standard input>`. So disable whenever using STDIN, making the command
much simpler:
cat main.tf | terraform fmt -
So that you can do automatic formatting from an editor. You probably want to
disable the `-write` and `-list` options so that you just get the
re-formatted content, e.g.
cat main.tf | terraform fmt -write=false -list=false -
I've added a non-exported field called `input` so that we can override this
for the tests. If not specified, like in `commands.go`, then it will default
to `os.Stdin` which works on the command line.
The most common usage will be enabling the `-write` and `-list`
options so that files are updated in place and a list of any modified files
is printed. This matches the default behaviour of `go fmt` (not `gofmt`). So
enable these options by default.
This does mean that you will have to explicitly disable these if you want to
generate valid patches, e.g. `terraform fmt -diff -write=false -list=false`
This uses the `fmtcmd` package which has recently been merged into HCL. Per
the usage text, this rewrites Terraform config files to their canonical
formatting and style.
Some notes about the implementation for this initial commit:
- all of the fmtcmd options are exposed as CLI flags
- it operates on all files that have a `.tf` suffix
- it currently only operates on the working directory and doesn't accept a
directory argument, but I'll extend this in subsequent commits
- output is proxied through `cli.UiWriter` so that we write in the same way
as other commands and we can capture the output during tests
- the test uses a very simple fixture just to ensure that it is working
correctly end-to-end; the fmtcmd package has more exhaustive tests
- we have to write the fixture to a file in a temporary directory because it
will be modified and for this reason it was easier to define the fixture
contents as a raw string
These changes were added to reflect what was required to run the tutorial on my local machine. Below is my context for the above changes:
```shell
[2016-03-04T18:22:44] micperez in terraform_test
λ terraform remote config -backend-config="name=puhrez/getting-started"
missing 'access_token' configuration or ATLAS_TOKEN environmental variable
If the error message above mentions requiring or modifying configuration
options, these are set using the `-backend-config` flag. Example:
-backend-config="name=foo" to set the `name` configuration
[2016-03-04T18:23:27] micperez in terraform_test
λ export ATLAS_TOKEN=<REDACTED>
[2016-03-04T18:24:12] micperez in terraform_test
λ terraform remote config -backend-config="name=puhrez/getting-started"
Remote state management enabled
Remote state configured and pulled.
[2016-03-04T18:24:16] micperez in terraform_test
λ terraform push -name="puhrez/getting-started"
An error has occurred while archiving the module for uploading:
error detecting VCS: no VCS found for path: /Users/micperez/code/terraform_test
[2016-03-04T18:24:39] micperez in terraform_test
λ git init
Initialized empty Git repository in /Users/micperez/code/terraform_test/.git/
[2016-03-04T18:25:09] micperez in terraform_test [git:master]
λ terraform push -name="puhrez/getting-started"
An error has occurred while archiving the module for uploading:
error getting git commit: exit status 128
stdout:
stderr: fatal: bad default revision 'HEAD'
[2016-03-04T18:25:12] micperez in terraform_test [git:master]
λ git status
On branch master
Initial commit
Untracked files:
(use "git add <file>..." to include in what will be committed)
.terraform/
example.tf
terraform.tfstate.backup
nothing added to commit but untracked files present (use "git add" to track)
[2016-03-04T18:25:17] micperez in terraform_test [git:master]
λ git add example.tf
[2016-03-04T18:25:24] micperez in terraform_test [git:master]
λ git commit -m "init commit"
[master (root-commit) 34c4fa5] init commit
1 file changed, 10 insertions(+)
create mode 100644 example.tf
[2016-03-04T18:25:32] micperez in terraform_test [git:master]
λ terraform push -name="puhrez/getting-started"
Uploading Terraform configuration...
Configuration "puhrez/getting-started" uploaded! (v1)
```
The description field for a managed-zone is now a required field when using the Cloud API.
This commit defaults the field to use the text "Managed by Terraform" to minimize required boilerplate for Terraform users.
Ref: https://cloud.google.com/sdk/gcloud/reference/dns/managed-zones/create
The `description` field is easy to confuse for a nice field to
add an arbitrary comment to - and it's surprising that changes to this
field force a new resource, so we add a big note about it to point users
at tags.
Also marked all the other ForceNew attributes on this resource.
tls_self_signed_cert is really just a shorthand over tls_cert_request and
tls_locally_signed_cert, so rather than duplicating all of this
documentation and risking that it will get out of sync (since the
structure is shared in the implementation) we'll just link to the
existing docs.
This fixes #5343.
This commit fixes and cleans up instance block_device configuration.
Reverts #5354 in that `volume_size` is only required in certain
block_device configuration combinations. Therefore, the actual
attribute must be set to Optional and later checks done.
Doc updates, too.
This commit adds the ability to create instances with multiple
ephemeral disks. The ephemeral disks will appear as local block
devices to the instance.
The `volume_size` of a `block_device` was originally set to Optional,
but it's a required parameter in the OpenStack/Nova API. While it's
possible to infer a default size of the block device, making it required
more closely matches the Nova CLI client as well as providing a consistent
experience when working with multiple block_devices.
This resource is the first which makes use of the new Riviera library
(at https://github.com/jen20/riviera), so there is some additional set
up work to add the provider to the client which gets passed among
resources.
This commit adds the ability to associate a Floating IP to a specific
network. Previously, there only existed a top-level floating IP
attribute which was automatically associated with either the first
defined network or the default network (when no network block was
used).
Now floating IPs can be associated with networks beyond the first
defined network, and each network can have its own
floating IP.
Specifying the floating IP by using the top-level floating_ip
attribute and the per-network floating IP attribute is not possible.
Additionally, an `access_network` attribute has been added in order
to easily specify which network should be used for provisioning.
See the updated docs for more details and examples, but in short this
enables the `attributes` param from the Chef provisioner to accept a
raw JSON string.
Fixes #3074
Fixes #3572
It was a mistake to switch fully to `==` when activating waiting for
capacity on updates in #3947. Users that didn't set `min_elb_capacity ==
desired_capacity` and instead treated it as an actual "minimum" would
see timeouts for every create, since their target numbers would never be
reached exactly.
Here, we fix that regression by restoring the minimum waiting behavior
during creates.
In order to preserve all the stated behavior, I had to split out
different criteria for create and update, criteria which are now
exhaustively unit tested.
The set of fields that affect capacity waiting behavior has become a bit
of a mess. Next major release I'd like to rework all of these into a
more consistently named block of config. For now, just getting the
behavior correct and documented.
(Also removes all the fixed names from the ASG tests as I was hitting
collision issues running them over here.)
Fixes #4792
The character set of the database cannot be changed after creation,
so we need to be able to set the CharacterSetName on creation.
This is an option and will automagically default to
AL32UTF8.
The AWS SDK will give you an error message if you try to
apply this setting to other engines. The patch will only
report the character_set_name attribute, if CharacterSetName
is set on the instance.
Signed-off-by: Lars Bahner <lars.bahner@gmail.com>
This function returns -1 for negative numbers, 0 for 0 and 1 for positive numbers.
Useful when you need to set a value for the first resource and a different value for the rest of the resources.
Example: `${element(split(",", var.r53_failover_policy), signum(count.index))}`
This commit adds support for declaring variable types in Terraform
configuration. Historically, the type has been inferred from the default
value, defaulting to string if no default was supplied. This has caused
users to devise workarounds if they wanted to declare a map but provide
values from a .tfvars file (for example).
The new syntax adds the "type" key to variable blocks:
```
variable "i_am_a_string" {
type = "string"
}
variable "i_am_a_map" {
type = "map"
}
```
This commit does _not_ extend the type system to include bools, integers
or floats - the only two types available are maps and strings.
Validation is performed if a default value is provided in order to
ensure that the default value type matches the declared type.
In the case that a type is not declared, the old logic is used for
determining the type. This allows backwards compatibility with previous
Terraform configuration.
Adds the `TF_SKIP_REMOTE_TESTS` env var to be used in cases where the
`http.Get()` smoke test passes but the network is not able to service
the needs of the tests.
Fixes #4421
This means that terraform commands like `plan`, `apply`, `show`, and
`graph` will expand all modules by default.
While modules-as-black-boxes is still very true in the conceptual design
of modules, feedback on this behavior has consistently suggested that
users would prefer to see more verbose output by default.
The `-module-depth` flag and env var are retained to allow output to be
optionally limited / summarized by these commands.
http and https SNS topic subscription endpoints require confirmation in order to set a valid arn, otherwise
the arn would be set to "pending confirmation". If the endpoint auto confirms, the arn is set
asynchronously, but if we try to create another subscription with the same parameters then the api returns
"pending subscription" as the arn and does not create a duplicate subscription. In order to
solve this we should fetch the subscription list for the topic, identify the subscription
with the same parameters (protocol, topic_arn, endpoint) and extract the subscription arn.
The following changes were made to support http/https endpoints that auto confirm:
1. Added 3 extra parameters:
   1. endpoint_auto_confirms -> boolean that indicates if the endpoint auto confirms
   2. max_fetch_retries -> number of times to fetch the subscription list for the topic to get the subscription arn
   3. fetch_retry_delay -> delay between fetch subscription list calls, as the confirmation is done asynchronously.
   With the help of these parameters, support was added for http and https protocol based endpoints that auto confirm.
2. Update website doc appropriately
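A sketch using the parameter names described above (the endpoint URL and retry values are illustrative):
```hcl
resource "aws_sns_topic" "example" {
  name = "example-topic"
}

resource "aws_sns_topic_subscription" "http_endpoint" {
  topic_arn = "${aws_sns_topic.example.arn}"
  protocol  = "https"
  endpoint  = "https://hooks.example.com/sns"

  # The endpoint confirms the subscription on its own, so poll the
  # subscription list until the real arn shows up.
  endpoint_auto_confirms = true
  max_fetch_retries      = 3
  fetch_retry_delay      = 5
}
```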
In most cases private keys are used to produce certs and cert requests,
but there are some less-common cases where the PEM-formatted keypair is
used alone. The public_key_pem attribute supports such cases.
This also includes a public_key_openssh attribute, which allows this
resource to be used to generate temporary OpenSSH credentials, so that
e.g. a Terraform configuration could generate its own keypair to use
with the aws_key_pair resource. This has the same caveats as all cases
where we generate private keys in Terraform, but could be useful for
temporary/throwaway environments where the state either doesn't live for
long or is stored securely.
This builds on work started by Simarpreet Singh in #4441.
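A sketch of generating a throwaway keypair and feeding the OpenSSH-formatted public key to aws_key_pair:
```hcl
resource "tls_private_key" "example" {
  algorithm = "RSA"
  rsa_bits  = 2048
}

resource "aws_key_pair" "generated" {
  key_name   = "generated-key"
  public_key = "${tls_private_key.example.public_key_openssh}"
}
```
As noted above, the generated private key ends up in the state, so this is only appropriate for temporary or throwaway environments.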
This adds acceptance tests for specifying extra hosts on Docker
containers. It also renames the repeating block from `hosts` to `host`,
which reads more naturally in the schema when multiple instances of the
block are declared.
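A sketch of the renamed block (the image and addresses are illustrative):
```hcl
resource "docker_image" "ubuntu" {
  name = "ubuntu:latest"
}

resource "docker_container" "example" {
  name  = "example"
  image = "${docker_image.ubuntu.latest}"

  # Adds an entry to the container's /etc/hosts.
  host {
    host = "database.internal"
    ip   = "10.0.0.5"
  }
}
```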
This allows specification of the profile for the shared credentials
provider for AWS to be specified in Terraform configuration. This is
useful if defining providers with aliases, or if you don't want to set
environment variables. Example:
$ aws configure --profile this_is_dog
... enter keys
$ cat main.tf
provider "aws" {
profile = "this_is_dog"
# Optionally also specify the path to the credentials file
shared_credentials_file = "/tmp/credentials"
}
This is equivalent to specifying AWS_PROFILE or
AWS_SHARED_CREDENTIALS_FILE in the environment.
The comment on the first line of the code example is 82 characters long
and is cut at the 80th character when viewed online. The second line then
contains only the two letters "on" without a # in front.
The comment is displayed on two lines anyway, so it is better if it is split
into two lines of less than 80 characters each.
When spinning up from a snapshot or a read replica, these fields are
now optional:
* allocated_storage
* engine
* password
* username
Some validation logic is added to make these fields required when
starting a database from scratch.
The documentation is updated accordingly.
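A sketch of restoring from a snapshot with the now-optional fields omitted (identifier values are illustrative):
```hcl
resource "aws_db_instance" "restored" {
  identifier          = "example-restored"
  instance_class      = "db.t2.micro"
  snapshot_identifier = "rds:example-2016-06-01-00-00"

  # allocated_storage, engine, username and password are taken
  # from the snapshot and no longer need to be specified here.
}
```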
AWS does some funky stuff to handle all the variations in certificates that CAs like to hand out to users. This commit adds a note about this and details how to avoid issues. See #3837 for more information.
Only use the create_before_destroy hook in launch configurations. The autoscaling group must not use the create_before_destroy hook, because it can be updated in place (and not destroyed and re-created). Using the create_before_destroy hook in the autoscaling group also leads to unwanted cyclic dependencies.
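A sketch of the intended layout, with the hook only on the launch configuration (AMI and sizes are illustrative):
```hcl
resource "aws_launch_configuration" "example" {
  # Omit "name" so a new, uniquely named configuration can be
  # created before the old one is destroyed.
  image_id      = "ami-123456"
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "example" {
  name                 = "example-asg"
  launch_configuration = "${aws_launch_configuration.example.name}"
  availability_zones   = ["us-east-1a"]
  min_size             = 1
  max_size             = 2

  # No create_before_destroy here: the group is updated in place.
}
```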
This adds a new resource to the template provider to generate multipart
cloudinit configurations to be used with other providers/resources.
The resource has the ability to gzip and base64 encode the parts.
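A sketch of the new resource (the part content is illustrative):
```hcl
resource "template_cloudinit_config" "config" {
  gzip          = true
  base64_encode = true

  part {
    content_type = "text/cloud-config"
    content      = "packages: [nginx]"
  }
}
```
The rendered output can then be passed on to something like an instance's user_data.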
also removed the notion of tags from the redshift security group and
parameter group documentation until that has been implemented
Redshift Cluster CRUD and acceptance tests
Removing the Acceptance test for the Cluster Updates. You cannot delete
a cluster immediately after performing an operation on it. We would need
to add a lot of retry logic to the system to get this test to work
Adding some schema validation for RedShift cluster
Adding the last of the pieces of a first draft of the Redshift work - this is the documentation
Changed the aws_redshift_security_group and aws_redshift_parameter_group
to remove the tags from the schema. Tags are a little bit more
complicated than originally thought - I will revisit this later
Then added the schema, CRUD functionality and basic acceptance tests for
aws_redshift_subnet_group
Adding an acceptance test for the Update of subnet_ids in AWS Redshift Subnet Group
This action is almost exactly the same as creating a SimpleAD so we
reuse this resource and allow the user to specify the type when creating
the directory (ignoring the size if the type is MicrosoftAD).
This commit adds the openstack_lb_member_v1 resource. This resource models a
load balancing member which was previously coupled to the openstack_lb_pool_v1
resource.
By creating an actual member resource, load balancing members can now be
dynamically managed through terraform.
Added new section to end of Markdown file for OpenStack security groups,
recommending that security groups are referenced by the name attribute
instead of by the ID attribute.
- Add documentation for resources
- Rename files to match standard patterns
- Add acceptance tests for resource groups
- Add acceptance tests for vnets
- Remove ARM_CREDENTIALS file - as discussed this does not appear to be
an Azure standard, and there is scope for confusion with the
azureProfile.json file which the CLI generates. If a standard emerges
we can reconsider this.
- Validate credentials in the schema
- Remove storage testing artefacts
- Use ARM IDs as Terraform IDs
- Use autorest hooks for logging
Added acceptance test for creation in folders
Added 'baseName' as computed schema attribute for convenience
Added 'base_name' computed attribute for convenience
Added new vsphere folder resource
Fixed folder behavior
Assure test folders are properly removed
Avoid recreating search index in loop
Fix typo in vsphere.createFolder
Updated website documentation
Renamed test folders to be unique across tests
Fixes based on acc test findings; code cleanup
Added combined folder and vm acc test
Restored newline; fixed skipped acc tests
Marked 'existing_path' as computed only
Removed debug logging from tests
Changed folder read to return error
Conflicts:
builtin/providers/google/provider.go
builtin/providers/google/resource_subscription.go
builtin/providers/google/resource_subscription_test.go
The golang pubsub SDK has been released; moved topics/subscriptions to use it.
file renames and add documentation files
remove typo'd merge and type file move
add to index page as well
only need to define that once
remove topic_computed schema value
I think this was used at one point but is no longer, so it goes away.
cleanup typo
adds a couple more config values
- ackDeadlineSeconds: number of seconds to wait for an ack
- pushAttributes: attributes of a push subscription
- pushEndpoint: target for a push subscription
rearrange to better match current conventions
respond to all of the comments
Some error-checking was omitted.
Specifically, the cloudTrailSetLogging call in the Create function was
ignoring the return and cloudTrailGetLoggingStatus could crash on a
nil-dereference during the return. Fixed both.
Fixed some needless casting in cloudTrailGetLoggingStatus.
Clarified error message in acceptance tests.
Removed needless option from example in docs.
The default for `enable_logging`, which defines whether CloudTrail
actually logs events was originally written as defaulting to `false`,
since that's how AWS creates trails.
`true` is likely a better default for Terraform users.
Changed the default and updated the docs.
Changed the acceptance tests to verify new default behavior.
It's a bit confusing to have Terraform poll until instances come up on
ASG creation but not on update. This changes update to also poll if
min_size or desired_capacity are changed.
This changes the waiting behavior to wait for precisely the desired
number of instances instead of that number as a "minimum". I believe
this shouldn't have any undue side effects, and the behavior can still
be opted out of by setting `wait_for_capacity_timeout` to 0.
Building on the work of #3846, deprecate `filename` in favor of a
`template` attribute that accepts file contents instead of a path.
Required a bit of work in the interpolation code to prevent Terraform
from assuming that template interpolations were resource variables that
needed to be resolved. Leaving them as "Unknown Variables" prevents
interpolation from happening early and lets the `template_file` resource
do its thing.
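A sketch of the new attribute, loading the contents with file() rather than passing a path (the template path and the referenced instance are illustrative):
```hcl
resource "template_file" "init" {
  # Previously: filename = "init.tpl"
  template = "${file("${path.module}/init.tpl")}"

  vars {
    consul_address = "${aws_instance.consul.private_ip}"
  }
}
```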
This commit makes some quick updates to the port attributes to make them
more intuitive:
* `security_groups` to `security_group_ids`: since the port is expecting
IDs and not security group names like in other areas of OpenStack.
* `admin_state_up`: change to Boolean to match this same attribute on
other resources.
* `fixed_ips` to `fixed_ip`: while multiple `fixed_ip` blocks can be
specified, only one fixed IP can be specified in each block.
Builds on the work of #3846, shifting the Chef provisioner's
configuration options from `secret_key_path` and `validation_key_path`
over to `secret_key` and `validation_key`.
We've been moving away from config fields expecting file paths that
Terraform will load, instead prefering fields that expect file contents,
leaning on `file()` to do loading from a path.
This helps with consistency and also flexibility - since this makes it
easier to shift sensitive files into environment variables.
Here we add a little helper package to manage the transitional period
for these fields where we support both behaviors.
Also included is the first of several fields being shifted over - SSH
private keys in provisioner connection config.
We're moving to new field names so the behavior is more intuitive, so
instead of `key_file` it's `private_key` now.
Additional field shifts will be included in follow up PRs so they can be
reviewed and discussed individually.
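A sketch of the renamed connection field, with the key contents loaded via file() (AMI, key name and paths are illustrative):
```hcl
resource "aws_instance" "web" {
  ami           = "ami-123456"
  instance_type = "t2.micro"
  key_name      = "deployer"

  provisioner "remote-exec" {
    inline = ["echo connected"]

    connection {
      user = "ubuntu"

      # Previously: key_file = "~/.ssh/id_rsa"
      private_key = "${file("~/.ssh/id_rsa")}"
    }
  }
}
```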