`elasticsearch_version` 2.3
Fixes #7836
This will allow Elasticsearch domains to be deployed with version 2.3 of
Elasticsearch.
The other slight modification is to stop dereferencing values before
passing them to d.Set in the Read func. It is safer to pass the pointer to
d.Set and let it dereference the value if one is present.
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSElasticSearchDomain_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSElasticSearchDomain_ -timeout 120m
=== RUN TestAccAWSElasticSearchDomain_basic
--- PASS: TestAccAWSElasticSearchDomain_basic (1611.74s)
=== RUN TestAccAWSElasticSearchDomain_v23
--- PASS: TestAccAWSElasticSearchDomain_v23 (1898.80s)
=== RUN TestAccAWSElasticSearchDomain_complex
--- PASS: TestAccAWSElasticSearchDomain_complex (1802.44s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 5313.006s
```
Update resource_aws_elasticsearch_domain.go
* Improve influxdb provider
- reduce public funcs. We should not make things public that don't need to be public
- improve tests by verifying remote state
- add influxdb_user resource
This allows you to manage InfluxDB users:
```
resource "influxdb_user" "admin" {
  name     = "administrator"
  password = "super-secret"
  admin    = true
}
```
and also database specific grants:
```
resource "influxdb_user" "ro" {
  name     = "read-only"
  password = "read-only"

  grant {
    database  = "a"
    privilege = "read"
  }
}
```
* Grant/revoke admin access properly
* Add continuous_query resource
see
https://docs.influxdata.com/influxdb/v0.13/query_language/continuous_queries/
for the details about continuous queries:
```
resource "influxdb_database" "test" {
  name = "terraform-test"
}

resource "influxdb_continuous_query" "minnie" {
  name     = "minnie"
  database = "${influxdb_database.test.name}"
  query    = "SELECT min(mouse) INTO min_mouse FROM zoo GROUP BY time(30m)"
}
```
This commit allows an operator to specify the e-mail address of a service
account to use with a Google Compute Engine instance. If no service account
e-mail is provided, the default service account is used.
Closes #7985
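A minimal sketch of how this could look in configuration, assuming the `service_account` block on `google_compute_instance` takes an `email` argument alongside `scopes` (names and values below are illustrative):
```
resource "google_compute_instance" "worker" {
  name         = "worker-1"
  machine_type = "n1-standard-1"
  zone         = "us-central1-a"

  # disk and network_interface blocks omitted for brevity

  # Assumed shape: omit "email" to fall back to the default service account.
  service_account {
    email  = "builder@my-project.iam.gserviceaccount.com"
    scopes = ["compute-ro", "storage-ro"]
  }
}
```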
* Add state filter to aws_availability_zones data source.
This commit adds an ability to filter Availability Zones based on state, where
by default it would only list available zones.
Be advised that this does not always work reliably for older accounts which were
created in the pre-VPC era of EC2. These accounts tend to return Availability
Zones that are not VPC-enabled, so creating a custom subnet within such an
Availability Zone would result in a failure.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
* Update documentation for aws_availability_zones data source.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
* Do not filter on state by default.
This commit makes the state filter applicable only when set.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
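A short configuration sketch of the filter, assuming the new argument is named `state`:
```
# Only list Availability Zones that are currently available.
data "aws_availability_zones" "available" {
  state = "available"
}

# Omitting the argument keeps the previous, unfiltered behaviour.
data "aws_availability_zones" "all" {}
```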
* Enables copy of files within vSphere (see the sketch after this list)
* Can copy files between different datacenters and datastores
* Updates can move uploaded or copied files between datacenters and datastores
* Preserves original functionality for backward compatibility
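A rough sketch of a cross-datacenter/datastore copy, assuming `vsphere_file` gains `source_datacenter` and `source_datastore` arguments alongside the existing ones:
```
resource "vsphere_file" "ubuntu_disk_copy" {
  # Where the file currently lives.
  source_datacenter = "dc-1"
  source_datastore  = "datastore-1"
  source_file       = "/images/ubuntu.vmdk"

  # Where the copy should end up; updating these should move the file.
  datacenter       = "dc-2"
  datastore        = "datastore-2"
  destination_file = "/images/ubuntu-copy.vmdk"
}
```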
This test overrides the AWS_DEFAULT_REGION parameter as the security
groups are created in us-east-1 (due to classic VPC requirements)
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSDBSecurityGroup_importBasic'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
TF_ACC=1 go test ./builtin/providers/aws -v
-run=TestAccAWSDBSecurityGroup_importBasic -timeout 120m
=== RUN TestAccAWSDBSecurityGroup_importBasic
--- PASS: TestAccAWSDBSecurityGroup_importBasic (49.46s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 49.487s
```
* Auto-detect the API version
and update the endpoint URL accordingly
* Typo fix
* Make client and resource work with the 4.X API
* Update documentation
* Fix typos
* 204 now counts as a "success" response
See
f0e76cee2c
for the change in the pdns repository.
* Add a note about a possible pitfall when defining some records
* Add ability to set Performance Mode in aws_efs_file_system.
The Elastic File System (EFS) allows a Performance Mode to be set during
creation, enabling anyone to choose the performance of the file system according
to their particular needs. This commit adds an optional "performance_mode"
attribute to the aws_efs_file_system resource so that an appropriate mode can be
set as needed.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
* Add test coverage for the ValidateFunc used.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
* Add "creation_token" and deprecate "reference_name".
Add the "creation_token" attribute so that the resource follows the API more
closely (as per the convention), and deprecate the "reference_name" attribute.
Update tests and documentation accordingly.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
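A small sketch combining the two attributes described above (values are illustrative; EFS accepts "generalPurpose" or "maxIO"):
```
resource "aws_efs_file_system" "shared" {
  # Replaces the now-deprecated "reference_name" attribute.
  creation_token = "shared-data"

  # Optional; defaults to "generalPurpose" when unset.
  performance_mode = "maxIO"
}
```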
If you specify just a bare key ID, then the initial apply works, but
subsequent applies may end up doing bad things, like forcing a new resource:
```
-/+ aws_ebs_volume.vol_1
availability_zone: "us-east-1a" => "us-east-1a"
encrypted: "true" => "true"
iops: "" => "<computed>"
kms_key_id: "arn:aws:kms:us-east-1:123456789:key/59faf88b-0912-4cca-8b6c-bd107a6ba8c4" => "59faf88b-0912-4cca-8b6c-bd107a6ba8c4" (forces new resource)
size: "100" => "100"
snapshot_id: "" => "<computed>"
```
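A sketch of the configuration that avoids the perpetual diff, assuming the full key ARN (as read back by the API above) is what gets supplied:
```
resource "aws_ebs_volume" "vol_1" {
  availability_zone = "us-east-1a"
  size              = 100
  encrypted         = true

  # Pass the full ARN rather than the bare key ID so the configured value
  # matches what the API returns and no replacement is forced.
  kms_key_id = "arn:aws:kms:us-east-1:123456789:key/59faf88b-0912-4cca-8b6c-bd107a6ba8c4"
}
```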
Fixes #7423
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSRedshiftCluster_loggingEnabled'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
TF_ACC=1 go test ./builtin/providers/aws -v
-run=TestAccAWSRedshiftCluster_loggingEnabled -timeout 120m
=== RUN TestAccAWSRedshiftCluster_loggingEnabled
--- PASS: TestAccAWSRedshiftCluster_loggingEnabled (675.21s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 675.233s
```
* provider/mysql: User Resource
This commit introduces a mysql_user resource. It includes basic
functionality of adding a user@host along with a password.
* provider/mysql: Grant Resource
This commit introduces a mysql_grant resource. It can grant a set
of privileges to a user against a whole database.
* provider/mysql: Adding documentation for user and grant resources
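A rough sketch of the two resources together, with argument names assumed from the description above:
```
resource "mysql_user" "app" {
  user     = "appuser"
  host     = "10.0.0.%"
  password = "super-secret"
}

resource "mysql_grant" "app" {
  user       = "${mysql_user.app.user}"
  host       = "${mysql_user.app.host}"
  database   = "app"
  privileges = ["SELECT", "INSERT", "UPDATE"]
}
```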
Previously the consul_keys resource did double-duty as both a reader and
writer of values from the Consul key/value store, but that made its
interface rather confusing and complex, as well as having all of the other
general problems associated with read-only resources.
Here we split the functionality such that reading is done with the
consul_keys data source while writing is done with the consul_keys
resource.
The old read behavior of the resource is still supported, but it's no
longer documented (except as a deprecation note) and will generate
deprecation warnings when used.
In future it should be possible to simplify the consul_keys resource by
removing all of the read support, but that is deferred for now to give
users a chance to gracefully migrate to the new data source.
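A sketch of the split, assuming the data source keeps the existing `key` block shape for reads (`name`/`path`/`default`) while the resource keeps it for writes (`path`/`value`):
```
# Read-only: fetch values from the Consul key/value store.
data "consul_keys" "app" {
  key {
    name    = "ami"
    path    = "service/app/launch_ami"
    default = "ami-1234"
  }
}

# Read/write: manage a value in the Consul key/value store.
resource "consul_keys" "app" {
  key {
    path  = "service/app/elb_cname"
    value = "app.example.com"
  }
}

# Values read via the data source are referenced as
# "${data.consul_keys.app.var.ami}".
```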
Expose the network interface ID that is created with a new instance.
This can be useful when associating an existing elastic IP to the
default interface on an instance that has multiple network interfaces.
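A sketch of that association, assuming the new attribute is exported on `aws_instance` as `network_interface_id`:
```
resource "aws_instance" "app" {
  ami           = "ami-123456"
  instance_type = "t2.micro"
}

# Attach an Elastic IP to the instance's default (primary) network interface.
resource "aws_eip" "app" {
  vpc               = true
  network_interface = "${aws_instance.app.network_interface_id}"
}
```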
* provider/scaleway: update api version
* provider/scaleway: expose ipv6 support, rename ip attributes
since it can be both ipv4 and ipv6, choose a more generic name.
* provider/scaleway: allow servers in different SGs
* provider/scaleway: update documentation
* provider/scaleway: Update docs with security group
* provider/scaleway: add testcase for server security groups
* provider/scaleway: make deleting of security rules more resilient
* provider/scaleway: make deletion of security group more resilient
* provider/scaleway: guard against missing server
* provider/aws: Delete access keys before deleting IAM user
* provider/aws: Put IAM key removal behind force_destroy option
* provider/aws: Move all access key deletion under force_destroy
* Add iam_user force_destroy to website
* provider/aws: Improve clarity of looping over pages in delete IAM user
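A minimal sketch of the `force_destroy` option described in the commits above:
```
resource "aws_iam_user" "ci" {
  name = "ci-bot"

  # When true, access keys created outside of Terraform are deleted
  # before the user itself is destroyed.
  force_destroy = true
}
```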
Sidebar:
- Rename "Azure (Resource Manager)" to "Microsoft Azure" and sort
accordingly
- Rename "Azure (Service Management)" to "Microsoft Azure (Legacy ASM)"
and sort accordingly
ARM provider docs:
- Name changes everywhere to Microsoft Azure Provider
- Mention and link to "legacy Azure Service Management Provider" in opening paragraph
- Sidebar gains link at bottom to Azure Service Management Provider
ASM provider docs:
- Name changes everywhere to Azure Service Management Provider
- Sidebar gains link at bottom to Microsoft Azure Provider
- Every page gets a header with the following
- "NOTE: The Azure Service Management provider is no longer being actively developed by HashiCorp employees. It continues to be supported by the community. We recommend using the Azure Resource Manager based [Microsoft Azure Provider] instead if possible."
* add opsworks permission resource
* add docs
* remove permission from state if the permission object could not be found
* remove nil validate function. validation is done in schema.Resource.
* add id to the list of exported values
* range over permissions to check that we have found the correct one
* removed comment
* removed set id
* fix unknown region us-east-1c
* add user_profile resource
* add docs
* add default value
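A rough sketch of the two resources added above; argument names are assumed, and the IAM user and OpsWorks stack references are illustrative:
```
resource "aws_opsworks_user_profile" "deployer" {
  user_arn     = "${aws_iam_user.deployer.arn}"
  ssh_username = "deployer"
}

resource "aws_opsworks_permission" "deployer" {
  stack_id   = "${aws_opsworks_stack.main.id}"
  user_arn   = "${aws_iam_user.deployer.arn}"
  level      = "deploy"
  allow_ssh  = true
  allow_sudo = false
}
```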
* provider/aws: Support kms_key_id for `aws_rds_cluster`
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSRDSCluster_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSRDSCluster_
-timeout 120m
=== RUN TestAccAWSRDSCluster_basic
--- PASS: TestAccAWSRDSCluster_basic (127.57s)
=== RUN TestAccAWSRDSCluster_kmsKey
--- PASS: TestAccAWSRDSCluster_kmsKey (323.72s)
=== RUN TestAccAWSRDSCluster_encrypted
--- PASS: TestAccAWSRDSCluster_encrypted (173.25s)
=== RUN TestAccAWSRDSCluster_backupsUpdate
--- PASS: TestAccAWSRDSCluster_backupsUpdate (264.07s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 888.638s
```
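A short sketch of the new argument; the KMS key only applies when storage encryption is enabled (the ARN is illustrative):
```
resource "aws_rds_cluster" "default" {
  cluster_identifier = "aurora-cluster-demo"
  master_username    = "foo"
  master_password    = "mustbeeightchars"

  storage_encrypted = true
  kms_key_id        = "arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555"
}
```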
* provider/aws: Add KMS Key ID to `aws_rds_cluster_instance`
Fixes #7299, where it was found that computer_name is not optional (as
the MSDN documentation states)
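A sketch of where the now-required field sits; the other required virtual machine blocks are omitted for brevity and the names are illustrative:
```
resource "azurerm_virtual_machine" "web" {
  name                  = "web-vm"
  location              = "West US"
  resource_group_name   = "${azurerm_resource_group.test.name}"
  network_interface_ids = ["${azurerm_network_interface.test.id}"]
  vm_size               = "Standard_A0"

  # storage_image_reference and storage_os_disk blocks omitted for brevity

  os_profile {
    computer_name  = "web-vm" # required, not optional
    admin_username = "adminuser"
    admin_password = "Password1234!"
  }
}
```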
```
make testacc TEST=./builtin/providers/azurerm TESTARGS='-run=TestAccAzureRMVirtualMachine_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /vendor/)
TF_ACC=1 go test ./builtin/providers/azurerm -v -run=TestAccAzureRMVirtualMachine_ -timeout 120m
=== RUN TestAccAzureRMVirtualMachine_basicLinuxMachine
--- PASS: TestAccAzureRMVirtualMachine_basicLinuxMachine (403.53s)
=== RUN TestAccAzureRMVirtualMachine_tags
--- PASS: TestAccAzureRMVirtualMachine_tags (488.46s)
=== RUN TestAccAzureRMVirtualMachine_updateMachineSize
--- PASS: TestAccAzureRMVirtualMachine_updateMachineSize (601.82s)
=== RUN TestAccAzureRMVirtualMachine_basicWindowsMachine
--- PASS: TestAccAzureRMVirtualMachine_basicWindowsMachine (646.75s)
=== RUN TestAccAzureRMVirtualMachine_windowsUnattendedConfig
--- PASS: TestAccAzureRMVirtualMachine_windowsUnattendedConfig (891.42s)
=== RUN TestAccAzureRMVirtualMachine_winRMConfig
--- PASS: TestAccAzureRMVirtualMachine_winRMConfig (768.73s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm 3800.734s
```
* docs/digitalocean: Adding an import section to the bottom of the DO
importable resources
* docs/azurerm: Adding the Import sections for the AzureRM Importable resources
* docs/aws: Adding the import sections to the AWS provider pages
* Add scaleway provider
This PR allows the entire Scaleway stack to be managed with Terraform.
Example usage looks like this:
```
provider "scaleway" {
  api_key      = "snap"
  organization = "snip"
}

resource "scaleway_ip" "base" {
  server = "${scaleway_server.base.id}"
}

resource "scaleway_server" "base" {
  name = "test"

  # ubuntu 14.04
  image = "aecaed73-51a5-4439-a127-6d8229847145"
  type  = "C2S"
}

resource "scaleway_volume" "test" {
  name       = "test"
  size_in_gb = 20
  type       = "l_ssd"
}

resource "scaleway_volume_attachment" "test" {
  server = "${scaleway_server.base.id}"
  volume = "${scaleway_volume.test.id}"
}

resource "scaleway_security_group" "base" {
  name        = "public"
  description = "public gateway"
}

resource "scaleway_security_group_rule" "http-ingress" {
  security_group = "${scaleway_security_group.base.id}"

  action    = "accept"
  direction = "inbound"
  ip_range  = "0.0.0.0/0"
  protocol  = "TCP"
  port      = 80
}

resource "scaleway_security_group_rule" "http-egress" {
  security_group = "${scaleway_security_group.base.id}"

  action    = "accept"
  direction = "outbound"
  ip_range  = "0.0.0.0/0"
  protocol  = "TCP"
  port      = 80
}
```
Note that volume attachments require the server to be stopped, which can lead to
downtime if you attach new volumes to servers that are already in use.
* Update IP read to handle 404 gracefully
* Read back resource on update
* Ensure IP detachment works as expected
Sadly this is not part of the official scaleway api just yet
* Adjust detachIP helper
based on feedback from @QuentinPerez in
https://github.com/scaleway/scaleway-cli/pull/378
* Cleanup documentation
* Rename api_key to access_key
Following @stack72's suggestion, rename the provider's api_key for more clarity.
* Make tests less chatty by using custom logger
We cannot use the "id" field to represent policy ID, because it is used
internally by Terraform. Also change the "id" field within a statement
to "sid" for consistency with the generated JSON.
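A small sketch of the resulting shape, with the policy-level ID supplied as `policy_id` and the statement-level ID as `sid`:
```
data "aws_iam_policy_document" "example" {
  # Policy-level ID; "id" itself is reserved by Terraform.
  policy_id = "policy-abc123"

  statement {
    # Rendered as "Sid" in the generated JSON.
    sid       = "AllowListBucket"
    actions   = ["s3:ListBucket"]
    resources = ["arn:aws:s3:::example-bucket"]
  }
}
```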
The template resources don't actually need to retain any state, so they
are good candidates to be data sources.
This includes a few tweaks to the acceptance tests -- now configured to
run as unit tests -- since it seems that they have been slightly broken
for a while now. In particular, the "update" cases are no longer tested
because updating is not a meaningful operation for a data source.
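For example, `template_file` becomes a data source; a minimal sketch:
```
data "template_file" "init" {
  template = "${file("${path.module}/init.tpl")}"

  vars {
    consul_address = "10.0.0.5"
  }
}

# The rendered result is available as "${data.template_file.init.rendered}".
```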
When adding multiple notifications from one S3 bucket to one SQS queue, it wasn't immediately intuitive how to do this.
At first I created two `aws_s3_bucket_notification` configs and they seemed to work fine, however the config for one event
will overwrite the other. In order to have multiple events, you can define the `queue` key twice, or use an array if you're
working with the JSON syntax. I tried to make this clearer in the documentation.
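For example, to deliver two different event types from one bucket to the same queue, the `queue` block is simply repeated (names are illustrative):
```
resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket = "${aws_s3_bucket.bucket.id}"

  queue {
    queue_arn     = "${aws_sqs_queue.queue.arn}"
    events        = ["s3:ObjectCreated:*"]
    filter_suffix = ".log"
  }

  queue {
    queue_arn     = "${aws_sqs_queue.queue.arn}"
    events        = ["s3:ObjectRemoved:*"]
    filter_suffix = ".log"
  }
}
```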