Fixes #7423
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSRedshiftCluster_loggingEnabled'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
TF_ACC=1 go test ./builtin/providers/aws -v
-run=TestAccAWSRedshiftCluster_loggingEnabled -timeout 120m
=== RUN TestAccAWSRedshiftCluster_loggingEnabled
--- PASS: TestAccAWSRedshiftCluster_loggingEnabled (675.21s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 675.233s
```
* `map(key, value, ...)` - Returns a map consisting of the key/value pairs
specified as arguments. Every odd argument must be a string key, and every
even argument must have the same type as the other values specified.
Duplicate keys are not allowed. Examples:
* `map("hello", "world")`
* `map("us-east", list("a", "b", "c"), "us-west", list("b", "c", "d"))`
* provider/mysql: User Resource
This commit introduces a mysql_user resource. It includes basic
functionality of adding a user@host along with a password.
* provider/mysql: Grant Resource
This commit introduces a mysql_grant resource. It can grant a set
of privileges to a user against a whole database.
* provider/mysql: Adding documentation for user and grant resources
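A minimal sketch of the two resources above used together (endpoint and credentials are illustrative; argument names should be checked against the provider docs):
```
provider "mysql" {
  endpoint = "mysql.example.com:3306"
  username = "root"
  password = "supersecret"
}

# Add a user@host along with a password.
resource "mysql_user" "app" {
  user     = "app"
  host     = "10.0.0.%"
  password = "changeme"
}

# Grant a set of privileges to that user against a whole database.
resource "mysql_grant" "app" {
  user       = "${mysql_user.app.user}"
  host       = "${mysql_user.app.host}"
  database   = "app"
  privileges = ["SELECT", "INSERT", "UPDATE"]
}
```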
Previously the consul_keys resource did double-duty as both a reader and
writer of values from the Consul key/value store, but that made its
interface rather confusing and complex, as well as having all of the other
general problems associated with read-only resources.
Here we split the functionality such that reading is done with the
consul_keys data source while writing is done with the consul_keys
resource.
The old read behavior of the resource is still supported, but it's no
longer documented (except as a deprecation note) and will generate
deprecation warnings when used.
In future it should be possible to simplify the consul_keys resource by
removing all of the read support, but that is deferred for now to give
users a chance to gracefully migrate to the new data source.
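A minimal sketch of the split, assuming the documented key block attributes (name/path/default on the data source, path/value on the resource) and the var map for read values:
```
# Reading is now done with the consul_keys data source.
data "consul_keys" "app" {
  key {
    name    = "environment"
    path    = "apps/example/env"
    default = "staging"
  }
}

# Writing stays on the consul_keys resource.
resource "consul_keys" "app" {
  key {
    path  = "apps/example/release"
    value = "1.2.3"
  }
}

output "environment" {
  # Read values are exposed under the data source's var map.
  value = "${data.consul_keys.app.var.environment}"
}
```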
Expose the network interface ID that is created with a new instance.
This can be useful when associating an existing elastic IP to the
default interface on an instance that has multiple network interfaces.
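A short sketch of the use case; the attribute name network_interface_id on aws_instance is an assumption here and should be checked against the attribute reference:
```
resource "aws_instance" "web" {
  ami           = "ami-123456"
  instance_type = "t2.micro"
}

# Associate an Elastic IP with the instance's default network interface
# rather than with the instance as a whole.
resource "aws_eip" "web" {
  vpc               = true
  network_interface = "${aws_instance.web.network_interface_id}" # assumed attribute name
}
```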
* provider/scaleway: update api version
* provider/scaleway: expose ipv6 support, rename ip attributes
since it can be both ipv4 and ipv6, choose a more generic name.
* provider/scaleway: allow servers in different SGs
* provider/scaleway: update documentation
* provider/scaleway: Update docs with security group
* provider/scaleway: add testcase for server security groups
* provider/scaleway: make deleting of security rules more resilient
* provider/scaleway: make deletion of security group more resilient
* provider/scaleway: guard against missing server
* provider/aws: Delete access keys before deleting IAM user
* provider/aws: Put IAM key removal behind force_destroy option
* provider/aws: Move all access key deletion under force_destroy
* Add iam_user force_destroy to website
* provider/aws: Improve clarity of looping over pages in delete IAM user
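A minimal sketch of the new option:
```
resource "aws_iam_user" "deploy" {
  name = "deploy"

  # Remove the user's access keys on destroy, even if they were
  # created outside of Terraform.
  force_destroy = true
}
```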
Sidebar:
- Rename "Azure (Resource Manager)" to "Microsoft Azure" and sort
accordingly
- Rename "Azure (Service Management)" to "Microsoft Azure (Legacy ASM)"
and sort accordingly
ARM provider docs:
- Name changes everywhere to Microsoft Azure Provider
- Mention and link to "legacy Azure Service Management Provider" in opening paragraph
- Sidebar gains link at bottom to Azure Service Management Provider
ASM provider docs:
- Name changes everywhere to Azure Service Management Provider
- Sidebar gains link at bottom to Microsoft Azure Provider
- Every page gets a header with the following
- "NOTE: The Azure Service Management provider is no longer being actively developed by HashiCorp employees. It continues to be supported by the community. We recommend using the Azure Resource Manager based [Microsoft Azure Provider] instead if possible."
* add opsworks permission resource
* add docs
* remove permission from state if the permission object could not be found
* remove nil validate function. validation is done in schema.Resource.
* add id to the list of exported values
* range over permissions to check that we have found the correct one
* removed comment
* removed set id
* fix unknown region us-east-1c
* add user_profile resource
* add docs
* add default value
* provider/aws: Support kms_key_id for `aws_rds_cluster`
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSRDSCluster_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSRDSCluster_
-timeout 120m
=== RUN TestAccAWSRDSCluster_basic
--- PASS: TestAccAWSRDSCluster_basic (127.57s)
=== RUN TestAccAWSRDSCluster_kmsKey
--- PASS: TestAccAWSRDSCluster_kmsKey (323.72s)
=== RUN TestAccAWSRDSCluster_encrypted
--- PASS: TestAccAWSRDSCluster_encrypted (173.25s)
=== RUN TestAccAWSRDSCluster_backupsUpdate
--- PASS: TestAccAWSRDSCluster_backupsUpdate (264.07s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 888.638s
```
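A hedged sketch of the new argument in use (identifiers and the companion aws_kms_key are illustrative):
```
resource "aws_kms_key" "example" {
  description = "RDS cluster encryption key"
}

resource "aws_rds_cluster" "default" {
  cluster_identifier = "aurora-cluster-demo"
  availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"]
  database_name      = "mydb"
  master_username    = "foo"
  master_password    = "barbut8chars"

  # Encrypt storage with a specific KMS key.
  storage_encrypted = true
  kms_key_id        = "${aws_kms_key.example.arn}"
}
```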
* provider/aws: Add KMS Key ID to `aws_rds_cluster_instance`
Fixes #7299, where it was found that computer_name is not optional (as
the MSDN documentation states)
```
make testacc TEST=./builtin/providers/azurerm TESTARGS='-run=TestAccAzureRMVirtualMachine_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /vendor/)
TF_ACC=1 go test ./builtin/providers/azurerm -v -run=TestAccAzureRMVirtualMachine_ -timeout 120m
=== RUN TestAccAzureRMVirtualMachine_basicLinuxMachine
--- PASS: TestAccAzureRMVirtualMachine_basicLinuxMachine (403.53s)
=== RUN TestAccAzureRMVirtualMachine_tags
--- PASS: TestAccAzureRMVirtualMachine_tags (488.46s)
=== RUN TestAccAzureRMVirtualMachine_updateMachineSize
--- PASS: TestAccAzureRMVirtualMachine_updateMachineSize (601.82s)
=== RUN TestAccAzureRMVirtualMachine_basicWindowsMachine
--- PASS: TestAccAzureRMVirtualMachine_basicWindowsMachine (646.75s)
=== RUN TestAccAzureRMVirtualMachine_windowsUnattendedConfig
--- PASS: TestAccAzureRMVirtualMachine_windowsUnattendedConfig (891.42s)
=== RUN TestAccAzureRMVirtualMachine_winRMConfig
--- PASS: TestAccAzureRMVirtualMachine_winRMConfig (768.73s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm 3800.734s
```
Allow lists and maps within the list interpolation function via variable
interpolation. Since this requires setting the variadic type to TypeAny,
the callback checks that the resulting list is not heterogeneous.
* docs/digitalocean: Adding an import section to the bottom of the DO
importable resources
* docs/azurerm: Adding the Import sections for the AzureRM Importable resources
* docs/aws: Adding the import sections to the AWS provider pages
The list() interpolation function provides a way to add support for list
literals (of strings) to HIL without having to invent new syntax for it
and modify the HIL parser.
It presents as a function, thus:
- list() -> []
- list("a") -> ["a"]
- list("a", "b") -> ["a", "b"]
Thanks to @wr0ngway for the idea of this approach, fixes #7460.
* Add scaleway provider
This PR allows the entire Scaleway stack to be managed with Terraform.
Example usage looks like this:
```
provider "scaleway" {
api_key = "snap"
organization = "snip"
}
resource "scaleway_ip" "base" {
server = "${scaleway_server.base.id}"
}
resource "scaleway_server" "base" {
name = "test"
# ubuntu 14.04
image = "aecaed73-51a5-4439-a127-6d8229847145"
type = "C2S"
}
resource "scaleway_volume" "test" {
name = "test"
size_in_gb = 20
type = "l_ssd"
}
resource "scaleway_volume_attachment" "test" {
server = "${scaleway_server.base.id}"
volume = "${scaleway_volume.test.id}"
}
resource "scaleway_security_group" "base" {
name = "public"
description = "public gateway"
}
resource "scaleway_security_group_rule" "http-ingress" {
security_group = "${scaleway_security_group.base.id}"
action = "accept"
direction = "inbound"
ip_range = "0.0.0.0/0"
protocol = "TCP"
port = 80
}
resource "scaleway_security_group_rule" "http-egress" {
security_group = "${scaleway_security_group.base.id}"
action = "accept"
direction = "outbound"
ip_range = "0.0.0.0/0"
protocol = "TCP"
port = 80
}
```
Note that volume attachments require the server to be stopped, which can lead to
downtime if you attach new volumes to servers that are already in use.
* Update IP read to handle 404 gracefully
* Read back resource on update
* Ensure IP detachment works as expected
Sadly this is not part of the official scaleway api just yet
* Adjust detachIP helper
based on feedback from @QuentinPerez in
https://github.com/scaleway/scaleway-cli/pull/378
* Cleanup documentation
* Rename api_key to access_key
following @stack72's suggestion, rename the provider's api_key for more clarity
* Make tests less chatty by using custom logger
We cannot use the "id" field to represent policy ID, because it is used
internally by Terraform. Also change the "id" field within a statement
to "sid" for consistency with the generated JSON.
This commit removes the ability to index into complex output types using
`terraform output a_list 1` (for example), and adds a `-json` flag to
the `terraform output` command, such that the output can be piped
through a post-processor such as jq or json. This removes the need to
allow arbitrary traversal of nested structures.
It also adds tests of human readable ("normal") output with nested lists
and maps, and of the new JSON output.
The template resources don't actually need to retain any state, so they
are good candidates to be data sources.
This includes a few tweaks to the acceptance tests -- now configured to
run as unit tests -- since it seems that they have been slightly broken
for a while now. In particular, the "update" cases are no longer tested
because updating is not a meaningful operation for a data source.
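A minimal sketch of the data source form (paths and variable names are illustrative):
```
data "template_file" "init" {
  template = "${file("${path.module}/init.tpl")}"

  vars {
    consul_address = "10.0.0.5"
  }
}

resource "aws_instance" "web" {
  ami           = "ami-123456"
  instance_type = "t2.micro"

  # The rendered attribute contains the template with vars substituted.
  user_data = "${data.template_file.init.rendered}"
}
```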
When adding multiple notifications from one S3 bucket to one SQS queue, it wasn't immediately intuitive how to do this.
At first I created two `aws_s3_bucket_notification` configs and they seemed to work fine; however, the config for one event
will overwrite the other. In order to have multiple events, you can define the `queue` key twice, or use an array if you're
working with the JSON syntax. I tried to make this clearer in the documentation.
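A hedged sketch of the documented pattern, repeating the queue block inside a single resource (names, events and filters are illustrative):
```
resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket = "${aws_s3_bucket.bucket.id}"

  # Repeat the queue block for each event set; two separate
  # aws_s3_bucket_notification resources would overwrite each other.
  queue {
    queue_arn     = "${aws_sqs_queue.queue.arn}"
    events        = ["s3:ObjectCreated:*"]
    filter_suffix = ".log"
  }

  queue {
    queue_arn     = "${aws_sqs_queue.queue.arn}"
    events        = ["s3:ObjectRemoved:*"]
    filter_suffix = ".log"
  }
}
```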
This fixes #7157. It doesn't change the way aws_ami works.
```
make testacc TEST=./builtin/providers/aws
TESTARGS='-run=TestAccAWSAMICopy'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /vendor/)
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSAMICopy
-timeout 120m
=== RUN TestAccAWSAMICopy
--- PASS: TestAccAWSAMICopy (479.75s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 479.769s
```
Allows load balancer policies and their assignment to backend servers or listeners to be configured independently.
This gives flexibility to configure additional policies on AWS elastic load balancers aside from the already provided "convenience" wrappers for cookie stickiness.
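A hedged sketch of what the independent resources might look like; the resource names (aws_load_balancer_policy, aws_load_balancer_backend_server_policy) and an aws_elb.example defined elsewhere are assumptions to verify against the provider docs:
```
resource "aws_load_balancer_policy" "proxy_policy" {
  load_balancer_name = "${aws_elb.example.name}"
  policy_name        = "enable-proxy-protocol"
  policy_type_name   = "ProxyProtocolPolicyType"

  policy_attribute {
    name  = "ProxyProtocol"
    value = "true"
  }
}

# Assign the policy to a backend server port, independently of the policy itself.
resource "aws_load_balancer_backend_server_policy" "proxy_backend" {
  load_balancer_name = "${aws_elb.example.name}"
  instance_port      = 443

  policy_names = [
    "${aws_load_balancer_policy.proxy_policy.policy_name}",
  ]
}
```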
This resource (unlike the others in this provider) isn't stateful, so it
is a good candidate to be a data source.
The old resource form is preserved via the standard shim in helper/schema,
which will generate a deprecation warning but will still allow the
resource to be used.
When applying or removing 2+ security groups from an instance, an EOF
error will be triggered even though the action was successful. This
patch accounts for and ignores the EOF error. It also adds a test
case.
Security Group and Port documentation are also updated in this
commit.
* small doc update
* provider/atlas: Add docs for Artifact Data Source
* provider/atlas: Remove a test method that isn't used
* provider/atlas: Add Data Source for Atlas Artifact
* provider/atlas: Show deprecation error on atlas_artifact resource
* Added support for redshift destination to firehose delivery streams
* Small documentation fix
* go fmt after rebase
* small fixes after rebase
* provider/aws: Firehose test cleanups
* provider/aws: Update docs
* Convert Redshift and S3 blocks to TypeList
* provider/aws: Add migration for S3 Configuration in Kinesis firehose
* providers/aws: Safety first when building Redshift config options
* restore commented out log statements in the migration
* provider/aws: use MaxItems in schema
* added additional error info for when memory swap assert fails.
related to https://github.com/hashicorp/terraform/pull/7392
* updated docker_container documentation
reflect recent changes to docker provider around tests, dns options and
dns search support.
* Grammar and punctuation changes
Docker container documentation.
* Spell checking, grammar and punctuation.
Docker container documentation.
* Markdown changes to docker container documentation
* Add SES resource
* Detect ReceiptRule deletion outside of Terraform
* Handle order of rule actions
* Add position field to docs
* Fix hashes, add log messages, and other small cleanup
* Fix rebase issue
* Fix formatting
In CloudStack you can dynamically start using an ACL and, once you use
an ACL, you can dynamically swap ACLs. But once you're using an ACL, you
can no longer stop using an ACL without rebuilding the network.
This change makes the `ForceNew` value dynamic so that it only returns
`true` if you are reverting from using an ACL to not using an ACL
anymore, making this functionally in line with the behaviour CloudStack
offers.
This data source allows Terraform to work with externally modified state, e.g.
when you're using an ECS service which is continuously updated by your CI via the
AWS CLI.
Right now you'd have to wrap Terraform in a shell script which looks up the
current image digest, so running Terraform won't change the updated service.
Using the aws_ecs_container_definition data source you can now leverage
Terraform directly, removing the wrapper entirely.
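A hedged sketch of the data source (the task definition name is illustrative; argument and attribute names should be checked against the docs):
```
# Look up the container definition of the currently deployed task revision.
data "aws_ecs_container_definition" "current" {
  task_definition = "app"   # family or full ARN of the task definition
  container_name  = "app"
}

output "deployed_image" {
  # The image the service is actually running, even if it was last
  # updated by the CI via the AWS CLI.
  value = "${data.aws_ecs_container_definition.current.image}"
}
```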
Since this resource produces a list it feels more intuitive to give its
attribute a plural name, and since the noun "instance" already means
something specific in the AWS provider that doesn't apply here we use
"names" to indicate that these are availability zone names.
Also includes updating the docs to not show a dynamic count example for
now, since we don't support that yet.
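A short sketch of the renamed attribute in use:
```
data "aws_availability_zones" "available" {}

output "zone_names" {
  # "names" holds the availability zone names this account has access to.
  value = "${data.aws_availability_zones.available.names}"
}

resource "aws_subnet" "primary" {
  vpc_id            = "vpc-123456"
  cidr_block        = "10.0.1.0/24"
  availability_zone = "${element(data.aws_availability_zones.available.names, 0)}"
}
```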
Make publicly_accessible default to false
Fixes #7035
A known issue in Terraform means that d.GetOk() on a bool which is false
will report the value as not set. Therefore, when people set
publicly_accessible to false, it never gets evaluated on Create.
We are going to make it default to false now.
The documentation wording implies that in all cases you have to manually accept peering requests. This change is intended to clarify where this is required. The documentation also distinguishes between "basic usage" and "basic usage with tags", but the expanded usage didn't actually provide much additional useful information. Expanded it a bit to show the use of auto_accept, since both VPCs are created by the same configuration, and to show setting the Name tag for proper display in the console.
When resizing a DO droplet, you can only increase the size, not
decrease it. If you try to go down in size, the API will return this
error:
```
* digitalocean_droplet.foobar: Error resizing droplet (17090364):
POST https://api.digitalocean.com/v2/droplets/17090364/actions:
422 Size can not decrease size of Droplet's disk image
```
Since custom_configuration_parameters can't take dots, we cannot
set 'disk.EnableUUID'. This adds a parameter for this option that gets
added to the configSpec. The option causes the VM's disks to be mounted
by UUID on the guest OS.
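A hedged sketch; enable_disk_uuid is an assumed argument name for the option described above and should be verified against the vsphere_virtual_machine docs:
```
resource "vsphere_virtual_machine" "web" {
  name   = "terraform-web"
  vcpu   = 2
  memory = 4096

  # Assumed flag: sets disk.EnableUUID in the configSpec so the guest OS
  # can mount disks by UUID (custom_configuration_parameters cannot carry dots).
  enable_disk_uuid = true

  disk {
    template = "centos-7"
  }

  network_interface {
    label = "VM Network"
  }
}
```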
* Adding debug functionality to log debug api calls
* adding debug and refactoring tests
* more tweaks with tests
* updating documentation
* more refactoring of tests
* working through refactoring for testing
* removing logging that displays username and password
* more work on getting tests stable
The example is referencing a non-existent variable, `allocation_id`, within the `aws_eip` resource. I believe this should actually be `aws_eip.example.id` instead of `aws_eip.example.allocation_id`.
Add the iam_arn attribute to aws_cloudfront_origin_access_identity,
which computes the IAM ARN for a certain CloudFront origin access
identity.
This is necessary because S3 modifies the bucket policy if CanonicalUser
is sent, causing spurious diffs with aws_s3_bucket resources.
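A hedged sketch of the new attribute used as the bucket policy principal, which avoids the CanonicalUser rewrite described above (bucket name is illustrative):
```
resource "aws_cloudfront_origin_access_identity" "origin" {
  comment = "Access identity for the example bucket"
}

resource "aws_s3_bucket" "example" {
  bucket = "example-origin-bucket"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "${aws_cloudfront_origin_access_identity.origin.iam_arn}"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-origin-bucket/*"
    }
  ]
}
POLICY
}
```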
This brings over the work done by @apparentlymart and @radeksimko in
PR #3124, and converts it into a data source for the AWS provider:
This commit adds a helper to construct IAM policy documents using
familiar Terraform concepts. It makes Terraform-style interpolations
easier and resolves the syntax conflict between Terraform interpolations
and IAM policy variables by changing the latter to use &{...} for its
interpolations.
Its use is completely optional and users are free to go on using literal
heredocs, file interpolations or whatever else; this just adds another
option that fits more naturally into a Terraform config.
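A minimal sketch of the data source, using the &{...} form for an IAM policy variable:
```
data "aws_iam_policy_document" "user_home" {
  statement {
    actions = ["s3:GetObject", "s3:PutObject"]

    # &{aws:username} is an IAM policy variable, not a Terraform interpolation.
    resources = ["arn:aws:s3:::example-bucket/home/&{aws:username}/*"]
  }
}

resource "aws_iam_policy" "user_home" {
  name   = "user-home-access"
  policy = "${data.aws_iam_policy_document.user_home.json}"
}
```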
...as this will hopefully clue people in that this function will indeed
work to manipulate ipv6 networks.
Not that I completely spaced on that for quite some time, or anything
like that.
Nope, not me. Not at all.
This data source allows one to look up the most recent AMI for a specific
set of parameters, much like aws ec2 describe-images in the AWS CLI.
Basically a refresh of hashicorp/terraform#4396, in data source form.
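A hedged sketch of the lookup (filters, owner ID and AMI name pattern are illustrative):
```
data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"] # Canonical
}

resource "aws_instance" "web" {
  ami           = "${data.aws_ami.ubuntu.id}"
  instance_type = "t2.micro"
}
```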
* Add per user, role and group policy attachment
* Add docs for new IAM policy attachment resources.
* Make policy attachment resources manage only 1 entity<->policy attachment
* provider/aws: Tidy up IAM Group/User/Role attachments
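A hedged sketch of the per-entity attachment resources, each managing a single entity-to-policy attachment (policy file path and role name are illustrative):
```
resource "aws_iam_policy" "readonly" {
  name   = "example-readonly"
  policy = "${file("policies/readonly.json")}"
}

resource "aws_iam_user" "ci" {
  name = "ci"
}

# One attachment per entity<->policy pair.
resource "aws_iam_user_policy_attachment" "ci_readonly" {
  user       = "${aws_iam_user.ci.name}"
  policy_arn = "${aws_iam_policy.readonly.arn}"
}

resource "aws_iam_role_policy_attachment" "app_readonly" {
  role       = "app-role"
  policy_arn = "${aws_iam_policy.readonly.arn}"
}
```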
This commit adds a data source with a single list, `instance` for the
schema which gets populated with the availability zones to which an
account has access.
Allow a cloud admin to target a specific tenant in which to allocate
a floating IP. This is useful when the cloud admin does not want to
delegate network privileges to the tenants, or for various QA scenarios.
We had a line in the resource's Update func that said:
```
Hash key can only be specified at creation, you cannot modify it.
```
The resource has now been changed to ForceNew on the hash key:
```
aws_dynamodb_table.demo-user-table: Refreshing state... (ID: Users)
aws_dynamodb_table.demo-user-table: Destroying...
aws_dynamodb_table.demo-user-table: Destruction complete
aws_dynamodb_table.demo-user-table: Creating...
aws_dynamodb_table.demo-user-table: Creation complete
```
Changed schema type for disks to support dynamic non-ordered disk
swapping. All disk attributes have been made non-ForceNew, since
any changes should be handled in the upgrade() function.
Added a 'name' attribute to disks to act as a unique
identifier for when users request new disks. It is also used as
the filename for the new disk. Templates are considered immutable.
The openstack_networking_subnet_v2 resource was originally designed
to have DHCP disabled by default; however, a bug in the original
implementation caused DHCP to always be enabled and never be
disabled. This bug was fixed in #6052.
Recent discussions have shown that users prefer if DHCP is enabled
by default. This commit makes the change.
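A small sketch of the changed default; DHCP now has to be disabled explicitly (network and CIDR are illustrative):
```
resource "openstack_networking_network_v2" "network_1" {
  name           = "tf_test_network"
  admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "subnet_1" {
  network_id = "${openstack_networking_network_v2.network_1.id}"
  cidr       = "192.168.199.0/24"
  ip_version = 4

  # DHCP is now enabled by default; set enable_dhcp = false to disable it.
  enable_dhcp = false
}
```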
When stage_name is not passed to the aws_api_gateway_deployment
resource, terraform apply will fail. This is
because stage_name is required, not optional.
* Grafana provider
* grafana_data_source resource.
Allows data sources to be created in Grafana. Supports all data source
types that are accepted in the current version of Grafana, and will
support any future ones that fit into the existing structure.
* Vendoring of apparentlymart/go-grafana-api
This is in anticipation of adding a Grafana provider plugin.
* grafana_dashboard resource
* Website documentation for the Grafana provider.
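A hedged sketch of the provider and resources (argument names such as auth, database_name and config_json are assumptions to check against the docs):
```
provider "grafana" {
  url  = "http://grafana.example.com/"
  auth = "admin:admin"
}

resource "grafana_data_source" "influxdb" {
  type          = "influxdb"
  name          = "metrics"
  url           = "http://influxdb.example.com:8086/"
  database_name = "telegraf"
}

resource "grafana_dashboard" "metrics" {
  config_json = "${file("grafana-dashboard.json")}"
}
```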
* provider/datadog Update go-datadog-api.
* provider/datadog Add support for "require_full_window" and "locked".
* provider/datadog Update tests, update doco, gofmt.
* provider/datadog Add options to update resource.
* provider/datadog "require_full_window" defaults to True, "locked" to False. Use
those initial values as the starting configuration.
* provider/datadog Update notify_audit tests to use the default value for
testAccCheckDatadogMonitorConfig and a custom value for
testAccCheckDatadogMonitorConfigUpdated.
This catches a situation where the code ignores setting the option on creation,
and the update function merely asserts the default value, versus actually changing
the value.
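A hedged sketch of the two new options on datadog_monitor (query and message are illustrative):
```
resource "datadog_monitor" "cpu" {
  name    = "High CPU usage"
  type    = "metric alert"
  message = "CPU usage is too high. Notify: @pagerduty"
  query   = "avg(last_5m):avg:system.cpu.user{*} by {host} > 90"

  # New options: require_full_window defaults to true, locked to false.
  require_full_window = false
  locked              = true
}
```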