* Grafana provider
* grafana_data_source resource.
Allows data sources to be created in Grafana. Supports all data source
types that are accepted in the current version of Grafana, and will
support any future ones that fit into the existing structure.
* Vendoring of apparentlymart/go-grafana-api
This is in anticipation of adding a Grafana provider plugin.
* grafana_dashboard resource
* Website documentation for the Grafana provider.
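For illustration, a minimal configuration using these pieces might look like the following sketch; the provider URL/auth values and the data source attributes shown are assumptions, not taken from the changelog:
```hcl
# Sketch only: URLs, credentials, and attribute values are placeholders.
provider "grafana" {
  url  = "https://grafana.example.com/"
  auth = "grafana-api-key"
}

resource "grafana_data_source" "influxdb" {
  type          = "influxdb"
  name          = "example-influxdb"
  url           = "http://influxdb.example.com:8086/"
  database_name = "metrics"
}

resource "grafana_dashboard" "metrics" {
  config_json = "${file("dashboards/metrics.json")}"
}
```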
* provider/datadog Update go-datadog-api.
* provider/datadog Add support for "require_full_window" and "locked".
* provider/datadog Update tests, update doco, gofmt.
* provider/datadog Add options to update resource.
* provider/datadog "require_full_window" defaults to True, "locked" to False. Use
those initial values as the starting configuration.
* provider/datadog Update notify_audit tests to use the default value for
testAccCheckDatadogMonitorConfig and a custom value for
testAccCheckDatadogMonitorConfigUpdated.
This catches the situation where the code ignores the option on creation and
the update function merely asserts the default value, rather than actually
changing it.
`azurerm_storage_account` access keys
Please note that we do NOT have the ability to manage the access keys -
we are only reading the keys that the account generates for us. To manage
(regenerate) the keys, you still need to use the Azure portal.
As a first example of a real-world data source, the pre-existing
terraform_remote_state resource is adapted to be a data source. The
original resource is shimmed to wrap the data source for backward
compatibility.
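For example, a data-source form of remote state might be written like this 0.7-style sketch (backend settings are placeholder values):
```hcl
# Placeholder bucket/key; any supported remote state backend can be used here.
data "terraform_remote_state" "network" {
  backend = "s3"

  config {
    bucket = "example-terraform-state"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

# Outputs from the referenced state become readable attributes of this data
# source (the exact reference form depends on the Terraform version in use).
```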
As requested in #4822, add support for a KMS Key ID (ARN) for DB Instance.
```
make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSDBInstance_kmsKey' 2>~/tf.log
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /vendor/)
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSDBInstance_kmsKey -timeout 120m
=== RUN TestAccAWSDBInstance_basic
--- PASS: TestAccAWSDBInstance_basic (587.37s)
=== RUN TestAccAWSDBInstance_kmsKey
--- PASS: TestAccAWSDBInstance_kmsKey (625.31s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 1212.684s
```
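For reference, a hedged sketch of the new argument in use (identifiers and the ARN are placeholders; KMS encryption also requires storage_encrypted in RDS):
```hcl
resource "aws_db_instance" "example" {
  identifier        = "example-db"
  engine            = "mysql"
  instance_class    = "db.t2.micro"
  allocated_storage = 10
  username          = "admin"
  password          = "change-me-1234"

  # Placeholder key ARN; encrypt storage with a customer-managed KMS key.
  storage_encrypted = true
  kms_key_id        = "arn:aws:kms:us-east-1:123456789012:key/00000000-0000-0000-0000-000000000000"
}
```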
Auto-generating an Instance Template name (or just its suffix) allows the
create_before_destroy lifecycle option to function correctly on the
Instance Template resource. This in turn allows Instance Group Managers
to be updated without being destroyed.
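A sketch of the pattern this enables (machine type, image, and names are placeholder values assumed for illustration):
```hcl
resource "google_compute_instance_template" "frontend" {
  # "name" is omitted so the provider auto-generates one, which is what lets
  # create_before_destroy replace the template in place.
  machine_type = "n1-standard-1"

  disk {
    source_image = "debian-8-jessie-v20160511"
  }

  network_interface {
    network = "default"
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "google_compute_instance_group_manager" "frontend" {
  name               = "frontend-igm"
  base_instance_name = "frontend"
  zone               = "us-central1-a"
  target_size        = 2

  # Updating this reference no longer forces the manager to be destroyed.
  instance_template = "${google_compute_instance_template.frontend.self_link}"
}
```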
This introduces the terraform state list command to list the resources
within a state. This is the first of many state management commands to
come into 0.7.
This is the first command of many to come that is considered a
"plumbing" command within Terraform (see "plumbing vs porcelain":
http://git.661346.n2.nabble.com/what-are-plumbing-and-porcelain-td2190639.html).
As such, this PR also introduces a bunch of groundwork to support
plumbing commands.
The main changes:
- Main command output is changed to split "common" and "uncommon"
commands.
- mitchellh/cli is updated to support nested subcommands, since
terraform state list is a nested subcommand.
- terraform.StateFilter is introduced as a way in core to filter/search
the state files. This is very basic currently but I expect to make it
more advanced as time goes on.
- terraform state list command is introduced to list resources in a
state. This can take a series of arguments to filter this down.
Known issues, or things that aren't done in this PR on purpose:
- Unit tests for terraform state list are on the way. Unit tests for the
core changes are all there.
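For a rough idea of the intended usage (the resource addresses below are illustrative placeholders, not real output):
```
$ terraform state list
aws_instance.web
aws_security_group.allow_http

$ terraform state list aws_instance.web
aws_instance.web
```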
This changes the representation of maps in the interpolator from the
dotted flatmap form of a string variable named "var.variablename.key"
per map element to use native HIL maps instead.
This involves porting some of the interpolation functions in order to
keep the tests green, and adding support for map outputs.
There is one backwards incompatibility: as a result of an implementation
detail of maps, one could previously access an indexed map variable using the
syntax "${var.variablename.key}". This is no longer possible - instead the
HIL native syntax "${var.variablename["key"]}" must be used. This was
previously documented (though not heavily used), so it must be noted as a
backwards compatibility issue for Terraform 0.7.
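Concretely, the syntax change looks like this (variable and key names are arbitrary):
```hcl
variable "amis" {
  type = "map"

  default = {
    us-east-1 = "ami-12345678"
    us-west-2 = "ami-87654321"
  }
}

# Terraform 0.6 style, no longer supported in 0.7:
#   value = "${var.amis.us-east-1}"

# Terraform 0.7, HIL native index syntax:
output "ami" {
  value = "${var.amis["us-east-1"]}"
}
```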
* core: Add support for marking outputs as sensitive
This commit allows an output to be marked "sensitive", in which case the
value is redacted in the post-refresh and post-apply list of outputs.
For example, the configuration:
```
variable "input" {
default = "Hello world"
}
output "notsensitive" {
value = "${var.input}"
}
output "sensitive" {
sensitive = true
value = "${var.input}"
}
```
Would result in the output:
```
terraform apply
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
notsensitive = Hello world
sensitive = <sensitive>
```
The `terraform output` command continues to display the value as before.
Limitations: Note that sensitivity is not tracked internally, so if the
output is interpolated in another module into a resource, the value will
be displayed. The value is still present in the state.
* provider/fastly: Add support for Conditions for Fastly Services
Docs here:
- https://docs.fastly.com/guides/conditions/
Also Bump go-fastly version for domain support in S3 Logging
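A hedged sketch of what a condition might look like in a fastly_service_v1 resource (domain, backend, and the condition statement are placeholder values; the block layout is assumed from the provider's general shape):
```hcl
resource "fastly_service_v1" "conditions_demo" {
  name = "conditions-demo"

  domain {
    name = "demo.example.com"
  }

  backend {
    address = "backend.example.com"
    name    = "origin"
    port    = 80
  }

  # Placeholder condition: matches requests for /admin paths.
  condition {
    name      = "admin-requests"
    type      = "REQUEST"
    priority  = 10
    statement = "req.url ~ \"^/admin\""
  }

  force_destroy = true
}
```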
* New top level AWS resource aws_eip_association
* Add documentation for aws_eip_association
* Add tests for aws_eip_association
* provider/aws: Change `aws_eip_association` to have computed parameters
The AWS API was sending more parameters than we had set, so Terraform was
showing constant changes when plans were being formed.
* Adding private IP address reference
* Updating the docs.
* Removing the optional attribute from private_ip_address, since this element is only used in the read.
* Selecting the first element instead of using a loop for now. Change this to a loop when https://github.com/Azure/azure-sdk-for-go/issues/259 is fixed.
Added the hosted_zone_id attribute, which exposes the Route 53 zone ID that
Alias Resource Record Sets can be routed to.
This fixes hashicorp/terraform#6489.
adminPassword
Reports from issues showed the following errors:
```
{
  "error": {
    "code": "InvalidParameter",
    "target": "adminPassword",
    "message": "The supplied password must be between 6-72 characters long and must satisfy at least 3 of password complexity requirements from the following: \r\n1) Contains an uppercase character\r\n2) Contains a lowercase character\r\n3) Contains a numeric digit\r\n4) Contains a special character."
  }
}
```
This commit adds some documentation for the adminPassword complexity
requirements.
ssh_keys were throwing an error similar to this:
```
* azurerm_virtual_machine.test: [DEBUG] Error setting Virtual Machine
  Storage OS Profile Linux Configuration: &errors.errorString{s:"Invalid
  address to set: []string{\"os_profile_linux_config\", \"0\", \"ssh_keys\"}"}
```
This was because of nesting of a Set within a Set in the schema. By
changing this to a List within a Set, the schema works as expected. This
means we can now set SSH keys on VMs. This has been tested using a
remote-exec provisioner and a connection block with the SSH key:
```
azurerm_virtual_machine.test: Still creating... (2m10s elapsed)
azurerm_virtual_machine.test (remote-exec): Connected!
azurerm_virtual_machine.test (remote-exec): CONNECTED!
```
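For reference, the relevant fragment of an azurerm_virtual_machine configuration now looks roughly like this sketch (paths and key data are placeholders; the surrounding VM arguments are omitted):
```hcl
# Fragment of an azurerm_virtual_machine resource; other required arguments omitted.
os_profile_linux_config {
  disable_password_authentication = true

  # ssh_keys is now a List nested in the Set, so this assignment works as expected.
  ssh_keys {
    path     = "/home/testadmin/.ssh/authorized_keys"
    key_data = "${file("~/.ssh/id_rsa.pub")}"
  }
}
```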
Change the AWS DB Instance to now include the DB Option Group parameter, and add a test to prove that it works.
Add acceptance tests for the AWS DB Option Group work. This ensures that options can be added and updated.
Documentation for the AWS DB Option Group resource.
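A hedged sketch of the new parameter in use (engine versions and names are placeholders):
```hcl
resource "aws_db_option_group" "example" {
  name                     = "example-option-group"
  option_group_description = "Example option group"
  engine_name              = "mysql"
  major_engine_version     = "5.6"
}

resource "aws_db_instance" "example" {
  identifier        = "example-db"
  engine            = "mysql"
  engine_version    = "5.6.27"
  instance_class    = "db.t2.micro"
  allocated_storage = 10
  username          = "admin"
  password          = "change-me-1234"

  # The new parameter: attach the option group defined above.
  option_group_name = "${aws_db_option_group.example.name}"
}
```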
automated_snapshot_retention_period
The default value for `automated_snapshot_retention_period` is 1.
Therefore, it can be included in the `CreateClusterInput` without
needing to check that it is set.
This was actually stopping people from setting the value to 0 (disabling
the snapshots), as `d.GetOk()` treats 0 as unset for int values.
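With the fix, disabling automated snapshots outright should work, e.g. (cluster details are placeholders):
```hcl
resource "aws_redshift_cluster" "example" {
  cluster_identifier = "example-cluster"
  node_type          = "dc1.large"
  database_name      = "exampledb"
  master_username    = "exampleuser"
  master_password    = "Change-me-1234"

  # 0 disables automated snapshots; previously this value was silently ignored.
  automated_snapshot_retention_period = 0
}
```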
Just wanted to call out that the CLI now prompts for values for unset variables instead of returning an error. Guessing that was an enhancement somewhere along the line that just didn't get reflected in the docs.
Here is an example that will setup the following:
+ An SSH key resource.
+ A virtual server resource that uses an existing SSH key.
+ A virtual server resource using an existing SSH key and a Terraform managed SSH key (created as "test_key_1" in the example below).
(create this as sl.tf and run terraform commands from this directory):
```hcl
provider "softlayer" {
username = ""
api_key = ""
}
resource "softlayer_ssh_key" "test_key_1" {
name = "test_key_1"
public_key = "${file(\"~/.ssh/id_rsa_test_key_1.pub\")}"
# Windows Example:
# public_key = "${file(\"C:\ssh\keys\path\id_rsa_test_key_1.pub\")}"
}
resource "softlayer_virtual_guest" "my_server_1" {
name = "my_server_1"
domain = "example.com"
ssh_keys = ["123456"]
image = "DEBIAN_7_64"
region = "ams01"
public_network_speed = 10
cpu = 1
ram = 1024
}
resource "softlayer_virtual_guest" "my_server_2" {
name = "my_server_2"
domain = "example.com"
ssh_keys = ["123456", "${softlayer_ssh_key.test_key_1.id}"]
image = "CENTOS_6_64"
region = "ams01"
public_network_speed = 10
cpu = 1
ram = 1024
}
```
You'll need to provide your SoftLayer username and API key,
so that Terraform can connect. If you don't want to put
credentials in your configuration file, you can leave them
out:
```
provider "softlayer" {}
```
...and instead set these environment variables:
- **SOFTLAYER_USERNAME**: Your SoftLayer username
- **SOFTLAYER_API_KEY**: Your API key
IPv6 support added.
We support one IPv6 address per interface. The vSphere SDK appears to support
more than one, since addresses are provided as a list, so this could be
extended later; for now, a single address matches how the existing
configuration parameters were set up by other developers.
The global gateway configuration option has been deprecated. Instead, the user
should specify a gateway at the NIC level (ipv4_gateway and ipv6_gateway).
For now, the global gateway will still be used as a fallback for every NIC's
ipv4_gateway.
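A sketch of NIC-level gateways under the assumed schema (addresses, names, and the disk template are placeholders):
```hcl
resource "vsphere_virtual_machine" "web" {
  name   = "web-01"
  vcpu   = 2
  memory = 4096

  network_interface {
    label              = "VM Network"
    ipv4_address       = "192.168.10.10"
    ipv4_prefix_length = 24
    ipv4_gateway       = "192.168.10.1"

    ipv6_address       = "2001:db8::10"
    ipv6_prefix_length = 64
    ipv6_gateway       = "2001:db8::1"
  }

  disk {
    template = "debian-8-template"
  }
}
```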
This implements two new resource types (see the sketch below):
* openstack_networking_secgroup_v2 - create a Neutron security group
* openstack_networking_secgroup_rule_v2 - create a Neutron security group rule
Unlike their Nova counterparts, the Neutron security groups allow a user to
specify the target tenant_id, allowing a cloud admin to create per-tenant
resources.
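The sketch referenced above (CIDR and names are placeholders):
```hcl
resource "openstack_networking_secgroup_v2" "web" {
  name        = "web"
  description = "Neutron security group managed by Terraform"

  # A cloud admin may additionally set tenant_id to create the group
  # on behalf of another tenant.
}

resource "openstack_networking_secgroup_rule_v2" "web_http" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 80
  port_range_max    = 80
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = "${openstack_networking_secgroup_v2.web.id}"
}
```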
* Adding File Resource for vSphere provider
Allows for file uploads to vSphere at a specified location. This also
includes support for moving or renaming file resources.
* Ensuring required parameters are provided
This commit adds several example uses of the
openstack_compute_instance_v2 resource. It also makes a clarification
about booting from volumes and image ids/names.
* command/fmt: Document -diff doesn't disable -write
As noted in hashicorp/terraform#6343, this description misleadingly
suggested that the `-diff` option disables the `-write` option.
This isn't the case and because of the default options (described in
c753390) the behaviour of `terraform fmt -diff` is actually the same as
`terraform fmt -write -list -diff`.
Replace the "instead of rewriting" description to clarify that.
Documentation in hcl/fmtcmd is corrected in hashicorp/hcl#117 but it's not
really necessary to bump the dependency version.
* command/fmt: Show flag defaults in help text
These were documented on the website but not in the `-help` text. This
should help to clarify that you need to pass `-list=false -write=false
-diff` if you only want to see diffs.
Accordingly I've replaced the word "disabled" with "always false" in the
STDIN special cases so that it matches the terminology used in the defaults
and better indicates that it is overridden.
NB: The 3x duplicated defaults and documentation makes me feel uneasy once
again. I'm not sure how to solve that, though.
* Fix headers and header anchor tags
The markdown parser already generates unique ids for header elements by
downcasing all of the words and replacing spaces with hyphens. Knowing
this, we can take the code blocks out of the headers and use the
generated ids as the link targets.
Aside: I tried to see if there was a standard way of documenting
subresources, but couldn't really find one. Both the aws_elb and
aws_instance resources seem to just say "documented below" without a
link. Then the relevant section is just a new paragraph with a list of
arguments.
* Reformat long lines
I find 80 character lines and whitespaces make the lists much easier to
read :)
* Remove extraneous <a> tags for header anchor tags
Now that middleman generates anchor tags for headers automagically, we
don't need to have blank <a> tags for anchor links to use.
* provider/fastly: Add S3 Log Streaming to Fastly Service
Adds streaming logs to an S3 bucket to Fastly Service V1
* provider/fastly: Bump go-fastly version for domain support in S3 Logging
This change adds support for the proxied configuration option for a
record, which enables origin protection for CloudFlare records.
In order to do so, the Go library needed to be changed, as the old one did
not support the option and was using an outdated API version.
Open issues that ask for this: #5049, #3805.
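For illustration, enabling the option might look like this sketch (zone and record values are placeholders, assuming the era's `domain` argument):
```hcl
resource "cloudflare_record" "www" {
  domain = "example.com"
  name   = "www"
  type   = "A"
  value  = "203.0.113.10"

  # Route the record through CloudFlare's proxy for origin protection.
  proxied = true
}
```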
A user may specify a vmdk in their disk definition.
The options size, template, and vmdk are considered to be mutually exclusive.
A user may also set whether each disk associated with the VM should try to
boot after creation.
Todo: Enforce mutual exclusivity, validate the bootable_vmdk_path.
Just saying `id` is ambiguous: it could be interpreted as the resource ID, which will fail with the following error: `CertificateNotFound: Server Certificate not found for the key: <id>`. The AWS documentation states that the ssl certificate id parameter must be the ARN.
Previously we linked to the whole request body definition for valid values of `runtime`.
Now we link directly to the docs for `Runtime`, which will hopefully make it easier to find the valid values.
This fixes #4570.
Official OpenStack clients commonly support specifying a client
certificate/key to enable SSL client authentication when communicating
with OpenStack services. This patch enables that feature in Terraform
with new parameters and environment variables (see the sketch below):
* 'cert' provider parameter or OS_CERT env variable to specify client
certificate file,
* 'key' provider parameter or OS_KEY env variable to specify client
certificate private key file.
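The sketch referenced above (paths and auth_url are placeholders; the same values can instead come from OS_CERT and OS_KEY):
```hcl
provider "openstack" {
  auth_url = "https://keystone.example.com:5000/v2.0"

  # Client certificate and key for SSL client authentication.
  cert = "/etc/ssl/terraform/client-cert.pem"
  key  = "/etc/ssl/terraform/client-key.pem"
}
```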
It can come in handy to be able to mount ISOs programmatically.
For instance, if you're developing a custom appliance (one that automatically
installs itself on the hard drive volume), you may want to test it automatically
on every successful build (given the ISO is uploaded to the VMware datastore).
There are probably lots of other reasons for using this functionality.
* provider/aws: Fix hashing on CloudFront certificate parameters
Adding the necessary type assertions to values in the viewer_certificate hash
function to ensure that certain fields are indeed not zero string
values, versus simply zero interface{} values (i.e. nil, as is the case for a
map[string]interface{}).
* provider/aws: CloudFront complex structure error handling
Handle errors better on calls to d.Set() in
aws_cloudfront_distribution, namely in flattenDistributionConfig(). Also
caught a bug in the setting of the origin attribute, which was incorrectly
attempting to set origins.
* provider/aws: Pass pointers to set CloudFront primitives
Change a few d.Set() for primitives in aws_cloudfront_distribution and
aws_cloudfront_origin_access_identity to use the pointer versus a
dereference.
* docs: Fix CloudFront examples formatting
Ran each example thru terraform fmt to fix indentation.
* provider/aws: Remove delete retention on CloudFront tests
To play better with Travis and not bloat the test account with disabled
distributions.
Disable-only functionality has been retained - one can enable it with
the TF_TEST_CLOUDFRONT_RETAIN environment variable.
* provider/aws: CloudFront delete waiter error handling
The call to resourceAwsCloudFrontDistributionWaitUntilDeployed() on
deletion of CloudFront distributions was not trapping error messages,
causing issues with waiter failure.
* provider/fastly: Add support for managing Headers
Adds support for managing Headers in a Fastly configuration.
* update acc test
* update website with example of adding a header block
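A hedged sketch of a header block inside a fastly_service_v1 resource (domain, backend, and header values are placeholders; the attribute names are assumed from the provider's general shape):
```hcl
resource "fastly_service_v1" "headers_demo" {
  name = "headers-demo"

  domain {
    name = "demo.example.com"
  }

  backend {
    address = "backend.example.com"
    name    = "origin"
    port    = 80
  }

  # Placeholder header: set a static header on every response.
  header {
    name        = "served-by"
    action      = "set"
    type        = "response"
    destination = "http.X-Served-By"
    source      = "\"fastly\""
  }

  force_destroy = true
}
```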
* provider/aws: Default Network ACL resource
Provides a resource to manage the default AWS Network ACL. VPC Only.
* Remove subnet_id update, mark as computed value. Remove extra tag update
* refactor default rule number to be a constant
* refactor revokeRulesForType to be revokeAllNetworkACLEntries
Refactor method to delete all network ACL entries, regardless of type. The
previous implementation was under the assumption that we may only eliminate some
rule types and possibly not others, so the split was necessary.
We're now removing them all, so the logic isn't necessary.
Several doc and test cleanups are here as well.
* smite subnet_id, improve docs
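A minimal sketch of adopting the default ACL (CIDRs and rule numbers are placeholders):
```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

# Manage the VPC's existing default network ACL instead of creating a new one.
resource "aws_default_network_acl" "default" {
  default_network_acl_id = "${aws_vpc.main.default_network_acl_id}"

  ingress {
    protocol   = "-1"
    rule_no    = 100
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 0
    to_port    = 0
  }
}
```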
According to the libpq documentation, "prefer" is the default in the
underlying library and so setting a different default in the Terraform
layer would be a breaking change for existing users of this provider
whose servers do not have TLS correctly configured.
The docs now link to the libpq manual's discussion of the security
implications of each of the ssl_mode options, so the user can understand
the limitations of the "prefer" default and can make an informed decision
about which setting is appropriate for their situation.
This introduces a provider for Cobbler. Cobbler manages bare-metal
deployments and, to some extent, virtual machines. This initial
commit supports the following resources: distros, profiles, systems,
kickstart files, and snippets.
* CloudFront implementation v3
* Update tests
* Refactor - new resource: aws_cloudfront_distribution
* Includes a complete re-write of the old aws_cloudfront_web_distribution
resource to bring it to feature parity with API and CloudFormation.
* Also includes the aws_cloudfront_origin_access_identity resource to generate
origin access identities for use with S3.
* provider/aws: CodeDeploy Deployment Group Triggers
- Create a Trigger to Send Notifications for AWS CodeDeploy Events
- Update aws_codedeploy_deployment_group docs
* Refactor validateTriggerEvent function and test
- also rename TestAccAWSCodeDeployDeploymentGroup_triggerConfiguration test
* Enhance existing Deployment Group integration tests
- by using built in resource attribute helpers
- these can get quite verbose and repetitive, so passing the resource to a function might be better
- can't use these (yet) to assert trigger configuration state
* Unit tests for conversions between aws TriggerConfig and terraform resource schema
- buildTriggerConfigs
- triggerConfigsToMap
We have a courtesy function in place allowing you to specify either a
`name` or an `ID`. But in order for the graph to be built correctly when
you recreate or taint resources that other resources depend on, we need to
reference the `ID` and *not* the `name`.
So in order to enforce this, and thereby help people avoid making this
mistake unknowingly, I deprecated all the parameters this applies to and
changed the logic, docs and tests accordingly.
Added the ability to set the "privacy" of a github_team resource, so that teams won't automatically be set to private.
* Added the privacy argument to github_team
* Refactored parameter validation to be general for any argument
* Updated testing
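For example, a hedged sketch (the team name and description are placeholders; "closed" and "secret" are the two GitHub privacy levels):
```hcl
resource "github_team" "ops" {
  name        = "ops"
  description = "Operations team"

  # "closed" is visible to all organization members; the default is "secret".
  privacy = "closed"
}
```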
This commit enables the ability to authenticate to OpenStack by way
of a Keystone Token. Tokens can provide a way to use Terraform and
OpenStack with an expiring, temporary credential. The token will need
to be generated out of band from Terraform.
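A sketch of token-based provider configuration (auth_url and token are placeholders):
```hcl
provider "openstack" {
  auth_url = "https://keystone.example.com:5000/v2.0"

  # Temporary Keystone token generated outside of Terraform.
  token = "4ae0a7dedcb44aa1b1b9a2b430cb6b3f"
}
```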
When creating CloudWatch metric alarms, I need to get the HealthCheckId dimension, so a reference to it would be useful:
```
dimensions {
"HealthCheckId" = "${aws_route53_health_check.foo.id}"
}
```
This commit adds a no_gateway attribute. When set, the subnet will
not have a gateway. This is different than not specifying a
gateway_ip since that will cause a default gateway of .1 to be used.
This behavior mirrors the OpenStack Neutron command-line tool.
Fixes #6031
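For illustration, a sketch of a gateway-less subnet (network name and CIDR are placeholders):
```hcl
resource "openstack_networking_network_v2" "internal" {
  name           = "internal"
  admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "internal" {
  network_id = "${openstack_networking_network_v2.internal.id}"
  cidr       = "192.168.199.0/24"
  ip_version = 4

  # Explicitly create the subnet without a gateway, rather than letting
  # Neutron default the gateway to the .1 address.
  no_gateway = true
}
```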
When calling AssociateAddress, the PrivateIpAddress parameter must be
used to select which private IP the EIP should associate with, otherwise
the EIP always associates with the _first_ private IP.
Without this parameter, multiple EIPs couldn't be assigned to a single
ENI. Includes covering test and docs update.
Fixes #2997
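A hedged sketch of targeting a specific private IP on an ENI (the subnet ID and addresses are placeholders):
```hcl
resource "aws_network_interface" "multi_ip" {
  subnet_id   = "subnet-12345678"
  private_ips = ["10.0.0.10", "10.0.0.11"]
}

resource "aws_eip" "secondary" {
  vpc = true
}

resource "aws_eip_association" "secondary" {
  network_interface_id = "${aws_network_interface.multi_ip.id}"
  allocation_id        = "${aws_eip.secondary.id}"

  # Without this, the EIP would always attach to the ENI's first private IP.
  private_ip_address = "10.0.0.11"
}
```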
GitHub really doesn't like when you make the H lowercase, it violates
their brand guidelines and they won't help promote anything that doesn't
use the capital H.
* update docs on required parameter for api_gateway_integration
This parameter was required for lambda integration.
Otherwise, you get:
`Error creating API Gateway Integration: BadRequestException: Enumeration value for HttpMethod must be non-empty`
* documentation: Including the AWS type on the api_gateway_integration docs
Documentation for `aws_cloudwatch_event_target` to warn that in order for
your AWS Lambda function or SNS topic to be invoked by a CloudWatch
Events rule, you must set up the right permissions
using `aws_lambda_permission` or `aws_sns_topic.policy` (see the sketch below).
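The sketch referenced above, for the Lambda case (the function and rule names are hypothetical references):
```hcl
# Assumes aws_lambda_function.example and aws_cloudwatch_event_rule.example
# are defined elsewhere in the configuration.
resource "aws_lambda_permission" "allow_cloudwatch" {
  statement_id  = "AllowExecutionFromCloudWatchEvents"
  action        = "lambda:InvokeFunction"
  function_name = "${aws_lambda_function.example.function_name}"
  principal     = "events.amazonaws.com"
  source_arn    = "${aws_cloudwatch_event_rule.example.arn}"
}
```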