* Added new evaluation_delay field
Added the new `evaluation_delay` parameter and passed it through to the Datadog monitor API
* Changed tests for the new evaluation_delay field
* Updated documentation
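For context, a rough sketch of how the field would be set on a monitor (the monitor arguments below are illustrative, not taken from this change):
```
resource "datadog_monitor" "cpu" {
  name    = "High CPU"
  type    = "metric alert"
  message = "CPU usage is high"
  query   = "avg(last_5m):avg:system.cpu.user{*} > 80"

  # Delay evaluation (in seconds) to account for metrics pipeline lag.
  evaluation_delay = 300
}
```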
* added vmss with managed disk support
* Update vmss docs
* update vmss test
* added vmss managed disk import test
* update vmss tests
* remove unused test resources
* reverting breaking changes on storage_os_disk and storage_image_reference
* updated vmss tests and documentation
* updated vmss flatten osdisk
* updated vmss resource and import test
* update name in vmss osdisk
* update vmss test to include a blank name
* Add resource
* Add tests
* Add documentation
* Fix invalid comment
* Remove MinItems
* Add newline
* Store expected ID and format
* Add import note
* expiration_time can be computed if the dataset has an expiration_time set
* Handle 404 using new check function
Added support for provisioning a native Redis cluster ElastiCache replication group.
A new TypeSet attribute `cluster_mode` has been added. It requires the following
fields:
- `replicas_per_node_group` - The number of replica nodes in each node group
- `num_node_groups` - The number of node groups for this Redis replication group
Notes:
- `automatic_failover_enabled` must be set to true.
- `number_cache_clusters` is now an optional and computed field. If `cluster_mode` is set,
its value will be computed as:
```num_node_groups + num_node_groups * replicas_per_node_group```
Below is a sample config:
resource "aws_elasticache_replication_group" "bar" {
replication_group_id = "tf-redis-cluser"
replication_group_description = "test description"
node_type = "cache.t2.micro"
port = 6379
parameter_group_name = "default.redis3.2.cluster.on"
automatic_failover_enabled = true
cluster_mode {
replicas_per_node_group = 1
num_node_groups = 2
}
}
Add a data source for listing available versions for Container Engine
clusters or retrieving the latest available version.
This is mostly to support our tests for specifying a version for cluster
creation; the withVersion test has been updated to use the data source,
meaning it will stop failing on us as new versions get released.
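A minimal sketch of how the data source can drive cluster creation (argument and attribute names are assumed from the description, so double-check against the docs):
```
data "google_container_engine_versions" "central1a" {
  zone = "us-central1-a"
}

resource "google_container_cluster" "example" {
  name               = "example"
  zone               = "us-central1-a"
  initial_node_count = 1

  # Track whatever node version the zone currently offers as latest.
  node_version = "${data.google_container_engine_versions.central1a.latest_node_version}"
}
```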
Fixes: #14217
We now check to make sure that we have the correct number of parts when
we have split the resource ID. It isn't an elegant fix, but it works as
expected. Also added some more documentation about what is required to
actually construct the ID needed for import.
* provider/aws: Add support for aws_ssm_maintenance_window
Fixes: #14027
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSSSMMaintenanceWindow_basic'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/04/29 13:38:19 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSSSMMaintenanceWindow_basic -timeout 120m
=== RUN TestAccAWSSSMMaintenanceWindow_basic
--- PASS: TestAccAWSSSMMaintenanceWindow_basic (51.69s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 51.711s
```
* provider/aws: Add documentation for aws_ssm_maintenance_window
* provider/aws: Add support for aws_ssm_maintenance_window_target
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSSSMMaintenanceWindowTarget'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/04/29 16:38:22 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSSSMMaintenanceWindowTarget -timeout 120m
=== RUN TestAccAWSSSMMaintenanceWindowTarget_basic
--- PASS: TestAccAWSSSMMaintenanceWindowTarget_basic (34.68s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 34.701s
```
* provider/aws: Adding the documentation for aws_ssm_maintenance_window_target
* provider/aws: Add support for aws_ssm_maintenance_window_task
* provider/aws: Documentation for aws_ssm_maintenance_window_task
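A rough sketch of how the three new resources fit together (argument names are assumptions based on the SSM API, and the IAM role is assumed to be defined elsewhere):
```
resource "aws_ssm_maintenance_window" "patching" {
  name     = "patching-window"
  schedule = "cron(0 16 ? * TUE *)"
  duration = 3
  cutoff   = 1
}

resource "aws_ssm_maintenance_window_target" "staging" {
  window_id     = "${aws_ssm_maintenance_window.patching.id}"
  resource_type = "INSTANCE"

  targets {
    key    = "tag:Environment"
    values = ["staging"]
  }
}

resource "aws_ssm_maintenance_window_task" "patch" {
  window_id        = "${aws_ssm_maintenance_window.patching.id}"
  task_type        = "RUN_COMMAND"
  task_arn         = "AWS-RunPatchBaseline"
  priority         = 1
  service_role_arn = "${aws_iam_role.ssm_maintenance.arn}" # assumed to exist elsewhere
  max_concurrency  = "2"
  max_errors       = "1"

  targets {
    key    = "WindowTargetIds"
    values = ["${aws_ssm_maintenance_window_target.staging.id}"]
  }
}
```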
This is a fix for PR #11040. The code here lowercases the name and/or prefix before sending it to the AWS API and the Terraform state. This means the state will match the actual resource name, allowing the diff to converge.
- config.clientCompute.Routers
- peer fields renamed
- more consistent logging
- better handling of SetId for error handling
- function for router locks
- test configs as functions
- simplify exists logic
- use getProject, getRegion logic on acceptance tests
- CheckDestroy for peers and interfaces
- dynamic router name for tunnel test
- extra fields for BgpPeer
- resource documentation
Here we add a new resource type `gitlab_project_hook`. It allows for
management of custom hooks for a GitLab project.
This is a relatively simple resource, as a project hook is just an
association between a project and a URL to hit when one of the flagged
events occurs on that project.
Hooks (called Webhooks in some user documentation, but simply Hooks
in the API documentation) are covered for users at
https://docs.gitlab.com/ce/user/project/integrations/webhooks.html and
in the API documentation at
https://docs.gitlab.com/ce/api/projects.html#hooks
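A minimal configuration sketch (the event flags shown are assumptions based on the GitLab hooks API):
```
resource "gitlab_project" "example" {
  name = "example"
}

resource "gitlab_project_hook" "ci" {
  project = "${gitlab_project.example.id}"
  url     = "https://ci.example.com/gitlab-hook"

  # Flag which events on the project should trigger the hook.
  push_events           = true
  merge_requests_events = true
}
```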
The current google_compute_url_map resource already supports backend
buckets out of the box: just pass the self_link of a backend bucket
as you would pass the self_link of a backend service.
This adds some example code as well.
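For illustration, referencing a backend bucket where a backend service would normally go (the storage bucket reference is a placeholder):
```
resource "google_compute_backend_bucket" "assets" {
  name        = "static-assets"
  bucket_name = "${google_storage_bucket.assets.name}" # assumed to exist elsewhere
  enable_cdn  = true
}

resource "google_compute_url_map" "example" {
  name = "example-map"

  # A backend bucket's self_link works wherever a backend service's would.
  default_service = "${google_compute_backend_bucket.assets.self_link}"
}
```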
When trying to run the example code with the most recent Terraform
master, you get the following errors:
```
2 error(s) occurred:
* google_compute_backend_service.home: "region": [REMOVED] region has been removed as it was never used
* google_compute_backend_service.login: "region": [REMOVED] region has been removed as it was never used
```
Hence remove it from the documentation.
There's a pre-existing Terraform presence on both Server Fault and
Stack Overflow that seems to be generating good questions and answers, so
here we'll try to send users to the right sites to ask questions and
encourage them to be good participants in those communities.
* Document the data source interpolation usage
Also briefly mentions how to use counts and splat syntax, as with resources, to further document the usage of counts for data sources (see https://github.com/hashicorp/terraform/pull/14143).
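For example, a sketch of counting over a data source and collecting an attribute with splat syntax (the data source and attribute are illustrative):
```
variable "subnet_ids" {
  type = "list"
}

data "aws_subnet" "selected" {
  count = "${length(var.subnet_ids)}"
  id    = "${element(var.subnet_ids, count.index)}"
}

output "subnet_cidrs" {
  # Splat syntax gathers the attribute across all counted instances.
  value = "${data.aws_subnet.selected.*.cidr_block}"
}
```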
* Output -> Attribute
As per feedback.
* added emr security configurations
* gofmt after rebase
* provider/aws: Update EMR Cluster to support Security Configuration
* update test to create key
* update docs
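A sketch of the new resource (the JSON body and the KMS key reference are illustrative):
```
resource "aws_emr_security_configuration" "encryption" {
  name = "emr-sec-config"

  configuration = <<EOF
{
  "EncryptionConfiguration": {
    "EnableInTransitEncryption": false,
    "EnableAtRestEncryption": true,
    "AtRestEncryptionConfiguration": {
      "S3EncryptionConfiguration": {
        "EncryptionMode": "SSE-S3"
      },
      "LocalDiskEncryptionConfiguration": {
        "EncryptionKeyProviderType": "AwsKms",
        "AwsKmsKey": "${aws_kms_key.emr.arn}"
      }
    }
  }
}
EOF
}
```
The cluster then references it by name via the `security_configuration` argument on `aws_emr_cluster`.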
We've supported GOOGLE_APPLICATION_CREDENTIALS as an environment
variable (it comes free with our OAuth2 client) but it has never been
documented. Documenting it now to resolve #12626.
This commit adds an option to skip TLS verification of the Triton
endpoint, which can be useful for private or temporary installations not
using a certificate signed by a trusted root CA.
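Assuming the option is exposed as a provider argument named `insecure_skip_tls_verify` (the name is a guess based on the description), usage would look roughly like:
```
provider "triton" {
  account = "my-account"
  url     = "https://triton.example.com"

  # Skip TLS verification for endpoints with self-signed certificates.
  insecure_skip_tls_verify = true
}
```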
Fixes #13722.
With the FQDN specified, it throws an error:
```
1 error(s) occurred:
* azurerm_container_service.test: "agent_pool_profile.0.fqdn": this field cannot be set
```
Update our docs for `google_compute_forwarding_rule` to clarify that the
`ip_address` field expects a literal IP address and will not accept the
`self_link` property of a `google_compute_address` resource.
Prompted by #13375
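In other words, something like this (the target pool reference is a placeholder):
```
resource "google_compute_address" "frontend" {
  name = "frontend-ip"
}

resource "google_compute_forwarding_rule" "frontend" {
  name       = "frontend-rule"
  target     = "${google_compute_target_pool.frontend.self_link}" # assumed to exist elsewhere
  port_range = "80"

  # Pass the literal address, not the resource's self_link.
  ip_address = "${google_compute_address.frontend.address}"
}
```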
* Make dnsimple_records importable
Terraform 0.7 supports importing a resource into the local state, and
this adds that feature to the dnsimple_record resource.
Unfortunately, the DNSimple v1 API requires a domain name and record ID
to fetch a record, so the import command accepts both pieces of data as
a slash-delimited string like so:
terraform import dnsimple_record.test example.com/1234
* add an acceptance test for importing a dnsimple_record
Fixes: #13173
We now tag at instance creation and introduced `volume_tags`, which can be
set so that all volumes created at instance creation will receive those
tags.
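A sketch of the new argument (the AMI and instance type are placeholders):
```
resource "aws_instance" "web" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"

  tags {
    Name = "web"
  }

  # Applied to every volume created with the instance (root and extra block devices).
  volume_tags {
    Name = "web"
  }
}
```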
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSInstance_volumeTags'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/04/26 06:30:48 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSInstance_volumeTags -timeout 120m
=== RUN TestAccAWSInstance_volumeTags
--- PASS: TestAccAWSInstance_volumeTags (214.31s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 214.332s
```
As a follow up to #13844, this pull request sorts the AMIs and snapshots returned from the aws_ami_ids and aws_ebs_snapshot_ids data sources, respectively.
When Terraform is used to configure and deploy infrastructure
applications that require dozens of templated files, such as Kubernetes, it
becomes extremely burdensome to template them individually: each of them
requires a data source block as well as an upload/export (file
provisioner, AWS S3, ...).
Instead, this commit introduces a means to template an entire folder of
files (recursively), which can then be treated as a whole by any provider
or provisioner that supports directory inputs (such as the
file provisioner, the archive provider, ...), as sketched below.
This does not intend to make Terraform a full-fledged templating system,
as the templating grammar and capabilities are left unchanged. It only
aims at improving the user experience of the existing template
provider by significantly reducing the overhead when several files are
to be generated, without forcing users to rely on external tools
when these templates stay simple and their generation in Terraform
is justified.
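To illustrate the intent only — the `template_folder` name and its arguments below are hypothetical placeholders, not the interface actually introduced here:
```
# Hypothetical sketch: render every file under a directory in one block.
data "template_folder" "manifests" {
  source_dir      = "${path.module}/manifests"
  destination_dir = "${path.cwd}/rendered"

  vars {
    cluster_name = "example"
  }
}
```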
This is the minimal amount of work needed to be able to create a list of a subset of subnet IDs in a VPC, allowing people to loop through them easily when creating EC2 instances or to provide the list straight to an ELB.
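A sketch of the intended usage (the `tags` filter and the `ids` attribute are assumptions, so check the data source docs):
```
data "aws_subnet_ids" "private" {
  vpc_id = "${var.vpc_id}"

  tags {
    Tier = "private"
  }
}

resource "aws_instance" "app" {
  count         = 3
  ami           = "ami-12345678"
  instance_type = "t2.micro"

  # Spread instances across the matching subnets.
  subnet_id = "${element(data.aws_subnet_ids.private.ids, count.index)}"
}
```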
Many apps deployed to Heroku require that multiple buildpacks be
configured in a particular order to operate correctly.
This updates the builtin Heroku provider's app resource to support
configuring buildpacks and the related documentation in the website.
Similar to config vars, externally set buildpacks will not be altered if
the config is not set.
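A sketch of ordered buildpacks on an app (the buildpack identifiers are examples):
```
resource "heroku_app" "web" {
  name   = "example-app"
  region = "us"

  # Order matters: buildpacks run in the sequence listed here.
  buildpacks = [
    "heroku/nodejs",
    "heroku/ruby",
  ]
}
```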
Here we add a basic provider with a single resource type.
It's copied heavily from the `github` provider and `github_repository`
resource, as there is some overlap in those types/apis.
~~~
resource "gitlab_project" "test1" {
name = "test1"
visibility_level = "public"
}
~~~
We implement in terms of the
[go-gitlab](https://github.com/xanzy/go-gitlab) library, which provides
a wrapper around the [GitLab API](https://docs.gitlab.com/ee/api/).
We have been a little selective in the properties we surface for the
project resource, as not all properties are very instructive.
Notable is the omission of the `public` bool, as `visibility_level`
takes precedence if both are supplied, which leads to confusing
interactions if they disagree.
This new function allows using a search within one list to filter another list. For example, it can be used to find the IDs of EC2 instances in a particular AZ.
The interface is made slightly awkward by the constraints of HIL's featureset.
#13847
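For illustration only — `matchkeys` is assumed here as the function name and signature, so verify against the interpolation docs:
```
variable "instance_ids" {
  type = "list"
}

variable "instance_azs" {
  type = "list"
}

output "us_east_1a_instance_ids" {
  # Keep the IDs whose corresponding AZ entry is in the search set.
  value = "${matchkeys(var.instance_ids, var.instance_azs, list("us-east-1a"))}"
}
```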
Fix issue with an instance's label causing a ForceNew if omitted.
Also updates mistyped docs for the `opc_compute_security_list` resource.
```
$ make testacc TEST=./builtin/providers/opc TESTARGS="-run=TestAccOPCInstance_emptyLabel"
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/04/21 09:57:48 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/opc -v -run=TestAccOPCInstance_emptyLabel -timeout 120m
=== RUN TestAccOPCInstance_emptyLabel
--- PASS: TestAccOPCInstance_emptyLabel (574.79s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/opc 574.835s
```
Included in this fix:
1) No crash
2) Debug log indicates the problem; otherwise, unsupported outputs are ignored
3) String, bool and int outputs are supported
4) Documentation indicates these limitations
What is not included:
5) Array, object, securestring, secureobject still not supported
Previously we fixed this specifically for the Enterprise VCS integration,
but we also had some long-standing errors of this sort in the docs for
how to specify module sources on Bitbucket.
Wait for instance to be in STOPPED or RUNNING state before invoking
AllocatePublicIP API.
* provider/alicloud: Wait for instance state before allocate public ip
* provider/alicloud: Fix test `TestAccAlicloudInstance_associatePublicIP`
* provider/alicloud: Update alicloud_instance document
Fixes: #13267
* Ensuring we base64 decode the custom data if it's base64 encoded
* Import support for VM Scale Sets
* Updating the docs to mention Import support
* Fixes #13009, where the SSH Keys would be set at the incorrect index
(leaving a null entry at the start, causing a crash on the second apply)
* Adding tests to cover the updating use-case
* Adding an import linux test
* Storing the base64 encoded value
Making custom_data a ForceNew, since it can't be updated
* Updating the docs
* Add an option to skip getting the EC2 platforms
Even though this call fails silently in case of an error (usually lack of rights), it's still a pretty expensive call.
In our region (eu-west-1) this can take up to 3 seconds. And since we have a system that involves doing a lot of planning with the option `-refresh=false`, these additional 3 seconds are really annoying and not needed at all.
So being able to choose to skip them would make our lives a little better 😉
* Update the docs accordingly
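Assuming the new option is exposed as a provider argument (the name below is based on the description — verify against the provider docs):
```
provider "aws" {
  region = "eu-west-1"

  # Skip the extra EC2 platforms lookup during provider initialisation.
  skip_get_ec2_platforms = true
}
```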
Currently, CloudWatch log subscriptions support Lambda as a destination, and we can use the `aws_cloudwatch_log_subscription_filter` resource to create subscriptions with Lambda as a destination, but it needs some additional setup. I described it in the description; feel free to improve the wording if you can say the same thing better.
This change will help readers better understand what can be done with this resource.
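A sketch of the pairing this refers to: the subscription filter plus the Lambda permission that lets CloudWatch Logs invoke the function (the function and log group references are placeholders):
```
resource "aws_lambda_permission" "allow_cloudwatch_logs" {
  statement_id  = "AllowExecutionFromCloudWatchLogs"
  action        = "lambda:InvokeFunction"
  function_name = "${aws_lambda_function.processor.function_name}" # assumed to exist elsewhere
  principal     = "logs.eu-west-1.amazonaws.com"
  source_arn    = "${aws_cloudwatch_log_group.app.arn}"
}

resource "aws_cloudwatch_log_subscription_filter" "to_lambda" {
  name            = "lambda-subscription"
  log_group_name  = "${aws_cloudwatch_log_group.app.name}"
  filter_pattern  = "ERROR"
  destination_arn = "${aws_lambda_function.processor.arn}"

  depends_on = ["aws_lambda_permission.allow_cloudwatch_logs"]
}
```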
This commit adds the ability to provision files locally.
This is useful for cases where Terraform generates assets
such as TLS certificates or templated documents that need
to be saved locally.
- While output variables can be used to return values to
the user, they are not well suited for large content or
when many of these are generated, nor is it practical for
operators to manually save them on disk.
- While `local-exec` could be used with an `echo`, this
provider works across platforms and does not require any
convoluted escaping.
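A sketch of the use case, assuming the new resource is exposed along the lines of `local_file` with `content` and `filename` arguments:
```
resource "tls_private_key" "example" {
  algorithm = "RSA"
}

# Save the generated key next to the configuration.
resource "local_file" "private_key" {
  content  = "${tls_private_key.example.private_key_pem}"
  filename = "${path.module}/example.pem"
}
```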
* first version of this datasource
* add network and subnetwork datasource and documentation
* modify sidebar reference in documentation
* fix elements after review on network and subnetwork datasources
* fix fmt on Google provider.go
* modify code with the review
* modify documentation layout order
* fix alphabetic order in provider.go
* fix rebase issue and documentation datasource => data
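A minimal sketch of the two data sources described above (attribute names are assumed):
```
data "google_compute_network" "default" {
  name = "default"
}

data "google_compute_subnetwork" "default" {
  name   = "default"
  region = "us-central1"
}

output "subnetwork_self_link" {
  value = "${data.google_compute_subnetwork.default.self_link}"
}
```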