Merge branch 'master' of github.com:UKCloud/terraform
This commit is contained in: commit ac105b5b18

@@ -1,4 +0,0 @@
# Set the default behavior, in case people don't have core.autocrlf set.
* text=auto

*.go eol=lf
@@ -37,6 +37,7 @@ The current list of HashiCorp Providers is as follows:
* `aws`
* `azurerm`
* `google`
* `opc`

Our testing standards are the same for both HashiCorp and Community providers,
and HashiCorp runs full acceptance test suites for every provider nightly to
@@ -201,6 +202,9 @@ Implementing a new resource is a good way to learn more about how Terraform
interacts with upstream APIs. There are plenty of examples to draw from in the
existing resources, but you still get to implement something completely new.

- [ ] __Minimal LOC__: It can be inefficient for both the reviewer
and author to go through long feedback cycles on a big PR with many
resources. We therefore encourage you to only submit **1 resource at a time**.
- [ ] __Acceptance tests__: New resources should include acceptance tests
covering their behavior. See [Writing Acceptance
Tests](#writing-acceptance-tests) below for a detailed guide on how to
@@ -223,6 +227,11 @@ Implementing a new provider gives Terraform the ability to manage resources in
a whole new API. It's a larger undertaking, but brings major new functionality
into Terraform.

- [ ] __Minimal initial LOC__: Some providers may be big, and it can be
inefficient for both reviewer and author to go through long feedback cycles
on a big PR with many resources. We encourage you to submit only
the necessary minimum in a single PR, ideally **just the first resource**
of the provider.
- [ ] __Acceptance tests__: Each provider should include an acceptance test
suite with tests for each resource covering its behavior. See [Writing
Acceptance Tests](#writing-acceptance-tests) below
@@ -2,7 +2,7 @@ dist: trusty
sudo: false
language: go
go:
- 1.8
- 1.8.1

# add TF_CONSUL_TEST=1 to run consul tests
# they were causing timeouts in travis
@@ -39,4 +39,4 @@ notifications:
matrix:
  fast_finish: true
  allow_failures:
  - go: tip
  - go: tip
409 CHANGELOG.md
@@ -1,13 +1,402 @@
## 0.9.3 (unreleased)
## 0.9.6 (Unreleased)

BACKWARDS INCOMPATIBILITIES / NOTES:

* provider/google: Users of `google_compute_health_check` who were not setting a value for the `host` property of `http_health_check` or `https_health_check` previously had a faulty default value. This has been fixed and will show as a change in terraform plan/apply. [GH-14441]

FEATURES:

* **New Provider:** `OVH` [GH-12669]
* **New Resource:** `aws_default_subnet` [GH-14476]
* **New Resource:** `aws_default_vpc_dhcp_options` [GH-14475]
* **New Resource:** `aws_devicefarm_project` [GH-14288]
* **New Resource:** `aws_wafregional_ipset` [GH-13705]
* **New Resource:** `aws_wafregional_byte_match_set` [GH-13705]
* **New Resource:** `azurerm_express_route_circuit` [GH-14265]
* **New Data Source:** `aws_db_snapshot` [GH-10291]

IMPROVEMENTS:

* provider/aws: Add support to set iam_role_arn on cloudformation Stack [GH-12547]

* core/provider-split: Split out the Oracle OPC provider to new structure [GH-14362]
* provider/aws: Show state reason when EC2 instance fails to launch [GH-14479]
* provider/aws: Show last scaling activity when ASG creation/update fails [GH-14480]
* provider/aws: Add `tags` (list of maps) for `aws_autoscaling_group` [GH-13574]
* provider/azurerm: Virtual Machine Scale Sets with managed disk support [GH-13717]
* provider/azurerm: Virtual Machine Scale Sets with single placement option support [GH-14510]
* provider/datadog: Add last aggregator to datadog_timeboard resource [GH-14391]
* provider/datadog: Added new evaluation_delay parameter [GH-14433]
* provider/docker: Allow Windows Docker containers to map volumes [GH-13584]
* provider/google: Add a `url` attribute to `google_storage_bucket` [GH-14393]
* provider/google: Make google resource storage bucket importable [GH-14455]
* provider/heroku: Add import ability for `heroku_pipeline` resource [GH-14486]
* provider/openstack: Add support for all protocols in Security Group Rules [GH-14307]
* provider/rundeck: Add `description` to `command` schema in `rundeck_job` resource [GH-14352]
* provider/scaleway: Allow public_ip to be set on server resource [GH-14515]

BUG FIXES:

* core: When using `-target`, any outputs that include attributes of the targeted resources are now updated [GH-14186]
* core: Fixed 0.9.5 regression with the conditional operator `.. ? .. : ..` failing to type check with unknown/computed values [GH-14454]
* core: Fixed 0.9 regression causing issues during refresh when adding new data resource instances using `count` [GH-14098]
* core: Fixed crasher when populating a "splat variable" from an empty (nil) module state [GH-14526]
* provider/aws: Increase EIP update timeout [GH-14381]
* provider/aws: Increase timeout for creating security group [GH-14380]
* provider/aws: Increase timeout for (dis)associating IPv6 addr to subnet [GH-14401]
* provider/aws: Use the new time schema helper for RDS Instance lifecycle mgmt [GH-14369]
* provider/aws: Use the timeout schema helper to make alb timeout configurable [GH-14375]
* provider/aws: Refresh from state when CodePipeline not found [GH-14431]
* provider/aws: Override spot_instance_requests volume_tags schema [GH-14481]
* provider/aws: Allow Internet Gateway IPv6 routes [GH-14484]
* provider/aws: ForceNew aws_launch_config when root_block_device changes [GH-14507]
* provider/aws: Pass IAM Roles to codepipeline actions [GH-14263]
* provider/aws: Create rule(s) for prefix-list-only AWS security group permissions on `terraform import` [GH-14528]
* provider/aws: Set aws_subnet ipv6_cidr_block to computed [GH-14542]
* provider/cloudstack: `cloudstack_firewall` panicked when used with older (< v4.6) CloudStack versions [GH-14044]
* provider/datadog: Allowed method on aggregator is `avg`, not `average` [GH-14414]
* provider/digitalocean: Fix parsing of digitalocean dns records [GH-14215]
* provider/github: Log HTTP requests and responses in DEBUG mode [GH-14363]
* provider/google: Fix health check http/https defaults [GH-14441]
* provider/heroku: Fix issue with setting correct CName in heroku_domain [GH-14443]
* provider/opc: Correctly export `ip_address` in IP Addr Reservation [GH-14543]
* provider/openstack: Handle deleted resources in Floating IP Association [GH-14533]
* provider/vault: Prevent panic when no secret found [GH-14435]

## 0.9.5 (May 11, 2017)

BACKWARDS INCOMPATIBILITIES / NOTES:

* provider/aws: Users of aws_cloudfront_distributions with custom_origins have been broken due to changes in the AWS API requiring `OriginReadTimeout` being set for updates. This has been fixed and will show as a change in terraform plan / apply. ([#13367](https://github.com/hashicorp/terraform/issues/13367))
* provider/aws: Users of China and Gov clouds cannot use the new tagging of volumes created as part of aws_instances ([#14055](https://github.com/hashicorp/terraform/issues/14055))
* provider/aws: Skip tag operations on cloudwatch logs in govcloud partition. Currently not supported by Amazon. ([#12414](https://github.com/hashicorp/terraform/issues/12414))
* provider/aws: More consistent (un)quoting of long TXT/SPF `aws_route53_record`s.
  Previously we were trimming the first 2 quotes and now we're (correctly) trimming the first and last one.
  Depending on the use of quotes in your TXT/SPF records this may result in an extra diff in plan/apply ([#14170](https://github.com/hashicorp/terraform/issues/14170))

FEATURES:

* **New Provider:** `gitlab` ([#13898](https://github.com/hashicorp/terraform/issues/13898))
* **New Resource:** `aws_emr_security_configuration` ([#14080](https://github.com/hashicorp/terraform/issues/14080))
* **New Resource:** `aws_ssm_maintenance_window` ([#14087](https://github.com/hashicorp/terraform/issues/14087))
* **New Resource:** `aws_ssm_maintenance_window_target` ([#14087](https://github.com/hashicorp/terraform/issues/14087))
* **New Resource:** `aws_ssm_maintenance_window_task` ([#14087](https://github.com/hashicorp/terraform/issues/14087))
* **New Resource:** `azurerm_sql_elasticpool` ([#14099](https://github.com/hashicorp/terraform/issues/14099))
* **New Resource:** `google_bigquery_table` ([#13743](https://github.com/hashicorp/terraform/issues/13743))
* **New Resource:** `google_compute_backend_bucket` ([#14015](https://github.com/hashicorp/terraform/issues/14015))
* **New Resource:** `google_compute_snapshot` ([#12482](https://github.com/hashicorp/terraform/issues/12482))
* **New Resource:** `heroku_app_feature` ([#14035](https://github.com/hashicorp/terraform/issues/14035))
* **New Resource:** `heroku_pipeline` ([#14078](https://github.com/hashicorp/terraform/issues/14078))
* **New Resource:** `heroku_pipeline_coupling` ([#14078](https://github.com/hashicorp/terraform/issues/14078))
* **New Resource:** `kubernetes_limit_range` ([#14285](https://github.com/hashicorp/terraform/issues/14285))
* **New Resource:** `kubernetes_resource_quota` ([#13914](https://github.com/hashicorp/terraform/issues/13914))
* **New Resource:** `vault_auth_backend` ([#10988](https://github.com/hashicorp/terraform/issues/10988))
* **New Data Source:** `aws_efs_file_system` ([#14041](https://github.com/hashicorp/terraform/issues/14041))
* **New Data Source:** `http`, for retrieving text data from generic HTTP servers ([#14270](https://github.com/hashicorp/terraform/issues/14270))
* **New Data Source:** `google_container_engine_versions`, for retrieving valid versions for clusters ([#14280](https://github.com/hashicorp/terraform/issues/14280))
* **New Interpolation Function:** `log`, for computing logarithms ([#12872](https://github.com/hashicorp/terraform/issues/12872))

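As a rough sketch of what the new `log` interpolation function computes (the Python name below is illustrative, not Terraform's implementation):

```python
import math

# Sketch of the log(x, base) interpolation function's semantics:
# it returns the logarithm of x in the given base.
def interpolation_log(x, base):
    return math.log(x, base)

print(interpolation_log(256, 2))  # prints 8.0
```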
IMPROVEMENTS:

* core: `sha512` and `base64sha512` interpolation functions, similar to their `sha256` equivalents. ([#14100](https://github.com/hashicorp/terraform/issues/14100))
* core: It's now possible to use the index operator `[ ]` to select a known value out of a partially-known list, such as when using "splat syntax" and increasing the `count`. ([#14135](https://github.com/hashicorp/terraform/issues/14135))
* provider/aws: Add support for CustomOrigin timeouts to aws_cloudfront_distribution ([#13367](https://github.com/hashicorp/terraform/issues/13367))
* provider/aws: Add support for IAMDatabaseAuthenticationEnabled ([#14092](https://github.com/hashicorp/terraform/issues/14092))
* provider/aws: Add support for TimeToLive to aws_dynamodb_table ([#14104](https://github.com/hashicorp/terraform/issues/14104))
* provider/aws: Add `security_configuration` support to `aws_emr_cluster` ([#14133](https://github.com/hashicorp/terraform/issues/14133))
* provider/aws: Add support for the tenancy placement option in `aws_spot_fleet_request` ([#14163](https://github.com/hashicorp/terraform/issues/14163))
* provider/aws: `aws_db_option_group` normalizes name to lowercase ([#14192](https://github.com/hashicorp/terraform/issues/14192), [#14366](https://github.com/hashicorp/terraform/issues/14366))
* provider/aws: Add `description` support to aws_iam_role ([#14208](https://github.com/hashicorp/terraform/issues/14208))
* provider/aws: Add support for SSM Documents to aws_cloudwatch_event_target ([#14067](https://github.com/hashicorp/terraform/issues/14067))
* provider/aws: Add additional custom service endpoint options for CloudFormation, KMS, RDS, SNS & SQS ([#14097](https://github.com/hashicorp/terraform/issues/14097))
* provider/aws: Add ARN to security group data source ([#14245](https://github.com/hashicorp/terraform/issues/14245))
* provider/aws: Improve the wording of the DynamoDB Validation error message ([#14256](https://github.com/hashicorp/terraform/issues/14256))
* provider/aws: Add support for importing Kinesis Streams ([#14278](https://github.com/hashicorp/terraform/issues/14278))
* provider/aws: Add `arn` attribute to `aws_ses_domain_identity` resource ([#14306](https://github.com/hashicorp/terraform/issues/14306))
* provider/aws: Add support for targets to aws_ssm_association ([#14246](https://github.com/hashicorp/terraform/issues/14246))
* provider/aws: Native redis clustering support for elasticache ([#14317](https://github.com/hashicorp/terraform/issues/14317))
* provider/aws: Support updating `aws_waf_rule` predicates ([#14089](https://github.com/hashicorp/terraform/issues/14089))
* provider/azurerm: `azurerm_template_deployment` now supports String/Int/Boolean outputs ([#13670](https://github.com/hashicorp/terraform/issues/13670))
* provider/azurerm: Expose the Private IP Address for a Load Balancer, if available ([#13965](https://github.com/hashicorp/terraform/issues/13965))
* provider/dns: Fix data dns txt record set ([#14271](https://github.com/hashicorp/terraform/issues/14271))
* provider/dnsimple: Add support for import for dnsimple_records ([#9130](https://github.com/hashicorp/terraform/issues/9130))
* provider/dyn: Add verbose Dyn provider logs ([#14076](https://github.com/hashicorp/terraform/issues/14076))
* provider/google: Add support for networkIP in compute instance templates ([#13515](https://github.com/hashicorp/terraform/issues/13515))
* provider/google: google_dns_managed_zone is now importable ([#13824](https://github.com/hashicorp/terraform/issues/13824))
* provider/google: Add support for `compute_route` ([#14065](https://github.com/hashicorp/terraform/issues/14065))
* provider/google: Add `path` to `google_pubsub_subscription` ([#14238](https://github.com/hashicorp/terraform/issues/14238))
* provider/google: Improve Service Account by offering to recreate if missing ([#14282](https://github.com/hashicorp/terraform/issues/14282))
* provider/google: Log HTTP requests and responses in DEBUG mode ([#14281](https://github.com/hashicorp/terraform/issues/14281))
* provider/google: Add additional properties for google resource storage bucket object ([#14259](https://github.com/hashicorp/terraform/issues/14259))
* provider/google: Handle all 404 checks in read functions via the new function ([#14335](https://github.com/hashicorp/terraform/issues/14335))
* provider/heroku: Import heroku_app resource ([#14248](https://github.com/hashicorp/terraform/issues/14248))
* provider/nomad: Add TLS options ([#13956](https://github.com/hashicorp/terraform/issues/13956))
* provider/triton: Add support for reading provider configuration from `TRITON_*` environment variables in addition to `SDC_*` ([#14000](https://github.com/hashicorp/terraform/issues/14000))
* provider/triton: Add `cloud_config` argument to `triton_machine` resources for Linux containers ([#12840](https://github.com/hashicorp/terraform/issues/12840))
* provider/triton: Add `insecure_skip_tls_verify` ([#14077](https://github.com/hashicorp/terraform/issues/14077))

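A minimal sketch of what the `sha512` and `base64sha512` helpers listed above compute, assuming (as with their `sha256` counterparts) a lowercase hex digest and a base64 encoding of the raw digest respectively:

```python
import base64
import hashlib

def sha512_hex(s):
    # sha512("...") semantics: hex digest of the UTF-8 input
    return hashlib.sha512(s.encode("utf-8")).hexdigest()

def base64sha512(s):
    # base64sha512("...") semantics: base64 of the raw digest, not of the hex string
    return base64.b64encode(hashlib.sha512(s.encode("utf-8")).digest()).decode("ascii")

print(sha512_hex("terraform")[:16])
print(base64sha512("terraform")[:16])
```

The distinction worth noting is that `base64sha512` encodes the raw 64-byte digest, so its output is much shorter than base64-encoding the 128-character hex string would be.
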
BUG FIXES:

* core: `module` blocks without names are now caught in validation, along with various other block types ([#14162](https://github.com/hashicorp/terraform/issues/14162))
* core: Errors and normal log output will no longer get garbled together on Windows ([#14194](https://github.com/hashicorp/terraform/issues/14194))
* core: Avoid crash on empty TypeSet blocks ([#14305](https://github.com/hashicorp/terraform/issues/14305))
* provider/aws: Update aws_ebs_volume when attached ([#14005](https://github.com/hashicorp/terraform/issues/14005))
* provider/aws: Set aws_instance volume_tags to be Computed ([#14007](https://github.com/hashicorp/terraform/issues/14007))
* provider/aws: Fix issue getting partition for federated users ([#13992](https://github.com/hashicorp/terraform/issues/13992))
* provider/aws: aws_spot_instance_request is no longer ForceNew on volume_tags ([#14046](https://github.com/hashicorp/terraform/issues/14046))
* provider/aws: Exclude aws_instance volume tagging for China and Gov Clouds ([#14055](https://github.com/hashicorp/terraform/issues/14055))
* provider/aws: Fix source_dest_check with network_interface ([#14079](https://github.com/hashicorp/terraform/issues/14079))
* provider/aws: Fix the bug where the SNS delivery policy was always recreated ([#14064](https://github.com/hashicorp/terraform/issues/14064))
* provider/aws: Increase timeouts for Route Table retries ([#14345](https://github.com/hashicorp/terraform/issues/14345))
* provider/aws: Prevent crash when importing aws_route53_record ([#14218](https://github.com/hashicorp/terraform/issues/14218))
* provider/aws: More consistent (un)quoting of long TXT/SPF `aws_route53_record`s ([#14170](https://github.com/hashicorp/terraform/issues/14170))
* provider/aws: Retry deletion of AWSConfig Rule on ResourceInUseException ([#14269](https://github.com/hashicorp/terraform/issues/14269))
* provider/aws: Refresh ssm document from state on 404 ([#14279](https://github.com/hashicorp/terraform/issues/14279))
* provider/aws: Allow zero-value ELB and ALB names ([#14304](https://github.com/hashicorp/terraform/issues/14304))
* provider/aws: Update the ignoring of AWS specific tags ([#14321](https://github.com/hashicorp/terraform/issues/14321))
* provider/aws: Fix perpetual diff caused by adding an IPv6 address to an instance ([#14355](https://github.com/hashicorp/terraform/issues/14355))
* provider/aws: Fix SG update on instance with multiple network interfaces ([#14299](https://github.com/hashicorp/terraform/issues/14299))
* provider/azurerm: Fix a bug in `azurerm_network_interface` ([#14365](https://github.com/hashicorp/terraform/issues/14365))
* provider/digitalocean: Prevent diffs when using IDs of images instead of slugs ([#13879](https://github.com/hashicorp/terraform/issues/13879))
* provider/fastly: Change setting conditionals to optional ([#14103](https://github.com/hashicorp/terraform/issues/14103))
* provider/google: Ignore certain project services that can't be enabled directly via the api ([#13730](https://github.com/hashicorp/terraform/issues/13730))
* provider/google: Ability to add more than 25 project services ([#13758](https://github.com/hashicorp/terraform/issues/13758))
* provider/google: Fix compute instance panic with bad disk config ([#14169](https://github.com/hashicorp/terraform/issues/14169))
* provider/google: Handle `google_storage_bucket_object` not being found ([#14203](https://github.com/hashicorp/terraform/issues/14203))
* provider/google: Handle `google_compute_instance_group_manager` not being found ([#14190](https://github.com/hashicorp/terraform/issues/14190))
* provider/google: Better visibility for compute_region_backend_service ([#14301](https://github.com/hashicorp/terraform/issues/14301))
* provider/heroku: Configure buildpacks correctly for both Org Apps and non-org Apps ([#13990](https://github.com/hashicorp/terraform/issues/13990))
* provider/heroku: Fix `heroku_cert` update of ssl cert ([#14240](https://github.com/hashicorp/terraform/issues/14240))
* provider/openstack: Handle disassociating deleted FloatingIPs from a server ([#14210](https://github.com/hashicorp/terraform/issues/14210))
* provider/postgresql: Grant role when creating database ([#11452](https://github.com/hashicorp/terraform/issues/11452))
* provider/triton: Make triton machine deletes synchronous ([#14368](https://github.com/hashicorp/terraform/issues/14368))
* provisioner/remote-exec: Fix panic from remote_exec provisioner ([#14134](https://github.com/hashicorp/terraform/issues/14134))

## 0.9.4 (April 26, 2017)

BACKWARDS INCOMPATIBILITIES / NOTES:

* provider/template: Fix invalid MIME formatting in `template_cloudinit_config`.
  While the change itself is not breaking, the data source may be referenced
  e.g. in `aws_launch_configuration` and similar resources which are immutable,
  and the formatting change will therefore trigger recreation ([#13752](https://github.com/hashicorp/terraform/issues/13752))

FEATURES:

* **New Provider:** `opc` - Oracle Public Cloud ([#13468](https://github.com/hashicorp/terraform/issues/13468))
* **New Provider:** `oneandone` ([#13633](https://github.com/hashicorp/terraform/issues/13633))
* **New Data Source:** `aws_ami_ids` ([#13844](https://github.com/hashicorp/terraform/issues/13844), [#13866](https://github.com/hashicorp/terraform/issues/13866))
* **New Data Source:** `aws_ebs_snapshot_ids` ([#13844](https://github.com/hashicorp/terraform/issues/13844), [#13866](https://github.com/hashicorp/terraform/issues/13866))
* **New Data Source:** `aws_kms_alias` ([#13669](https://github.com/hashicorp/terraform/issues/13669))
* **New Data Source:** `aws_kinesis_stream` ([#13562](https://github.com/hashicorp/terraform/issues/13562))
* **New Data Source:** `digitalocean_image` ([#13787](https://github.com/hashicorp/terraform/issues/13787))
* **New Data Source:** `google_compute_network` ([#12442](https://github.com/hashicorp/terraform/issues/12442))
* **New Data Source:** `google_compute_subnetwork` ([#12442](https://github.com/hashicorp/terraform/issues/12442))
* **New Resource:** `local_file` for creating local files (please see the docs for caveats) ([#12757](https://github.com/hashicorp/terraform/issues/12757))
* **New Resource:** `alicloud_ess_scalinggroup` ([#13731](https://github.com/hashicorp/terraform/issues/13731))
* **New Resource:** `alicloud_ess_scalingconfiguration` ([#13731](https://github.com/hashicorp/terraform/issues/13731))
* **New Resource:** `alicloud_ess_scalingrule` ([#13731](https://github.com/hashicorp/terraform/issues/13731))
* **New Resource:** `alicloud_ess_schedule` ([#13731](https://github.com/hashicorp/terraform/issues/13731))
* **New Resource:** `alicloud_snat_entry` ([#13731](https://github.com/hashicorp/terraform/issues/13731))
* **New Resource:** `alicloud_forward_entry` ([#13731](https://github.com/hashicorp/terraform/issues/13731))
* **New Resource:** `aws_cognito_identity_pool` ([#13783](https://github.com/hashicorp/terraform/issues/13783))
* **New Resource:** `aws_network_interface_attachment` ([#13861](https://github.com/hashicorp/terraform/issues/13861))
* **New Resource:** `github_branch_protection` ([#10476](https://github.com/hashicorp/terraform/issues/10476))
* **New Resource:** `google_bigquery_dataset` ([#13436](https://github.com/hashicorp/terraform/issues/13436))
* **New Resource:** `heroku_space` ([#13921](https://github.com/hashicorp/terraform/issues/13921))
* **New Resource:** `template_dir` for producing a directory from templates ([#13652](https://github.com/hashicorp/terraform/issues/13652))
* **New Interpolation Function:** `coalescelist()` ([#12537](https://github.com/hashicorp/terraform/issues/12537))

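The new `coalescelist()` function picks the first non-empty list among its arguments. A minimal sketch of that behavior (for simplicity this returns an empty list when every argument is empty, which may differ from Terraform's own error handling):

```python
def coalescelist(*lists):
    # Return the first argument that is a non-empty list.
    for candidate in lists:
        if candidate:
            return candidate
    return []

print(coalescelist([], ["10.0.1.0/24"], ["ignored"]))  # prints ['10.0.1.0/24']
```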
IMPROVEMENTS:

* core: Add a `-reconfigure` flag to the `init` command, to configure a backend while ignoring any saved configuration ([#13825](https://github.com/hashicorp/terraform/issues/13825))
* helper/schema: Disallow validation+diff suppression on computed fields ([#13878](https://github.com/hashicorp/terraform/issues/13878))
* config: The interpolation function `cidrhost` now accepts a negative host number to count backwards from the end of the range ([#13765](https://github.com/hashicorp/terraform/issues/13765))
* config: New interpolation function `matchkeys` for using values from one list to filter corresponding values from another list using a matching set. ([#13847](https://github.com/hashicorp/terraform/issues/13847))
* state/remote/swift: Support Openstack request logging ([#13583](https://github.com/hashicorp/terraform/issues/13583))
* provider/aws: Add an option to skip getting the supported EC2 platforms ([#13672](https://github.com/hashicorp/terraform/issues/13672))
* provider/aws: Add `name_prefix` support to `aws_cloudwatch_log_group` ([#13273](https://github.com/hashicorp/terraform/issues/13273))
* provider/aws: Add `bucket_prefix` to `aws_s3_bucket` ([#13274](https://github.com/hashicorp/terraform/issues/13274))
* provider/aws: Add replica_source_db to the aws_db_instance datasource ([#13842](https://github.com/hashicorp/terraform/issues/13842))
* provider/aws: Add IPv6 outputs to aws_subnet datasource ([#13841](https://github.com/hashicorp/terraform/issues/13841))
* provider/aws: Exercise SecondaryPrivateIpAddressCount for network interface ([#10590](https://github.com/hashicorp/terraform/issues/10590))
* provider/aws: Expose execution ARN + invoke URL for APIG deployment ([#13889](https://github.com/hashicorp/terraform/issues/13889))
* provider/aws: Expose invoke ARN from Lambda function (for API Gateway) ([#13890](https://github.com/hashicorp/terraform/issues/13890))
* provider/aws: Add tagging support to the 'aws_lambda_function' resource ([#13873](https://github.com/hashicorp/terraform/issues/13873))
* provider/aws: Validate WAF metric names ([#13885](https://github.com/hashicorp/terraform/issues/13885))
* provider/aws: Allow AWS Subnet to change IPv6 CIDR Block without ForceNew ([#13909](https://github.com/hashicorp/terraform/issues/13909))
* provider/aws: Allow filtering of aws_subnet_ids by tags ([#13937](https://github.com/hashicorp/terraform/issues/13937))
* provider/aws: Support aws_instance and volume tagging on creation ([#13945](https://github.com/hashicorp/terraform/issues/13945))
* provider/aws: Add network_interface to aws_instance ([#12933](https://github.com/hashicorp/terraform/issues/12933))
* provider/azurerm: VM Scale Sets - import support ([#13464](https://github.com/hashicorp/terraform/issues/13464))
* provider/azurerm: Allow Azure China region support ([#13767](https://github.com/hashicorp/terraform/issues/13767))
* provider/digitalocean: Export droplet prices ([#13720](https://github.com/hashicorp/terraform/issues/13720))
* provider/fastly: Add support for GCS logging ([#13553](https://github.com/hashicorp/terraform/issues/13553))
* provider/google: `google_compute_address` and `google_compute_global_address` are now importable ([#13270](https://github.com/hashicorp/terraform/issues/13270))
* provider/google: `google_compute_network` is now importable ([#13834](https://github.com/hashicorp/terraform/issues/13834))
* provider/google: Add attached_disk field to google_compute_instance ([#13443](https://github.com/hashicorp/terraform/issues/13443))
* provider/heroku: Set App buildpacks from config ([#13910](https://github.com/hashicorp/terraform/issues/13910))
* provider/heroku: Create Heroku app in a private space ([#13862](https://github.com/hashicorp/terraform/issues/13862))
* provider/vault: `vault_generic_secret` resource can now optionally detect drift if it has appropriate access ([#11776](https://github.com/hashicorp/terraform/issues/11776))

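Two of the `config` additions above are easy to illustrate: `cidrhost` with a negative host number counts back from the end of the prefix, and `matchkeys` filters one list by matching positions against another. A rough Python sketch of both (assumed semantics, not the actual implementation):

```python
import ipaddress

def cidrhost(prefix, hostnum):
    # A negative hostnum counts backwards from the end of the range,
    # so -1 is the last address in the prefix.
    net = ipaddress.ip_network(prefix)
    if hostnum < 0:
        hostnum += net.num_addresses
    return str(net.network_address + hostnum)

def matchkeys(values, keys, searchset):
    # Keep each value whose corresponding key appears in searchset.
    return [v for v, k in zip(values, keys) if k in searchset]

print(cidrhost("10.0.0.0/24", -1))  # prints 10.0.0.255
print(matchkeys(["i-abc", "i-def"], ["us-west-2a", "us-west-2b"], ["us-west-2a"]))
```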
BUG FIXES:
|
||||
|
||||
* core: Prevent resource.Retry from adding untracked resources after the timeout: ([#13778](https://github.com/hashicorp/terraform/issues/13778))
|
||||
* core: Allow a schema.TypeList to be ForceNew and computed ([#13863](https://github.com/hashicorp/terraform/issues/13863))
|
||||
* core: Fix crash when refresh or apply build an invalid graph ([#13665](https://github.com/hashicorp/terraform/issues/13665))
|
||||
* core: Add the close provider/provisioner transformers back ([#13102](https://github.com/hashicorp/terraform/issues/13102))
|
||||
* core: Fix a crash condition by improving the flatmap.Expand() logic ([#13541](https://github.com/hashicorp/terraform/issues/13541))
|
||||
* provider/alicloud: Fix create PrePaid instance ([#13662](https://github.com/hashicorp/terraform/issues/13662))
|
||||
* provider/alicloud: Fix allocate public ip error ([#13268](https://github.com/hashicorp/terraform/issues/13268))
|
||||
* provider/alicloud: alicloud_security_group_rule: check ptr before use it [[#13731](https://github.com/hashicorp/terraform/issues/13731))
|
||||
* provider/alicloud: alicloud_instance: fix ecs internet_max_bandwidth_out cannot set zero bug ([#13731](https://github.com/hashicorp/terraform/issues/13731))
|
||||
* provider/aws: Allow force-destroying `aws_route53_zone` which has trailing dot ([#12421](https://github.com/hashicorp/terraform/issues/12421))
|
||||
* provider/aws: Allow GovCloud KMS ARNs to pass validation in `kms_key_id` attributes ([#13699](https://github.com/hashicorp/terraform/issues/13699))
|
||||
* provider/aws: Changing aws_opsworks_instance should ForceNew ([#13839](https://github.com/hashicorp/terraform/issues/13839))
* provider/aws: Fix DB Parameter Group Name ([#13279](https://github.com/hashicorp/terraform/issues/13279))
* provider/aws: Fix issue importing some Security Groups and Rules based on rule structure ([#13630](https://github.com/hashicorp/terraform/issues/13630))
* provider/aws: Fix issue for cross account IAM role with `aws_lambda_permission` ([#13865](https://github.com/hashicorp/terraform/issues/13865))
* provider/aws: Fix WAF IPSet descriptors removal on update ([#13766](https://github.com/hashicorp/terraform/issues/13766))
* provider/aws: Increase default number of retries from 11 to 25 ([#13673](https://github.com/hashicorp/terraform/issues/13673))
* provider/aws: Remove aws_vpc_dhcp_options if not found ([#13610](https://github.com/hashicorp/terraform/issues/13610))
* provider/aws: Remove aws_network_acl_rule if not found ([#13608](https://github.com/hashicorp/terraform/issues/13608))
* provider/aws: Use mutex & retry for WAF change operations ([#13656](https://github.com/hashicorp/terraform/issues/13656))
* provider/aws: Adding support for ipv6 to aws_subnets needs migration ([#13876](https://github.com/hashicorp/terraform/issues/13876))
* provider/aws: Fix validation of the `name_prefix` parameter of the `aws_alb` resource ([#13441](https://github.com/hashicorp/terraform/issues/13441))
* provider/azurerm: azurerm_redis_cache resource missing hostname ([#13650](https://github.com/hashicorp/terraform/issues/13650))
* provider/azurerm: Locking around Network Security Group / Subnets ([#13637](https://github.com/hashicorp/terraform/issues/13637))
* provider/azurerm: Locking route table on subnet create/delete ([#13791](https://github.com/hashicorp/terraform/issues/13791))
* provider/azurerm: VMs - fixes a bug where ssh_keys could contain a null entry ([#13755](https://github.com/hashicorp/terraform/issues/13755))
* provider/azurerm: VMs - ignoring the case on the `create_option` field during Diffs ([#13933](https://github.com/hashicorp/terraform/issues/13933))
* provider/azurerm: fixing a bug refreshing the `azurerm_redis_cache` ([#13899](https://github.com/hashicorp/terraform/issues/13899))
* provider/fastly: Fix issue with using 0 for `default_ttl` ([#13648](https://github.com/hashicorp/terraform/issues/13648))
* provider/fastly: Add ability to associate a healthcheck to a backend ([#13539](https://github.com/hashicorp/terraform/issues/13539))
* provider/google: Fix panic in GKE provisioning with addons ([#13954](https://github.com/hashicorp/terraform/issues/13954))
* provider/google: Stop setting the id when project creation fails ([#13644](https://github.com/hashicorp/terraform/issues/13644))
* provider/google: Make ports in resource_compute_forwarding_rule ForceNew ([#13833](https://github.com/hashicorp/terraform/issues/13833))
* provider/google: Validation fixes for forwarding rules ([#13952](https://github.com/hashicorp/terraform/issues/13952))
* provider/ignition: Internal cache moved to global, instead of per provider instance ([#13919](https://github.com/hashicorp/terraform/issues/13919))
* provider/logentries: Refresh from state when resources not found ([#13810](https://github.com/hashicorp/terraform/issues/13810))
* provider/newrelic: newrelic_alert_condition - `condition_scope` must be `application` or `instance` ([#12972](https://github.com/hashicorp/terraform/issues/12972))
* provider/opc: Fixed an issue with unqualifying nats ([#13826](https://github.com/hashicorp/terraform/issues/13826))
* provider/opc: Fix instance label if unset ([#13846](https://github.com/hashicorp/terraform/issues/13846))
* provider/openstack: Fix updating Ports ([#13604](https://github.com/hashicorp/terraform/issues/13604))
* provider/rabbitmq: Allow users without tags ([#13798](https://github.com/hashicorp/terraform/issues/13798))

## 0.9.3 (April 12, 2017)

BACKWARDS INCOMPATIBILITIES / NOTES:

* provider/aws: Fix a critical bug in `aws_emr_cluster` in order to preserve the ordering
  of any arguments in `bootstrap_action`. Terraform will now enforce the ordering
  from the configuration. As a result, `aws_emr_cluster` resources may need to be
  recreated, as there is no API to update them in-place ([#13580](https://github.com/hashicorp/terraform/issues/13580))

FEATURES:

* **New Resource:** `aws_api_gateway_method_settings` ([#13542](https://github.com/hashicorp/terraform/issues/13542))
* **New Resource:** `aws_api_gateway_stage` ([#13540](https://github.com/hashicorp/terraform/issues/13540))
* **New Resource:** `aws_iam_openid_connect_provider` ([#13456](https://github.com/hashicorp/terraform/issues/13456))
* **New Resource:** `aws_lightsail_static_ip` ([#13175](https://github.com/hashicorp/terraform/issues/13175))
* **New Resource:** `aws_lightsail_static_ip_attachment` ([#13207](https://github.com/hashicorp/terraform/issues/13207))
* **New Resource:** `aws_ses_domain_identity` ([#13098](https://github.com/hashicorp/terraform/issues/13098))
* **New Resource:** `azurerm_managed_disk` ([#12455](https://github.com/hashicorp/terraform/issues/12455))
* **New Resource:** `kubernetes_persistent_volume` ([#13277](https://github.com/hashicorp/terraform/issues/13277))
* **New Resource:** `kubernetes_persistent_volume_claim` ([#13527](https://github.com/hashicorp/terraform/issues/13527))
* **New Resource:** `kubernetes_secret` ([#12960](https://github.com/hashicorp/terraform/issues/12960))
* **New Data Source:** `aws_iam_role` ([#13213](https://github.com/hashicorp/terraform/issues/13213))

IMPROVEMENTS:

* core: add `-lock-timeout` option, which will block and retry locks for the given duration ([#13262](https://github.com/hashicorp/terraform/issues/13262))
* core: new `chomp` interpolation function which returns the given string with any trailing newline characters removed ([#13419](https://github.com/hashicorp/terraform/issues/13419))
* backend/remote-state: Add support for assume role extensions to s3 backend ([#13236](https://github.com/hashicorp/terraform/issues/13236))
* backend/remote-state: Filter extra entries from s3 environment listings ([#13596](https://github.com/hashicorp/terraform/issues/13596))
* config: New interpolation functions `basename` and `dirname`, for file path manipulation ([#13080](https://github.com/hashicorp/terraform/issues/13080))
* helper/resource: Allow unknown "pending" states ([#13099](https://github.com/hashicorp/terraform/issues/13099))
* command/hook_ui: Increase max length of state IDs from 20 to 80 ([#13317](https://github.com/hashicorp/terraform/issues/13317))
* provider/aws: Add support to set iam_role_arn on cloudformation Stack ([#12547](https://github.com/hashicorp/terraform/issues/12547))
* provider/aws: Support priority and listener_arn update of alb_listener_rule ([#13125](https://github.com/hashicorp/terraform/issues/13125))
* provider/aws: Deprecate roles in favour of role in iam_instance_profile ([#13130](https://github.com/hashicorp/terraform/issues/13130))
* provider/aws: Make alb_target_group_attachment port optional ([#13139](https://github.com/hashicorp/terraform/issues/13139))
* provider/aws: `aws_api_gateway_domain_name` `certificate_private_key` field marked as sensitive ([#13147](https://github.com/hashicorp/terraform/issues/13147))
* provider/aws: `aws_directory_service_directory` `password` field marked as sensitive ([#13147](https://github.com/hashicorp/terraform/issues/13147))
* provider/aws: `aws_kinesis_firehose_delivery_stream` `password` field marked as sensitive ([#13147](https://github.com/hashicorp/terraform/issues/13147))
* provider/aws: `aws_opsworks_application` `app_source.0.password` & `ssl_configuration.0.private_key` fields marked as sensitive ([#13147](https://github.com/hashicorp/terraform/issues/13147))
* provider/aws: `aws_opsworks_stack` `custom_cookbooks_source.0.password` field marked as sensitive ([#13147](https://github.com/hashicorp/terraform/issues/13147))
* provider/aws: Support the ability to enable / disable ipv6 support in VPC ([#12527](https://github.com/hashicorp/terraform/issues/12527))
* provider/aws: Added API Gateway integration update ([#13249](https://github.com/hashicorp/terraform/issues/13249))
* provider/aws: Add `identifier` | `name_prefix` to RDS resources ([#13232](https://github.com/hashicorp/terraform/issues/13232))
* provider/aws: Validate `aws_ecs_task_definition.container_definitions` ([#12161](https://github.com/hashicorp/terraform/issues/12161))
* provider/aws: Update caller_identity data source ([#13092](https://github.com/hashicorp/terraform/issues/13092))
* provider/aws: `aws_subnet_ids` data source for getting a list of subnet ids matching certain criteria ([#13188](https://github.com/hashicorp/terraform/issues/13188))
* provider/aws: Support ip_address_type for aws_alb ([#13227](https://github.com/hashicorp/terraform/issues/13227))
* provider/aws: Migrate `aws_dms_*` resources away from AWS waiters ([#13291](https://github.com/hashicorp/terraform/issues/13291))
* provider/aws: Add support for treat_missing_data to cloudwatch_metric_alarm ([#13358](https://github.com/hashicorp/terraform/issues/13358))
* provider/aws: Add support for evaluate_low_sample_count_percentiles to cloudwatch_metric_alarm ([#13371](https://github.com/hashicorp/terraform/issues/13371))
* provider/aws: Add `name_prefix` to `aws_alb_target_group` ([#13442](https://github.com/hashicorp/terraform/issues/13442))
* provider/aws: Add support for EMR clusters to aws_appautoscaling_target ([#13368](https://github.com/hashicorp/terraform/issues/13368))
* provider/aws: Add import capabilities to codecommit_repository ([#13577](https://github.com/hashicorp/terraform/issues/13577))
* provider/bitbucket: Improved error handling ([#13390](https://github.com/hashicorp/terraform/issues/13390))
* provider/cloudstack: Do not force a new resource when updating `cloudstack_loadbalancer_rule` members ([#11786](https://github.com/hashicorp/terraform/issues/11786))
* provider/fastly: Add support for Sumologic logging ([#12541](https://github.com/hashicorp/terraform/issues/12541))
* provider/github: Handle the case when issue labels already exist ([#13182](https://github.com/hashicorp/terraform/issues/13182))
* provider/google: Mark `google_container_cluster`'s `client_key` & `password` inside `master_auth` as sensitive ([#13148](https://github.com/hashicorp/terraform/issues/13148))
* provider/google: Add node_pool field in resource_container_cluster ([#13402](https://github.com/hashicorp/terraform/issues/13402))
* provider/kubernetes: Allow defining custom config context ([#12958](https://github.com/hashicorp/terraform/issues/12958))
* provider/openstack: Add support for 'value_specs' options to `openstack_compute_servergroup_v2` ([#13380](https://github.com/hashicorp/terraform/issues/13380))
* provider/statuscake: Add support for StatusCake TriggerRate field ([#13340](https://github.com/hashicorp/terraform/issues/13340))
* provider/triton: Move to joyent/triton-go ([#13225](https://github.com/hashicorp/terraform/issues/13225))
* provisioner/chef: Make sure we add new Chef-Vault clients as clients ([#13525](https://github.com/hashicorp/terraform/issues/13525))

BUG FIXES:

* core: Escaped interpolation-like sequences (like `$${foo}`) now permitted in variable defaults ([#13137](https://github.com/hashicorp/terraform/issues/13137))
* core: Fix strange issues with computed values in provider configuration that were worked around with `-input=false` ([#11264](https://github.com/hashicorp/terraform/issues/11264), [#13264](https://github.com/hashicorp/terraform/issues/13264))
* core: Fix crash when providing nested maps as variable values in a `module` block ([#13343](https://github.com/hashicorp/terraform/issues/13343))
* core: `connection` block attributes are now subject to basic validation of attribute names during validate walk ([#13400](https://github.com/hashicorp/terraform/issues/13400))
* provider/aws: Add support for maintenance_window and back_window to rds_cluster_instance ([#13134](https://github.com/hashicorp/terraform/issues/13134))
* provider/aws: Increase timeout for AMI registration ([#13159](https://github.com/hashicorp/terraform/issues/13159))
* provider/aws: Increase timeouts for ELB ([#13161](https://github.com/hashicorp/terraform/issues/13161))
* provider/aws: `volume_type` of `aws_elasticsearch_domain.0.ebs_options` marked as `Computed` which prevents spurious diffs ([#13160](https://github.com/hashicorp/terraform/issues/13160))
* provider/aws: Don't set DBName on `aws_db_instance` from snapshot ([#13140](https://github.com/hashicorp/terraform/issues/13140))
* provider/aws: Add DiffSuppression to aws_ecs_service placement_strategies ([#13220](https://github.com/hashicorp/terraform/issues/13220))
* provider/aws: Refresh aws_alb_target_group stickiness on manual updates ([#13199](https://github.com/hashicorp/terraform/issues/13199))
* provider/aws: Preserve default retain_on_delete in cloudfront import ([#13209](https://github.com/hashicorp/terraform/issues/13209))
* provider/aws: Refresh aws_alb_target_group tags ([#13200](https://github.com/hashicorp/terraform/issues/13200))
* provider/aws: Set aws_vpn_connection to recreate when in deleted state ([#13204](https://github.com/hashicorp/terraform/issues/13204))
* provider/aws: Wait for aws_opsworks_instance to be running when it's specified ([#13218](https://github.com/hashicorp/terraform/issues/13218))
* provider/aws: Handle `aws_lambda_function` missing s3 key error ([#10960](https://github.com/hashicorp/terraform/issues/10960))
* provider/aws: Set stickiness to computed in alb_target_group ([#13278](https://github.com/hashicorp/terraform/issues/13278))
* provider/aws: Increase timeout for deploying `cloudfront_distribution` from 40 to 70 mins ([#13319](https://github.com/hashicorp/terraform/issues/13319))
* provider/aws: Increase AMI retry timeouts ([#13324](https://github.com/hashicorp/terraform/issues/13324))
* provider/aws: Increase subnet deletion timeout ([#13356](https://github.com/hashicorp/terraform/issues/13356))
* provider/aws: Increase launch_configuration creation timeout ([#13357](https://github.com/hashicorp/terraform/issues/13357))
* provider/aws: Increase Beanstalk env 'ready' timeout ([#13359](https://github.com/hashicorp/terraform/issues/13359))
* provider/aws: Raise timeout for deleting APIG REST API ([#13414](https://github.com/hashicorp/terraform/issues/13414))
* provider/aws: Raise timeout for attaching/detaching VPN Gateway ([#13457](https://github.com/hashicorp/terraform/issues/13457))
* provider/aws: Recreate opsworks_stack on change of service_role_arn ([#13325](https://github.com/hashicorp/terraform/issues/13325))
* provider/aws: Fix KMS Key reading with Exists method ([#13348](https://github.com/hashicorp/terraform/issues/13348))
* provider/aws: Fix DynamoDB issues with GSI indexes ([#13256](https://github.com/hashicorp/terraform/issues/13256))
* provider/aws: Fix `aws_s3_bucket` drift detection of logging options ([#13281](https://github.com/hashicorp/terraform/issues/13281))
* provider/aws: Update ElasticTranscoderPreset to have default for MaxFrameRate ([#13422](https://github.com/hashicorp/terraform/issues/13422))
* provider/aws: Fix aws_ami_launch_permission refresh when AMI disappears ([#13469](https://github.com/hashicorp/terraform/issues/13469))
* provider/aws: Add support for updating SSM documents ([#13491](https://github.com/hashicorp/terraform/issues/13491))
* provider/aws: Fix panic on nil route configs ([#13548](https://github.com/hashicorp/terraform/issues/13548))
* provider/azurerm: Network Security Group - ignoring protocol casing at Import time ([#13153](https://github.com/hashicorp/terraform/issues/13153))
* provider/azurerm: Fix crash when importing Local Network Gateways ([#13261](https://github.com/hashicorp/terraform/issues/13261))
* provider/azurerm: Defaulting the value of `duplicate_detection_history_time_window` for `azurerm_servicebus_topic` ([#13223](https://github.com/hashicorp/terraform/issues/13223))
* provider/azurerm: Event Hubs making the Location field idempotent ([#13570](https://github.com/hashicorp/terraform/issues/13570))
* provider/bitbucket: Fixed issue where provider would fail with an "EOF" error on some operations ([#13390](https://github.com/hashicorp/terraform/issues/13390))
* provider/dnsimple: Handle 404 on DNSimple records ([#13131](https://github.com/hashicorp/terraform/issues/13131))
* provider/kubernetes: Use PATCH to update namespace ([#13114](https://github.com/hashicorp/terraform/issues/13114))
* provider/ns1: No splitting answer on SPF records. ([#13260](https://github.com/hashicorp/terraform/issues/13260))
* provider/openstack: Refresh volume_attachment from state if NotFound ([#13342](https://github.com/hashicorp/terraform/issues/13342))
* provider/openstack: Add SOFT_DELETED to delete status ([#13444](https://github.com/hashicorp/terraform/issues/13444))
* provider/profitbricks: Changed output type of ips variable of ip_block ProfitBricks resource ([#13290](https://github.com/hashicorp/terraform/issues/13290))
* provider/template: Fix panic in cloudinit config ([#13581](https://github.com/hashicorp/terraform/issues/13581))

## 0.9.2 (March 28, 2017)

-BACKWARDS IMCOMPATIBILITIES / NOTES:
+BACKWARDS INCOMPATIBILITIES / NOTES:

* provider/openstack: Port Fixed IPs are able to be read again using the original numerical notation. However, Fixed IP configurations which are obtaining addresses via DHCP must now use the `all_fixed_ips` attribute to reference the returned IP address.
* Environment names must be safe to use as a URL path segment without escaping; this is enforced by the CLI.

@@ -58,8 +447,8 @@ IMPROVEMENTS:
* provider/pagerduty: Validate credentials ([#12854](https://github.com/hashicorp/terraform/issues/12854))
* provider/openstack: Adding all_metadata attribute ([#13061](https://github.com/hashicorp/terraform/issues/13061))
* provider/profitbricks: Handling missing resources ([#13053](https://github.com/hashicorp/terraform/issues/13053))

BUG FIXES:

* core: Remove legacy remote state configuration on state migration. This fixes errors when saving plans. ([#12888](https://github.com/hashicorp/terraform/issues/12888))
* provider/arukas: Default timeout for launching container increased to 15mins (was 10mins) ([#12849](https://github.com/hashicorp/terraform/issues/12849))

@@ -88,7 +477,7 @@ BUG FIXES:

## 0.9.1 (March 17, 2017)

-BACKWARDS IMCOMPATIBILITIES / NOTES:
+BACKWARDS INCOMPATIBILITIES / NOTES:

* provider/pagerduty: the deprecated `name_regex` field has been removed from vendor data source ([#12396](https://github.com/hashicorp/terraform/issues/12396))

@@ -125,7 +514,7 @@ BUG FIXES:
* provider/aws: Stop setting weight property on route53_record read ([#12756](https://github.com/hashicorp/terraform/issues/12756))
* provider/google: Fix the Google provider asking for account_file input on every run ([#12729](https://github.com/hashicorp/terraform/issues/12729))
* provider/profitbricks: Prevent panic on profitbricks volume ([#12819](https://github.com/hashicorp/terraform/issues/12819))

## 0.9.0 (March 15, 2017)

@@ -273,7 +662,7 @@ BUG FIXES:
* provider/google: Correct the incorrect instance group manager URL returned from GKE ([#4336](https://github.com/hashicorp/terraform/issues/4336))
* provider/google: Fix a plan/apply cycle in IAM policies ([#12387](https://github.com/hashicorp/terraform/issues/12387))
* provider/google: Fix a plan/apply cycle in forwarding rules when only a single port is specified ([#12662](https://github.com/hashicorp/terraform/issues/12662))

## 0.9.0-beta2 (March 2, 2017)

BACKWARDS INCOMPATIBILITIES / NOTES:

@@ -477,7 +866,7 @@ Bug FIXES:
* core: module sources ended in archive extensions without a "." won't be treated as archives ([#11438](https://github.com/hashicorp/terraform/issues/11438))
* core: destroy ordering of resources within modules is correct ([#11765](https://github.com/hashicorp/terraform/issues/11765))
* core: Fix crash if count interpolates into a non-int ([#11864](https://github.com/hashicorp/terraform/issues/11864))
-* core: Targeting a module will properly exclude untargeted module outputs ([#11291](https://github.com/hashicorp/terraform/issues/11291))
+* core: Targeting a module will properly exclude untargeted module outputs ([#11921](https://github.com/hashicorp/terraform/issues/11921))
* state/remote/s3: Fix Bug with Assume Role for Federated IAM Account ([#10067](https://github.com/hashicorp/terraform/issues/10067))
* provider/aws: Fix security_group_rule resource timeout errors ([#11809](https://github.com/hashicorp/terraform/issues/11809))
* provider/aws: Fix diff suppress function for aws_db_instance ([#11909](https://github.com/hashicorp/terraform/issues/11909))

Makefile

@@ -79,8 +79,8 @@ cover:
 # vet runs the Go source code static analysis tool `vet` to find
 # any common errors.
 vet:
-	@echo "go vet ."
-	@go vet $$(go list ./... | grep -v vendor/) ; if [ $$? -eq 1 ]; then \
+	@echo 'go vet $$(go list ./... | grep -v /terraform/vendor/)'
+	@go vet $$(go list ./... | grep -v /terraform/vendor/) ; if [ $$? -eq 1 ]; then \
 		echo ""; \
 		echo "Vet found suspicious constructs. Please check the reported constructs"; \
 		echo "and fix them if necessary before submitting the code for review."; \

@@ -5,7 +5,7 @@ Terraform
 - [![Gitter chat](https://badges.gitter.im/hashicorp-terraform/Lobby.png)](https://gitter.im/hashicorp-terraform/Lobby)
 - Mailing list: [Google Groups](http://groups.google.com/group/terraform-tool)

-![Terraform](https://raw.githubusercontent.com/hashicorp/terraform/master/website/source/assets/images/readme.png)
+![Terraform](https://rawgithub.com/hashicorp/terraform/master/website/source/assets/images/logo-hashicorp.svg)

 Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

@@ -5,7 +5,7 @@
 VAGRANTFILE_API_VERSION = "2"

 # Software version variables
-GOVERSION = "1.8"
+GOVERSION = "1.8.1"
 UBUNTUVERSION = "16.04"

 # CPU and RAM can be adjusted depending on your system

@@ -7,6 +7,7 @@ package backend
 import (
 	"context"
 	"errors"
+	"time"

 	"github.com/hashicorp/terraform/config/module"
 	"github.com/hashicorp/terraform/state"

@@ -132,6 +133,9 @@ type Operation struct {
 	// state.Lockers for its duration, and Unlock when complete.
 	LockState bool

+	// The duration to retry obtaining a State lock.
+	StateLockTimeout time.Duration
+
 	// Environment is the named state that should be loaded from the Backend.
 	Environment string
 }

@@ -170,9 +170,30 @@ func (b *Local) DeleteState(name string) error {
 }

 func (b *Local) State(name string) (state.State, error) {
+	statePath, stateOutPath, backupPath := b.StatePaths(name)
+
 	// If we have a backend handling state, defer to that.
 	if b.Backend != nil {
-		return b.Backend.State(name)
+		s, err := b.Backend.State(name)
+		if err != nil {
+			return nil, err
+		}
+
+		// make sure we always have a backup state, unless it's disabled
+		if backupPath == "" {
+			return s, nil
+		}
+
+		// see if the delegated backend returned a BackupState of its own
+		if s, ok := s.(*state.BackupState); ok {
+			return s, nil
+		}
+
+		s = &state.BackupState{
+			Real: s,
+			Path: backupPath,
+		}
+		return s, nil
 	}

 	if s, ok := b.states[name]; ok {

@@ -183,8 +204,6 @@ func (b *Local) State(name string) (state.State, error) {
 		return nil, err
 	}

-	statePath, stateOutPath, backupPath := b.StatePaths(name)
-
 	// Otherwise, we need to load the state.
 	var s state.State = &state.LocalState{
 		Path: statePath,

@@ -9,7 +9,7 @@ import (
 	"github.com/hashicorp/errwrap"
 	"github.com/hashicorp/go-multierror"
 	"github.com/hashicorp/terraform/backend"
-	clistate "github.com/hashicorp/terraform/command/state"
+	"github.com/hashicorp/terraform/command/clistate"
 	"github.com/hashicorp/terraform/config/module"
 	"github.com/hashicorp/terraform/state"
 	"github.com/hashicorp/terraform/terraform"

@@ -52,9 +52,12 @@ func (b *Local) opApply(
 	}

 	if op.LockState {
+		lockCtx, cancel := context.WithTimeout(ctx, op.StateLockTimeout)
+		defer cancel()
+
 		lockInfo := state.NewLockInfo()
 		lockInfo.Operation = op.Type.String()
-		lockID, err := clistate.Lock(opState, lockInfo, b.CLI, b.Colorize())
+		lockID, err := clistate.Lock(lockCtx, opState, lockInfo, b.CLI, b.Colorize())
 		if err != nil {
 			runningOp.Err = errwrap.Wrapf("Error locking state: {{err}}", err)
 			return

@@ -99,7 +102,9 @@ func (b *Local) opApply(
 	doneCh := make(chan struct{})
 	go func() {
 		defer close(doneCh)
-		applyState, applyErr = tfCtx.Apply()
+		_, applyErr = tfCtx.Apply()
+		// we always want the state, even if apply failed
+		applyState = tfCtx.State()

 		/*
 		// Record any shadow errors for later

@@ -116,7 +121,7 @@ func (b *Local) opApply(
 	select {
 	case <-ctx.Done():
 		if b.CLI != nil {
-			b.CLI.Output("Interrupt received. Gracefully shutting down...")
+			b.CLI.Output("stopping apply operation...")
 		}

 		// Stop execution

|
@@ -10,8 +10,8 @@ import (
 	"github.com/hashicorp/errwrap"
 	"github.com/hashicorp/go-multierror"
 	"github.com/hashicorp/terraform/backend"
+	"github.com/hashicorp/terraform/command/clistate"
 	"github.com/hashicorp/terraform/command/format"
-	clistate "github.com/hashicorp/terraform/command/state"
 	"github.com/hashicorp/terraform/config/module"
 	"github.com/hashicorp/terraform/state"
 	"github.com/hashicorp/terraform/terraform"

@@ -61,9 +61,12 @@ func (b *Local) opPlan(
 	}

 	if op.LockState {
+		lockCtx, cancel := context.WithTimeout(ctx, op.StateLockTimeout)
+		defer cancel()
+
 		lockInfo := state.NewLockInfo()
 		lockInfo.Operation = op.Type.String()
-		lockID, err := clistate.Lock(opState, lockInfo, b.CLI, b.Colorize())
+		lockID, err := clistate.Lock(lockCtx, opState, lockInfo, b.CLI, b.Colorize())
 		if err != nil {
 			runningOp.Err = errwrap.Wrapf("Error locking state: {{err}}", err)
 			return

@@ -9,7 +9,7 @@ import (
 	"github.com/hashicorp/errwrap"
 	"github.com/hashicorp/go-multierror"
 	"github.com/hashicorp/terraform/backend"
-	clistate "github.com/hashicorp/terraform/command/state"
+	"github.com/hashicorp/terraform/command/clistate"
 	"github.com/hashicorp/terraform/config/module"
 	"github.com/hashicorp/terraform/state"
 )

@@ -51,9 +51,12 @@ func (b *Local) opRefresh(
 	}

 	if op.LockState {
+		lockCtx, cancel := context.WithTimeout(ctx, op.StateLockTimeout)
+		defer cancel()
+
 		lockInfo := state.NewLockInfo()
 		lockInfo.Operation = op.Type.String()
-		lockID, err := clistate.Lock(opState, lockInfo, b.CLI, b.Colorize())
+		lockID, err := clistate.Lock(lockCtx, opState, lockInfo, b.CLI, b.Colorize())
 		if err != nil {
 			runningOp.Err = errwrap.Wrapf("Error locking state: {{err}}", err)
 			return

@@ -169,6 +169,11 @@ func TestLocal_addAndRemoveStates(t *testing.T) {
 // verify it's being called.
 type testDelegateBackend struct {
 	*Local
+
+	// return a sentinel error on these calls
+	stateErr  bool
+	statesErr bool
+	deleteErr bool
 }

 var errTestDelegateState = errors.New("State called")

@@ -176,22 +181,39 @@ var errTestDelegateStates = errors.New("States called")
 var errTestDelegateDeleteState = errors.New("Delete called")

 func (b *testDelegateBackend) State(name string) (state.State, error) {
-	return nil, errTestDelegateState
+	if b.stateErr {
+		return nil, errTestDelegateState
+	}
+	s := &state.LocalState{
+		Path:    "terraform.tfstate",
+		PathOut: "terraform.tfstate",
+	}
+	return s, nil
 }

 func (b *testDelegateBackend) States() ([]string, error) {
-	return nil, errTestDelegateStates
+	if b.statesErr {
+		return nil, errTestDelegateStates
+	}
+	return []string{"default"}, nil
 }

 func (b *testDelegateBackend) DeleteState(name string) error {
-	return errTestDelegateDeleteState
+	if b.deleteErr {
+		return errTestDelegateDeleteState
+	}
+	return nil
 }

 // verify that the MultiState methods are dispatched to the correct Backend.
 func TestLocal_multiStateBackend(t *testing.T) {
 	// assign a separate backend where we can read the state
 	b := &Local{
-		Backend: &testDelegateBackend{},
+		Backend: &testDelegateBackend{
+			stateErr:  true,
+			statesErr: true,
+			deleteErr: true,
+		},
 	}

 	if _, err := b.State("test"); err != errTestDelegateState {

@@ -205,7 +227,43 @@ func TestLocal_multiStateBackend(t *testing.T) {
 	if err := b.DeleteState("test"); err != errTestDelegateDeleteState {
 		t.Fatal("expected errTestDelegateDeleteState, got:", err)
 	}
 }

+// verify that a remote state backend is always wrapped in a BackupState
+func TestLocal_remoteStateBackup(t *testing.T) {
+	// assign a separate backend to mock a remote state backend
+	b := &Local{
+		Backend: &testDelegateBackend{},
+	}
+
+	s, err := b.State("default")
+	if err != nil {
+		t.Fatal(err)
+	}
+
+	bs, ok := s.(*state.BackupState)
+	if !ok {
+		t.Fatal("remote state is not backed up")
+	}
+
+	if bs.Path != DefaultStateFilename+DefaultBackupExtension {
+		t.Fatal("bad backup location:", bs.Path)
+	}
+
+	// do the same with a named state, which should use the local env directories
+	s, err = b.State("test")
+	if err != nil {
+		t.Fatal(err)
+	}
+
+	bs, ok = s.(*state.BackupState)
+	if !ok {
+		t.Fatal("remote state is not backed up")
+	}
+
+	if bs.Path != filepath.Join(DefaultEnvDir, "test", DefaultStateFilename+DefaultBackupExtension) {
+		t.Fatal("bad backup location:", bs.Path)
+	}
+}

 // change into a tmp dir and return a deferrable func to change back and cleanup

@@ -1,4 +1,4 @@
-// Code generated by "stringer -type=countHookAction hook_count_action.go"; DO NOT EDIT
+// Code generated by "stringer -type=countHookAction hook_count_action.go"; DO NOT EDIT.

 package local

@@ -1,4 +1,4 @@
-// Code generated by "stringer -type=OperationType operation_type.go"; DO NOT EDIT
+// Code generated by "stringer -type=OperationType operation_type.go"; DO NOT EDIT.

 package backend

@@ -102,22 +102,19 @@ func (b *Backend) State(name string) (state.State, error) {
 		stateMgr = &state.LockDisabled{Inner: stateMgr}
 	}

-	// Get the locker, which we know always exists
-	stateMgrLocker := stateMgr.(state.Locker)
-
 	// Grab a lock, we use this to write an empty state if one doesn't
 	// exist already. We have to write an empty state as a sentinel value
 	// so States() knows it exists.
 	lockInfo := state.NewLockInfo()
 	lockInfo.Operation = "init"
-	lockId, err := stateMgrLocker.Lock(lockInfo)
+	lockId, err := stateMgr.Lock(lockInfo)
 	if err != nil {
 		return nil, fmt.Errorf("failed to lock state in Consul: %s", err)
 	}

 	// Local helper function so we can call it multiple places
 	lockUnlock := func(parent error) error {
-		if err := stateMgrLocker.Unlock(lockId); err != nil {
+		if err := stateMgr.Unlock(lockId); err != nil {
 			return fmt.Errorf(strings.TrimSpace(errStateUnlock), lockId, err)
 		}

@@ -121,16 +121,15 @@ func (c *RemoteClient) Lock(info *state.LockInfo) (string, error) {
	default:
		if c.lockCh != nil {
			// we have an active lock already
-			return "", nil
+			return "", fmt.Errorf("state %q already locked", c.Path)
		}
	}

	if c.consulLock == nil {
		opts := &consulapi.LockOptions{
			Key: c.Path + lockSuffix,
-			// We currently don't procide any options to block terraform and
-			// retry lock acquisition, but we can wait briefly in case the
-			// lock is about to be freed.
+			// only wait briefly, so terraform has the choice to fail fast or
+			// retry as needed.
			LockWaitTime: time.Second,
			LockTryOnce:  true,
		}
@@ -191,6 +190,10 @@ func (c *RemoteClient) Unlock(id string) error {
	err := c.consulLock.Unlock()
	c.lockCh = nil

+	// This is only cleanup, and will fail if the lock was immediately taken by
+	// another client, so we don't report an error to the user here.
+	c.consulLock.Destroy()
+
	kv := c.Client.KV()
	_, delErr := kv.Delete(c.Path+lockInfoSuffix, nil)
	if delErr != nil {
@@ -6,6 +6,7 @@ import (
	"time"

	"github.com/hashicorp/terraform/backend"
+	"github.com/hashicorp/terraform/state"
	"github.com/hashicorp/terraform/state/remote"
)
@@ -98,3 +99,43 @@ func TestConsul_stateLock(t *testing.T) {

	remote.TestRemoteLocks(t, sA.(*remote.State).Client, sB.(*remote.State).Client)
}
+
+func TestConsul_destroyLock(t *testing.T) {
+	srv := newConsulTestServer(t)
+	defer srv.Stop()
+
+	// Get the backend
+	b := backend.TestBackendConfig(t, New(), map[string]interface{}{
+		"address": srv.HTTPAddr,
+		"path":    fmt.Sprintf("tf-unit/%s", time.Now().String()),
+	})
+
+	// Grab the client
+	s, err := b.State(backend.DefaultStateName)
+	if err != nil {
+		t.Fatalf("err: %s", err)
+	}
+
+	c := s.(*remote.State).Client.(*RemoteClient)
+
+	info := state.NewLockInfo()
+	id, err := c.Lock(info)
+	if err != nil {
+		t.Fatal(err)
+	}
+
+	lockPath := c.Path + lockSuffix
+
+	if err := c.Unlock(id); err != nil {
+		t.Fatal(err)
+	}
+
+	// get the lock val
+	pair, _, err := c.Client.KV().Get(lockPath, nil)
+	if err != nil {
+		t.Fatal(err)
+	}
+	if pair != nil {
+		t.Fatalf("lock key not cleaned up at: %s", pair.Key)
+	}
+}
@@ -2,15 +2,9 @@ package s3

import (
	"context"
	"fmt"

-	"github.com/aws/aws-sdk-go/aws"
-	"github.com/aws/aws-sdk-go/aws/awserr"
-	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
	"github.com/aws/aws-sdk-go/service/s3"
-	cleanhttp "github.com/hashicorp/go-cleanhttp"
-	multierror "github.com/hashicorp/go-multierror"
	"github.com/hashicorp/terraform/backend"
	"github.com/hashicorp/terraform/helper/schema"
@@ -21,101 +15,122 @@ import (
func New() backend.Backend {
	s := &schema.Backend{
		Schema: map[string]*schema.Schema{
-			"bucket": &schema.Schema{
+			"bucket": {
				Type:        schema.TypeString,
				Required:    true,
				Description: "The name of the S3 bucket",
			},

-			"key": &schema.Schema{
+			"key": {
				Type:        schema.TypeString,
				Required:    true,
				Description: "The path to the state file inside the bucket",
			},

-			"region": &schema.Schema{
+			"region": {
				Type:        schema.TypeString,
				Required:    true,
				Description: "The region of the S3 bucket.",
				DefaultFunc: schema.EnvDefaultFunc("AWS_DEFAULT_REGION", nil),
			},

-			"endpoint": &schema.Schema{
+			"endpoint": {
				Type:        schema.TypeString,
				Optional:    true,
				Description: "A custom endpoint for the S3 API",
				DefaultFunc: schema.EnvDefaultFunc("AWS_S3_ENDPOINT", ""),
			},

-			"encrypt": &schema.Schema{
+			"encrypt": {
				Type:        schema.TypeBool,
				Optional:    true,
				Description: "Whether to enable server side encryption of the state file",
				Default:     false,
			},

-			"acl": &schema.Schema{
+			"acl": {
				Type:        schema.TypeString,
				Optional:    true,
				Description: "Canned ACL to be applied to the state file",
				Default:     "",
			},

-			"access_key": &schema.Schema{
+			"access_key": {
				Type:        schema.TypeString,
				Optional:    true,
				Description: "AWS access key",
				Default:     "",
			},

-			"secret_key": &schema.Schema{
+			"secret_key": {
				Type:        schema.TypeString,
				Optional:    true,
				Description: "AWS secret key",
				Default:     "",
			},

-			"kms_key_id": &schema.Schema{
+			"kms_key_id": {
				Type:        schema.TypeString,
				Optional:    true,
				Description: "The ARN of a KMS Key to use for encrypting the state",
				Default:     "",
			},

-			"lock_table": &schema.Schema{
+			"lock_table": {
				Type:        schema.TypeString,
				Optional:    true,
				Description: "DynamoDB table for state locking",
				Default:     "",
			},

-			"profile": &schema.Schema{
+			"profile": {
				Type:        schema.TypeString,
				Optional:    true,
				Description: "AWS profile name",
				Default:     "",
			},

-			"shared_credentials_file": &schema.Schema{
+			"shared_credentials_file": {
				Type:        schema.TypeString,
				Optional:    true,
				Description: "Path to a shared credentials file",
				Default:     "",
			},

-			"token": &schema.Schema{
+			"token": {
				Type:        schema.TypeString,
				Optional:    true,
				Description: "MFA token",
				Default:     "",
			},

-			"role_arn": &schema.Schema{
+			"role_arn": {
				Type:        schema.TypeString,
				Optional:    true,
				Description: "The role to be assumed",
				Default:     "",
			},

+			"session_name": {
+				Type:        schema.TypeString,
+				Optional:    true,
+				Description: "The session name to use when assuming the role.",
+				Default:     "",
+			},
+
+			"external_id": {
+				Type:        schema.TypeString,
+				Optional:    true,
+				Description: "The external ID to use when assuming the role",
+				Default:     "",
+			},
+
+			"assume_role_policy": {
+				Type:        schema.TypeString,
+				Optional:    true,
+				Description: "The permissions applied when assuming a role.",
+				Default:     "",
+			},
		},
	}
@@ -154,45 +169,27 @@ func (b *Backend) configure(ctx context.Context) error {
	b.kmsKeyID = data.Get("kms_key_id").(string)
	b.lockTable = data.Get("lock_table").(string)

-	var errs []error
-	creds, err := terraformAWS.GetCredentials(&terraformAWS.Config{
-		AccessKey:     data.Get("access_key").(string),
-		SecretKey:     data.Get("secret_key").(string),
-		Token:         data.Get("token").(string),
-		Profile:       data.Get("profile").(string),
-		CredsFilename: data.Get("shared_credentials_file").(string),
-		AssumeRoleARN: data.Get("role_arn").(string),
-	})
+	cfg := &terraformAWS.Config{
+		AccessKey:             data.Get("access_key").(string),
+		AssumeRoleARN:         data.Get("role_arn").(string),
+		AssumeRoleExternalID:  data.Get("external_id").(string),
+		AssumeRolePolicy:      data.Get("assume_role_policy").(string),
+		AssumeRoleSessionName: data.Get("session_name").(string),
+		CredsFilename:         data.Get("shared_credentials_file").(string),
+		Profile:               data.Get("profile").(string),
+		Region:                data.Get("region").(string),
+		S3Endpoint:            data.Get("endpoint").(string),
+		SecretKey:             data.Get("secret_key").(string),
+		Token:                 data.Get("token").(string),
+	}
+
+	client, err := cfg.Client()
+	if err != nil {
+		return err
+	}

-	// Call Get to check for credential provider. If nothing found, we'll get an
-	// error, and we can present it nicely to the user
-	_, err = creds.Get()
-	if err != nil {
-		if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "NoCredentialProviders" {
-			errs = append(errs, fmt.Errorf(`No valid credential sources found for AWS S3 remote.
-Please see https://www.terraform.io/docs/state/remote/s3.html for more information on
-providing credentials for the AWS S3 remote`))
-		} else {
-			errs = append(errs, fmt.Errorf("Error loading credentials for AWS S3 remote: %s", err))
-		}
-		return &multierror.Error{Errors: errs}
-	}
-
-	endpoint := data.Get("endpoint").(string)
-	region := data.Get("region").(string)
-
-	awsConfig := &aws.Config{
-		Credentials: creds,
-		Endpoint:    aws.String(endpoint),
-		Region:      aws.String(region),
-		HTTPClient:  cleanhttp.DefaultClient(),
-	}
-	sess := session.New(awsConfig)
-	b.s3Client = s3.New(sess)
-	b.dynClient = dynamodb.New(sess)
+	b.s3Client = client.(*terraformAWS.AWSClient).S3()
+	b.dynClient = client.(*terraformAWS.AWSClient).DynamoDB()

	return nil
}
@@ -1,6 +1,7 @@
package s3

import (
+	"errors"
	"fmt"
	"sort"
	"strings"
@@ -30,29 +31,34 @@ func (b *Backend) States() ([]string, error) {
		return nil, err
	}

-	var envs []string
+	envs := []string{backend.DefaultStateName}
	for _, obj := range resp.Contents {
-		env := keyEnv(*obj.Key)
+		env := b.keyEnv(*obj.Key)
		if env != "" {
			envs = append(envs, env)
		}
	}

-	sort.Strings(envs)
-	envs = append([]string{backend.DefaultStateName}, envs...)
+	sort.Strings(envs[1:])
	return envs, nil
}

// extract the env name from the S3 key
-func keyEnv(key string) string {
-	parts := strings.Split(key, "/")
+func (b *Backend) keyEnv(key string) string {
+	// we have 3 parts, the prefix, the env name, and the key name
+	parts := strings.SplitN(key, "/", 3)
	if len(parts) < 3 {
		// no env here
		return ""
	}

	// shouldn't happen since we listed by prefix
	if parts[0] != keyEnvPrefix {
		// not our key, so ignore
		return ""
	}

+	// not our key, so don't include it in our listing
+	if parts[2] != b.keyName {
+		return ""
+	}
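The `keyEnv` change above splits an S3 object key into at most three parts (prefix, environment name, remaining key) and rejects keys that don't match the backend's configured key name. A minimal standalone sketch of that extraction logic (simplified names, not the actual Terraform types):

```go
package main

import (
	"fmt"
	"strings"
)

// keyEnv extracts the environment name from an S3 object key of the
// form "<prefix>/<env>/<keyName>". It returns "" for keys that don't
// have three parts, don't start with the expected prefix, or don't
// end with the configured key name.
func keyEnv(key, prefix, keyName string) string {
	// SplitN keeps any further "/" characters inside the third part,
	// so a keyName like "test/state/tfstate" still matches.
	parts := strings.SplitN(key, "/", 3)
	if len(parts) < 3 {
		// no env component present
		return ""
	}
	if parts[0] != prefix {
		// not our key, so ignore
		return ""
	}
	if parts[2] != keyName {
		// wrong key name under an env directory
		return ""
	}
	return parts[1]
}

func main() {
	fmt.Println(keyEnv("env:/s1/test/state/tfstate", "env:", "test/state/tfstate"))
}
```

Using `SplitN` with a limit of 3 (rather than the earlier unbounded `Split`) is what lets key names containing slashes survive the comparison.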
@@ -78,6 +84,10 @@ func (b *Backend) DeleteState(name string) error {
}

func (b *Backend) State(name string) (state.State, error) {
+	if name == "" {
+		return nil, errors.New("missing state name")
+	}
+
	client := &RemoteClient{
		s3Client:  b.s3Client,
		dynClient: b.dynClient,
@@ -3,6 +3,7 @@ package s3

import (
	"fmt"
	"os"
+	"reflect"
	"testing"
	"time"

@@ -10,6 +11,8 @@ import (
	"github.com/aws/aws-sdk-go/service/dynamodb"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/hashicorp/terraform/backend"
+	"github.com/hashicorp/terraform/state/remote"
+	"github.com/hashicorp/terraform/terraform"
)

// verify that we are doing ACC tests or the S3 tests specifically
@@ -29,16 +32,12 @@ func TestBackend_impl(t *testing.T) {
}

func TestBackendConfig(t *testing.T) {
-	// This test just instantiates the client. Shouldn't make any actual
-	// requests nor incur any costs.
-
+	testACC(t)
	config := map[string]interface{}{
		"region":     "us-west-1",
		"bucket":     "tf-test",
		"key":        "state",
		"encrypt":    true,
		"access_key": "ACCESS_KEY",
		"secret_key": "SECRET_KEY",
		"lock_table": "dynamoTable",
	}
@@ -58,11 +57,11 @@ func TestBackendConfig(t *testing.T) {
	if err != nil {
		t.Fatalf("Error when requesting credentials")
	}
-	if credentials.AccessKeyID != "ACCESS_KEY" {
-		t.Fatalf("Incorrect Access Key Id was populated")
+	if credentials.AccessKeyID == "" {
+		t.Fatalf("No Access Key Id was populated")
	}
-	if credentials.SecretAccessKey != "SECRET_KEY" {
-		t.Fatalf("Incorrect Secret Access Key was populated")
+	if credentials.SecretAccessKey == "" {
+		t.Fatalf("No Secret Access Key was populated")
	}
}
@@ -88,7 +87,7 @@ func TestBackendLocked(t *testing.T) {
	testACC(t)

	bucketName := fmt.Sprintf("terraform-remote-s3-test-%x", time.Now().Unix())
-	keyName := "testState"
+	keyName := "test/state"

	b1 := backend.TestBackendConfig(t, New(), map[string]interface{}{
		"bucket": bucketName,
@@ -112,6 +111,133 @@ func TestBackendLocked(t *testing.T) {
	backend.TestBackend(t, b1, b2)
}

+// add some extra junk in S3 to try and confuse the env listing.
+func TestBackendExtraPaths(t *testing.T) {
+	testACC(t)
+	bucketName := fmt.Sprintf("terraform-remote-s3-test-%x", time.Now().Unix())
+	keyName := "test/state/tfstate"
+
+	b := backend.TestBackendConfig(t, New(), map[string]interface{}{
+		"bucket":  bucketName,
+		"key":     keyName,
+		"encrypt": true,
+	}).(*Backend)
+
+	createS3Bucket(t, b.s3Client, bucketName)
+	defer deleteS3Bucket(t, b.s3Client, bucketName)
+
+	// put multiple states in old env paths.
+	s1 := terraform.NewState()
+	s2 := terraform.NewState()
+
+	// RemoteClient to Put things in various paths
+	client := &RemoteClient{
+		s3Client:             b.s3Client,
+		dynClient:            b.dynClient,
+		bucketName:           b.bucketName,
+		path:                 b.path("s1"),
+		serverSideEncryption: b.serverSideEncryption,
+		acl:                  b.acl,
+		kmsKeyID:             b.kmsKeyID,
+		lockTable:            b.lockTable,
+	}
+
+	stateMgr := &remote.State{Client: client}
+	stateMgr.WriteState(s1)
+	if err := stateMgr.PersistState(); err != nil {
+		t.Fatal(err)
+	}
+
+	client.path = b.path("s2")
+	stateMgr.WriteState(s2)
+	if err := stateMgr.PersistState(); err != nil {
+		t.Fatal(err)
+	}
+
+	if err := checkStateList(b, []string{"default", "s1", "s2"}); err != nil {
+		t.Fatal(err)
+	}
+
+	// put a state in an env directory name
+	client.path = keyEnvPrefix + "/error"
+	stateMgr.WriteState(terraform.NewState())
+	if err := stateMgr.PersistState(); err != nil {
+		t.Fatal(err)
+	}
+	if err := checkStateList(b, []string{"default", "s1", "s2"}); err != nil {
+		t.Fatal(err)
+	}
+
+	// add state with the wrong key for an existing env
+	client.path = keyEnvPrefix + "/s2/notTestState"
+	stateMgr.WriteState(terraform.NewState())
+	if err := stateMgr.PersistState(); err != nil {
+		t.Fatal(err)
+	}
+	if err := checkStateList(b, []string{"default", "s1", "s2"}); err != nil {
+		t.Fatal(err)
+	}
+
+	// remove the state with extra subkey
+	if err := b.DeleteState("s2"); err != nil {
+		t.Fatal(err)
+	}
+
+	if err := checkStateList(b, []string{"default", "s1"}); err != nil {
+		t.Fatal(err)
+	}
+
+	// fetch that state again, which should produce a new lineage
+	s2Mgr, err := b.State("s2")
+	if err != nil {
+		t.Fatal(err)
+	}
+	if err := s2Mgr.RefreshState(); err != nil {
+		t.Fatal(err)
+	}
+
+	if s2Mgr.State().Lineage == s2.Lineage {
+		t.Fatal("state s2 was not deleted")
+	}
+	s2 = s2Mgr.State()
+
+	// add a state with a key that matches an existing environment dir name
+	client.path = keyEnvPrefix + "/s2/"
+	stateMgr.WriteState(terraform.NewState())
+	if err := stateMgr.PersistState(); err != nil {
+		t.Fatal(err)
+	}
+
+	// make sure s2 is OK
+	s2Mgr, err = b.State("s2")
+	if err != nil {
+		t.Fatal(err)
+	}
+	if err := s2Mgr.RefreshState(); err != nil {
+		t.Fatal(err)
+	}
+
+	if s2Mgr.State().Lineage != s2.Lineage {
+		t.Fatal("we got the wrong state for s2")
+	}
+
+	if err := checkStateList(b, []string{"default", "s1", "s2"}); err != nil {
+		t.Fatal(err)
+	}
+}
+
+func checkStateList(b backend.Backend, expected []string) error {
+	states, err := b.States()
+	if err != nil {
+		return err
+	}
+
+	if !reflect.DeepEqual(states, expected) {
+		return fmt.Errorf("incorrect states listed: %q", states)
+	}
+	return nil
+}

func createS3Bucket(t *testing.T, s3Client *s3.S3, bucketName string) {
	createBucketReq := &s3.CreateBucketInput{
		Bucket: &bucketName,
@@ -113,8 +113,7 @@ func (c *RemoteClient) Lock(info *state.LockInfo) (string, error) {
		return "", nil
	}

-	stateName := fmt.Sprintf("%s/%s", c.bucketName, c.path)
-	info.Path = stateName
+	info.Path = c.lockPath()

	if info.ID == "" {
		lockID, err := uuid.GenerateUUID()
@@ -127,7 +126,7 @@ func (c *RemoteClient) Lock(info *state.LockInfo) (string, error) {

	putParams := &dynamodb.PutItemInput{
		Item: map[string]*dynamodb.AttributeValue{
-			"LockID": {S: aws.String(stateName)},
+			"LockID": {S: aws.String(c.lockPath())},
			"Info":   {S: aws.String(string(info.Marshal()))},
		},
		TableName: aws.String(c.lockTable),
@@ -153,7 +152,7 @@ func (c *RemoteClient) Lock(info *state.LockInfo) (string, error) {
func (c *RemoteClient) getLockInfo() (*state.LockInfo, error) {
	getParams := &dynamodb.GetItemInput{
		Key: map[string]*dynamodb.AttributeValue{
-			"LockID": {S: aws.String(fmt.Sprintf("%s/%s", c.bucketName, c.path))},
+			"LockID": {S: aws.String(c.lockPath())},
		},
		ProjectionExpression: aws.String("LockID, Info"),
		TableName:            aws.String(c.lockTable),
@@ -202,7 +201,7 @@ func (c *RemoteClient) Unlock(id string) error {

	params := &dynamodb.DeleteItemInput{
		Key: map[string]*dynamodb.AttributeValue{
-			"LockID": {S: aws.String(fmt.Sprintf("%s/%s", c.bucketName, c.path))},
+			"LockID": {S: aws.String(c.lockPath())},
		},
		TableName: aws.String(c.lockTable),
	}
@@ -214,3 +213,7 @@ func (c *RemoteClient) Unlock(id string) error {
	}
	return nil
}
+
+func (c *RemoteClient) lockPath() string {
+	return fmt.Sprintf("%s/%s", c.bucketName, c.path)
+}
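The hunks above replace several ad-hoc `fmt.Sprintf("%s/%s", c.bucketName, c.path)` calls with a single `lockPath()` helper, so the DynamoDB `LockID` is always built the same way. A minimal standalone sketch of that refactor (simplified struct, not the actual Terraform `RemoteClient`):

```go
package main

import "fmt"

// remoteClient is a simplified stand-in for the S3 backend's client:
// just enough state to derive the lock key.
type remoteClient struct {
	bucketName string
	path       string
}

// lockPath centralizes construction of the DynamoDB LockID so that
// Lock, getLockInfo, and Unlock cannot drift apart in how they build it.
func (c *remoteClient) lockPath() string {
	return fmt.Sprintf("%s/%s", c.bucketName, c.path)
}

func main() {
	c := &remoteClient{bucketName: "my-bucket", path: "env:/dev/terraform.tfstate"}
	fmt.Println(c.lockPath())
}
```

The benefit is purely structural: any future change to the lock-key scheme now happens in one place instead of three.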
@@ -0,0 +1,12 @@
+package main
+
+import (
+	"github.com/hashicorp/terraform/builtin/providers/gitlab"
+	"github.com/hashicorp/terraform/plugin"
+)
+
+func main() {
+	plugin.Serve(&plugin.ServeOpts{
+		ProviderFunc: gitlab.Provider,
+	})
+}
@@ -0,0 +1,12 @@
+package main
+
+import (
+	"github.com/hashicorp/terraform/builtin/providers/local"
+	"github.com/hashicorp/terraform/plugin"
+)
+
+func main() {
+	plugin.Serve(&plugin.ServeOpts{
+		ProviderFunc: local.Provider,
+	})
+}
@@ -0,0 +1,12 @@
+package main
+
+import (
+	"github.com/hashicorp/terraform/builtin/providers/ovh"
+	"github.com/hashicorp/terraform/plugin"
+)
+
+func main() {
+	plugin.Serve(&plugin.ServeOpts{
+		ProviderFunc: ovh.Provider,
+	})
+}
@@ -17,38 +17,39 @@ const (
const defaultTimeout = 120

// timeout for long time progerss product, rds e.g.
-const defaultLongTimeout = 800
+const defaultLongTimeout = 1000

func getRegion(d *schema.ResourceData, meta interface{}) common.Region {
	return meta.(*AliyunClient).Region
}

func notFoundError(err error) bool {
-	if e, ok := err.(*common.Error); ok && (e.StatusCode == 404 || e.ErrorResponse.Message == "Not found") {
+	if e, ok := err.(*common.Error); ok &&
+		(e.StatusCode == 404 || e.ErrorResponse.Message == "Not found" || e.Code == InstanceNotfound) {
		return true
	}

	return false
}

-// Protocal represents network protocal
-type Protocal string
+// Protocol represents network protocol
+type Protocol string

-// Constants of protocal definition
+// Constants of protocol definition
const (
-	Http  = Protocal("http")
-	Https = Protocal("https")
-	Tcp   = Protocal("tcp")
-	Udp   = Protocal("udp")
+	Http  = Protocol("http")
+	Https = Protocol("https")
+	Tcp   = Protocol("tcp")
+	Udp   = Protocol("udp")
)

-// ValidProtocals network protocal list
-var ValidProtocals = []Protocal{Http, Https, Tcp, Udp}
+// ValidProtocols network protocol list
+var ValidProtocols = []Protocol{Http, Https, Tcp, Udp}

// simple array value check method, support string type only
-func isProtocalValid(value string) bool {
+func isProtocolValid(value string) bool {
	res := false
-	for _, v := range ValidProtocals {
+	for _, v := range ValidProtocols {
		if string(v) == value {
			res = true
		}

@@ -77,4 +78,16 @@ const DB_DEFAULT_CONNECT_PORT = "3306"

const COMMA_SEPARATED = ","

const COLON_SEPARATED = ":"

+const LOCAL_HOST_IP = "127.0.0.1"
+
+// Takes the result of flatmap.Expand for an array of strings
+// and returns a []string
+func expandStringList(configured []interface{}) []string {
+	vs := make([]string, 0, len(configured))
+	for _, v := range configured {
+		vs = append(vs, v.(string))
+	}
+	return vs
+}
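The `Protocal` → `Protocol` rename above also touches `isProtocolValid`, which is a simple membership check over the typed constant list. A minimal standalone sketch of the renamed types and check (mirroring the diff's shape, not the full provider file):

```go
package main

import "fmt"

// Protocol represents network protocol
type Protocol string

// Constants of protocol definition
const (
	Http  = Protocol("http")
	Https = Protocol("https")
	Tcp   = Protocol("tcp")
	Udp   = Protocol("udp")
)

// ValidProtocols network protocol list
var ValidProtocols = []Protocol{Http, Https, Tcp, Udp}

// isProtocolValid reports whether value names one of the supported
// protocols; it returns early on the first match.
func isProtocolValid(value string) bool {
	for _, v := range ValidProtocols {
		if string(v) == value {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isProtocolValid("http"), isProtocolValid("icmp"))
}
```

A typed string like `Protocol` keeps the constant list and the validation function tied to one declared type, so a rename like this one is a mechanical find-and-replace rather than a behavior change.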
@@ -5,6 +5,7 @@ import (

	"github.com/denverdino/aliyungo/common"
	"github.com/denverdino/aliyungo/ecs"
+	"github.com/denverdino/aliyungo/ess"
	"github.com/denverdino/aliyungo/rds"
	"github.com/denverdino/aliyungo/slb"
)

@@ -20,6 +21,7 @@ type Config struct {
type AliyunClient struct {
	Region common.Region
	ecsconn *ecs.Client
+	essconn *ess.Client
	rdsconn *rds.Client
	// use new version
	ecsNewconn *ecs.Client
@@ -60,6 +62,11 @@ func (c *Config) Client() (*AliyunClient, error) {
		return nil, err
	}

+	essconn, err := c.essConn()
+	if err != nil {
+		return nil, err
+	}
+
	return &AliyunClient{
		Region:  c.Region,
		ecsconn: ecsconn,

@@ -67,6 +74,7 @@ func (c *Config) Client() (*AliyunClient, error) {
		vpcconn: vpcconn,
		slbconn: slbconn,
		rdsconn: rdsconn,
+		essconn: essconn,
	}, nil
}

@@ -123,3 +131,8 @@ func (c *Config) vpcConn() (*ecs.Client, error) {
	return client, nil

}
+func (c *Config) essConn() (*ess.Client, error) {
+	client := ess.NewESSClient(c.AccessKey, c.SecretKey, c.Region)
+	client.SetBusinessInfo(BusinessInfoKey)
+	return client, nil
+}
@@ -1,5 +1,7 @@
package alicloud

+import "github.com/denverdino/aliyungo/common"
+
const (
	// common
	Notfound = "Not found"

@@ -25,7 +27,23 @@ const (
	//Nat gateway
	NatGatewayInvalidRegionId            = "Invalid.RegionId"
	DependencyViolationBandwidthPackages = "DependencyViolation.BandwidthPackages"
+	NotFindSnatEntryBySnatId             = "NotFindSnatEntryBySnatId"
+	NotFindForwardEntryByForwardId       = "NotFindForwardEntryByForwardId"

	// vswitch
	VswitcInvalidRegionId = "InvalidRegionId.NotFound"
+
+	// ess
+	InvalidScalingGroupIdNotFound               = "InvalidScalingGroupId.NotFound"
+	IncorrectScalingConfigurationLifecycleState = "IncorrectScalingConfigurationLifecycleState"
)
+
+func GetNotFoundErrorFromString(str string) error {
+	return &common.Error{
+		ErrorResponse: common.ErrorResponse{
+			Code:    InstanceNotfound,
+			Message: str,
+		},
+		StatusCode: -1,
+	}
+}
@@ -38,18 +38,24 @@ func Provider() terraform.ResourceProvider {
			"alicloud_instance_types": dataSourceAlicloudInstanceTypes(),
		},
		ResourcesMap: map[string]*schema.Resource{
-			"alicloud_instance":            resourceAliyunInstance(),
-			"alicloud_disk":                resourceAliyunDisk(),
-			"alicloud_disk_attachment":     resourceAliyunDiskAttachment(),
-			"alicloud_security_group":      resourceAliyunSecurityGroup(),
-			"alicloud_security_group_rule": resourceAliyunSecurityGroupRule(),
-			"alicloud_db_instance":         resourceAlicloudDBInstance(),
-			"alicloud_vpc":                 resourceAliyunVpc(),
-			"alicloud_nat_gateway":         resourceAliyunNatGateway(),
+			"alicloud_instance":                  resourceAliyunInstance(),
+			"alicloud_disk":                      resourceAliyunDisk(),
+			"alicloud_disk_attachment":           resourceAliyunDiskAttachment(),
+			"alicloud_security_group":            resourceAliyunSecurityGroup(),
+			"alicloud_security_group_rule":       resourceAliyunSecurityGroupRule(),
+			"alicloud_db_instance":               resourceAlicloudDBInstance(),
+			"alicloud_ess_scaling_group":         resourceAlicloudEssScalingGroup(),
+			"alicloud_ess_scaling_configuration": resourceAlicloudEssScalingConfiguration(),
+			"alicloud_ess_scaling_rule":          resourceAlicloudEssScalingRule(),
+			"alicloud_ess_schedule":              resourceAlicloudEssSchedule(),
+			"alicloud_vpc":                       resourceAliyunVpc(),
+			"alicloud_nat_gateway":               resourceAliyunNatGateway(),
			//both subnet and vswith exists,cause compatible old version, and compatible aws habit.
			"alicloud_subnet":          resourceAliyunSubnet(),
			"alicloud_vswitch":         resourceAliyunSubnet(),
			"alicloud_route_entry":     resourceAliyunRouteEntry(),
			"alicloud_snat_entry":      resourceAliyunSnatEntry(),
			"alicloud_forward_entry":   resourceAliyunForwardEntry(),
			"alicloud_eip":             resourceAliyunEip(),
			"alicloud_eip_association": resourceAliyunEipAssociation(),
			"alicloud_slb":             resourceAliyunSlb(),
@@ -218,7 +218,7 @@ func resourceAlicloudDBInstanceCreate(d *schema.ResourceData, meta interface{})

	// wait instance status change from Creating to running
	if err := conn.WaitForInstance(d.Id(), rds.Running, defaultLongTimeout); err != nil {
-		log.Printf("[DEBUG] WaitForInstance %s got error: %#v", rds.Running, err)
+		return fmt.Errorf("WaitForInstance %s got error: %#v", rds.Running, err)
	}

	if err := modifySecurityIps(d.Id(), d.Get("security_ips"), meta); err != nil {
@@ -386,6 +386,11 @@ func resourceAlicloudDBInstanceRead(d *schema.ResourceData, meta interface{}) er
	if err != nil {
		return err
	}
+	if resp.Databases.Database == nil {
+		d.SetId("")
+		return nil
+	}

	d.Set("db_mappings", flattenDatabaseMappings(resp.Databases.Database))

	argn := rds.DescribeDBInstanceNetInfoArgs{
@@ -535,7 +535,7 @@ func testAccCheckDBInstanceDestroy(s *terraform.State) error {
	client := testAccProvider.Meta().(*AliyunClient)

	for _, rs := range s.RootModule().Resources {
-		if rs.Type != "alicloud_db_instance.foo" {
+		if rs.Type != "alicloud_db_instance" {
			continue
		}
@@ -78,7 +78,14 @@ func resourceAliyunEipRead(d *schema.ResourceData, meta interface{}) error {
			d.SetId("")
			return nil
		}
-		return err
+		return fmt.Errorf("Error Describe Eip Attribute: %#v", err)
	}

+	if eip.InstanceId != "" {
+		d.Set("instance", eip.InstanceId)
+	} else {
+		d.Set("instance", "")
+		return nil
+	}
+
	bandwidth, _ := strconv.Atoi(eip.Bandwidth)

@@ -87,12 +94,6 @@ func resourceAliyunEipRead(d *schema.ResourceData, meta interface{}) error {
	d.Set("ip_address", eip.IpAddress)
	d.Set("status", eip.Status)

-	if eip.InstanceId != "" {
-		d.Set("instance", eip.InstanceId)
-	} else {
-		d.Set("instance", "")
-	}
-
	return nil
}
@@ -66,7 +66,7 @@ func resourceAliyunEipAssociationRead(d *schema.ResourceData, meta interface{})
			d.SetId("")
			return nil
		}
-		return err
+		return fmt.Errorf("Error Describe Eip Attribute: %#v", err)
	}

	if eip.InstanceId != instanceId {
@@ -0,0 +1,320 @@
+package alicloud
+
+import (
+	"fmt"
+	"github.com/denverdino/aliyungo/common"
+	"github.com/denverdino/aliyungo/ecs"
+	"github.com/denverdino/aliyungo/ess"
+	"github.com/hashicorp/terraform/helper/resource"
+	"github.com/hashicorp/terraform/helper/schema"
+	"strings"
+	"time"
+)
+
+func resourceAlicloudEssScalingConfiguration() *schema.Resource {
+	return &schema.Resource{
+		Create: resourceAliyunEssScalingConfigurationCreate,
+		Read:   resourceAliyunEssScalingConfigurationRead,
+		Update: resourceAliyunEssScalingConfigurationUpdate,
+		Delete: resourceAliyunEssScalingConfigurationDelete,
+
+		Schema: map[string]*schema.Schema{
+			"active": &schema.Schema{
+				Type:     schema.TypeBool,
+				Optional: true,
+				Computed: true,
+			},
+			"enable": &schema.Schema{
+				Type:     schema.TypeBool,
+				Optional: true,
+			},
+			"scaling_group_id": &schema.Schema{
+				Type:     schema.TypeString,
+				ForceNew: true,
+				Required: true,
+			},
+			"image_id": &schema.Schema{
+				Type:     schema.TypeString,
+				ForceNew: true,
+				Required: true,
+			},
+			"instance_type": &schema.Schema{
+				Type:     schema.TypeString,
+				ForceNew: true,
+				Required: true,
+			},
+			"io_optimized": &schema.Schema{
+				Type:         schema.TypeString,
+				Required:     true,
+				ForceNew:     true,
+				ValidateFunc: validateIoOptimized,
+			},
+			"security_group_id": &schema.Schema{
+				Type:     schema.TypeString,
+				ForceNew: true,
+				Required: true,
+			},
+			"scaling_configuration_name": &schema.Schema{
+				Type:     schema.TypeString,
+				Optional: true,
+				Computed: true,
+			},
+			"internet_charge_type": &schema.Schema{
+				Type:         schema.TypeString,
+				ForceNew:     true,
+				Optional:     true,
+				Computed:     true,
+				ValidateFunc: validateInternetChargeType,
+			},
+			"internet_max_bandwidth_in": &schema.Schema{
+				Type:     schema.TypeInt,
+				Optional: true,
+				ForceNew: true,
+				Computed: true,
+			},
+			"internet_max_bandwidth_out": &schema.Schema{
+				Type:         schema.TypeInt,
+				Optional:     true,
+				ForceNew:     true,
+				ValidateFunc: validateInternetMaxBandWidthOut,
+			},
+			"system_disk_category": &schema.Schema{
+				Type:     schema.TypeString,
+				Optional: true,
+				ForceNew: true,
+				Computed: true,
+				ValidateFunc: validateAllowedStringValue([]string{
+					string(ecs.DiskCategoryCloud),
+					string(ecs.DiskCategoryCloudSSD),
+					string(ecs.DiskCategoryCloudEfficiency),
+					string(ecs.DiskCategoryEphemeralSSD),
+				}),
+			},
+			"data_disk": &schema.Schema{
+				Optional: true,
+				ForceNew: true,
+				Type:     schema.TypeList,
+				Elem: &schema.Resource{
+					Schema: map[string]*schema.Schema{
+						"size": &schema.Schema{
+							Type:     schema.TypeInt,
+							Optional: true,
+						},
+						"category": &schema.Schema{
+							Type:     schema.TypeString,
+							Optional: true,
+						},
+						"snapshot_id": &schema.Schema{
+							Type:     schema.TypeString,
+							Optional: true,
+						},
+						"device": &schema.Schema{
+							Type:     schema.TypeString,
+							Optional: true,
+						},
+					},
+				},
+			},
+			"instance_ids": &schema.Schema{
+				Type:     schema.TypeList,
+				Elem:     &schema.Schema{Type: schema.TypeString},
+				Optional: true,
+				MaxItems: 20,
+			},
+		},
+	}
+}
+
+func resourceAliyunEssScalingConfigurationCreate(d *schema.ResourceData, meta interface{}) error {
+
+	args, err := buildAlicloudEssScalingConfigurationArgs(d, meta)
+	if err != nil {
+		return err
+	}
+
+	essconn := meta.(*AliyunClient).essconn
+
+	scaling, err := essconn.CreateScalingConfiguration(args)
+	if err != nil {
+		return err
+	}
+
+	d.SetId(d.Get("scaling_group_id").(string) + COLON_SEPARATED + scaling.ScalingConfigurationId)
+
+	return resourceAliyunEssScalingConfigurationUpdate(d, meta)
+}
+
+func resourceAliyunEssScalingConfigurationUpdate(d *schema.ResourceData, meta interface{}) error {
+	client := meta.(*AliyunClient)
+	if d.HasChange("active") {
+		active := d.Get("active").(bool)
+		if !active {
+			return fmt.Errorf("Please active the scaling configuration directly.")
+		}
+		ids := strings.Split(d.Id(), COLON_SEPARATED)
+		err := client.ActiveScalingConfigurationById(ids[0], ids[1])
+
+		if err != nil {
+			return fmt.Errorf("Active scaling configuration %s err: %#v", ids[1], err)
+		}
+	}
+
+	if err := enableEssScalingConfiguration(d, meta); err != nil {
+		return err
+	}
+
+	return resourceAliyunEssScalingConfigurationRead(d, meta)
+}
+
+func enableEssScalingConfiguration(d *schema.ResourceData, meta interface{}) error {
+	client := meta.(*AliyunClient)
+	ids := strings.Split(d.Id(), COLON_SEPARATED)
|
||||
|
||||
if d.HasChange("enable") {
|
||||
d.SetPartial("enable")
|
||||
enable := d.Get("enable").(bool)
|
||||
if !enable {
|
||||
err := client.DisableScalingConfigurationById(ids[0])
|
||||
|
||||
if err != nil {
|
||||
return fmt.Errorf("Disable scaling group %s err: %#v", ids[0], err)
|
||||
}
|
||||
}
|
||||
|
||||
instance_ids := []string{}
|
||||
if d.HasChange("instance_ids") {
|
||||
d.SetPartial("instance_ids")
|
||||
instances := d.Get("instance_ids").([]interface{})
|
||||
instance_ids = expandStringList(instances)
|
||||
}
|
||||
err := client.EnableScalingConfigurationById(ids[0], ids[1], instance_ids)
|
||||
|
||||
if err != nil {
|
||||
return fmt.Errorf("Enable scaling configuration %s err: %#v", ids[1], err)
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func resourceAliyunEssScalingConfigurationRead(d *schema.ResourceData, meta interface{}) error {
|
||||
|
||||
client := meta.(*AliyunClient)
|
||||
ids := strings.Split(d.Id(), COLON_SEPARATED)
|
||||
c, err := client.DescribeScalingConfigurationById(ids[0], ids[1])
|
||||
if err != nil {
|
||||
if e, ok := err.(*common.Error); ok && e.Code == InstanceNotfound {
|
||||
d.SetId("")
|
||||
return nil
|
||||
}
|
||||
return fmt.Errorf("Error Describe ESS scaling configuration Attribute: %#v", err)
|
||||
}
|
||||
|
||||
d.Set("scaling_group_id", c.ScalingGroupId)
|
||||
d.Set("active", c.LifecycleState == ess.Active)
|
||||
d.Set("image_id", c.ImageId)
|
||||
d.Set("instance_type", c.InstanceType)
|
||||
d.Set("io_optimized", c.IoOptimized)
|
||||
d.Set("security_group_id", c.SecurityGroupId)
|
||||
d.Set("scaling_configuration_name", c.ScalingConfigurationName)
|
||||
d.Set("internet_charge_type", c.InternetChargeType)
|
||||
d.Set("internet_max_bandwidth_in", c.InternetMaxBandwidthIn)
|
||||
d.Set("internet_max_bandwidth_out", c.InternetMaxBandwidthOut)
|
||||
d.Set("system_disk_category", c.SystemDiskCategory)
|
||||
d.Set("data_disk", flattenDataDiskMappings(c.DataDisks.DataDisk))
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func resourceAliyunEssScalingConfigurationDelete(d *schema.ResourceData, meta interface{}) error {
|
||||
client := meta.(*AliyunClient)
|
||||
|
||||
return resource.Retry(5*time.Minute, func() *resource.RetryError {
|
||||
ids := strings.Split(d.Id(), COLON_SEPARATED)
|
||||
err := client.DeleteScalingConfigurationById(ids[0], ids[1])
|
||||
|
||||
if err != nil {
|
||||
e, _ := err.(*common.Error)
|
||||
if e.ErrorResponse.Code == IncorrectScalingConfigurationLifecycleState {
|
||||
return resource.NonRetryableError(
|
||||
fmt.Errorf("Scaling configuration is active - please active another one and trying again."))
|
||||
}
|
||||
if e.ErrorResponse.Code != InvalidScalingGroupIdNotFound {
|
||||
return resource.RetryableError(
|
||||
fmt.Errorf("Scaling configuration in use - trying again while it is deleted."))
|
||||
}
|
||||
}
|
||||
|
||||
_, err = client.DescribeScalingConfigurationById(ids[0], ids[1])
|
||||
if err != nil {
|
||||
if notFoundError(err) {
|
||||
return nil
|
||||
}
|
||||
return resource.NonRetryableError(err)
|
||||
}
|
||||
|
||||
return resource.RetryableError(
|
||||
fmt.Errorf("Scaling configuration in use - trying again while it is deleted."))
|
||||
})
|
||||
}
|
||||
|
||||
func buildAlicloudEssScalingConfigurationArgs(d *schema.ResourceData, meta interface{}) (*ess.CreateScalingConfigurationArgs, error) {
|
||||
args := &ess.CreateScalingConfigurationArgs{
|
||||
ScalingGroupId: d.Get("scaling_group_id").(string),
|
||||
ImageId: d.Get("image_id").(string),
|
||||
InstanceType: d.Get("instance_type").(string),
|
||||
IoOptimized: ecs.IoOptimized(d.Get("io_optimized").(string)),
|
||||
SecurityGroupId: d.Get("security_group_id").(string),
|
||||
}
|
||||
|
||||
if v := d.Get("scaling_configuration_name").(string); v != "" {
|
||||
args.ScalingConfigurationName = v
|
||||
}
|
||||
|
||||
if v := d.Get("internet_charge_type").(string); v != "" {
|
||||
args.InternetChargeType = common.InternetChargeType(v)
|
||||
}
|
||||
|
||||
if v := d.Get("internet_max_bandwidth_in").(int); v != 0 {
|
||||
args.InternetMaxBandwidthIn = v
|
||||
}
|
||||
|
||||
if v := d.Get("internet_max_bandwidth_out").(int); v != 0 {
|
||||
args.InternetMaxBandwidthOut = v
|
||||
}
|
||||
|
||||
if v := d.Get("system_disk_category").(string); v != "" {
|
||||
args.SystemDisk_Category = common.UnderlineString(v)
|
||||
}
|
||||
|
||||
dds, ok := d.GetOk("data_disk")
|
||||
if ok {
|
||||
disks := dds.([]interface{})
|
||||
diskTypes := []ess.DataDiskType{}
|
||||
|
||||
for _, e := range disks {
|
||||
pack := e.(map[string]interface{})
|
||||
disk := ess.DataDiskType{
|
||||
Size: pack["size"].(int),
|
||||
Category: pack["category"].(string),
|
||||
SnapshotId: pack["snapshot_id"].(string),
|
||||
Device: pack["device"].(string),
|
||||
}
|
||||
if v := pack["size"].(int); v != 0 {
|
||||
disk.Size = v
|
||||
}
|
||||
if v := pack["category"].(string); v != "" {
|
||||
disk.Category = v
|
||||
}
|
||||
if v := pack["snapshot_id"].(string); v != "" {
|
||||
disk.SnapshotId = v
|
||||
}
|
||||
if v := pack["device"].(string); v != "" {
|
||||
disk.Device = v
|
||||
}
|
||||
diskTypes = append(diskTypes, disk)
|
||||
}
|
||||
args.DataDisk = diskTypes
|
||||
}
|
||||
|
||||
return args, nil
|
||||
}
@ -0,0 +1,495 @@
package alicloud

import (
	"fmt"
	"github.com/denverdino/aliyungo/common"
	"github.com/denverdino/aliyungo/ess"
	"github.com/hashicorp/terraform/helper/resource"
	"github.com/hashicorp/terraform/terraform"
	"log"
	"regexp"
	"strings"
	"testing"
)

func TestAccAlicloudEssScalingConfiguration_basic(t *testing.T) {
	var sc ess.ScalingConfigurationItemType

	resource.Test(t, resource.TestCase{
		PreCheck: func() {
			testAccPreCheck(t)
		},

		// module name
		IDRefreshName: "alicloud_ess_scaling_configuration.foo",

		Providers:    testAccProviders,
		CheckDestroy: testAccCheckEssScalingConfigurationDestroy,
		Steps: []resource.TestStep{
			resource.TestStep{
				Config: testAccEssScalingConfigurationConfig,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckEssScalingConfigurationExists(
						"alicloud_ess_scaling_configuration.foo", &sc),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_configuration.foo",
						"instance_type",
						"ecs.s2.large"),
					resource.TestMatchResourceAttr(
						"alicloud_ess_scaling_configuration.foo",
						"image_id",
						regexp.MustCompile("^centos_6")),
				),
			},
		},
	})
}

func TestAccAlicloudEssScalingConfiguration_multiConfig(t *testing.T) {
	var sc ess.ScalingConfigurationItemType

	resource.Test(t, resource.TestCase{
		PreCheck: func() {
			testAccPreCheck(t)
		},

		// module name
		IDRefreshName: "alicloud_ess_scaling_configuration.bar",

		Providers:    testAccProviders,
		CheckDestroy: testAccCheckEssScalingConfigurationDestroy,
		Steps: []resource.TestStep{
			resource.TestStep{
				Config: testAccEssScalingConfiguration_multiConfig,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckEssScalingConfigurationExists(
						"alicloud_ess_scaling_configuration.bar", &sc),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_configuration.bar",
						"active",
						"false"),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_configuration.bar",
						"instance_type",
						"ecs.s2.large"),
					resource.TestMatchResourceAttr(
						"alicloud_ess_scaling_configuration.bar",
						"image_id",
						regexp.MustCompile("^centos_6")),
				),
			},
		},
	})
}

func SkipTestAccAlicloudEssScalingConfiguration_active(t *testing.T) {
	var sc ess.ScalingConfigurationItemType

	resource.Test(t, resource.TestCase{
		PreCheck: func() {
			testAccPreCheck(t)
		},

		// module name
		IDRefreshName: "alicloud_ess_scaling_configuration.bar",

		Providers:    testAccProviders,
		CheckDestroy: testAccCheckEssScalingConfigurationDestroy,
		Steps: []resource.TestStep{
			resource.TestStep{
				Config: testAccEssScalingConfiguration_active,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckEssScalingConfigurationExists(
						"alicloud_ess_scaling_configuration.bar", &sc),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_configuration.bar",
						"active",
						"true"),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_configuration.bar",
						"instance_type",
						"ecs.s2.large"),
					resource.TestMatchResourceAttr(
						"alicloud_ess_scaling_configuration.bar",
						"image_id",
						regexp.MustCompile("^centos_6")),
				),
			},

			resource.TestStep{
				Config: testAccEssScalingConfiguration_inActive,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckEssScalingConfigurationExists(
						"alicloud_ess_scaling_configuration.bar", &sc),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_configuration.bar",
						"active",
						"false"),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_configuration.bar",
						"instance_type",
						"ecs.s2.large"),
					resource.TestMatchResourceAttr(
						"alicloud_ess_scaling_configuration.bar",
						"image_id",
						regexp.MustCompile("^centos_6")),
				),
			},
		},
	})
}

func SkipTestAccAlicloudEssScalingConfiguration_enable(t *testing.T) {
	var sc ess.ScalingConfigurationItemType

	resource.Test(t, resource.TestCase{
		PreCheck: func() {
			testAccPreCheck(t)
		},

		// module name
		IDRefreshName: "alicloud_ess_scaling_configuration.foo",

		Providers:    testAccProviders,
		CheckDestroy: testAccCheckEssScalingConfigurationDestroy,
		Steps: []resource.TestStep{
			resource.TestStep{
				Config: testAccEssScalingConfiguration_enable,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckEssScalingConfigurationExists(
						"alicloud_ess_scaling_configuration.foo", &sc),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_configuration.foo",
						"enable",
						"true"),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_configuration.foo",
						"instance_type",
						"ecs.s2.large"),
					resource.TestMatchResourceAttr(
						"alicloud_ess_scaling_configuration.foo",
						"image_id",
						regexp.MustCompile("^centos_6")),
				),
			},

			resource.TestStep{
				Config: testAccEssScalingConfiguration_disable,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckEssScalingConfigurationExists(
						"alicloud_ess_scaling_configuration.foo", &sc),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_configuration.foo",
						"enable",
						"false"),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_configuration.foo",
						"instance_type",
						"ecs.s2.large"),
					resource.TestMatchResourceAttr(
						"alicloud_ess_scaling_configuration.foo",
						"image_id",
						regexp.MustCompile("^centos_6")),
				),
			},
		},
	})
}

func testAccCheckEssScalingConfigurationExists(n string, d *ess.ScalingConfigurationItemType) resource.TestCheckFunc {
	return func(s *terraform.State) error {
		rs, ok := s.RootModule().Resources[n]
		if !ok {
			return fmt.Errorf("Not found: %s", n)
		}

		if rs.Primary.ID == "" {
			return fmt.Errorf("No ESS Scaling Configuration ID is set")
		}

		client := testAccProvider.Meta().(*AliyunClient)
		ids := strings.Split(rs.Primary.ID, COLON_SEPARATED)
		attr, err := client.DescribeScalingConfigurationById(ids[0], ids[1])
		log.Printf("[DEBUG] check scaling configuration %s attribute %#v", rs.Primary.ID, attr)

		if err != nil {
			return err
		}

		if attr == nil {
			return fmt.Errorf("Scaling Configuration not found")
		}

		*d = *attr
		return nil
	}
}

func testAccCheckEssScalingConfigurationDestroy(s *terraform.State) error {
	client := testAccProvider.Meta().(*AliyunClient)

	for _, rs := range s.RootModule().Resources {
		if rs.Type != "alicloud_ess_scaling_configuration" {
			continue
		}
		ids := strings.Split(rs.Primary.ID, COLON_SEPARATED)
		ins, err := client.DescribeScalingConfigurationById(ids[0], ids[1])

		if ins != nil {
			return fmt.Errorf("Error ESS scaling configuration still exists")
		}

		// Verify the error is what we want
		if err != nil {
			if e, ok := err.(*common.Error); ok && e.ErrorResponse.Code == InstanceNotfound {
				continue
			}
			return err
		}
	}

	return nil
}

const testAccEssScalingConfigurationConfig = `
data "alicloud_images" "ecs_image" {
  most_recent = true
  name_regex  = "^centos_6\\w{1,5}[64].*"
}

resource "alicloud_security_group" "tf_test_foo" {
  description = "foo"
}

resource "alicloud_security_group_rule" "ssh-in" {
  type              = "ingress"
  ip_protocol       = "tcp"
  nic_type          = "internet"
  policy            = "accept"
  port_range        = "22/22"
  priority          = 1
  security_group_id = "${alicloud_security_group.tf_test_foo.id}"
  cidr_ip           = "0.0.0.0/0"
}

resource "alicloud_ess_scaling_group" "foo" {
  min_size           = 1
  max_size           = 1
  scaling_group_name = "foo"
  removal_policies   = ["OldestInstance", "NewestInstance"]
}

resource "alicloud_ess_scaling_configuration" "foo" {
  scaling_group_id = "${alicloud_ess_scaling_group.foo.id}"

  image_id          = "${data.alicloud_images.ecs_image.images.0.id}"
  instance_type     = "ecs.s2.large"
  io_optimized      = "optimized"
  security_group_id = "${alicloud_security_group.tf_test_foo.id}"
}
`

const testAccEssScalingConfiguration_multiConfig = `
data "alicloud_images" "ecs_image" {
  most_recent = true
  name_regex  = "^centos_6\\w{1,5}[64].*"
}

resource "alicloud_security_group" "tf_test_foo" {
  description = "foo"
}

resource "alicloud_security_group_rule" "ssh-in" {
  type              = "ingress"
  ip_protocol       = "tcp"
  nic_type          = "internet"
  policy            = "accept"
  port_range        = "22/22"
  priority          = 1
  security_group_id = "${alicloud_security_group.tf_test_foo.id}"
  cidr_ip           = "0.0.0.0/0"
}

resource "alicloud_ess_scaling_group" "foo" {
  min_size           = 1
  max_size           = 1
  scaling_group_name = "foo"
  removal_policies   = ["OldestInstance", "NewestInstance"]
}

resource "alicloud_ess_scaling_configuration" "foo" {
  scaling_group_id = "${alicloud_ess_scaling_group.foo.id}"

  image_id          = "${data.alicloud_images.ecs_image.images.0.id}"
  instance_type     = "ecs.s2.large"
  io_optimized      = "optimized"
  security_group_id = "${alicloud_security_group.tf_test_foo.id}"
}

resource "alicloud_ess_scaling_configuration" "bar" {
  scaling_group_id = "${alicloud_ess_scaling_group.foo.id}"

  image_id          = "${data.alicloud_images.ecs_image.images.0.id}"
  instance_type     = "ecs.s2.large"
  io_optimized      = "optimized"
  security_group_id = "${alicloud_security_group.tf_test_foo.id}"
}
`

const testAccEssScalingConfiguration_active = `
data "alicloud_images" "ecs_image" {
  most_recent = true
  name_regex  = "^centos_6\\w{1,5}[64].*"
}

resource "alicloud_security_group" "tf_test_foo" {
  description = "foo"
}

resource "alicloud_security_group_rule" "ssh-in" {
  type              = "ingress"
  ip_protocol       = "tcp"
  nic_type          = "internet"
  policy            = "accept"
  port_range        = "22/22"
  priority          = 1
  security_group_id = "${alicloud_security_group.tf_test_foo.id}"
  cidr_ip           = "0.0.0.0/0"
}

resource "alicloud_ess_scaling_group" "foo" {
  min_size           = 1
  max_size           = 1
  scaling_group_name = "foo"
  removal_policies   = ["OldestInstance", "NewestInstance"]
}

resource "alicloud_ess_scaling_configuration" "foo" {
  scaling_group_id = "${alicloud_ess_scaling_group.foo.id}"
  active           = true

  image_id          = "${data.alicloud_images.ecs_image.images.0.id}"
  instance_type     = "ecs.s2.large"
  io_optimized      = "optimized"
  security_group_id = "${alicloud_security_group.tf_test_foo.id}"
}
`

const testAccEssScalingConfiguration_inActive = `
data "alicloud_images" "ecs_image" {
  most_recent = true
  name_regex  = "^centos_6\\w{1,5}[64].*"
}

resource "alicloud_security_group" "tf_test_foo" {
  description = "foo"
}

resource "alicloud_security_group_rule" "ssh-in" {
  type              = "ingress"
  ip_protocol       = "tcp"
  nic_type          = "internet"
  policy            = "accept"
  port_range        = "22/22"
  priority          = 1
  security_group_id = "${alicloud_security_group.tf_test_foo.id}"
  cidr_ip           = "0.0.0.0/0"
}

resource "alicloud_ess_scaling_group" "foo" {
  min_size           = 1
  max_size           = 1
  scaling_group_name = "foo"
  removal_policies   = ["OldestInstance", "NewestInstance"]
}

resource "alicloud_ess_scaling_configuration" "foo" {
  scaling_group_id = "${alicloud_ess_scaling_group.foo.id}"
  active           = false

  image_id          = "${data.alicloud_images.ecs_image.images.0.id}"
  instance_type     = "ecs.s2.large"
  io_optimized      = "optimized"
  security_group_id = "${alicloud_security_group.tf_test_foo.id}"
}
`

const testAccEssScalingConfiguration_enable = `
data "alicloud_images" "ecs_image" {
  most_recent = true
  name_regex  = "^centos_6\\w{1,5}[64].*"
}

resource "alicloud_security_group" "tf_test_foo" {
  description = "foo"
}

resource "alicloud_security_group_rule" "ssh-in" {
  type              = "ingress"
  ip_protocol       = "tcp"
  nic_type          = "internet"
  policy            = "accept"
  port_range        = "22/22"
  priority          = 1
  security_group_id = "${alicloud_security_group.tf_test_foo.id}"
  cidr_ip           = "0.0.0.0/0"
}

resource "alicloud_ess_scaling_group" "foo" {
  min_size           = 1
  max_size           = 1
  scaling_group_name = "foo"
  removal_policies   = ["OldestInstance", "NewestInstance"]
}

resource "alicloud_ess_scaling_configuration" "foo" {
  scaling_group_id = "${alicloud_ess_scaling_group.foo.id}"
  enable           = true

  image_id          = "${data.alicloud_images.ecs_image.images.0.id}"
  instance_type     = "ecs.s2.large"
  io_optimized      = "optimized"
  security_group_id = "${alicloud_security_group.tf_test_foo.id}"
}
`

const testAccEssScalingConfiguration_disable = `
data "alicloud_images" "ecs_image" {
  most_recent = true
  name_regex  = "^centos_6\\w{1,5}[64].*"
}

resource "alicloud_security_group" "tf_test_foo" {
  description = "foo"
}

resource "alicloud_security_group_rule" "ssh-in" {
  type              = "ingress"
  ip_protocol       = "tcp"
  nic_type          = "internet"
  policy            = "accept"
  port_range        = "22/22"
  priority          = 1
  security_group_id = "${alicloud_security_group.tf_test_foo.id}"
  cidr_ip           = "0.0.0.0/0"
}

resource "alicloud_ess_scaling_group" "foo" {
  min_size           = 1
  max_size           = 1
  scaling_group_name = "foo"
  removal_policies   = ["OldestInstance", "NewestInstance"]
}

resource "alicloud_ess_scaling_configuration" "foo" {
  scaling_group_id = "${alicloud_ess_scaling_group.foo.id}"
  enable           = false

  image_id          = "${data.alicloud_images.ecs_image.images.0.id}"
  instance_type     = "ecs.s2.large"
  io_optimized      = "optimized"
  security_group_id = "${alicloud_security_group.tf_test_foo.id}"
}
`
@ -0,0 +1,209 @@
package alicloud

import (
	"fmt"
	"github.com/denverdino/aliyungo/common"
	"github.com/denverdino/aliyungo/ess"
	"github.com/hashicorp/terraform/helper/resource"
	"github.com/hashicorp/terraform/helper/schema"
	"strings"
	"time"
)

func resourceAlicloudEssScalingGroup() *schema.Resource {
	return &schema.Resource{
		Create: resourceAliyunEssScalingGroupCreate,
		Read:   resourceAliyunEssScalingGroupRead,
		Update: resourceAliyunEssScalingGroupUpdate,
		Delete: resourceAliyunEssScalingGroupDelete,

		Schema: map[string]*schema.Schema{
			"min_size": &schema.Schema{
				Type:         schema.TypeInt,
				Required:     true,
				ValidateFunc: validateIntegerInRange(0, 100),
			},
			"max_size": &schema.Schema{
				Type:         schema.TypeInt,
				Required:     true,
				ValidateFunc: validateIntegerInRange(0, 100),
			},
			"scaling_group_name": &schema.Schema{
				Type:     schema.TypeString,
				Optional: true,
			},
			"default_cooldown": &schema.Schema{
				Type:         schema.TypeInt,
				Default:      300,
				Optional:     true,
				ValidateFunc: validateIntegerInRange(0, 86400),
			},
			"vswitch_id": &schema.Schema{
				Type:     schema.TypeString,
				Optional: true,
			},
			"removal_policies": &schema.Schema{
				Type:     schema.TypeList,
				Elem:     &schema.Schema{Type: schema.TypeString},
				Optional: true,
				MaxItems: 2,
			},
			"db_instance_ids": &schema.Schema{
				Type:     schema.TypeList,
				Elem:     &schema.Schema{Type: schema.TypeString},
				Optional: true,
				MaxItems: 3,
			},
			"loadbalancer_ids": &schema.Schema{
				Type:     schema.TypeList,
				Elem:     &schema.Schema{Type: schema.TypeString},
				Optional: true,
			},
		},
	}
}

func resourceAliyunEssScalingGroupCreate(d *schema.ResourceData, meta interface{}) error {
	args, err := buildAlicloudEssScalingGroupArgs(d, meta)
	if err != nil {
		return err
	}

	essconn := meta.(*AliyunClient).essconn

	scaling, err := essconn.CreateScalingGroup(args)
	if err != nil {
		return err
	}

	d.SetId(scaling.ScalingGroupId)

	return resourceAliyunEssScalingGroupUpdate(d, meta)
}

func resourceAliyunEssScalingGroupRead(d *schema.ResourceData, meta interface{}) error {
	client := meta.(*AliyunClient)

	scaling, err := client.DescribeScalingGroupById(d.Id())
	if err != nil {
		if e, ok := err.(*common.Error); ok && e.Code == InstanceNotfound {
			d.SetId("")
			return nil
		}
		return fmt.Errorf("Error Describe ESS scaling group Attribute: %#v", err)
	}

	d.Set("min_size", scaling.MinSize)
	d.Set("max_size", scaling.MaxSize)
	d.Set("scaling_group_name", scaling.ScalingGroupName)
	d.Set("default_cooldown", scaling.DefaultCooldown)
	d.Set("removal_policies", scaling.RemovalPolicies)
	d.Set("db_instance_ids", scaling.DBInstanceIds)
	d.Set("loadbalancer_ids", scaling.LoadBalancerId)

	return nil
}

func resourceAliyunEssScalingGroupUpdate(d *schema.ResourceData, meta interface{}) error {
	conn := meta.(*AliyunClient).essconn
	args := &ess.ModifyScalingGroupArgs{
		ScalingGroupId: d.Id(),
	}

	if d.HasChange("scaling_group_name") {
		args.ScalingGroupName = d.Get("scaling_group_name").(string)
	}

	if d.HasChange("min_size") {
		args.MinSize = d.Get("min_size").(int)
	}

	if d.HasChange("max_size") {
		args.MaxSize = d.Get("max_size").(int)
	}

	if d.HasChange("default_cooldown") {
		args.DefaultCooldown = d.Get("default_cooldown").(int)
	}

	if d.HasChange("removal_policies") {
		policyStrings := d.Get("removal_policies").([]interface{})
		args.RemovalPolicy = expandStringList(policyStrings)
	}

	if _, err := conn.ModifyScalingGroup(args); err != nil {
		return err
	}

	return resourceAliyunEssScalingGroupRead(d, meta)
}

func resourceAliyunEssScalingGroupDelete(d *schema.ResourceData, meta interface{}) error {
	client := meta.(*AliyunClient)

	return resource.Retry(2*time.Minute, func() *resource.RetryError {
		err := client.DeleteScalingGroupById(d.Id())

		if err != nil {
			// Guard the type assertion so a non-*common.Error cannot cause a nil dereference.
			if e, ok := err.(*common.Error); ok && e.ErrorResponse.Code != InvalidScalingGroupIdNotFound {
				return resource.RetryableError(fmt.Errorf("Scaling group in use - trying again while it is deleted."))
			}
		}

		_, err = client.DescribeScalingGroupById(d.Id())
		if err != nil {
			if notFoundError(err) {
				return nil
			}
			return resource.NonRetryableError(err)
		}

		return resource.RetryableError(fmt.Errorf("Scaling group in use - trying again while it is deleted."))
	})
}

func buildAlicloudEssScalingGroupArgs(d *schema.ResourceData, meta interface{}) (*ess.CreateScalingGroupArgs, error) {
	client := meta.(*AliyunClient)
	args := &ess.CreateScalingGroupArgs{
		RegionId:        getRegion(d, meta),
		MinSize:         d.Get("min_size").(int),
		MaxSize:         d.Get("max_size").(int),
		DefaultCooldown: d.Get("default_cooldown").(int),
	}

	if v := d.Get("scaling_group_name").(string); v != "" {
		args.ScalingGroupName = v
	}

	if v := d.Get("vswitch_id").(string); v != "" {
		args.VSwitchId = v

		// fill vpcId by vswitchId
		vpcId, err := client.GetVpcIdByVSwitchId(v)
		if err != nil {
			return nil, fmt.Errorf("VSwitchId %s is not valid in the current region", v)
		}
		args.VpcId = vpcId
	}

	dbs, ok := d.GetOk("db_instance_ids")
	if ok {
		dbsStrings := dbs.([]interface{})
		args.DBInstanceId = expandStringList(dbsStrings)
	}

	lbs, ok := d.GetOk("loadbalancer_ids")
	if ok {
		lbsStrings := lbs.([]interface{})
		args.LoadBalancerId = strings.Join(expandStringList(lbsStrings), COMMA_SEPARATED)
	}

	return args, nil
}
@ -0,0 +1,297 @@
package alicloud
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"github.com/denverdino/aliyungo/common"
|
||||
"github.com/denverdino/aliyungo/ess"
|
||||
"github.com/hashicorp/terraform/helper/resource"
|
||||
"github.com/hashicorp/terraform/terraform"
|
||||
"log"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestAccAlicloudEssScalingGroup_basic(t *testing.T) {
|
||||
var sg ess.ScalingGroupItemType
|
||||
|
||||
resource.Test(t, resource.TestCase{
|
||||
PreCheck: func() {
|
||||
testAccPreCheck(t)
|
||||
},
|
||||
|
||||
// module name
|
||||
		IDRefreshName: "alicloud_ess_scaling_group.foo",

		Providers:    testAccProviders,
		CheckDestroy: testAccCheckEssScalingGroupDestroy,
		Steps: []resource.TestStep{
			resource.TestStep{
				Config: testAccEssScalingGroupConfig,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckEssScalingGroupExists(
						"alicloud_ess_scaling_group.foo", &sg),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_group.foo",
						"min_size",
						"1"),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_group.foo",
						"max_size",
						"1"),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_group.foo",
						"scaling_group_name",
						"foo"),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_group.foo",
						"removal_policies.#",
						"2"),
				),
			},
		},
	})
}

func TestAccAlicloudEssScalingGroup_update(t *testing.T) {
	var sg ess.ScalingGroupItemType

	resource.Test(t, resource.TestCase{
		PreCheck: func() {
			testAccPreCheck(t)
		},

		// module name
		IDRefreshName: "alicloud_ess_scaling_group.foo",

		Providers:    testAccProviders,
		CheckDestroy: testAccCheckEssScalingGroupDestroy,
		Steps: []resource.TestStep{
			resource.TestStep{
				Config: testAccEssScalingGroup,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckEssScalingGroupExists(
						"alicloud_ess_scaling_group.foo", &sg),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_group.foo",
						"min_size",
						"1"),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_group.foo",
						"max_size",
						"1"),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_group.foo",
						"scaling_group_name",
						"foo"),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_group.foo",
						"removal_policies.#",
						"2"),
				),
			},

			resource.TestStep{
				Config: testAccEssScalingGroup_update,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckEssScalingGroupExists(
						"alicloud_ess_scaling_group.foo", &sg),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_group.foo",
						"min_size",
						"2"),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_group.foo",
						"max_size",
						"2"),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_group.foo",
						"scaling_group_name",
						"update"),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_group.foo",
						"removal_policies.#",
						"1"),
				),
			},
		},
	})
}

// SkipTestAccAlicloudEssScalingGroup_vpc is deliberately not prefixed with
// "Test" so the test runner skips it; rename it to TestAcc... to enable it.
func SkipTestAccAlicloudEssScalingGroup_vpc(t *testing.T) {
	var sg ess.ScalingGroupItemType

	resource.Test(t, resource.TestCase{
		PreCheck: func() {
			testAccPreCheck(t)
		},

		// module name
		IDRefreshName: "alicloud_ess_scaling_group.foo",

		Providers:    testAccProviders,
		CheckDestroy: testAccCheckEssScalingGroupDestroy,
		Steps: []resource.TestStep{
			resource.TestStep{
				Config: testAccEssScalingGroup_vpc,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckEssScalingGroupExists(
						"alicloud_ess_scaling_group.foo", &sg),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_group.foo",
						"min_size",
						"1"),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_group.foo",
						"max_size",
						"1"),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_group.foo",
						"scaling_group_name",
						"foo"),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_group.foo",
						"removal_policies.#",
						"2"),
				),
			},
		},
	})
}

func testAccCheckEssScalingGroupExists(n string, d *ess.ScalingGroupItemType) resource.TestCheckFunc {
	return func(s *terraform.State) error {
		rs, ok := s.RootModule().Resources[n]
		if !ok {
			return fmt.Errorf("Not found: %s", n)
		}

		if rs.Primary.ID == "" {
			return fmt.Errorf("No ESS Scaling Group ID is set")
		}

		client := testAccProvider.Meta().(*AliyunClient)
		attr, err := client.DescribeScalingGroupById(rs.Primary.ID)
		log.Printf("[DEBUG] check scaling group %s attribute %#v", rs.Primary.ID, attr)

		if err != nil {
			return err
		}

		if attr == nil {
			return fmt.Errorf("Scaling Group not found")
		}

		*d = *attr
		return nil
	}
}

func testAccCheckEssScalingGroupDestroy(s *terraform.State) error {
	client := testAccProvider.Meta().(*AliyunClient)

	for _, rs := range s.RootModule().Resources {
		if rs.Type != "alicloud_ess_scaling_group" {
			continue
		}

		ins, err := client.DescribeScalingGroupById(rs.Primary.ID)

		if ins != nil {
			return fmt.Errorf("ESS scaling group still exists")
		}

		// Verify the error is what we want
		if err != nil {
			e, ok := err.(*common.Error)
			if ok && e.ErrorResponse.Code == InstanceNotfound {
				continue
			}
			return err
		}
	}

	return nil
}

const testAccEssScalingGroupConfig = `
resource "alicloud_ess_scaling_group" "foo" {
	min_size = 1
	max_size = 1
	scaling_group_name = "foo"
	removal_policies = ["OldestInstance", "NewestInstance"]
}
`

const testAccEssScalingGroup = `
resource "alicloud_ess_scaling_group" "foo" {
	min_size = 1
	max_size = 1
	scaling_group_name = "foo"
	removal_policies = ["OldestInstance", "NewestInstance"]
}
`

const testAccEssScalingGroup_update = `
resource "alicloud_ess_scaling_group" "foo" {
	min_size = 2
	max_size = 2
	scaling_group_name = "update"
	removal_policies = ["OldestInstance"]
}
`

const testAccEssScalingGroup_vpc = `
data "alicloud_images" "ecs_image" {
	most_recent = true
	name_regex = "^centos_6\\w{1,5}[64].*"
}

data "alicloud_zones" "default" {
	"available_disk_category" = "cloud_efficiency"
	"available_resource_creation" = "VSwitch"
}

resource "alicloud_vpc" "foo" {
	name = "tf_test_foo"
	cidr_block = "172.16.0.0/12"
}

resource "alicloud_vswitch" "foo" {
	vpc_id = "${alicloud_vpc.foo.id}"
	cidr_block = "172.16.0.0/21"
	availability_zone = "${data.alicloud_zones.default.zones.0.id}"
}

resource "alicloud_security_group" "tf_test_foo" {
	description = "foo"
	vpc_id = "${alicloud_vpc.foo.id}"
}

resource "alicloud_ess_scaling_group" "foo" {
	min_size = 1
	max_size = 1
	scaling_group_name = "foo"
	default_cooldown = 20
	vswitch_id = "${alicloud_vswitch.foo.id}"
	removal_policies = ["OldestInstance", "NewestInstance"]
}

resource "alicloud_ess_scaling_configuration" "foo" {
	scaling_group_id = "${alicloud_ess_scaling_group.foo.id}"
	enable = true

	image_id = "${data.alicloud_images.ecs_image.images.0.id}"
	instance_type = "ecs.n1.medium"
	io_optimized = "optimized"
	system_disk_category = "cloud_efficiency"
	internet_charge_type = "PayByTraffic"
	internet_max_bandwidth_out = 10
	security_group_id = "${alicloud_security_group.tf_test_foo.id}"
}
`

@@ -0,0 +1,168 @@
package alicloud

import (
	"fmt"
	"strings"
	"time"

	"github.com/denverdino/aliyungo/common"
	"github.com/denverdino/aliyungo/ess"
	"github.com/hashicorp/terraform/helper/resource"
	"github.com/hashicorp/terraform/helper/schema"
)

func resourceAlicloudEssScalingRule() *schema.Resource {
	return &schema.Resource{
		Create: resourceAliyunEssScalingRuleCreate,
		Read:   resourceAliyunEssScalingRuleRead,
		Update: resourceAliyunEssScalingRuleUpdate,
		Delete: resourceAliyunEssScalingRuleDelete,

		Schema: map[string]*schema.Schema{
			"scaling_group_id": &schema.Schema{
				Type:     schema.TypeString,
				Required: true,
			},
			"adjustment_type": &schema.Schema{
				Type:     schema.TypeString,
				Required: true,
				ValidateFunc: validateAllowedStringValue([]string{string(ess.QuantityChangeInCapacity),
					string(ess.PercentChangeInCapacity), string(ess.TotalCapacity)}),
			},
			"adjustment_value": &schema.Schema{
				Type:     schema.TypeInt,
				Required: true,
			},
			"scaling_rule_name": &schema.Schema{
				Type:     schema.TypeString,
				Computed: true,
				Optional: true,
			},
			"ari": &schema.Schema{
				Type:     schema.TypeString,
				Computed: true,
			},
			"cooldown": &schema.Schema{
				Type:         schema.TypeInt,
				Optional:     true,
				ValidateFunc: validateIntegerInRange(0, 86400),
			},
		},
	}
}

func resourceAliyunEssScalingRuleCreate(d *schema.ResourceData, meta interface{}) error {
	args, err := buildAlicloudEssScalingRuleArgs(d, meta)
	if err != nil {
		return err
	}

	essconn := meta.(*AliyunClient).essconn

	rule, err := essconn.CreateScalingRule(args)
	if err != nil {
		return err
	}

	d.SetId(d.Get("scaling_group_id").(string) + COLON_SEPARATED + rule.ScalingRuleId)

	return resourceAliyunEssScalingRuleUpdate(d, meta)
}

func resourceAliyunEssScalingRuleRead(d *schema.ResourceData, meta interface{}) error {
	client := meta.(*AliyunClient)
	ids := strings.Split(d.Id(), COLON_SEPARATED)

	rule, err := client.DescribeScalingRuleById(ids[0], ids[1])
	if err != nil {
		if e, ok := err.(*common.Error); ok && e.Code == InstanceNotfound {
			d.SetId("")
			return nil
		}
		return fmt.Errorf("Error describing ESS scaling rule attribute: %#v", err)
	}

	d.Set("scaling_group_id", rule.ScalingGroupId)
	d.Set("ari", rule.ScalingRuleAri)
	d.Set("adjustment_type", rule.AdjustmentType)
	d.Set("adjustment_value", rule.AdjustmentValue)
	d.Set("scaling_rule_name", rule.ScalingRuleName)
	d.Set("cooldown", rule.Cooldown)

	return nil
}

func resourceAliyunEssScalingRuleDelete(d *schema.ResourceData, meta interface{}) error {
	client := meta.(*AliyunClient)
	ids := strings.Split(d.Id(), COLON_SEPARATED)

	return resource.Retry(2*time.Minute, func() *resource.RetryError {
		err := client.DeleteScalingRuleById(ids[1])

		if err != nil {
			return resource.RetryableError(fmt.Errorf("Scaling rule in use - trying again while it is being deleted."))
		}

		_, err = client.DescribeScalingRuleById(ids[0], ids[1])
		if err != nil {
			if notFoundError(err) {
				return nil
			}
			return resource.NonRetryableError(err)
		}

		return resource.RetryableError(fmt.Errorf("Scaling rule in use - trying again while it is being deleted."))
	})
}

func resourceAliyunEssScalingRuleUpdate(d *schema.ResourceData, meta interface{}) error {
	conn := meta.(*AliyunClient).essconn
	ids := strings.Split(d.Id(), COLON_SEPARATED)

	args := &ess.ModifyScalingRuleArgs{
		ScalingRuleId: ids[1],
	}

	if d.HasChange("adjustment_type") {
		args.AdjustmentType = ess.AdjustmentType(d.Get("adjustment_type").(string))
	}

	if d.HasChange("adjustment_value") {
		args.AdjustmentValue = d.Get("adjustment_value").(int)
	}

	if d.HasChange("scaling_rule_name") {
		args.ScalingRuleName = d.Get("scaling_rule_name").(string)
	}

	if d.HasChange("cooldown") {
		args.Cooldown = d.Get("cooldown").(int)
	}

	if _, err := conn.ModifyScalingRule(args); err != nil {
		return err
	}

	return resourceAliyunEssScalingRuleRead(d, meta)
}

func buildAlicloudEssScalingRuleArgs(d *schema.ResourceData, meta interface{}) (*ess.CreateScalingRuleArgs, error) {
	args := &ess.CreateScalingRuleArgs{
		RegionId:        getRegion(d, meta),
		ScalingGroupId:  d.Get("scaling_group_id").(string),
		AdjustmentType:  ess.AdjustmentType(d.Get("adjustment_type").(string)),
		AdjustmentValue: d.Get("adjustment_value").(int),
	}

	if v := d.Get("scaling_rule_name").(string); v != "" {
		args.ScalingRuleName = v
	}

	if v := d.Get("cooldown").(int); v != 0 {
		args.Cooldown = v
	}

	return args, nil
}

@@ -0,0 +1,290 @@
package alicloud

import (
	"fmt"
	"log"
	"strings"
	"testing"

	"github.com/denverdino/aliyungo/common"
	"github.com/denverdino/aliyungo/ess"
	"github.com/hashicorp/terraform/helper/resource"
	"github.com/hashicorp/terraform/terraform"
)

func TestAccAlicloudEssScalingRule_basic(t *testing.T) {
	var sc ess.ScalingRuleItemType

	resource.Test(t, resource.TestCase{
		PreCheck: func() {
			testAccPreCheck(t)
		},

		// module name
		IDRefreshName: "alicloud_ess_scaling_rule.foo",

		Providers:    testAccProviders,
		CheckDestroy: testAccCheckEssScalingRuleDestroy,
		Steps: []resource.TestStep{
			resource.TestStep{
				Config: testAccEssScalingRuleConfig,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckEssScalingRuleExists(
						"alicloud_ess_scaling_rule.foo", &sc),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_rule.foo",
						"adjustment_type",
						"TotalCapacity"),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_rule.foo",
						"adjustment_value",
						"1"),
				),
			},
		},
	})
}

func TestAccAlicloudEssScalingRule_update(t *testing.T) {
	var sc ess.ScalingRuleItemType

	resource.Test(t, resource.TestCase{
		PreCheck: func() {
			testAccPreCheck(t)
		},

		// module name
		IDRefreshName: "alicloud_ess_scaling_rule.foo",

		Providers:    testAccProviders,
		CheckDestroy: testAccCheckEssScalingRuleDestroy,
		Steps: []resource.TestStep{
			resource.TestStep{
				Config: testAccEssScalingRule,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckEssScalingRuleExists(
						"alicloud_ess_scaling_rule.foo", &sc),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_rule.foo",
						"adjustment_type",
						"TotalCapacity"),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_rule.foo",
						"adjustment_value",
						"1"),
				),
			},

			resource.TestStep{
				Config: testAccEssScalingRule_update,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckEssScalingRuleExists(
						"alicloud_ess_scaling_rule.foo", &sc),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_rule.foo",
						"adjustment_type",
						"TotalCapacity"),
					resource.TestCheckResourceAttr(
						"alicloud_ess_scaling_rule.foo",
						"adjustment_value",
						"2"),
				),
			},
		},
	})
}

func testAccCheckEssScalingRuleExists(n string, d *ess.ScalingRuleItemType) resource.TestCheckFunc {
	return func(s *terraform.State) error {
		rs, ok := s.RootModule().Resources[n]
		if !ok {
			return fmt.Errorf("Not found: %s", n)
		}

		if rs.Primary.ID == "" {
			return fmt.Errorf("No ESS Scaling Rule ID is set")
		}

		client := testAccProvider.Meta().(*AliyunClient)
		ids := strings.Split(rs.Primary.ID, COLON_SEPARATED)
		attr, err := client.DescribeScalingRuleById(ids[0], ids[1])
		log.Printf("[DEBUG] check scaling rule %s attribute %#v", rs.Primary.ID, attr)

		if err != nil {
			return err
		}

		if attr == nil {
			return fmt.Errorf("Scaling rule not found")
		}

		*d = *attr
		return nil
	}
}

func testAccCheckEssScalingRuleDestroy(s *terraform.State) error {
	client := testAccProvider.Meta().(*AliyunClient)

	for _, rs := range s.RootModule().Resources {
		if rs.Type != "alicloud_ess_scaling_rule" {
			continue
		}
		ids := strings.Split(rs.Primary.ID, COLON_SEPARATED)
		ins, err := client.DescribeScalingRuleById(ids[0], ids[1])

		if ins != nil {
			return fmt.Errorf("ESS scaling rule still exists")
		}

		// Verify the error is what we want
		if err != nil {
			e, ok := err.(*common.Error)
			if ok && e.ErrorResponse.Code == InstanceNotfound {
				continue
			}
			return err
		}
	}

	return nil
}

const testAccEssScalingRuleConfig = `
data "alicloud_images" "ecs_image" {
	most_recent = true
	name_regex = "^centos_6\\w{1,5}[64].*"
}

resource "alicloud_security_group" "tf_test_foo" {
	description = "foo"
}

resource "alicloud_security_group_rule" "ssh-in" {
	type = "ingress"
	ip_protocol = "tcp"
	nic_type = "internet"
	policy = "accept"
	port_range = "22/22"
	priority = 1
	security_group_id = "${alicloud_security_group.tf_test_foo.id}"
	cidr_ip = "0.0.0.0/0"
}

resource "alicloud_ess_scaling_group" "bar" {
	min_size = 1
	max_size = 1
	scaling_group_name = "bar"
	removal_policies = ["OldestInstance", "NewestInstance"]
}

resource "alicloud_ess_scaling_configuration" "foo" {
	scaling_group_id = "${alicloud_ess_scaling_group.bar.id}"

	image_id = "${data.alicloud_images.ecs_image.images.0.id}"
	instance_type = "ecs.s2.large"
	io_optimized = "optimized"
	security_group_id = "${alicloud_security_group.tf_test_foo.id}"
}

resource "alicloud_ess_scaling_rule" "foo" {
	scaling_group_id = "${alicloud_ess_scaling_group.bar.id}"
	adjustment_type = "TotalCapacity"
	adjustment_value = 1
	cooldown = 120
}
`

const testAccEssScalingRule = `
data "alicloud_images" "ecs_image" {
	most_recent = true
	name_regex = "^centos_6\\w{1,5}[64].*"
}

resource "alicloud_security_group" "tf_test_foo" {
	description = "foo"
}

resource "alicloud_security_group_rule" "ssh-in" {
	type = "ingress"
	ip_protocol = "tcp"
	nic_type = "internet"
	policy = "accept"
	port_range = "22/22"
	priority = 1
	security_group_id = "${alicloud_security_group.tf_test_foo.id}"
	cidr_ip = "0.0.0.0/0"
}

resource "alicloud_ess_scaling_group" "bar" {
	min_size = 1
	max_size = 1
	scaling_group_name = "bar"
	removal_policies = ["OldestInstance", "NewestInstance"]
}

resource "alicloud_ess_scaling_configuration" "foo" {
	scaling_group_id = "${alicloud_ess_scaling_group.bar.id}"

	image_id = "${data.alicloud_images.ecs_image.images.0.id}"
	instance_type = "ecs.s2.large"
	io_optimized = "optimized"
	security_group_id = "${alicloud_security_group.tf_test_foo.id}"
}

resource "alicloud_ess_scaling_rule" "foo" {
	scaling_group_id = "${alicloud_ess_scaling_group.bar.id}"
	adjustment_type = "TotalCapacity"
	adjustment_value = 1
	cooldown = 120
}
`

const testAccEssScalingRule_update = `
data "alicloud_images" "ecs_image" {
	most_recent = true
	name_regex = "^centos_6\\w{1,5}[64].*"
}

resource "alicloud_security_group" "tf_test_foo" {
	description = "foo"
}

resource "alicloud_security_group_rule" "ssh-in" {
	type = "ingress"
	ip_protocol = "tcp"
	nic_type = "internet"
	policy = "accept"
	port_range = "22/22"
	priority = 1
	security_group_id = "${alicloud_security_group.tf_test_foo.id}"
	cidr_ip = "0.0.0.0/0"
}

resource "alicloud_ess_scaling_group" "bar" {
	min_size = 1
	max_size = 1
	scaling_group_name = "bar"
	removal_policies = ["OldestInstance", "NewestInstance"]
}

resource "alicloud_ess_scaling_configuration" "foo" {
	scaling_group_id = "${alicloud_ess_scaling_group.bar.id}"

	image_id = "${data.alicloud_images.ecs_image.images.0.id}"
	instance_type = "ecs.s2.large"
	io_optimized = "optimized"
	security_group_id = "${alicloud_security_group.tf_test_foo.id}"
}

resource "alicloud_ess_scaling_rule" "foo" {
	scaling_group_id = "${alicloud_ess_scaling_group.bar.id}"
	adjustment_type = "TotalCapacity"
	adjustment_value = 2
	cooldown = 60
}
`

@@ -0,0 +1,220 @@
package alicloud

import (
	"fmt"
	"time"

	"github.com/denverdino/aliyungo/common"
	"github.com/denverdino/aliyungo/ess"
	"github.com/hashicorp/terraform/helper/resource"
	"github.com/hashicorp/terraform/helper/schema"
)

func resourceAlicloudEssSchedule() *schema.Resource {
	return &schema.Resource{
		Create: resourceAliyunEssScheduleCreate,
		Read:   resourceAliyunEssScheduleRead,
		Update: resourceAliyunEssScheduleUpdate,
		Delete: resourceAliyunEssScheduleDelete,

		Schema: map[string]*schema.Schema{
			"scheduled_action": &schema.Schema{
				Type:     schema.TypeString,
				Required: true,
			},
			"launch_time": &schema.Schema{
				Type:     schema.TypeString,
				Required: true,
			},
			"scheduled_task_name": &schema.Schema{
				Type:     schema.TypeString,
				Optional: true,
			},
			"description": &schema.Schema{
				Type:     schema.TypeString,
				Computed: true,
				Optional: true,
			},
			"launch_expiration_time": &schema.Schema{
				Type:         schema.TypeInt,
				Default:      600,
				Optional:     true,
				ValidateFunc: validateIntegerInRange(0, 21600),
			},
			"recurrence_type": &schema.Schema{
				Type:     schema.TypeString,
				Computed: true,
				Optional: true,
				ValidateFunc: validateAllowedStringValue([]string{string(ess.Daily),
					string(ess.Weekly), string(ess.Monthly)}),
			},
			"recurrence_value": &schema.Schema{
				Type:     schema.TypeString,
				Computed: true,
				Optional: true,
			},
			"recurrence_end_time": &schema.Schema{
				Type:     schema.TypeString,
				Computed: true,
				Optional: true,
			},
			"task_enabled": &schema.Schema{
				Type:     schema.TypeBool,
				Default:  true,
				Optional: true,
			},
		},
	}
}

func resourceAliyunEssScheduleCreate(d *schema.ResourceData, meta interface{}) error {
	args, err := buildAlicloudEssScheduleArgs(d, meta)
	if err != nil {
		return err
	}

	essconn := meta.(*AliyunClient).essconn

	rule, err := essconn.CreateScheduledTask(args)
	if err != nil {
		return err
	}

	d.SetId(rule.ScheduledTaskId)

	return resourceAliyunEssScheduleUpdate(d, meta)
}

func resourceAliyunEssScheduleRead(d *schema.ResourceData, meta interface{}) error {
	client := meta.(*AliyunClient)

	rule, err := client.DescribeScheduleById(d.Id())
	if err != nil {
		if e, ok := err.(*common.Error); ok && e.Code == InstanceNotfound {
			d.SetId("")
			return nil
		}
		return fmt.Errorf("Error describing ESS schedule attribute: %#v", err)
	}

	d.Set("scheduled_action", rule.ScheduledAction)
	d.Set("launch_time", rule.LaunchTime)
	d.Set("scheduled_task_name", rule.ScheduledTaskName)
	d.Set("description", rule.Description)
	d.Set("launch_expiration_time", rule.LaunchExpirationTime)
	d.Set("recurrence_type", rule.RecurrenceType)
	d.Set("recurrence_value", rule.RecurrenceValue)
	d.Set("recurrence_end_time", rule.RecurrenceEndTime)
	d.Set("task_enabled", rule.TaskEnabled)

	return nil
}

func resourceAliyunEssScheduleUpdate(d *schema.ResourceData, meta interface{}) error {
	conn := meta.(*AliyunClient).essconn

	args := &ess.ModifyScheduledTaskArgs{
		ScheduledTaskId: d.Id(),
	}

	if d.HasChange("scheduled_task_name") {
		args.ScheduledTaskName = d.Get("scheduled_task_name").(string)
	}

	if d.HasChange("description") {
		args.Description = d.Get("description").(string)
	}

	if d.HasChange("scheduled_action") {
		args.ScheduledAction = d.Get("scheduled_action").(string)
	}

	if d.HasChange("launch_time") {
		args.LaunchTime = d.Get("launch_time").(string)
	}

	if d.HasChange("launch_expiration_time") {
		args.LaunchExpirationTime = d.Get("launch_expiration_time").(int)
	}

	if d.HasChange("recurrence_type") {
		args.RecurrenceType = ess.RecurrenceType(d.Get("recurrence_type").(string))
	}

	if d.HasChange("recurrence_value") {
		args.RecurrenceValue = d.Get("recurrence_value").(string)
	}

	if d.HasChange("recurrence_end_time") {
		args.RecurrenceEndTime = d.Get("recurrence_end_time").(string)
	}

	if d.HasChange("task_enabled") {
		args.TaskEnabled = d.Get("task_enabled").(bool)
	}

	if _, err := conn.ModifyScheduledTask(args); err != nil {
		return err
	}

	return resourceAliyunEssScheduleRead(d, meta)
}

func resourceAliyunEssScheduleDelete(d *schema.ResourceData, meta interface{}) error {
	client := meta.(*AliyunClient)

	return resource.Retry(2*time.Minute, func() *resource.RetryError {
		err := client.DeleteScheduleById(d.Id())

		if err != nil {
			return resource.RetryableError(fmt.Errorf("Scaling schedule in use - trying again while it is being deleted."))
		}

		_, err = client.DescribeScheduleById(d.Id())
		if err != nil {
			if notFoundError(err) {
				return nil
			}
			return resource.NonRetryableError(err)
		}

		return resource.RetryableError(fmt.Errorf("Scaling schedule in use - trying again while it is being deleted."))
	})
}

func buildAlicloudEssScheduleArgs(d *schema.ResourceData, meta interface{}) (*ess.CreateScheduledTaskArgs, error) {
	args := &ess.CreateScheduledTaskArgs{
		RegionId:        getRegion(d, meta),
		ScheduledAction: d.Get("scheduled_action").(string),
		LaunchTime:      d.Get("launch_time").(string),
		TaskEnabled:     d.Get("task_enabled").(bool),
	}

	if v := d.Get("scheduled_task_name").(string); v != "" {
		args.ScheduledTaskName = v
	}

	if v := d.Get("description").(string); v != "" {
		args.Description = v
	}

	if v := d.Get("recurrence_type").(string); v != "" {
		args.RecurrenceType = ess.RecurrenceType(v)
	}

	if v := d.Get("recurrence_value").(string); v != "" {
		args.RecurrenceValue = v
	}

	if v := d.Get("recurrence_end_time").(string); v != "" {
		args.RecurrenceEndTime = v
	}

	if v := d.Get("launch_expiration_time").(int); v != 0 {
		args.LaunchExpirationTime = v
	}

	return args, nil
}

@@ -0,0 +1,151 @@
package alicloud

import (
	"fmt"
	"log"
	"testing"

	"github.com/denverdino/aliyungo/common"
	"github.com/denverdino/aliyungo/ess"
	"github.com/hashicorp/terraform/helper/resource"
	"github.com/hashicorp/terraform/terraform"
)

func TestAccAlicloudEssSchedule_basic(t *testing.T) {
	var sc ess.ScheduledTaskItemType

	resource.Test(t, resource.TestCase{
		PreCheck: func() {
			testAccPreCheck(t)
		},

		// module name
		IDRefreshName: "alicloud_ess_schedule.foo",

		Providers:    testAccProviders,
		CheckDestroy: testAccCheckEssScheduleDestroy,
		Steps: []resource.TestStep{
			resource.TestStep{
				Config: testAccEssScheduleConfig,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckEssScheduleExists(
						"alicloud_ess_schedule.foo", &sc),
					resource.TestCheckResourceAttr(
						"alicloud_ess_schedule.foo",
						"launch_time",
						"2017-04-29T07:30Z"),
					resource.TestCheckResourceAttr(
						"alicloud_ess_schedule.foo",
						"task_enabled",
						"true"),
				),
			},
		},
	})
}

func testAccCheckEssScheduleExists(n string, d *ess.ScheduledTaskItemType) resource.TestCheckFunc {
	return func(s *terraform.State) error {
		rs, ok := s.RootModule().Resources[n]
		if !ok {
			return fmt.Errorf("Not found: %s", n)
		}

		if rs.Primary.ID == "" {
			return fmt.Errorf("No ESS Schedule ID is set")
		}

		client := testAccProvider.Meta().(*AliyunClient)
		attr, err := client.DescribeScheduleById(rs.Primary.ID)
		log.Printf("[DEBUG] check schedule %s attribute %#v", rs.Primary.ID, attr)

		if err != nil {
			return err
		}

		if attr == nil {
			return fmt.Errorf("Ess schedule not found")
		}

		*d = *attr
		return nil
	}
}

func testAccCheckEssScheduleDestroy(s *terraform.State) error {
	client := testAccProvider.Meta().(*AliyunClient)

	for _, rs := range s.RootModule().Resources {
		if rs.Type != "alicloud_ess_schedule" {
			continue
		}
		ins, err := client.DescribeScheduleById(rs.Primary.ID)

		if ins != nil {
			return fmt.Errorf("ESS schedule still exists")
		}

		// Verify the error is what we want
		if err != nil {
			e, ok := err.(*common.Error)
			if ok && e.ErrorResponse.Code == InstanceNotfound {
				continue
			}
			return err
		}
	}

	return nil
}

const testAccEssScheduleConfig = `
data "alicloud_images" "ecs_image" {
	most_recent = true
	name_regex = "^centos_6\\w{1,5}[64].*"
}

resource "alicloud_security_group" "tf_test_foo" {
	name = "tf_test_foo"
	description = "foo"
}

resource "alicloud_security_group_rule" "ssh-in" {
	type = "ingress"
	ip_protocol = "tcp"
	nic_type = "internet"
	policy = "accept"
	port_range = "22/22"
	priority = 1
	security_group_id = "${alicloud_security_group.tf_test_foo.id}"
	cidr_ip = "0.0.0.0/0"
}

resource "alicloud_ess_scaling_group" "bar" {
	min_size = 1
	max_size = 1
	scaling_group_name = "bar"
	removal_policies = ["OldestInstance", "NewestInstance"]
}

resource "alicloud_ess_scaling_configuration" "foo" {
	scaling_group_id = "${alicloud_ess_scaling_group.bar.id}"

	image_id = "${data.alicloud_images.ecs_image.images.0.id}"
	instance_type = "ecs.s2.large"
	io_optimized = "optimized"
	security_group_id = "${alicloud_security_group.tf_test_foo.id}"
}

resource "alicloud_ess_scaling_rule" "foo" {
	scaling_group_id = "${alicloud_ess_scaling_group.bar.id}"
	adjustment_type = "TotalCapacity"
	adjustment_value = 2
	cooldown = 60
}

resource "alicloud_ess_schedule" "foo" {
	scheduled_action = "${alicloud_ess_scaling_rule.foo.ari}"
	launch_time = "2017-04-29T07:30Z"
	scheduled_task_name = "tf-foo"
}
`

@@ -0,0 +1,165 @@
package alicloud

import (
	"fmt"

	"github.com/denverdino/aliyungo/ecs"
	"github.com/hashicorp/terraform/helper/schema"
)

func resourceAliyunForwardEntry() *schema.Resource {
	return &schema.Resource{
		Create: resourceAliyunForwardEntryCreate,
		Read:   resourceAliyunForwardEntryRead,
		Update: resourceAliyunForwardEntryUpdate,
		Delete: resourceAliyunForwardEntryDelete,

		Schema: map[string]*schema.Schema{
			"forward_table_id": &schema.Schema{
				Type:     schema.TypeString,
				Required: true,
				ForceNew: true,
			},
			"external_ip": &schema.Schema{
				Type:     schema.TypeString,
				Required: true,
				ForceNew: true,
			},
			"external_port": &schema.Schema{
				Type:         schema.TypeString,
				Required:     true,
				ValidateFunc: validateForwardPort,
			},
			"ip_protocol": &schema.Schema{
				Type:         schema.TypeString,
				Required:     true,
				ValidateFunc: validateAllowedStringValue([]string{"tcp", "udp", "any"}),
			},
			"internal_ip": &schema.Schema{
				Type:     schema.TypeString,
				Required: true,
			},
			"internal_port": &schema.Schema{
				Type:         schema.TypeString,
				Required:     true,
				ValidateFunc: validateForwardPort,
			},
		},
	}
}

func resourceAliyunForwardEntryCreate(d *schema.ResourceData, meta interface{}) error {
	conn := meta.(*AliyunClient).vpcconn

	args := &ecs.CreateForwardEntryArgs{
		RegionId:       getRegion(d, meta),
		ForwardTableId: d.Get("forward_table_id").(string),
		ExternalIp:     d.Get("external_ip").(string),
		ExternalPort:   d.Get("external_port").(string),
		IpProtocol:     d.Get("ip_protocol").(string),
		InternalIp:     d.Get("internal_ip").(string),
		InternalPort:   d.Get("internal_port").(string),
	}

	resp, err := conn.CreateForwardEntry(args)
	if err != nil {
		return fmt.Errorf("CreateForwardEntry got error: %#v", err)
	}

	d.SetId(resp.ForwardEntryId)
	d.Set("forward_table_id", d.Get("forward_table_id").(string))

	return resourceAliyunForwardEntryRead(d, meta)
}

func resourceAliyunForwardEntryRead(d *schema.ResourceData, meta interface{}) error {
	client := meta.(*AliyunClient)

	forwardEntry, err := client.DescribeForwardEntry(d.Get("forward_table_id").(string), d.Id())

	if err != nil {
		if notFoundError(err) {
			// The entry no longer exists upstream; remove it from state.
			d.SetId("")
			return nil
		}
		return err
	}

	d.Set("forward_table_id", forwardEntry.ForwardTableId)
	d.Set("external_ip", forwardEntry.ExternalIp)
	d.Set("external_port", forwardEntry.ExternalPort)
|
||||
d.Set("ip_protocol", forwardEntry.IpProtocol)
|
||||
d.Set("internal_ip", forwardEntry.InternalIp)
|
||||
d.Set("internal_port", forwardEntry.InternalPort)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func resourceAliyunForwardEntryUpdate(d *schema.ResourceData, meta interface{}) error {
|
||||
client := meta.(*AliyunClient)
|
||||
conn := client.vpcconn
|
||||
|
||||
forwardEntry, err := client.DescribeForwardEntry(d.Get("forward_table_id").(string), d.Id())
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
d.Partial(true)
|
||||
attributeUpdate := false
|
||||
args := &ecs.ModifyForwardEntryArgs{
|
||||
RegionId: getRegion(d, meta),
|
||||
ForwardTableId: forwardEntry.ForwardTableId,
|
||||
ForwardEntryId: forwardEntry.ForwardEntryId,
|
||||
ExternalIp: forwardEntry.ExternalIp,
|
||||
IpProtocol: forwardEntry.IpProtocol,
|
||||
ExternalPort: forwardEntry.ExternalPort,
|
||||
InternalIp: forwardEntry.InternalIp,
|
||||
InternalPort: forwardEntry.InternalPort,
|
||||
}
|
||||
|
||||
if d.HasChange("external_port") {
|
||||
d.SetPartial("external_port")
|
||||
args.ExternalPort = d.Get("external_port").(string)
|
||||
attributeUpdate = true
|
||||
}
|
||||
|
||||
if d.HasChange("ip_protocol") {
|
||||
d.SetPartial("ip_protocol")
|
||||
args.IpProtocol = d.Get("ip_protocol").(string)
|
||||
attributeUpdate = true
|
||||
}
|
||||
|
||||
if d.HasChange("internal_port") {
|
||||
d.SetPartial("internal_port")
|
||||
args.InternalPort = d.Get("internal_port").(string)
|
||||
attributeUpdate = true
|
||||
}
|
||||
|
||||
if attributeUpdate {
|
||||
if err := conn.ModifyForwardEntry(args); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
d.Partial(false)
|
||||
|
||||
return resourceAliyunForwardEntryRead(d, meta)
|
||||
}
|
||||
|
||||
func resourceAliyunForwardEntryDelete(d *schema.ResourceData, meta interface{}) error {
|
||||
client := meta.(*AliyunClient)
|
||||
conn := client.vpcconn
|
||||
|
||||
forwardEntryId := d.Id()
|
||||
forwardTableId := d.Get("forward_table_id").(string)
|
||||
|
||||
args := &ecs.DeleteForwardEntryArgs{
|
||||
RegionId: getRegion(d, meta),
|
||||
ForwardTableId: forwardTableId,
|
||||
ForwardEntryId: forwardEntryId,
|
||||
}
|
||||
|
||||
if err := conn.DeleteForwardEntry(args); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
|
@ -0,0 +1,216 @@
package alicloud

import (
	"fmt"
	"testing"

	"github.com/denverdino/aliyungo/common"
	"github.com/denverdino/aliyungo/ecs"
	"github.com/hashicorp/terraform/helper/resource"
	"github.com/hashicorp/terraform/terraform"
)

func TestAccAlicloudForward_basic(t *testing.T) {
	var forward ecs.ForwardTableEntrySetType

	resource.Test(t, resource.TestCase{
		PreCheck: func() {
			testAccPreCheck(t)
		},

		// module name
		IDRefreshName: "alicloud_forward_entry.foo",
		Providers:     testAccProviders,
		CheckDestroy:  testAccCheckForwardEntryDestroy,
		Steps: []resource.TestStep{
			resource.TestStep{
				Config: testAccForwardEntryConfig,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckForwardEntryExists(
						"alicloud_forward_entry.foo", &forward),
				),
			},

			resource.TestStep{
				Config: testAccForwardEntryUpdate,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckForwardEntryExists(
						"alicloud_forward_entry.foo", &forward),
				),
			},
		},
	})

}

func testAccCheckForwardEntryDestroy(s *terraform.State) error {
	client := testAccProvider.Meta().(*AliyunClient)

	for _, rs := range s.RootModule().Resources {
		if rs.Type != "alicloud_forward_entry" {
			continue
		}

		// Try to find the forward entry
		instance, err := client.DescribeForwardEntry(rs.Primary.Attributes["forward_table_id"], rs.Primary.ID)

		// DescribeForwardEntry cannot find deleted records and instead returns a
		// "can't find the forward table" error, so an empty ForwardEntryId means
		// the entry has been destroyed.
		if instance.ForwardEntryId == "" {
			return nil
		}

		if instance.ForwardEntryId != "" {
			return fmt.Errorf("Forward entry still exists")
		}

		if err != nil {
			// Verify the error is what we want
			e, _ := err.(*common.Error)

			if !notFoundError(e) {
				return err
			}
		}

	}

	return nil
}

func testAccCheckForwardEntryExists(n string, snat *ecs.ForwardTableEntrySetType) resource.TestCheckFunc {
	return func(s *terraform.State) error {
		rs, ok := s.RootModule().Resources[n]
		if !ok {
			return fmt.Errorf("Not found: %s", n)
		}

		if rs.Primary.ID == "" {
			return fmt.Errorf("No ForwardEntry ID is set")
		}

		client := testAccProvider.Meta().(*AliyunClient)
		instance, err := client.DescribeForwardEntry(rs.Primary.Attributes["forward_table_id"], rs.Primary.ID)

		if err != nil {
			return err
		}
		if instance.ForwardEntryId == "" {
			return fmt.Errorf("ForwardEntry not found")
		}

		*snat = instance
		return nil
	}
}

const testAccForwardEntryConfig = `
provider "alicloud"{
	region = "cn-hangzhou"
}

data "alicloud_zones" "default" {
	"available_resource_creation"= "VSwitch"
}

resource "alicloud_vpc" "foo" {
	name = "tf_test_foo"
	cidr_block = "172.16.0.0/12"
}

resource "alicloud_vswitch" "foo" {
	vpc_id = "${alicloud_vpc.foo.id}"
	cidr_block = "172.16.0.0/21"
	availability_zone = "${data.alicloud_zones.default.zones.0.id}"
}

resource "alicloud_nat_gateway" "foo" {
	vpc_id = "${alicloud_vpc.foo.id}"
	spec = "Small"
	name = "test_foo"
	bandwidth_packages = [{
		ip_count = 1
		bandwidth = 5
		zone = "${data.alicloud_zones.default.zones.0.id}"
	},{
		ip_count = 1
		bandwidth = 6
		zone = "${data.alicloud_zones.default.zones.0.id}"
	}]
	depends_on = [
		"alicloud_vswitch.foo"]
}

resource "alicloud_forward_entry" "foo"{
	forward_table_id = "${alicloud_nat_gateway.foo.forward_table_ids}"
	external_ip = "${alicloud_nat_gateway.foo.bandwidth_packages.0.public_ip_addresses}"
	external_port = "80"
	ip_protocol = "tcp"
	internal_ip = "172.16.0.3"
	internal_port = "8080"
}

resource "alicloud_forward_entry" "foo1"{
	forward_table_id = "${alicloud_nat_gateway.foo.forward_table_ids}"
	external_ip = "${alicloud_nat_gateway.foo.bandwidth_packages.0.public_ip_addresses}"
	external_port = "443"
	ip_protocol = "udp"
	internal_ip = "172.16.0.4"
	internal_port = "8080"
}
`

const testAccForwardEntryUpdate = `
provider "alicloud"{
	region = "cn-hangzhou"
}

data "alicloud_zones" "default" {
	"available_resource_creation"= "VSwitch"
}

resource "alicloud_vpc" "foo" {
	name = "tf_test_foo"
	cidr_block = "172.16.0.0/12"
}

resource "alicloud_vswitch" "foo" {
	vpc_id = "${alicloud_vpc.foo.id}"
	cidr_block = "172.16.0.0/21"
	availability_zone = "${data.alicloud_zones.default.zones.0.id}"
}

resource "alicloud_nat_gateway" "foo" {
	vpc_id = "${alicloud_vpc.foo.id}"
	spec = "Small"
	name = "test_foo"
	bandwidth_packages = [{
		ip_count = 1
		bandwidth = 5
		zone = "${data.alicloud_zones.default.zones.0.id}"
	},{
		ip_count = 1
		bandwidth = 6
		zone = "${data.alicloud_zones.default.zones.0.id}"
	}]
	depends_on = [
		"alicloud_vswitch.foo"]
}

resource "alicloud_forward_entry" "foo"{
	forward_table_id = "${alicloud_nat_gateway.foo.forward_table_ids}"
	external_ip = "${alicloud_nat_gateway.foo.bandwidth_packages.0.public_ip_addresses}"
	external_port = "80"
	ip_protocol = "tcp"
	internal_ip = "172.16.0.3"
	internal_port = "8081"
}


resource "alicloud_forward_entry" "foo1"{
	forward_table_id = "${alicloud_nat_gateway.foo.forward_table_ids}"
	external_ip = "${alicloud_nat_gateway.foo.bandwidth_packages.0.public_ip_addresses}"
	external_port = "22"
	ip_protocol = "udp"
	internal_ip = "172.16.0.4"
	internal_port = "8080"
}
`
@ -8,8 +8,10 @@ import (
	"encoding/json"
	"github.com/denverdino/aliyungo/common"
	"github.com/denverdino/aliyungo/ecs"
	"github.com/hashicorp/terraform/helper/resource"
	"github.com/hashicorp/terraform/helper/schema"
	"strings"
	"time"
)

func resourceAliyunInstance() *schema.Resource {
@ -193,11 +195,8 @@ func resourceAliyunInstanceCreate(d *schema.ResourceData, meta interface{}) erro
	//d.Set("system_disk_category", d.Get("system_disk_category"))
	//d.Set("system_disk_size", d.Get("system_disk_size"))

	if d.Get("allocate_public_ip").(bool) {
		_, err := conn.AllocatePublicIpAddress(d.Id())
		if err != nil {
			log.Printf("[DEBUG] AllocatePublicIpAddress for instance got error: %#v", err)
		}
	if err := allocateIpAndBandWidthRelative(d, meta); err != nil {
		return fmt.Errorf("allocateIpAndBandWidthRelative err: %#v", err)
	}

	// after instance created, its status is pending,
@ -226,6 +225,12 @@ func resourceAliyunRunInstance(d *schema.ResourceData, meta interface{}) error {
		return err
	}

	if args.IoOptimized == "optimized" {
		args.IoOptimized = ecs.IoOptimized("true")
	} else {
		args.IoOptimized = ecs.IoOptimized("false")
	}

	runArgs, err := buildAliyunRunInstancesArgs(d, meta)
	if err != nil {
		return err
@ -246,14 +251,15 @@ func resourceAliyunRunInstance(d *schema.ResourceData, meta interface{}) error {
	d.Set("system_disk_category", d.Get("system_disk_category"))
	d.Set("system_disk_size", d.Get("system_disk_size"))

	if d.Get("allocate_public_ip").(bool) {
		_, err := conn.AllocatePublicIpAddress(d.Id())
		if err != nil {
			log.Printf("[DEBUG] AllocatePublicIpAddress for instance got error: %#v", err)
		}
	// after instance created, its status change from pending, starting to running
	if err := conn.WaitForInstanceAsyn(d.Id(), ecs.Running, defaultTimeout); err != nil {
		log.Printf("[DEBUG] WaitForInstance %s got error: %#v", ecs.Running, err)
	}

	if err := allocateIpAndBandWidthRelative(d, meta); err != nil {
		return fmt.Errorf("allocateIpAndBandWidthRelative err: %#v", err)
	}

	// after instance created, its status change from pending, starting to running
	if err := conn.WaitForInstanceAsyn(d.Id(), ecs.Running, defaultTimeout); err != nil {
		log.Printf("[DEBUG] WaitForInstance %s got error: %#v", ecs.Running, err)
	}
@ -451,30 +457,47 @@ func resourceAliyunInstanceDelete(d *schema.ResourceData, meta interface{}) erro
	client := meta.(*AliyunClient)
	conn := client.ecsconn

	instance, err := client.QueryInstancesById(d.Id())
	if err != nil {
		if notFoundError(err) {
			return nil
		}
		return fmt.Errorf("Error DescribeInstanceAttribute: %#v", err)
	}

	if instance.Status != ecs.Stopped {
		if err := conn.StopInstance(d.Id(), true); err != nil {
			return err
	return resource.Retry(5*time.Minute, func() *resource.RetryError {
		instance, err := client.QueryInstancesById(d.Id())
		if err != nil {
			if notFoundError(err) {
				return nil
			}
		}

		if err := conn.WaitForInstance(d.Id(), ecs.Stopped, defaultTimeout); err != nil {
			return err
		if instance.Status != ecs.Stopped {
			if err := conn.StopInstance(d.Id(), true); err != nil {
				return resource.RetryableError(fmt.Errorf("ECS stop error - trying again."))
			}

			if err := conn.WaitForInstance(d.Id(), ecs.Stopped, defaultTimeout); err != nil {
				return resource.RetryableError(fmt.Errorf("Waiting for ecs stopped timeout - trying again."))
			}
		}

		if err := conn.DeleteInstance(d.Id()); err != nil {
			return resource.RetryableError(fmt.Errorf("ECS Instance in use - trying again while it is deleted."))
		}

		return nil
	})

}

func allocateIpAndBandWidthRelative(d *schema.ResourceData, meta interface{}) error {
	conn := meta.(*AliyunClient).ecsconn
	if d.Get("allocate_public_ip").(bool) {
		if d.Get("internet_max_bandwidth_out") == 0 {
			return fmt.Errorf("Error: if allocate_public_ip is true then internet_max_bandwidth_out cannot equal zero.")
		}
		_, err := conn.AllocatePublicIpAddress(d.Id())
		if err != nil {
			return fmt.Errorf("[DEBUG] AllocatePublicIpAddress for instance got error: %#v", err)
		}
	}

	if err := conn.DeleteInstance(d.Id()); err != nil {
		return err
	}

	return nil
}

func buildAliyunRunInstancesArgs(d *schema.ResourceData, meta interface{}) (*ecs.RunInstanceArgs, error) {
	args := &ecs.RunInstanceArgs{
		MaxAmount: DEFAULT_INSTANCE_COUNT,
@ -560,7 +583,6 @@ func buildAliyunInstanceArgs(d *schema.ResourceData, meta interface{}) (*ecs.Cre
		args.Description = v
	}

	log.Printf("[DEBUG] SystemDisk is %d", systemDiskSize)
	if v := d.Get("internet_charge_type").(string); v != "" {
		args.InternetChargeType = common.InternetChargeType(v)
	}
@ -578,11 +600,7 @@ func buildAliyunInstanceArgs(d *schema.ResourceData, meta interface{}) (*ecs.Cre
	}

	if v := d.Get("io_optimized").(string); v != "" {
		if v == "optimized" {
			args.IoOptimized = ecs.IoOptimized("true")
		} else {
			args.IoOptimized = ecs.IoOptimized("false")
		}
		args.IoOptimized = ecs.IoOptimized(v)
	}

	vswitchValue := d.Get("subnet_id").(string)
@ -4,12 +4,13 @@ import (
	"fmt"
	"testing"

	"log"

	"github.com/denverdino/aliyungo/common"
	"github.com/denverdino/aliyungo/ecs"
	"github.com/hashicorp/terraform/helper/resource"
	"github.com/hashicorp/terraform/helper/schema"
	"github.com/hashicorp/terraform/terraform"
	"log"
)

func TestAccAlicloudInstance_basic(t *testing.T) {
@ -456,6 +457,17 @@ func TestAccAlicloudInstance_associatePublicIP(t *testing.T) {
		}
	}

	testCheckPublicIP := func() resource.TestCheckFunc {
		return func(*terraform.State) error {
			publicIP := instance.PublicIpAddress.IpAddress[0]
			if publicIP == "" {
				return fmt.Errorf("can't get public IP")
			}

			return nil
		}
	}

	resource.Test(t, resource.TestCase{
		PreCheck: func() {
			testAccPreCheck(t)
@ -469,6 +481,7 @@ func TestAccAlicloudInstance_associatePublicIP(t *testing.T) {
				Check: resource.ComposeTestCheckFunc(
					testAccCheckInstanceExists("alicloud_instance.foo", &instance),
					testCheckPrivateIP(),
					testCheckPublicIP(),
				),
			},
		},
@ -8,6 +8,7 @@ import (
	"github.com/hashicorp/terraform/helper/resource"
	"github.com/hashicorp/terraform/helper/schema"
	"log"
	"strconv"
	"strings"
	"time"
)
@ -44,6 +45,16 @@ func resourceAliyunNatGateway() *schema.Resource {
				Computed: true,
			},

			"snat_table_ids": &schema.Schema{
				Type:     schema.TypeString,
				Computed: true,
			},

			"forward_table_ids": &schema.Schema{
				Type:     schema.TypeString,
				Computed: true,
			},

			"bandwidth_packages": &schema.Schema{
				Type: schema.TypeList,
				Elem: &schema.Resource{
@ -60,6 +71,10 @@ func resourceAliyunNatGateway() *schema.Resource {
							Type:     schema.TypeString,
							Optional: true,
						},
						"public_ip_addresses": &schema.Schema{
							Type:     schema.TypeString,
							Computed: true,
						},
					},
				},
				Required: true,
@ -133,8 +148,16 @@ func resourceAliyunNatGatewayRead(d *schema.ResourceData, meta interface{}) erro
	d.Set("name", natGateway.Name)
	d.Set("spec", natGateway.Spec)
	d.Set("bandwidth_package_ids", strings.Join(natGateway.BandwidthPackageIds.BandwidthPackageId, ","))
	d.Set("snat_table_ids", strings.Join(natGateway.SnatTableIds.SnatTableId, ","))
	d.Set("forward_table_ids", strings.Join(natGateway.ForwardTableIds.ForwardTableId, ","))
	d.Set("description", natGateway.Description)
	d.Set("vpc_id", natGateway.VpcId)
	bindWidthPackages, err := flattenBandWidthPackages(natGateway.BandwidthPackageIds.BandwidthPackageId, meta, d)
	if err != nil {
		log.Printf("[ERROR] bindWidthPackages flattenBandWidthPackages failed. natgateway id is %#v", d.Id())
	} else {
		d.Set("bandwidth_packages", bindWidthPackages)
	}

	return nil
}
@ -254,7 +277,7 @@ func resourceAliyunNatGatewayDelete(d *schema.ResourceData, meta interface{}) er
	}

	args := &ecs.DeleteNatGatewayArgs{
		RegionId:     client.Region,
		RegionId:     getRegion(d, meta),
		NatGatewayId: d.Id(),
	}

@ -267,7 +290,7 @@ func resourceAliyunNatGatewayDelete(d *schema.ResourceData, meta interface{}) er
	}

	describeArgs := &ecs.DescribeNatGatewaysArgs{
		RegionId:     client.Region,
		RegionId:     getRegion(d, meta),
		NatGatewayId: d.Id(),
	}
	gw, _, gwErr := conn.DescribeNatGateways(describeArgs)
@ -282,3 +305,69 @@ func resourceAliyunNatGatewayDelete(d *schema.ResourceData, meta interface{}) er
		return resource.RetryableError(fmt.Errorf("NatGateway in use - trying again while it is deleted."))
	})
}

func flattenBandWidthPackages(bandWidthPackageIds []string, meta interface{}, d *schema.ResourceData) ([]map[string]interface{}, error) {

	packageLen := len(bandWidthPackageIds)
	result := make([]map[string]interface{}, 0, packageLen)

	for i := packageLen - 1; i >= 0; i-- {
		packageId := bandWidthPackageIds[i]
		packages, err := getPackages(packageId, meta, d)
		if err != nil {
			log.Printf("[ERROR] NatGateways getPackages failed. packageId is %#v", packageId)
			return result, err
		}
		ipAddress := flattenPackPublicIp(packages.PublicIpAddresses.PublicIpAddresse)
		ipCont, ipContErr := strconv.Atoi(packages.IpCount)
		bandWidth, bandWidthErr := strconv.Atoi(packages.Bandwidth)
		if ipContErr != nil {
			log.Printf("[ERROR] NatGateways getPackages failed: ipCont convert error. packageId is %#v", packageId)
			return result, ipContErr
		}
		if bandWidthErr != nil {
			log.Printf("[ERROR] NatGateways getPackages failed: bandWidthErr convert error. packageId is %#v", packageId)
			return result, bandWidthErr
		}
		l := map[string]interface{}{
			"ip_count":            ipCont,
			"bandwidth":           bandWidth,
			"zone":                packages.ZoneId,
			"public_ip_addresses": ipAddress,
		}
		result = append(result, l)
	}
	return result, nil
}

func getPackages(packageId string, meta interface{}, d *schema.ResourceData) (*ecs.DescribeBandwidthPackageType, error) {
	client := meta.(*AliyunClient)
	conn := client.vpcconn
	packages, err := conn.DescribeBandwidthPackages(&ecs.DescribeBandwidthPackagesArgs{
		RegionId:           getRegion(d, meta),
		BandwidthPackageId: packageId,
	})

	if err != nil {
		log.Printf("[ERROR] Describe bandwidth package is failed, BandwidthPackageId Id: %s", packageId)
		return nil, err
	}

	if len(packages) == 0 {
		return nil, common.GetClientErrorFromString(InstanceNotfound)
	}

	return &packages[0], nil

}

func flattenPackPublicIp(publicIpAddressList []ecs.PublicIpAddresseType) string {
	var result []string

	for _, publicIpAddresses := range publicIpAddressList {
		ipAddress := publicIpAddresses.IpAddress
		result = append(result, ipAddress)
	}

	return strings.Join(result, ",")
}
@ -48,6 +48,7 @@ func TestAccAlicloudNatGateway_basic(t *testing.T) {
						"alicloud_nat_gateway.foo",
						"name",
						"test_foo"),
					testAccCheckNatgatewayIpAddress("alicloud_nat_gateway.foo", &nat),
				),
			},
		},
@ -96,6 +97,31 @@ func TestAccAlicloudNatGateway_spec(t *testing.T) {

}

func testAccCheckNatgatewayIpAddress(n string, nat *ecs.NatGatewaySetType) resource.TestCheckFunc {
	return func(s *terraform.State) error {
		rs, ok := s.RootModule().Resources[n]
		if !ok {
			return fmt.Errorf("Not found: %s", n)
		}

		if rs.Primary.ID == "" {
			return fmt.Errorf("No NatGateway ID is set")
		}

		client := testAccProvider.Meta().(*AliyunClient)
		natGateway, err := client.DescribeNatGateway(rs.Primary.ID)

		if err != nil {
			return err
		}
		if natGateway == nil {
			return fmt.Errorf("Natgateway not found")
		}

		return nil
	}
}

func testAccCheckNatGatewayExists(n string, nat *ecs.NatGatewaySetType) resource.TestCheckFunc {
	return func(s *terraform.State) error {
		rs, ok := s.RootModule().Resources[n]
@ -164,7 +190,7 @@ resource "alicloud_vpc" "foo" {
resource "alicloud_vswitch" "foo" {
	vpc_id = "${alicloud_vpc.foo.id}"
	cidr_block = "172.16.0.0/21"
	availability_zone = "${data.alicloud_zones.default.zones.0.id}"
	availability_zone = "${data.alicloud_zones.default.zones.2.id}"
}

resource "alicloud_nat_gateway" "foo" {
@ -174,11 +200,19 @@ resource "alicloud_nat_gateway" "foo" {
	bandwidth_packages = [{
		ip_count = 1
		bandwidth = 5
		zone = "${data.alicloud_zones.default.zones.0.id}"
		zone = "${data.alicloud_zones.default.zones.2.id}"
	}, {
		ip_count = 2
		bandwidth = 10
		zone = "${data.alicloud_zones.default.zones.0.id}"
		bandwidth = 6
		zone = "${data.alicloud_zones.default.zones.2.id}"
	}, {
		ip_count = 3
		bandwidth = 7
		zone = "${data.alicloud_zones.default.zones.2.id}"
	}, {
		ip_count = 1
		bandwidth = 8
		zone = "${data.alicloud_zones.default.zones.2.id}"
	}]
	depends_on = [
		"alicloud_vswitch.foo"]
@ -74,6 +74,11 @@ func resourceAliyunSecurityGroupRead(d *schema.ResourceData, meta interface{}) e
		return fmt.Errorf("Error DescribeSecurityGroupAttribute: %#v", err)
	}

	if sg == nil {
		d.SetId("")
		return nil
	}

	d.Set("name", sg.SecurityGroupName)
	d.Set("description", sg.Description)

@ -3,9 +3,10 @@ package alicloud
import (
	"fmt"
	"github.com/denverdino/aliyungo/ecs"
	"github.com/hashicorp/terraform/helper/resource"
	"github.com/hashicorp/terraform/helper/schema"
	"log"
	"strings"
	"time"
)

func resourceAliyunSecurityGroupRule() *schema.Resource {
@ -141,7 +142,7 @@ func resourceAliyunSecurityGroupRuleRead(d *schema.ResourceData, meta interface{
		}
		return fmt.Errorf("Error SecurityGroup rule: %#v", err)
	}
	log.Printf("[WARN]sg %s, type %s, protocol %s, port %s, rule %#v", sgId, direction, ip_protocol, port_range, rule)

	d.Set("type", rule.Direction)
	d.Set("ip_protocol", strings.ToLower(string(rule.IpProtocol)))
	d.Set("nic_type", rule.NicType)
@ -163,7 +164,7 @@ func resourceAliyunSecurityGroupRuleRead(d *schema.ResourceData, meta interface{
	return nil
}

func resourceAliyunSecurityGroupRuleDelete(d *schema.ResourceData, meta interface{}) error {
func deleteSecurityGroupRule(d *schema.ResourceData, meta interface{}) error {
	client := meta.(*AliyunClient)
	ruleType := d.Get("type").(string)

@ -187,6 +188,30 @@ func resourceAliyunSecurityGroupRuleDelete(d *schema.ResourceData, meta interfac
		AuthorizeSecurityGroupEgressArgs: *args,
	}
	return client.RevokeSecurityGroupEgress(revokeArgs)
}

func resourceAliyunSecurityGroupRuleDelete(d *schema.ResourceData, meta interface{}) error {
	client := meta.(*AliyunClient)
	parts := strings.Split(d.Id(), ":")
	sgId, direction, ip_protocol, port_range, nic_type := parts[0], parts[1], parts[2], parts[3], parts[4]

	return resource.Retry(5*time.Minute, func() *resource.RetryError {
		err := deleteSecurityGroupRule(d, meta)

		if err != nil {
			return resource.RetryableError(fmt.Errorf("Security group rule in use - trying again while it is deleted."))
		}

		_, err = client.DescribeSecurityGroupRule(sgId, direction, nic_type, ip_protocol, port_range)
		if err != nil {
			if notFoundError(err) {
				return nil
			}
			return resource.NonRetryableError(err)
		}

		return resource.RetryableError(fmt.Errorf("Security group rule in use - trying again while it is deleted."))
	})

}
@ -281,6 +281,11 @@ func resourceAliyunSlbRead(d *schema.ResourceData, meta interface{}) error {
		return err
	}

	if loadBalancer == nil {
		d.SetId("")
		return nil
	}

	d.Set("name", loadBalancer.LoadBalancerName)

	if loadBalancer.AddressType == slb.InternetAddressType {
@ -64,10 +64,14 @@ func resourceAliyunSlbAttachmentRead(d *schema.ResourceData, meta interface{}) e
	if err != nil {
		if notFoundError(err) {
			d.SetId("")
			return fmt.Errorf("Read special SLB Id not found: %#v", err)
			return nil
		}
		return fmt.Errorf("Read special SLB Id not found: %#v", err)
	}

	return err
	if loadBalancer == nil {
		d.SetId("")
		return nil
	}

	backendServerType := loadBalancer.BackendServers
@ -0,0 +1,134 @@
package alicloud

import (
	"fmt"

	"github.com/denverdino/aliyungo/ecs"
	"github.com/hashicorp/terraform/helper/schema"
)

func resourceAliyunSnatEntry() *schema.Resource {
	return &schema.Resource{
		Create: resourceAliyunSnatEntryCreate,
		Read:   resourceAliyunSnatEntryRead,
		Update: resourceAliyunSnatEntryUpdate,
		Delete: resourceAliyunSnatEntryDelete,

		Schema: map[string]*schema.Schema{
			"snat_table_id": &schema.Schema{
				Type:     schema.TypeString,
				Required: true,
				ForceNew: true,
			},
			"source_vswitch_id": &schema.Schema{
				Type:     schema.TypeString,
				Required: true,
				ForceNew: true,
			},
			"snat_ip": &schema.Schema{
				Type:     schema.TypeString,
				Required: true,
			},
		},
	}
}

func resourceAliyunSnatEntryCreate(d *schema.ResourceData, meta interface{}) error {
	conn := meta.(*AliyunClient).vpcconn

	args := &ecs.CreateSnatEntryArgs{
		RegionId:        getRegion(d, meta),
		SnatTableId:     d.Get("snat_table_id").(string),
		SourceVSwitchId: d.Get("source_vswitch_id").(string),
		SnatIp:          d.Get("snat_ip").(string),
	}

	resp, err := conn.CreateSnatEntry(args)
	if err != nil {
		return fmt.Errorf("CreateSnatEntry got error: %#v", err)
	}

	d.SetId(resp.SnatEntryId)
	d.Set("snat_table_id", d.Get("snat_table_id").(string))

	return resourceAliyunSnatEntryRead(d, meta)
}

func resourceAliyunSnatEntryRead(d *schema.ResourceData, meta interface{}) error {
	client := meta.(*AliyunClient)

	snatEntry, err := client.DescribeSnatEntry(d.Get("snat_table_id").(string), d.Id())

	if err != nil {
		if notFoundError(err) {
			return nil
		}
		return err
	}

	d.Set("snat_table_id", snatEntry.SnatTableId)
	d.Set("source_vswitch_id", snatEntry.SourceVSwitchId)
	d.Set("snat_ip", snatEntry.SnatIp)

	return nil
}

func resourceAliyunSnatEntryUpdate(d *schema.ResourceData, meta interface{}) error {
	client := meta.(*AliyunClient)
	conn := client.vpcconn

	snatEntry, err := client.DescribeSnatEntry(d.Get("snat_table_id").(string), d.Id())
	if err != nil {
		return err
	}

	d.Partial(true)
	attributeUpdate := false
	args := &ecs.ModifySnatEntryArgs{
		RegionId:    getRegion(d, meta),
		SnatTableId: snatEntry.SnatTableId,
		SnatEntryId: snatEntry.SnatEntryId,
	}

	if d.HasChange("snat_ip") {
		d.SetPartial("snat_ip")
		var snat_ip string
		if v, ok := d.GetOk("snat_ip"); ok {
			snat_ip = v.(string)
		} else {
			return fmt.Errorf("can't change snat_ip to an empty string")
		}
		args.SnatIp = snat_ip

		attributeUpdate = true
	}

	if attributeUpdate {
		if err := conn.ModifySnatEntry(args); err != nil {
			return err
		}
	}

	d.Partial(false)

	return resourceAliyunSnatEntryRead(d, meta)
}

func resourceAliyunSnatEntryDelete(d *schema.ResourceData, meta interface{}) error {
	client := meta.(*AliyunClient)
	conn := client.vpcconn

	snatEntryId := d.Id()
	snatTableId := d.Get("snat_table_id").(string)

	args := &ecs.DeleteSnatEntryArgs{
		RegionId:    getRegion(d, meta),
		SnatTableId: snatTableId,
		SnatEntryId: snatEntryId,
	}

	if err := conn.DeleteSnatEntry(args); err != nil {
		return err
	}

	return nil
}
@ -0,0 +1,180 @@
package alicloud

import (
	"fmt"
	"github.com/denverdino/aliyungo/common"
	"github.com/denverdino/aliyungo/ecs"
	"github.com/hashicorp/terraform/helper/resource"
	"github.com/hashicorp/terraform/terraform"
	"testing"
)

func TestAccAlicloudSnat_basic(t *testing.T) {
	var snat ecs.SnatEntrySetType

	resource.Test(t, resource.TestCase{
		PreCheck: func() {
			testAccPreCheck(t)
		},

		// module name
		IDRefreshName: "alicloud_snat_entry.foo",
		Providers:     testAccProviders,
		CheckDestroy:  testAccCheckSnatEntryDestroy,
		Steps: []resource.TestStep{
			resource.TestStep{
				Config: testAccSnatEntryConfig,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckSnatEntryExists(
						"alicloud_snat_entry.foo", &snat),
				),
			},
			resource.TestStep{
				Config: testAccSnatEntryUpdate,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckSnatEntryExists(
						"alicloud_snat_entry.foo", &snat),
				),
			},
		},
	})
}

func testAccCheckSnatEntryDestroy(s *terraform.State) error {
	client := testAccProvider.Meta().(*AliyunClient)

	for _, rs := range s.RootModule().Resources {
		if rs.Type != "alicloud_snat_entry" {
			continue
		}

		// Try to find the Snat entry
		instance, err := client.DescribeSnatEntry(rs.Primary.Attributes["snat_table_id"], rs.Primary.ID)

		// DescribeSnatEntry throws a "can't find the snatTable" error when no
		// records exist, so check the entry ID before inspecting the error.
		if instance.SnatEntryId == "" {
			return nil
		}

		if instance.SnatEntryId != "" {
			return fmt.Errorf("Snat entry still exist")
		}

		if err != nil {
			// Verify the error is what we want
			e, _ := err.(*common.Error)

			if !notFoundError(e) {
				return err
			}
		}

	}

	return nil
}

func testAccCheckSnatEntryExists(n string, snat *ecs.SnatEntrySetType) resource.TestCheckFunc {
	return func(s *terraform.State) error {
		rs, ok := s.RootModule().Resources[n]
		if !ok {
			return fmt.Errorf("Not found: %s", n)
		}

		if rs.Primary.ID == "" {
			return fmt.Errorf("No SnatEntry ID is set")
		}

		client := testAccProvider.Meta().(*AliyunClient)
		instance, err := client.DescribeSnatEntry(rs.Primary.Attributes["snat_table_id"], rs.Primary.ID)

		if err != nil {
			return err
		}
		if instance.SnatEntryId == "" {
			return fmt.Errorf("SnatEntry not found")
		}

		*snat = instance
		return nil
	}
}

const testAccSnatEntryConfig = `
data "alicloud_zones" "default" {
	"available_resource_creation"= "VSwitch"
}

resource "alicloud_vpc" "foo" {
	name = "tf_test_foo"
	cidr_block = "172.16.0.0/12"
}

resource "alicloud_vswitch" "foo" {
	vpc_id = "${alicloud_vpc.foo.id}"
	cidr_block = "172.16.0.0/21"
	availability_zone = "${data.alicloud_zones.default.zones.2.id}"
}

resource "alicloud_nat_gateway" "foo" {
	vpc_id = "${alicloud_vpc.foo.id}"
	spec = "Small"
	name = "test_foo"
	bandwidth_packages = [{
		ip_count = 2
		bandwidth = 5
		zone = "${data.alicloud_zones.default.zones.2.id}"
	}, {
		ip_count = 1
		bandwidth = 6
		zone = "${data.alicloud_zones.default.zones.2.id}"
	}]
	depends_on = [
		"alicloud_vswitch.foo"]
}
resource "alicloud_snat_entry" "foo" {
	snat_table_id = "${alicloud_nat_gateway.foo.snat_table_ids}"
	source_vswitch_id = "${alicloud_vswitch.foo.id}"
	snat_ip = "${alicloud_nat_gateway.foo.bandwidth_packages.0.public_ip_addresses}"
}
`

const testAccSnatEntryUpdate = `
data "alicloud_zones" "default" {
	"available_resource_creation"= "VSwitch"
}

resource "alicloud_vpc" "foo" {
	name = "tf_test_foo"
	cidr_block = "172.16.0.0/12"
}

resource "alicloud_vswitch" "foo" {
	vpc_id = "${alicloud_vpc.foo.id}"
	cidr_block = "172.16.0.0/21"
	availability_zone = "${data.alicloud_zones.default.zones.2.id}"
}

resource "alicloud_nat_gateway" "foo" {
	vpc_id = "${alicloud_vpc.foo.id}"
	spec = "Small"
	name = "test_foo"
	bandwidth_packages = [{
		ip_count = 2
		bandwidth = 5
		zone = "${data.alicloud_zones.default.zones.2.id}"
	}, {
		ip_count = 1
		bandwidth = 6
		zone = "${data.alicloud_zones.default.zones.2.id}"
	}]
	depends_on = [
		"alicloud_vswitch.foo"]
}
resource "alicloud_snat_entry" "foo" {
	snat_table_id = "${alicloud_nat_gateway.foo.snat_table_ids}"
	source_vswitch_id = "${alicloud_vswitch.foo.id}"
	snat_ip = "${alicloud_nat_gateway.foo.bandwidth_packages.1.public_ip_addresses}"
}
`
@ -86,7 +86,7 @@ func resourceAliyunVpcCreate(d *schema.ResourceData, meta interface{}) error {
		return fmt.Errorf("Timeout when WaitForVpcAvailable")
	}

	return resourceAliyunVpcRead(d, meta)
	return resourceAliyunVpcUpdate(d, meta)
}

func resourceAliyunVpcRead(d *schema.ResourceData, meta interface{}) error {

@ -144,7 +144,7 @@ func resourceAliyunVpcUpdate(d *schema.ResourceData, meta interface{}) error {

	d.Partial(false)

	return nil
	return resourceAliyunVpcRead(d, meta)
}

func resourceAliyunVpcDelete(d *schema.ResourceData, meta interface{}) error {

@ -68,7 +68,7 @@ func resourceAliyunSwitchCreate(d *schema.ResourceData, meta interface{}) error
		return fmt.Errorf("WaitForVSwitchAvailable got a error: %s", err)
	}

	return resourceAliyunSwitchRead(d, meta)
	return resourceAliyunSwitchUpdate(d, meta)
}

func resourceAliyunSwitchRead(d *schema.ResourceData, meta interface{}) error {

@ -139,7 +139,7 @@ func resourceAliyunSwitchUpdate(d *schema.ResourceData, meta interface{}) error

	d.Partial(false)

	return nil
	return resourceAliyunSwitchRead(d, meta)
}

func resourceAliyunSwitchDelete(d *schema.ResourceData, meta interface{}) error {

@ -131,7 +131,7 @@ func (client *AliyunClient) QueryInstancesById(id string) (instance *ecs.Instanc
	}

	if len(instances) == 0 {
		return nil, common.GetClientErrorFromString(InstanceNotfound)
		return nil, GetNotFoundErrorFromString(InstanceNotfound)
	}

	return &instances[0], nil

@ -244,7 +244,7 @@ func (client *AliyunClient) DescribeSecurityGroupRule(securityGroupId, direction
			return &p, nil
		}
	}
	return nil, nil
	return nil, GetNotFoundErrorFromString("Security group rule not found")

}
@ -0,0 +1,167 @@
package alicloud

import (
	"github.com/denverdino/aliyungo/ess"
)

func (client *AliyunClient) DescribeScalingGroupById(sgId string) (*ess.ScalingGroupItemType, error) {
	args := ess.DescribeScalingGroupsArgs{
		RegionId:       client.Region,
		ScalingGroupId: []string{sgId},
	}

	sgs, _, err := client.essconn.DescribeScalingGroups(&args)
	if err != nil {
		return nil, err
	}

	if len(sgs) == 0 {
		return nil, GetNotFoundErrorFromString("Scaling group not found")
	}

	return &sgs[0], nil
}

func (client *AliyunClient) DeleteScalingGroupById(sgId string) error {
	args := ess.DeleteScalingGroupArgs{
		ScalingGroupId: sgId,
		ForceDelete:    true,
	}

	_, err := client.essconn.DeleteScalingGroup(&args)
	return err
}

func (client *AliyunClient) DescribeScalingConfigurationById(sgId, configId string) (*ess.ScalingConfigurationItemType, error) {
	args := ess.DescribeScalingConfigurationsArgs{
		RegionId:               client.Region,
		ScalingGroupId:         sgId,
		ScalingConfigurationId: []string{configId},
	}

	cs, _, err := client.essconn.DescribeScalingConfigurations(&args)
	if err != nil {
		return nil, err
	}

	if len(cs) == 0 {
		return nil, GetNotFoundErrorFromString("Scaling configuration not found")
	}

	return &cs[0], nil
}

func (client *AliyunClient) ActiveScalingConfigurationById(sgId, configId string) error {
	args := ess.ModifyScalingGroupArgs{
		ScalingGroupId:               sgId,
		ActiveScalingConfigurationId: configId,
	}

	_, err := client.essconn.ModifyScalingGroup(&args)
	return err
}

func (client *AliyunClient) EnableScalingConfigurationById(sgId, configId string, ids []string) error {
	args := ess.EnableScalingGroupArgs{
		ScalingGroupId:               sgId,
		ActiveScalingConfigurationId: configId,
	}

	if len(ids) > 0 {
		args.InstanceId = ids
	}

	_, err := client.essconn.EnableScalingGroup(&args)
	return err
}

func (client *AliyunClient) DisableScalingConfigurationById(sgId string) error {
	args := ess.DisableScalingGroupArgs{
		ScalingGroupId: sgId,
	}

	_, err := client.essconn.DisableScalingGroup(&args)
	return err
}

func (client *AliyunClient) DeleteScalingConfigurationById(sgId, configId string) error {
	args := ess.DeleteScalingConfigurationArgs{
		ScalingGroupId:         sgId,
		ScalingConfigurationId: configId,
	}

	_, err := client.essconn.DeleteScalingConfiguration(&args)
	return err
}

// Flattens an array of datadisk into a []map[string]interface{}
func flattenDataDiskMappings(list []ess.DataDiskItemType) []map[string]interface{} {
	result := make([]map[string]interface{}, 0, len(list))
	for _, i := range list {
		l := map[string]interface{}{
			"size":        i.Size,
			"category":    i.Category,
			"snapshot_id": i.SnapshotId,
			"device":      i.Device,
		}
		result = append(result, l)
	}
	return result
}

func (client *AliyunClient) DescribeScalingRuleById(sgId, ruleId string) (*ess.ScalingRuleItemType, error) {
	args := ess.DescribeScalingRulesArgs{
		RegionId:       client.Region,
		ScalingGroupId: sgId,
		ScalingRuleId:  []string{ruleId},
	}

	cs, _, err := client.essconn.DescribeScalingRules(&args)
	if err != nil {
		return nil, err
	}

	if len(cs) == 0 {
		return nil, GetNotFoundErrorFromString("Scaling rule not found")
	}

	return &cs[0], nil
}

func (client *AliyunClient) DeleteScalingRuleById(ruleId string) error {
	args := ess.DeleteScalingRuleArgs{
		RegionId:      client.Region,
		ScalingRuleId: ruleId,
	}

	_, err := client.essconn.DeleteScalingRule(&args)
	return err
}

func (client *AliyunClient) DescribeScheduleById(scheduleId string) (*ess.ScheduledTaskItemType, error) {
	args := ess.DescribeScheduledTasksArgs{
		RegionId:        client.Region,
		ScheduledTaskId: []string{scheduleId},
	}

	cs, _, err := client.essconn.DescribeScheduledTasks(&args)
	if err != nil {
		return nil, err
	}

	if len(cs) == 0 {
		return nil, GetNotFoundErrorFromString("Schedule not found")
	}

	return &cs[0], nil
}

func (client *AliyunClient) DeleteScheduleById(scheduleId string) error {
	args := ess.DeleteScheduledTaskArgs{
		RegionId:        client.Region,
		ScheduledTaskId: scheduleId,
	}

	_, err := client.essconn.DeleteScheduledTask(&args)
	return err
}
@ -6,7 +6,20 @@ import (
	"strings"
)

// when getInstance is empty, then throw InstanceNotfound error
//
//  _______________                     _______________                     _______________
//  |             | ______param______\ |             | _____request_____\ |             |
//  |  Business   |                    |   Service   |                    |   SDK/API   |
//  |             | __________________ |             | __________________ |             |
//  |_____________| \ (obj, err)       |_____________| \ (status, cont)   |_____________|
//                       |                                  |
//                       |A. {instance, nil}                |a. {200, content}
//                       |B. {nil, error}                   |b. {200, nil}
//                                                          |c. {4xx, nil}
//
// The API returns 200 even when the resource is not found.
// When getInstance is empty, an InstanceNotfound error is thrown,
// so the business layer only needs to check the error.
func (client *AliyunClient) DescribeDBInstanceById(id string) (instance *rds.DBInstanceAttribute, err error) {
	arrtArgs := rds.DescribeDBInstancesArgs{
		DBInstanceId: id,

@ -19,7 +32,7 @@ func (client *AliyunClient) DescribeDBInstanceById(id string) (instance *rds.DBI
	attr := resp.Items.DBInstanceAttribute

	if len(attr) <= 0 {
		return nil, common.GetClientErrorFromString(InstanceNotfound)
		return nil, GetNotFoundErrorFromString("DB instance not found")
	}

	return &attr[0], nil

@ -164,13 +177,10 @@ func (client *AliyunClient) GetSecurityIps(instanceId string) ([]string, error)
	if err != nil {
		return nil, err
	}
	ips := ""
	for i, ip := range arr {
		if i == 0 {
			ips += ip.SecurityIPList
		} else {
			ips += COMMA_SEPARATED + ip.SecurityIPList
		}
	var ips, separator string
	for _, ip := range arr {
		ips += separator + ip.SecurityIPList
		separator = COMMA_SEPARATED
	}
	return strings.Split(ips, COMMA_SEPARATED), nil
}
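The GetSecurityIps change above replaces the index-based first-iteration special case with a separator variable that starts empty and becomes the comma afterwards, so no leading separator is ever emitted. The same trick in isolation (`joinIPLists` is a hypothetical name; `commaSeparated` stands in for the provider's `COMMA_SEPARATED` constant):

```go
package main

import (
	"fmt"
	"strings"
)

const commaSeparated = "," // stand-in for the provider's COMMA_SEPARATED constant

// joinIPLists concatenates comma-separated IP lists without producing a
// leading comma: separator is "" on the first iteration and "," afterwards.
func joinIPLists(lists []string) []string {
	var ips, separator string
	for _, l := range lists {
		ips += separator + l
		separator = commaSeparated
	}
	return strings.Split(ips, commaSeparated)
}

func main() {
	fmt.Println(joinIPLists([]string{"10.0.0.1,10.0.0.2", "192.168.0.1"}))
	// prints "[10.0.0.1 10.0.0.2 192.168.0.1]"
}
```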
@ -32,6 +32,7 @@ func (client *AliyunClient) DescribeNatGateway(natGatewayId string) (*ecs.NatGat
	}

	natGateways, _, err := client.vpcconn.DescribeNatGateways(args)
	//fmt.Println("natGateways %#v", natGateways)
	if err != nil {
		return nil, err
	}

@ -64,6 +65,78 @@ func (client *AliyunClient) DescribeVpc(vpcId string) (*ecs.VpcSetType, error) {
	return &vpcs[0], nil
}

func (client *AliyunClient) DescribeSnatEntry(snatTableId string, snatEntryId string) (ecs.SnatEntrySetType, error) {

	var resultSnat ecs.SnatEntrySetType

	args := &ecs.DescribeSnatTableEntriesArgs{
		RegionId:    client.Region,
		SnatTableId: snatTableId,
	}

	snatEntries, _, err := client.vpcconn.DescribeSnatTableEntries(args)

	// DescribeSnatTableEntries throws a "can't find the snatTable" error when
	// no records exist, so check the length of snatEntries first.
	if len(snatEntries) == 0 {
		return resultSnat, common.GetClientErrorFromString(InstanceNotfound)
	}

	if err != nil {
		return resultSnat, err
	}

	findSnat := false

	for _, snat := range snatEntries {
		if snat.SnatEntryId == snatEntryId {
			resultSnat = snat
			findSnat = true
		}
	}
	if !findSnat {
		return resultSnat, common.GetClientErrorFromString(NotFindSnatEntryBySnatId)
	}

	return resultSnat, nil
}

func (client *AliyunClient) DescribeForwardEntry(forwardTableId string, forwardEntryId string) (ecs.ForwardTableEntrySetType, error) {

	var resultFoward ecs.ForwardTableEntrySetType

	args := &ecs.DescribeForwardTableEntriesArgs{
		RegionId:       client.Region,
		ForwardTableId: forwardTableId,
	}

	forwardEntries, _, err := client.vpcconn.DescribeForwardTableEntries(args)

	// DescribeForwardTableEntries throws a "can't find the forwardTable" error
	// when no records exist, so check the length of forwardEntries first.
	if len(forwardEntries) == 0 {
		return resultFoward, common.GetClientErrorFromString(InstanceNotfound)
	}

	findForward := false

	for _, forward := range forwardEntries {
		if forward.ForwardEntryId == forwardEntryId {
			resultFoward = forward
			findForward = true
		}
	}
	if !findForward {
		return resultFoward, common.GetClientErrorFromString(NotFindForwardEntryByForwardId)
	}

	if err != nil {
		return resultFoward, err
	}

	return resultFoward, nil
}

// describe vswitch by param filters
func (client *AliyunClient) QueryVswitches(args *ecs.DescribeVSwitchesArgs) (vswitches []ecs.VSwitchSetType, err error) {
	vsws, _, err := client.ecsconn.DescribeVSwitches(args)

@ -130,7 +203,7 @@ func (client *AliyunClient) QueryRouteEntry(routeTableId, cidrBlock, nextHopType
			return &e, nil
		}
	}
	return nil, nil
	return nil, GetNotFoundErrorFromString("Vpc router entry not found")
}

func (client *AliyunClient) GetVpcIdByVSwitchId(vswitchId string) (vpcId string, err error) {
@ -1,11 +0,0 @@
package alicloud

// Takes the result of flatmap.Expand for an array of strings
// and returns a []string
func expandStringList(configured []interface{}) []string {
	vs := make([]string, 0, len(configured))
	for _, v := range configured {
		vs = append(vs, v.(string))
	}
	return vs
}
@ -18,7 +18,7 @@ func validateInstancePort(v interface{}, k string) (ws []string, errors []error)
	value := v.(int)
	if value < 1 || value > 65535 {
		errors = append(errors, fmt.Errorf(
			"%q must be a valid instance port between 1 and 65535",
			"%q must be a valid port between 1 and 65535",
			k))
		return
	}

@ -26,8 +26,8 @@ func validateInstancePort(v interface{}, k string) (ws []string, errors []error)
}

func validateInstanceProtocol(v interface{}, k string) (ws []string, errors []error) {
	protocal := v.(string)
	if !isProtocalValid(protocal) {
	protocol := v.(string)
	if !isProtocolValid(protocol) {
		errors = append(errors, fmt.Errorf(
			"%q is an invalid value. Valid values are either http, https, tcp or udp",
			k))

@ -282,9 +282,9 @@ func validateInternetChargeType(v interface{}, k string) (ws []string, errors []

func validateInternetMaxBandWidthOut(v interface{}, k string) (ws []string, errors []error) {
	value := v.(int)
	if value < 1 || value > 100 {
	if value < 0 || value > 100 {
		errors = append(errors, fmt.Errorf(
			"%q must be a valid internet bandwidth out between 1 and 1000",
			"%q must be a valid internet bandwidth out between 0 and 100",
			k))
		return
	}

@ -565,3 +565,14 @@ func validateRegion(v interface{}, k string) (ws []string, errors []error) {
	}
	return
}

func validateForwardPort(v interface{}, k string) (ws []string, errors []error) {
	value := v.(string)
	if value != "any" {
		valueConv, err := strconv.Atoi(value)
		if err != nil || valueConv < 1 || valueConv > 65535 {
			errors = append(errors, fmt.Errorf("%q must be a valid port between 1 and 65535 or any", k))
		}
	}
	return
}
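The new validateForwardPort above accepts either the literal string "any" or a numeric port in 1-65535. The same rule as a standalone predicate, with the schema plumbing dropped (`isValidForwardPort` is a hypothetical name for illustration):

```go
package main

import (
	"fmt"
	"strconv"
)

// isValidForwardPort mirrors validateForwardPort: "any" passes, otherwise the
// value must parse as an integer in [1, 65535].
func isValidForwardPort(value string) bool {
	if value == "any" {
		return true
	}
	port, err := strconv.Atoi(value)
	return err == nil && port >= 1 && port <= 65535
}

func main() {
	for _, v := range []string{"any", "80", "0", "65536", "http"} {
		fmt.Printf("%q -> %v\n", v, isValidForwardPort(v))
	}
}
```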
@ -21,17 +21,17 @@ func TestValidateInstancePort(t *testing.T) {
}

func TestValidateInstanceProtocol(t *testing.T) {
	validProtocals := []string{"http", "tcp", "https", "udp"}
	for _, v := range validProtocals {
		_, errors := validateInstanceProtocol(v, "instance_protocal")
	validProtocols := []string{"http", "tcp", "https", "udp"}
	for _, v := range validProtocols {
		_, errors := validateInstanceProtocol(v, "instance_protocol")
		if len(errors) != 0 {
			t.Fatalf("%q should be a valid instance protocol: %q", v, errors)
		}
	}

	invalidProtocals := []string{"HTTP", "abc", "ecmp", "dubbo"}
	for _, v := range invalidProtocals {
		_, errors := validateInstanceProtocol(v, "instance_protocal")
	invalidProtocols := []string{"HTTP", "abc", "ecmp", "dubbo"}
	for _, v := range invalidProtocols {
		_, errors := validateInstanceProtocol(v, "instance_protocol")
		if len(errors) == 0 {
			t.Fatalf("%q should be an invalid instance protocol", v)
		}

@ -353,7 +353,7 @@ func TestValidateInternetMaxBandWidthOut(t *testing.T) {
		}
	}

	invalidInternetMaxBandWidthOut := []int{-2, 0, 101, 123}
	invalidInternetMaxBandWidthOut := []int{-2, 101, 123}
	for _, v := range invalidInternetMaxBandWidthOut {
		_, errors := validateInternetMaxBandWidthOut(v, "internet_max_bandwidth_out")
		if len(errors) == 0 {
@ -54,7 +54,7 @@ func GetAccountInfo(iamconn *iam.IAM, stsconn *sts.STS, authProviderName string)
		awsErr, ok := err.(awserr.Error)
		// AccessDenied and ValidationError can be raised
		// if credentials belong to federated profile, so we ignore these
		if !ok || (awsErr.Code() != "AccessDenied" && awsErr.Code() != "ValidationError") {
		if !ok || (awsErr.Code() != "AccessDenied" && awsErr.Code() != "ValidationError" && awsErr.Code() != "InvalidClientTokenId") {
			return "", "", fmt.Errorf("Failed getting account ID via 'iam:GetUser': %s", err)
		}
		log.Printf("[DEBUG] Getting account ID via iam:GetUser failed: %s", err)
@ -5,6 +5,7 @@ import (
	"fmt"
	"log"
	"regexp"
	"strconv"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/autoscaling"

@ -12,8 +13,8 @@ import (
	"github.com/hashicorp/terraform/helper/schema"
)

// tagsSchema returns the schema to use for tags.
func autoscalingTagsSchema() *schema.Schema {
// autoscalingTagSchema returns the schema to use for the tag element.
func autoscalingTagSchema() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeSet,
		Optional: true,

@ -35,11 +36,11 @@ func autoscalingTagsSchema() *schema.Schema {
			},
		},
	},
		Set: autoscalingTagsToHash,
		Set: autoscalingTagToHash,
	}
}

func autoscalingTagsToHash(v interface{}) int {
func autoscalingTagToHash(v interface{}) int {
	var buf bytes.Buffer
	m := v.(map[string]interface{})
	buf.WriteString(fmt.Sprintf("%s-", m["key"].(string)))
@ -52,35 +53,74 @@ func autoscalingTagsToHash(v interface{}) int {
// setTags is a helper to set the tags for a resource. It expects the
// tags field to be named "tag"
func setAutoscalingTags(conn *autoscaling.AutoScaling, d *schema.ResourceData) error {
	if d.HasChange("tag") {
	resourceID := d.Get("name").(string)
	var createTags, removeTags []*autoscaling.Tag

	if d.HasChange("tag") || d.HasChange("tags") {
		oraw, nraw := d.GetChange("tag")
		o := setToMapByKey(oraw.(*schema.Set), "key")
		n := setToMapByKey(nraw.(*schema.Set), "key")

		resourceID := d.Get("name").(string)
		c, r := diffAutoscalingTags(
			autoscalingTagsFromMap(o, resourceID),
			autoscalingTagsFromMap(n, resourceID),
			resourceID)
		create := autoscaling.CreateOrUpdateTagsInput{
			Tags: c,
		}
		remove := autoscaling.DeleteTagsInput{
			Tags: r,
		old, err := autoscalingTagsFromMap(o, resourceID)
		if err != nil {
			return err
		}

		// Set tags
		if len(r) > 0 {
			log.Printf("[DEBUG] Removing autoscaling tags: %#v", r)
			if _, err := conn.DeleteTags(&remove); err != nil {
				return err
			}
		new, err := autoscalingTagsFromMap(n, resourceID)
		if err != nil {
			return err
		}
		if len(c) > 0 {
			log.Printf("[DEBUG] Creating autoscaling tags: %#v", c)
			if _, err := conn.CreateOrUpdateTags(&create); err != nil {
				return err
			}

		c, r, err := diffAutoscalingTags(old, new, resourceID)
		if err != nil {
			return err
		}

		createTags = append(createTags, c...)
		removeTags = append(removeTags, r...)

		oraw, nraw = d.GetChange("tags")
		old, err = autoscalingTagsFromList(oraw.([]interface{}), resourceID)
		if err != nil {
			return err
		}

		new, err = autoscalingTagsFromList(nraw.([]interface{}), resourceID)
		if err != nil {
			return err
		}

		c, r, err = diffAutoscalingTags(old, new, resourceID)
		if err != nil {
			return err
		}

		createTags = append(createTags, c...)
		removeTags = append(removeTags, r...)
	}

	// Set tags
	if len(removeTags) > 0 {
		log.Printf("[DEBUG] Removing autoscaling tags: %#v", removeTags)

		remove := autoscaling.DeleteTagsInput{
			Tags: removeTags,
		}

		if _, err := conn.DeleteTags(&remove); err != nil {
			return err
		}
	}

	if len(createTags) > 0 {
		log.Printf("[DEBUG] Creating autoscaling tags: %#v", createTags)

		create := autoscaling.CreateOrUpdateTagsInput{
			Tags: createTags,
		}

		if _, err := conn.CreateOrUpdateTags(&create); err != nil {
			return err
		}
	}

@ -90,11 +130,12 @@ func setAutoscalingTags(conn *autoscaling.AutoScaling, d *schema.ResourceData) e
// diffTags takes our tags locally and the ones remotely and returns
// the set of tags that must be created, and the set of tags that must
// be destroyed.
func diffAutoscalingTags(oldTags, newTags []*autoscaling.Tag, resourceID string) ([]*autoscaling.Tag, []*autoscaling.Tag) {
func diffAutoscalingTags(oldTags, newTags []*autoscaling.Tag, resourceID string) ([]*autoscaling.Tag, []*autoscaling.Tag, error) {
	// First, we're creating everything we have
	create := make(map[string]interface{})
	for _, t := range newTags {
		tag := map[string]interface{}{
			"key":                 *t.Key,
			"value":               *t.Value,
			"propagate_at_launch": *t.PropagateAtLaunch,
		}
@ -112,27 +153,99 @@ func diffAutoscalingTags(oldTags, newTags []*autoscaling.Tag, resourceID string)
		}
	}

	return autoscalingTagsFromMap(create, resourceID), remove
	createTags, err := autoscalingTagsFromMap(create, resourceID)
	if err != nil {
		return nil, nil, err
	}

	return createTags, remove, nil
}

func autoscalingTagsFromList(vs []interface{}, resourceID string) ([]*autoscaling.Tag, error) {
	result := make([]*autoscaling.Tag, 0, len(vs))
	for _, tag := range vs {
		attr, ok := tag.(map[string]interface{})
		if !ok {
			continue
		}

		t, err := autoscalingTagFromMap(attr, resourceID)
		if err != nil {
			return nil, err
		}

		if t != nil {
			result = append(result, t)
		}
	}
	return result, nil
}

// tagsFromMap returns the tags for the given map of data.
func autoscalingTagsFromMap(m map[string]interface{}, resourceID string) []*autoscaling.Tag {
func autoscalingTagsFromMap(m map[string]interface{}, resourceID string) ([]*autoscaling.Tag, error) {
	result := make([]*autoscaling.Tag, 0, len(m))
	for k, v := range m {
		attr := v.(map[string]interface{})
		t := &autoscaling.Tag{
			Key:               aws.String(k),
			Value:             aws.String(attr["value"].(string)),
			PropagateAtLaunch: aws.Bool(attr["propagate_at_launch"].(bool)),
			ResourceId:        aws.String(resourceID),
			ResourceType:      aws.String("auto-scaling-group"),
	for _, v := range m {
		attr, ok := v.(map[string]interface{})
		if !ok {
			continue
		}
		if !tagIgnoredAutoscaling(t) {

		t, err := autoscalingTagFromMap(attr, resourceID)
		if err != nil {
			return nil, err
		}

		if t != nil {
			result = append(result, t)
		}
	}

	return result
	return result, nil
}

func autoscalingTagFromMap(attr map[string]interface{}, resourceID string) (*autoscaling.Tag, error) {
	if _, ok := attr["key"]; !ok {
		return nil, fmt.Errorf("%s: invalid tag attributes: key missing", resourceID)
	}

	if _, ok := attr["value"]; !ok {
		return nil, fmt.Errorf("%s: invalid tag attributes: value missing", resourceID)
	}

	if _, ok := attr["propagate_at_launch"]; !ok {
		return nil, fmt.Errorf("%s: invalid tag attributes: propagate_at_launch missing", resourceID)
	}

	var propagateAtLaunch bool
	var err error

	if v, ok := attr["propagate_at_launch"].(bool); ok {
		propagateAtLaunch = v
	}

	if v, ok := attr["propagate_at_launch"].(string); ok {
		if propagateAtLaunch, err = strconv.ParseBool(v); err != nil {
			return nil, fmt.Errorf(
				"%s: invalid tag attribute: invalid value for propagate_at_launch: %s",
				resourceID,
				v,
			)
		}
	}

	t := &autoscaling.Tag{
		Key:               aws.String(attr["key"].(string)),
		Value:             aws.String(attr["value"].(string)),
		PropagateAtLaunch: aws.Bool(propagateAtLaunch),
		ResourceId:        aws.String(resourceID),
		ResourceType:      aws.String("auto-scaling-group"),
	}

	if tagIgnoredAutoscaling(t) {
		return nil, nil
	}

	return t, nil
}

// autoscalingTagsToMap turns the list of tags into a map.
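The new autoscalingTagFromMap above validates that all three attributes are present and tolerates propagate_at_launch arriving as either a bool or a string like "true". A trimmed-down sketch of that coercion (the AWS SDK types are replaced by a plain struct, and `tagFromMap` is a hypothetical stand-in name):

```go
package main

import (
	"fmt"
	"strconv"
)

type tag struct {
	Key               string
	Value             string
	PropagateAtLaunch bool
}

// tagFromMap mirrors autoscalingTagFromMap: every attribute is required, and
// propagate_at_launch may arrive as a bool or as a string such as "true".
func tagFromMap(attr map[string]interface{}) (*tag, error) {
	for _, k := range []string{"key", "value", "propagate_at_launch"} {
		if _, ok := attr[k]; !ok {
			return nil, fmt.Errorf("invalid tag attributes: %s missing", k)
		}
	}

	var propagate bool
	switch v := attr["propagate_at_launch"].(type) {
	case bool:
		propagate = v
	case string:
		parsed, err := strconv.ParseBool(v)
		if err != nil {
			return nil, fmt.Errorf("invalid value for propagate_at_launch: %s", v)
		}
		propagate = parsed
	}

	return &tag{
		Key:               attr["key"].(string),
		Value:             attr["value"].(string),
		PropagateAtLaunch: propagate,
	}, nil
}

func main() {
	tg, _ := tagFromMap(map[string]interface{}{"key": "Name", "value": "web", "propagate_at_launch": "true"})
	fmt.Println(tg.Key, tg.PropagateAtLaunch) // prints "Name true"
}
```

The string case matters because the plain `tags` list attribute delivers its values as strings, while the `tag` set delivers a real bool.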
@ -140,6 +253,7 @@ func autoscalingTagsToMap(ts []*autoscaling.Tag) map[string]interface{} {
|
|||
tags := make(map[string]interface{})
|
||||
for _, t := range ts {
|
||||
tag := map[string]interface{}{
|
||||
"key": *t.Key,
|
||||
"value": *t.Value,
|
||||
"propagate_at_launch": *t.PropagateAtLaunch,
|
||||
}
|
||||
|
@@ -154,6 +268,7 @@ func autoscalingTagDescriptionsToMap(ts *[]*autoscaling.TagDescription) map[stri
	tags := make(map[string]map[string]interface{})
	for _, t := range *ts {
		tag := map[string]interface{}{
			"key":                 *t.Key,
			"value":               *t.Value,
			"propagate_at_launch": *t.PropagateAtLaunch,
		}
@@ -190,7 +305,7 @@ func setToMapByKey(s *schema.Set, key string) map[string]interface{} {
// compares a tag against a list of strings and checks if it should
// be ignored or not
func tagIgnoredAutoscaling(t *autoscaling.Tag) bool {
-	filter := []string{"^aws:*"}
+	filter := []string{"^aws:"}
	for _, v := range filter {
		log.Printf("[DEBUG] Matching %v with %v\n", v, *t.Key)
		if r, _ := regexp.MatchString(v, *t.Key); r == true {
@@ -20,24 +20,28 @@ func TestDiffAutoscalingTags(t *testing.T) {
		{
			Old: map[string]interface{}{
				"Name": map[string]interface{}{
					"key":                 "Name",
					"value":               "bar",
					"propagate_at_launch": true,
				},
			},
			New: map[string]interface{}{
				"DifferentTag": map[string]interface{}{
					"key":                 "DifferentTag",
					"value":               "baz",
					"propagate_at_launch": true,
				},
			},
			Create: map[string]interface{}{
				"DifferentTag": map[string]interface{}{
					"key":                 "DifferentTag",
					"value":               "baz",
					"propagate_at_launch": true,
				},
			},
			Remove: map[string]interface{}{
				"Name": map[string]interface{}{
					"key":                 "Name",
					"value":               "bar",
					"propagate_at_launch": true,
				},
			},
@@ -48,24 +52,28 @@ func TestDiffAutoscalingTags(t *testing.T) {
		{
			Old: map[string]interface{}{
				"Name": map[string]interface{}{
					"key":                 "Name",
					"value":               "bar",
					"propagate_at_launch": true,
				},
			},
			New: map[string]interface{}{
				"Name": map[string]interface{}{
					"key":                 "Name",
					"value":               "baz",
					"propagate_at_launch": false,
				},
			},
			Create: map[string]interface{}{
				"Name": map[string]interface{}{
					"key":                 "Name",
					"value":               "baz",
					"propagate_at_launch": false,
				},
			},
			Remove: map[string]interface{}{
				"Name": map[string]interface{}{
					"key":                 "Name",
					"value":               "bar",
					"propagate_at_launch": true,
				},
			},
@ -76,10 +84,20 @@ func TestDiffAutoscalingTags(t *testing.T) {
|
|||
var resourceID = "sample"
|
||||
|
||||
for i, tc := range cases {
|
||||
awsTagsOld := autoscalingTagsFromMap(tc.Old, resourceID)
|
||||
awsTagsNew := autoscalingTagsFromMap(tc.New, resourceID)
|
||||
awsTagsOld, err := autoscalingTagsFromMap(tc.Old, resourceID)
|
||||
if err != nil {
|
||||
t.Fatalf("%d: unexpected error convertig old tags: %v", i, err)
|
||||
}
|
||||
|
||||
c, r := diffAutoscalingTags(awsTagsOld, awsTagsNew, resourceID)
|
||||
awsTagsNew, err := autoscalingTagsFromMap(tc.New, resourceID)
|
||||
if err != nil {
|
||||
t.Fatalf("%d: unexpected error convertig new tags: %v", i, err)
|
||||
}
|
||||
|
||||
c, r, err := diffAutoscalingTags(awsTagsOld, awsTagsNew, resourceID)
|
||||
if err != nil {
|
||||
t.Fatalf("%d: unexpected error diff'ing tags: %v", i, err)
|
||||
}
|
||||
|
||||
cm := autoscalingTagsToMap(c)
|
||||
rm := autoscalingTagsToMap(r)
|
||||
|
|
|
@@ -773,21 +773,31 @@ func originCustomHeaderHash(v interface{}) int {
}

func expandCustomOriginConfig(m map[string]interface{}) *cloudfront.CustomOriginConfig {
-	return &cloudfront.CustomOriginConfig{
-		OriginProtocolPolicy: aws.String(m["origin_protocol_policy"].(string)),
-		HTTPPort:             aws.Int64(int64(m["http_port"].(int))),
-		HTTPSPort:            aws.Int64(int64(m["https_port"].(int))),
-		OriginSslProtocols:   expandCustomOriginConfigSSL(m["origin_ssl_protocols"].([]interface{})),
+	customOrigin := &cloudfront.CustomOriginConfig{
+		OriginProtocolPolicy:   aws.String(m["origin_protocol_policy"].(string)),
+		HTTPPort:               aws.Int64(int64(m["http_port"].(int))),
+		HTTPSPort:              aws.Int64(int64(m["https_port"].(int))),
+		OriginSslProtocols:     expandCustomOriginConfigSSL(m["origin_ssl_protocols"].([]interface{})),
+		OriginReadTimeout:      aws.Int64(int64(m["origin_read_timeout"].(int))),
+		OriginKeepaliveTimeout: aws.Int64(int64(m["origin_keepalive_timeout"].(int))),
	}

+	return customOrigin
}

func flattenCustomOriginConfig(cor *cloudfront.CustomOriginConfig) map[string]interface{} {
-	return map[string]interface{}{
-		"origin_protocol_policy": *cor.OriginProtocolPolicy,
-		"http_port":              int(*cor.HTTPPort),
-		"https_port":             int(*cor.HTTPSPort),
-		"origin_ssl_protocols":   flattenCustomOriginConfigSSL(cor.OriginSslProtocols),
+	customOrigin := map[string]interface{}{
+		"origin_protocol_policy":   *cor.OriginProtocolPolicy,
+		"http_port":                int(*cor.HTTPPort),
+		"https_port":               int(*cor.HTTPSPort),
+		"origin_ssl_protocols":     flattenCustomOriginConfigSSL(cor.OriginSslProtocols),
+		"origin_read_timeout":      int(*cor.OriginReadTimeout),
+		"origin_keepalive_timeout": int(*cor.OriginKeepaliveTimeout),
	}

+	return customOrigin
}

// Assemble the hash for the aws_cloudfront_distribution custom_origin_config
@@ -801,6 +811,9 @@ func customOriginConfigHash(v interface{}) int {
	for _, v := range sortInterfaceSlice(m["origin_ssl_protocols"].([]interface{})) {
		buf.WriteString(fmt.Sprintf("%s-", v.(string)))
	}
+	buf.WriteString(fmt.Sprintf("%d-", m["origin_keepalive_timeout"].(int)))
+	buf.WriteString(fmt.Sprintf("%d-", m["origin_read_timeout"].(int)))

	return hashcode.String(buf.String())
}
@@ -117,10 +117,12 @@ func originCustomHeaderConf2() map[string]interface{} {

func customOriginConf() map[string]interface{} {
	return map[string]interface{}{
-		"origin_protocol_policy": "http-only",
-		"http_port":              80,
-		"https_port":             443,
-		"origin_ssl_protocols":   customOriginSslProtocolsConf(),
+		"origin_protocol_policy":   "http-only",
+		"http_port":                80,
+		"https_port":               443,
+		"origin_ssl_protocols":     customOriginSslProtocolsConf(),
+		"origin_read_timeout":      30,
+		"origin_keepalive_timeout": 5,
	}
}
@@ -785,6 +787,12 @@ func TestCloudFrontStructure_expandCustomOriginConfig(t *testing.T) {
	if *co.HTTPSPort != 443 {
		t.Fatalf("Expected HTTPSPort to be 443, got %v", *co.HTTPSPort)
	}
+	if *co.OriginReadTimeout != 30 {
+		t.Fatalf("Expected Origin Read Timeout to be 30, got %v", *co.OriginReadTimeout)
+	}
+	if *co.OriginKeepaliveTimeout != 5 {
+		t.Fatalf("Expected Origin Keepalive Timeout to be 5, got %v", *co.OriginKeepaliveTimeout)
+	}
}

func TestCloudFrontStructure_flattenCustomOriginConfig(t *testing.T) {
@@ -28,8 +28,10 @@ import (
	"github.com/aws/aws-sdk-go/service/codecommit"
	"github.com/aws/aws-sdk-go/service/codedeploy"
	"github.com/aws/aws-sdk-go/service/codepipeline"
	"github.com/aws/aws-sdk-go/service/cognitoidentity"
	"github.com/aws/aws-sdk-go/service/configservice"
	"github.com/aws/aws-sdk-go/service/databasemigrationservice"
	"github.com/aws/aws-sdk-go/service/devicefarm"
	"github.com/aws/aws-sdk-go/service/directoryservice"
	"github.com/aws/aws-sdk-go/service/dynamodb"
	"github.com/aws/aws-sdk-go/service/ec2"
@@ -64,6 +66,7 @@ import (
	"github.com/aws/aws-sdk-go/service/ssm"
	"github.com/aws/aws-sdk-go/service/sts"
	"github.com/aws/aws-sdk-go/service/waf"
	"github.com/aws/aws-sdk-go/service/wafregional"
	"github.com/davecgh/go-spew/spew"
	"github.com/hashicorp/errwrap"
	"github.com/hashicorp/go-cleanhttp"
@@ -88,15 +91,25 @@ type Config struct {
	AllowedAccountIds   []interface{}
	ForbiddenAccountIds []interface{}

-	DynamoDBEndpoint string
-	KinesisEndpoint  string
-	Ec2Endpoint      string
-	IamEndpoint      string
-	ElbEndpoint      string
-	S3Endpoint       string
-	Insecure         bool
+	CloudFormationEndpoint   string
+	CloudWatchEndpoint       string
+	CloudWatchEventsEndpoint string
+	CloudWatchLogsEndpoint   string
+	DynamoDBEndpoint         string
+	DeviceFarmEndpoint       string
+	Ec2Endpoint              string
+	ElbEndpoint              string
+	IamEndpoint              string
+	KinesisEndpoint          string
+	KmsEndpoint              string
+	RdsEndpoint              string
+	S3Endpoint               string
+	SnsEndpoint              string
+	SqsEndpoint              string
+	Insecure                 bool

	SkipCredsValidation     bool
	SkipGetEC2Platforms     bool
	SkipRegionValidation    bool
	SkipRequestingAccountId bool
	SkipMetadataApiCheck    bool
@@ -110,7 +123,9 @@ type AWSClient struct {
	cloudwatchconn       *cloudwatch.CloudWatch
	cloudwatchlogsconn   *cloudwatchlogs.CloudWatchLogs
	cloudwatcheventsconn *cloudwatchevents.CloudWatchEvents
+	cognitoconn          *cognitoidentity.CognitoIdentity
	configconn           *configservice.ConfigService
+	devicefarmconn       *devicefarm.DeviceFarm
	dmsconn              *databasemigrationservice.DatabaseMigrationService
	dsconn               *directoryservice.DirectoryService
	dynamodbconn         *dynamodb.DynamoDB
@@ -158,6 +173,29 @@ type AWSClient struct {
	sfnconn         *sfn.SFN
	ssmconn         *ssm.SSM
	wafconn         *waf.WAF
	wafregionalconn *wafregional.WAFRegional
}

func (c *AWSClient) S3() *s3.S3 {
	return c.s3conn
}

func (c *AWSClient) DynamoDB() *dynamodb.DynamoDB {
	return c.dynamodbconn
}

func (c *AWSClient) IsGovCloud() bool {
	if c.region == "us-gov-west-1" {
		return true
	}
	return false
}

func (c *AWSClient) IsChinaCloud() bool {
	if c.region == "cn-north-1" {
		return true
	}
	return false
}

// Client configures and returns a fully initialized AWSClient
@@ -239,12 +277,24 @@ func (c *Config) Client() (interface{}, error) {
	usEast1Sess := sess.Copy(&aws.Config{Region: aws.String("us-east-1")})

	// Some services have user-configurable endpoints
	awsCfSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.CloudFormationEndpoint)})
	awsCwSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.CloudWatchEndpoint)})
	awsCweSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.CloudWatchEventsEndpoint)})
	awsCwlSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.CloudWatchLogsEndpoint)})
	awsDynamoSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.DynamoDBEndpoint)})
	awsEc2Sess := sess.Copy(&aws.Config{Endpoint: aws.String(c.Ec2Endpoint)})
	awsElbSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.ElbEndpoint)})
	awsIamSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.IamEndpoint)})
	awsKinesisSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.KinesisEndpoint)})
	awsKmsSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.KmsEndpoint)})
	awsRdsSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.RdsEndpoint)})
	awsS3Sess := sess.Copy(&aws.Config{Endpoint: aws.String(c.S3Endpoint)})
-	dynamoSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.DynamoDBEndpoint)})
-	kinesisSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.KinesisEndpoint)})
	awsSnsSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.SnsEndpoint)})
	awsSqsSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.SqsEndpoint)})
	awsDeviceFarmSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.DeviceFarmEndpoint)})

	log.Println("[INFO] Initializing DeviceFarm SDK connection")
	client.devicefarmconn = devicefarm.New(awsDeviceFarmSess)

	// These two services need to be set up early so we can check on AccountID
	client.iamconn = iam.New(awsIamSess)
@@ -272,33 +322,36 @@ func (c *Config) Client() (interface{}, error) {

	client.ec2conn = ec2.New(awsEc2Sess)

-	supportedPlatforms, err := GetSupportedEC2Platforms(client.ec2conn)
-	if err != nil {
-		// We intentionally fail *silently* because there's a chance
-		// user just doesn't have ec2:DescribeAccountAttributes permissions
-		log.Printf("[WARN] Unable to get supported EC2 platforms: %s", err)
-	} else {
-		client.supportedplatforms = supportedPlatforms
+	if !c.SkipGetEC2Platforms {
+		supportedPlatforms, err := GetSupportedEC2Platforms(client.ec2conn)
+		if err != nil {
+			// We intentionally fail *silently* because there's a chance
+			// user just doesn't have ec2:DescribeAccountAttributes permissions
+			log.Printf("[WARN] Unable to get supported EC2 platforms: %s", err)
+		} else {
+			client.supportedplatforms = supportedPlatforms
+		}
	}

	client.acmconn = acm.New(sess)
	client.apigateway = apigateway.New(sess)
	client.appautoscalingconn = applicationautoscaling.New(sess)
	client.autoscalingconn = autoscaling.New(sess)
-	client.cfconn = cloudformation.New(sess)
+	client.cfconn = cloudformation.New(awsCfSess)
	client.cloudfrontconn = cloudfront.New(sess)
	client.cloudtrailconn = cloudtrail.New(sess)
-	client.cloudwatchconn = cloudwatch.New(sess)
-	client.cloudwatcheventsconn = cloudwatchevents.New(sess)
-	client.cloudwatchlogsconn = cloudwatchlogs.New(sess)
+	client.cloudwatchconn = cloudwatch.New(awsCwSess)
+	client.cloudwatcheventsconn = cloudwatchevents.New(awsCweSess)
+	client.cloudwatchlogsconn = cloudwatchlogs.New(awsCwlSess)
	client.codecommitconn = codecommit.New(sess)
	client.codebuildconn = codebuild.New(sess)
	client.codedeployconn = codedeploy.New(sess)
	client.configconn = configservice.New(sess)
+	client.cognitoconn = cognitoidentity.New(sess)
	client.dmsconn = databasemigrationservice.New(sess)
+	client.codepipelineconn = codepipeline.New(sess)
	client.dsconn = directoryservice.New(sess)
-	client.dynamodbconn = dynamodb.New(dynamoSess)
+	client.dynamodbconn = dynamodb.New(awsDynamoSess)
	client.ecrconn = ecr.New(sess)
	client.ecsconn = ecs.New(sess)
	client.efsconn = efs.New(sess)
@@ -312,22 +365,23 @@ func (c *Config) Client() (interface{}, error) {
	client.firehoseconn = firehose.New(sess)
	client.inspectorconn = inspector.New(sess)
	client.glacierconn = glacier.New(sess)
-	client.kinesisconn = kinesis.New(kinesisSess)
-	client.kmsconn = kms.New(sess)
+	client.kinesisconn = kinesis.New(awsKinesisSess)
+	client.kmsconn = kms.New(awsKmsSess)
	client.lambdaconn = lambda.New(sess)
	client.lightsailconn = lightsail.New(usEast1Sess)
	client.opsworksconn = opsworks.New(sess)
	client.r53conn = route53.New(usEast1Sess)
-	client.rdsconn = rds.New(sess)
+	client.rdsconn = rds.New(awsRdsSess)
	client.redshiftconn = redshift.New(sess)
	client.simpledbconn = simpledb.New(sess)
	client.s3conn = s3.New(awsS3Sess)
	client.sesConn = ses.New(sess)
	client.sfnconn = sfn.New(sess)
-	client.snsconn = sns.New(sess)
-	client.sqsconn = sqs.New(sess)
+	client.snsconn = sns.New(awsSnsSess)
+	client.sqsconn = sqs.New(awsSqsSess)
	client.ssmconn = ssm.New(sess)
	client.wafconn = waf.New(sess)
	client.wafregionalconn = wafregional.New(sess)

	return &client, nil
}
@@ -5,8 +5,6 @@ import (
	"fmt"
	"log"
	"regexp"
	"sort"
	"time"

	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/hashicorp/terraform/helper/hashcode"
@@ -181,7 +179,7 @@ func dataSourceAwsAmiRead(d *schema.ResourceData, meta interface{}) error {
	nameRegex, nameRegexOk := d.GetOk("name_regex")
	owners, ownersOk := d.GetOk("owners")

-	if executableUsersOk == false && filtersOk == false && nameRegexOk == false && ownersOk == false {
+	if !executableUsersOk && !filtersOk && !nameRegexOk && !ownersOk {
		return fmt.Errorf("One of executable_users, filters, name_regex, or owners must be assigned")
	}
@@ -249,21 +247,9 @@ func dataSourceAwsAmiRead(d *schema.ResourceData, meta interface{}) error {
	return amiDescriptionAttributes(d, image)
}

-type imageSort []*ec2.Image
-
-func (a imageSort) Len() int      { return len(a) }
-func (a imageSort) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
-func (a imageSort) Less(i, j int) bool {
-	itime, _ := time.Parse(time.RFC3339, *a[i].CreationDate)
-	jtime, _ := time.Parse(time.RFC3339, *a[j].CreationDate)
-	return itime.Unix() < jtime.Unix()
-}
-
// Returns the most recent AMI out of a slice of images.
func mostRecentAmi(images []*ec2.Image) *ec2.Image {
-	sortedImages := images
-	sort.Sort(imageSort(sortedImages))
-	return sortedImages[len(sortedImages)-1]
+	return sortImages(images)[0]
}

// populate the numerous fields that the image description returns.
@@ -0,0 +1,111 @@
package aws

import (
	"fmt"
	"log"
	"regexp"

	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/hashicorp/terraform/helper/hashcode"
	"github.com/hashicorp/terraform/helper/schema"
)

func dataSourceAwsAmiIds() *schema.Resource {
	return &schema.Resource{
		Read: dataSourceAwsAmiIdsRead,

		Schema: map[string]*schema.Schema{
			"filter": dataSourceFiltersSchema(),
			"executable_users": {
				Type:     schema.TypeList,
				Optional: true,
				ForceNew: true,
				Elem:     &schema.Schema{Type: schema.TypeString},
			},
			"name_regex": {
				Type:         schema.TypeString,
				Optional:     true,
				ForceNew:     true,
				ValidateFunc: validateNameRegex,
			},
			"owners": {
				Type:     schema.TypeList,
				Optional: true,
				ForceNew: true,
				Elem:     &schema.Schema{Type: schema.TypeString},
			},
			"tags": dataSourceTagsSchema(),
			"ids": &schema.Schema{
				Type:     schema.TypeList,
				Computed: true,
				Elem:     &schema.Schema{Type: schema.TypeString},
			},
		},
	}
}

func dataSourceAwsAmiIdsRead(d *schema.ResourceData, meta interface{}) error {
	conn := meta.(*AWSClient).ec2conn

	executableUsers, executableUsersOk := d.GetOk("executable_users")
	filters, filtersOk := d.GetOk("filter")
	nameRegex, nameRegexOk := d.GetOk("name_regex")
	owners, ownersOk := d.GetOk("owners")

	if executableUsersOk == false && filtersOk == false && nameRegexOk == false && ownersOk == false {
		return fmt.Errorf("One of executable_users, filters, name_regex, or owners must be assigned")
	}

	params := &ec2.DescribeImagesInput{}

	if executableUsersOk {
		params.ExecutableUsers = expandStringList(executableUsers.([]interface{}))
	}
	if filtersOk {
		params.Filters = buildAwsDataSourceFilters(filters.(*schema.Set))
	}
	if ownersOk {
		o := expandStringList(owners.([]interface{}))

		if len(o) > 0 {
			params.Owners = o
		}
	}

	resp, err := conn.DescribeImages(params)
	if err != nil {
		return err
	}

	var filteredImages []*ec2.Image
	imageIds := make([]string, 0)

	if nameRegexOk {
		r := regexp.MustCompile(nameRegex.(string))
		for _, image := range resp.Images {
			// Check for a very rare case where the response would include no
			// image name. No name means nothing to attempt a match against,
			// therefore we are skipping such image.
			if image.Name == nil || *image.Name == "" {
				log.Printf("[WARN] Unable to find AMI name to match against "+
					"for image ID %q owned by %q, nothing to do.",
					*image.ImageId, *image.OwnerId)
				continue
			}
			if r.MatchString(*image.Name) {
				filteredImages = append(filteredImages, image)
			}
		}
	} else {
		filteredImages = resp.Images[:]
	}

	for _, image := range sortImages(filteredImages) {
		imageIds = append(imageIds, *image.ImageId)
	}

	d.SetId(fmt.Sprintf("%d", hashcode.String(params.String())))
	d.Set("ids", imageIds)

	return nil
}
@@ -0,0 +1,128 @@
package aws

import (
	"fmt"
	"testing"

	"github.com/hashicorp/terraform/helper/resource"
	"github.com/satori/uuid"
)

func TestAccDataSourceAwsAmiIds_basic(t *testing.T) {
	resource.Test(t, resource.TestCase{
		PreCheck:  func() { testAccPreCheck(t) },
		Providers: testAccProviders,
		Steps: []resource.TestStep{
			{
				Config: testAccDataSourceAwsAmiIdsConfig_basic,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckAwsAmiDataSourceID("data.aws_ami_ids.ubuntu"),
				),
			},
		},
	})
}

func TestAccDataSourceAwsAmiIds_sorted(t *testing.T) {
	uuid := uuid.NewV4().String()

	resource.Test(t, resource.TestCase{
		PreCheck:  func() { testAccPreCheck(t) },
		Providers: testAccProviders,
		Steps: []resource.TestStep{
			{
				Config: testAccDataSourceAwsAmiIdsConfig_sorted1(uuid),
				Check: resource.ComposeTestCheckFunc(
					resource.TestCheckResourceAttrSet("aws_ami_from_instance.a", "id"),
					resource.TestCheckResourceAttrSet("aws_ami_from_instance.b", "id"),
				),
			},
			{
				Config: testAccDataSourceAwsAmiIdsConfig_sorted2(uuid),
				Check: resource.ComposeTestCheckFunc(
					testAccCheckAwsEbsSnapshotDataSourceID("data.aws_ami_ids.test"),
					resource.TestCheckResourceAttr("data.aws_ami_ids.test", "ids.#", "2"),
					resource.TestCheckResourceAttrPair(
						"data.aws_ami_ids.test", "ids.0",
						"aws_ami_from_instance.b", "id"),
					resource.TestCheckResourceAttrPair(
						"data.aws_ami_ids.test", "ids.1",
						"aws_ami_from_instance.a", "id"),
				),
			},
		},
	})
}

func TestAccDataSourceAwsAmiIds_empty(t *testing.T) {
	resource.Test(t, resource.TestCase{
		PreCheck:  func() { testAccPreCheck(t) },
		Providers: testAccProviders,
		Steps: []resource.TestStep{
			{
				Config: testAccDataSourceAwsAmiIdsConfig_empty,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckAwsAmiDataSourceID("data.aws_ami_ids.empty"),
					resource.TestCheckResourceAttr("data.aws_ami_ids.empty", "ids.#", "0"),
				),
			},
		},
	})
}

const testAccDataSourceAwsAmiIdsConfig_basic = `
data "aws_ami_ids" "ubuntu" {
  owners = ["099720109477"]

  filter {
    name   = "name"
    values = ["ubuntu/images/ubuntu-*-*-amd64-server-*"]
  }
}
`

func testAccDataSourceAwsAmiIdsConfig_sorted1(uuid string) string {
	return fmt.Sprintf(`
resource "aws_instance" "test" {
  ami           = "ami-efd0428f"
  instance_type = "m3.medium"

  count = 2
}

resource "aws_ami_from_instance" "a" {
  name                    = "tf-test-%s-a"
  source_instance_id      = "${aws_instance.test.*.id[0]}"
  snapshot_without_reboot = true
}

resource "aws_ami_from_instance" "b" {
  name                    = "tf-test-%s-b"
  source_instance_id      = "${aws_instance.test.*.id[1]}"
  snapshot_without_reboot = true

  // We want to ensure that 'aws_ami_from_instance.a.creation_date' is less
  // than 'aws_ami_from_instance.b.creation_date' so that we can ensure that
  // the images are being sorted correctly.
  depends_on = ["aws_ami_from_instance.a"]
}
`, uuid, uuid)
}

func testAccDataSourceAwsAmiIdsConfig_sorted2(uuid string) string {
	return testAccDataSourceAwsAmiIdsConfig_sorted1(uuid) + fmt.Sprintf(`
data "aws_ami_ids" "test" {
  owners     = ["self"]
  name_regex = "^tf-test-%s-"
}
`, uuid)
}

const testAccDataSourceAwsAmiIdsConfig_empty = `
data "aws_ami_ids" "empty" {
  filter {
    name   = "name"
    values = []
  }
}
`
@@ -5,6 +5,7 @@ import (
	"log"
	"time"

+	"github.com/aws/aws-sdk-go/service/sts"
	"github.com/hashicorp/terraform/helper/schema"
)
@@ -17,24 +18,33 @@ func dataSourceAwsCallerIdentity() *schema.Resource {
				Type:     schema.TypeString,
				Computed: true,
			},

+			"arn": {
+				Type:     schema.TypeString,
+				Computed: true,
+			},
+
+			"user_id": {
+				Type:     schema.TypeString,
+				Computed: true,
+			},
		},
	}
}

func dataSourceAwsCallerIdentityRead(d *schema.ResourceData, meta interface{}) error {
-	client := meta.(*AWSClient)
+	client := meta.(*AWSClient).stsconn

	log.Printf("[DEBUG] Reading Caller Identity.")
-	d.SetId(time.Now().UTC().String())

-	if client.accountid == "" {
-		log.Println("[DEBUG] No Account ID available, failing")
-		return fmt.Errorf("No AWS Account ID is available to the provider. Please ensure that\n" +
-			"skip_requesting_account_id is not set on the AWS provider.")
+	res, err := client.GetCallerIdentity(&sts.GetCallerIdentityInput{})
+	if err != nil {
+		return fmt.Errorf("Error getting Caller Identity: %v", err)
	}

-	log.Printf("[DEBUG] Setting AWS Account ID to %s.", client.accountid)
-	d.Set("account_id", meta.(*AWSClient).accountid)
+	log.Printf("[DEBUG] Received Caller Identity: %s", res)

+	d.SetId(time.Now().UTC().String())
+	d.Set("account_id", res.Account)
+	d.Set("arn", res.Arn)
+	d.Set("user_id", res.UserId)
	return nil
}
@@ -39,6 +39,14 @@ func testAccCheckAwsCallerIdentityAccountId(n string) resource.TestCheckFunc {
			return fmt.Errorf("Incorrect Account ID: expected %q, got %q", expected, rs.Primary.Attributes["account_id"])
		}

+		if rs.Primary.Attributes["user_id"] == "" {
+			return fmt.Errorf("UserID expected to not be nil")
+		}
+
+		if rs.Primary.Attributes["arn"] == "" {
+			return fmt.Errorf("ARN expected to not be nil")
+		}

		return nil
	}
}
@@ -188,6 +188,11 @@ func dataSourceAwsDbInstance() *schema.Resource {
				Computed: true,
				Elem:     &schema.Schema{Type: schema.TypeString},
			},

+			"replicate_source_db": {
+				Type:     schema.TypeString,
+				Computed: true,
+			},
		},
	}
}
@@ -271,6 +276,7 @@ func dataSourceAwsDbInstanceRead(d *schema.ResourceData, meta interface{}) error
	d.Set("storage_encrypted", dbInstance.StorageEncrypted)
	d.Set("storage_type", dbInstance.StorageType)
	d.Set("timezone", dbInstance.Timezone)
+	d.Set("replicate_source_db", dbInstance.ReadReplicaSourceDBInstanceIdentifier)

	var vpcSecurityGroups []string
	for _, v := range dbInstance.VpcSecurityGroups {
@@ -0,0 +1,217 @@
package aws

import (
	"fmt"
	"log"
	"sort"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/rds"
	"github.com/hashicorp/terraform/helper/schema"
)

func dataSourceAwsDbSnapshot() *schema.Resource {
	return &schema.Resource{
		Read: dataSourceAwsDbSnapshotRead,

		Schema: map[string]*schema.Schema{
			// selection criteria
			"db_instance_identifier": {
				Type:     schema.TypeString,
				Optional: true,
				ForceNew: true,
			},

			"db_snapshot_identifier": {
				Type:     schema.TypeString,
				Optional: true,
				ForceNew: true,
			},

			"snapshot_type": {
				Type:     schema.TypeString,
				Optional: true,
				ForceNew: true,
			},

			"include_shared": {
				Type:     schema.TypeBool,
				Optional: true,
				ForceNew: true,
				Default:  false,
			},

			"include_public": {
				Type:     schema.TypeBool,
				Optional: true,
				ForceNew: true,
				Default:  false,
			},
			"most_recent": {
				Type:     schema.TypeBool,
				Optional: true,
				Default:  false,
				ForceNew: true,
			},

			// computed values returned
			"allocated_storage": {
				Type:     schema.TypeInt,
				Computed: true,
			},
			"availability_zone": {
				Type:     schema.TypeString,
				Computed: true,
			},
			"db_snapshot_arn": {
				Type:     schema.TypeString,
				Computed: true,
			},
			"encrypted": {
				Type:     schema.TypeBool,
				Computed: true,
			},
			"engine": {
				Type:     schema.TypeString,
				Computed: true,
			},
			"engine_version": {
				Type:     schema.TypeString,
				Computed: true,
			},
			"iops": {
				Type:     schema.TypeInt,
				Computed: true,
			},
			"kms_key_id": {
				Type:     schema.TypeString,
				Computed: true,
			},
			"license_model": {
				Type:     schema.TypeString,
				Computed: true,
			},
			"option_group_name": {
				Type:     schema.TypeString,
				Computed: true,
			},
			"port": {
				Type:     schema.TypeInt,
				Computed: true,
			},
			"source_db_snapshot_identifier": {
				Type:     schema.TypeString,
				Computed: true,
			},
			"source_region": {
				Type:     schema.TypeString,
				Computed: true,
			},
			"snapshot_create_time": {
				Type:     schema.TypeString,
				Computed: true,
			},
			"status": {
				Type:     schema.TypeString,
				Computed: true,
			},
			"storage_type": {
				Type:     schema.TypeString,
				Computed: true,
			},
			"vpc_id": {
				Type:     schema.TypeString,
				Computed: true,
			},
		},
	}
}

func dataSourceAwsDbSnapshotRead(d *schema.ResourceData, meta interface{}) error {
	conn := meta.(*AWSClient).rdsconn

	instanceIdentifier, instanceIdentifierOk := d.GetOk("db_instance_identifier")
	snapshotIdentifier, snapshotIdentifierOk := d.GetOk("db_snapshot_identifier")

	if !instanceIdentifierOk && !snapshotIdentifierOk {
		return fmt.Errorf("One of db_snapshot_identifier or db_instance_identifier must be assigned")
	}

	params := &rds.DescribeDBSnapshotsInput{
		IncludePublic: aws.Bool(d.Get("include_public").(bool)),
		IncludeShared: aws.Bool(d.Get("include_shared").(bool)),
	}
	if v, ok := d.GetOk("snapshot_type"); ok {
		params.SnapshotType = aws.String(v.(string))
	}
	if instanceIdentifierOk {
		params.DBInstanceIdentifier = aws.String(instanceIdentifier.(string))
	}
	if snapshotIdentifierOk {
		params.DBSnapshotIdentifier = aws.String(snapshotIdentifier.(string))
	}

	resp, err := conn.DescribeDBSnapshots(params)
	if err != nil {
		return err
	}

	if len(resp.DBSnapshots) < 1 {
		return fmt.Errorf("Your query returned no results. Please change your search criteria and try again.")
	}

	var snapshot *rds.DBSnapshot
	if len(resp.DBSnapshots) > 1 {
		recent := d.Get("most_recent").(bool)
		log.Printf("[DEBUG] aws_db_snapshot - multiple results found and `most_recent` is set to: %t", recent)
		if recent {
			snapshot = mostRecentDbSnapshot(resp.DBSnapshots)
		} else {
			return fmt.Errorf("Your query returned more than one result. Please try a more specific search criteria.")
		}
	} else {
		snapshot = resp.DBSnapshots[0]
	}

	return dbSnapshotDescriptionAttributes(d, snapshot)
}

type rdsSnapshotSort []*rds.DBSnapshot

func (a rdsSnapshotSort) Len() int      { return len(a) }
func (a rdsSnapshotSort) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
func (a rdsSnapshotSort) Less(i, j int) bool {
	return (*a[i].SnapshotCreateTime).Before(*a[j].SnapshotCreateTime)
}

func mostRecentDbSnapshot(snapshots []*rds.DBSnapshot) *rds.DBSnapshot {
	sortedSnapshots := snapshots
	sort.Sort(rdsSnapshotSort(sortedSnapshots))
	return sortedSnapshots[len(sortedSnapshots)-1]
}

func dbSnapshotDescriptionAttributes(d *schema.ResourceData, snapshot *rds.DBSnapshot) error {
	d.SetId(*snapshot.DBInstanceIdentifier)
	d.Set("db_instance_identifier", snapshot.DBInstanceIdentifier)
	d.Set("db_snapshot_identifier", snapshot.DBSnapshotIdentifier)
	d.Set("snapshot_type", snapshot.SnapshotType)
	d.Set("allocated_storage", snapshot.AllocatedStorage)
	d.Set("availability_zone", snapshot.AvailabilityZone)
	d.Set("db_snapshot_arn", snapshot.DBSnapshotArn)
	d.Set("encrypted", snapshot.Encrypted)
	d.Set("engine", snapshot.Engine)
	d.Set("engine_version", snapshot.EngineVersion)
	d.Set("iops", snapshot.Iops)
	d.Set("kms_key_id", snapshot.KmsKeyId)
	d.Set("license_model", snapshot.LicenseModel)
	d.Set("option_group_name", snapshot.OptionGroupName)
	d.Set("port", snapshot.Port)
	d.Set("source_db_snapshot_identifier", snapshot.SourceDBSnapshotIdentifier)
	d.Set("source_region", snapshot.SourceRegion)
	d.Set("status", snapshot.Status)
	d.Set("vpc_id", snapshot.VpcId)
	d.Set("snapshot_create_time", snapshot.SnapshotCreateTime.Format(time.RFC3339))

	return nil
}
@ -0,0 +1,74 @@
package aws

import (
	"fmt"
	"testing"

	"github.com/hashicorp/terraform/helper/acctest"
	"github.com/hashicorp/terraform/helper/resource"
	"github.com/hashicorp/terraform/terraform"
)

func TestAccAWSDbSnapshotDataSource_basic(t *testing.T) {
	rInt := acctest.RandInt()
	resource.Test(t, resource.TestCase{
		PreCheck:  func() { testAccPreCheck(t) },
		Providers: testAccProviders,
		Steps: []resource.TestStep{
			{
				Config: testAccCheckAwsDbSnapshotDataSourceConfig(rInt),
				Check: resource.ComposeTestCheckFunc(
					testAccCheckAwsDbSnapshotDataSourceID("data.aws_db_snapshot.snapshot"),
				),
			},
		},
	})
}

func testAccCheckAwsDbSnapshotDataSourceID(n string) resource.TestCheckFunc {
	return func(s *terraform.State) error {
		rs, ok := s.RootModule().Resources[n]
		if !ok {
			return fmt.Errorf("Can't find snapshot data source: %s", n)
		}

		if rs.Primary.ID == "" {
			return fmt.Errorf("Snapshot data source ID not set")
		}
		return nil
	}
}

func testAccCheckAwsDbSnapshotDataSourceConfig(rInt int) string {
	return fmt.Sprintf(`
resource "aws_db_instance" "bar" {
	allocated_storage = 10
	engine            = "MySQL"
	engine_version    = "5.6.21"
	instance_class    = "db.t1.micro"
	name              = "baz"
	password          = "barbarbarbar"
	username          = "foo"

	# Maintenance Window is stored in lower case in the API, though not strictly
	# documented. Terraform will downcase this to match (as opposed to throw a
	# validation error).
	maintenance_window = "Fri:09:00-Fri:09:30"

	backup_retention_period = 0

	parameter_group_name = "default.mysql5.6"
}

data "aws_db_snapshot" "snapshot" {
	most_recent            = "true"
	db_snapshot_identifier = "${aws_db_snapshot.test.id}"
}

resource "aws_db_snapshot" "test" {
	db_instance_identifier = "${aws_db_instance.bar.id}"
	db_snapshot_identifier = "testsnapshot%d"
}`, rInt)
}
@ -3,7 +3,6 @@ package aws
 import (
 	"fmt"
 	"log"
-	"sort"
 
 	"github.com/aws/aws-sdk-go/service/ec2"
 	"github.com/hashicorp/terraform/helper/schema"

@ -94,7 +93,7 @@ func dataSourceAwsEbsSnapshotRead(d *schema.ResourceData, meta interface{}) erro
 	snapshotIds, snapshotIdsOk := d.GetOk("snapshot_ids")
 	owners, ownersOk := d.GetOk("owners")
 
-	if restorableUsers == false && filtersOk == false && snapshotIds == false && ownersOk == false {
+	if !restorableUsersOk && !filtersOk && !snapshotIdsOk && !ownersOk {
 		return fmt.Errorf("One of snapshot_ids, filters, restorable_by_user_ids, or owners must be assigned")
 	}
 
@ -138,20 +137,8 @@ func dataSourceAwsEbsSnapshotRead(d *schema.ResourceData, meta interface{}) erro
 	return snapshotDescriptionAttributes(d, snapshot)
 }
 
-type snapshotSort []*ec2.Snapshot
-
-func (a snapshotSort) Len() int      { return len(a) }
-func (a snapshotSort) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
-func (a snapshotSort) Less(i, j int) bool {
-	itime := *a[i].StartTime
-	jtime := *a[j].StartTime
-	return itime.Unix() < jtime.Unix()
-}
-
 func mostRecentSnapshot(snapshots []*ec2.Snapshot) *ec2.Snapshot {
-	sortedSnapshots := snapshots
-	sort.Sort(snapshotSort(sortedSnapshots))
-	return sortedSnapshots[len(sortedSnapshots)-1]
+	return sortSnapshots(snapshots)[0]
 }
 
 func snapshotDescriptionAttributes(d *schema.ResourceData, snapshot *ec2.Snapshot) error {
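The `@ -94,7` hunk above replaces checks like `restorableUsers == false` with `!restorableUsersOk`. The removed form could never be true: `GetOk` returns the raw value as `interface{}`, and an interface holding `nil` (or a list) never compares equal to the bool `false`, so the guard silently never fired. A minimal sketch, with `getOk` as a stand-in for `schema.ResourceData.GetOk`:

```go
package main

import "fmt"

// getOk mimics schema.ResourceData.GetOk: the raw value plus a "was it set" flag.
func getOk(set bool) (interface{}, bool) {
	if set {
		return []interface{}{"self"}, true
	}
	return nil, false
}

// buggyGuard is the removed check: it compares the interface value to false.
func buggyGuard(v interface{}) bool { return v == false }

// fixedGuard is the replacement: it uses the ok flag returned by GetOk.
func fixedGuard(ok bool) bool { return !ok }

func main() {
	v, ok := getOk(false) // attribute not set
	fmt.Println(buggyGuard(v)) // false: a nil interface never equals bool false
	fmt.Println(fixedGuard(ok)) // true: the fixed check fires as intended
}
```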
@ -0,0 +1,77 @@
package aws

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/hashicorp/terraform/helper/hashcode"
	"github.com/hashicorp/terraform/helper/schema"
)

func dataSourceAwsEbsSnapshotIds() *schema.Resource {
	return &schema.Resource{
		Read: dataSourceAwsEbsSnapshotIdsRead,

		Schema: map[string]*schema.Schema{
			"filter": dataSourceFiltersSchema(),
			"owners": {
				Type:     schema.TypeList,
				Optional: true,
				ForceNew: true,
				Elem:     &schema.Schema{Type: schema.TypeString},
			},
			"restorable_by_user_ids": {
				Type:     schema.TypeList,
				Optional: true,
				ForceNew: true,
				Elem:     &schema.Schema{Type: schema.TypeString},
			},
			"tags": dataSourceTagsSchema(),
			"ids": &schema.Schema{
				Type:     schema.TypeList,
				Computed: true,
				Elem:     &schema.Schema{Type: schema.TypeString},
			},
		},
	}
}

func dataSourceAwsEbsSnapshotIdsRead(d *schema.ResourceData, meta interface{}) error {
	conn := meta.(*AWSClient).ec2conn

	restorableUsers, restorableUsersOk := d.GetOk("restorable_by_user_ids")
	filters, filtersOk := d.GetOk("filter")
	owners, ownersOk := d.GetOk("owners")

	if !restorableUsersOk && !filtersOk && !ownersOk {
		return fmt.Errorf("One of filters, restorable_by_user_ids, or owners must be assigned")
	}

	params := &ec2.DescribeSnapshotsInput{}

	if restorableUsersOk {
		params.RestorableByUserIds = expandStringList(restorableUsers.([]interface{}))
	}
	if filtersOk {
		params.Filters = buildAwsDataSourceFilters(filters.(*schema.Set))
	}
	if ownersOk {
		params.OwnerIds = expandStringList(owners.([]interface{}))
	}

	resp, err := conn.DescribeSnapshots(params)
	if err != nil {
		return err
	}

	snapshotIds := make([]string, 0)

	for _, snapshot := range sortSnapshots(resp.Snapshots) {
		snapshotIds = append(snapshotIds, *snapshot.SnapshotId)
	}

	d.SetId(fmt.Sprintf("%d", hashcode.String(params.String())))
	d.Set("ids", snapshotIds)

	return nil
}
@ -0,0 +1,131 @@
package aws

import (
	"fmt"
	"testing"

	"github.com/hashicorp/terraform/helper/resource"
	"github.com/satori/uuid"
)

func TestAccDataSourceAwsEbsSnapshotIds_basic(t *testing.T) {
	resource.Test(t, resource.TestCase{
		PreCheck:  func() { testAccPreCheck(t) },
		Providers: testAccProviders,
		Steps: []resource.TestStep{
			{
				Config: testAccDataSourceAwsEbsSnapshotIdsConfig_basic,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckAwsEbsSnapshotDataSourceID("data.aws_ebs_snapshot_ids.test"),
				),
			},
		},
	})
}

func TestAccDataSourceAwsEbsSnapshotIds_sorted(t *testing.T) {
	uuid := uuid.NewV4().String()

	resource.Test(t, resource.TestCase{
		PreCheck:  func() { testAccPreCheck(t) },
		Providers: testAccProviders,
		Steps: []resource.TestStep{
			{
				Config: testAccDataSourceAwsEbsSnapshotIdsConfig_sorted1(uuid),
				Check: resource.ComposeTestCheckFunc(
					resource.TestCheckResourceAttrSet("aws_ebs_snapshot.a", "id"),
					resource.TestCheckResourceAttrSet("aws_ebs_snapshot.b", "id"),
				),
			},
			{
				Config: testAccDataSourceAwsEbsSnapshotIdsConfig_sorted2(uuid),
				Check: resource.ComposeTestCheckFunc(
					testAccCheckAwsEbsSnapshotDataSourceID("data.aws_ebs_snapshot_ids.test"),
					resource.TestCheckResourceAttr("data.aws_ebs_snapshot_ids.test", "ids.#", "2"),
					resource.TestCheckResourceAttrPair(
						"data.aws_ebs_snapshot_ids.test", "ids.0",
						"aws_ebs_snapshot.b", "id"),
					resource.TestCheckResourceAttrPair(
						"data.aws_ebs_snapshot_ids.test", "ids.1",
						"aws_ebs_snapshot.a", "id"),
				),
			},
		},
	})
}

func TestAccDataSourceAwsEbsSnapshotIds_empty(t *testing.T) {
	resource.Test(t, resource.TestCase{
		PreCheck:  func() { testAccPreCheck(t) },
		Providers: testAccProviders,
		Steps: []resource.TestStep{
			{
				Config: testAccDataSourceAwsEbsSnapshotIdsConfig_empty,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckAwsEbsSnapshotDataSourceID("data.aws_ebs_snapshot_ids.empty"),
					resource.TestCheckResourceAttr("data.aws_ebs_snapshot_ids.empty", "ids.#", "0"),
				),
			},
		},
	})
}

const testAccDataSourceAwsEbsSnapshotIdsConfig_basic = `
resource "aws_ebs_volume" "test" {
	availability_zone = "us-west-2a"
	size              = 1
}

resource "aws_ebs_snapshot" "test" {
	volume_id = "${aws_ebs_volume.test.id}"
}

data "aws_ebs_snapshot_ids" "test" {
	owners = ["self"]
}
`

func testAccDataSourceAwsEbsSnapshotIdsConfig_sorted1(uuid string) string {
	return fmt.Sprintf(`
resource "aws_ebs_volume" "test" {
	availability_zone = "us-west-2a"
	size              = 1

	count = 2
}

resource "aws_ebs_snapshot" "a" {
	volume_id   = "${aws_ebs_volume.test.*.id[0]}"
	description = "tf-test-%s"
}

resource "aws_ebs_snapshot" "b" {
	volume_id   = "${aws_ebs_volume.test.*.id[1]}"
	description = "tf-test-%s"

	// We want to ensure that 'aws_ebs_snapshot.a.creation_date' is less than
	// 'aws_ebs_snapshot.b.creation_date' so that we can ensure that the
	// snapshots are being sorted correctly.
	depends_on = ["aws_ebs_snapshot.a"]
}
`, uuid, uuid)
}

func testAccDataSourceAwsEbsSnapshotIdsConfig_sorted2(uuid string) string {
	return testAccDataSourceAwsEbsSnapshotIdsConfig_sorted1(uuid) + fmt.Sprintf(`
data "aws_ebs_snapshot_ids" "test" {
	owners = ["self"]

	filter {
		name   = "description"
		values = ["tf-test-%s"]
	}
}
`, uuid)
}

const testAccDataSourceAwsEbsSnapshotIdsConfig_empty = `
data "aws_ebs_snapshot_ids" "empty" {
	owners = ["000000000000"]
}
`
@ -44,7 +44,7 @@ func testAccCheckAwsEbsSnapshotDataSourceID(n string) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
 		rs, ok := s.RootModule().Resources[n]
 		if !ok {
-			return fmt.Errorf("Can't find Volume data source: %s", n)
+			return fmt.Errorf("Can't find snapshot data source: %s", n)
 		}
 
 		if rs.Primary.ID == "" {
@ -15,10 +15,10 @@ func TestAccAWSEcsDataSource_ecsTaskDefinition(t *testing.T) {
 		resource.TestStep{
 			Config: testAccCheckAwsEcsTaskDefinitionDataSourceConfig,
 			Check: resource.ComposeTestCheckFunc(
-				resource.TestMatchResourceAttr("data.aws_ecs_task_definition.mongo", "id", regexp.MustCompile("^arn:aws:ecs:us-west-2:[0-9]{12}:task-definition/mongodb:[1-9]*[0-9]$")),
+				resource.TestMatchResourceAttr("data.aws_ecs_task_definition.mongo", "id", regexp.MustCompile("^arn:aws:ecs:us-west-2:[0-9]{12}:task-definition/mongodb:[1-9][0-9]*$")),
 				resource.TestCheckResourceAttr("data.aws_ecs_task_definition.mongo", "family", "mongodb"),
 				resource.TestCheckResourceAttr("data.aws_ecs_task_definition.mongo", "network_mode", "bridge"),
-				resource.TestMatchResourceAttr("data.aws_ecs_task_definition.mongo", "revision", regexp.MustCompile("^[1-9]*[0-9]$")),
+				resource.TestMatchResourceAttr("data.aws_ecs_task_definition.mongo", "revision", regexp.MustCompile("^[1-9][0-9]*$")),
 				resource.TestCheckResourceAttr("data.aws_ecs_task_definition.mongo", "status", "ACTIVE"),
 				resource.TestMatchResourceAttr("data.aws_ecs_task_definition.mongo", "task_role_arn", regexp.MustCompile("^arn:aws:iam::[0-9]{12}:role/mongo_role$")),
 			),
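The hunk above swaps `[1-9]*[0-9]` for `[1-9][0-9]*` in the revision patterns. The old form lets `[1-9]*` match nothing, so a bare `0` satisfies the anchored pattern even though ECS task definition revisions start at 1. A quick check of both patterns:

```go
package main

import (
	"fmt"
	"regexp"
)

var (
	oldRe = regexp.MustCompile(`^[1-9]*[0-9]$`) // pattern the diff removes
	newRe = regexp.MustCompile(`^[1-9][0-9]*$`) // its replacement
)

func main() {
	// Task definition revisions start at 1, so "0" should never match.
	fmt.Println(oldRe.MatchString("0"))  // true: [1-9]* can match the empty string
	fmt.Println(newRe.MatchString("0"))  // false: the first digit must be 1-9
	fmt.Println(newRe.MatchString("12")) // true
}
```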
@ -0,0 +1,113 @@
package aws

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/efs"
	"github.com/hashicorp/errwrap"
	"github.com/hashicorp/terraform/helper/schema"
)

func dataSourceAwsEfsFileSystem() *schema.Resource {
	return &schema.Resource{
		Read: dataSourceAwsEfsFileSystemRead,

		Schema: map[string]*schema.Schema{
			"creation_token": {
				Type:         schema.TypeString,
				Optional:     true,
				Computed:     true,
				ForceNew:     true,
				ValidateFunc: validateMaxLength(64),
			},
			"file_system_id": {
				Type:     schema.TypeString,
				Optional: true,
				Computed: true,
				ForceNew: true,
			},
			"performance_mode": {
				Type:     schema.TypeString,
				Computed: true,
			},
			"tags": tagsSchemaComputed(),
		},
	}
}

func dataSourceAwsEfsFileSystemRead(d *schema.ResourceData, meta interface{}) error {
	efsconn := meta.(*AWSClient).efsconn

	describeEfsOpts := &efs.DescribeFileSystemsInput{}

	if v, ok := d.GetOk("creation_token"); ok {
		describeEfsOpts.CreationToken = aws.String(v.(string))
	}

	if v, ok := d.GetOk("file_system_id"); ok {
		describeEfsOpts.FileSystemId = aws.String(v.(string))
	}

	describeResp, err := efsconn.DescribeFileSystems(describeEfsOpts)
	if err != nil {
		return errwrap.Wrapf("Error retrieving EFS: {{err}}", err)
	}
	if len(describeResp.FileSystems) != 1 {
		return fmt.Errorf("Search returned %d results, please revise so only one is returned", len(describeResp.FileSystems))
	}

	d.SetId(*describeResp.FileSystems[0].FileSystemId)

	tags := make([]*efs.Tag, 0)
	var marker string
	for {
		params := &efs.DescribeTagsInput{
			FileSystemId: aws.String(d.Id()),
		}
		if marker != "" {
			params.Marker = aws.String(marker)
		}

		tagsResp, err := efsconn.DescribeTags(params)
		if err != nil {
			return fmt.Errorf("Error retrieving EC2 tags for EFS file system (%q): %s",
				d.Id(), err.Error())
		}

		for _, tag := range tagsResp.Tags {
			tags = append(tags, tag)
		}

		if tagsResp.NextMarker != nil {
			marker = *tagsResp.NextMarker
		} else {
			break
		}
	}

	err = d.Set("tags", tagsToMapEFS(tags))
	if err != nil {
		return err
	}

	var fs *efs.FileSystemDescription
	for _, f := range describeResp.FileSystems {
		if d.Id() == *f.FileSystemId {
			fs = f
			break
		}
	}
	if fs == nil {
		log.Printf("[WARN] EFS (%s) not found, removing from state", d.Id())
		d.SetId("")
		return nil
	}

	d.Set("creation_token", fs.CreationToken)
	d.Set("performance_mode", fs.PerformanceMode)
	d.Set("file_system_id", fs.FileSystemId)

	return nil
}
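The tag-collection loop in `dataSourceAwsEfsFileSystemRead` follows `NextMarker` until the service stops returning one. The control flow can be sketched in isolation; the `page` and `fetch` names below are stand-ins for the EFS API, not part of the source:

```go
package main

import "fmt"

// page mimics one DescribeTags response: some items plus an optional next marker.
type page struct {
	items []string
	next  *string
}

// fetch stands in for efsconn.DescribeTags; pages are keyed by marker ("" = first page).
func fetch(pages map[string]page, marker string) page { return pages[marker] }

// collect mirrors the loop above: keep requesting pages, carrying the marker
// forward, until a response arrives without a NextMarker.
func collect(pages map[string]page) []string {
	var all []string
	marker := ""
	for {
		p := fetch(pages, marker)
		all = append(all, p.items...)
		if p.next != nil {
			marker = *p.next
		} else {
			break
		}
	}
	return all
}

func main() {
	m2 := "m2"
	pages := map[string]page{
		"":   {items: []string{"Name", "Env"}, next: &m2},
		"m2": {items: []string{"Team"}},
	}
	fmt.Println(collect(pages)) // [Name Env Team]
}
```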
@ -0,0 +1,71 @@
package aws

import (
	"fmt"
	"testing"

	"github.com/hashicorp/terraform/helper/resource"
	"github.com/hashicorp/terraform/terraform"
)

func TestAccDataSourceAwsEfsFileSystem(t *testing.T) {
	resource.Test(t, resource.TestCase{
		PreCheck:  func() { testAccPreCheck(t) },
		Providers: testAccProviders,
		Steps: []resource.TestStep{
			{
				Config: testAccDataSourceAwsEfsFileSystemConfig,
				Check: resource.ComposeTestCheckFunc(
					testAccDataSourceAwsEfsFileSystemCheck("data.aws_efs_file_system.by_creation_token"),
					testAccDataSourceAwsEfsFileSystemCheck("data.aws_efs_file_system.by_id"),
				),
			},
		},
	})
}

func testAccDataSourceAwsEfsFileSystemCheck(name string) resource.TestCheckFunc {
	return func(s *terraform.State) error {
		rs, ok := s.RootModule().Resources[name]
		if !ok {
			return fmt.Errorf("root module has no resource called %s", name)
		}

		efsRs, ok := s.RootModule().Resources["aws_efs_file_system.test"]
		if !ok {
			return fmt.Errorf("can't find aws_efs_file_system.test in state")
		}

		attr := rs.Primary.Attributes

		if attr["creation_token"] != efsRs.Primary.Attributes["creation_token"] {
			return fmt.Errorf(
				"creation_token is %s; want %s",
				attr["creation_token"],
				efsRs.Primary.Attributes["creation_token"],
			)
		}

		if attr["id"] != efsRs.Primary.Attributes["id"] {
			return fmt.Errorf(
				"file_system_id is %s; want %s",
				attr["id"],
				efsRs.Primary.Attributes["id"],
			)
		}

		return nil
	}
}

const testAccDataSourceAwsEfsFileSystemConfig = `
resource "aws_efs_file_system" "test" {}

data "aws_efs_file_system" "by_creation_token" {
	creation_token = "${aws_efs_file_system.test.creation_token}"
}

data "aws_efs_file_system" "by_id" {
	file_system_id = "${aws_efs_file_system.test.id}"
}
`
@ -34,6 +34,7 @@ func dataSourceAwsIamAccountAliasRead(d *schema.ResourceData, meta interface{})
 	if err != nil {
 		return err
 	}
 
+	// 'AccountAliases': [] if there is no alias.
 	if resp == nil || len(resp.AccountAliases) == 0 {
 		return fmt.Errorf("no IAM account alias found")
@ -1,43 +0,0 @@
package aws

import (
	"fmt"
	"testing"

	"github.com/hashicorp/terraform/helper/resource"
	"github.com/hashicorp/terraform/terraform"
)

func TestAccAWSIamAccountAlias_basic(t *testing.T) {
	resource.Test(t, resource.TestCase{
		PreCheck:  func() { testAccPreCheck(t) },
		Providers: testAccProviders,
		Steps: []resource.TestStep{
			{
				Config: testAccCheckAwsIamAccountAliasConfig_basic,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckAwsIamAccountAlias("data.aws_iam_account_alias.current"),
				),
			},
		},
	})
}

func testAccCheckAwsIamAccountAlias(n string) resource.TestCheckFunc {
	return func(s *terraform.State) error {
		rs, ok := s.RootModule().Resources[n]
		if !ok {
			return fmt.Errorf("Can't find Account Alias resource: %s", n)
		}

		if rs.Primary.Attributes["account_alias"] == "" {
			return fmt.Errorf("Missing Account Alias")
		}

		return nil
	}
}

const testAccCheckAwsIamAccountAliasConfig_basic = `
data "aws_iam_account_alias" "current" { }
`
@ -0,0 +1,67 @@
package aws

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/iam"
	"github.com/hashicorp/errwrap"
	"github.com/hashicorp/terraform/helper/schema"
)

func dataSourceAwsIAMRole() *schema.Resource {
	return &schema.Resource{
		Read: dataSourceAwsIAMRoleRead,

		Schema: map[string]*schema.Schema{
			"arn": {
				Type:     schema.TypeString,
				Computed: true,
			},
			"assume_role_policy_document": {
				Type:     schema.TypeString,
				Computed: true,
			},
			"path": {
				Type:     schema.TypeString,
				Computed: true,
			},
			"role_id": {
				Type:     schema.TypeString,
				Computed: true,
			},
			"role_name": {
				Type:     schema.TypeString,
				Required: true,
			},
		},
	}
}

func dataSourceAwsIAMRoleRead(d *schema.ResourceData, meta interface{}) error {
	iamconn := meta.(*AWSClient).iamconn

	roleName := d.Get("role_name").(string)

	req := &iam.GetRoleInput{
		RoleName: aws.String(roleName),
	}

	resp, err := iamconn.GetRole(req)
	if err != nil {
		return errwrap.Wrapf("Error getting roles: {{err}}", err)
	}
	if resp == nil {
		return fmt.Errorf("no IAM role found")
	}

	role := resp.Role

	d.SetId(*role.RoleId)
	d.Set("arn", role.Arn)
	d.Set("assume_role_policy_document", role.AssumeRolePolicyDocument)
	d.Set("path", role.Path)
	d.Set("role_id", role.RoleId)

	return nil
}
@ -0,0 +1,59 @@
package aws

import (
	"regexp"
	"testing"

	"github.com/hashicorp/terraform/helper/resource"
)

func TestAccAWSDataSourceIAMRole_basic(t *testing.T) {
	resource.Test(t, resource.TestCase{
		PreCheck:  func() { testAccPreCheck(t) },
		Providers: testAccProviders,
		Steps: []resource.TestStep{
			{
				Config: testAccAwsIAMRoleConfig,
				Check: resource.ComposeTestCheckFunc(
					resource.TestCheckResourceAttrSet("data.aws_iam_role.test", "role_id"),
					resource.TestCheckResourceAttr("data.aws_iam_role.test", "assume_role_policy_document", "%7B%22Version%22%3A%222012-10-17%22%2C%22Statement%22%3A%5B%7B%22Sid%22%3A%22%22%2C%22Effect%22%3A%22Allow%22%2C%22Principal%22%3A%7B%22Service%22%3A%22ec2.amazonaws.com%22%7D%2C%22Action%22%3A%22sts%3AAssumeRole%22%7D%5D%7D"),
					resource.TestCheckResourceAttr("data.aws_iam_role.test", "path", "/testpath/"),
					resource.TestCheckResourceAttr("data.aws_iam_role.test", "role_name", "TestRole"),
					resource.TestMatchResourceAttr("data.aws_iam_role.test", "arn", regexp.MustCompile("^arn:aws:iam::[0-9]{12}:role/testpath/TestRole$")),
				),
			},
		},
	})
}

const testAccAwsIAMRoleConfig = `
provider "aws" {
	region = "us-east-1"
}

resource "aws_iam_role" "test_role" {
	name = "TestRole"

	assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF

	path = "/testpath/"
}

data "aws_iam_role" "test" {
	role_name = "${aws_iam_role.test_role.name}"
}
`
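The long `%7B...%7D` literal in the check above is the assume-role policy as IAM returns it: `GetRole` hands back `AssumeRolePolicyDocument` URL-encoded, which is why the data source exposes and the test compares the encoded form. Decoding it (shown with a shortened document for brevity) recovers the JSON:

```go
package main

import (
	"fmt"
	"net/url"
)

// decodePolicy reverses the URL-encoding IAM applies to policy documents.
func decodePolicy(encoded string) (string, error) {
	return url.QueryUnescape(encoded)
}

func main() {
	// A shortened stand-in for the encoded policy asserted in the test above.
	doc, err := decodePolicy("%7B%22Version%22%3A%222012-10-17%22%7D")
	if err != nil {
		panic(err)
	}
	fmt.Println(doc) // {"Version":"2012-10-17"}
}
```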
@ -0,0 +1,95 @@
package aws

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/kinesis"
	"github.com/hashicorp/terraform/helper/schema"
)

func dataSourceAwsKinesisStream() *schema.Resource {
	return &schema.Resource{
		Read: dataSourceAwsKinesisStreamRead,

		Schema: map[string]*schema.Schema{
			"name": &schema.Schema{
				Type:     schema.TypeString,
				Required: true,
			},

			"arn": &schema.Schema{
				Type:     schema.TypeString,
				Computed: true,
			},

			"creation_timestamp": &schema.Schema{
				Type:     schema.TypeInt,
				Computed: true,
			},

			"status": &schema.Schema{
				Type:     schema.TypeString,
				Computed: true,
			},

			"retention_period": &schema.Schema{
				Type:     schema.TypeInt,
				Computed: true,
			},

			"open_shards": &schema.Schema{
				Type:     schema.TypeSet,
				Computed: true,
				Elem:     &schema.Schema{Type: schema.TypeString},
				Set:      schema.HashString,
			},

			"closed_shards": &schema.Schema{
				Type:     schema.TypeSet,
				Computed: true,
				Elem:     &schema.Schema{Type: schema.TypeString},
				Set:      schema.HashString,
			},

			"shard_level_metrics": &schema.Schema{
				Type:     schema.TypeSet,
				Computed: true,
				Elem:     &schema.Schema{Type: schema.TypeString},
				Set:      schema.HashString,
			},

			"tags": &schema.Schema{
				Type:     schema.TypeMap,
				Computed: true,
			},
		},
	}
}

func dataSourceAwsKinesisStreamRead(d *schema.ResourceData, meta interface{}) error {
	conn := meta.(*AWSClient).kinesisconn
	sn := d.Get("name").(string)

	state, err := readKinesisStreamState(conn, sn)
	if err != nil {
		return err
	}
	d.SetId(state.arn)
	d.Set("arn", state.arn)
	d.Set("name", sn)
	d.Set("open_shards", state.openShards)
	d.Set("closed_shards", state.closedShards)
	d.Set("status", state.status)
	d.Set("creation_timestamp", state.creationTimestamp)
	d.Set("retention_period", state.retentionPeriod)
	d.Set("shard_level_metrics", state.shardLevelMetrics)

	tags, err := conn.ListTagsForStream(&kinesis.ListTagsForStreamInput{
		StreamName: aws.String(sn),
	})
	if err != nil {
		return err
	}
	d.Set("tags", tagsToMapKinesis(tags.Tags))

	return nil
}
@ -0,0 +1,94 @@
package aws

import (
	"fmt"
	"testing"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/kinesis"
	"github.com/hashicorp/terraform/helper/acctest"
	"github.com/hashicorp/terraform/helper/resource"
)

func TestAccAWSKinesisStreamDataSource(t *testing.T) {
	var stream kinesis.StreamDescription

	sn := fmt.Sprintf("terraform-kinesis-test-%d", acctest.RandInt())
	config := fmt.Sprintf(testAccCheckAwsKinesisStreamDataSourceConfig, sn)

	updateShardCount := func() {
		conn := testAccProvider.Meta().(*AWSClient).kinesisconn
		_, err := conn.UpdateShardCount(&kinesis.UpdateShardCountInput{
			ScalingType:      aws.String(kinesis.ScalingTypeUniformScaling),
			StreamName:       aws.String(sn),
			TargetShardCount: aws.Int64(3),
		})
		if err != nil {
			t.Fatalf("Error calling UpdateShardCount: %s", err)
		}
		if err := waitForKinesisToBeActive(conn, sn); err != nil {
			t.Fatal(err)
		}
	}

	resource.Test(t, resource.TestCase{
		PreCheck:     func() { testAccPreCheck(t) },
		Providers:    testAccProviders,
		CheckDestroy: testAccCheckKinesisStreamDestroy,
		Steps: []resource.TestStep{
			{
				Config: config,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckKinesisStreamExists("aws_kinesis_stream.test_stream", &stream),
					resource.TestCheckResourceAttrSet("data.aws_kinesis_stream.test_stream", "arn"),
					resource.TestCheckResourceAttr("data.aws_kinesis_stream.test_stream", "name", sn),
					resource.TestCheckResourceAttr("data.aws_kinesis_stream.test_stream", "status", "ACTIVE"),
					resource.TestCheckResourceAttr("data.aws_kinesis_stream.test_stream", "open_shards.#", "2"),
					resource.TestCheckResourceAttr("data.aws_kinesis_stream.test_stream", "closed_shards.#", "0"),
					resource.TestCheckResourceAttr("data.aws_kinesis_stream.test_stream", "shard_level_metrics.#", "2"),
					resource.TestCheckResourceAttr("data.aws_kinesis_stream.test_stream", "retention_period", "72"),
					resource.TestCheckResourceAttrSet("data.aws_kinesis_stream.test_stream", "creation_timestamp"),
					resource.TestCheckResourceAttr("data.aws_kinesis_stream.test_stream", "tags.Name", "tf-test"),
				),
			},
			{
				Config:    config,
				PreConfig: updateShardCount,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckKinesisStreamExists("aws_kinesis_stream.test_stream", &stream),
					resource.TestCheckResourceAttrSet("data.aws_kinesis_stream.test_stream", "arn"),
					resource.TestCheckResourceAttr("data.aws_kinesis_stream.test_stream", "name", sn),
					resource.TestCheckResourceAttr("data.aws_kinesis_stream.test_stream", "status", "ACTIVE"),
					resource.TestCheckResourceAttr("data.aws_kinesis_stream.test_stream", "open_shards.#", "3"),
					resource.TestCheckResourceAttr("data.aws_kinesis_stream.test_stream", "closed_shards.#", "4"),
					resource.TestCheckResourceAttr("data.aws_kinesis_stream.test_stream", "shard_level_metrics.#", "2"),
					resource.TestCheckResourceAttr("data.aws_kinesis_stream.test_stream", "retention_period", "72"),
					resource.TestCheckResourceAttrSet("data.aws_kinesis_stream.test_stream", "creation_timestamp"),
					resource.TestCheckResourceAttr("data.aws_kinesis_stream.test_stream", "tags.Name", "tf-test"),
				),
			},
		},
	})
}

var testAccCheckAwsKinesisStreamDataSourceConfig = `
resource "aws_kinesis_stream" "test_stream" {
	name             = "%s"
	shard_count      = 2
	retention_period = 72
	tags {
		Name = "tf-test"
	}
	shard_level_metrics = [
		"IncomingBytes",
		"OutgoingBytes"
	]
	lifecycle {
		ignore_changes = ["shard_count"]
	}
}

data "aws_kinesis_stream" "test_stream" {
	name = "${aws_kinesis_stream.test_stream.name}"
}
`
@ -0,0 +1,62 @@
package aws

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/service/kms"
	"github.com/hashicorp/errwrap"
	"github.com/hashicorp/terraform/helper/schema"
)

func dataSourceAwsKmsAlias() *schema.Resource {
	return &schema.Resource{
		Read: dataSourceAwsKmsAliasRead,
		Schema: map[string]*schema.Schema{
			"name": {
				Type:         schema.TypeString,
				Required:     true,
				ValidateFunc: validateAwsKmsName,
			},
			"arn": {
				Type:     schema.TypeString,
				Computed: true,
			},
			"target_key_id": {
				Type:     schema.TypeString,
				Computed: true,
			},
		},
	}
}

func dataSourceAwsKmsAliasRead(d *schema.ResourceData, meta interface{}) error {
	conn := meta.(*AWSClient).kmsconn
	params := &kms.ListAliasesInput{}

	target := d.Get("name")
	var alias *kms.AliasListEntry
	err := conn.ListAliasesPages(params, func(page *kms.ListAliasesOutput, lastPage bool) bool {
		for _, entity := range page.Aliases {
			if *entity.AliasName == target {
				alias = entity
				return false
			}
		}

		return true
	})
	if err != nil {
		return errwrap.Wrapf("Error fetching KMS alias list: {{err}}", err)
	}

	if alias == nil {
		return fmt.Errorf("No alias with name %q found in this region.", target)
	}

	d.SetId(time.Now().UTC().String())
	d.Set("arn", alias.AliasArn)
	d.Set("target_key_id", alias.TargetKeyId)

	return nil
}
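As a usage sketch (not part of the diff itself), the `aws_kms_alias` data source added above would be consumed from configuration roughly as follows; the alias name is a made-up example:

```hcl
# Look up an existing alias by name ("alias/my-app-key" is hypothetical).
data "aws_kms_alias" "example" {
  name = "alias/my-app-key"
}

# "arn" and "target_key_id" are exposed as computed attributes.
output "key_id" {
  value = "${data.aws_kms_alias.example.target_key_id}"
}
```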
@ -0,0 +1,77 @@
package aws

import (
	"fmt"
	"testing"

	"github.com/hashicorp/terraform/helper/acctest"
	"github.com/hashicorp/terraform/helper/resource"
	"github.com/hashicorp/terraform/terraform"
)

func TestAccDataSourceAwsKmsAlias(t *testing.T) {
	rInt := acctest.RandInt()
	resource.Test(t, resource.TestCase{
		PreCheck:  func() { testAccPreCheck(t) },
		Providers: testAccProviders,
		Steps: []resource.TestStep{
			resource.TestStep{
				Config: testAccDataSourceAwsKmsAlias(rInt),
				Check: resource.ComposeTestCheckFunc(
					testAccDataSourceAwsKmsAliasCheck("data.aws_kms_alias.by_name"),
				),
			},
		},
	})
}

func testAccDataSourceAwsKmsAliasCheck(name string) resource.TestCheckFunc {
	return func(s *terraform.State) error {
		rs, ok := s.RootModule().Resources[name]
		if !ok {
			return fmt.Errorf("root module has no resource called %s", name)
		}

		kmsKeyRs, ok := s.RootModule().Resources["aws_kms_alias.single"]
		if !ok {
			return fmt.Errorf("can't find aws_kms_alias.single in state")
		}

		attr := rs.Primary.Attributes

		if attr["arn"] != kmsKeyRs.Primary.Attributes["arn"] {
			return fmt.Errorf(
				"arn is %s; want %s",
				attr["arn"],
				kmsKeyRs.Primary.Attributes["arn"],
			)
		}

		if attr["target_key_id"] != kmsKeyRs.Primary.Attributes["target_key_id"] {
			return fmt.Errorf(
				"target_key_id is %s; want %s",
				attr["target_key_id"],
				kmsKeyRs.Primary.Attributes["target_key_id"],
			)
		}

		return nil
	}
}

func testAccDataSourceAwsKmsAlias(rInt int) string {
	return fmt.Sprintf(`
resource "aws_kms_key" "one" {
	description = "Terraform acc test"
	deletion_window_in_days = 7
}

resource "aws_kms_alias" "single" {
	name = "alias/tf-acc-key-alias-%d"
	target_key_id = "${aws_kms_key.one.key_id}"
}

data "aws_kms_alias" "by_name" {
	name = "${aws_kms_alias.single.name}"
}`, rInt)
}
@ -136,7 +136,7 @@ func dataSourceAwsRoute53ZoneRead(d *schema.ResourceData, meta interface{}) erro

		if matchingTags && matchingVPC {
			if hostedZoneFound != nil {
-				return fmt.Errorf("multplie Route53Zone found please use vpc_id option to filter")
+				return fmt.Errorf("multiple Route53Zone found please use vpc_id option to filter")
			} else {
				hostedZoneFound = hostedZone
			}

@ -72,7 +72,7 @@ func testAccDataSourceAwsRoute53ZoneCheck(rsName, dsName, zName string) resource
func testAccDataSourceAwsRoute53ZoneConfig(rInt int) string {
	return fmt.Sprintf(`
	provider "aws" {
-		region = "us-east-2"
+		region = "us-east-1"
	}

	resource "aws_vpc" "test" {
@ -149,7 +149,7 @@ func TestAccDataSourceAWSS3BucketObject_allParams(t *testing.T) {
					resource.TestCheckNoResourceAttr("data.aws_s3_bucket_object.obj", "body"),
					resource.TestCheckResourceAttr("data.aws_s3_bucket_object.obj", "cache_control", "no-cache"),
					resource.TestCheckResourceAttr("data.aws_s3_bucket_object.obj", "content_disposition", "attachment"),
-					resource.TestCheckResourceAttr("data.aws_s3_bucket_object.obj", "content_encoding", "gzip"),
+					resource.TestCheckResourceAttr("data.aws_s3_bucket_object.obj", "content_encoding", "identity"),
					resource.TestCheckResourceAttr("data.aws_s3_bucket_object.obj", "content_language", "en-GB"),
					// Encryption is off
					resource.TestCheckResourceAttr("data.aws_s3_bucket_object.obj", "server_side_encryption", ""),
@ -284,7 +284,7 @@ CONTENT
	content_type = "application/unknown"
	cache_control = "no-cache"
	content_disposition = "attachment"
-	content_encoding = "gzip"
+	content_encoding = "identity"
	content_language = "en-GB"
	tags {
		Key1 = "Value 1"
@ -14,23 +14,29 @@ func dataSourceAwsSecurityGroup() *schema.Resource {
		Read: dataSourceAwsSecurityGroupRead,

		Schema: map[string]*schema.Schema{
-			"vpc_id": &schema.Schema{
+			"vpc_id": {
				Type:     schema.TypeString,
				Optional: true,
				Computed: true,
			},
-			"name": &schema.Schema{
+			"name": {
				Type:     schema.TypeString,
				Optional: true,
				Computed: true,
			},
			"filter": ec2CustomFiltersSchema(),

-			"id": &schema.Schema{
+			"id": {
				Type:     schema.TypeString,
				Optional: true,
				Computed: true,
			},

+			"arn": {
+				Type:     schema.TypeString,
+				Computed: true,
+			},
+
			"tags": tagsSchemaComputed(),
		},
	}
@ -81,6 +87,8 @@ func dataSourceAwsSecurityGroupRead(d *schema.ResourceData, meta interface{}) er
	d.Set("description", sg.Description)
	d.Set("vpc_id", sg.VpcId)
	d.Set("tags", tagsToMap(sg.Tags))
+	d.Set("arn", fmt.Sprintf("arn:%s:ec2:%s:%s/security-group/%s",
+		meta.(*AWSClient).partition, meta.(*AWSClient).region, *sg.OwnerId, *sg.GroupId))

	return nil
}
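The ARN assembled above follows the layout `arn:<partition>:ec2:<region>:<owner-id>/security-group/<group-id>`. As a standalone sketch of the same formatting (the partition, region, owner, and group values below are made up for illustration):

```go
package main

import "fmt"

// buildSecurityGroupARN mirrors the fmt.Sprintf call used in
// dataSourceAwsSecurityGroupRead to synthesize the security-group ARN.
func buildSecurityGroupARN(partition, region, ownerID, groupID string) string {
	return fmt.Sprintf("arn:%s:ec2:%s:%s/security-group/%s",
		partition, region, ownerID, groupID)
}

func main() {
	// Hypothetical example values.
	fmt.Println(buildSecurityGroupARN("aws", "us-west-2", "123456789012", "sg-abc123"))
}
```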
@ -4,6 +4,8 @@ import (
	"fmt"
	"testing"

+	"strings"
+
	"github.com/hashicorp/terraform/helper/acctest"
	"github.com/hashicorp/terraform/helper/resource"
	"github.com/hashicorp/terraform/terraform"
@ -66,6 +68,10 @@ func testAccDataSourceAwsSecurityGroupCheck(name string) resource.TestCheckFunc
			return fmt.Errorf("bad Name tag %s", attr["tags.Name"])
		}

+		if !strings.Contains(attr["arn"], attr["id"]) {
+			return fmt.Errorf("bad ARN %s", attr["arn"])
+		}
+
		return nil
	}
}
@ -14,19 +14,25 @@ func dataSourceAwsSubnet() *schema.Resource {
		Read: dataSourceAwsSubnetRead,

		Schema: map[string]*schema.Schema{
-			"availability_zone": &schema.Schema{
+			"availability_zone": {
				Type:     schema.TypeString,
				Optional: true,
				Computed: true,
			},

-			"cidr_block": &schema.Schema{
+			"cidr_block": {
				Type:     schema.TypeString,
				Optional: true,
				Computed: true,
			},

-			"default_for_az": &schema.Schema{
+			"ipv6_cidr_block": {
+				Type:     schema.TypeString,
+				Optional: true,
+				Computed: true,
+			},
+
+			"default_for_az": {
				Type:     schema.TypeBool,
				Optional: true,
				Computed: true,

@ -34,13 +40,13 @@ func dataSourceAwsSubnet() *schema.Resource {

			"filter": ec2CustomFiltersSchema(),

-			"id": &schema.Schema{
+			"id": {
				Type:     schema.TypeString,
				Optional: true,
				Computed: true,
			},

-			"state": &schema.Schema{
+			"state": {
				Type:     schema.TypeString,
				Optional: true,
				Computed: true,

@ -48,11 +54,26 @@ func dataSourceAwsSubnet() *schema.Resource {

			"tags": tagsSchemaComputed(),

-			"vpc_id": &schema.Schema{
+			"vpc_id": {
				Type:     schema.TypeString,
				Optional: true,
				Computed: true,
			},

+			"assign_ipv6_address_on_creation": {
+				Type:     schema.TypeBool,
+				Computed: true,
+			},
+
+			"map_public_ip_on_launch": {
+				Type:     schema.TypeBool,
+				Computed: true,
+			},
+
+			"ipv6_cidr_block_association_id": {
+				Type:     schema.TypeString,
+				Computed: true,
+			},
		},
	}
}
@ -76,15 +97,22 @@ func dataSourceAwsSubnetRead(d *schema.ResourceData, meta interface{}) error {
		defaultForAzStr = "true"
	}

-	req.Filters = buildEC2AttributeFilterList(
-		map[string]string{
-			"availabilityZone": d.Get("availability_zone").(string),
-			"cidrBlock":        d.Get("cidr_block").(string),
-			"defaultForAz":     defaultForAzStr,
-			"state":            d.Get("state").(string),
-			"vpc-id":           d.Get("vpc_id").(string),
-		},
-	)
+	filters := map[string]string{
+		"availabilityZone": d.Get("availability_zone").(string),
+		"defaultForAz":     defaultForAzStr,
+		"state":            d.Get("state").(string),
+		"vpc-id":           d.Get("vpc_id").(string),
+	}
+
+	if v, ok := d.GetOk("cidr_block"); ok {
+		filters["cidrBlock"] = v.(string)
+	}
+
+	if v, ok := d.GetOk("ipv6_cidr_block"); ok {
+		filters["ipv6-cidr-block-association.ipv6-cidr-block"] = v.(string)
+	}
+
+	req.Filters = buildEC2AttributeFilterList(filters)
	req.Filters = append(req.Filters, buildEC2TagFilterList(
		tagsFromMap(d.Get("tags").(map[string]interface{})),
	)...)
@ -118,6 +146,15 @@ func dataSourceAwsSubnetRead(d *schema.ResourceData, meta interface{}) error {
	d.Set("default_for_az", subnet.DefaultForAz)
	d.Set("state", subnet.State)
	d.Set("tags", tagsToMap(subnet.Tags))
+	d.Set("assign_ipv6_address_on_creation", subnet.AssignIpv6AddressOnCreation)
+	d.Set("map_public_ip_on_launch", subnet.MapPublicIpOnLaunch)
+
+	for _, a := range subnet.Ipv6CidrBlockAssociationSet {
+		if *a.Ipv6CidrBlockState.State == "associated" { // we can only ever have 1 IPv6 block associated at once
+			d.Set("ipv6_cidr_block_association_id", a.AssociationId)
+			d.Set("ipv6_cidr_block", a.Ipv6CidrBlock)
+		}
+	}

	return nil
}
@ -0,0 +1,68 @@
package aws

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/hashicorp/terraform/helper/schema"
)

func dataSourceAwsSubnetIDs() *schema.Resource {
	return &schema.Resource{
		Read: dataSourceAwsSubnetIDsRead,
		Schema: map[string]*schema.Schema{

			"tags": tagsSchemaComputed(),

			"vpc_id": &schema.Schema{
				Type:     schema.TypeString,
				Required: true,
			},

			"ids": &schema.Schema{
				Type:     schema.TypeSet,
				Computed: true,
				Elem:     &schema.Schema{Type: schema.TypeString},
				Set:      schema.HashString,
			},
		},
	}
}

func dataSourceAwsSubnetIDsRead(d *schema.ResourceData, meta interface{}) error {
	conn := meta.(*AWSClient).ec2conn

	req := &ec2.DescribeSubnetsInput{}

	req.Filters = buildEC2AttributeFilterList(
		map[string]string{
			"vpc-id": d.Get("vpc_id").(string),
		},
	)

	req.Filters = append(req.Filters, buildEC2TagFilterList(
		tagsFromMap(d.Get("tags").(map[string]interface{})),
	)...)

	log.Printf("[DEBUG] DescribeSubnets %s\n", req)
	resp, err := conn.DescribeSubnets(req)
	if err != nil {
		return err
	}

	if resp == nil || len(resp.Subnets) == 0 {
		return fmt.Errorf("no matching subnet found for vpc with id %s", d.Get("vpc_id").(string))
	}

	subnets := make([]string, 0)

	for _, subnet := range resp.Subnets {
		subnets = append(subnets, *subnet.SubnetId)
	}

	d.SetId(d.Get("vpc_id").(string))
	d.Set("ids", subnets)

	return nil
}
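For context, the `aws_subnet_ids` data source added above would be consumed from configuration along these lines (the VPC resource name here is illustrative, not taken from the diff):

```hcl
# List all subnet IDs in a VPC, optionally narrowed by tags;
# "aws_vpc.main" is a hypothetical resource defined elsewhere.
data "aws_subnet_ids" "private" {
  vpc_id = "${aws_vpc.main.id}"

  tags {
    Tier = "Private"
  }
}

# "ids" is the computed set of matching subnet IDs.
output "private_subnet_ids" {
  value = "${data.aws_subnet_ids.private.ids}"
}
```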
@ -0,0 +1,132 @@
package aws

import (
	"fmt"
	"testing"

	"github.com/hashicorp/terraform/helper/acctest"
	"github.com/hashicorp/terraform/helper/resource"
)

func TestAccDataSourceAwsSubnetIDs(t *testing.T) {
	rInt := acctest.RandIntRange(0, 256)
	resource.Test(t, resource.TestCase{
		PreCheck:     func() { testAccPreCheck(t) },
		Providers:    testAccProviders,
		CheckDestroy: testAccCheckVpcDestroy,
		Steps: []resource.TestStep{
			{
				Config: testAccDataSourceAwsSubnetIDsConfig(rInt),
			},
			{
				Config: testAccDataSourceAwsSubnetIDsConfigWithDataSource(rInt),
				Check: resource.ComposeTestCheckFunc(
					resource.TestCheckResourceAttr("data.aws_subnet_ids.selected", "ids.#", "3"),
					resource.TestCheckResourceAttr("data.aws_subnet_ids.private", "ids.#", "2"),
				),
			},
		},
	})
}

func testAccDataSourceAwsSubnetIDsConfigWithDataSource(rInt int) string {
	return fmt.Sprintf(`
	resource "aws_vpc" "test" {
		cidr_block = "172.%d.0.0/16"

		tags {
			Name = "terraform-testacc-subnet-ids-data-source"
		}
	}

	resource "aws_subnet" "test_public_a" {
		vpc_id = "${aws_vpc.test.id}"
		cidr_block = "172.%d.123.0/24"
		availability_zone = "us-west-2a"

		tags {
			Name = "terraform-testacc-subnet-ids-data-source-public-a"
			Tier = "Public"
		}
	}

	resource "aws_subnet" "test_private_a" {
		vpc_id = "${aws_vpc.test.id}"
		cidr_block = "172.%d.125.0/24"
		availability_zone = "us-west-2a"

		tags {
			Name = "terraform-testacc-subnet-ids-data-source-private-a"
			Tier = "Private"
		}
	}

	resource "aws_subnet" "test_private_b" {
		vpc_id = "${aws_vpc.test.id}"
		cidr_block = "172.%d.126.0/24"
		availability_zone = "us-west-2b"

		tags {
			Name = "terraform-testacc-subnet-ids-data-source-private-b"
			Tier = "Private"
		}
	}

	data "aws_subnet_ids" "selected" {
		vpc_id = "${aws_vpc.test.id}"
	}

	data "aws_subnet_ids" "private" {
		vpc_id = "${aws_vpc.test.id}"
		tags {
			Tier = "Private"
		}
	}
	`, rInt, rInt, rInt, rInt)
}

func testAccDataSourceAwsSubnetIDsConfig(rInt int) string {
	return fmt.Sprintf(`
	resource "aws_vpc" "test" {
		cidr_block = "172.%d.0.0/16"

		tags {
			Name = "terraform-testacc-subnet-ids-data-source"
		}
	}

	resource "aws_subnet" "test_public_a" {
		vpc_id = "${aws_vpc.test.id}"
		cidr_block = "172.%d.123.0/24"
		availability_zone = "us-west-2a"

		tags {
			Name = "terraform-testacc-subnet-ids-data-source-public-a"
			Tier = "Public"
		}
	}

	resource "aws_subnet" "test_private_a" {
		vpc_id = "${aws_vpc.test.id}"
		cidr_block = "172.%d.125.0/24"
		availability_zone = "us-west-2a"

		tags {
			Name = "terraform-testacc-subnet-ids-data-source-private-a"
			Tier = "Private"
		}
	}

	resource "aws_subnet" "test_private_b" {
		vpc_id = "${aws_vpc.test.id}"
		cidr_block = "172.%d.126.0/24"
		availability_zone = "us-west-2b"

		tags {
			Name = "terraform-testacc-subnet-ids-data-source-private-b"
			Tier = "Private"
		}
	}
	`, rInt, rInt, rInt, rInt)
}
@ -4,30 +4,76 @@ import (
	"fmt"
	"testing"

	"github.com/hashicorp/terraform/helper/acctest"
	"github.com/hashicorp/terraform/helper/resource"
	"github.com/hashicorp/terraform/terraform"
)

func TestAccDataSourceAwsSubnet(t *testing.T) {
+	rInt := acctest.RandIntRange(0, 256)
+
	resource.Test(t, resource.TestCase{
-		PreCheck:  func() { testAccPreCheck(t) },
-		Providers: testAccProviders,
+		PreCheck:     func() { testAccPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testAccCheckVpcDestroy,
		Steps: []resource.TestStep{
-			resource.TestStep{
-				Config: testAccDataSourceAwsSubnetConfig,
+			{
+				Config: testAccDataSourceAwsSubnetConfig(rInt),
				Check: resource.ComposeTestCheckFunc(
-					testAccDataSourceAwsSubnetCheck("data.aws_subnet.by_id"),
-					testAccDataSourceAwsSubnetCheck("data.aws_subnet.by_cidr"),
-					testAccDataSourceAwsSubnetCheck("data.aws_subnet.by_tag"),
-					testAccDataSourceAwsSubnetCheck("data.aws_subnet.by_vpc"),
-					testAccDataSourceAwsSubnetCheck("data.aws_subnet.by_filter"),
+					testAccDataSourceAwsSubnetCheck("data.aws_subnet.by_id", rInt),
+					testAccDataSourceAwsSubnetCheck("data.aws_subnet.by_cidr", rInt),
+					testAccDataSourceAwsSubnetCheck("data.aws_subnet.by_tag", rInt),
+					testAccDataSourceAwsSubnetCheck("data.aws_subnet.by_vpc", rInt),
+					testAccDataSourceAwsSubnetCheck("data.aws_subnet.by_filter", rInt),
				),
			},
		},
	})
}

-func testAccDataSourceAwsSubnetCheck(name string) resource.TestCheckFunc {
+func TestAccDataSourceAwsSubnetIpv6ByIpv6Filter(t *testing.T) {
+	rInt := acctest.RandIntRange(0, 256)
+	resource.Test(t, resource.TestCase{
+		PreCheck:  func() { testAccPreCheck(t) },
+		Providers: testAccProviders,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccDataSourceAwsSubnetConfigIpv6(rInt),
+			},
+			{
+				Config: testAccDataSourceAwsSubnetConfigIpv6WithDataSourceFilter(rInt),
+				Check: resource.ComposeAggregateTestCheckFunc(
+					resource.TestCheckResourceAttrSet(
+						"data.aws_subnet.by_ipv6_cidr", "ipv6_cidr_block_association_id"),
+					resource.TestCheckResourceAttrSet(
+						"data.aws_subnet.by_ipv6_cidr", "ipv6_cidr_block"),
+				),
+			},
+		},
+	})
+}
+
+func TestAccDataSourceAwsSubnetIpv6ByIpv6CidrBlock(t *testing.T) {
+	rInt := acctest.RandIntRange(0, 256)
+	resource.Test(t, resource.TestCase{
+		PreCheck:  func() { testAccPreCheck(t) },
+		Providers: testAccProviders,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccDataSourceAwsSubnetConfigIpv6(rInt),
+			},
+			{
+				Config: testAccDataSourceAwsSubnetConfigIpv6WithDataSourceIpv6CidrBlock(rInt),
+				Check: resource.ComposeAggregateTestCheckFunc(
+					resource.TestCheckResourceAttrSet(
+						"data.aws_subnet.by_ipv6_cidr", "ipv6_cidr_block_association_id"),
+				),
+			},
+		},
+	})
+}
+
+func testAccDataSourceAwsSubnetCheck(name string, rInt int) resource.TestCheckFunc {
	return func(s *terraform.State) error {
		rs, ok := s.RootModule().Resources[name]
		if !ok {
@ -61,13 +107,13 @@ func testAccDataSourceAwsSubnetCheck(name string) resource.TestCheckFunc {
			)
		}

-		if attr["cidr_block"] != "172.16.123.0/24" {
+		if attr["cidr_block"] != fmt.Sprintf("172.%d.123.0/24", rInt) {
			return fmt.Errorf("bad cidr_block %s", attr["cidr_block"])
		}
		if attr["availability_zone"] != "us-west-2a" {
			return fmt.Errorf("bad availability_zone %s", attr["availability_zone"])
		}
-		if attr["tags.Name"] != "terraform-testacc-subnet-data-source" {
+		if attr["tags.Name"] != fmt.Sprintf("terraform-testacc-subnet-data-source-%d", rInt) {
			return fmt.Errorf("bad Name tag %s", attr["tags.Name"])
		}

@ -75,51 +121,137 @@ func testAccDataSourceAwsSubnetCheck(name string) resource.TestCheckFunc {
	}
}

-const testAccDataSourceAwsSubnetConfig = `
-provider "aws" {
-	region = "us-west-2"
+func testAccDataSourceAwsSubnetConfig(rInt int) string {
+	return fmt.Sprintf(`
+	provider "aws" {
+		region = "us-west-2"
+	}
+
+	resource "aws_vpc" "test" {
+		cidr_block = "172.%d.0.0/16"
+
+		tags {
+			Name = "terraform-testacc-subnet-data-source"
+		}
+	}
+
+	resource "aws_subnet" "test" {
+		vpc_id = "${aws_vpc.test.id}"
+		cidr_block = "172.%d.123.0/24"
+		availability_zone = "us-west-2a"
+
+		tags {
+			Name = "terraform-testacc-subnet-data-source-%d"
+		}
+	}
+
+
+	data "aws_subnet" "by_id" {
+		id = "${aws_subnet.test.id}"
+	}
+
+	data "aws_subnet" "by_cidr" {
+		cidr_block = "${aws_subnet.test.cidr_block}"
+	}
+
+	data "aws_subnet" "by_tag" {
+		tags {
+			Name = "${aws_subnet.test.tags["Name"]}"
+		}
+	}
+
+	data "aws_subnet" "by_vpc" {
+		vpc_id = "${aws_subnet.test.vpc_id}"
+	}
+
+	data "aws_subnet" "by_filter" {
+		filter {
+			name = "vpc-id"
+			values = ["${aws_subnet.test.vpc_id}"]
+		}
+	}
+	`, rInt, rInt, rInt)
+}
+
+func testAccDataSourceAwsSubnetConfigIpv6(rInt int) string {
+	return fmt.Sprintf(`
resource "aws_vpc" "test" {
-	cidr_block = "172.16.0.0/16"
+	cidr_block = "172.%d.0.0/16"
+	assign_generated_ipv6_cidr_block = true

	tags {
-		Name = "terraform-testacc-subnet-data-source"
+		Name = "terraform-testacc-subnet-data-source-ipv6"
	}
}

resource "aws_subnet" "test" {
	vpc_id = "${aws_vpc.test.id}"
-	cidr_block = "172.16.123.0/24"
+	cidr_block = "172.%d.123.0/24"
	availability_zone = "us-west-2a"
+	ipv6_cidr_block = "${cidrsubnet(aws_vpc.test.ipv6_cidr_block, 8, 1)}"

	tags {
-		Name = "terraform-testacc-subnet-data-source"
+		Name = "terraform-testacc-subnet-data-sourceipv6-%d"
	}
}
+`, rInt, rInt, rInt)
+}
+
+func testAccDataSourceAwsSubnetConfigIpv6WithDataSourceFilter(rInt int) string {
+	return fmt.Sprintf(`
+resource "aws_vpc" "test" {
+	cidr_block = "172.%d.0.0/16"
+	assign_generated_ipv6_cidr_block = true
+
+	tags {
+		Name = "terraform-testacc-subnet-data-source-ipv6"
+	}
+}

-data "aws_subnet" "by_id" {
-	id = "${aws_subnet.test.id}"
-}
+resource "aws_subnet" "test" {
+	vpc_id = "${aws_vpc.test.id}"
+	cidr_block = "172.%d.123.0/24"
+	availability_zone = "us-west-2a"
+	ipv6_cidr_block = "${cidrsubnet(aws_vpc.test.ipv6_cidr_block, 8, 1)}"

-data "aws_subnet" "by_cidr" {
-	cidr_block = "${aws_subnet.test.cidr_block}"
-}
-
-data "aws_subnet" "by_tag" {
	tags {
-		Name = "${aws_subnet.test.tags["Name"]}"
+		Name = "terraform-testacc-subnet-data-sourceipv6-%d"
	}
}

-data "aws_subnet" "by_vpc" {
-	vpc_id = "${aws_subnet.test.vpc_id}"
-}
-
-data "aws_subnet" "by_filter" {
+data "aws_subnet" "by_ipv6_cidr" {
	filter {
-		name = "vpc-id"
-		values = ["${aws_subnet.test.vpc_id}"]
+		name = "ipv6-cidr-block-association.ipv6-cidr-block"
+		values = ["${aws_subnet.test.ipv6_cidr_block}"]
	}
}
-`
+`, rInt, rInt, rInt)
+}
+
+func testAccDataSourceAwsSubnetConfigIpv6WithDataSourceIpv6CidrBlock(rInt int) string {
+	return fmt.Sprintf(`
+resource "aws_vpc" "test" {
+	cidr_block = "172.%d.0.0/16"
+	assign_generated_ipv6_cidr_block = true
+
+	tags {
+		Name = "terraform-testacc-subnet-data-source-ipv6"
+	}
+}
+
+resource "aws_subnet" "test" {
+	vpc_id = "${aws_vpc.test.id}"
+	cidr_block = "172.%d.123.0/24"
+	availability_zone = "us-west-2a"
+	ipv6_cidr_block = "${cidrsubnet(aws_vpc.test.ipv6_cidr_block, 8, 1)}"
+
+	tags {
+		Name = "terraform-testacc-subnet-data-sourceipv6-%d"
+	}
+}
+
+data "aws_subnet" "by_ipv6_cidr" {
+	ipv6_cidr_block = "${aws_subnet.test.ipv6_cidr_block}"
+}
+`, rInt, rInt, rInt)
+}