Merge branch 'master' into cloud_router

Roberto Jung Drebes 2017-05-05 11:43:41 +02:00 committed by GitHub
commit 50f8b9407e
83 changed files with 2911 additions and 171 deletions

View File

@@ -2,7 +2,7 @@ dist: trusty
sudo: false
language: go
go:
-  - 1.8.x
+  - 1.8
# add TF_CONSUL_TEST=1 to run consul tests
# they were causing timouts in travis
@@ -25,7 +25,7 @@ install:
  - bash scripts/gogetcookie.sh
  - go get github.com/kardianos/govendor
script:
-  - make vendor-status test vet
+  - make vet vendor-status test
  - GOOS=windows go build
branches:
  only:

View File

@@ -9,6 +9,9 @@ FEATURES:
* **New Provider:** `gitlab` [GH-13898]
* **New Resource:** `aws_emr_security_configuration` [GH-14080]
* **New Resource:** `aws_ssm_maintenance_window` [GH-14087]
* **New Resource:** `aws_ssm_maintenance_window_target` [GH-14087]
* **New Resource:** `aws_ssm_maintenance_window_task` [GH-14087]
* **New Resource:** `azurerm_sql_elasticpool` [GH-14099]
* **New Resource:** `google_compute_backend_bucket` [GH-14015]
* **New Resource:** `google_compute_snapshot` [GH-12482]
@@ -21,16 +24,21 @@ FEATURES:
IMPROVEMENTS:
* core: `sha512` and `base64sha512` interpolation functions, similar to their `sha256` equivalents. [GH-14100]
* core: It's now possible to use the index operator `[ ]` to select a known value out of a partially-known list, such as using "splat syntax" and increasing the `count`. [GH-14135]
* provider/aws: Add support for CustomOrigin timeouts to aws_cloudfront_distribution [GH-13367]
* provider/aws: Add support for IAMDatabaseAuthenticationEnabled [GH-14092]
* provider/aws: aws_dynamodb_table Add support for TimeToLive [GH-14104]
* provider/aws: Add `security_configuration` support to `aws_emr_cluster` [GH-14133]
* provider/aws: Add support for the tenancy placement option in `aws_spot_fleet_request` [GH-14163]
* provider/aws: aws_db_option_group normalizes name to lowercase [GH-14192]
* provider/aws: Add support description to aws_iam_role [GH-14208]
* provider/aws: Add support for SSM Documents to aws_cloudwatch_event_target [GH-14067]
* provider/azurerm: `azurerm_template_deployment` now supports String/Int/Boolean outputs [GH-13670]
* provider/azurerm: Expose the Private IP Address for a Load Balancer, if available [GH-13965]
* provider/dnsimple: Add support for import for dnsimple_records [GH-9130]
* provider/google: Add support for networkIP in compute instance templates [GH-13515]
* provider/google: google_dns_managed_zone is now importable [GH-13824]
* provider/google: Add support for `compute_route` [GH-14065]
* provider/nomad: Add TLS options [GH-13956]
* provider/triton: Add support for reading provider configuration from `TRITON_*` environment variables in addition to `SDC_*`[GH-14000]
* provider/triton: Add `cloud_config` argument to `triton_machine` resources for Linux containers [GH-12840]
@@ -39,6 +47,7 @@ IMPROVEMENTS:
BUG FIXES:
* core: `module` blocks without names are now caught in validation, along with various other block types [GH-14162]
* core: no longer will errors and normal log output get garbled together on Windows [GH-14194]
* provider/aws: Update aws_ebs_volume when attached [GH-14005]
* provider/aws: Set aws_instance volume_tags to be Computed [GH-14007]
* provider/aws: Fix issue getting partition for federated users [GH-13992]
@@ -46,11 +55,14 @@ BUG FIXES:
* provider/aws: Exclude aws_instance volume tagging for China and Gov Clouds [GH-14055]
* provider/aws: Fix source_dest_check with network_interface [GH-14079]
* provider/aws: Fixes the bug where SNS delivery policy get always recreated [GH-14064]
* provider/aws: Prevent Crash when importing aws_route53_record [GH-14218]
* provider/digitalocean: Prevent diffs when using IDs of images instead of slugs [GH-13879]
* provider/fastly: Changes setting conditionals to optional [GH-14103]
* provider/google: Ignore certain project services that can't be enabled directly via the api [GH-13730]
* provider/google: Ability to add more than 25 project services [GH-13758]
* provider/google: Fix compute instance panic with bad disk config [GH-14169]
* provider/google: Handle `google_storage_bucket_object` not being found [GH-14203]
* provider/google: Handle `google_compute_instance_group_manager` not being found [GH-14190]
* providers/heroku: Configure buildpacks correctly for both Org Apps and non-org Apps [GH-13990]
* provider/postgres grant role when creating database [GH-11452]
* provisioner/remote-exec: Fix panic from remote_exec provisioner [GH-14134]

View File

@@ -422,6 +422,9 @@ func Provider() terraform.ResourceProvider {
			"aws_ssm_activation":                resourceAwsSsmActivation(),
			"aws_ssm_association":               resourceAwsSsmAssociation(),
			"aws_ssm_document":                  resourceAwsSsmDocument(),
			"aws_ssm_maintenance_window":        resourceAwsSsmMaintenanceWindow(),
			"aws_ssm_maintenance_window_target": resourceAwsSsmMaintenanceWindowTarget(),
			"aws_ssm_maintenance_window_task":   resourceAwsSsmMaintenanceWindowTask(),
			"aws_spot_datafeed_subscription":    resourceAwsSpotDataFeedSubscription(),
			"aws_spot_instance_request":         resourceAwsSpotInstanceRequest(),
			"aws_spot_fleet_request":            resourceAwsSpotFleetRequest(),

View File

@@ -11,6 +11,7 @@ import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	events "github.com/aws/aws-sdk-go/service/cloudwatchevents"
	"github.com/hashicorp/terraform/helper/validation"
)

func resourceAwsCloudWatchEventTarget() *schema.Resource {
@@ -21,14 +22,14 @@ func resourceAwsCloudWatchEventTarget() *schema.Resource {
		Delete: resourceAwsCloudWatchEventTargetDelete,

		Schema: map[string]*schema.Schema{
-			"rule": &schema.Schema{
+			"rule": {
				Type:         schema.TypeString,
				Required:     true,
				ForceNew:     true,
				ValidateFunc: validateCloudWatchEventRuleName,
			},

-			"target_id": &schema.Schema{
+			"target_id": {
				Type:     schema.TypeString,
				Optional: true,
				Computed: true,
@@ -36,12 +37,12 @@ func resourceAwsCloudWatchEventTarget() *schema.Resource {
				ValidateFunc: validateCloudWatchEventTargetId,
			},

-			"arn": &schema.Schema{
+			"arn": {
				Type:     schema.TypeString,
				Required: true,
			},

-			"input": &schema.Schema{
+			"input": {
				Type:     schema.TypeString,
				Optional: true,
				ConflictsWith: []string{"input_path"},
@@ -49,11 +50,36 @@ func resourceAwsCloudWatchEventTarget() *schema.Resource {
				// but for built-in targets input may not be JSON
			},

-			"input_path": &schema.Schema{
+			"input_path": {
				Type:          schema.TypeString,
				Optional:      true,
				ConflictsWith: []string{"input"},
			},
			"role_arn": {
				Type:     schema.TypeString,
				Optional: true,
			},
			"run_command_targets": {
				Type:     schema.TypeList,
				Optional: true,
				MaxItems: 5,
				Elem: &schema.Resource{
					Schema: map[string]*schema.Schema{
						"key": {
							Type:         schema.TypeString,
							Required:     true,
							ValidateFunc: validation.StringLenBetween(1, 128),
						},
						"values": {
							Type:     schema.TypeList,
							Required: true,
							Elem:     &schema.Schema{Type: schema.TypeString},
						},
					},
				},
			},
		},
	}
}
@@ -72,6 +98,7 @@ func resourceAwsCloudWatchEventTargetCreate(d *schema.ResourceData, meta interfa
	}

	input := buildPutTargetInputStruct(d)
	log.Printf("[DEBUG] Creating CloudWatch Event Target: %s", input)

	out, err := conn.PutTargets(input)
	if err != nil {
@@ -128,6 +155,13 @@ func resourceAwsCloudWatchEventTargetRead(d *schema.ResourceData, meta interface
	d.Set("target_id", t.Id)
	d.Set("input", t.Input)
	d.Set("input_path", t.InputPath)
	d.Set("role_arn", t.RoleArn)
	if t.RunCommandParameters != nil {
		if err := d.Set("run_command_targets", flattenAwsCloudWatchEventTargetRunParameters(t.RunCommandParameters)); err != nil {
			return fmt.Errorf("[DEBUG] Error setting run_command_targets error: %#v", err)
		}
	}

	return nil
}
@@ -162,6 +196,7 @@ func resourceAwsCloudWatchEventTargetUpdate(d *schema.ResourceData, meta interfa
	conn := meta.(*AWSClient).cloudwatcheventsconn

	input := buildPutTargetInputStruct(d)
	log.Printf("[DEBUG] Updating CloudWatch Event Target: %s", input)

	_, err := conn.PutTargets(input)
	if err != nil {
@@ -203,6 +238,14 @@ func buildPutTargetInputStruct(d *schema.ResourceData) *events.PutTargetsInput {
		e.InputPath = aws.String(v.(string))
	}

	if v, ok := d.GetOk("role_arn"); ok {
		e.RoleArn = aws.String(v.(string))
	}

	if v, ok := d.GetOk("run_command_targets"); ok {
		e.RunCommandParameters = expandAwsCloudWatchEventTargetRunParameters(v.([]interface{}))
	}

	input := events.PutTargetsInput{
		Rule:    aws.String(d.Get("rule").(string)),
		Targets: []*events.Target{e},
@@ -210,3 +253,39 @@ func buildPutTargetInputStruct(d *schema.ResourceData) *events.PutTargetsInput {

	return &input
}
func expandAwsCloudWatchEventTargetRunParameters(config []interface{}) *events.RunCommandParameters {
commands := make([]*events.RunCommandTarget, 0)
for _, c := range config {
param := c.(map[string]interface{})
command := &events.RunCommandTarget{
Key: aws.String(param["key"].(string)),
Values: expandStringList(param["values"].([]interface{})),
}
commands = append(commands, command)
}
command := &events.RunCommandParameters{
RunCommandTargets: commands,
}
return command
}
func flattenAwsCloudWatchEventTargetRunParameters(runCommand *events.RunCommandParameters) []map[string]interface{} {
result := make([]map[string]interface{}, 0)
for _, x := range runCommand.RunCommandTargets {
config := make(map[string]interface{})
config["key"] = *x.Key
config["values"] = flattenStringList(x.Values)
result = append(result, config)
}
return result
}
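The pair of helpers above is the whole mapping between the `run_command_targets` blocks in configuration and the SDK's `RunCommandParameters`. The following standalone sketch is not part of the commit; `expandStringList` is a provider-internal helper, so the string conversion is inlined, and the config literal mimics what helper/schema would hand over:

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	events "github.com/aws/aws-sdk-go/service/cloudwatchevents"
)

func main() {
	// One run_command_targets block: a map with a "key" string
	// and a "values" list, as decoded from the schema above.
	config := []interface{}{
		map[string]interface{}{
			"key":    "tag:Name",
			"values": []interface{}{"acceptance_test"},
		},
	}

	params := &events.RunCommandParameters{}
	for _, c := range config {
		m := c.(map[string]interface{})
		values := make([]*string, 0)
		for _, v := range m["values"].([]interface{}) {
			values = append(values, aws.String(v.(string)))
		}
		params.RunCommandTargets = append(params.RunCommandTargets, &events.RunCommandTarget{
			Key:    aws.String(m["key"].(string)),
			Values: values,
		})
	}

	// Prints a RunCommandParameters value with one target:
	// Key "tag:Name", Values ["acceptance_test"].
	fmt.Println(params)
}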

View File

@@ -18,7 +18,7 @@ func TestAccAWSCloudWatchEventTarget_basic(t *testing.T) {
		Providers:    testAccProviders,
		CheckDestroy: testAccCheckAWSCloudWatchEventTargetDestroy,
		Steps: []resource.TestStep{
-			resource.TestStep{
+			{
				Config: testAccAWSCloudWatchEventTargetConfig,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckCloudWatchEventTargetExists("aws_cloudwatch_event_target.moobar", &target),
@@ -28,7 +28,7 @@ func TestAccAWSCloudWatchEventTarget_basic(t *testing.T) {
						regexp.MustCompile(":tf-acc-moon$")),
				),
			},
-			resource.TestStep{
+			{
				Config: testAccAWSCloudWatchEventTargetConfigModified,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckCloudWatchEventTargetExists("aws_cloudwatch_event_target.moobar", &target),
@@ -50,7 +50,7 @@ func TestAccAWSCloudWatchEventTarget_missingTargetId(t *testing.T) {
		Providers:    testAccProviders,
		CheckDestroy: testAccCheckAWSCloudWatchEventTargetDestroy,
		Steps: []resource.TestStep{
-			resource.TestStep{
+			{
				Config: testAccAWSCloudWatchEventTargetConfigMissingTargetId,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckCloudWatchEventTargetExists("aws_cloudwatch_event_target.moobar", &target),
@@ -71,7 +71,7 @@ func TestAccAWSCloudWatchEventTarget_full(t *testing.T) {
		Providers:    testAccProviders,
		CheckDestroy: testAccCheckAWSCloudWatchEventTargetDestroy,
		Steps: []resource.TestStep{
-			resource.TestStep{
+			{
				Config: testAccAWSCloudWatchEventTargetConfig_full,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckCloudWatchEventTargetExists("aws_cloudwatch_event_target.foobar", &target),
@@ -87,6 +87,24 @@ func TestAccAWSCloudWatchEventTarget_full(t *testing.T) {
	})
}

func TestAccAWSCloudWatchEventTarget_ssmDocument(t *testing.T) {
	var target events.Target

	resource.Test(t, resource.TestCase{
		PreCheck:     func() { testAccPreCheck(t) },
		Providers:    testAccProviders,
		CheckDestroy: testAccCheckAWSCloudWatchEventTargetDestroy,
		Steps: []resource.TestStep{
			{
				Config: testAccAWSCloudWatchEventTargetConfigSsmDocument,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckCloudWatchEventTargetExists("aws_cloudwatch_event_target.test", &target),
				),
			},
		},
	})
}

func testAccCheckCloudWatchEventTargetExists(n string, rule *events.Target) resource.TestCheckFunc {
	return func(s *terraform.State) error {
		rs, ok := s.RootModule().Resources[n]
@@ -126,17 +144,6 @@ func testAccCheckAWSCloudWatchEventTargetDestroy(s *terraform.State) error {
	return nil
}

-func testAccCheckTargetIdExists(targetId string) resource.TestCheckFunc {
-	return func(s *terraform.State) error {
-		_, ok := s.RootModule().Resources[targetId]
-		if !ok {
-			return fmt.Errorf("Not found: %s", targetId)
-		}
-		return nil
-	}
-}

var testAccAWSCloudWatchEventTargetConfig = `
resource "aws_cloudwatch_event_rule" "foo" {
	name = "tf-acc-cw-event-rule-basic"
@@ -249,3 +256,95 @@ resource "aws_kinesis_stream" "test_stream" {
  shard_count = 1
}
`
var testAccAWSCloudWatchEventTargetConfigSsmDocument = `
resource "aws_ssm_document" "foo" {
name = "test_document-100"
document_type = "Command"
content = <<DOC
{
"schemaVersion": "1.2",
"description": "Check ip configuration of a Linux instance.",
"parameters": {
},
"runtimeConfig": {
"aws:runShellScript": {
"properties": [
{
"id": "0.aws:runShellScript",
"runCommand": ["ifconfig"]
}
]
}
}
}
DOC
}
resource "aws_cloudwatch_event_rule" "console" {
name = "another_test"
description = "another_test"
event_pattern = <<PATTERN
{
"source": [
"aws.autoscaling"
]
}
PATTERN
}
resource "aws_cloudwatch_event_target" "test" {
arn = "${aws_ssm_document.foo.arn}"
rule = "${aws_cloudwatch_event_rule.console.id}"
role_arn = "${aws_iam_role.test_role.arn}"
run_command_targets {
key = "tag:Name"
values = ["acceptance_test"]
}
}
resource "aws_iam_role" "test_role" {
name = "test_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "events.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_role_policy" "test_policy" {
name = "test_policy"
role = "${aws_iam_role.test_role.id}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "ssm:*",
"Effect": "Allow",
"Resource": [
"*"
]
}
]
}
EOF
}
`

View File

@@ -110,6 +110,7 @@ func resourceAwsConfigConfigRule() *schema.Resource {
						"event_source": {
							Type:     schema.TypeString,
							Optional: true,
							Default:  "aws.config",
						},
						"maximum_execution_frequency": {
							Type: schema.TypeString,

View File

@@ -4,6 +4,7 @@ import (
	"bytes"
	"fmt"
	"log"
	"strings"
	"time"

	"github.com/aws/aws-sdk-go/aws"
@@ -36,6 +37,10 @@ func resourceAwsDbOptionGroup() *schema.Resource {
				ForceNew:      true,
				ConflictsWith: []string{"name_prefix"},
				ValidateFunc:  validateDbOptionGroupName,
				StateFunc: func(v interface{}) string {
					value := v.(string)
					return strings.ToLower(value)
				},
			},
			"name_prefix": &schema.Schema{
				Type: schema.TypeString,
@@ -43,6 +48,10 @@ func resourceAwsDbOptionGroup() *schema.Resource {
				Computed:     true,
				ForceNew:     true,
				ValidateFunc: validateDbOptionGroupNamePrefix,
				StateFunc: func(v interface{}) string {
					value := v.(string)
					return strings.ToLower(value)
				},
			},
			"engine_name": &schema.Schema{
				Type: schema.TypeString,
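The two StateFunc entries above exist because RDS reports option group names in lower case; storing the lower-cased value in state keeps a mixed-case configuration from producing a perpetual diff. A minimal illustrative sketch, not part of the commit:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Mirrors the StateFunc above: the value recorded in state is the
	// lower-cased name, which matches what the RDS API reports back.
	stateFunc := func(v interface{}) string {
		return strings.ToLower(v.(string))
	}

	fmt.Println(stateFunc("option-group-TEST-terraform")) // option-group-test-terraform
}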

View File

@@ -49,7 +49,7 @@ func TestAccAWSDBOptionGroup_namePrefix(t *testing.T) {
					testAccCheckAWSDBOptionGroupExists("aws_db_option_group.test", &v),
					testAccCheckAWSDBOptionGroupAttributes(&v),
					resource.TestMatchResourceAttr(
-						"aws_db_option_group.test", "name", regexp.MustCompile("^tf-test-")),
+						"aws_db_option_group.test", "name", regexp.MustCompile("^tf-TEST-")),
				),
			},
		},
@@ -112,7 +112,7 @@ func TestAccAWSDBOptionGroup_basicDestroyWithInstance(t *testing.T) {
func TestAccAWSDBOptionGroup_OptionSettings(t *testing.T) {
	var v rds.OptionGroup
-	rName := fmt.Sprintf("option-group-test-terraform-%s", acctest.RandString(5))
+	rName := fmt.Sprintf("option-group-TEST-terraform-%s", acctest.RandString(5))

	resource.Test(t, resource.TestCase{
		PreCheck: func() { testAccPreCheck(t) },
@@ -149,7 +149,7 @@ func TestAccAWSDBOptionGroup_OptionSettings(t *testing.T) {
func TestAccAWSDBOptionGroup_sqlServerOptionsUpdate(t *testing.T) {
	var v rds.OptionGroup
-	rName := fmt.Sprintf("option-group-test-terraform-%s", acctest.RandString(5))
+	rName := fmt.Sprintf("option-group-TEST-terraform-%s", acctest.RandString(5))

	resource.Test(t, resource.TestCase{
		PreCheck: func() { testAccPreCheck(t) },
@@ -181,7 +181,7 @@ func TestAccAWSDBOptionGroup_sqlServerOptionsUpdate(t *testing.T) {
func TestAccAWSDBOptionGroup_multipleOptions(t *testing.T) {
	var v rds.OptionGroup
-	rName := fmt.Sprintf("option-group-test-terraform-%s", acctest.RandString(5))
+	rName := fmt.Sprintf("option-group-TEST-terraform-%s", acctest.RandString(5))

	resource.Test(t, resource.TestCase{
		PreCheck: func() { testAccPreCheck(t) },
@@ -434,7 +434,7 @@ resource "aws_db_option_group" "test" {
func testAccAWSDBOptionGroup_defaultDescription(n int) string {
	return fmt.Sprintf(`
resource "aws_db_option_group" "test" {
-  name = "tf-test-%d"
+  name = "tf-TEST-%d"
  engine_name = "mysql"
  major_engine_version = "5.6"
}

View File

@@ -82,6 +82,11 @@ func resourceAwsIamRole() *schema.Resource {
				ForceNew: true,
			},

			"description": {
				Type:     schema.TypeString,
				Optional: true,
			},

			"assume_role_policy": {
				Type:     schema.TypeString,
				Required: true,
@@ -115,6 +120,10 @@ func resourceAwsIamRoleCreate(d *schema.ResourceData, meta interface{}) error {
		AssumeRolePolicyDocument: aws.String(d.Get("assume_role_policy").(string)),
	}

	if v, ok := d.GetOk("description"); ok {
		request.Description = aws.String(v.(string))
	}

	var createResp *iam.CreateRoleOutput
	err := resource.Retry(30*time.Second, func() *resource.RetryError {
		var err error
@@ -168,6 +177,20 @@ func resourceAwsIamRoleUpdate(d *schema.ResourceData, meta interface{}) error {
		}
	}

	if d.HasChange("description") {
		roleDescriptionInput := &iam.UpdateRoleDescriptionInput{
			RoleName:    aws.String(d.Id()),
			Description: aws.String(d.Get("description").(string)),
		}
		_, err := iamconn.UpdateRoleDescription(roleDescriptionInput)
		if err != nil {
			if iamerr, ok := err.(awserr.Error); ok && iamerr.Code() == "NoSuchEntity" {
				d.SetId("")
				return nil
			}
			return fmt.Errorf("Error Updating IAM Role (%s) Description: %s", d.Id(), err)
		}
	}

	return nil
}
@@ -189,6 +212,13 @@ func resourceAwsIamRoleReadResult(d *schema.ResourceData, role *iam.Role) error
		return err
	}

	if role.Description != nil {
		// the description isn't present in the response to CreateRole.
		if err := d.Set("description", role.Description); err != nil {
			return err
		}
	}

	assumRolePolicy, err := url.QueryUnescape(*role.AssumeRolePolicyDocument)
	if err != nil {
		return err

View File

@@ -178,6 +178,10 @@ func testAccCheckAWSRoleAttributes(role *iam.GetRoleOutput) resource.TestCheckFu
		if *role.Role.Path != "/" {
			return fmt.Errorf("Bad path: %s", *role.Role.Path)
		}

		if *role.Role.Description != "Test Role" {
			return fmt.Errorf("Bad description: %s", *role.Role.Description)
		}

		return nil
	}
}
@@ -186,6 +190,7 @@ const testAccAWSRoleConfig = `
resource "aws_iam_role" "role" {
  name = "test-role"
  path = "/"
  description = "Test Role"
  assume_role_policy = "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"Service\":[\"ec2.amazonaws.com\"]},\"Action\":[\"sts:AssumeRole\"]}]}"
}
`

View File

@@ -460,7 +460,9 @@ func resourceAwsRoute53RecordRead(d *schema.ResourceData, meta interface{}) erro
	if _, ok := d.GetOk("zone_id"); !ok {
		parts := strings.Split(d.Id(), "_")

-		if len(parts) == 1 {
+		//we check that we have parsed the id into the correct number of segments
+		//we need at least 3 segments!
+		if len(parts) == 1 || len(parts) < 3 {
			return fmt.Errorf("Error Importing aws_route_53 record. Please make sure the record ID is in the form ZONEID_RECORDNAME_TYPE (i.e. Z4KAPRWWNC7JR_dev_A")
		}
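To see what the tightened check buys, here is a standalone sketch, not part of the commit, using the example ID from the error message: a two-segment ID used to slip past the old len(parts) == 1 test and crash later when the missing third segment was indexed.

package main

import (
	"fmt"
	"strings"
)

func main() {
	ok := strings.Split("Z4KAPRWWNC7JR_dev_A", "_")
	bad := strings.Split("Z4KAPRWWNC7JR_dev", "_")

	fmt.Println(len(ok), ok)   // 3 [Z4KAPRWWNC7JR dev A] -- accepted
	fmt.Println(len(bad), bad) // 2 [Z4KAPRWWNC7JR dev]   -- rejected by len(parts) < 3
}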

View File

@@ -406,6 +406,24 @@ func TestAccAWSRoute53Record_empty(t *testing.T) {
	})
}
// Regression test for https://github.com/hashicorp/terraform/issues/8423
func TestAccAWSRoute53Record_longTXTrecord(t *testing.T) {
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
IDRefreshName: "aws_route53_record.long_txt",
Providers: testAccProviders,
CheckDestroy: testAccCheckRoute53RecordDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccRoute53RecordConfigLongTxtRecord,
Check: resource.ComposeTestCheckFunc(
testAccCheckRoute53RecordExists("aws_route53_record.long_txt"),
),
},
},
})
}
func testAccCheckRoute53RecordDestroy(s *terraform.State) error {
	conn := testAccProvider.Meta().(*AWSClient).r53conn
	for _, rs := range s.RootModule().Resources {
@@ -1161,3 +1179,19 @@ resource "aws_route53_record" "empty" {
  records = ["127.0.0.1"]
}
`
const testAccRoute53RecordConfigLongTxtRecord = `
resource "aws_route53_zone" "main" {
name = "notexample.com"
}
resource "aws_route53_record" "long_txt" {
zone_id = "${aws_route53_zone.main.zone_id}"
name = "google.notexample.com"
type = "TXT"
ttl = "30"
records = [
"v=DKIM1; k=rsa; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAiajKNMp\" \"/A12roF4p3MBm9QxQu6GDsBlWUWFx8EaS8TCo3Qe8Cj0kTag1JMjzCC1s6oM0a43JhO6mp6z/"
]
}
`

View File

@@ -27,6 +27,10 @@ func resourceAwsSsmDocument() *schema.Resource {
		Delete: resourceAwsSsmDocumentDelete,

		Schema: map[string]*schema.Schema{
			"arn": {
				Type:     schema.TypeString,
				Computed: true,
			},
			"name": {
				Type:     schema.TypeString,
				Required: true,
@@ -195,6 +199,9 @@ func resourceAwsSsmDocumentRead(d *schema.ResourceData, meta interface{}) error
	d.Set("name", doc.Name)
	d.Set("owner", doc.Owner)
	d.Set("platform_types", flattenStringList(doc.PlatformTypes))
	if err := d.Set("arn", flattenAwsSsmDocumentArn(meta, doc.Name)); err != nil {
		return fmt.Errorf("[DEBUG] Error setting arn error: %#v", err)
	}

	d.Set("status", doc.Status)
@@ -238,6 +245,12 @@ func resourceAwsSsmDocumentRead(d *schema.ResourceData, meta interface{}) error

	return nil
}

func flattenAwsSsmDocumentArn(meta interface{}, docName *string) string {
	region := meta.(*AWSClient).region

	return fmt.Sprintf("arn:aws:ssm:%s::document/%s", region, *docName)
}

func resourceAwsSsmDocumentUpdate(d *schema.ResourceData, meta interface{}) error {
	if _, ok := d.GetOk("permissions"); ok {
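For reference: with region us-east-1 and the document name test_document-100 from the test config, flattenAwsSsmDocumentArn yields arn:aws:ssm:us-east-1::document/test_document-100. The account field between the double colons is left empty here; that is the form AWS uses for its own public documents, while customer-managed document ARNs normally carry an account ID.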

View File

@@ -0,0 +1,168 @@
package aws
import (
"log"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/service/ssm"
"github.com/hashicorp/terraform/helper/schema"
)
func resourceAwsSsmMaintenanceWindow() *schema.Resource {
return &schema.Resource{
Create: resourceAwsSsmMaintenanceWindowCreate,
Read: resourceAwsSsmMaintenanceWindowRead,
Update: resourceAwsSsmMaintenanceWindowUpdate,
Delete: resourceAwsSsmMaintenanceWindowDelete,
Schema: map[string]*schema.Schema{
"name": {
Type: schema.TypeString,
Required: true,
},
"schedule": {
Type: schema.TypeString,
Required: true,
},
"duration": {
Type: schema.TypeInt,
Required: true,
},
"cutoff": {
Type: schema.TypeInt,
Required: true,
},
"allow_unassociated_targets": {
Type: schema.TypeBool,
Optional: true,
Default: false,
},
"enabled": {
Type: schema.TypeBool,
Optional: true,
Default: true,
},
},
}
}
func resourceAwsSsmMaintenanceWindowCreate(d *schema.ResourceData, meta interface{}) error {
ssmconn := meta.(*AWSClient).ssmconn
params := &ssm.CreateMaintenanceWindowInput{
Name: aws.String(d.Get("name").(string)),
Schedule: aws.String(d.Get("schedule").(string)),
Duration: aws.Int64(int64(d.Get("duration").(int))),
Cutoff: aws.Int64(int64(d.Get("cutoff").(int))),
AllowUnassociatedTargets: aws.Bool(d.Get("allow_unassociated_targets").(bool)),
}
resp, err := ssmconn.CreateMaintenanceWindow(params)
if err != nil {
return err
}
d.SetId(*resp.WindowId)
return resourceAwsSsmMaintenanceWindowRead(d, meta)
}
func resourceAwsSsmMaintenanceWindowUpdate(d *schema.ResourceData, meta interface{}) error {
ssmconn := meta.(*AWSClient).ssmconn
params := &ssm.UpdateMaintenanceWindowInput{
WindowId: aws.String(d.Id()),
}
if d.HasChange("name") {
params.Name = aws.String(d.Get("name").(string))
}
if d.HasChange("schedule") {
params.Schedule = aws.String(d.Get("schedule").(string))
}
if d.HasChange("duration") {
params.Duration = aws.Int64(int64(d.Get("duration").(int)))
}
if d.HasChange("cutoff") {
params.Cutoff = aws.Int64(int64(d.Get("cutoff").(int)))
}
if d.HasChange("allow_unassociated_targets") {
params.AllowUnassociatedTargets = aws.Bool(d.Get("allow_unassociated_targets").(bool))
}
if d.HasChange("enabled") {
params.Enabled = aws.Bool(d.Get("enabled").(bool))
}
_, err := ssmconn.UpdateMaintenanceWindow(params)
if err != nil {
return err
}
return resourceAwsSsmMaintenanceWindowRead(d, meta)
}
func resourceAwsSsmMaintenanceWindowRead(d *schema.ResourceData, meta interface{}) error {
ssmconn := meta.(*AWSClient).ssmconn
params := &ssm.DescribeMaintenanceWindowsInput{
Filters: []*ssm.MaintenanceWindowFilter{
{
Key: aws.String("Name"),
Values: []*string{aws.String(d.Get("name").(string))},
},
},
}
resp, err := ssmconn.DescribeMaintenanceWindows(params)
if err != nil {
return err
}
found := false
for _, window := range resp.WindowIdentities {
if *window.WindowId == d.Id() {
found = true
d.Set("name", window.Name)
d.Set("cutoff", window.Cutoff)
d.Set("duration", window.Duration)
d.Set("enabled", window.Enabled)
}
}
if !found {
log.Printf("[INFO] Cannot find the SSM Maintenance Window %q. Removing from state", d.Get("name").(string))
d.SetId("")
return nil
}
return nil
}
func resourceAwsSsmMaintenanceWindowDelete(d *schema.ResourceData, meta interface{}) error {
ssmconn := meta.(*AWSClient).ssmconn
log.Printf("[INFO] Deleting SSM Maintenance Window: %s", d.Id())
params := &ssm.DeleteMaintenanceWindowInput{
WindowId: aws.String(d.Id()),
}
_, err := ssmconn.DeleteMaintenanceWindow(params)
if err != nil {
return err
}
return nil
}

View File

@@ -0,0 +1,179 @@
package aws
import (
"fmt"
"log"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/service/ssm"
"github.com/hashicorp/terraform/helper/schema"
)
func resourceAwsSsmMaintenanceWindowTarget() *schema.Resource {
return &schema.Resource{
Create: resourceAwsSsmMaintenanceWindowTargetCreate,
Read: resourceAwsSsmMaintenanceWindowTargetRead,
Delete: resourceAwsSsmMaintenanceWindowTargetDelete,
Schema: map[string]*schema.Schema{
"window_id": {
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"resource_type": {
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"targets": {
Type: schema.TypeList,
Required: true,
ForceNew: true,
MaxItems: 1,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"key": {
Type: schema.TypeString,
Required: true,
},
"values": {
Type: schema.TypeList,
Required: true,
Elem: &schema.Schema{Type: schema.TypeString},
},
},
},
},
"owner_information": {
Type: schema.TypeString,
ForceNew: true,
Optional: true,
},
},
}
}
func expandAwsSsmMaintenanceWindowTargets(d *schema.ResourceData) []*ssm.Target {
var targets []*ssm.Target
targetConfig := d.Get("targets").([]interface{})
for _, tConfig := range targetConfig {
config := tConfig.(map[string]interface{})
target := &ssm.Target{
Key: aws.String(config["key"].(string)),
Values: expandStringList(config["values"].([]interface{})),
}
targets = append(targets, target)
}
return targets
}
func flattenAwsSsmMaintenanceWindowTargets(targets []*ssm.Target) []map[string]interface{} {
if len(targets) == 0 {
return nil
}
result := make([]map[string]interface{}, 0, len(targets))
target := targets[0]
t := make(map[string]interface{})
t["key"] = *target.Key
t["values"] = flattenStringList(target.Values)
result = append(result, t)
return result
}
func resourceAwsSsmMaintenanceWindowTargetCreate(d *schema.ResourceData, meta interface{}) error {
ssmconn := meta.(*AWSClient).ssmconn
log.Printf("[INFO] Registering SSM Maintenance Window Target")
params := &ssm.RegisterTargetWithMaintenanceWindowInput{
WindowId: aws.String(d.Get("window_id").(string)),
ResourceType: aws.String(d.Get("resource_type").(string)),
Targets: expandAwsSsmMaintenanceWindowTargets(d),
}
if v, ok := d.GetOk("owner_information"); ok {
params.OwnerInformation = aws.String(v.(string))
}
resp, err := ssmconn.RegisterTargetWithMaintenanceWindow(params)
if err != nil {
return err
}
d.SetId(*resp.WindowTargetId)
return resourceAwsSsmMaintenanceWindowTargetRead(d, meta)
}
func resourceAwsSsmMaintenanceWindowTargetRead(d *schema.ResourceData, meta interface{}) error {
ssmconn := meta.(*AWSClient).ssmconn
params := &ssm.DescribeMaintenanceWindowTargetsInput{
WindowId: aws.String(d.Get("window_id").(string)),
Filters: []*ssm.MaintenanceWindowFilter{
{
Key: aws.String("WindowTargetId"),
Values: []*string{aws.String(d.Id())},
},
},
}
resp, err := ssmconn.DescribeMaintenanceWindowTargets(params)
if err != nil {
return err
}
found := false
for _, t := range resp.Targets {
if *t.WindowTargetId == d.Id() {
found = true
d.Set("owner_information", t.OwnerInformation)
d.Set("window_id", t.WindowId)
d.Set("resource_type", t.ResourceType)
if err := d.Set("targets", flattenAwsSsmMaintenanceWindowTargets(t.Targets)); err != nil {
return fmt.Errorf("[DEBUG] Error setting targets error: %#v", err)
}
}
}
if !found {
log.Printf("[INFO] Maintenance Window Target not found. Removing from state")
d.SetId("")
return nil
}
return nil
}
func resourceAwsSsmMaintenanceWindowTargetDelete(d *schema.ResourceData, meta interface{}) error {
ssmconn := meta.(*AWSClient).ssmconn
log.Printf("[INFO] Deregistering SSM Maintenance Window Target: %s", d.Id())
params := &ssm.DeregisterTargetFromMaintenanceWindowInput{
WindowId: aws.String(d.Get("window_id").(string)),
WindowTargetId: aws.String(d.Id()),
}
_, err := ssmconn.DeregisterTargetFromMaintenanceWindow(params)
if err != nil {
return err
}
return nil
}

View File

@@ -0,0 +1,122 @@
package aws
import (
"fmt"
"testing"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/service/ssm"
"github.com/hashicorp/terraform/helper/acctest"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
)
func TestAccAWSSSMMaintenanceWindowTarget_basic(t *testing.T) {
name := acctest.RandString(10)
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSSSMMaintenanceWindowTargetDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSSSMMaintenanceWindowTargetBasicConfig(name),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSSSMMaintenanceWindowTargetExists("aws_ssm_maintenance_window_target.target"),
),
},
},
})
}
func testAccCheckAWSSSMMaintenanceWindowTargetExists(n string) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.ID == "" {
return fmt.Errorf("No SSM Maintenance Window Target Window ID is set")
}
conn := testAccProvider.Meta().(*AWSClient).ssmconn
resp, err := conn.DescribeMaintenanceWindowTargets(&ssm.DescribeMaintenanceWindowTargetsInput{
WindowId: aws.String(rs.Primary.Attributes["window_id"]),
Filters: []*ssm.MaintenanceWindowFilter{
{
Key: aws.String("WindowTargetId"),
Values: []*string{aws.String(rs.Primary.ID)},
},
},
})
if err != nil {
return err
}
for _, i := range resp.Targets {
if *i.WindowTargetId == rs.Primary.ID {
return nil
}
}
return fmt.Errorf("No AWS SSM Maintenance window target found")
}
}
func testAccCheckAWSSSMMaintenanceWindowTargetDestroy(s *terraform.State) error {
conn := testAccProvider.Meta().(*AWSClient).ssmconn
for _, rs := range s.RootModule().Resources {
if rs.Type != "aws_ssm_maintenance_window_target" {
continue
}
out, err := conn.DescribeMaintenanceWindowTargets(&ssm.DescribeMaintenanceWindowTargetsInput{
WindowId: aws.String(rs.Primary.Attributes["window_id"]),
Filters: []*ssm.MaintenanceWindowFilter{
{
Key: aws.String("WindowTargetId"),
Values: []*string{aws.String(rs.Primary.ID)},
},
},
})
if err != nil {
// Verify the error is what we want
if ae, ok := err.(awserr.Error); ok && ae.Code() == "DoesNotExistException" {
continue
}
return err
}
if len(out.Targets) > 0 {
return fmt.Errorf("Expected AWS SSM Maintenance Target to be gone, but was still found")
}
return nil
}
return nil
}
func testAccAWSSSMMaintenanceWindowTargetBasicConfig(rName string) string {
return fmt.Sprintf(`
resource "aws_ssm_maintenance_window" "foo" {
name = "maintenance-window-%s"
schedule = "cron(0 16 ? * TUE *)"
duration = 3
cutoff = 1
}
resource "aws_ssm_maintenance_window_target" "target" {
window_id = "${aws_ssm_maintenance_window.foo.id}"
resource_type = "INSTANCE"
targets {
key = "tag:Name"
values = ["acceptance_test"]
}
}
`, rName)
}

View File

@@ -0,0 +1,230 @@
package aws
import (
"fmt"
"log"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/service/ssm"
"github.com/hashicorp/terraform/helper/schema"
)
func resourceAwsSsmMaintenanceWindowTask() *schema.Resource {
return &schema.Resource{
Create: resourceAwsSsmMaintenanceWindowTaskCreate,
Read: resourceAwsSsmMaintenanceWindowTaskRead,
Delete: resourceAwsSsmMaintenanceWindowTaskDelete,
Schema: map[string]*schema.Schema{
"window_id": {
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"max_concurrency": {
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"max_errors": {
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"task_type": {
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"task_arn": {
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"service_role_arn": {
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"targets": {
Type: schema.TypeList,
Required: true,
ForceNew: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"key": {
Type: schema.TypeString,
Required: true,
},
"values": {
Type: schema.TypeList,
Required: true,
Elem: &schema.Schema{Type: schema.TypeString},
},
},
},
},
"priority": {
Type: schema.TypeInt,
Optional: true,
ForceNew: true,
},
"logging_info": {
Type: schema.TypeList,
MaxItems: 1,
Optional: true,
ForceNew: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"s3_bucket_name": {
Type: schema.TypeString,
Required: true,
},
"s3_region": {
Type: schema.TypeString,
Required: true,
},
"s3_bucket_prefix": {
Type: schema.TypeString,
Optional: true,
},
},
},
},
},
}
}
func expandAwsSsmMaintenanceWindowLoggingInfo(config []interface{}) *ssm.LoggingInfo {
loggingConfig := config[0].(map[string]interface{})
loggingInfo := &ssm.LoggingInfo{
S3BucketName: aws.String(loggingConfig["s3_bucket_name"].(string)),
S3Region: aws.String(loggingConfig["s3_region"].(string)),
}
if s := loggingConfig["s3_bucket_prefix"].(string); s != "" {
loggingInfo.S3KeyPrefix = aws.String(s)
}
return loggingInfo
}
func flattenAwsSsmMaintenanceWindowLoggingInfo(loggingInfo *ssm.LoggingInfo) []interface{} {
result := make(map[string]interface{})
result["s3_bucket_name"] = *loggingInfo.S3BucketName
result["s3_region"] = *loggingInfo.S3Region
if loggingInfo.S3KeyPrefix != nil {
result["s3_bucket_prefix"] = *loggingInfo.S3KeyPrefix
}
return []interface{}{result}
}
func resourceAwsSsmMaintenanceWindowTaskCreate(d *schema.ResourceData, meta interface{}) error {
ssmconn := meta.(*AWSClient).ssmconn
log.Printf("[INFO] Registering SSM Maintenance Window Task")
params := &ssm.RegisterTaskWithMaintenanceWindowInput{
WindowId: aws.String(d.Get("window_id").(string)),
MaxConcurrency: aws.String(d.Get("max_concurrency").(string)),
MaxErrors: aws.String(d.Get("max_errors").(string)),
TaskType: aws.String(d.Get("task_type").(string)),
ServiceRoleArn: aws.String(d.Get("service_role_arn").(string)),
TaskArn: aws.String(d.Get("task_arn").(string)),
Targets: expandAwsSsmMaintenanceWindowTargets(d),
}
if v, ok := d.GetOk("priority"); ok {
params.Priority = aws.Int64(int64(v.(int)))
}
if v, ok := d.GetOk("logging_info"); ok {
params.LoggingInfo = expandAwsSsmMaintenanceWindowLoggingInfo(v.([]interface{}))
}
resp, err := ssmconn.RegisterTaskWithMaintenanceWindow(params)
if err != nil {
return err
}
d.SetId(*resp.WindowTaskId)
return resourceAwsSsmMaintenanceWindowTaskRead(d, meta)
}
func resourceAwsSsmMaintenanceWindowTaskRead(d *schema.ResourceData, meta interface{}) error {
ssmconn := meta.(*AWSClient).ssmconn
params := &ssm.DescribeMaintenanceWindowTasksInput{
WindowId: aws.String(d.Get("window_id").(string)),
}
resp, err := ssmconn.DescribeMaintenanceWindowTasks(params)
if err != nil {
return err
}
found := false
for _, t := range resp.Tasks {
if *t.WindowTaskId == d.Id() {
found = true
d.Set("window_id", t.WindowId)
d.Set("max_concurrency", t.MaxConcurrency)
d.Set("max_errors", t.MaxErrors)
d.Set("task_type", t.Type)
d.Set("service_role_arn", t.ServiceRoleArn)
d.Set("task_arn", t.TaskArn)
d.Set("priority", t.Priority)
if t.LoggingInfo != nil {
if err := d.Set("logging_info", flattenAwsSsmMaintenanceWindowLoggingInfo(t.LoggingInfo)); err != nil {
return fmt.Errorf("[DEBUG] Error setting logging_info error: %#v", err)
}
}
if err := d.Set("targets", flattenAwsSsmMaintenanceWindowTargets(t.Targets)); err != nil {
return fmt.Errorf("[DEBUG] Error setting targets error: %#v", err)
}
}
}
if !found {
log.Printf("[INFO] Maintenance Window Target not found. Removing from state")
d.SetId("")
return nil
}
return nil
}
func resourceAwsSsmMaintenanceWindowTaskDelete(d *schema.ResourceData, meta interface{}) error {
ssmconn := meta.(*AWSClient).ssmconn
log.Printf("[INFO] Deregistering SSM Maintenance Window Task: %s", d.Id())
params := &ssm.DeregisterTaskFromMaintenanceWindowInput{
WindowId: aws.String(d.Get("window_id").(string)),
WindowTaskId: aws.String(d.Id()),
}
_, err := ssmconn.DeregisterTaskFromMaintenanceWindow(params)
if err != nil {
return err
}
return nil
}

View File

@@ -0,0 +1,158 @@
package aws
import (
"fmt"
"testing"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/service/ssm"
"github.com/hashicorp/terraform/helper/acctest"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
)
func TestAccAWSSSMMaintenanceWindowTask_basic(t *testing.T) {
name := acctest.RandString(10)
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSSSMMaintenanceWindowTaskDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSSSMMaintenanceWindowTaskBasicConfig(name),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSSSMMaintenanceWindowTaskExists("aws_ssm_maintenance_window_task.target"),
),
},
},
})
}
func testAccCheckAWSSSMMaintenanceWindowTaskExists(n string) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.ID == "" {
return fmt.Errorf("No SSM Maintenance Window Task Window ID is set")
}
conn := testAccProvider.Meta().(*AWSClient).ssmconn
resp, err := conn.DescribeMaintenanceWindowTasks(&ssm.DescribeMaintenanceWindowTasksInput{
WindowId: aws.String(rs.Primary.Attributes["window_id"]),
})
if err != nil {
return err
}
for _, i := range resp.Tasks {
if *i.WindowTaskId == rs.Primary.ID {
return nil
}
}
return fmt.Errorf("No AWS SSM Maintenance window task found")
}
}
func testAccCheckAWSSSMMaintenanceWindowTaskDestroy(s *terraform.State) error {
conn := testAccProvider.Meta().(*AWSClient).ssmconn
for _, rs := range s.RootModule().Resources {
if rs.Type != "aws_ssm_maintenance_window_target" {
continue
}
out, err := conn.DescribeMaintenanceWindowTasks(&ssm.DescribeMaintenanceWindowTasksInput{
WindowId: aws.String(rs.Primary.Attributes["window_id"]),
})
if err != nil {
// Verify the error is what we want
if ae, ok := err.(awserr.Error); ok && ae.Code() == "DoesNotExistException" {
continue
}
return err
}
if len(out.Tasks) > 0 {
return fmt.Errorf("Expected AWS SSM Maintenance Task to be gone, but was still found")
}
return nil
}
return nil
}
func testAccAWSSSMMaintenanceWindowTaskBasicConfig(rName string) string {
return fmt.Sprintf(`
resource "aws_ssm_maintenance_window" "foo" {
name = "maintenance-window-%s"
schedule = "cron(0 16 ? * TUE *)"
duration = 3
cutoff = 1
}
resource "aws_ssm_maintenance_window_task" "target" {
window_id = "${aws_ssm_maintenance_window.foo.id}"
task_type = "RUN_COMMAND"
task_arn = "AWS-RunShellScript"
priority = 1
service_role_arn = "${aws_iam_role.ssm_role.arn}"
max_concurrency = "2"
max_errors = "1"
targets {
key = "InstanceIds"
values = ["${aws_instance.foo.id}"]
}
}
resource "aws_instance" "foo" {
ami = "ami-4fccb37f"
instance_type = "m1.small"
}
resource "aws_iam_role" "ssm_role" {
name = "ssm-role-%s"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "events.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
POLICY
}
resource "aws_iam_role_policy" "bar" {
name = "ssm_role_policy_%s"
role = "${aws_iam_role.ssm_role.name}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Action": "ssm:*",
"Resource": "*"
}
}
EOF
}
`, rName, rName, rName)
}

View File

@@ -0,0 +1,141 @@
package aws
import (
"fmt"
"testing"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/service/ssm"
"github.com/hashicorp/terraform/helper/acctest"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
)
func TestAccAWSSSMMaintenanceWindow_basic(t *testing.T) {
name := acctest.RandString(10)
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSSSMMaintenanceWindowDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSSSMMaintenanceWindowBasicConfig(name),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSSSMMaintenanceWindowExists("aws_ssm_maintenance_window.foo"),
resource.TestCheckResourceAttr(
"aws_ssm_maintenance_window.foo", "schedule", "cron(0 16 ? * TUE *)"),
resource.TestCheckResourceAttr(
"aws_ssm_maintenance_window.foo", "duration", "3"),
resource.TestCheckResourceAttr(
"aws_ssm_maintenance_window.foo", "cutoff", "1"),
resource.TestCheckResourceAttr(
"aws_ssm_maintenance_window.foo", "name", fmt.Sprintf("maintenance-window-%s", name)),
),
},
{
Config: testAccAWSSSMMaintenanceWindowBasicConfigUpdated(name),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSSSMMaintenanceWindowExists("aws_ssm_maintenance_window.foo"),
resource.TestCheckResourceAttr(
"aws_ssm_maintenance_window.foo", "schedule", "cron(0 16 ? * WED *)"),
resource.TestCheckResourceAttr(
"aws_ssm_maintenance_window.foo", "duration", "10"),
resource.TestCheckResourceAttr(
"aws_ssm_maintenance_window.foo", "cutoff", "8"),
resource.TestCheckResourceAttr(
"aws_ssm_maintenance_window.foo", "name", fmt.Sprintf("updated-maintenance-window-%s", name)),
),
},
},
})
}
func testAccCheckAWSSSMMaintenanceWindowExists(n string) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.ID == "" {
return fmt.Errorf("No SSM Maintenance Window ID is set")
}
conn := testAccProvider.Meta().(*AWSClient).ssmconn
resp, err := conn.DescribeMaintenanceWindows(&ssm.DescribeMaintenanceWindowsInput{
Filters: []*ssm.MaintenanceWindowFilter{
{
Key: aws.String("Name"),
Values: []*string{aws.String(rs.Primary.Attributes["name"])},
},
},
})
for _, i := range resp.WindowIdentities {
if *i.WindowId == rs.Primary.ID {
return nil
}
}
if err != nil {
return err
}
return fmt.Errorf("No AWS SSM Maintenance window found")
}
}
func testAccCheckAWSSSMMaintenanceWindowDestroy(s *terraform.State) error {
conn := testAccProvider.Meta().(*AWSClient).ssmconn
for _, rs := range s.RootModule().Resources {
if rs.Type != "aws_ssm_maintenance_window" {
continue
}
out, err := conn.DescribeMaintenanceWindows(&ssm.DescribeMaintenanceWindowsInput{
Filters: []*ssm.MaintenanceWindowFilter{
{
Key: aws.String("Name"),
Values: []*string{aws.String(rs.Primary.Attributes["name"])},
},
},
})
if err != nil {
return err
}
if len(out.WindowIdentities) > 0 {
return fmt.Errorf("Expected AWS SSM Maintenance Document to be gone, but was still found")
}
return nil
}
return nil
}
func testAccAWSSSMMaintenanceWindowBasicConfig(rName string) string {
return fmt.Sprintf(`
resource "aws_ssm_maintenance_window" "foo" {
name = "maintenance-window-%s"
schedule = "cron(0 16 ? * TUE *)"
duration = 3
cutoff = 1
}
`, rName)
}
func testAccAWSSSMMaintenanceWindowBasicConfigUpdated(rName string) string {
return fmt.Sprintf(`
resource "aws_ssm_maintenance_window" "foo" {
name = "updated-maintenance-window-%s"
schedule = "cron(0 16 ? * WED *)"
duration = 10
cutoff = 8
}
`, rName)
}

View File

@@ -820,7 +820,7 @@ func flattenResourceRecords(recs []*route53.ResourceRecord, typeStr string) []st
		if r.Value != nil {
			s := *r.Value
			if typeStr == "TXT" || typeStr == "SPF" {
-				s = strings.Replace(s, "\"", "", 2)
+				s = expandTxtEntry(s)
			}
			strs = append(strs, s)
		}
@@ -833,14 +833,71 @@ func expandResourceRecords(recs []interface{}, typeStr string) []*route53.Resour
	for _, r := range recs {
		s := r.(string)
		if typeStr == "TXT" || typeStr == "SPF" {
-			// `flattenResourceRecords` removes quotes. Add them back.
-			s = fmt.Sprintf("\"%s\"", s)
+			s = flattenTxtEntry(s)
		}
		records = append(records, &route53.ResourceRecord{Value: aws.String(s)})
	}
	return records
}
// How 'flattenTxtEntry' and 'expandTxtEntry' work.
//
// In Route 53, TXT entries are written using quoted strings, one per line.
// Example:
// "x=foo"
// "bar=12"
//
// In Terraform, there are two differences:
// - We use a list of strings instead of separating strings with newlines.
// - Within each string, we don't include the surrounding quotes.
// Example:
// records = ["x=foo", "bar=12"] # Instead of ["\"x=foo\", \"bar=12\""]
//
// When we pull from Route 53, `expandTxtEntry` removes the surrounding quotes;
// when we push to Route 53, `flattenTxtEntry` adds them back.
//
// One complication is that a single TXT entry can have multiple quoted strings.
// For example, here are two TXT entries, one with two quoted strings and the
// other with three.
// "x=" "foo"
// "ba" "r" "=12"
//
// DNS clients are expected to merge the quoted strings before interpreting the
// value. Since `expandTxtEntry` only removes the quotes at the end we can still
// (hackily) represent the above configuration in Terraform:
// records = ["x=\" \"foo", "ba\" \"r\" \"=12"]
//
// The primary reason to use multiple strings for an entry is that DNS (and Route
// 53) doesn't allow a quoted string to be more than 255 characters long. If you
// want a longer TXT entry, you must use multiple quoted strings.
//
// It would be nice if Terraform automatically split strings longer than 255
// characters. For example, imagine "xxx..xxx" has 256 "x" characters.
// records = ["xxx..xxx"]
// When pushing to Route 53, this could be converted to:
// "xxx..xx" "x"
//
// This could also work when the user is already using multiple quoted strings:
// records = ["xxx.xxx\" \"yyy..yyy"]
// When pushing to Route 53, this could be converted to:
// "xxx..xx" "xyyy...y" "yy"
//
// If you want to add this feature, make sure to follow all the quoting rules in
// <https://tools.ietf.org/html/rfc1464#section-2>. If you make a mistake, people
// might end up relying on that mistake so fixing it would be a breaking change.
func flattenTxtEntry(s string) string {
return fmt.Sprintf(`"%s"`, s)
}
func expandTxtEntry(s string) string {
last := len(s) - 1
if last != 0 && s[0] == '"' && s[last] == '"' {
s = s[1:last]
}
return s
}
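A self-contained round-trip sketch of the behaviour described in the comment above (the two helpers are restated verbatim so the snippet runs on its own; it is illustrative, not part of the commit):

package main

import "fmt"

func flattenTxtEntry(s string) string { return fmt.Sprintf(`"%s"`, s) }

func expandTxtEntry(s string) string {
	last := len(s) - 1
	if last != 0 && s[0] == '"' && s[last] == '"' {
		s = s[1:last]
	}
	return s
}

func main() {
	// Simple entry: only the outer quotes are stripped and re-added.
	fmt.Println(expandTxtEntry(`"x=foo"`)) // x=foo
	fmt.Println(flattenTxtEntry("x=foo"))  // "x=foo"

	// Multi-string entry: the inner quotes survive the round trip, which is
	// how records = ["x=\" \"foo"] stands in for the entry "x=" "foo".
	fmt.Println(expandTxtEntry(`"x=" "foo"`)) // x=" "foo
	fmt.Println(flattenTxtEntry(`x=" "foo`))  // "x=" "foo"
}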
func expandESClusterConfig(m map[string]interface{}) *elasticsearch.ElasticsearchClusterConfig {
	config := elasticsearch.ElasticsearchClusterConfig{}

View File

@@ -822,11 +822,15 @@ func TestFlattenResourceRecords(t *testing.T) {
	original := []string{
		`127.0.0.1`,
		`"abc def"`,
		`"abc" "def"`,
		`"abc" ""`,
	}

	dequoted := []string{
		`127.0.0.1`,
		`abc def`,
		`abc" "def`,
		`abc" "`,
	}

	var wrapped []*route53.ResourceRecord = nil

View File

@@ -25,7 +25,8 @@ func Provider() terraform.ResourceProvider {
			},
		},
		ResourcesMap: map[string]*schema.Resource{
			"gitlab_project":      resourceGitlabProject(),
			"gitlab_project_hook": resourceGitlabProjectHook(),
		},

		ConfigureFunc: providerConfigure,

View File

@ -0,0 +1,192 @@
package gitlab
import (
"fmt"
"log"
"strconv"
"github.com/hashicorp/terraform/helper/schema"
gitlab "github.com/xanzy/go-gitlab"
)
func resourceGitlabProjectHook() *schema.Resource {
return &schema.Resource{
Create: resourceGitlabProjectHookCreate,
Read: resourceGitlabProjectHookRead,
Update: resourceGitlabProjectHookUpdate,
Delete: resourceGitlabProjectHookDelete,
Schema: map[string]*schema.Schema{
"project": {
Type: schema.TypeString,
Required: true,
},
"url": {
Type: schema.TypeString,
Required: true,
},
"token": {
Type: schema.TypeString,
Optional: true,
Sensitive: true,
},
"push_events": {
Type: schema.TypeBool,
Optional: true,
Default: true,
},
"issues_events": {
Type: schema.TypeBool,
Optional: true,
Default: false,
},
"merge_requests_events": {
Type: schema.TypeBool,
Optional: true,
Default: false,
},
"tag_push_events": {
Type: schema.TypeBool,
Optional: true,
Default: false,
},
"note_events": {
Type: schema.TypeBool,
Optional: true,
Default: false,
},
"build_events": {
Type: schema.TypeBool,
Optional: true,
Default: false,
},
"pipeline_events": {
Type: schema.TypeBool,
Optional: true,
Default: false,
},
"wiki_page_events": {
Type: schema.TypeBool,
Optional: true,
Default: false,
},
"enable_ssl_verification": {
Type: schema.TypeBool,
Optional: true,
Default: true,
},
},
}
}
func resourceGitlabProjectHookCreate(d *schema.ResourceData, meta interface{}) error {
client := meta.(*gitlab.Client)
project := d.Get("project").(string)
options := &gitlab.AddProjectHookOptions{
URL: gitlab.String(d.Get("url").(string)),
PushEvents: gitlab.Bool(d.Get("push_events").(bool)),
IssuesEvents: gitlab.Bool(d.Get("issues_events").(bool)),
MergeRequestsEvents: gitlab.Bool(d.Get("merge_requests_events").(bool)),
TagPushEvents: gitlab.Bool(d.Get("tag_push_events").(bool)),
NoteEvents: gitlab.Bool(d.Get("note_events").(bool)),
BuildEvents: gitlab.Bool(d.Get("build_events").(bool)),
PipelineEvents: gitlab.Bool(d.Get("pipeline_events").(bool)),
WikiPageEvents: gitlab.Bool(d.Get("wiki_page_events").(bool)),
EnableSSLVerification: gitlab.Bool(d.Get("enable_ssl_verification").(bool)),
}
if v, ok := d.GetOk("token"); ok {
options.Token = gitlab.String(v.(string))
}
log.Printf("[DEBUG] create gitlab project hook %q", options.URL)
hook, _, err := client.Projects.AddProjectHook(project, options)
if err != nil {
return err
}
d.SetId(fmt.Sprintf("%d", hook.ID))
return resourceGitlabProjectHookRead(d, meta)
}
func resourceGitlabProjectHookRead(d *schema.ResourceData, meta interface{}) error {
client := meta.(*gitlab.Client)
project := d.Get("project").(string)
hookId, err := strconv.Atoi(d.Id())
if err != nil {
return err
}
log.Printf("[DEBUG] read gitlab project hook %s/%d", project, hookId)
hook, response, err := client.Projects.GetProjectHook(project, hookId)
if err != nil {
if response != nil && response.StatusCode == 404 {
log.Printf("[WARN] removing project hook %d from state because it no longer exists in gitlab", hookId)
d.SetId("")
return nil
}
return err
}
d.Set("url", hook.URL)
d.Set("push_events", hook.PushEvents)
d.Set("issues_events", hook.IssuesEvents)
d.Set("merge_requests_events", hook.MergeRequestsEvents)
d.Set("tag_push_events", hook.TagPushEvents)
d.Set("note_events", hook.NoteEvents)
d.Set("build_events", hook.BuildEvents)
d.Set("pipeline_events", hook.PipelineEvents)
d.Set("wiki_page_events", hook.WikiPageEvents)
d.Set("enable_ssl_verification", hook.EnableSSLVerification)
return nil
}
func resourceGitlabProjectHookUpdate(d *schema.ResourceData, meta interface{}) error {
client := meta.(*gitlab.Client)
project := d.Get("project").(string)
hookId, err := strconv.Atoi(d.Id())
if err != nil {
return err
}
options := &gitlab.EditProjectHookOptions{
URL: gitlab.String(d.Get("url").(string)),
PushEvents: gitlab.Bool(d.Get("push_events").(bool)),
IssuesEvents: gitlab.Bool(d.Get("issues_events").(bool)),
MergeRequestsEvents: gitlab.Bool(d.Get("merge_requests_events").(bool)),
TagPushEvents: gitlab.Bool(d.Get("tag_push_events").(bool)),
NoteEvents: gitlab.Bool(d.Get("note_events").(bool)),
BuildEvents: gitlab.Bool(d.Get("build_events").(bool)),
PipelineEvents: gitlab.Bool(d.Get("pipeline_events").(bool)),
WikiPageEvents: gitlab.Bool(d.Get("wiki_page_events").(bool)),
EnableSSLVerification: gitlab.Bool(d.Get("enable_ssl_verification").(bool)),
}
if d.HasChange("token") {
options.Token = gitlab.String(d.Get("token").(string))
}
log.Printf("[DEBUG] update gitlab project hook %s", d.Id())
_, _, err = client.Projects.EditProjectHook(project, hookId, options)
if err != nil {
return err
}
return resourceGitlabProjectHookRead(d, meta)
}
func resourceGitlabProjectHookDelete(d *schema.ResourceData, meta interface{}) error {
client := meta.(*gitlab.Client)
project := d.Get("project").(string)
hookId, err := strconv.Atoi(d.Id())
if err != nil {
return err
}
log.Printf("[DEBUG] Delete gitlab project hook %s", d.Id())
_, err = client.Projects.DeleteProjectHook(project, hookId)
return err
}

View File

@ -0,0 +1,220 @@
package gitlab
import (
"fmt"
"strconv"
"testing"
"github.com/hashicorp/terraform/helper/acctest"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
"github.com/xanzy/go-gitlab"
)
func TestAccGitlabProjectHook_basic(t *testing.T) {
var hook gitlab.ProjectHook
rInt := acctest.RandInt()
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckGitlabProjectHookDestroy,
Steps: []resource.TestStep{
// Create a project and hook with default options
{
Config: testAccGitlabProjectHookConfig(rInt),
Check: resource.ComposeTestCheckFunc(
testAccCheckGitlabProjectHookExists("gitlab_project_hook.foo", &hook),
testAccCheckGitlabProjectHookAttributes(&hook, &testAccGitlabProjectHookExpectedAttributes{
URL: fmt.Sprintf("https://example.com/hook-%d", rInt),
PushEvents: true,
EnableSSLVerification: true,
}),
),
},
// Update the project hook to toggle all the values to their inverse
{
Config: testAccGitlabProjectHookUpdateConfig(rInt),
Check: resource.ComposeTestCheckFunc(
testAccCheckGitlabProjectHookExists("gitlab_project_hook.foo", &hook),
testAccCheckGitlabProjectHookAttributes(&hook, &testAccGitlabProjectHookExpectedAttributes{
URL: fmt.Sprintf("https://example.com/hook-%d", rInt),
PushEvents: false,
IssuesEvents: true,
MergeRequestsEvents: true,
TagPushEvents: true,
NoteEvents: true,
BuildEvents: true,
PipelineEvents: true,
WikiPageEvents: true,
EnableSSLVerification: false,
}),
),
},
// Update the project hook to toggle the options back
{
Config: testAccGitlabProjectHookConfig(rInt),
Check: resource.ComposeTestCheckFunc(
testAccCheckGitlabProjectHookExists("gitlab_project_hook.foo", &hook),
testAccCheckGitlabProjectHookAttributes(&hook, &testAccGitlabProjectHookExpectedAttributes{
URL: fmt.Sprintf("https://example.com/hook-%d", rInt),
PushEvents: true,
EnableSSLVerification: true,
}),
),
},
},
})
}
func testAccCheckGitlabProjectHookExists(n string, hook *gitlab.ProjectHook) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not Found: %s", n)
}
hookID, err := strconv.Atoi(rs.Primary.ID)
if err != nil {
return err
}
repoName := rs.Primary.Attributes["project"]
if repoName == "" {
return fmt.Errorf("No project ID is set")
}
conn := testAccProvider.Meta().(*gitlab.Client)
gotHook, _, err := conn.Projects.GetProjectHook(repoName, hookID)
if err != nil {
return err
}
*hook = *gotHook
return nil
}
}
type testAccGitlabProjectHookExpectedAttributes struct {
URL string
PushEvents bool
IssuesEvents bool
MergeRequestsEvents bool
TagPushEvents bool
NoteEvents bool
BuildEvents bool
PipelineEvents bool
WikiPageEvents bool
EnableSSLVerification bool
}
func testAccCheckGitlabProjectHookAttributes(hook *gitlab.ProjectHook, want *testAccGitlabProjectHookExpectedAttributes) resource.TestCheckFunc {
return func(s *terraform.State) error {
if hook.URL != want.URL {
return fmt.Errorf("got url %q; want %q", hook.URL, want.URL)
}
if hook.EnableSSLVerification != want.EnableSSLVerification {
return fmt.Errorf("got enable_ssl_verification %t; want %t", hook.EnableSSLVerification, want.EnableSSLVerification)
}
if hook.PushEvents != want.PushEvents {
return fmt.Errorf("got push_events %t; want %t", hook.PushEvents, want.PushEvents)
}
if hook.IssuesEvents != want.IssuesEvents {
return fmt.Errorf("got issues_events %t; want %t", hook.IssuesEvents, want.IssuesEvents)
}
if hook.MergeRequestsEvents != want.MergeRequestsEvents {
return fmt.Errorf("got merge_requests_events %t; want %t", hook.MergeRequestsEvents, want.MergeRequestsEvents)
}
if hook.TagPushEvents != want.TagPushEvents {
return fmt.Errorf("got tag_push_events %t; want %t", hook.TagPushEvents, want.TagPushEvents)
}
if hook.NoteEvents != want.NoteEvents {
return fmt.Errorf("got note_events %t; want %t", hook.NoteEvents, want.NoteEvents)
}
if hook.BuildEvents != want.BuildEvents {
return fmt.Errorf("got build_events %t; want %t", hook.BuildEvents, want.BuildEvents)
}
if hook.PipelineEvents != want.PipelineEvents {
return fmt.Errorf("got pipeline_events %t; want %t", hook.PipelineEvents, want.PipelineEvents)
}
if hook.WikiPageEvents != want.WikiPageEvents {
return fmt.Errorf("got wiki_page_events %t; want %t", hook.WikiPageEvents, want.WikiPageEvents)
}
return nil
}
}
func testAccCheckGitlabProjectHookDestroy(s *terraform.State) error {
conn := testAccProvider.Meta().(*gitlab.Client)
for _, rs := range s.RootModule().Resources {
if rs.Type != "gitlab_project" {
continue
}
gotRepo, resp, err := conn.Projects.GetProject(rs.Primary.ID)
if err == nil {
if gotRepo != nil && fmt.Sprintf("%d", gotRepo.ID) == rs.Primary.ID {
return fmt.Errorf("Repository still exists")
}
}
if resp == nil || resp.StatusCode != 404 {
return err
}
return nil
}
return nil
}
func testAccGitlabProjectHookConfig(rInt int) string {
return fmt.Sprintf(`
resource "gitlab_project" "foo" {
name = "foo-%d"
description = "Terraform acceptance tests"
# So that acceptance tests can be run in a gitlab organization
# with no billing
visibility_level = "public"
}
resource "gitlab_project_hook" "foo" {
project = "${gitlab_project.foo.id}"
url = "https://example.com/hook-%d"
}
`, rInt, rInt)
}
func testAccGitlabProjectHookUpdateConfig(rInt int) string {
return fmt.Sprintf(`
resource "gitlab_project" "foo" {
name = "foo-%d"
description = "Terraform acceptance tests"
# So that acceptance tests can be run in a gitlab organization
# with no billing
visibility_level = "public"
}
resource "gitlab_project_hook" "foo" {
project = "${gitlab_project.foo.id}"
url = "https://example.com/hook-%d"
enable_ssl_verification = false
push_events = false
issues_events = true
merge_requests_events = true
tag_push_events = true
note_events = true
build_events = true
pipeline_events = true
wiki_page_events = true
}
`, rInt, rInt)
}

View File

@ -0,0 +1,47 @@
package google
import (
"testing"
"github.com/hashicorp/terraform/helper/resource"
)
func TestAccComputeRoute_importBasic(t *testing.T) {
resourceName := "google_compute_route.foobar"
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckComputeRouteDestroy,
Steps: []resource.TestStep{
{
Config: testAccComputeRoute_basic,
},
{
ResourceName: resourceName,
ImportState: true,
ImportStateVerify: true,
},
},
})
}
func TestAccComputeRoute_importDefaultInternetGateway(t *testing.T) {
resourceName := "google_compute_route.foobar"
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckComputeRouteDestroy,
Steps: []resource.TestStep{
{
Config: testAccComputeRoute_defaultInternetGateway,
},
{
ResourceName: resourceName,
ImportState: true,
ImportStateVerify: true,
},
},
})
}

View File

@ -3,6 +3,7 @@ package google
import (
"encoding/json"
"fmt"
"log"
"strings"

"github.com/hashicorp/terraform/helper/mutexkv"

@ -262,3 +263,15 @@ func getNetworkNameFromSelfLink(network string) (string, error) {
func getRouterLockName(region string, router string) string {
return fmt.Sprintf("router/%s/%s", region, router)
}
func handleNotFoundError(err error, d *schema.ResourceData, resource string) error {
if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 {
log.Printf("[WARN] Removing %s because it's gone", resource)
// The resource doesn't exist anymore
d.SetId("")
return nil
}
return fmt.Errorf("Error reading %s: %s", resource, err)
}
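A minimal sketch of the call pattern the later hunks in this commit adopt; the specific resource (`Routes`) and the `name` attribute here are illustrative, not part of the hunk above:

```go
// Sketch only: a google provider Read function using handleNotFoundError.
func resourceComputeRouteReadSketch(d *schema.ResourceData, meta interface{}) error {
	config := meta.(*Config)

	route, err := config.clientCompute.Routes.Get(config.Project, d.Id()).Do()
	if err != nil {
		// On a 404 the helper clears the ID and returns nil, so Terraform
		// drops the resource from state; other errors are wrapped and returned.
		return handleNotFoundError(err, d, fmt.Sprintf("Route %q", d.Id()))
	}

	d.Set("name", route.Name)
	return nil
}
```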

View File

@ -7,7 +7,6 @@ import (
"github.com/hashicorp/terraform/helper/schema" "github.com/hashicorp/terraform/helper/schema"
"google.golang.org/api/compute/v1" "google.golang.org/api/compute/v1"
"google.golang.org/api/googleapi"
) )
func stringScopeHashcode(v interface{}) int {

@ -361,16 +360,7 @@ func getInstance(config *Config, d *schema.ResourceData) (*compute.Instance, err
instance, err := config.clientCompute.Instances.Get(
project, d.Get("zone").(string), d.Id()).Do()
if err != nil {
return nil, handleNotFoundError(err, d, fmt.Sprintf("Instance %s", d.Get("name").(string)))
}

return instance, nil

@ -713,13 +703,8 @@ func resourceComputeInstanceCreate(d *schema.ResourceData, meta interface{}) err
func resourceComputeInstanceRead(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)

instance, err := getInstance(config, d)
if err != nil || instance == nil {
return err
}

View File

@ -222,7 +222,7 @@ func resourceComputeInstanceGroupManagerRead(d *schema.ResourceData, meta interf
manager, e = config.clientCompute.InstanceGroupManagers.Get(project, zone.(string), d.Id()).Do()
if e != nil {
return handleNotFoundError(e, d, fmt.Sprintf("Instance Group Manager %q", d.Get("name").(string)))
}
} else {
// If the resource was imported, the only info we have is the ID. Try to find the resource

View File

@ -14,6 +14,9 @@ func resourceComputeRoute() *schema.Resource {
Create: resourceComputeRouteCreate,
Read:   resourceComputeRouteRead,
Delete: resourceComputeRouteDelete,
Importer: &schema.ResourceImporter{
State: schema.ImportStatePassthrough,
},

Schema: map[string]*schema.Schema{
"dest_range": &schema.Schema{

View File

@ -151,6 +151,14 @@ func resourceStorageBucketObjectDelete(d *schema.ResourceData, meta interface{})
err := DeleteCall.Do()
if err != nil {
if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 {
log.Printf("[WARN] Removing Bucket Object %q because it's gone", name)
// The resource doesn't exist anymore
d.SetId("")
return nil
}

return fmt.Errorf("Error deleting contents of object %s: %s", name, err)
}
} }

View File

@ -288,7 +288,11 @@ func testResourceJob_updateCheck(s *terraform.State) error {
// Verify foo doesn't exist
job, _, err := client.Jobs().Info("foo", nil)
if err != nil {
// Job could have already been purged from nomad server
if !strings.Contains(err.Error(), "(job not found)") {
return fmt.Errorf("error reading %q job: %s", "foo", err)
}
return nil
}
if job.Status != "dead" {
return fmt.Errorf("%q job is not dead. Status: %q", "foo", job.Status)

View File

@ -0,0 +1,91 @@
# Maintainer's Etiquette
Are you a core maintainer of Terraform? Great! Here are a few notes
to help you get comfortable when working on the project.
## Expectations
We value the time you spend on the project and as such your maintainer status
doesn't imply any obligations to do any specific work.
### Your PRs
These apply to all contributors, but maintainers should lead by example! :wink:
- for `provider/*` PRs it's useful to attach test results & advice on how to run the relevant tests
- for `bug`fixes it's useful to attach a repro case, ideally in the form of a test
### PRs/issues from others
- you're welcome to triage (attach labels to) other PRs and issues
- we generally use a 2-label system (= at least 2 labels per issue/PR) where one label is generic and the other API-specific, e.g. `enhancement` & `provider/aws`
## Merging
- you're free to review PRs from the community or other HC employees and give :+1: / :-1:
- if the PR submitter has push privileges (recognizable via `Collaborator`, `Member` or `Owner` badge) - we expect **the submitter** to merge their own PR after receiving a positive review from either HC employee or another maintainer. _Exceptions apply - see below._
- we prefer to use GitHub's interface or API to do this, just click the green button
- squash?
- squash when you think the commit history is irrelevant (will not be helpful for any readers at T+6 months)
- Add the new PR to the **Changelog** if it may affect the user (almost any PR except test changes and docs updates)
- we prefer to use GitHub's web interface to modify the Changelog and use `[GH-12345]` to format the PR number. These will be turned into links as part of the release process. Breaking changes should always be documented separately.
## Release process
- HC employees are responsible for cutting new releases
- The employee cutting the release will always notify all maintainers via the Slack channel before & after each release
so you can avoid merging PRs during the release process.
## Exceptions
Any PR that is significantly changing or even breaking user experience cross-providers should always get at least one :+1: from an HC employee prior to merge.
It is generally advisable to leave PRs labelled as `core` for HC employees to review and merge.
Examples include:
- adding/changing/removing a CLI (sub)command or a [flag](https://github.com/hashicorp/terraform/pull/12939)
- introducing a new feature like [Environments](https://github.com/hashicorp/terraform/pull/12182) or [Shadow Graph](https://github.com/hashicorp/terraform/pull/9334)
- changing config (HCL) like [adding support for lists](https://github.com/hashicorp/terraform/pull/6322)
- changing the [build process or test environment](https://github.com/hashicorp/terraform/pull/9355)
## Breaking Changes
- we always try to avoid breaking changes where possible and/or defer them to the nearest major release
- [state migration](https://github.com/hashicorp/terraform/blob/2fe5976aec290f4b53f07534f4cde13f6d877a3f/helper/schema/resource.go#L33-L56) may help you avoid breaking changes, see [example](https://github.com/hashicorp/terraform/blob/351c6bed79abbb40e461d3f7d49fe4cf20bced41/builtin/providers/aws/resource_aws_route53_record_migrate.go)
- either way BCs should be clearly documented in a special section of the Changelog
- Any BC must always receive at least one :+1: from an HC employee prior to merge; two :+1:s are advisable
### Examples of Breaking Changes
- https://github.com/hashicorp/terraform/pull/12396
- https://github.com/hashicorp/terraform/pull/13872
- https://github.com/hashicorp/terraform/pull/13752
## Unsure?
If you're unsure about anything, ask in the committers' Slack channel.
## New Providers
These will require a :+1: and some extra effort from an HC employee.
We expect all acceptance tests to be as self-sustainable as possible
to keep the bar for running any acceptance test low for anyone
outside of HashiCorp or the core maintainers team.
We expect any test to run **in parallel** alongside any other test (even the same test).
To ensure this is possible, we need all tests to avoid sharing namespaces and to avoid static names; generate unique (e.g. randomized) names instead.
On rare occasions this may require the use of mutexes in the resource code, as sketched below.
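A minimal sketch of that mutex pattern, using the `helper/mutexkv` package vendored in this repo; the resource function and lock-key layout below are illustrative:

```go
package example

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/mutexkv"
	"github.com/hashicorp/terraform/helper/schema"
)

// One provider-wide MutexKV; each key serializes access to one remote object.
var mu = mutexkv.NewMutexKV()

func resourceExampleUpdate(d *schema.ResourceData, meta interface{}) error {
	// Derive the lock key from the object's identity so unrelated tests
	// (and resources) can still proceed in parallel.
	lockName := fmt.Sprintf("router/%s/%s", d.Get("region").(string), d.Get("router").(string))
	mu.Lock(lockName)
	defer mu.Unlock(lockName)

	// ... perform the API calls that are not concurrency-safe here ...
	return nil
}
```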
### New Remote-API-based provider (e.g. AWS, Google Cloud, PagerDuty, Atlas)
We will need some details about who to contact or where to register for a new account
and generally we can't merge providers before ensuring we have a way to test them nightly,
which usually involves setting up a new account and obtaining API credentials.
### Local provider (e.g. MySQL, PostgreSQL, Kubernetes, Vault)
We will need either Terraform configs that will set up the underlying test infrastructure
(e.g. GKE cluster for Kubernetes) or Dockerfile(s) that will prepare test environment (e.g. MySQL)
and expose the endpoint for testing.

View File

@ -0,0 +1,28 @@
# Create a CDN Profile, a CDN Endpoint with a Storage Account as origin
This Terraform template was based on [this](https://github.com/Azure/azure-quickstart-templates/tree/master/201-cdn-with-storage-account) Azure Quickstart Template. Changes to the ARM template that may have occurred since the creation of this example may not be reflected in this Terraform template.
This template creates a [CDN Profile](https://docs.microsoft.com/en-us/azure/cdn/cdn-overview) and a CDN Endpoint with the origin as a Storage Account. Note that the user needs to create a public container in the Storage Account in order for the CDN Endpoint to serve content from the Storage Account.
# Important
The endpoint will not immediately be available for use, as it takes time for the registration to propagate through the CDN. For Azure CDN from Akamai profiles, propagation will usually complete within one minute. For Azure CDN from Verizon profiles, propagation will usually complete within 90 minutes, but in some cases can take longer.
Users who try to use the CDN domain name before the endpoint configuration has propagated to the POPs will receive HTTP 404 response codes. If it has been several hours since you created your endpoint and you're still receiving 404 responses, please see [Troubleshooting CDN endpoints returning 404 statuses](https://docs.microsoft.com/en-us/azure/cdn/cdn-troubleshoot-endpoint).
## main.tf
The `main.tf` file contains the actual resources that will be deployed. It also contains the Azure Resource Group definition and any defined variables.
## outputs.tf
This data is output when `terraform apply` is called, and can be queried using the `terraform output` command.
## provider.tf
Azure requires that an application is added to Azure Active Directory to generate the `client_id`, `client_secret`, and `tenant_id` needed by Terraform (`subscription_id` can be recovered from your Azure account details). Please go [here](https://www.terraform.io/docs/providers/azurerm/) for full instructions on how to create these credentials and populate your `provider.tf` file.
## terraform.tfvars
If a `terraform.tfvars` file is present in the current directory, Terraform automatically loads it to populate variables. We don't recommend saving usernames and passwords to version control, but you can create a local secret variables file and use `-var-file` to load it.
If you are committing this template to source control, please ensure that you add this file to your `.gitignore` file.
## variables.tf
The `variables.tf` file contains all of the input parameters that the user can specify when deploying this Terraform template.

View File

@ -0,0 +1,29 @@
#!/bin/bash
set -o errexit -o nounset
docker run --rm -it \
-e ARM_CLIENT_ID \
-e ARM_CLIENT_SECRET \
-e ARM_SUBSCRIPTION_ID \
-e ARM_TENANT_ID \
-v $(pwd):/data \
--workdir=/data \
--entrypoint "/bin/sh" \
hashicorp/terraform:light \
-c "/bin/terraform get; \
/bin/terraform validate; \
/bin/terraform plan -out=out.tfplan -var resource_group=$KEY; \
/bin/terraform apply out.tfplan"
# cleanup deployed azure resources via terraform
docker run --rm -it \
-e ARM_CLIENT_ID \
-e ARM_CLIENT_SECRET \
-e ARM_SUBSCRIPTION_ID \
-e ARM_TENANT_ID \
-v $(pwd):/data \
--workdir=/data \
--entrypoint "/bin/sh" \
hashicorp/terraform:light \
-c "/bin/terraform destroy -force -var resource_group=$KEY;"

View File

@ -0,0 +1,15 @@
#!/bin/bash
set -o errexit -o nounset
if docker -v; then
# generate a unique string for CI deployment
export KEY=$(cat /dev/urandom | env LC_CTYPE=C tr -cd 'a-z' | head -c 12)
export PASSWORD=$KEY$(cat /dev/urandom | env LC_CTYPE=C tr -cd 'A-Z' | head -c 2)$(cat /dev/urandom | env LC_CTYPE=C tr -cd '0-9' | head -c 2)
/bin/sh ./deploy.ci.sh
else
echo "Docker is used to run terraform commands, please install before run: https://docs.docker.com/docker-for-mac/install/"
fi

Binary file not shown (new image, 68 KiB)

View File

@ -0,0 +1,39 @@
# provider "azurerm" {
# subscription_id = "REPLACE-WITH-YOUR-SUBSCRIPTION-ID"
# client_id = "REPLACE-WITH-YOUR-CLIENT-ID"
# client_secret = "REPLACE-WITH-YOUR-CLIENT-SECRET"
# tenant_id = "REPLACE-WITH-YOUR-TENANT-ID"
# }
resource "azurerm_resource_group" "rg" {
name = "${var.resource_group}"
location = "${var.location}"
}
resource "azurerm_storage_account" "stor" {
name = "${var.resource_group}stor"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
account_type = "${var.storage_account_type}"
}
resource "azurerm_cdn_profile" "cdn" {
name = "${var.resource_group}CdnProfile1"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
sku = "Standard_Akamai"
}
resource "azurerm_cdn_endpoint" "cdnendpt" {
name = "${var.resource_group}CdnEndpoint1"
profile_name = "${azurerm_cdn_profile.cdn.name}"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
origin {
name = "${var.resource_group}Origin1"
host_name = "${var.host_name}"
http_port = 80
https_port = 443
}
}

View File

@ -0,0 +1,3 @@
output "CDN Endpoint ID" {
value = "${azurerm_cdn_endpoint.cdnendpt.name}.azureedge.net"
}

View File

@ -0,0 +1,18 @@
variable "resource_group" {
description = "The name of the resource group in which to create the virtual network."
}
variable "location" {
description = "The location/region where the virtual network is created. Changing this forces a new resource to be created."
default = "southcentralus"
}
variable "storage_account_type" {
description = "Specifies the type of the storage account"
default = "Standard_LRS"
}
variable "host_name" {
description = "A string that determines the hostname/IP address of the origin server. This string could be a domain name, IPv4 address or IPv6 address."
default = "www.hostnameoforiginserver.com"
}

View File

@ -17,4 +17,4 @@ Azure requires that an application is added to Azure Active Directory to generat
If a `terraform.tfvars` file is present in the current directory, Terraform automatically loads it to populate variables. We don't recommend saving usernames and passwords to version control, but you can create a local secret variables file and use `-var-file` to load it.
## variables.tf
The `variables.tf` file contains all of the input parameters that the user can specify when deploying this Terraform template.

View File

@ -33,4 +33,4 @@ docker run --rm -it \
--workdir=/data \
--entrypoint "/bin/sh" \
hashicorp/terraform:light \
-c "/bin/terraform destroy -force -var dns_name=$KEY -var hostname=$KEY -var resource_group=$KEY -var admin_password=$PASSWORD;"

View File

@ -12,4 +12,4 @@ if docker -v; then
else
echo "Docker is used to run terraform commands, please install it before running: https://docs.docker.com/docker-for-mac/install/"
fi

View File

@ -105,4 +105,4 @@ resource "azurerm_virtual_machine" "vm" {
enabled = true
storage_uri = "${azurerm_storage_account.stor.primary_blob_endpoint}"
}
}

View File

@ -8,4 +8,4 @@ output "vm_fqdn" {
output "ssh_command" { output "ssh_command" {
value = "ssh ${var.admin_username}@${azurerm_public_ip.pip.fqdn}" value = "ssh ${var.admin_username}@${azurerm_public_ip.pip.fqdn}"
} }

View File

@ -72,4 +72,4 @@ variable "admin_username" {
variable "admin_password" { variable "admin_password" {
description = "administrator password (recommended to disable password auth)" description = "administrator password (recommended to disable password auth)"
} }

View File

@ -0,0 +1,3 @@
terraform.tfstate*
provider.tf
out.tfplan

View File

@ -0,0 +1,18 @@
# Virtual Network with Two Subnets
This template allows you to create a Virtual Network with two subnets.
## main.tf
The `main.tf` file contains the actual resources that will be deployed. It also contains the Azure Resource Group definition and any defined variables.
## outputs.tf
This data is output when `terraform apply` is called, and can be queried using the `terraform output` command.
## provider.tf
Azure requires that an application is added to Azure Active Directory to generate the `client_id`, `client_secret`, and `tenant_id` needed by Terraform (`subscription_id` can be recovered from your Azure account details). Please go [here](https://www.terraform.io/docs/providers/azurerm/) for full instructions on how to create these credentials and populate your `provider.tf` file.
## terraform.tfvars
If a `terraform.tfvars` file is present in the current directory, Terraform automatically loads it to populate variables. We don't recommend saving usernames and passwords to version control, but you can create a local secret variables file and use `-var-file` to load it.
## variables.tf
The `variables.tf` file contains all of the input parameters that the user can specify when deploying this Terraform template.

View File

@ -0,0 +1,41 @@
#!/bin/bash
set -o errexit -o nounset
# generate a unique string for CI deployment
# KEY=$(cat /dev/urandom | tr -cd 'a-z' | head -c 12)
# PASSWORD=$KEY$(cat /dev/urandom | tr -cd 'A-Z' | head -c 2)$(cat /dev/urandom | tr -cd '0-9' | head -c 2)
docker run --rm -it \
-e ARM_CLIENT_ID \
-e ARM_CLIENT_SECRET \
-e ARM_SUBSCRIPTION_ID \
-e ARM_TENANT_ID \
-v $(pwd):/data \
--workdir=/data \
--entrypoint "/bin/sh" \
hashicorp/terraform:light \
-c "/bin/terraform get; \
/bin/terraform validate; \
/bin/terraform plan -out=out.tfplan -var resource_group=$KEY; \
/bin/terraform apply out.tfplan; \
/bin/terraform show;"
# check that resources exist via azure cli
docker run --rm -it \
azuresdk/azure-cli-python \
sh -c "az login --service-principal -u $ARM_CLIENT_ID -p $ARM_CLIENT_SECRET --tenant $ARM_TENANT_ID > /dev/null; \
az network vnet subnet show -n subnet1 -g $KEY --vnet-name '$KEY'vnet; \
az network vnet subnet show -n subnet2 -g $KEY --vnet-name '$KEY'vnet;"
# cleanup deployed azure resources via terraform
docker run --rm -it \
-e ARM_CLIENT_ID \
-e ARM_CLIENT_SECRET \
-e ARM_SUBSCRIPTION_ID \
-e ARM_TENANT_ID \
-v $(pwd):/data \
--workdir=/data \
--entrypoint "/bin/sh" \
hashicorp/terraform:light \
-c "/bin/terraform destroy -force -var resource_group=$KEY;"

View File

@ -0,0 +1,15 @@
#!/bin/bash
set -o errexit -o nounset
if docker -v; then
# generate a unique string for CI deployment
export KEY=$(cat /dev/urandom | env LC_CTYPE=C tr -cd 'a-z' | head -c 12)
export PASSWORD=$KEY$(cat /dev/urandom | env LC_CTYPE=C tr -cd 'A-Z' | head -c 2)$(cat /dev/urandom | env LC_CTYPE=C tr -cd '0-9' | head -c 2)
/bin/sh ./deploy.ci.sh
else
echo "Docker is used to run terraform commands, please install before run: https://docs.docker.com/docker-for-mac/install/"
fi

View File

@ -0,0 +1,32 @@
# provider "azurerm" {
# subscription_id = "REPLACE-WITH-YOUR-SUBSCRIPTION-ID"
# client_id = "REPLACE-WITH-YOUR-CLIENT-ID"
# client_secret = "REPLACE-WITH-YOUR-CLIENT-SECRET"
# tenant_id = "REPLACE-WITH-YOUR-TENANT-ID"
# }
resource "azurerm_resource_group" "rg" {
name = "${var.resource_group}"
location = "${var.location}"
}
resource "azurerm_virtual_network" "vnet" {
name = "${var.resource_group}vnet"
location = "${var.location}"
address_space = ["10.0.0.0/16"]
resource_group_name = "${azurerm_resource_group.rg.name}"
}
resource "azurerm_subnet" "subnet1" {
name = "subnet1"
virtual_network_name = "${azurerm_virtual_network.vnet.name}"
resource_group_name = "${azurerm_resource_group.rg.name}"
address_prefix = "10.0.0.0/24"
}
resource "azurerm_subnet" "subnet2" {
name = "subnet2"
virtual_network_name = "${azurerm_virtual_network.vnet.name}"
resource_group_name = "${azurerm_resource_group.rg.name}"
address_prefix = "10.0.1.0/24"
}

View File

@ -0,0 +1,8 @@
variable "resource_group" {
description = "The name of the resource group in which to create the virtual network."
}
variable "location" {
description = "The location/region where the virtual network is created. Changing this forces a new resource to be created."
default = "southcentralus"
}

View File

@ -258,6 +258,15 @@ func copyOutput(r io.Reader, doneCh chan<- struct{}) {
if runtime.GOOS == "windows" {
stdout = colorable.NewColorableStdout()
stderr = colorable.NewColorableStderr()

// colorable is not concurrency-safe when stdout and stderr are the
// same console, so we need to add some synchronization to ensure that
// we can't be concurrently writing to both stderr and stdout at
// once, or else we get intermingled writes that create gibberish
// in the console.
wrapped := synchronizedWriters(stdout, stderr)
stdout = wrapped[0]
stderr = wrapped[1]
}
var wg sync.WaitGroup

synchronized_writers.go
View File

@ -0,0 +1,31 @@
package main
import (
"io"
"sync"
)
type synchronizedWriter struct {
io.Writer
mutex *sync.Mutex
}
// synchronizedWriters takes a set of writers and returns wrappers that ensure
// that only one write can be outstanding at a time across the whole set.
func synchronizedWriters(targets ...io.Writer) []io.Writer {
mutex := &sync.Mutex{}
ret := make([]io.Writer, len(targets))
for i, target := range targets {
ret[i] = &synchronizedWriter{
Writer: target,
mutex: mutex,
}
}
return ret
}
func (w *synchronizedWriter) Write(p []byte) (int, error) {
w.mutex.Lock()
defer w.mutex.Unlock()
return w.Writer.Write(p)
}
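A short demo of the wrapper, assuming the `synchronizedWriters` definition above is in scope (and imports of `io`, `os`, and `sync`): several goroutines write through both wrapped streams, and because the wrappers share one mutex, no two writes overlap.

```go
func main() {
	wrapped := synchronizedWriters(os.Stdout, os.Stderr)
	stdout, stderr := wrapped[0], wrapped[1]

	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(2)
		go func() { defer wg.Done(); io.WriteString(stdout, "to stdout\n") }()
		go func() { defer wg.Done(); io.WriteString(stderr, "to stderr\n") }()
	}
	wg.Wait() // each WriteString call completes atomically w.r.t. the others
}
```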

View File

@ -1871,6 +1871,107 @@ func TestContext2Plan_countIncreaseFromOneCorrupted(t *testing.T) {
}
}
// A common pattern in TF configs is to have a set of resources with the same
// count and to use count.index to create correspondences between them:
//
// foo_id = "${foo.bar.*.id[count.index]}"
//
// This test is for the situation where some instances already exist and the
// count is increased. In that case, we should see only the create diffs
// for the new instances and not any update diffs for the existing ones.
func TestContext2Plan_countIncreaseWithSplatReference(t *testing.T) {
m := testModule(t, "plan-count-splat-reference")
p := testProvider("aws")
p.DiffFn = testDiffFn
s := &State{
Modules: []*ModuleState{
&ModuleState{
Path: rootModulePath,
Resources: map[string]*ResourceState{
"aws_instance.foo.0": &ResourceState{
Type: "aws_instance",
Primary: &InstanceState{
ID: "bar",
Attributes: map[string]string{
"name": "foo 0",
},
},
},
"aws_instance.foo.1": &ResourceState{
Type: "aws_instance",
Primary: &InstanceState{
ID: "bar",
Attributes: map[string]string{
"name": "foo 1",
},
},
},
"aws_instance.bar.0": &ResourceState{
Type: "aws_instance",
Primary: &InstanceState{
ID: "bar",
Attributes: map[string]string{
"foo_name": "foo 0",
},
},
},
"aws_instance.bar.1": &ResourceState{
Type: "aws_instance",
Primary: &InstanceState{
ID: "bar",
Attributes: map[string]string{
"foo_name": "foo 1",
},
},
},
},
},
},
}
ctx := testContext2(t, &ContextOpts{
Module: m,
Providers: map[string]ResourceProviderFactory{
"aws": testProviderFuncFixed(p),
},
State: s,
})
plan, err := ctx.Plan()
if err != nil {
t.Fatalf("err: %s", err)
}
actual := strings.TrimSpace(plan.String())
expected := strings.TrimSpace(`
DIFF:
CREATE: aws_instance.bar.2
foo_name: "" => "foo 2"
type: "" => "aws_instance"
CREATE: aws_instance.foo.2
name: "" => "foo 2"
type: "" => "aws_instance"
STATE:
aws_instance.bar.0:
ID = bar
foo_name = foo 0
aws_instance.bar.1:
ID = bar
foo_name = foo 1
aws_instance.foo.0:
ID = bar
name = foo 0
aws_instance.foo.1:
ID = bar
name = foo 1
`)
if actual != expected {
t.Fatalf("bad:\n%s", actual)
}
}
func TestContext2Plan_destroy(t *testing.T) {
m := testModule(t, "plan-destroy")
p := testProvider("aws")

View File

@ -594,10 +594,6 @@ func (i *Interpolater) computeResourceMultiVariable(
}

if singleAttr, ok := r.Primary.Attributes[v.Field]; ok {
if singleAttr == config.UnknownVariableValue {
return &unknownVariable, nil
}

values = append(values, singleAttr)
continue
}

@ -613,10 +609,6 @@ func (i *Interpolater) computeResourceMultiVariable(
return nil, err
}

if multiAttr == unknownVariable {
return &unknownVariable, nil
}

values = append(values, multiAttr)
}

View File

@ -359,8 +359,81 @@ func TestInterpolater_resourceVariableMulti(t *testing.T) {
}

testInterpolate(t, i, scope, "aws_instance.web.*.foo", ast.Variable{
Type: ast.TypeList,
Value: []ast.Variable{
{
Type: ast.TypeUnknown,
Value: config.UnknownVariableValue,
},
},
})
}
func TestInterpolater_resourceVariableMultiPartialUnknown(t *testing.T) {
lock := new(sync.RWMutex)
state := &State{
Modules: []*ModuleState{
&ModuleState{
Path: rootModulePath,
Resources: map[string]*ResourceState{
"aws_instance.web.0": &ResourceState{
Type: "aws_instance",
Primary: &InstanceState{
ID: "bar",
Attributes: map[string]string{
"foo": "1",
},
},
},
"aws_instance.web.1": &ResourceState{
Type: "aws_instance",
Primary: &InstanceState{
ID: "bar",
Attributes: map[string]string{
"foo": config.UnknownVariableValue,
},
},
},
"aws_instance.web.2": &ResourceState{
Type: "aws_instance",
Primary: &InstanceState{
ID: "bar",
Attributes: map[string]string{
"foo": "2",
},
},
},
},
},
},
}
i := &Interpolater{
Module: testModule(t, "interpolate-resource-variable-multi"),
State: state,
StateLock: lock,
}
scope := &InterpolationScope{
Path: rootModulePath,
}
testInterpolate(t, i, scope, "aws_instance.web.*.foo", ast.Variable{
Type: ast.TypeList,
Value: []ast.Variable{
{
Type: ast.TypeString,
Value: "1",
},
{
Type: ast.TypeUnknown,
Value: config.UnknownVariableValue,
},
{
Type: ast.TypeString,
Value: "2",
},
},
})
}
@ -408,8 +481,13 @@ func TestInterpolater_resourceVariableMultiList(t *testing.T) {
}

testInterpolate(t, i, scope, "aws_instance.web.*.ip", ast.Variable{
Type: ast.TypeList,
Value: []ast.Variable{
{
Type: ast.TypeUnknown,
Value: config.UnknownVariableValue,
},
},
})
}

View File

@ -0,0 +1,3 @@
resource "aws_instance" "web" {
count = 3
}

View File

@ -0,0 +1,9 @@
resource "aws_instance" "foo" {
name = "foo ${count.index}"
count = 3
}
resource "aws_instance" "bar" {
foo_name = "${aws_instance.foo.*.name[count.index]}"
count = 3
}

View File

@ -4,7 +4,7 @@ clone_folder: c:\gopath\src\github.com\hashicorp\hcl
environment:
GOPATH: c:\gopath
init:
- git config --global core.autocrlf false
install:
- cmd: >-
echo %Path%

View File

@ -3,6 +3,7 @@
package parser

import (
"bytes"
"errors"
"fmt"
"strings"

@ -36,6 +37,11 @@ func newParser(src []byte) *Parser {
// Parse returns the fully parsed source and returns the abstract syntax tree.
func Parse(src []byte) (*ast.File, error) {
// normalize all line endings
// since the scanner and output only work with "\n" line endings, we may
// end up with dangling "\r" characters in the parsed data.
src = bytes.Replace(src, []byte("\r\n"), []byte("\n"), -1)

p := newParser(src)
return p.Parse()
}

View File

@ -62,6 +62,5 @@ func Format(src []byte) ([]byte, error) {
// Add trailing newline to result
buf.WriteString("\n")

return buf.Bytes(), nil
}

View File

@ -147,7 +147,7 @@ func (p *Parser) objectKey() ([]*ast.ObjectKey, error) {
// Done
return keys, nil
case token.ILLEGAL:
return nil, errors.New("illegal")
default:
return nil, fmt.Errorf("expected: STRING got: %s", p.tok.Type)
}

View File

@ -77,3 +77,12 @@ func (n *LiteralNode) String() string {
func (n *LiteralNode) Type(Scope) (Type, error) {
return n.Typex, nil
}

// IsUnknown returns true either if the node's value is itself unknown
// or if it is a collection containing any unknown elements, deeply.
func (n *LiteralNode) IsUnknown() bool {
return IsUnknown(Variable{
Type: n.Typex,
Value: n.Value,
})
}

View File

@ -3,54 +3,61 @@ package ast
import "fmt" import "fmt"
func VariableListElementTypesAreHomogenous(variableName string, list []Variable) (Type, error) { func VariableListElementTypesAreHomogenous(variableName string, list []Variable) (Type, error) {
listTypes := make(map[Type]struct{}) if len(list) == 0 {
return TypeInvalid, fmt.Errorf("list %q does not have any elements so cannot determine type.", variableName)
}
elemType := TypeUnknown
for _, v := range list { for _, v := range list {
// Allow unknown
if v.Type == TypeUnknown { if v.Type == TypeUnknown {
continue continue
} }
if _, ok := listTypes[v.Type]; ok { if elemType == TypeUnknown {
elemType = v.Type
continue continue
} }
listTypes[v.Type] = struct{}{}
if v.Type != elemType {
return TypeInvalid, fmt.Errorf(
"list %q does not have homogenous types. found %s and then %s",
variableName,
elemType, v.Type,
)
}
elemType = v.Type
} }
if len(listTypes) != 1 && len(list) != 0 { return elemType, nil
return TypeInvalid, fmt.Errorf("list %q does not have homogenous types. found %s", variableName, reportTypes(listTypes))
}
if len(list) > 0 {
return list[0].Type, nil
}
return TypeInvalid, fmt.Errorf("list %q does not have any elements so cannot determine type.", variableName)
} }
func VariableMapValueTypesAreHomogenous(variableName string, vmap map[string]Variable) (Type, error) { func VariableMapValueTypesAreHomogenous(variableName string, vmap map[string]Variable) (Type, error) {
valueTypes := make(map[Type]struct{}) if len(vmap) == 0 {
return TypeInvalid, fmt.Errorf("map %q does not have any elements so cannot determine type.", variableName)
}
elemType := TypeUnknown
for _, v := range vmap { for _, v := range vmap {
// Allow unknown
if v.Type == TypeUnknown { if v.Type == TypeUnknown {
continue continue
} }
if _, ok := valueTypes[v.Type]; ok { if elemType == TypeUnknown {
elemType = v.Type
continue continue
} }
valueTypes[v.Type] = struct{}{} if v.Type != elemType {
return TypeInvalid, fmt.Errorf(
"map %q does not have homogenous types. found %s and then %s",
variableName,
elemType, v.Type,
)
}
elemType = v.Type
} }
if len(valueTypes) != 1 && len(vmap) != 0 { return elemType, nil
return TypeInvalid, fmt.Errorf("map %q does not have homogenous value types. found %s", variableName, reportTypes(valueTypes))
}
// For loop here is an easy way to get a single key, we return immediately.
for _, v := range vmap {
return v.Type, nil
}
// This means the map is empty
return TypeInvalid, fmt.Errorf("map %q does not have any elements so cannot determine type.", variableName)
} }
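To make the new behavior concrete, a quick sketch against the vendored `hil/ast` package; only the `Type` fields matter to the check, so the `Value`s below are placeholders:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hil/ast"
)

func main() {
	// Unknown elements are skipped; the first known type establishes elemType.
	ok := []ast.Variable{
		{Type: ast.TypeUnknown, Value: "?"},
		{Type: ast.TypeString, Value: "a"},
		{Type: ast.TypeString, Value: "b"},
	}
	fmt.Println(ast.VariableListElementTypesAreHomogenous("ok", ok)) // TypeString <nil>

	// A later mismatch now reports both the established and offending types.
	bad := []ast.Variable{
		{Type: ast.TypeString, Value: "a"},
		{Type: ast.TypeInt, Value: 1},
	}
	fmt.Println(ast.VariableListElementTypesAreHomogenous("bad", bad))
}
```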

View File

@ -98,10 +98,6 @@ func (v *TypeCheck) visit(raw ast.Node) ast.Node {
pos.Column, pos.Line, err)
}

if v.StackPeek() == ast.TypeUnknown {
v.err = errExitUnknown
}

return result
}

@ -116,6 +112,14 @@ func (tc *typeCheckArithmetic) TypeCheck(v *TypeCheck) (ast.Node, error) {
exprs[len(tc.n.Exprs)-1-i] = v.StackPop()
}

// If any operand is unknown then our result is automatically unknown
for _, ty := range exprs {
if ty == ast.TypeUnknown {
v.StackPush(ast.TypeUnknown)
return tc.n, nil
}
}

switch tc.n.Op {
case ast.ArithmeticOpLogicalAnd, ast.ArithmeticOpLogicalOr:
return tc.checkLogical(v, exprs)

@ -333,6 +337,11 @@ func (tc *typeCheckCall) TypeCheck(v *TypeCheck) (ast.Node, error) {
continue
}

if args[i] == ast.TypeUnknown {
v.StackPush(ast.TypeUnknown)
return tc.n, nil
}

if args[i] != expected {
cn := v.ImplicitConversion(args[i], expected, tc.n.Args[i])
if cn != nil {

@ -350,6 +359,11 @@ func (tc *typeCheckCall) TypeCheck(v *TypeCheck) (ast.Node, error) {
if function.Variadic && function.VariadicType != ast.TypeAny {
args = args[len(function.ArgTypes):]
for i, t := range args {
if t == ast.TypeUnknown {
v.StackPush(ast.TypeUnknown)
return tc.n, nil
}

if t != function.VariadicType {
realI := i + len(function.ArgTypes)
cn := v.ImplicitConversion(

@ -384,6 +398,11 @@ func (tc *typeCheckConditional) TypeCheck(v *TypeCheck) (ast.Node, error) {
trueType := v.StackPop()
condType := v.StackPop()

if condType == ast.TypeUnknown {
v.StackPush(ast.TypeUnknown)
return tc.n, nil
}

if condType != ast.TypeBool {
cn := v.ImplicitConversion(condType, ast.TypeBool, tc.n.CondExpr)
if cn == nil {

@ -457,6 +476,13 @@ func (tc *typeCheckOutput) TypeCheck(v *TypeCheck) (ast.Node, error) {
types[len(n.Exprs)-1-i] = v.StackPop()
}

for _, ty := range types {
if ty == ast.TypeUnknown {
v.StackPush(ast.TypeUnknown)
return tc.n, nil
}
}

// If there is only one argument and it is a list, we evaluate to a list
if len(types) == 1 {
switch t := types[0]; t {

@ -469,7 +495,14 @@ func (tc *typeCheckOutput) TypeCheck(v *TypeCheck) (ast.Node, error) {
}

// Otherwise, all concat args must be strings, so validate that
resultType := ast.TypeString
for i, t := range types {
if t == ast.TypeUnknown {
resultType = ast.TypeUnknown
continue
}

if t != ast.TypeString {
cn := v.ImplicitConversion(t, ast.TypeString, n.Exprs[i])
if cn != nil {

@ -482,8 +515,8 @@ func (tc *typeCheckOutput) TypeCheck(v *TypeCheck) (ast.Node, error) {
}
}

// This always results in type string, unless there are unknowns
v.StackPush(resultType)

return n, nil
}

@ -509,13 +542,6 @@ func (tc *typeCheckVariableAccess) TypeCheck(v *TypeCheck) (ast.Node, error) {
"unknown variable accessed: %s", tc.n.Name)
}

// Check if the variable contains any unknown types. If so, then
// mark it as unknown.
if ast.IsUnknown(variable) {
v.StackPush(ast.TypeUnknown)
return tc.n, nil
}

// Add the type to the stack
v.StackPush(variable.Type)

@ -530,6 +556,11 @@ func (tc *typeCheckIndex) TypeCheck(v *TypeCheck) (ast.Node, error) {
keyType := v.StackPop()
targetType := v.StackPop()

if keyType == ast.TypeUnknown || targetType == ast.TypeUnknown {
v.StackPush(ast.TypeUnknown)
return tc.n, nil
}

// Ensure we have a VariableAccess as the target
varAccessNode, ok := tc.n.Target.(*ast.VariableAccess)
if !ok {

View File

@ -54,6 +54,14 @@ func Eval(root ast.Node, config *EvalConfig) (EvaluationResult, error) {
return InvalidResult, err
}

// If the result contains any nested unknowns then the result as a whole
// is unknown, so that callers only have to deal with "entirely known"
// or "entirely unknown" as outcomes.
if ast.IsUnknown(ast.Variable{Type: outputType, Value: output}) {
outputType = ast.TypeUnknown
output = UnknownValue
}

switch outputType {
case ast.TypeList:
val, err := VariableToInterface(ast.Variable{

@ -264,6 +272,10 @@ func (v *evalCall) Eval(s ast.Scope, stack *ast.Stack) (interface{}, ast.Type, e
args := make([]interface{}, len(v.Args))
for i, _ := range v.Args {
node := stack.Pop().(*ast.LiteralNode)
if node.IsUnknown() {
// If any arguments are unknown then the result is automatically unknown
return UnknownValue, ast.TypeUnknown, nil
}
args[len(v.Args)-1-i] = node.Value
}

@ -286,6 +298,11 @@ func (v *evalConditional) Eval(s ast.Scope, stack *ast.Stack) (interface{}, ast.
trueLit := stack.Pop().(*ast.LiteralNode)
condLit := stack.Pop().(*ast.LiteralNode)

if condLit.IsUnknown() {
// If our conditional is unknown then our result is also unknown
return UnknownValue, ast.TypeUnknown, nil
}

if condLit.Value.(bool) {
return trueLit.Value, trueLit.Typex, nil
} else {

@ -301,6 +318,17 @@ func (v *evalIndex) Eval(scope ast.Scope, stack *ast.Stack) (interface{}, ast.Ty
variableName := v.Index.Target.(*ast.VariableAccess).Name

if key.IsUnknown() {
// If our key is unknown then our result is also unknown
return UnknownValue, ast.TypeUnknown, nil
}

// For target, we'll accept collections containing unknown values but
// we still need to catch when the collection itself is unknown, shallowly.
if target.Typex == ast.TypeUnknown {
return UnknownValue, ast.TypeUnknown, nil
}

switch target.Typex {
case ast.TypeList:
return v.evalListIndex(variableName, target.Value, key.Value)

@ -377,8 +405,22 @@ func (v *evalOutput) Eval(s ast.Scope, stack *ast.Stack) (interface{}, ast.Type,
// The expressions should all be on the stack in reverse
// order. So pop them off, reverse their order, and concatenate.
nodes := make([]*ast.LiteralNode, 0, len(v.Exprs))
haveUnknown := false
for range v.Exprs {
n := stack.Pop().(*ast.LiteralNode)
nodes = append(nodes, n)

// If we have any unknowns then the whole result is unknown
// (we must deal with this first, because the type checker can
// skip type conversions in the presence of unknowns, and thus
// any of our other nodes may be incorrectly typed.)
if n.IsUnknown() {
haveUnknown = true
}
}

if haveUnknown {
return UnknownValue, ast.TypeUnknown, nil
}

// Special case the single list and map

@ -396,6 +438,14 @@ func (v *evalOutput) Eval(s ast.Scope, stack *ast.Stack) (interface{}, ast.Type,
// Otherwise concatenate the strings
var buf bytes.Buffer
for i := len(nodes) - 1; i >= 0; i-- {
if nodes[i].Typex != ast.TypeString {
return nil, ast.TypeInvalid, fmt.Errorf(
"invalid output with %s value at index %d: %#v",
nodes[i].Typex,
i,
nodes[i].Value,
)
}
buf.WriteString(nodes[i].Value.(string))
}

@ -418,11 +468,5 @@ func (v *evalVariableAccess) Eval(scope ast.Scope, _ *ast.Stack) (interface{}, a
"unknown variable accessed: %s", v.Name)
}

// Check if the variable contains any unknown types. If so, then
// mark it as unknown and return that type.
if ast.IsUnknown(variable) {
return nil, ast.TypeUnknown, nil
}

return variable.Value, variable.Type, nil
}

vendor/vendor.json
View File

@ -2072,94 +2072,94 @@
"revisionTime": "2016-08-13T22:13:03Z" "revisionTime": "2016-08-13T22:13:03Z"
}, },
{ {
"checksumSHA1": "Ok3Csn6Voou7pQT6Dv2mkwpqFtw=", "checksumSHA1": "o3XZZdOnSnwQSpYw215QV75ZDeI=",
"path": "github.com/hashicorp/hcl", "path": "github.com/hashicorp/hcl",
"revision": "630949a3c5fa3c613328e1b8256052cbc2327c9b", "revision": "a4b07c25de5ff55ad3b8936cea69a79a3d95a855",
"revisionTime": "2017-02-17T16:47:38Z" "revisionTime": "2017-05-04T19:02:34Z"
}, },
{ {
"checksumSHA1": "XQmjDva9JCGGkIecOgwtBEMCJhU=", "checksumSHA1": "XQmjDva9JCGGkIecOgwtBEMCJhU=",
"path": "github.com/hashicorp/hcl/hcl/ast", "path": "github.com/hashicorp/hcl/hcl/ast",
"revision": "630949a3c5fa3c613328e1b8256052cbc2327c9b", "revision": "a4b07c25de5ff55ad3b8936cea69a79a3d95a855",
"revisionTime": "2017-02-17T16:47:38Z" "revisionTime": "2017-05-04T19:02:34Z"
}, },
{ {
"checksumSHA1": "DaQmLi48oUAwctWcX6A6DNN61UY=", "checksumSHA1": "DaQmLi48oUAwctWcX6A6DNN61UY=",
"path": "github.com/hashicorp/hcl/hcl/fmtcmd", "path": "github.com/hashicorp/hcl/hcl/fmtcmd",
"revision": "630949a3c5fa3c613328e1b8256052cbc2327c9b", "revision": "a4b07c25de5ff55ad3b8936cea69a79a3d95a855",
"revisionTime": "2017-02-17T16:47:38Z" "revisionTime": "2017-05-04T19:02:34Z"
}, },
{ {
"checksumSHA1": "MGYzZActhzSs9AnCx3wrEYVbKFg=", "checksumSHA1": "teokXoyRXEJ0vZHOWBD11l5YFNI=",
"path": "github.com/hashicorp/hcl/hcl/parser", "path": "github.com/hashicorp/hcl/hcl/parser",
"revision": "630949a3c5fa3c613328e1b8256052cbc2327c9b", "revision": "a4b07c25de5ff55ad3b8936cea69a79a3d95a855",
"revisionTime": "2017-02-17T16:47:38Z" "revisionTime": "2017-05-04T19:02:34Z"
}, },
{ {
"checksumSHA1": "gKCHLG3j2CNs2iADkvSKSNkni+8=", "checksumSHA1": "WR1BjzDKgv6uE+3ShcDTYz0Gl6A=",
"path": "github.com/hashicorp/hcl/hcl/printer", "path": "github.com/hashicorp/hcl/hcl/printer",
"revision": "630949a3c5fa3c613328e1b8256052cbc2327c9b", "revision": "a4b07c25de5ff55ad3b8936cea69a79a3d95a855",
"revisionTime": "2017-02-17T16:47:38Z" "revisionTime": "2017-05-04T19:02:34Z"
}, },
{ {
"checksumSHA1": "z6wdP4mRw4GVjShkNHDaOWkbxS0=", "checksumSHA1": "z6wdP4mRw4GVjShkNHDaOWkbxS0=",
"path": "github.com/hashicorp/hcl/hcl/scanner", "path": "github.com/hashicorp/hcl/hcl/scanner",
"revision": "630949a3c5fa3c613328e1b8256052cbc2327c9b", "revision": "a4b07c25de5ff55ad3b8936cea69a79a3d95a855",
"revisionTime": "2017-02-17T16:47:38Z" "revisionTime": "2017-05-04T19:02:34Z"
}, },
{ {
"checksumSHA1": "oS3SCN9Wd6D8/LG0Yx1fu84a7gI=", "checksumSHA1": "oS3SCN9Wd6D8/LG0Yx1fu84a7gI=",
"path": "github.com/hashicorp/hcl/hcl/strconv", "path": "github.com/hashicorp/hcl/hcl/strconv",
"revision": "630949a3c5fa3c613328e1b8256052cbc2327c9b", "revision": "a4b07c25de5ff55ad3b8936cea69a79a3d95a855",
"revisionTime": "2017-02-17T16:47:38Z" "revisionTime": "2017-05-04T19:02:34Z"
}, },
{ {
"checksumSHA1": "c6yprzj06ASwCo18TtbbNNBHljA=", "checksumSHA1": "c6yprzj06ASwCo18TtbbNNBHljA=",
"path": "github.com/hashicorp/hcl/hcl/token", "path": "github.com/hashicorp/hcl/hcl/token",
"revision": "630949a3c5fa3c613328e1b8256052cbc2327c9b", "revision": "a4b07c25de5ff55ad3b8936cea69a79a3d95a855",
"revisionTime": "2017-02-17T16:47:38Z" "revisionTime": "2017-05-04T19:02:34Z"
}, },
{ {
"checksumSHA1": "138aCV5n8n7tkGYMsMVQQnnLq+0=", "checksumSHA1": "PwlfXt7mFS8UYzWxOK5DOq0yxS0=",
"path": "github.com/hashicorp/hcl/json/parser", "path": "github.com/hashicorp/hcl/json/parser",
"revision": "630949a3c5fa3c613328e1b8256052cbc2327c9b", "revision": "a4b07c25de5ff55ad3b8936cea69a79a3d95a855",
"revisionTime": "2017-02-17T16:47:38Z" "revisionTime": "2017-05-04T19:02:34Z"
}, },
{ {
"checksumSHA1": "YdvFsNOMSWMLnY6fcliWQa0O5Fw=", "checksumSHA1": "YdvFsNOMSWMLnY6fcliWQa0O5Fw=",
"path": "github.com/hashicorp/hcl/json/scanner", "path": "github.com/hashicorp/hcl/json/scanner",
"revision": "630949a3c5fa3c613328e1b8256052cbc2327c9b", "revision": "a4b07c25de5ff55ad3b8936cea69a79a3d95a855",
"revisionTime": "2017-02-17T16:47:38Z" "revisionTime": "2017-05-04T19:02:34Z"
}, },
{ {
"checksumSHA1": "fNlXQCQEnb+B3k5UDL/r15xtSJY=", "checksumSHA1": "fNlXQCQEnb+B3k5UDL/r15xtSJY=",
"path": "github.com/hashicorp/hcl/json/token", "path": "github.com/hashicorp/hcl/json/token",
"revision": "630949a3c5fa3c613328e1b8256052cbc2327c9b", "revision": "a4b07c25de5ff55ad3b8936cea69a79a3d95a855",
"revisionTime": "2017-02-17T16:47:38Z" "revisionTime": "2017-05-04T19:02:34Z"
}, },
{ {
"checksumSHA1": "2Nrl/YKrmowkRgCDLhA6UTFgYEY=", "checksumSHA1": "zz3/f3YpHHBN78uLhnhLBW2aF8o=",
"path": "github.com/hashicorp/hil", "path": "github.com/hashicorp/hil",
"revision": "5b8d13c8c5c2753e109fab25392a1dbfa2db93d2", "revision": "747a6e1523d6808f91144df070435b16865cd333",
"revisionTime": "2016-12-21T19:20:42Z" "revisionTime": "2017-05-01T20:07:50Z"
}, },
{ {
"checksumSHA1": "oZ2a2x9qyHqvqJdv/Du3LGeaFdA=", "checksumSHA1": "0S0KeBcfqVFYBPeZkuJ4fhQ5mCA=",
"path": "github.com/hashicorp/hil/ast", "path": "github.com/hashicorp/hil/ast",
"revision": "5b8d13c8c5c2753e109fab25392a1dbfa2db93d2", "revision": "747a6e1523d6808f91144df070435b16865cd333",
"revisionTime": "2016-12-21T19:20:42Z" "revisionTime": "2017-05-01T20:07:50Z"
}, },
{ {
"checksumSHA1": "P5PZ3k7SmqWmxgJ8Q0gLzeNpGhE=", "checksumSHA1": "P5PZ3k7SmqWmxgJ8Q0gLzeNpGhE=",
"path": "github.com/hashicorp/hil/parser", "path": "github.com/hashicorp/hil/parser",
"revision": "5b8d13c8c5c2753e109fab25392a1dbfa2db93d2", "revision": "747a6e1523d6808f91144df070435b16865cd333",
"revisionTime": "2016-12-21T19:20:42Z" "revisionTime": "2017-05-01T20:07:50Z"
}, },
{ {
"checksumSHA1": "DC1k5kOua4oFqmo+JRt0YzfP44o=", "checksumSHA1": "DC1k5kOua4oFqmo+JRt0YzfP44o=",
"path": "github.com/hashicorp/hil/scanner", "path": "github.com/hashicorp/hil/scanner",
"revision": "5b8d13c8c5c2753e109fab25392a1dbfa2db93d2", "revision": "747a6e1523d6808f91144df070435b16865cd333",
"revisionTime": "2016-12-21T19:20:42Z" "revisionTime": "2017-05-01T20:07:50Z"
}, },
{ {
"checksumSHA1": "vt+P9D2yWDO3gdvdgCzwqunlhxU=", "checksumSHA1": "vt+P9D2yWDO3gdvdgCzwqunlhxU=",
@ -62,3 +62,11 @@ The following arguments are supported:
* `input` - (Optional) Valid JSON text passed to the target.
* `input_path` - (Optional) The value of the [JSONPath](http://goessner.net/articles/JsonPath/)
that is used for extracting part of the matched event when passing it to the target.
* `role_arn` - (Optional) The Amazon Resource Name (ARN) of the IAM role to be used for this target when the rule is triggered.
* `run_command_targets` - (Optional) Parameters used when you are using the rule to invoke Amazon EC2 Run Command. Documented below. A maximum of 5 are allowed.
`run_command_targets` supports the following:
* `key` - (Required) Can be either `tag:tag-key` or `InstanceIds`.
* `values` - (Required) If Key is `tag:tag-key`, Values is a list of tag values. If Key is `InstanceIds`, Values is a list of Amazon EC2 instance IDs.
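As a sketch, a target that invokes an SSM document via Run Command might look like the following; the rule and role referenced here (`aws_cloudwatch_event_rule.example`, `aws_iam_role.example`) are assumed to be defined elsewhere:

```hcl
resource "aws_cloudwatch_event_target" "run_command" {
  rule     = "${aws_cloudwatch_event_rule.example.name}"
  arn      = "arn:aws:ssm:us-east-1:123456789012:document/AWS-RunShellScript"
  role_arn = "${aws_iam_role.example.arn}"

  # Run the document on instances tagged Environment=production.
  run_command_targets {
    key    = "tag:Environment"
    values = ["production"]
  }
}
```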
@ -108,7 +108,7 @@ Provides the rule owner (AWS or customer), the rule identifier, and the notifica
For custom rules, the identifier is the ARN of the rule's AWS Lambda function, such as `arn:aws:lambda:us-east-1:123456789012:function:custom_rule_name`.
* `source_detail` - (Optional) Provides the source and type of the event that causes AWS Config to evaluate your AWS resources. Only valid if `owner` is `CUSTOM_LAMBDA`.
* `event_source` - (Optional) The source of the event, such as an AWS service, that triggers AWS Config
to evaluate your AWS resources. This defaults to `aws.config` and is the only valid value.
* `maximum_execution_frequency` - (Optional) The frequency that you want AWS Config to run evaluations for a rule that
is triggered periodically. If specified, requires `message_type` to be `ScheduledNotification`.
* `message_type` - (Optional) The type of notification that triggers AWS Config to run an evaluation for a rule. You can specify the following notification types:
@ -38,10 +38,10 @@ resource "aws_db_option_group" "bar" {
The following arguments are supported:

* `name` - (Optional, Forces new resource) The name of the option group. If omitted, Terraform will assign a random, unique name. This is converted to lowercase, as it is stored in AWS.
* `name_prefix` - (Optional, Forces new resource) Creates a unique name beginning with the specified prefix. Conflicts with `name`. This is converted to lowercase, as it is stored in AWS.
* `option_group_description` - (Optional) The description of the option group. Defaults to "Managed by Terraform".
* `engine_name` - (Required) Specifies the name of the engine that this option group should be associated with.
* `major_engine_version` - (Required) Specifies the major version of the engine that this option group should be associated with.
* `option` - (Optional) A list of Options to apply.
* `tags` - (Optional) A mapping of tags to assign to the resource.
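For example, a mixed-case name is stored (and must subsequently be referenced) in lowercase. A short sketch, with illustrative engine values:

```hcl
resource "aws_db_option_group" "example" {
  name                 = "MyOptionGroup" # stored in AWS as "myoptiongroup"
  engine_name          = "mysql"
  major_engine_version = "5.6"
}
```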
@ -46,6 +46,7 @@ The following arguments are supported:
* `path` - (Optional) The path to the role.
See [IAM Identifiers](https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) for more information.
* `description` - (Optional) The description of the role.
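A minimal sketch of a role using the new `description` argument (the role name and trust policy below are illustrative):

```hcl
resource "aws_iam_role" "example" {
  name        = "example-role"
  description = "Role assumed by EC2 instances of the example service"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}
```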
## Attributes Reference
@ -55,6 +56,7 @@ The following attributes are exported:
* `create_date` - The creation date of the IAM role.
* `unique_id` - The stable and unique string identifying the role.
* `name` - The name of the role.
* `description` - The description of the role.
## Example of Using Data Source for Assume Role Policy
@ -141,7 +141,18 @@ Weighted routing policies support the following:
## Import

Route53 Records can be imported using the ID of the record. The ID is made up of `ZONEID_RECORDNAME_TYPE_SET-IDENTIFIER`, e.g.
```
Z4KAPRWWNC7JR_dev.example.com_NS_dev
```
In this example, `Z4KAPRWWNC7JR` is the ZoneID, `dev.example.com` is the Record Name, `NS` is the Type and `dev` is the Set Identifier.
Only the Set Identifier is optional in the ID.

To import the record with the ID above:

```
$ terraform import aws_route53_record.myrecord Z4KAPRWWNC7JR_dev.example.com_NS_dev
```
@ -45,7 +45,7 @@ resource "aws_security_group" "allow_all" {
Basic usage with tags:

```hcl
resource "aws_security_group" "allow_all" {
  name        = "allow_all"
  description = "Allow all inbound traffic"
@ -116,12 +116,14 @@ specifically re-create it if you desire that rule. We feel this leads to fewer
surprises in terms of controlling your egress rules. If you desire this rule to
be in place, you can use this `egress` block:
```hcl
egress {
  from_port   = 0
  to_port     = 0
  protocol    = "-1"
  cidr_blocks = ["0.0.0.0/0"]
}
```
## Usage with prefix list IDs
@ -129,7 +131,7 @@ Prefix list IDs are managed by AWS internally. Prefix list IDs
are associated with a prefix list name, or service name, that is linked to a specific region.
Prefix list IDs are exported on VPC Endpoints, so you can use this format:

```hcl
# ...
egress {
  from_port = 0
@ -0,0 +1,38 @@
---
layout: "aws"
page_title: "AWS: aws_ssm_maintenance_window"
sidebar_current: "docs-aws-resource-ssm-maintenance-window"
description: |-
Provides an SSM Maintenance Window resource
---
# aws_ssm_maintenance_window
Provides an SSM Maintenance Window resource
## Example Usage
```hcl
resource "aws_ssm_maintenance_window" "production" {
name = "maintenance-window-application"
schedule = "cron(0 16 ? * TUE *)"
duration = 3
cutoff = 1
}
```
## Argument Reference
The following arguments are supported:
* `name` - (Required) The name of the maintenance window.
* `schedule` - (Required) The schedule of the Maintenance Window in the form of a [cron](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-maintenance-cron.html) or rate expression.
* `cutoff` - (Required) The number of hours before the end of the Maintenance Window that Systems Manager stops scheduling new tasks for execution.
* `duration` - (Required) The duration of the Maintenance Window in hours.
* `allow_unregistered_targets` - (Optional) Whether targets must be registered with the Maintenance Window before tasks can be defined for those targets.
## Attributes Reference
The following attributes are exported:
* `id` - The ID of the maintenance window.
@ -0,0 +1,46 @@
---
layout: "aws"
page_title: "AWS: aws_ssm_maintenance_window_target"
sidebar_current: "docs-aws-resource-ssm-maintenance-window-target"
description: |-
Provides an SSM Maintenance Window Target resource
---
# aws_ssm_maintenance_window_target
Provides an SSM Maintenance Window Target resource
## Example Usage
```hcl
resource "aws_ssm_maintenance_window" "window" {
name = "maintenance-window-webapp"
schedule = "cron(0 16 ? * TUE *)"
duration = 3
cutoff = 1
}
resource "aws_ssm_maintenance_window_target" "target1" {
window_id = "${aws_ssm_maintenance_window.window.id}"
resource_type = "INSTANCE"
targets {
key = "tag:Name"
values = ["acceptance_test"]
}
}
```
## Argument Reference
The following arguments are supported:
* `window_id` - (Required) The ID of the maintenance window to register the target with.
* `resource_type` - (Required) The type of target being registered with the Maintenance Window. The only supported value is `INSTANCE`.
* `targets` - (Required) The targets (either instances or tags). Instances are specified using `Key=instanceids,Values=instanceid1,instanceid2`. Tags are specified using `Key=tag name,Values=tag value`.
* `owner_information` - (Optional) User-provided value that will be included in any CloudWatch events raised while running tasks for these targets in this Maintenance Window.
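For instance-based registration, as opposed to the tag-based example above, a sketch assuming a hypothetical `aws_instance.web` defined elsewhere:

```hcl
resource "aws_ssm_maintenance_window_target" "by_instance" {
  window_id     = "${aws_ssm_maintenance_window.window.id}"
  resource_type = "INSTANCE"

  # Register a specific instance rather than a tag query.
  targets {
    key    = "InstanceIds"
    values = ["${aws_instance.web.id}"]
  }
}
```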
## Attributes Reference
The following attributes are exported:
* `id` - The ID of the maintenance window target.
@ -0,0 +1,68 @@
---
layout: "aws"
page_title: "AWS: aws_ssm_maintenance_window_task"
sidebar_current: "docs-aws-resource-ssm-maintenance-window-task"
description: |-
Provides an SSM Maintenance Window Task resource
---
# aws_ssm_maintenance_window_task
Provides an SSM Maintenance Window Task resource
## Example Usage
```hcl
resource "aws_ssm_maintenance_window" "window" {
name     = "maintenance-window-application"
schedule = "cron(0 16 ? * TUE *)"
duration = 3
cutoff = 1
}
resource "aws_ssm_maintenance_window_task" "target" {
window_id = "${aws_ssm_maintenance_window.window.id}"
task_type = "RUN_COMMAND"
task_arn = "AWS-RunShellScript"
priority = 1
service_role_arn = "arn:aws:iam::187416307283:role/service-role/AWS_Events_Invoke_Run_Command_112316643"
max_concurrency = "2"
max_errors = "1"
targets {
key = "InstanceIds"
values = ["${aws_instance.instance.id}"]
}
}
resource "aws_instance" "instance" {
ami = "ami-4fccb37f"
instance_type = "m1.small"
}
```
## Argument Reference
The following arguments are supported:
* `window_id` - (Required) The ID of the maintenance window to register the task with.
* `max_concurrency` - (Required) The maximum number of targets this task can be run for in parallel.
* `max_errors` - (Required) The maximum number of errors allowed before this task stops being scheduled.
* `task_type` - (Required) The type of task being registered. The only allowed value is `RUN_COMMAND`.
* `task_arn` - (Required) The ARN of the task to execute.
* `service_role_arn` - (Required) The role that should be assumed when executing the task.
* `targets` - (Required) The targets (either instances or tags). Instances are specified using `Key=instanceids,Values=instanceid1,instanceid2`. Tags are specified using `Key=tag name,Values=tag value`.
* `priority` - (Optional) The priority of the task in the Maintenance Window, the lower the number the higher the priority. Tasks in a Maintenance Window are scheduled in priority order with tasks that have the same priority scheduled in parallel.
* `logging_info` - (Optional) A structure containing information about an Amazon S3 bucket to write instance-level logs to. Documented below.
`logging_info` supports the following:
* `s3_bucket_name` - (Required) The name of the S3 bucket to write instance-level logs to.
* `s3_region` - (Required) The region of the S3 bucket.
* `s3_bucket_prefix` - (Optional) A key prefix for the log objects written to the bucket.
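A sketch of the `logging_info` block; the bucket name and prefix here are assumptions:

```hcl
logging_info {
  s3_bucket_name   = "example-maintenance-logs"
  s3_region        = "us-east-1"
  s3_bucket_prefix = "run-command"
}
```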
## Attributes Reference
The following attributes are exported:
* `id` - The ID of the maintenance window task.
@ -0,0 +1,59 @@
---
layout: "gitlab"
page_title: "GitLab: gitlab_project_hook"
sidebar_current: "docs-gitlab-resource-project-hook"
description: |-
Creates and manages hooks for GitLab projects
---
# gitlab\_project\_hook
This resource allows you to create and manage hooks for your GitLab projects.
For further information on hooks, consult the [gitlab
documentation](https://docs.gitlab.com/ce/user/project/integrations/webhooks.html).
## Example Usage
```hcl
resource "gitlab_project_hook" "example" {
project = "example/hooked"
url = "https://example.com/hook/example"
merge_requests_events = true
}
```
## Argument Reference
The following arguments are supported:
* `project` - (Required) The name or id of the project to add the hook to.
* `url` - (Required) The url of the hook to invoke.
* `token` - (Optional) A token to present when invoking the hook.
* `enable_ssl_verification` - (Optional) Enable SSL verification when invoking the hook.
* `push_events` - (Optional) Invoke the hook for push events.
* `issues_events` - (Optional) Invoke the hook for issues events.
* `merge_requests_events` - (Optional) Invoke the hook for merge requests.
* `tag_push_events` - (Optional) Invoke the hook for tag push events.
* `note_events` - (Optional) Invoke the hook for note events.
* `build_events` - (Optional) Invoke the hook for build events.
* `pipeline_events` - (Optional) Invoke the hook for pipeline events.
* `wiki_page_events` - (Optional) Invoke the hook for wiki page events.
## Attributes Reference
The resource exports the following attributes:
* `id` - The unique id assigned to the hook by the GitLab server.
@ -33,6 +33,22 @@ resource "kubernetes_secret" "example" {
}
```
## Example Usage (Docker config)
```hcl
resource "kubernetes_secret" "example" {
metadata {
name = "docker-cfg"
}
data {
".dockercfg" = "${file("${path.module}/.docker/config.json")}"
}
type = "kubernetes.io/dockercfg"
}
```
## Argument Reference

The following arguments are supported:
@ -1,12 +1,12 @@
---
layout: "openstack"
page_title: "OpenStack: openstack_lb_listener_v2"
sidebar_current: "docs-openstack-resource-lb-listener-v2"
description: |-
  Manages a V2 listener resource within OpenStack.
---

# openstack\_lb\_listener\_v2

Manages a V2 listener resource within OpenStack.
@ -1,12 +1,12 @@
---
layout: "openstack"
page_title: "OpenStack: openstack_lb_member_v2"
sidebar_current: "docs-openstack-resource-lb-member-v2"
description: |-
  Manages a V2 member resource within OpenStack.
---

# openstack\_lb\_member\_v2

Manages a V2 member resource within OpenStack.
@ -1,12 +1,12 @@
---
layout: "openstack"
page_title: "OpenStack: openstack_lb_monitor_v2"
sidebar_current: "docs-openstack-resource-lb-monitor-v2"
description: |-
  Manages a V2 monitor resource within OpenStack.
---

# openstack\_lb\_monitor\_v2

Manages a V2 monitor resource within OpenStack.
@ -1,12 +1,12 @@
---
layout: "openstack"
page_title: "OpenStack: openstack_lb_pool_v2"
sidebar_current: "docs-openstack-resource-lb-pool-v2"
description: |-
  Manages a V2 pool resource within OpenStack.
---

# openstack\_lb\_pool\_v2

Manages a V2 pool resource within OpenStack.
@ -1249,6 +1249,18 @@
<a href="/docs/providers/aws/r/ssm_document.html">aws_ssm_document</a> <a href="/docs/providers/aws/r/ssm_document.html">aws_ssm_document</a>
</li> </li>
<li<%= sidebar_current("docs-aws-resource-ssm-maintenance-window") %>>
<a href="/docs/providers/aws/r/ssm_maintenance_window.html">aws_ssm_maintenance_window</a>
</li>
<li<%= sidebar_current("docs-aws-resource-ssm-maintenance-window-target") %>>
<a href="/docs/providers/aws/r/ssm_maintenance_window_target.html">aws_ssm_maintenance_window_target</a>
</li>
<li<%= sidebar_current("docs-aws-resource-ssm-maintenance-window-task") %>>
<a href="/docs/providers/aws/r/ssm_maintenance_window_task.html">aws_ssm_maintenance_window_task</a>
</li>
</ul>
</li>