Merge branch 'master' into paddy_gcp_detach_deleted_disks

This commit is contained in:
Paddy 2017-05-23 14:45:27 -07:00 committed by GitHub
commit 7976a03a2d
133 changed files with 4334 additions and 605 deletions

View File

@@ -25,7 +25,9 @@ install:
   - bash scripts/gogetcookie.sh
   - go get github.com/kardianos/govendor
 script:
-  - make vet vendor-status test
+  - make vendor-status
+  - make test
+  - make vet
   - GOOS=windows go build
 branches:
   only:

View File

@@ -2,41 +2,57 @@
 BACKWARDS INCOMPATIBILITIES / NOTES:
+* When assigning a "splat variable" to a resource attribute, like `foo = "${some_resource.foo.*.baz}"`, it is no longer required (nor recommended) to wrap the string in list brackets. The extra brackets continue to be allowed for resource attributes for compatibility, but this will cease to be allowed in a future version. [GH-14737]
+* provider/aws: Allow lightsail resources to work in other regions. Previously Terraform would automatically configure lightsail resources to run solely in `us-east-1`. This means that if a provider was initialized with a different region than `us-east-1`, users will need to create a provider alias to maintain their lightsail resources in `us-east-1`. [GH-14685]
 * provider/aws: Users of `aws_cloudfront_distribution` `default_cache_behavior` will notice that cookies is now a required value - even if that value is none [GH-12628]
 * provider/google: Users of `google_compute_health_check` who were not setting a value for the `host` property of `http_health_check` or `https_health_check` previously had a faulty default value. This has been fixed and will show as a change in terraform plan/apply. [GH-14441]
 
 FEATURES:
 
-* **New Provider:** `OVH` [GH-12669]
+* **New Provider:** `ovh` [GH-12669]
 * **New Resource:** `aws_default_subnet` [GH-14476]
 * **New Resource:** `aws_default_vpc` [GH-11710]
 * **New Resource:** `aws_default_vpc_dhcp_options` [GH-14475]
 * **New Resource:** `aws_devicefarm_project` [GH-14288]
-* **New Resource:** `aws_wafregion_ipset` [GH-13705]
-* **New Resource:** `aws_wafregion_byte_match_set` [GH-13705]
+* **New Resource:** `aws_wafregional_ipset` [GH-13705]
+* **New Resource:** `aws_wafregional_byte_match_set` [GH-13705]
 * **New Resource:** `azurerm_express_route_circuit` [GH-14265]
 * **New Resource:** `kubernetes_service` [GH-14554]
+* **New Resource:** `openstack_dns_zone_v2` [GH-14721]
 * **New Data Source:** `aws_db_snapshot` [GH-10291]
+* **New Data Source:** `aws_kms_ciphertext` [GH-14691]
 * **New Data Source:** `github_user` [GH-14570]
+* **New Data Source:** `github_team` [GH-14614]
 * **New Interpolation Function:** `pow` [GH-14598]
 
 IMPROVEMENTS:
 
+* core: After `apply`, if the state cannot be persisted to remote for some reason then write out a local state file for recovery [GH-14423]
+* core: It's no longer required to surround an attribute value that is just a "splat" variable with a redundant set of array brackets. [GH-14737]
 * core/provider-split: Split out the Oracle OPC provider to new structure [GH-14362]
 * provider/aws: Show state reason when EC2 instance fails to launch [GH-14479]
 * provider/aws: Show last scaling activity when ASG creation/update fails [GH-14480]
 * provider/aws: Add `tags` (list of maps) for `aws_autoscaling_group` [GH-13574]
 * provider/aws: Support filtering in ASG data source [GH-14501]
+* provider/aws: Add ability to 'terraform import' aws_kms_alias resources [GH-14679]
+* provider/aws: Allow lightsail resources to work in other regions [GH-14685]
+* provider/aws: Configurable timeouts for EC2 instance + spot instance [GH-14711]
+* provider/aws: Add ability to define timeouts for DMS replication instance [GH-14729]
 * provider/azurerm: Virtual Machine Scale Sets with managed disk support [GH-13717]
 * provider/azurerm: Virtual Machine Scale Sets with single placement option support [GH-14510]
 * provider/azurerm: Adding support for VMSS Data Disks using Managed Disk feature [GH-14608]
+* provider/azurerm: Adding support for 4TB disks [GH-14688]
 * provider/datadog: Add last aggregator to datadog_timeboard resource [GH-14391]
 * provider/datadog: Added new evaluation_delay parameter [GH-14433]
 * provider/docker: Allow Windows Docker containers to map volumes [GH-13584]
+* provider/docker: Add `network_alias` to `docker_container` resource [GH-14710]
 * provider/fastly: Mark the `s3_access_key`, `s3_secret_key`, & `secret_key` fields as sensitive [GH-14634]
 * provider/google: Add a `url` attribute to `google_storage_bucket` [GH-14393]
 * provider/google: Make google resource storage bucket importable [GH-14455]
 * provider/google: Add support for privateIpGoogleAccess on subnetworks [GH-14234]
+* provider/google: Add import support to `google_sql_user` [GH-14457]
+* provider/google: add failover parameter to `google_sql_database_instance` [GH-14336]
+* provider/google: resource_compute_disks can now reference snapshots using the snapshot URL [GH-14774]
 * provider/heroku: Add import support for `heroku_pipeline` resource [GH-14486]
 * provider/heroku: Add import support for `heroku_pipeline_coupling` resource [GH-14495]
 * provider/openstack: Add support for all protocols in Security Group Rules [GH-14307]
@@ -50,10 +66,18 @@ BUG FIXES:
 * core: Fixed 0.9.5 regression with the conditional operator `.. ? .. : ..` failing to type check with unknown/computed values [GH-14454]
 * core: Fixed 0.9 regression causing issues during refresh when adding new data resource instances using `count` [GH-14098]
 * core: Fixed crasher when populating a "splat variable" from an empty (nil) module state. [GH-14526]
+* core: Fix bad Sprintf in backend migration message [GH-14601]
+* core: Addressed 0.9.5 issue with passing partially-unknown splat results through module variables, by removing the requirement to pass a redundant list level. [GH-14737]
+* provider/aws: Allow updating constraints in WAF SizeConstraintSet + no constraints [GH-14661]
+* provider/aws: Allow updating tuples in WAF ByteMatchSet + no tuples [GH-14071]
+* provider/aws: Allow updating tuples in WAF SQLInjectionMatchSet + no tuples [GH-14667]
+* provider/aws: Allow updating tuples in WAF XssMatchSet + no tuples [GH-14671]
 * provider/aws: Increase EIP update timeout [GH-14381]
-* provider/aws: Increase timeout for creating security group [GH-14380]
+* provider/aws: Increase timeout for creating security group [GH-14380] [GH-14724]
 * provider/aws: Increase timeout for (dis)associating IPv6 addr to/from subnet [GH-14401]
 * provider/aws: Increase timeout for retrying creation of IAM server cert [GH-14609]
+* provider/aws: Increase timeout for deleting IGW [GH-14705]
+* provider/aws: Increase timeout for retrying creation of CW log subs [GH-14722]
 * provider/aws: Using the new time schema helper for RDS Instance lifecycle mgmt [GH-14369]
 * provider/aws: Using the timeout schema helper to make alb timeout configurable [GH-14375]
 * provider/aws: Refresh from state when CodePipeline Not Found [GH-14431]
@@ -68,15 +92,28 @@ BUG FIXES:
 * provider/aws: Handling data migration in RDS snapshot restoring [GH-14622]
 * provider/aws: Mark cookies in `default_cache_behavior` of cloudfront_distribution as required [GH-12628]
 * provider/aws: Fall back to old tagging mechanism for AWS gov and AWS China [GH-14627]
+* provider/aws: Change AWS ssm_maintenance_window Read func [GH-14665]
+* provider/aws: Increase timeout for creation of route_table [GH-14701]
+* provider/aws: Retry ElastiCache cluster deletion when it's snapshotting [GH-14700]
+* provider/aws: Retry ECS service update on InvalidParameterException [GH-14708]
+* provider/aws: Retry IAM Role deletion on DeleteConflict [GH-14707]
+* provider/aws: Do not dereference source_dest_check in aws_instance [GH-14723]
+* provider/aws: Add validation function for IAM Policies [GH-14669]
+* provider/aws: Fix panic on instance shutdown [GH-14727]
+* provider/aws: Handle migration when restoring db cluster from snapshot [GH-14766]
+* provider/aws: Provide ability to enable snapshotting on ElastiCache RG [GH-14757]
 * provider/cloudstack: `cloudstack_firewall` panicked when used with older (< v4.6) CloudStack versions [GH-14044]
 * provider/datadog: Allowed method on aggregator is `avg`, not `average` [GH-14414]
 * provider/digitalocean: Fix parsing of digitalocean dns records [GH-14215]
 * provider/github: Log HTTP requests and responses in DEBUG mode [GH-14363]
+* provider/github: Check for potentially nil response from GitHub API client [GH-14683]
 * provider/google: Fix health check http/https defaults [GH-14441]
 * provider/google: Fix issue with GCP Cloud SQL Instance `disk_autoresize` [GH-14582]
+* provider/google: Fix crash creating Google Cloud SQL 2nd Generation replication instance [GH-14373]
 * provider/heroku: Fix issue with setting correct CName in heroku_domain [GH-14443]
 * provider/opc: Correctly export `ip_address` in IP Addr Reservation [GH-14543]
 * provider/openstack: Handle Deleted Resources in Floating IP Association [GH-14533]
+* provider/openstack: Catch error during instance network parsing [GH-14704]
 * provider/vault: Prevent panic when no secret found [GH-14435]
 
 ## 0.9.5 (May 11, 2017)
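
Editor's note: the splat-variable change called out at the top of this release is purely syntactic; a minimal before/after sketch in Terraform configuration, reusing the changelog's own illustrative resource and attribute names:

# Before 0.9.6, a splat assigned to a string attribute had to be wrapped in list brackets:
#   foo = ["${some_resource.foo.*.baz}"]
# The brackets are now redundant; they remain accepted for compatibility but are deprecated:
foo = "${some_resource.foo.*.baz}"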

View File

@ -1,7 +1,9 @@
package local package local
import ( import (
"bytes"
"context" "context"
"errors"
"fmt" "fmt"
"log" "log"
"strings" "strings"
@@ -137,11 +139,11 @@ func (b *Local) opApply(
     // Persist the state
     if err := opState.WriteState(applyState); err != nil {
-        runningOp.Err = fmt.Errorf("Failed to save state: %s", err)
+        runningOp.Err = b.backupStateForError(applyState, err)
         return
     }
     if err := opState.PersistState(); err != nil {
-        runningOp.Err = fmt.Errorf("Failed to save state: %s", err)
+        runningOp.Err = b.backupStateForError(applyState, err)
         return
     }
@@ -186,6 +188,42 @@ func (b *Local) opApply(
     }
 }
 
+// backupStateForError is called in a scenario where we're unable to persist the
+// state for some reason, and will attempt to save a backup copy of the state
+// to local disk to help the user recover. This is a "last ditch effort" sort
+// of thing, so we really don't want to end up in this codepath; we should do
+// everything we possibly can to get the state saved _somewhere_.
+func (b *Local) backupStateForError(applyState *terraform.State, err error) error {
+    b.CLI.Error(fmt.Sprintf("Failed to save state: %s\n", err))
+
+    local := &state.LocalState{Path: "errored.tfstate"}
+    writeErr := local.WriteState(applyState)
+    if writeErr != nil {
+        b.CLI.Error(fmt.Sprintf(
+            "Also failed to create local state file for recovery: %s\n\n", writeErr,
+        ))
+        // To avoid leaving the user with no state at all, our last resort
+        // is to print the JSON state out onto the terminal. This is an awful
+        // UX, so we should definitely avoid doing this if at all possible,
+        // but at least the user has _some_ path to recover if we end up
+        // here for some reason.
+        stateBuf := new(bytes.Buffer)
+        jsonErr := terraform.WriteState(applyState, stateBuf)
+        if jsonErr != nil {
+            b.CLI.Error(fmt.Sprintf(
+                "Also failed to JSON-serialize the state to print it: %s\n\n", jsonErr,
+            ))
+            return errors.New(stateWriteFatalError)
+        }
+        b.CLI.Output(stateBuf.String())
+        return errors.New(stateWriteConsoleFallbackError)
+    }
+    return errors.New(stateWriteBackedUpError)
+}
+
 const applyErrNoConfig = `
 No configuration files found!
@@ -194,3 +232,41 @@ would mark everything for destruction, which is normally not what is desired.
 If you would like to destroy everything, please run 'terraform destroy' instead
 which does not require any configuration files.
 `
+
+const stateWriteBackedUpError = `Failed to persist state to backend.
+
+The error shown above has prevented Terraform from writing the updated state
+to the configured backend. To allow for recovery, the state has been written
+to the file "errored.tfstate" in the current working directory.
+
+Running "terraform apply" again at this point will create a forked state,
+making it harder to recover.
+
+To retry writing this state, use the following command:
+    terraform state push errored.tfstate
+`
+
+const stateWriteConsoleFallbackError = `Failed to persist state to backend.
+
+The errors shown above prevented Terraform from writing the updated state to
+the configured backend and from creating a local backup file. As a fallback,
+the raw state data is printed above as a JSON object.
+
+To retry writing this state, copy the state data (from the first { to the
+last } inclusive) and save it into a local file called errored.tfstate, then
+run the following command:
+    terraform state push errored.tfstate
+`
+
+const stateWriteFatalError = `Failed to save state after apply.
+
+A catastrophic error has prevented Terraform from persisting the state file
+or creating a backup. Unfortunately this means that the record of any resources
+created during this apply has been lost, and such resources may exist outside
+of Terraform's management.
+
+For resources that support import, it is possible to recover by manually
+importing each resource using its id from the target system.
+
+This is a serious bug in Terraform and should be reported.
+`

View File

@@ -2,14 +2,19 @@ package local
 
 import (
     "context"
+    "errors"
     "fmt"
     "os"
+    "path/filepath"
+    "strings"
     "sync"
     "testing"
 
     "github.com/hashicorp/terraform/backend"
     "github.com/hashicorp/terraform/config/module"
+    "github.com/hashicorp/terraform/state"
    "github.com/hashicorp/terraform/terraform"
+    "github.com/mitchellh/cli"
 )
 
 func TestLocal_applyBasic(t *testing.T) {
@@ -158,6 +163,77 @@ test_instance.foo:
 `)
 }
 
+func TestLocal_applyBackendFail(t *testing.T) {
+    mod, modCleanup := module.TestTree(t, "./test-fixtures/apply")
+    defer modCleanup()
+
+    b := TestLocal(t)
+
+    wd, err := os.Getwd()
+    if err != nil {
+        t.Fatalf("failed to get current working directory")
+    }
+    err = os.Chdir(filepath.Dir(b.StatePath))
+    if err != nil {
+        t.Fatalf("failed to set temporary working directory")
+    }
+    defer os.Chdir(wd)
+
+    b.Backend = &backendWithFailingState{}
+    b.CLI = new(cli.MockUi)
+    p := TestLocalProvider(t, b, "test")
+
+    p.ApplyReturn = &terraform.InstanceState{ID: "yes"}
+
+    op := testOperationApply()
+    op.Module = mod
+
+    run, err := b.Operation(context.Background(), op)
+    if err != nil {
+        t.Fatalf("bad: %s", err)
+    }
+    <-run.Done()
+    if run.Err == nil {
+        t.Fatalf("apply succeeded; want error")
+    }
+
+    errStr := run.Err.Error()
+    if !strings.Contains(errStr, "terraform state push errored.tfstate") {
+        t.Fatalf("wrong error message:\n%s", errStr)
+    }
+
+    msgStr := b.CLI.(*cli.MockUi).ErrorWriter.String()
+    if !strings.Contains(msgStr, "Failed to save state: fake failure") {
+        t.Fatalf("missing original error message in output:\n%s", msgStr)
+    }
+
+    // The fallback behavior should've created a file errored.tfstate in the
+    // current working directory.
+    checkState(t, "errored.tfstate", `
+test_instance.foo:
+  ID = yes
+    `)
+}
+
+type backendWithFailingState struct {
+    Local
+}
+
+func (b *backendWithFailingState) State(name string) (state.State, error) {
+    return &failingState{
+        &state.LocalState{
+            Path: "failing-state.tfstate",
+        },
+    }, nil
+}
+
+type failingState struct {
+    *state.LocalState
+}
+
+func (s failingState) WriteState(state *terraform.State) error {
+    return errors.New("fake failure")
+}
+
 func testOperationApply() *backend.Operation {
     return &backend.Operation{
         Type: backend.OperationTypeApply,

View File

@@ -269,12 +269,11 @@ func (c *Config) Client() (interface{}, error) {
         sess.Handlers.UnmarshalError.PushFrontNamed(debugAuthFailure)
     }
 
-    // Some services exist only in us-east-1, e.g. because they manage
-    // resources that can span across multiple regions, or because
-    // signature format v4 requires region to be us-east-1 for global
-    // endpoints:
-    // http://docs.aws.amazon.com/general/latest/gr/sigv4_changes.html
-    usEast1Sess := sess.Copy(&aws.Config{Region: aws.String("us-east-1")})
+    // This restriction should only be used for Route53 sessions.
+    // Other resources that have restrictions should allow the API to fail, rather
+    // than Terraform abstracting the region for the user. This can lead to breaking
+    // changes if that resource is ever opened up to more regions.
+    r53Sess := sess.Copy(&aws.Config{Region: aws.String("us-east-1")})
 
     // Some services have user-configurable endpoints
     awsCfSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.CloudFormationEndpoint)})
@@ -368,9 +367,9 @@ func (c *Config) Client() (interface{}, error) {
     client.kinesisconn = kinesis.New(awsKinesisSess)
     client.kmsconn = kms.New(awsKmsSess)
     client.lambdaconn = lambda.New(sess)
-    client.lightsailconn = lightsail.New(usEast1Sess)
+    client.lightsailconn = lightsail.New(sess)
     client.opsworksconn = opsworks.New(sess)
-    client.r53conn = route53.New(usEast1Sess)
+    client.r53conn = route53.New(r53Sess)
     client.rdsconn = rds.New(awsRdsSess)
     client.redshiftconn = redshift.New(sess)
     client.simpledbconn = simpledb.New(sess)
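
Editor's note: as flagged in the changelog's backwards-incompatibility note, Lightsail clients now use the provider's configured region rather than a session pinned to us-east-1. A hedged configuration sketch of the provider-alias workaround for keeping existing Lightsail resources in us-east-1 (the alias name and resource arguments are illustrative, not part of this commit):

provider "aws" {
  region = "eu-west-1"
}

# Second provider block pinned to us-east-1 for pre-existing Lightsail resources
provider "aws" {
  alias  = "useast1"
  region = "us-east-1"
}

resource "aws_lightsail_instance" "example" {
  provider = "aws.useast1"
  # ... existing instance arguments unchanged ...
}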

View File

@ -0,0 +1,66 @@
package aws
import (
"encoding/base64"
"log"
"time"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/service/kms"
"github.com/hashicorp/terraform/helper/schema"
)
func dataSourceAwsKmsCiphetext() *schema.Resource {
return &schema.Resource{
Read: dataSourceAwsKmsCiphetextRead,
Schema: map[string]*schema.Schema{
"plaintext": {
Type: schema.TypeString,
Required: true,
},
"key_id": {
Type: schema.TypeString,
Required: true,
},
"context": &schema.Schema{
Type: schema.TypeMap,
Optional: true,
Elem: &schema.Schema{Type: schema.TypeString},
},
"ciphertext_blob": {
Type: schema.TypeString,
Computed: true,
},
},
}
}
func dataSourceAwsKmsCiphetextRead(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).kmsconn
d.SetId(time.Now().UTC().String())
req := &kms.EncryptInput{
KeyId: aws.String(d.Get("key_id").(string)),
Plaintext: []byte(d.Get("plaintext").(string)),
}
if ec := d.Get("context"); ec != nil {
req.EncryptionContext = stringMapToPointers(ec.(map[string]interface{}))
}
log.Printf("[DEBUG] KMS encrypt for key: %s", d.Get("key_id").(string))
resp, err := conn.Encrypt(req)
if err != nil {
return err
}
d.Set("ciphertext_blob", base64.StdEncoding.EncodeToString(resp.CiphertextBlob))
return nil
}
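
Editor's note: a minimal usage sketch for the new data source (key and resource names are illustrative; the acceptance tests in the next file exercise the same pattern). `ciphertext_blob` exposes the base64-encoded result of the KMS Encrypt call above:

resource "aws_kms_key" "example" {
  description = "example key for aws_kms_ciphertext"
}

data "aws_kms_ciphertext" "example" {
  key_id    = "${aws_kms_key.example.key_id}"
  plaintext = "Super secret data"

  # optional encryption context
  context {
    purpose = "example"
  }
}

output "encrypted" {
  value = "${data.aws_kms_ciphertext.example.ciphertext_blob}"
}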

View File

@ -0,0 +1,136 @@
package aws
import (
"testing"
"github.com/hashicorp/terraform/helper/resource"
)
func TestAccDataSourceAwsKmsCiphertext_basic(t *testing.T) {
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
Steps: []resource.TestStep{
{
Config: testAccDataSourceAwsKmsCiphertextConfig_basic,
Check: resource.ComposeTestCheckFunc(
resource.TestCheckResourceAttrSet(
"data.aws_kms_ciphertext.foo", "ciphertext_blob"),
),
},
},
})
}
func TestAccDataSourceAwsKmsCiphertext_validate(t *testing.T) {
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
Steps: []resource.TestStep{
{
Config: testAccDataSourceAwsKmsCiphertextConfig_validate,
Check: resource.ComposeTestCheckFunc(
resource.TestCheckResourceAttrSet(
"data.aws_kms_ciphertext.foo", "ciphertext_blob"),
resource.TestCheckResourceAttrSet(
"data.aws_kms_secret.foo", "plaintext"),
resource.TestCheckResourceAttr(
"data.aws_kms_secret.foo", "plaintext", "Super secret data"),
),
},
},
})
}
func TestAccDataSourceAwsKmsCiphertext_validate_withContext(t *testing.T) {
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
Steps: []resource.TestStep{
{
Config: testAccDataSourceAwsKmsCiphertextConfig_validate_withContext,
Check: resource.ComposeTestCheckFunc(
resource.TestCheckResourceAttrSet(
"data.aws_kms_ciphertext.foo", "ciphertext_blob"),
resource.TestCheckResourceAttrSet(
"data.aws_kms_secret.foo", "plaintext"),
resource.TestCheckResourceAttr(
"data.aws_kms_secret.foo", "plaintext", "Super secret data"),
),
},
},
})
}
const testAccDataSourceAwsKmsCiphertextConfig_basic = `
provider "aws" {
region = "us-west-2"
}
resource "aws_kms_key" "foo" {
description = "tf-test-acc-data-source-aws-kms-ciphertext-basic"
is_enabled = true
}
data "aws_kms_ciphertext" "foo" {
key_id = "${aws_kms_key.foo.key_id}"
plaintext = "Super secret data"
}
`
const testAccDataSourceAwsKmsCiphertextConfig_validate = `
provider "aws" {
region = "us-west-2"
}
resource "aws_kms_key" "foo" {
description = "tf-test-acc-data-source-aws-kms-ciphertext-validate"
is_enabled = true
}
data "aws_kms_ciphertext" "foo" {
key_id = "${aws_kms_key.foo.key_id}"
plaintext = "Super secret data"
}
data "aws_kms_secret" "foo" {
secret {
name = "plaintext"
payload = "${data.aws_kms_ciphertext.foo.ciphertext_blob}"
}
}
`
const testAccDataSourceAwsKmsCiphertextConfig_validate_withContext = `
provider "aws" {
region = "us-west-2"
}
resource "aws_kms_key" "foo" {
description = "tf-test-acc-data-source-aws-kms-ciphertext-validate-with-context"
is_enabled = true
}
data "aws_kms_ciphertext" "foo" {
key_id = "${aws_kms_key.foo.key_id}"
plaintext = "Super secret data"
context {
name = "value"
}
}
data "aws_kms_secret" "foo" {
secret {
name = "plaintext"
payload = "${data.aws_kms_ciphertext.foo.ciphertext_blob}"
context {
name = "value"
}
}
}
`

View File

@@ -17,7 +17,7 @@ func TestAccAWSIAMRole_importBasic(t *testing.T) {
         CheckDestroy: testAccCheckAWSRoleDestroy,
         Steps: []resource.TestStep{
             {
-                Config: testAccAWSRoleConfig(rName),
+                Config: testAccAWSIAMRoleConfig(rName),
             },
             {
View File

@@ -0,0 +1,32 @@
+package aws
+
+import (
+    "testing"
+    "time"
+
+    "github.com/hashicorp/terraform/helper/acctest"
+    "github.com/hashicorp/terraform/helper/resource"
+)
+
+func TestAccAWSKmsAlias_importBasic(t *testing.T) {
+    resourceName := "aws_kms_alias.single"
+    rInt := acctest.RandInt()
+    kmsAliasTimestamp := time.Now().Format(time.RFC1123)
+
+    resource.Test(t, resource.TestCase{
+        PreCheck:     func() { testAccPreCheck(t) },
+        Providers:    testAccProviders,
+        CheckDestroy: testAccCheckAWSKmsAliasDestroy,
+        Steps: []resource.TestStep{
+            resource.TestStep{
+                Config: testAccAWSKmsSingleAlias(rInt, kmsAliasTimestamp),
+            },
+
+            resource.TestStep{
+                ResourceName:      resourceName,
+                ImportState:       true,
+                ImportStateVerify: true,
+            },
+        },
+    })
+}

View File

@@ -183,14 +183,15 @@ func Provider() terraform.ResourceProvider {
             "aws_eip":                    dataSourceAwsEip(),
             "aws_elb_hosted_zone_id":     dataSourceAwsElbHostedZoneId(),
             "aws_elb_service_account":    dataSourceAwsElbServiceAccount(),
-            "aws_kinesis_stream":         dataSourceAwsKinesisStream(),
             "aws_iam_account_alias":      dataSourceAwsIamAccountAlias(),
             "aws_iam_policy_document":    dataSourceAwsIamPolicyDocument(),
             "aws_iam_role":               dataSourceAwsIAMRole(),
             "aws_iam_server_certificate": dataSourceAwsIAMServerCertificate(),
             "aws_instance":               dataSourceAwsInstance(),
             "aws_ip_ranges":              dataSourceAwsIPRanges(),
+            "aws_kinesis_stream":         dataSourceAwsKinesisStream(),
             "aws_kms_alias":              dataSourceAwsKmsAlias(),
+            "aws_kms_ciphertext":         dataSourceAwsKmsCiphetext(),
             "aws_kms_secret":             dataSourceAwsKmsSecret(),
             "aws_partition":              dataSourceAwsPartition(),
             "aws_prefix_list":            dataSourceAwsPrefixList(),

View File

@@ -57,7 +57,7 @@ func resourceAwsCloudwatchLogSubscriptionFilterCreate(d *schema.ResourceData, meta interface{}) error {
     params := getAwsCloudWatchLogsSubscriptionFilterInput(d)
 
     log.Printf("[DEBUG] Creating SubscriptionFilter %#v", params)
 
-    return resource.Retry(3*time.Minute, func() *resource.RetryError {
+    return resource.Retry(5*time.Minute, func() *resource.RetryError {
         _, err := conn.PutSubscriptionFilter(&params)
 
         if err == nil {

View File

@@ -19,6 +19,12 @@ func resourceAwsDmsReplicationInstance() *schema.Resource {
         Update: resourceAwsDmsReplicationInstanceUpdate,
         Delete: resourceAwsDmsReplicationInstanceDelete,
 
+        Timeouts: &schema.ResourceTimeout{
+            Create: schema.DefaultTimeout(30 * time.Minute),
+            Update: schema.DefaultTimeout(30 * time.Minute),
+            Delete: schema.DefaultTimeout(30 * time.Minute),
+        },
+
         Importer: &schema.ResourceImporter{
             State: schema.ImportStatePassthrough,
         },
@@ -304,7 +310,7 @@ func resourceAwsDmsReplicationInstanceUpdate(d *schema.ResourceData, meta interface{}) error {
         Pending:    []string{"modifying"},
         Target:     []string{"available"},
         Refresh:    resourceAwsDmsReplicationInstanceStateRefreshFunc(d, meta),
-        Timeout:    d.Timeout(schema.TimeoutCreate),
+        Timeout:    d.Timeout(schema.TimeoutUpdate),
         MinTimeout: 10 * time.Second,
         Delay:      30 * time.Second, // Wait 30 secs before starting
     }
@@ -339,7 +345,7 @@ func resourceAwsDmsReplicationInstanceDelete(d *schema.ResourceData, meta interface{}) error {
         Pending:    []string{"deleting"},
         Target:     []string{},
         Refresh:    resourceAwsDmsReplicationInstanceStateRefreshFunc(d, meta),
-        Timeout:    d.Timeout(schema.TimeoutCreate),
+        Timeout:    d.Timeout(schema.TimeoutDelete),
         MinTimeout: 10 * time.Second,
         Delay:      30 * time.Second, // Wait 30 secs before starting
     }

View File

@@ -139,6 +139,10 @@ resource "aws_dms_replication_instance" "dms_replication_instance" {
     Update = "to-update"
     Remove = "to-remove"
   }
+
+  timeouts {
+    create = "40m"
+  }
 }
 `, randId)
 }

View File

@@ -230,13 +230,13 @@ func resourceAwsEcsServiceCreate(d *schema.ResourceData, meta interface{}) error {
         out, err = conn.CreateService(&input)
 
         if err != nil {
-            ec2err, ok := err.(awserr.Error)
+            awsErr, ok := err.(awserr.Error)
             if !ok {
                 return resource.NonRetryableError(err)
             }
-            if ec2err.Code() == "InvalidParameterException" {
+            if awsErr.Code() == "InvalidParameterException" {
                 log.Printf("[DEBUG] Trying to create ECS service again: %q",
-                    ec2err.Message())
+                    awsErr.Message())
                 return resource.RetryableError(err)
             }
 
@@ -400,12 +400,26 @@ func resourceAwsEcsServiceUpdate(d *schema.ResourceData, meta interface{}) error {
         }
     }
 
-    out, err := conn.UpdateService(&input)
+    // Retry due to AWS IAM policy eventual consistency
+    // See https://github.com/hashicorp/terraform/issues/4375
+    err := resource.Retry(2*time.Minute, func() *resource.RetryError {
+        out, err := conn.UpdateService(&input)
+        if err != nil {
+            awsErr, ok := err.(awserr.Error)
+            if ok && awsErr.Code() == "InvalidParameterException" {
+                log.Printf("[DEBUG] Trying to update ECS service again: %#v", err)
+                return resource.RetryableError(err)
+            }
+            return resource.NonRetryableError(err)
+        }
+
+        log.Printf("[DEBUG] Updated ECS service %s", out.Service)
+        return nil
+    })
     if err != nil {
         return err
     }
 
-    service := out.Service
-    log.Printf("[DEBUG] Updated ECS service %s", service)
-
     return resourceAwsEcsServiceRead(d, meta)
 }

View File

@@ -565,7 +565,18 @@ func resourceAwsElasticacheClusterDelete(d *schema.ResourceData, meta interface{}) error {
     req := &elasticache.DeleteCacheClusterInput{
         CacheClusterId: aws.String(d.Id()),
     }
-    _, err := conn.DeleteCacheCluster(req)
+    err := resource.Retry(5*time.Minute, func() *resource.RetryError {
+        _, err := conn.DeleteCacheCluster(req)
+        if err != nil {
+            awsErr, ok := err.(awserr.Error)
+            // The cluster may be just snapshotting, so we retry until it's ready for deletion
+            if ok && awsErr.Code() == "InvalidCacheClusterState" {
+                return resource.RetryableError(err)
+            }
+            return resource.NonRetryableError(err)
+        }
+        return nil
+    })
     if err != nil {
         return err
     }

View File

@@ -370,6 +370,12 @@ func resourceAwsElasticacheReplicationGroupUpdate(d *schema.ResourceData, meta interface{}) error {
     }
 
     if d.HasChange("snapshot_retention_limit") {
+        // This is a real hack to set the Snapshotting Cluster ID to be the first Cluster in the RG
+        o, _ := d.GetChange("snapshot_retention_limit")
+        if o.(int) == 0 {
+            params.SnapshottingClusterId = aws.String(fmt.Sprintf("%s-001", d.Id()))
+        }
+
         params.SnapshotRetentionLimit = aws.Int64(int64(d.Get("snapshot_retention_limit").(int)))
         requestUpdate = true
     }

View File

@@ -288,6 +288,36 @@ func TestAccAWSElasticacheReplicationGroup_clusteringAndCacheNodesCausesError(t *testing.T) {
     })
 }
 
+func TestAccAWSElasticacheReplicationGroup_enableSnapshotting(t *testing.T) {
+    var rg elasticache.ReplicationGroup
+    rName := acctest.RandString(10)
+    resource.Test(t, resource.TestCase{
+        PreCheck:     func() { testAccPreCheck(t) },
+        Providers:    testAccProviders,
+        CheckDestroy: testAccCheckAWSElasticacheReplicationDestroy,
+        Steps: []resource.TestStep{
+            {
+                Config: testAccAWSElasticacheReplicationGroupConfig(rName),
+                Check: resource.ComposeTestCheckFunc(
+                    testAccCheckAWSElasticacheReplicationGroupExists("aws_elasticache_replication_group.bar", &rg),
+                    resource.TestCheckResourceAttr(
+                        "aws_elasticache_replication_group.bar", "snapshot_retention_limit", "0"),
+                ),
+            },
+            {
+                Config: testAccAWSElasticacheReplicationGroupConfigEnableSnapshotting(rName),
+                Check: resource.ComposeTestCheckFunc(
+                    testAccCheckAWSElasticacheReplicationGroupExists("aws_elasticache_replication_group.bar", &rg),
+                    resource.TestCheckResourceAttr(
+                        "aws_elasticache_replication_group.bar", "snapshot_retention_limit", "2"),
+                ),
+            },
+        },
+    })
+}
+
 func TestResourceAWSElastiCacheReplicationGroupIdValidation(t *testing.T) {
     cases := []struct {
         Value    string
@@ -446,6 +476,44 @@ resource "aws_elasticache_replication_group" "bar" {
 }`, rName, rName, rName)
 }
 
+func testAccAWSElasticacheReplicationGroupConfigEnableSnapshotting(rName string) string {
+    return fmt.Sprintf(`
+provider "aws" {
+  region = "us-east-1"
+}
+
+resource "aws_security_group" "bar" {
+  name = "tf-test-security-group-%s"
+  description = "tf-test-security-group-descr"
+  ingress {
+    from_port = -1
+    to_port = -1
+    protocol = "icmp"
+    cidr_blocks = ["0.0.0.0/0"]
+  }
+}
+
+resource "aws_elasticache_security_group" "bar" {
+  name = "tf-test-security-group-%s"
+  description = "tf-test-security-group-descr"
+  security_group_names = ["${aws_security_group.bar.name}"]
+}
+
+resource "aws_elasticache_replication_group" "bar" {
+  replication_group_id = "tf-%s"
+  replication_group_description = "test description"
+  node_type = "cache.m1.small"
+  number_cache_clusters = 2
+  port = 6379
+  parameter_group_name = "default.redis3.2"
+  security_group_names = ["${aws_elasticache_security_group.bar.name}"]
+  apply_immediately = true
+  auto_minor_version_upgrade = false
+  maintenance_window = "tue:06:30-tue:07:30"
+  snapshot_window = "01:00-02:00"
+  snapshot_retention_limit = 2
+}`, rName, rName, rName)
+}
+
 func testAccAWSElasticacheReplicationGroupConfigUpdatedParameterGroup(rName string, rInt int) string {
     return fmt.Sprintf(`
 provider "aws" {

View File

@@ -24,24 +24,24 @@ func resourceAwsIamPolicy() *schema.Resource {
         },
 
         Schema: map[string]*schema.Schema{
-            "description": &schema.Schema{
+            "description": {
                 Type:     schema.TypeString,
                 ForceNew: true,
                 Optional: true,
             },
-            "path": &schema.Schema{
+            "path": {
                 Type:     schema.TypeString,
                 Optional: true,
                 Default:  "/",
                 ForceNew: true,
             },
-            "policy": &schema.Schema{
+            "policy": {
                 Type:             schema.TypeString,
                 Required:         true,
-                ValidateFunc:     validateJsonString,
+                ValidateFunc:     validateIAMPolicyJson,
                 DiffSuppressFunc: suppressEquivalentAwsPolicyDiffs,
             },
-            "name": &schema.Schema{
+            "name": {
                 Type:     schema.TypeString,
                 Optional: true,
                 Computed: true,
@@ -79,7 +79,7 @@ func resourceAwsIamPolicy() *schema.Resource {
                     return
                 },
             },
-            "arn": &schema.Schema{
+            "arn": {
                 Type:     schema.TypeString,
                 Computed: true,
             },

View File

@@ -2,6 +2,7 @@ package aws
 
 import (
     "fmt"
+    "regexp"
     "strings"
     "testing"
 
@@ -19,7 +20,7 @@ func TestAWSPolicy_namePrefix(t *testing.T) {
         Providers:    testAccProviders,
         CheckDestroy: testAccCheckAWSPolicyDestroy,
         Steps: []resource.TestStep{
-            resource.TestStep{
+            {
                 Config: testAccAWSPolicyPrefixNameConfig,
                 Check: resource.ComposeTestCheckFunc(
                     testAccCheckAWSPolicyExists("aws_iam_policy.policy", &out),
@@ -31,6 +32,20 @@ func TestAWSPolicy_namePrefix(t *testing.T) {
     })
 }
 
+func TestAWSPolicy_invalidJson(t *testing.T) {
+    resource.Test(t, resource.TestCase{
+        PreCheck:     func() { testAccPreCheck(t) },
+        Providers:    testAccProviders,
+        CheckDestroy: testAccCheckAWSPolicyDestroy,
+        Steps: []resource.TestStep{
+            {
+                Config:      testAccAWSPolicyInvalidJsonConfig,
+                ExpectError: regexp.MustCompile("invalid JSON"),
+            },
+        },
+    })
+}
+
 func testAccCheckAWSPolicyExists(resource string, res *iam.GetPolicyOutput) resource.TestCheckFunc {
     return func(s *terraform.State) error {
         rs, ok := s.RootModule().Resources[resource]
@@ -94,3 +109,23 @@ resource "aws_iam_policy" "policy" {
 EOF
 }
 `
+
+const testAccAWSPolicyInvalidJsonConfig = `
+resource "aws_iam_policy" "policy" {
+  name_prefix = "test-policy-"
+  path = "/"
+  policy = <<EOF
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Action": [
+        "ec2:Describe*"
+      ],
+      "Effect": "Allow",
+      "Resource": "*"
+    }
+  ]
+}
+EOF
+}
+`

View File

@@ -258,8 +258,17 @@ func resourceAwsIamRoleDelete(d *schema.ResourceData, meta interface{}) error {
         RoleName: aws.String(d.Id()),
     }
 
-    if _, err := iamconn.DeleteRole(request); err != nil {
-        return fmt.Errorf("Error deleting IAM Role %s: %s", d.Id(), err)
-    }
-    return nil
+    // IAM is eventually consistent and deletion of attached policies may take time
+    return resource.Retry(30*time.Second, func() *resource.RetryError {
+        _, err := iamconn.DeleteRole(request)
+        if err != nil {
+            awsErr, ok := err.(awserr.Error)
+            if ok && awsErr.Code() == "DeleteConflict" {
+                return resource.RetryableError(err)
+            }
+            return resource.NonRetryableError(fmt.Errorf("Error deleting IAM Role %s: %s", d.Id(), err))
+        }
+        return nil
+    })
 }

View File

@@ -26,11 +26,13 @@ func resourceAwsIamRolePolicy() *schema.Resource {
         },
 
         Schema: map[string]*schema.Schema{
-            "policy": &schema.Schema{
+            "policy": {
                 Type:             schema.TypeString,
                 Required:         true,
+                ValidateFunc:     validateIAMPolicyJson,
+                DiffSuppressFunc: suppressEquivalentAwsPolicyDiffs,
             },
-            "name": &schema.Schema{
+            "name": {
                 Type:     schema.TypeString,
                 Optional: true,
                 Computed: true,
@@ -38,13 +40,13 @@ func resourceAwsIamRolePolicy() *schema.Resource {
                 ConflictsWith: []string{"name_prefix"},
                 ValidateFunc:  validateIamRolePolicyName,
             },
-            "name_prefix": &schema.Schema{
+            "name_prefix": {
                 Type:         schema.TypeString,
                 Optional:     true,
                 ForceNew:     true,
                 ValidateFunc: validateIamRolePolicyNamePrefix,
             },
-            "role": &schema.Schema{
+            "role": {
                 Type:     schema.TypeString,
                 Required: true,
                 ForceNew: true,
View File

@@ -2,6 +2,7 @@ package aws
 
 import (
     "fmt"
+    "regexp"
     "testing"
 
     "github.com/aws/aws-sdk-go/aws"
@@ -22,7 +23,7 @@ func TestAccAWSIAMRolePolicy_basic(t *testing.T) {
         Providers:    testAccProviders,
         CheckDestroy: testAccCheckIAMRolePolicyDestroy,
         Steps: []resource.TestStep{
-            resource.TestStep{
+            {
                 Config: testAccIAMRolePolicyConfig(role, policy1),
                 Check: resource.ComposeTestCheckFunc(
                     testAccCheckIAMRolePolicy(
@@ -31,7 +32,7 @@ func TestAccAWSIAMRolePolicy_basic(t *testing.T) {
                     ),
                 ),
             },
-            resource.TestStep{
+            {
                 Config: testAccIAMRolePolicyConfigUpdate(role, policy1, policy2),
                 Check: resource.ComposeTestCheckFunc(
                     testAccCheckIAMRolePolicy(
@@ -53,7 +54,7 @@ func TestAccAWSIAMRolePolicy_namePrefix(t *testing.T) {
         Providers:    testAccProviders,
         CheckDestroy: testAccCheckIAMRolePolicyDestroy,
         Steps: []resource.TestStep{
-            resource.TestStep{
+            {
                 Config: testAccIAMRolePolicyConfig_namePrefix(role),
                 Check: resource.ComposeTestCheckFunc(
                     testAccCheckIAMRolePolicy(
@@ -75,7 +76,7 @@ func TestAccAWSIAMRolePolicy_generatedName(t *testing.T) {
         Providers:    testAccProviders,
         CheckDestroy: testAccCheckIAMRolePolicyDestroy,
         Steps: []resource.TestStep{
-            resource.TestStep{
+            {
                 Config: testAccIAMRolePolicyConfig_generatedName(role),
                 Check: resource.ComposeTestCheckFunc(
                     testAccCheckIAMRolePolicy(
@@ -88,6 +89,22 @@ func TestAccAWSIAMRolePolicy_generatedName(t *testing.T) {
     })
 }
 
+func TestAccAWSIAMRolePolicy_invalidJSON(t *testing.T) {
+    role := acctest.RandString(10)
+
+    resource.Test(t, resource.TestCase{
+        PreCheck:     func() { testAccPreCheck(t) },
+        Providers:    testAccProviders,
+        CheckDestroy: testAccCheckIAMRolePolicyDestroy,
+        Steps: []resource.TestStep{
+            {
+                Config:      testAccIAMRolePolicyConfig_invalidJSON(role),
+                ExpectError: regexp.MustCompile("invalid JSON"),
+            },
+        },
+    })
+}
+
 func testAccCheckIAMRolePolicyDestroy(s *terraform.State) error {
     iamconn := testAccProvider.Meta().(*AWSClient).iamconn
 
@@ -328,3 +345,42 @@ EOF
 }
 `, role, policy1, policy2)
 }
+
+func testAccIAMRolePolicyConfig_invalidJSON(role string) string {
+    return fmt.Sprintf(`
+resource "aws_iam_role" "role" {
+  name = "tf_test_role_%s"
+  path = "/"
+  assume_role_policy = <<EOF
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Action": "sts:AssumeRole",
+      "Principal": {
+        "Service": "ec2.amazonaws.com"
+      },
+      "Effect": "Allow",
+      "Sid": ""
+    }
+  ]
+}
+EOF
+}
+
+resource "aws_iam_role_policy" "foo" {
+  name = "tf_test_policy_%s"
+  role = "${aws_iam_role.role.name}"
+  policy = <<EOF
+{
+  "Version": "2012-10-17",
+  "Statement": {
+    "Effect": "Allow",
+    "Action": "*",
+    "Resource": "*"
+  }
+}
+EOF
+}
+`, role, role)
+}

View File

@@ -15,7 +15,7 @@ import (
     "github.com/hashicorp/terraform/terraform"
 )
 
-func TestAccAWSRole_basic(t *testing.T) {
+func TestAccAWSIAMRole_basic(t *testing.T) {
     var conf iam.GetRoleOutput
     rName := acctest.RandString(10)
 
@@ -25,7 +25,7 @@ func TestAccAWSRole_basic(t *testing.T) {
         CheckDestroy: testAccCheckAWSRoleDestroy,
         Steps: []resource.TestStep{
             {
-                Config: testAccAWSRoleConfig(rName),
+                Config: testAccAWSIAMRoleConfig(rName),
                 Check: resource.ComposeTestCheckFunc(
                     testAccCheckAWSRoleExists("aws_iam_role.role", &conf),
                     resource.TestCheckResourceAttr("aws_iam_role.role", "path", "/"),
@@ -36,7 +36,7 @@ func TestAccAWSRole_basic(t *testing.T) {
     })
 }
 
-func TestAccAWSRole_basicWithDescription(t *testing.T) {
+func TestAccAWSIAMRole_basicWithDescription(t *testing.T) {
     var conf iam.GetRoleOutput
     rName := acctest.RandString(10)
 
@@ -46,7 +46,7 @@ func TestAccAWSRole_basicWithDescription(t *testing.T) {
         CheckDestroy: testAccCheckAWSRoleDestroy,
         Steps: []resource.TestStep{
             {
-                Config: testAccAWSRoleConfigWithDescription(rName),
+                Config: testAccAWSIAMRoleConfigWithDescription(rName),
                 Check: resource.ComposeTestCheckFunc(
                     testAccCheckAWSRoleExists("aws_iam_role.role", &conf),
                     resource.TestCheckResourceAttr("aws_iam_role.role", "path", "/"),
@@ -54,7 +54,7 @@ func TestAccAWSRole_basicWithDescription(t *testing.T) {
                 ),
             },
             {
-                Config: testAccAWSRoleConfigWithUpdatedDescription(rName),
+                Config: testAccAWSIAMRoleConfigWithUpdatedDescription(rName),
                 Check: resource.ComposeTestCheckFunc(
                     testAccCheckAWSRoleExists("aws_iam_role.role", &conf),
                     resource.TestCheckResourceAttr("aws_iam_role.role", "path", "/"),
@@ -62,7 +62,7 @@ func TestAccAWSRole_basicWithDescription(t *testing.T) {
                 ),
             },
             {
-                Config: testAccAWSRoleConfig(rName),
+                Config: testAccAWSIAMRoleConfig(rName),
                 Check: resource.ComposeTestCheckFunc(
                     testAccCheckAWSRoleExists("aws_iam_role.role", &conf),
                     resource.TestCheckResourceAttrSet("aws_iam_role.role", "create_date"),
@@ -73,7 +73,7 @@ func TestAccAWSRole_basicWithDescription(t *testing.T) {
     })
 }
 
-func TestAccAWSRole_namePrefix(t *testing.T) {
+func TestAccAWSIAMRole_namePrefix(t *testing.T) {
     var conf iam.GetRoleOutput
     rName := acctest.RandString(10)
 
@@ -85,7 +85,7 @@ func TestAccAWSRole_namePrefix(t *testing.T) {
         CheckDestroy: testAccCheckAWSRoleDestroy,
         Steps: []resource.TestStep{
             {
-                Config: testAccAWSRolePrefixNameConfig(rName),
+                Config: testAccAWSIAMRolePrefixNameConfig(rName),
                 Check: resource.ComposeTestCheckFunc(
                     testAccCheckAWSRoleExists("aws_iam_role.role", &conf),
                     testAccCheckAWSRoleGeneratedNamePrefix(
@@ -96,7 +96,7 @@ func TestAccAWSRole_namePrefix(t *testing.T) {
     })
 }
 
-func TestAccAWSRole_testNameChange(t *testing.T) {
+func TestAccAWSIAMRole_testNameChange(t *testing.T) {
     var conf iam.GetRoleOutput
     rName := acctest.RandString(10)
 
@@ -106,14 +106,14 @@ func TestAccAWSRole_testNameChange(t *testing.T) {
         CheckDestroy: testAccCheckAWSRoleDestroy,
         Steps: []resource.TestStep{
             {
-                Config: testAccAWSRolePre(rName),
+                Config: testAccAWSIAMRolePre(rName),
                 Check: resource.ComposeTestCheckFunc(
                     testAccCheckAWSRoleExists("aws_iam_role.role_update_test", &conf),
                 ),
             },
             {
-                Config: testAccAWSRolePost(rName),
+                Config: testAccAWSIAMRolePost(rName),
                 Check: resource.ComposeTestCheckFunc(
                     testAccCheckAWSRoleExists("aws_iam_role.role_update_test", &conf),
                 ),
@@ -122,7 +122,7 @@ func TestAccAWSRole_testNameChange(t *testing.T) {
     })
 }
 
-func TestAccAWSRole_badJSON(t *testing.T) {
+func TestAccAWSIAMRole_badJSON(t *testing.T) {
     rName := acctest.RandString(10)
 
     resource.Test(t, resource.TestCase{
@@ -131,7 +131,7 @@ func TestAccAWSRole_badJSON(t *testing.T) {
         CheckDestroy: testAccCheckAWSRoleDestroy,
         Steps: []resource.TestStep{
             {
-                Config: testAccAWSRoleConfig_badJson(rName),
+                Config: testAccAWSIAMRoleConfig_badJson(rName),
                 ExpectError: regexp.MustCompile(`.*contains an invalid JSON:.*`),
             },
         },
@@ -210,7 +210,7 @@ func testAccCheckAWSRoleGeneratedNamePrefix(resource, prefix string) resource.TestCheckFunc {
     }
 }
 
-func testAccAWSRoleConfig(rName string) string {
+func testAccAWSIAMRoleConfig(rName string) string {
     return fmt.Sprintf(`
 resource "aws_iam_role" "role" {
   name = "test-role-%s"
@@ -220,7 +220,7 @@ resource "aws_iam_role" "role" {
 `, rName)
 }
 
-func testAccAWSRoleConfigWithDescription(rName string) string {
+func testAccAWSIAMRoleConfigWithDescription(rName string) string {
     return fmt.Sprintf(`
 resource "aws_iam_role" "role" {
   name = "test-role-%s"
@@ -231,7 +231,7 @@ resource "aws_iam_role" "role" {
 `, rName)
 }
 
-func testAccAWSRoleConfigWithUpdatedDescription(rName string) string {
+func testAccAWSIAMRoleConfigWithUpdatedDescription(rName string) string {
     return fmt.Sprintf(`
 resource "aws_iam_role" "role" {
   name = "test-role-%s"
@@ -242,7 +242,7 @@ resource "aws_iam_role" "role" {
 `, rName)
 }
 
-func testAccAWSRolePrefixNameConfig(rName string) string {
+func testAccAWSIAMRolePrefixNameConfig(rName string) string {
     return fmt.Sprintf(`
 resource "aws_iam_role" "role" {
   name_prefix = "test-role-%s"
@@ -252,7 +252,7 @@ resource "aws_iam_role" "role" {
 `, rName)
 }
 
-func testAccAWSRolePre(rName string) string {
+func testAccAWSIAMRolePre(rName string) string {
     return fmt.Sprintf(`
 resource "aws_iam_role" "role_update_test" {
   name = "tf_old_name_%s"
@@ -302,7 +302,7 @@ resource "aws_iam_instance_profile" "role_update_test" {
 `, rName, rName, rName)
 }
 
-func testAccAWSRolePost(rName string) string {
+func testAccAWSIAMRolePost(rName string) string {
     return fmt.Sprintf(`
 resource "aws_iam_role" "role_update_test" {
   name = "tf_new_name_%s"
@@ -352,7 +352,7 @@ resource "aws_iam_instance_profile" "role_update_test" {
 `, rName, rName, rName)
 }
 
-func testAccAWSRoleConfig_badJson(rName string) string {
+func testAccAWSIAMRoleConfig_badJson(rName string) string {
     return fmt.Sprintf(`
 resource "aws_iam_role" "my_instance_role" {
   name = "test-role-%s"
View File

@@ -172,7 +172,7 @@ func resourceAwsIAMServerCertificateRead(d *schema.ResourceData, meta interface{}) error {
 func resourceAwsIAMServerCertificateDelete(d *schema.ResourceData, meta interface{}) error {
     conn := meta.(*AWSClient).iamconn
     log.Printf("[INFO] Deleting IAM Server Certificate: %s", d.Id())
-    err := resource.Retry(5*time.Minute, func() *resource.RetryError {
+    err := resource.Retry(10*time.Minute, func() *resource.RetryError {
         _, err := conn.DeleteServerCertificate(&iam.DeleteServerCertificateInput{
             ServerCertificateName: aws.String(d.Get("name").(string)),
         })

View File

@ -32,6 +32,12 @@ func resourceAwsInstance() *schema.Resource {
SchemaVersion: 1,
MigrateState: resourceAwsInstanceMigrateState,
Timeouts: &schema.ResourceTimeout{
Create: schema.DefaultTimeout(10 * time.Minute),
Update: schema.DefaultTimeout(10 * time.Minute),
Delete: schema.DefaultTimeout(10 * time.Minute),
},
Schema: map[string]*schema.Schema{
"ami": {
Type: schema.TypeString,
@ -524,7 +530,7 @@ func resourceAwsInstanceCreate(d *schema.ResourceData, meta interface{}) error {
Pending: []string{"pending"},
Target: []string{"running"},
Refresh: InstanceStateRefreshFunc(conn, *instance.InstanceId, "terminated"),
-Timeout: 10 * time.Minute,
Timeout: d.Timeout(schema.TimeoutCreate),
Delay: 10 * time.Second,
MinTimeout: 3 * time.Second,
}
@ -649,12 +655,23 @@ func resourceAwsInstanceRead(d *schema.ResourceData, meta interface{}) error {
}
// Set primary network interface details
// If an instance is shutting down, network interfaces are detached, and attributes may be nil,
// need to protect against nil pointer dereferences
if primaryNetworkInterface.SubnetId != nil {
d.Set("subnet_id", primaryNetworkInterface.SubnetId)
}
if primaryNetworkInterface.NetworkInterfaceId != nil {
d.Set("network_interface_id", primaryNetworkInterface.NetworkInterfaceId) // TODO: Deprecate me v0.10.0
d.Set("primary_network_interface_id", primaryNetworkInterface.NetworkInterfaceId)
-d.Set("associate_public_ip_address", primaryNetworkInterface.Association != nil)
}
if primaryNetworkInterface.Ipv6Addresses != nil {
d.Set("ipv6_address_count", len(primaryNetworkInterface.Ipv6Addresses))
-d.Set("source_dest_check", *primaryNetworkInterface.SourceDestCheck)
}
if primaryNetworkInterface.SourceDestCheck != nil {
d.Set("source_dest_check", primaryNetworkInterface.SourceDestCheck)
}
d.Set("associate_public_ip_address", primaryNetworkInterface.Association != nil)
for _, address := range primaryNetworkInterface.Ipv6Addresses {
ipv6Addresses = append(ipv6Addresses, *address.Ipv6Address)
@ -888,7 +905,7 @@ func resourceAwsInstanceUpdate(d *schema.ResourceData, meta interface{}) error {
Pending: []string{"pending", "running", "shutting-down", "stopped", "stopping"},
Target: []string{"stopped"},
Refresh: InstanceStateRefreshFunc(conn, d.Id(), ""),
-Timeout: 10 * time.Minute,
Timeout: d.Timeout(schema.TimeoutUpdate),
Delay: 10 * time.Second,
MinTimeout: 3 * time.Second,
}
@ -919,7 +936,7 @@ func resourceAwsInstanceUpdate(d *schema.ResourceData, meta interface{}) error {
Pending: []string{"pending", "stopped"},
Target: []string{"running"},
Refresh: InstanceStateRefreshFunc(conn, d.Id(), "terminated"),
-Timeout: 10 * time.Minute,
Timeout: d.Timeout(schema.TimeoutUpdate),
Delay: 10 * time.Second,
MinTimeout: 3 * time.Second,
}
@ -986,7 +1003,7 @@ func resourceAwsInstanceUpdate(d *schema.ResourceData, meta interface{}) error {
func resourceAwsInstanceDelete(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).ec2conn
-if err := awsTerminateInstance(conn, d.Id()); err != nil {
if err := awsTerminateInstance(conn, d.Id(), d); err != nil {
return err
}
@ -1573,7 +1590,7 @@ func buildAwsInstanceOpts(
return opts, nil
}
-func awsTerminateInstance(conn *ec2.EC2, id string) error {
func awsTerminateInstance(conn *ec2.EC2, id string, d *schema.ResourceData) error {
log.Printf("[INFO] Terminating instance: %s", id)
req := &ec2.TerminateInstancesInput{
InstanceIds: []*string{aws.String(id)},
@ -1588,7 +1605,7 @@ func awsTerminateInstance(conn *ec2.EC2, id string) error {
Pending: []string{"pending", "running", "shutting-down", "stopped", "stopping"},
Target: []string{"terminated"},
Refresh: InstanceStateRefreshFunc(conn, id, ""),
-Timeout: 10 * time.Minute,
Timeout: d.Timeout(schema.TimeoutDelete),
Delay: 10 * time.Second,
MinTimeout: 3 * time.Second,
}
@ -134,7 +134,7 @@ func resourceAwsInternetGatewayDelete(d *schema.ResourceData, meta interface{})
log.Printf("[INFO] Deleting Internet Gateway: %s", d.Id()) log.Printf("[INFO] Deleting Internet Gateway: %s", d.Id())
return resource.Retry(5*time.Minute, func() *resource.RetryError { return resource.Retry(10*time.Minute, func() *resource.RetryError {
_, err := conn.DeleteInternetGateway(&ec2.DeleteInternetGatewayInput{ _, err := conn.DeleteInternetGateway(&ec2.DeleteInternetGatewayInput{
InternetGatewayId: aws.String(d.Id()), InternetGatewayId: aws.String(d.Id()),
}) })
@ -19,6 +19,10 @@ func resourceAwsKmsAlias() *schema.Resource {
Update: resourceAwsKmsAliasUpdate,
Delete: resourceAwsKmsAliasDelete,
Importer: &schema.ResourceImporter{
State: resourceAwsKmsAliasImport,
},
Schema: map[string]*schema.Schema{
"arn": &schema.Schema{
Type: schema.TypeString,
@ -173,3 +177,8 @@ func findKmsAliasByName(conn *kms.KMS, name string, marker *string) (*kms.AliasL
return nil, nil
}
func resourceAwsKmsAliasImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) {
d.Set("name", d.Id())
return []*schema.ResourceData{d}, nil
}
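The importer added above is a small variation on the stock schema.ImportStatePassthrough helper: because this resource's Read and Update paths look the alias up by its `name` attribute rather than by an opaque ID, the import step also seeds `name` from the supplied ID. A sketch of the contrast, for illustration only (the exampleImporters wrapper is hypothetical):

package aws

import "github.com/hashicorp/terraform/helper/schema"

// Sketch only; exampleImporters is a hypothetical wrapper for comparison.
// The stock passthrough importer just carries the ID through, while the
// custom variant above also copies it into `name`, the attribute the alias
// lookup keys off (e.g. `terraform import aws_kms_alias.a alias/my-key`).
func exampleImporters() (stock, custom *schema.ResourceImporter) {
	stock = &schema.ResourceImporter{State: schema.ImportStatePassthrough}
	custom = &schema.ResourceImporter{
		State: func(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) {
			d.Set("name", d.Id()) // seed the lookup attribute from the import ID
			return []*schema.ResourceData{d}, nil
		},
	}
	return stock, custom
}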
@ -38,6 +38,30 @@ func TestAccAWSLightsailInstance_basic(t *testing.T) {
})
}
func TestAccAWSLightsailInstance_euRegion(t *testing.T) {
var conf lightsail.Instance
lightsailName := fmt.Sprintf("tf-test-lightsail-%d", acctest.RandInt())
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
IDRefreshName: "aws_lightsail_instance.lightsail_instance_test",
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSLightsailInstanceDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSLightsailInstanceConfig_euRegion(lightsailName),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSLightsailInstanceExists("aws_lightsail_instance.lightsail_instance_test", &conf),
resource.TestCheckResourceAttrSet("aws_lightsail_instance.lightsail_instance_test", "availability_zone"),
resource.TestCheckResourceAttrSet("aws_lightsail_instance.lightsail_instance_test", "blueprint_id"),
resource.TestCheckResourceAttrSet("aws_lightsail_instance.lightsail_instance_test", "bundle_id"),
resource.TestCheckResourceAttrSet("aws_lightsail_instance.lightsail_instance_test", "key_pair_name"),
),
},
},
})
}
func TestAccAWSLightsailInstance_disapear(t *testing.T) {
var conf lightsail.Instance
lightsailName := fmt.Sprintf("tf-test-lightsail-%d", acctest.RandInt())
@ -149,3 +173,17 @@ resource "aws_lightsail_instance" "lightsail_instance_test" {
}
`, lightsailName)
}
func testAccAWSLightsailInstanceConfig_euRegion(lightsailName string) string {
return fmt.Sprintf(`
provider "aws" {
region = "eu-west-1"
}
resource "aws_lightsail_instance" "lightsail_instance_test" {
name = "%s"
availability_zone = "eu-west-1a"
blueprint_id = "joomla_3_6_5"
bundle_id = "nano_1_0"
}
`, lightsailName)
}
@ -308,7 +308,7 @@ func resourceAwsRDSClusterCreate(d *schema.ResourceData, meta interface{}) error
log.Println("[INFO] Waiting for RDS Cluster to be available")
stateConf := &resource.StateChangeConf{
-Pending: []string{"creating", "backing-up", "modifying", "preparing-data-migration"},
Pending: []string{"creating", "backing-up", "modifying", "preparing-data-migration", "migrating"},
Target: []string{"available"},
Refresh: resourceAwsRDSClusterStateRefreshFunc(d, meta),
Timeout: d.Timeout(schema.TimeoutCreate),
@ -120,7 +120,7 @@ func resourceAwsRouteTableCreate(d *schema.ResourceData, meta interface{}) error
Pending: []string{"pending"}, Pending: []string{"pending"},
Target: []string{"ready"}, Target: []string{"ready"},
Refresh: resourceAwsRouteTableStateRefreshFunc(conn, d.Id()), Refresh: resourceAwsRouteTableStateRefreshFunc(conn, d.Id()),
Timeout: 5 * time.Minute, Timeout: 10 * time.Minute,
} }
if _, err := stateConf.WaitForState(); err != nil { if _, err := stateConf.WaitForState(); err != nil {
return fmt.Errorf( return fmt.Errorf(
@ -253,7 +253,7 @@ func resourceAwsSecurityGroupCreate(d *schema.ResourceData, meta interface{}) er
Pending: []string{""}, Pending: []string{""},
Target: []string{"exists"}, Target: []string{"exists"},
Refresh: SGStateRefreshFunc(conn, d.Id()), Refresh: SGStateRefreshFunc(conn, d.Id()),
Timeout: 3 * time.Minute, Timeout: 5 * time.Minute,
} }
resp, err := stateConf.WaitForState() resp, err := stateConf.WaitForState()
@ -19,6 +19,11 @@ func resourceAwsSpotInstanceRequest() *schema.Resource {
Delete: resourceAwsSpotInstanceRequestDelete,
Update: resourceAwsSpotInstanceRequestUpdate,
Timeouts: &schema.ResourceTimeout{
Create: schema.DefaultTimeout(10 * time.Minute),
Delete: schema.DefaultTimeout(10 * time.Minute),
},
Schema: func() map[string]*schema.Schema {
// The Spot Instance Request Schema is based on the AWS Instance schema.
s := resourceAwsInstance().Schema
@ -157,7 +162,7 @@ func resourceAwsSpotInstanceRequestCreate(d *schema.ResourceData, meta interface
Pending: []string{"start", "pending-evaluation", "pending-fulfillment"},
Target: []string{"fulfilled"},
Refresh: SpotInstanceStateRefreshFunc(conn, sir),
-Timeout: 10 * time.Minute,
Timeout: d.Timeout(schema.TimeoutCreate),
Delay: 10 * time.Second,
MinTimeout: 3 * time.Second,
}
@ -328,7 +333,7 @@ func resourceAwsSpotInstanceRequestDelete(d *schema.ResourceData, meta interface
if instanceId := d.Get("spot_instance_id").(string); instanceId != "" {
log.Printf("[INFO] Terminating instance: %s", instanceId)
-if err := awsTerminateInstance(conn, instanceId); err != nil {
if err := awsTerminateInstance(conn, instanceId, d); err != nil {
return fmt.Errorf("Error terminating spot instance: %s", err)
}
}
@ -68,7 +68,6 @@ func resourceAwsSsmMaintenanceWindowCreate(d *schema.ResourceData, meta interfac
}
d.SetId(*resp.WindowId)
return resourceAwsSsmMaintenanceWindowRead(d, meta)
}
@ -114,38 +113,21 @@ func resourceAwsSsmMaintenanceWindowUpdate(d *schema.ResourceData, meta interfac
func resourceAwsSsmMaintenanceWindowRead(d *schema.ResourceData, meta interface{}) error {
ssmconn := meta.(*AWSClient).ssmconn
-params := &ssm.DescribeMaintenanceWindowsInput{
-Filters: []*ssm.MaintenanceWindowFilter{
-{
-Key: aws.String("Name"),
-Values: []*string{aws.String(d.Get("name").(string))},
-},
-},
params := &ssm.GetMaintenanceWindowInput{
WindowId: aws.String(d.Id()),
}
-resp, err := ssmconn.DescribeMaintenanceWindows(params)
resp, err := ssmconn.GetMaintenanceWindow(params)
if err != nil {
return err
}
-found := false
-for _, window := range resp.WindowIdentities {
-if *window.WindowId == d.Id() {
-found = true
-d.Set("name", window.Name)
-d.Set("cutoff", window.Cutoff)
-d.Set("duration", window.Duration)
-d.Set("enabled", window.Enabled)
-}
-}
-if !found {
-log.Printf("[INFO] Cannot find the SSM Maintenance Window %q. Removing from state", d.Get("name").(string))
-d.SetId("")
-return nil
-}
d.Set("name", resp.Name)
d.Set("cutoff", resp.Cutoff)
d.Set("duration", resp.Duration)
d.Set("enabled", resp.Enabled)
d.Set("allow_unassociated_targets", resp.AllowUnassociatedTargets)
d.Set("schedule", resp.Schedule)
return nil
}
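Note that the direct GetMaintenanceWindow call changes the not-found behavior: the old Describe-and-filter loop detected a missing window from an empty result set and dropped it from state, whereas the new Read returns whatever error the API produces. A sketch of how an error-code guard could look, so Read could call d.SetId("") and return nil instead; this guard is not part of the change above, and the error-code name is an assumption about the SSM API:

package aws

import "github.com/aws/aws-sdk-go/aws/awserr"

// Assumed sketch, not part of this changeset: with a direct
// GetMaintenanceWindow call, a deleted window surfaces as an API error
// rather than an empty result set, so restoring the old remove-from-state
// behavior would hinge on inspecting the error code before returning it.
// The "DoesNotExistException" code name is an assumption about the SSM API.
func isMaintenanceWindowGone(err error) bool {
	awsErr, ok := err.(awserr.Error)
	return ok && awsErr.Code() == "DoesNotExistException"
}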
@ -1,6 +1,7 @@
package aws
import (
"fmt"
"log"
"github.com/aws/aws-sdk-go/aws"
@ -106,34 +107,47 @@ func resourceAwsWafByteMatchSetRead(d *schema.ResourceData, meta interface{}) er
}
d.Set("name", resp.ByteMatchSet.Name)
d.Set("byte_match_tuples", flattenWafByteMatchTuples(resp.ByteMatchSet.ByteMatchTuples))
return nil
}
func resourceAwsWafByteMatchSetUpdate(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).wafconn
log.Printf("[INFO] Updating ByteMatchSet: %s", d.Get("name").(string))
-err := updateByteMatchSetResource(d, meta, waf.ChangeActionInsert)
if d.HasChange("byte_match_tuples") {
o, n := d.GetChange("byte_match_tuples")
oldT, newT := o.(*schema.Set).List(), n.(*schema.Set).List()
err := updateByteMatchSetResource(d.Id(), oldT, newT, conn)
if err != nil {
return errwrap.Wrapf("[ERROR] Error updating ByteMatchSet: {{err}}", err)
}
}
return resourceAwsWafByteMatchSetRead(d, meta)
}
func resourceAwsWafByteMatchSetDelete(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).wafconn
-log.Printf("[INFO] Deleting ByteMatchSet: %s", d.Get("name").(string))
-err := updateByteMatchSetResource(d, meta, waf.ChangeActionDelete)
oldTuples := d.Get("byte_match_tuples").(*schema.Set).List()
if len(oldTuples) > 0 {
noTuples := []interface{}{}
err := updateByteMatchSetResource(d.Id(), oldTuples, noTuples, conn)
if err != nil {
-return errwrap.Wrapf("[ERROR] Error deleting ByteMatchSet: {{err}}", err)
return fmt.Errorf("Error updating ByteMatchSet: %s", err)
}
}
wr := newWafRetryer(conn, "global")
-_, err = wr.RetryWithToken(func(token *string) (interface{}, error) {
_, err := wr.RetryWithToken(func(token *string) (interface{}, error) {
req := &waf.DeleteByteMatchSetInput{
ChangeToken: token,
ByteMatchSetId: aws.String(d.Id()),
}
log.Printf("[INFO] Deleting WAF ByteMatchSet: %s", req)
return conn.DeleteByteMatchSet(req)
})
if err != nil {
@ -143,29 +157,13 @@ func resourceAwsWafByteMatchSetDelete(d *schema.ResourceData, meta interface{})
return nil
}
-func updateByteMatchSetResource(d *schema.ResourceData, meta interface{}, ChangeAction string) error {
func updateByteMatchSetResource(id string, oldT, newT []interface{}, conn *waf.WAF) error {
-conn := meta.(*AWSClient).wafconn
wr := newWafRetryer(conn, "global")
_, err := wr.RetryWithToken(func(token *string) (interface{}, error) {
req := &waf.UpdateByteMatchSetInput{
ChangeToken: token,
-ByteMatchSetId: aws.String(d.Id()),
ByteMatchSetId: aws.String(id),
Updates: diffWafByteMatchSetTuples(oldT, newT),
}
-ByteMatchTuples := d.Get("byte_match_tuples").(*schema.Set)
-for _, ByteMatchTuple := range ByteMatchTuples.List() {
-ByteMatch := ByteMatchTuple.(map[string]interface{})
-ByteMatchUpdate := &waf.ByteMatchSetUpdate{
-Action: aws.String(ChangeAction),
-ByteMatchTuple: &waf.ByteMatchTuple{
-FieldToMatch: expandFieldToMatch(ByteMatch["field_to_match"].(*schema.Set).List()[0].(map[string]interface{})),
-PositionalConstraint: aws.String(ByteMatch["positional_constraint"].(string)),
-TargetString: []byte(ByteMatch["target_string"].(string)),
-TextTransformation: aws.String(ByteMatch["text_transformation"].(string)),
-},
-}
-req.Updates = append(req.Updates, ByteMatchUpdate)
-}
return conn.UpdateByteMatchSet(req)
@ -177,6 +175,23 @@ func updateByteMatchSetResource(d *schema.ResourceData, meta interface{}, Change
return nil
}
func flattenWafByteMatchTuples(bmt []*waf.ByteMatchTuple) []interface{} {
out := make([]interface{}, len(bmt), len(bmt))
for i, t := range bmt {
m := make(map[string]interface{})
if t.FieldToMatch != nil {
m["field_to_match"] = flattenFieldToMatch(t.FieldToMatch)
}
m["positional_constraint"] = *t.PositionalConstraint
m["target_string"] = string(t.TargetString)
m["text_transformation"] = *t.TextTransformation
out[i] = m
}
return out
}
func expandFieldToMatch(d map[string]interface{}) *waf.FieldToMatch {
return &waf.FieldToMatch{
Type: aws.String(d["type"].(string)),
@ -184,9 +199,51 @@ func expandFieldToMatch(d map[string]interface{}) *waf.FieldToMatch {
}
}
-func flattenFieldToMatch(fm *waf.FieldToMatch) map[string]interface{} {
func flattenFieldToMatch(fm *waf.FieldToMatch) []interface{} {
m := make(map[string]interface{})
if fm.Data != nil {
m["data"] = *fm.Data
}
if fm.Type != nil {
m["type"] = *fm.Type
}
-return m
return []interface{}{m}
}
func diffWafByteMatchSetTuples(oldT, newT []interface{}) []*waf.ByteMatchSetUpdate {
updates := make([]*waf.ByteMatchSetUpdate, 0)
for _, ot := range oldT {
tuple := ot.(map[string]interface{})
if idx, contains := sliceContainsMap(newT, tuple); contains {
newT = append(newT[:idx], newT[idx+1:]...)
continue
}
updates = append(updates, &waf.ByteMatchSetUpdate{
Action: aws.String(waf.ChangeActionDelete),
ByteMatchTuple: &waf.ByteMatchTuple{
FieldToMatch: expandFieldToMatch(tuple["field_to_match"].(*schema.Set).List()[0].(map[string]interface{})),
PositionalConstraint: aws.String(tuple["positional_constraint"].(string)),
TargetString: []byte(tuple["target_string"].(string)),
TextTransformation: aws.String(tuple["text_transformation"].(string)),
},
})
}
for _, nt := range newT {
tuple := nt.(map[string]interface{})
updates = append(updates, &waf.ByteMatchSetUpdate{
Action: aws.String(waf.ChangeActionInsert),
ByteMatchTuple: &waf.ByteMatchTuple{
FieldToMatch: expandFieldToMatch(tuple["field_to_match"].(*schema.Set).List()[0].(map[string]interface{})),
PositionalConstraint: aws.String(tuple["positional_constraint"].(string)),
TargetString: []byte(tuple["target_string"].(string)),
TextTransformation: aws.String(tuple["text_transformation"].(string)),
},
})
}
return updates
}
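Both loops in diffWafByteMatchSetTuples rely on sliceContainsMap to decide whether an old tuple survives into the new configuration; the helper itself is outside this hunk, so the following is an assumed sketch of its contract rather than the committed implementation:

package aws

import "reflect"

// sliceContainsMap is referenced above but defined outside this hunk; this
// is an assumed sketch of its behavior based on how the diff helpers call
// it. It reports the index of the first element of l that deep-equals m.
func sliceContainsMap(l []interface{}, m map[string]interface{}) (int, bool) {
	for i, t := range l {
		if reflect.DeepEqual(m, t.(map[string]interface{})) {
			return i, true
		}
	}
	return -1, false
}

Under that contract, old tuples with an exact counterpart in the new list are skipped and consumed, every other old tuple becomes a delete, and whatever remains of the new list becomes inserts.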
@ -27,10 +27,20 @@ func TestAccAWSWafByteMatchSet_basic(t *testing.T) {
Config: testAccAWSWafByteMatchSetConfig(byteMatchSet),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSWafByteMatchSetExists("aws_waf_byte_match_set.byte_set", &v),
-resource.TestCheckResourceAttr(
-"aws_waf_byte_match_set.byte_set", "name", byteMatchSet),
-resource.TestCheckResourceAttr(
-"aws_waf_byte_match_set.byte_set", "byte_match_tuples.#", "2"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "name", byteMatchSet),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.#", "2"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.2174619346.field_to_match.2991901334.type", "HEADER"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.2174619346.positional_constraint", "CONTAINS"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.2174619346.target_string", "badrefer1"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.2174619346.text_transformation", "NONE"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.839525137.field_to_match.#", "1"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.839525137.field_to_match.2991901334.data", "referer"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.839525137.field_to_match.2991901334.type", "HEADER"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.839525137.positional_constraint", "CONTAINS"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.839525137.target_string", "badrefer2"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.839525137.text_transformation", "NONE"),
),
},
},
@ -71,6 +81,82 @@ func TestAccAWSWafByteMatchSet_changeNameForceNew(t *testing.T) {
})
}
func TestAccAWSWafByteMatchSet_changeTuples(t *testing.T) {
var before, after waf.ByteMatchSet
byteMatchSetName := fmt.Sprintf("byteMatchSet-%s", acctest.RandString(5))
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSWafByteMatchSetDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSWafByteMatchSetConfig(byteMatchSetName),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSWafByteMatchSetExists("aws_waf_byte_match_set.byte_set", &before),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "name", byteMatchSetName),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.#", "2"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.2174619346.field_to_match.#", "1"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.2174619346.field_to_match.2991901334.data", "referer"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.2174619346.field_to_match.2991901334.type", "HEADER"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.2174619346.positional_constraint", "CONTAINS"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.2174619346.target_string", "badrefer1"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.2174619346.text_transformation", "NONE"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.839525137.field_to_match.#", "1"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.839525137.field_to_match.2991901334.data", "referer"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.839525137.field_to_match.2991901334.type", "HEADER"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.839525137.positional_constraint", "CONTAINS"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.839525137.target_string", "badrefer2"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.839525137.text_transformation", "NONE"),
),
},
{
Config: testAccAWSWafByteMatchSetConfig_changeTuples(byteMatchSetName),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSWafByteMatchSetExists("aws_waf_byte_match_set.byte_set", &after),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "name", byteMatchSetName),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.#", "2"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.2174619346.field_to_match.#", "1"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.2174619346.field_to_match.2991901334.data", "referer"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.2174619346.field_to_match.2991901334.type", "HEADER"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.2174619346.positional_constraint", "CONTAINS"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.2174619346.target_string", "badrefer1"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.2174619346.text_transformation", "NONE"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.4224486115.field_to_match.#", "1"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.4224486115.field_to_match.4253810390.data", "GET"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.4224486115.field_to_match.4253810390.type", "METHOD"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.4224486115.positional_constraint", "CONTAINS_WORD"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.4224486115.target_string", "blah"),
resource.TestCheckResourceAttr("aws_waf_byte_match_set.byte_set", "byte_match_tuples.4224486115.text_transformation", "URL_DECODE"),
),
},
},
})
}
func TestAccAWSWafByteMatchSet_noTuples(t *testing.T) {
var byteSet waf.ByteMatchSet
byteMatchSetName := fmt.Sprintf("byteMatchSet-%s", acctest.RandString(5))
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSWafByteMatchSetDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSWafByteMatchSetConfig_noTuples(byteMatchSetName),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSWafByteMatchSetExists("aws_waf_byte_match_set.byte_set", &byteSet),
resource.TestCheckResourceAttr(
"aws_waf_byte_match_set.byte_set", "name", byteMatchSetName),
resource.TestCheckResourceAttr(
"aws_waf_byte_match_set.byte_set", "byte_match_tuples.#", "0"),
),
},
},
})
}
func TestAccAWSWafByteMatchSet_disappears(t *testing.T) {
var v waf.ByteMatchSet
byteMatchSet := fmt.Sprintf("byteMatchSet-%s", acctest.RandString(5))
@ -248,3 +334,36 @@ resource "aws_waf_byte_match_set" "byte_set" {
}
}`, name)
}
func testAccAWSWafByteMatchSetConfig_changeTuples(name string) string {
return fmt.Sprintf(`
resource "aws_waf_byte_match_set" "byte_set" {
name = "%s"
byte_match_tuples {
text_transformation = "NONE"
target_string = "badrefer1"
positional_constraint = "CONTAINS"
field_to_match {
type = "HEADER"
data = "referer"
}
}
byte_match_tuples {
text_transformation = "URL_DECODE"
target_string = "blah"
positional_constraint = "CONTAINS_WORD"
field_to_match {
type = "METHOD"
data = "GET"
}
}
}`, name)
}
func testAccAWSWafByteMatchSetConfig_noTuples(name string) string {
return fmt.Sprintf(`
resource "aws_waf_byte_match_set" "byte_set" {
name = "%s"
}`, name)
}
@ -25,7 +25,7 @@ func resourceAwsWafSizeConstraintSet() *schema.Resource {
},
"size_constraints": &schema.Schema{
Type: schema.TypeSet,
-Required: true,
Optional: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"field_to_match": {
@ -107,30 +107,42 @@ func resourceAwsWafSizeConstraintSetRead(d *schema.ResourceData, meta interface{
}
d.Set("name", resp.SizeConstraintSet.Name)
d.Set("size_constraints", flattenWafSizeConstraints(resp.SizeConstraintSet.SizeConstraints))
return nil
}
func resourceAwsWafSizeConstraintSetUpdate(d *schema.ResourceData, meta interface{}) error {
-log.Printf("[INFO] Updating SizeConstraintSet: %s", d.Get("name").(string))
-err := updateSizeConstraintSetResource(d, meta, waf.ChangeActionInsert)
conn := meta.(*AWSClient).wafconn
if d.HasChange("size_constraints") {
o, n := d.GetChange("size_constraints")
oldS, newS := o.(*schema.Set).List(), n.(*schema.Set).List()
err := updateSizeConstraintSetResource(d.Id(), oldS, newS, conn)
if err != nil {
return errwrap.Wrapf("[ERROR] Error updating SizeConstraintSet: {{err}}", err)
}
}
return resourceAwsWafSizeConstraintSetRead(d, meta)
}
func resourceAwsWafSizeConstraintSetDelete(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).wafconn
-log.Printf("[INFO] Deleting SizeConstraintSet: %s", d.Get("name").(string))
-err := updateSizeConstraintSetResource(d, meta, waf.ChangeActionDelete)
oldConstraints := d.Get("size_constraints").(*schema.Set).List()
if len(oldConstraints) > 0 {
noConstraints := []interface{}{}
err := updateSizeConstraintSetResource(d.Id(), oldConstraints, noConstraints, conn)
if err != nil {
return errwrap.Wrapf("[ERROR] Error deleting SizeConstraintSet: {{err}}", err)
}
}
wr := newWafRetryer(conn, "global")
-_, err = wr.RetryWithToken(func(token *string) (interface{}, error) {
_, err := wr.RetryWithToken(func(token *string) (interface{}, error) {
req := &waf.DeleteSizeConstraintSetInput{
ChangeToken: token,
SizeConstraintSetId: aws.String(d.Id()),
@ -144,31 +156,16 @@ func resourceAwsWafSizeConstraintSetDelete(d *schema.ResourceData, meta interfac
return nil
}
-func updateSizeConstraintSetResource(d *schema.ResourceData, meta interface{}, ChangeAction string) error {
func updateSizeConstraintSetResource(id string, oldS, newS []interface{}, conn *waf.WAF) error {
-conn := meta.(*AWSClient).wafconn
wr := newWafRetryer(conn, "global")
_, err := wr.RetryWithToken(func(token *string) (interface{}, error) {
req := &waf.UpdateSizeConstraintSetInput{
ChangeToken: token,
-SizeConstraintSetId: aws.String(d.Id()),
SizeConstraintSetId: aws.String(id),
Updates: diffWafSizeConstraints(oldS, newS),
}
-sizeConstraints := d.Get("size_constraints").(*schema.Set)
-for _, sizeConstraint := range sizeConstraints.List() {
-sc := sizeConstraint.(map[string]interface{})
-sizeConstraintUpdate := &waf.SizeConstraintSetUpdate{
-Action: aws.String(ChangeAction),
-SizeConstraint: &waf.SizeConstraint{
-FieldToMatch: expandFieldToMatch(sc["field_to_match"].(*schema.Set).List()[0].(map[string]interface{})),
-ComparisonOperator: aws.String(sc["comparison_operator"].(string)),
-Size: aws.Int64(int64(sc["size"].(int))),
-TextTransformation: aws.String(sc["text_transformation"].(string)),
-},
-}
-req.Updates = append(req.Updates, sizeConstraintUpdate)
-}
log.Printf("[INFO] Updating WAF Size Constraint constraints: %s", req)
return conn.UpdateSizeConstraintSet(req)
})
if err != nil {
@ -177,3 +174,56 @@ func updateSizeConstraintSetResource(d *schema.ResourceData, meta interface{}, C
return nil
}
func flattenWafSizeConstraints(sc []*waf.SizeConstraint) []interface{} {
out := make([]interface{}, len(sc), len(sc))
for i, c := range sc {
m := make(map[string]interface{})
m["comparison_operator"] = *c.ComparisonOperator
if c.FieldToMatch != nil {
m["field_to_match"] = flattenFieldToMatch(c.FieldToMatch)
}
m["size"] = *c.Size
m["text_transformation"] = *c.TextTransformation
out[i] = m
}
return out
}
func diffWafSizeConstraints(oldS, newS []interface{}) []*waf.SizeConstraintSetUpdate {
updates := make([]*waf.SizeConstraintSetUpdate, 0)
for _, os := range oldS {
constraint := os.(map[string]interface{})
if idx, contains := sliceContainsMap(newS, constraint); contains {
newS = append(newS[:idx], newS[idx+1:]...)
continue
}
updates = append(updates, &waf.SizeConstraintSetUpdate{
Action: aws.String(waf.ChangeActionDelete),
SizeConstraint: &waf.SizeConstraint{
FieldToMatch: expandFieldToMatch(constraint["field_to_match"].(*schema.Set).List()[0].(map[string]interface{})),
ComparisonOperator: aws.String(constraint["comparison_operator"].(string)),
Size: aws.Int64(int64(constraint["size"].(int))),
TextTransformation: aws.String(constraint["text_transformation"].(string)),
},
})
}
for _, ns := range newS {
constraint := ns.(map[string]interface{})
updates = append(updates, &waf.SizeConstraintSetUpdate{
Action: aws.String(waf.ChangeActionInsert),
SizeConstraint: &waf.SizeConstraint{
FieldToMatch: expandFieldToMatch(constraint["field_to_match"].(*schema.Set).List()[0].(map[string]interface{})),
ComparisonOperator: aws.String(constraint["comparison_operator"].(string)),
Size: aws.Int64(int64(constraint["size"].(int))),
TextTransformation: aws.String(constraint["text_transformation"].(string)),
},
})
}
return updates
}
@ -31,6 +31,18 @@ func TestAccAWSWafSizeConstraintSet_basic(t *testing.T) {
"aws_waf_size_constraint_set.size_constraint_set", "name", sizeConstraintSet), "aws_waf_size_constraint_set.size_constraint_set", "name", sizeConstraintSet),
resource.TestCheckResourceAttr( resource.TestCheckResourceAttr(
"aws_waf_size_constraint_set.size_constraint_set", "size_constraints.#", "1"), "aws_waf_size_constraint_set.size_constraint_set", "size_constraints.#", "1"),
resource.TestCheckResourceAttr(
"aws_waf_size_constraint_set.size_constraint_set", "size_constraints.2029852522.comparison_operator", "EQ"),
resource.TestCheckResourceAttr(
"aws_waf_size_constraint_set.size_constraint_set", "size_constraints.2029852522.field_to_match.#", "1"),
resource.TestCheckResourceAttr(
"aws_waf_size_constraint_set.size_constraint_set", "size_constraints.2029852522.field_to_match.281401076.data", ""),
resource.TestCheckResourceAttr(
"aws_waf_size_constraint_set.size_constraint_set", "size_constraints.2029852522.field_to_match.281401076.type", "BODY"),
resource.TestCheckResourceAttr(
"aws_waf_size_constraint_set.size_constraint_set", "size_constraints.2029852522.size", "4096"),
resource.TestCheckResourceAttr(
"aws_waf_size_constraint_set.size_constraint_set", "size_constraints.2029852522.text_transformation", "NONE"),
),
},
},
@ -92,6 +104,86 @@ func TestAccAWSWafSizeConstraintSet_disappears(t *testing.T) {
})
}
func TestAccAWSWafSizeConstraintSet_changeConstraints(t *testing.T) {
var before, after waf.SizeConstraintSet
setName := fmt.Sprintf("sizeConstraintSet-%s", acctest.RandString(5))
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSWafSizeConstraintSetDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSWafSizeConstraintSetConfig(setName),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSWafSizeConstraintSetExists("aws_waf_size_constraint_set.size_constraint_set", &before),
resource.TestCheckResourceAttr(
"aws_waf_size_constraint_set.size_constraint_set", "name", setName),
resource.TestCheckResourceAttr(
"aws_waf_size_constraint_set.size_constraint_set", "size_constraints.#", "1"),
resource.TestCheckResourceAttr(
"aws_waf_size_constraint_set.size_constraint_set", "size_constraints.2029852522.comparison_operator", "EQ"),
resource.TestCheckResourceAttr(
"aws_waf_size_constraint_set.size_constraint_set", "size_constraints.2029852522.field_to_match.#", "1"),
resource.TestCheckResourceAttr(
"aws_waf_size_constraint_set.size_constraint_set", "size_constraints.2029852522.field_to_match.281401076.data", ""),
resource.TestCheckResourceAttr(
"aws_waf_size_constraint_set.size_constraint_set", "size_constraints.2029852522.field_to_match.281401076.type", "BODY"),
resource.TestCheckResourceAttr(
"aws_waf_size_constraint_set.size_constraint_set", "size_constraints.2029852522.size", "4096"),
resource.TestCheckResourceAttr(
"aws_waf_size_constraint_set.size_constraint_set", "size_constraints.2029852522.text_transformation", "NONE"),
),
},
{
Config: testAccAWSWafSizeConstraintSetConfig_changeConstraints(setName),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSWafSizeConstraintSetExists("aws_waf_size_constraint_set.size_constraint_set", &after),
resource.TestCheckResourceAttr(
"aws_waf_size_constraint_set.size_constraint_set", "name", setName),
resource.TestCheckResourceAttr(
"aws_waf_size_constraint_set.size_constraint_set", "size_constraints.#", "1"),
resource.TestCheckResourceAttr(
"aws_waf_size_constraint_set.size_constraint_set", "size_constraints.3222308386.comparison_operator", "GE"),
resource.TestCheckResourceAttr(
"aws_waf_size_constraint_set.size_constraint_set", "size_constraints.3222308386.field_to_match.#", "1"),
resource.TestCheckResourceAttr(
"aws_waf_size_constraint_set.size_constraint_set", "size_constraints.3222308386.field_to_match.281401076.data", ""),
resource.TestCheckResourceAttr(
"aws_waf_size_constraint_set.size_constraint_set", "size_constraints.3222308386.field_to_match.281401076.type", "BODY"),
resource.TestCheckResourceAttr(
"aws_waf_size_constraint_set.size_constraint_set", "size_constraints.3222308386.size", "1024"),
resource.TestCheckResourceAttr(
"aws_waf_size_constraint_set.size_constraint_set", "size_constraints.3222308386.text_transformation", "NONE"),
),
},
},
})
}
func TestAccAWSWafSizeConstraintSet_noConstraints(t *testing.T) {
var ipset waf.SizeConstraintSet
setName := fmt.Sprintf("sizeConstraintSet-%s", acctest.RandString(5))
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSWafSizeConstraintSetDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSWafSizeConstraintSetConfig_noConstraints(setName),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSWafSizeConstraintSetExists("aws_waf_size_constraint_set.size_constraint_set", &ipset),
resource.TestCheckResourceAttr(
"aws_waf_size_constraint_set.size_constraint_set", "name", setName),
resource.TestCheckResourceAttr(
"aws_waf_size_constraint_set.size_constraint_set", "size_constraints.#", "0"),
),
},
},
})
}
func testAccCheckAWSWafSizeConstraintSetDisappears(v *waf.SizeConstraintSet) resource.TestCheckFunc {
return func(s *terraform.State) error {
conn := testAccProvider.Meta().(*AWSClient).wafconn
@ -224,3 +316,25 @@ resource "aws_waf_size_constraint_set" "size_constraint_set" {
}
}`, name)
}
func testAccAWSWafSizeConstraintSetConfig_changeConstraints(name string) string {
return fmt.Sprintf(`
resource "aws_waf_size_constraint_set" "size_constraint_set" {
name = "%s"
size_constraints {
text_transformation = "NONE"
comparison_operator = "GE"
size = "1024"
field_to_match {
type = "BODY"
}
}
}`, name)
}
func testAccAWSWafSizeConstraintSetConfig_noConstraints(name string) string {
return fmt.Sprintf(`
resource "aws_waf_size_constraint_set" "size_constraint_set" {
name = "%s"
}`, name)
}
@ -98,30 +98,42 @@ func resourceAwsWafSqlInjectionMatchSetRead(d *schema.ResourceData, meta interfa
}
d.Set("name", resp.SqlInjectionMatchSet.Name)
d.Set("sql_injection_match_tuples", resp.SqlInjectionMatchSet.SqlInjectionMatchTuples)
return nil
}
func resourceAwsWafSqlInjectionMatchSetUpdate(d *schema.ResourceData, meta interface{}) error {
-log.Printf("[INFO] Updating SqlInjectionMatchSet: %s", d.Get("name").(string))
-err := updateSqlInjectionMatchSetResource(d, meta, waf.ChangeActionInsert)
conn := meta.(*AWSClient).wafconn
if d.HasChange("sql_injection_match_tuples") {
o, n := d.GetChange("sql_injection_match_tuples")
oldT, newT := o.(*schema.Set).List(), n.(*schema.Set).List()
err := updateSqlInjectionMatchSetResource(d.Id(), oldT, newT, conn)
if err != nil {
return errwrap.Wrapf("[ERROR] Error updating SqlInjectionMatchSet: {{err}}", err)
}
}
return resourceAwsWafSqlInjectionMatchSetRead(d, meta)
}
func resourceAwsWafSqlInjectionMatchSetDelete(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).wafconn
-log.Printf("[INFO] Deleting SqlInjectionMatchSet: %s", d.Get("name").(string))
-err := updateSqlInjectionMatchSetResource(d, meta, waf.ChangeActionDelete)
oldTuples := d.Get("sql_injection_match_tuples").(*schema.Set).List()
if len(oldTuples) > 0 {
noTuples := []interface{}{}
err := updateSqlInjectionMatchSetResource(d.Id(), oldTuples, noTuples, conn)
if err != nil {
return errwrap.Wrapf("[ERROR] Error deleting SqlInjectionMatchSet: {{err}}", err)
}
}
wr := newWafRetryer(conn, "global")
-_, err = wr.RetryWithToken(func(token *string) (interface{}, error) {
_, err := wr.RetryWithToken(func(token *string) (interface{}, error) {
req := &waf.DeleteSqlInjectionMatchSetInput{
ChangeToken: token,
SqlInjectionMatchSetId: aws.String(d.Id()),
@ -136,29 +148,16 @@ func resourceAwsWafSqlInjectionMatchSetDelete(d *schema.ResourceData, meta inter
return nil
}
-func updateSqlInjectionMatchSetResource(d *schema.ResourceData, meta interface{}, ChangeAction string) error {
func updateSqlInjectionMatchSetResource(id string, oldT, newT []interface{}, conn *waf.WAF) error {
-conn := meta.(*AWSClient).wafconn
wr := newWafRetryer(conn, "global")
_, err := wr.RetryWithToken(func(token *string) (interface{}, error) {
req := &waf.UpdateSqlInjectionMatchSetInput{
ChangeToken: token,
-SqlInjectionMatchSetId: aws.String(d.Id()),
SqlInjectionMatchSetId: aws.String(id),
Updates: diffWafSqlInjectionMatchTuples(oldT, newT),
}
-sqlInjectionMatchTuples := d.Get("sql_injection_match_tuples").(*schema.Set)
-for _, sqlInjectionMatchTuple := range sqlInjectionMatchTuples.List() {
-simt := sqlInjectionMatchTuple.(map[string]interface{})
-sizeConstraintUpdate := &waf.SqlInjectionMatchSetUpdate{
-Action: aws.String(ChangeAction),
-SqlInjectionMatchTuple: &waf.SqlInjectionMatchTuple{
-FieldToMatch: expandFieldToMatch(simt["field_to_match"].(*schema.Set).List()[0].(map[string]interface{})),
-TextTransformation: aws.String(simt["text_transformation"].(string)),
-},
-}
-req.Updates = append(req.Updates, sizeConstraintUpdate)
-}
log.Printf("[INFO] Updating SqlInjectionMatchSet: %s", req)
return conn.UpdateSqlInjectionMatchSet(req)
})
if err != nil {
@ -167,3 +166,49 @@ func updateSqlInjectionMatchSetResource(d *schema.ResourceData, meta interface{}
return nil
}
func flattenWafSqlInjectionMatchTuples(ts []*waf.SqlInjectionMatchTuple) []interface{} {
out := make([]interface{}, len(ts), len(ts))
for i, t := range ts {
m := make(map[string]interface{})
m["text_transformation"] = *t.TextTransformation
m["field_to_match"] = flattenFieldToMatch(t.FieldToMatch)
out[i] = m
}
return out
}
func diffWafSqlInjectionMatchTuples(oldT, newT []interface{}) []*waf.SqlInjectionMatchSetUpdate {
updates := make([]*waf.SqlInjectionMatchSetUpdate, 0)
for _, od := range oldT {
tuple := od.(map[string]interface{})
if idx, contains := sliceContainsMap(newT, tuple); contains {
newT = append(newT[:idx], newT[idx+1:]...)
continue
}
updates = append(updates, &waf.SqlInjectionMatchSetUpdate{
Action: aws.String(waf.ChangeActionDelete),
SqlInjectionMatchTuple: &waf.SqlInjectionMatchTuple{
FieldToMatch: expandFieldToMatch(tuple["field_to_match"].(*schema.Set).List()[0].(map[string]interface{})),
TextTransformation: aws.String(tuple["text_transformation"].(string)),
},
})
}
for _, nd := range newT {
tuple := nd.(map[string]interface{})
updates = append(updates, &waf.SqlInjectionMatchSetUpdate{
Action: aws.String(waf.ChangeActionInsert),
SqlInjectionMatchTuple: &waf.SqlInjectionMatchTuple{
FieldToMatch: expandFieldToMatch(tuple["field_to_match"].(*schema.Set).List()[0].(map[string]interface{})),
TextTransformation: aws.String(tuple["text_transformation"].(string)),
},
})
}
return updates
}
@ -31,6 +31,14 @@ func TestAccAWSWafSqlInjectionMatchSet_basic(t *testing.T) {
"aws_waf_sql_injection_match_set.sql_injection_match_set", "name", sqlInjectionMatchSet), "aws_waf_sql_injection_match_set.sql_injection_match_set", "name", sqlInjectionMatchSet),
resource.TestCheckResourceAttr( resource.TestCheckResourceAttr(
"aws_waf_sql_injection_match_set.sql_injection_match_set", "sql_injection_match_tuples.#", "1"), "aws_waf_sql_injection_match_set.sql_injection_match_set", "sql_injection_match_tuples.#", "1"),
resource.TestCheckResourceAttr(
"aws_waf_sql_injection_match_set.sql_injection_match_set", "sql_injection_match_tuples.3367958210.field_to_match.#", "1"),
resource.TestCheckResourceAttr(
"aws_waf_sql_injection_match_set.sql_injection_match_set", "sql_injection_match_tuples.3367958210.field_to_match.2316364334.data", ""),
resource.TestCheckResourceAttr(
"aws_waf_sql_injection_match_set.sql_injection_match_set", "sql_injection_match_tuples.3367958210.field_to_match.2316364334.type", "QUERY_STRING"),
resource.TestCheckResourceAttr(
"aws_waf_sql_injection_match_set.sql_injection_match_set", "sql_injection_match_tuples.3367958210.text_transformation", "URL_DECODE"),
),
},
},
@ -92,6 +100,78 @@ func TestAccAWSWafSqlInjectionMatchSet_disappears(t *testing.T) {
})
}
func TestAccAWSWafSqlInjectionMatchSet_changeTuples(t *testing.T) {
var before, after waf.SqlInjectionMatchSet
setName := fmt.Sprintf("sqlInjectionMatchSet-%s", acctest.RandString(5))
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSWafSqlInjectionMatchSetDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSWafSqlInjectionMatchSetConfig(setName),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSWafSqlInjectionMatchSetExists("aws_waf_sql_injection_match_set.sql_injection_match_set", &before),
resource.TestCheckResourceAttr(
"aws_waf_sql_injection_match_set.sql_injection_match_set", "name", setName),
resource.TestCheckResourceAttr(
"aws_waf_sql_injection_match_set.sql_injection_match_set", "sql_injection_match_tuples.#", "1"),
resource.TestCheckResourceAttr(
"aws_waf_sql_injection_match_set.sql_injection_match_set", "sql_injection_match_tuples.3367958210.field_to_match.#", "1"),
resource.TestCheckResourceAttr(
"aws_waf_sql_injection_match_set.sql_injection_match_set", "sql_injection_match_tuples.3367958210.field_to_match.2316364334.data", ""),
resource.TestCheckResourceAttr(
"aws_waf_sql_injection_match_set.sql_injection_match_set", "sql_injection_match_tuples.3367958210.field_to_match.2316364334.type", "QUERY_STRING"),
resource.TestCheckResourceAttr(
"aws_waf_sql_injection_match_set.sql_injection_match_set", "sql_injection_match_tuples.3367958210.text_transformation", "URL_DECODE"),
),
},
{
Config: testAccAWSWafSqlInjectionMatchSetConfig_changeTuples(setName),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSWafSqlInjectionMatchSetExists("aws_waf_sql_injection_match_set.sql_injection_match_set", &after),
resource.TestCheckResourceAttr(
"aws_waf_sql_injection_match_set.sql_injection_match_set", "name", setName),
resource.TestCheckResourceAttr(
"aws_waf_sql_injection_match_set.sql_injection_match_set", "sql_injection_match_tuples.#", "1"),
resource.TestCheckResourceAttr(
"aws_waf_sql_injection_match_set.sql_injection_match_set", "sql_injection_match_tuples.3333510133.field_to_match.#", "1"),
resource.TestCheckResourceAttr(
"aws_waf_sql_injection_match_set.sql_injection_match_set", "sql_injection_match_tuples.3333510133.field_to_match.4253810390.data", "GET"),
resource.TestCheckResourceAttr(
"aws_waf_sql_injection_match_set.sql_injection_match_set", "sql_injection_match_tuples.3333510133.field_to_match.4253810390.type", "METHOD"),
resource.TestCheckResourceAttr(
"aws_waf_sql_injection_match_set.sql_injection_match_set", "sql_injection_match_tuples.3333510133.text_transformation", "NONE"),
),
},
},
})
}
func TestAccAWSWafSqlInjectionMatchSet_noTuples(t *testing.T) {
var ipset waf.SqlInjectionMatchSet
setName := fmt.Sprintf("sqlInjectionMatchSet-%s", acctest.RandString(5))
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSWafSqlInjectionMatchSetDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSWafSqlInjectionMatchSetConfig_noTuples(setName),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSWafSqlInjectionMatchSetExists("aws_waf_sql_injection_match_set.sql_injection_match_set", &ipset),
resource.TestCheckResourceAttr(
"aws_waf_sql_injection_match_set.sql_injection_match_set", "name", setName),
resource.TestCheckResourceAttr(
"aws_waf_sql_injection_match_set.sql_injection_match_set", "sql_injection_match_tuples.#", "0"),
),
},
},
})
}
func testAccCheckAWSWafSqlInjectionMatchSetDisappears(v *waf.SqlInjectionMatchSet) resource.TestCheckFunc {
return func(s *terraform.State) error {
conn := testAccProvider.Meta().(*AWSClient).wafconn
@ -218,3 +298,24 @@ resource "aws_waf_sql_injection_match_set" "sql_injection_match_set" {
}
}`, name)
}
func testAccAWSWafSqlInjectionMatchSetConfig_changeTuples(name string) string {
return fmt.Sprintf(`
resource "aws_waf_sql_injection_match_set" "sql_injection_match_set" {
name = "%s"
sql_injection_match_tuples {
text_transformation = "NONE"
field_to_match {
type = "METHOD"
data = "GET"
}
}
}`, name)
}
func testAccAWSWafSqlInjectionMatchSetConfig_noTuples(name string) string {
return fmt.Sprintf(`
resource "aws_waf_sql_injection_match_set" "sql_injection_match_set" {
name = "%s"
}`, name)
}
@ -1,6 +1,7 @@
package aws
import (
"fmt"
"log"
"github.com/aws/aws-sdk-go/aws"
@ -25,7 +26,7 @@ func resourceAwsWafXssMatchSet() *schema.Resource {
},
"xss_match_tuples": &schema.Schema{
Type: schema.TypeSet,
-Required: true,
Optional: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"field_to_match": {
@ -99,30 +100,41 @@ func resourceAwsWafXssMatchSetRead(d *schema.ResourceData, meta interface{}) err
}
d.Set("name", resp.XssMatchSet.Name)
d.Set("xss_match_tuples", flattenWafXssMatchTuples(resp.XssMatchSet.XssMatchTuples))
return nil
}
func resourceAwsWafXssMatchSetUpdate(d *schema.ResourceData, meta interface{}) error {
-log.Printf("[INFO] Updating XssMatchSet: %s", d.Get("name").(string))
-err := updateXssMatchSetResource(d, meta, waf.ChangeActionInsert)
conn := meta.(*AWSClient).wafconn
if d.HasChange("xss_match_tuples") {
o, n := d.GetChange("xss_match_tuples")
oldT, newT := o.(*schema.Set).List(), n.(*schema.Set).List()
err := updateXssMatchSetResource(d.Id(), oldT, newT, conn)
if err != nil {
return errwrap.Wrapf("[ERROR] Error updating XssMatchSet: {{err}}", err)
}
}
return resourceAwsWafXssMatchSetRead(d, meta)
}
func resourceAwsWafXssMatchSetDelete(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).wafconn
-log.Printf("[INFO] Deleting XssMatchSet: %s", d.Get("name").(string))
-err := updateXssMatchSetResource(d, meta, waf.ChangeActionDelete)
oldTuples := d.Get("xss_match_tuples").(*schema.Set).List()
if len(oldTuples) > 0 {
noTuples := []interface{}{}
err := updateXssMatchSetResource(d.Id(), oldTuples, noTuples, conn)
if err != nil {
-return errwrap.Wrapf("[ERROR] Error deleting XssMatchSet: {{err}}", err)
return fmt.Errorf("Error updating IPSetDescriptors: %s", err)
}
}
wr := newWafRetryer(conn, "global")
-_, err = wr.RetryWithToken(func(token *string) (interface{}, error) {
_, err := wr.RetryWithToken(func(token *string) (interface{}, error) {
req := &waf.DeleteXssMatchSetInput{
ChangeToken: token,
XssMatchSetId: aws.String(d.Id()),
@ -137,29 +149,16 @@ func resourceAwsWafXssMatchSetDelete(d *schema.ResourceData, meta interface{}) e
return nil
}
-func updateXssMatchSetResource(d *schema.ResourceData, meta interface{}, ChangeAction string) error {
func updateXssMatchSetResource(id string, oldT, newT []interface{}, conn *waf.WAF) error {
-conn := meta.(*AWSClient).wafconn
wr := newWafRetryer(conn, "global")
_, err := wr.RetryWithToken(func(token *string) (interface{}, error) {
req := &waf.UpdateXssMatchSetInput{
ChangeToken: token,
-XssMatchSetId: aws.String(d.Id()),
XssMatchSetId: aws.String(id),
Updates: diffWafXssMatchSetTuples(oldT, newT),
}
-xssMatchTuples := d.Get("xss_match_tuples").(*schema.Set)
-for _, xssMatchTuple := range xssMatchTuples.List() {
-xmt := xssMatchTuple.(map[string]interface{})
-xssMatchTupleUpdate := &waf.XssMatchSetUpdate{
-Action: aws.String(ChangeAction),
-XssMatchTuple: &waf.XssMatchTuple{
-FieldToMatch: expandFieldToMatch(xmt["field_to_match"].(*schema.Set).List()[0].(map[string]interface{})),
-TextTransformation: aws.String(xmt["text_transformation"].(string)),
-},
-}
-req.Updates = append(req.Updates, xssMatchTupleUpdate)
-}
log.Printf("[INFO] Updating XssMatchSet tuples: %s", req)
return conn.UpdateXssMatchSet(req)
})
if err != nil {
@ -168,3 +167,48 @@ func updateXssMatchSetResource(d *schema.ResourceData, meta interface{}, ChangeA
return nil
}
func flattenWafXssMatchTuples(ts []*waf.XssMatchTuple) []interface{} {
out := make([]interface{}, len(ts), len(ts))
for i, t := range ts {
m := make(map[string]interface{})
m["field_to_match"] = flattenFieldToMatch(t.FieldToMatch)
m["text_transformation"] = *t.TextTransformation
out[i] = m
}
return out
}
func diffWafXssMatchSetTuples(oldT, newT []interface{}) []*waf.XssMatchSetUpdate {
updates := make([]*waf.XssMatchSetUpdate, 0)
for _, od := range oldT {
tuple := od.(map[string]interface{})
if idx, contains := sliceContainsMap(newT, tuple); contains {
newT = append(newT[:idx], newT[idx+1:]...)
continue
}
updates = append(updates, &waf.XssMatchSetUpdate{
Action: aws.String(waf.ChangeActionDelete),
XssMatchTuple: &waf.XssMatchTuple{
FieldToMatch: expandFieldToMatch(tuple["field_to_match"].(*schema.Set).List()[0].(map[string]interface{})),
TextTransformation: aws.String(tuple["text_transformation"].(string)),
},
})
}
for _, nd := range newT {
tuple := nd.(map[string]interface{})
updates = append(updates, &waf.XssMatchSetUpdate{
Action: aws.String(waf.ChangeActionInsert),
XssMatchTuple: &waf.XssMatchTuple{
FieldToMatch: expandFieldToMatch(tuple["field_to_match"].(*schema.Set).List()[0].(map[string]interface{})),
TextTransformation: aws.String(tuple["text_transformation"].(string)),
},
})
}
return updates
}
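
diffWafXssMatchSetTuples leans on sliceContainsMap, a helper defined elsewhere in the provider and not shown in this diff. A minimal sketch of the behavior assumed here (a linear scan for a deep-equal map; requires "reflect") might look like:

func sliceContainsMap(l []interface{}, m map[string]interface{}) (int, bool) {
	// Sketch only; the provider's real helper lives in another file.
	for i, t := range l {
		if reflect.DeepEqual(t.(map[string]interface{}), m) {
			return i, true // found: the caller keeps the tuple and trims it from newT
		}
	}
	return -1, false // not found: the caller emits a ChangeActionDelete update
}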
View File
@@ -31,6 +31,22 @@ func TestAccAWSWafXssMatchSet_basic(t *testing.T) {
					"aws_waf_xss_match_set.xss_match_set", "name", xssMatchSet),
				resource.TestCheckResourceAttr(
					"aws_waf_xss_match_set.xss_match_set", "xss_match_tuples.#", "2"),
resource.TestCheckResourceAttr(
"aws_waf_xss_match_set.xss_match_set", "xss_match_tuples.2018581549.field_to_match.#", "1"),
resource.TestCheckResourceAttr(
"aws_waf_xss_match_set.xss_match_set", "xss_match_tuples.2018581549.field_to_match.2316364334.data", ""),
resource.TestCheckResourceAttr(
"aws_waf_xss_match_set.xss_match_set", "xss_match_tuples.2018581549.field_to_match.2316364334.type", "QUERY_STRING"),
resource.TestCheckResourceAttr(
"aws_waf_xss_match_set.xss_match_set", "xss_match_tuples.2018581549.text_transformation", "NONE"),
resource.TestCheckResourceAttr(
"aws_waf_xss_match_set.xss_match_set", "xss_match_tuples.2786024938.field_to_match.#", "1"),
resource.TestCheckResourceAttr(
"aws_waf_xss_match_set.xss_match_set", "xss_match_tuples.2786024938.field_to_match.3756326843.data", ""),
resource.TestCheckResourceAttr(
"aws_waf_xss_match_set.xss_match_set", "xss_match_tuples.2786024938.field_to_match.3756326843.type", "URI"),
resource.TestCheckResourceAttr(
"aws_waf_xss_match_set.xss_match_set", "xss_match_tuples.2786024938.text_transformation", "NONE"),
			),
		},
	},
@@ -92,6 +108,94 @@ func TestAccAWSWafXssMatchSet_disappears(t *testing.T) {
	})
}
func TestAccAWSWafXssMatchSet_changeTuples(t *testing.T) {
var before, after waf.XssMatchSet
setName := fmt.Sprintf("xssMatchSet-%s", acctest.RandString(5))
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSWafXssMatchSetDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSWafXssMatchSetConfig(setName),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSWafXssMatchSetExists("aws_waf_xss_match_set.xss_match_set", &before),
resource.TestCheckResourceAttr(
"aws_waf_xss_match_set.xss_match_set", "name", setName),
resource.TestCheckResourceAttr(
"aws_waf_xss_match_set.xss_match_set", "xss_match_tuples.#", "2"),
resource.TestCheckResourceAttr(
"aws_waf_xss_match_set.xss_match_set", "xss_match_tuples.2018581549.field_to_match.#", "1"),
resource.TestCheckResourceAttr(
"aws_waf_xss_match_set.xss_match_set", "xss_match_tuples.2018581549.field_to_match.2316364334.data", ""),
resource.TestCheckResourceAttr(
"aws_waf_xss_match_set.xss_match_set", "xss_match_tuples.2018581549.field_to_match.2316364334.type", "QUERY_STRING"),
resource.TestCheckResourceAttr(
"aws_waf_xss_match_set.xss_match_set", "xss_match_tuples.2018581549.text_transformation", "NONE"),
resource.TestCheckResourceAttr(
"aws_waf_xss_match_set.xss_match_set", "xss_match_tuples.2786024938.field_to_match.#", "1"),
resource.TestCheckResourceAttr(
"aws_waf_xss_match_set.xss_match_set", "xss_match_tuples.2786024938.field_to_match.3756326843.data", ""),
resource.TestCheckResourceAttr(
"aws_waf_xss_match_set.xss_match_set", "xss_match_tuples.2786024938.field_to_match.3756326843.type", "URI"),
resource.TestCheckResourceAttr(
"aws_waf_xss_match_set.xss_match_set", "xss_match_tuples.2786024938.text_transformation", "NONE"),
),
},
{
Config: testAccAWSWafXssMatchSetConfig_changeTuples(setName),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSWafXssMatchSetExists("aws_waf_xss_match_set.xss_match_set", &after),
resource.TestCheckResourceAttr(
"aws_waf_xss_match_set.xss_match_set", "name", setName),
resource.TestCheckResourceAttr(
"aws_waf_xss_match_set.xss_match_set", "xss_match_tuples.#", "2"),
resource.TestCheckResourceAttr(
"aws_waf_xss_match_set.xss_match_set", "xss_match_tuples.2893682529.field_to_match.#", "1"),
resource.TestCheckResourceAttr(
"aws_waf_xss_match_set.xss_match_set", "xss_match_tuples.2893682529.field_to_match.4253810390.data", "GET"),
resource.TestCheckResourceAttr(
"aws_waf_xss_match_set.xss_match_set", "xss_match_tuples.2893682529.field_to_match.4253810390.type", "METHOD"),
resource.TestCheckResourceAttr(
"aws_waf_xss_match_set.xss_match_set", "xss_match_tuples.2893682529.text_transformation", "HTML_ENTITY_DECODE"),
resource.TestCheckResourceAttr(
"aws_waf_xss_match_set.xss_match_set", "xss_match_tuples.4270311415.field_to_match.#", "1"),
resource.TestCheckResourceAttr(
"aws_waf_xss_match_set.xss_match_set", "xss_match_tuples.4270311415.field_to_match.281401076.data", ""),
resource.TestCheckResourceAttr(
"aws_waf_xss_match_set.xss_match_set", "xss_match_tuples.4270311415.field_to_match.281401076.type", "BODY"),
resource.TestCheckResourceAttr(
"aws_waf_xss_match_set.xss_match_set", "xss_match_tuples.4270311415.text_transformation", "CMD_LINE"),
),
},
},
})
}
func TestAccAWSWafXssMatchSet_noTuples(t *testing.T) {
	var xssMatchSet waf.XssMatchSet
setName := fmt.Sprintf("xssMatchSet-%s", acctest.RandString(5))
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSWafXssMatchSetDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSWafXssMatchSetConfig_noTuples(setName),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSWafXssMatchSetExists("aws_waf_xss_match_set.xss_match_set", &ipset),
resource.TestCheckResourceAttr(
"aws_waf_xss_match_set.xss_match_set", "name", setName),
resource.TestCheckResourceAttr(
"aws_waf_xss_match_set.xss_match_set", "xss_match_tuples.#", "0"),
),
},
},
})
}
func testAccCheckAWSWafXssMatchSetDisappears(v *waf.XssMatchSet) resource.TestCheckFunc {
	return func(s *terraform.State) error {
		conn := testAccProvider.Meta().(*AWSClient).wafconn
@@ -232,3 +336,31 @@ resource "aws_waf_xss_match_set" "xss_match_set" {
  }
}`, name)
}
func testAccAWSWafXssMatchSetConfig_changeTuples(name string) string {
return fmt.Sprintf(`
resource "aws_waf_xss_match_set" "xss_match_set" {
name = "%s"
xss_match_tuples {
text_transformation = "CMD_LINE"
field_to_match {
type = "BODY"
}
}
xss_match_tuples {
text_transformation = "HTML_ENTITY_DECODE"
field_to_match {
type = "METHOD"
data = "GET"
}
}
}`, name)
}
func testAccAWSWafXssMatchSetConfig_noTuples(name string) string {
return fmt.Sprintf(`
resource "aws_waf_xss_match_set" "xss_match_set" {
name = "%s"
}`, name)
}
View File
@@ -1886,7 +1886,10 @@ func normalizeJsonString(jsonString interface{}) (string, error) {
		return s, err
	}

+	// The error is intentionally ignored here to allow empty policies to pass through validation.
+	// This covers any interpolated values.
	bytes, _ := json.Marshal(j)

	return string(bytes[:]), nil
}
View File
@@ -605,6 +605,23 @@ func validateJsonString(v interface{}, k string) (ws []string, errors []error) {
	return
}
func validateIAMPolicyJson(v interface{}, k string) (ws []string, errors []error) {
// IAM Policy documents need to be valid JSON, and pass legacy parsing
value := v.(string)
if len(value) < 1 {
errors = append(errors, fmt.Errorf("%q contains an invalid JSON policy", k))
return
}
if value[:1] != "{" {
errors = append(errors, fmt.Errorf("%q conatains an invalid JSON policy", k))
return
}
if _, err := normalizeJsonString(v); err != nil {
errors = append(errors, fmt.Errorf("%q contains an invalid JSON: %s", k, err))
}
return
}
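
For context, a hedged sketch of how such a validator is typically wired into a provider schema (the "policy" attribute here is hypothetical, not part of this diff):

"policy": &schema.Schema{
	Type:         schema.TypeString,
	Required:     true,
	ValidateFunc: validateIAMPolicyJson, // rejects empty values, leading whitespace, and unparsable JSON
},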

func validateCloudFormationTemplate(v interface{}, k string) (ws []string, errors []error) {
	if looksLikeJsonString(v) {
		if _, err := normalizeJsonString(v); err != nil {
View File
@@ -799,6 +799,65 @@ func TestValidateJsonString(t *testing.T) {
	}
}
func TestValidateIAMPolicyJsonString(t *testing.T) {
type testCases struct {
Value string
ErrCount int
}
invalidCases := []testCases{
{
Value: `{0:"1"}`,
ErrCount: 1,
},
{
Value: `{'abc':1}`,
ErrCount: 1,
},
{
Value: `{"def":}`,
ErrCount: 1,
},
{
Value: `{"xyz":[}}`,
ErrCount: 1,
},
{
Value: ``,
ErrCount: 1,
},
{
Value: ` {"xyz": "foo"}`,
ErrCount: 1,
},
}
for _, tc := range invalidCases {
_, errors := validateIAMPolicyJson(tc.Value, "json")
if len(errors) != tc.ErrCount {
t.Fatalf("Expected %q to trigger a validation error.", tc.Value)
}
}
validCases := []testCases{
{
Value: `{}`,
ErrCount: 0,
},
{
Value: `{"abc":["1","2"]}`,
ErrCount: 0,
},
}
for _, tc := range validCases {
_, errors := validateIAMPolicyJson(tc.Value, "json")
if len(errors) != tc.ErrCount {
t.Fatalf("Expected %q not to trigger a validation error.", tc.Value)
}
}
}

func TestValidateCloudFormationTemplate(t *testing.T) {
	type testCases struct {
		Value    string
View File
@@ -2,12 +2,13 @@ package azurerm

import (
	"fmt"
-	"github.com/Azure/azure-sdk-for-go/arm/disk"
-	"github.com/hashicorp/terraform/helper/schema"
-	"github.com/hashicorp/terraform/helper/validation"
	"log"
	"net/http"
	"strings"
+
+	"github.com/Azure/azure-sdk-for-go/arm/disk"
+	"github.com/hashicorp/terraform/helper/schema"
+	"github.com/hashicorp/terraform/helper/validation"
)

func resourceArmManagedDisk() *schema.Resource {
@@ -90,9 +91,9 @@ func resourceArmManagedDisk() *schema.Resource {

func validateDiskSizeGB(v interface{}, k string) (ws []string, errors []error) {
	value := v.(int)
-	if value < 1 || value > 1023 {
+	if value < 1 || value > 4095 {
		errors = append(errors, fmt.Errorf(
-			"The `disk_size_gb` can only be between 1 and 1023"))
+			"The `disk_size_gb` can only be between 1 and 4095"))
	}
	return
}
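
A quick sanity check of the widened range (a sketch, not part of the diff; assumes a *testing.T in scope):

for size, wantErr := range map[int]bool{1: false, 4095: false, 0: true, 4096: true} {
	_, errs := validateDiskSizeGB(size, "disk_size_gb")
	if (len(errs) > 0) != wantErr {
		t.Fatalf("disk_size_gb=%d: unexpected validation result", size)
	}
}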
View File
@@ -1,7 +1,6 @@
package circonus

import (
-	"bytes"
	"fmt"

	"github.com/circonus-labs/circonus-gometrics/api"
@@ -123,13 +122,5 @@ func providerConfigure(d *schema.ResourceData) (interface{}, error) {
}

func tfAppName() string {
-	const VersionPrerelease = terraform.VersionPrerelease
-	var versionString bytes.Buffer
-
-	fmt.Fprintf(&versionString, "Terraform v%s", terraform.Version)
-	if VersionPrerelease != "" {
-		fmt.Fprintf(&versionString, "-%s", VersionPrerelease)
-	}
-
-	return versionString.String()
+	return fmt.Sprintf("Terraform v%s", terraform.VersionString())
}
View File
@@ -376,6 +376,14 @@ func resourceDockerContainer() *schema.Resource {
				ForceNew: true,
			},

+			"network_alias": &schema.Schema{
+				Type:     schema.TypeSet,
+				Optional: true,
+				ForceNew: true,
+				Elem:     &schema.Schema{Type: schema.TypeString},
+				Set:      schema.HashString,
+			},
+
			"network_mode": &schema.Schema{
				Type:     schema.TypeString,
				Optional: true,
View File
@@ -188,7 +188,14 @@ func resourceDockerContainerCreate(d *schema.ResourceData, meta interface{}) err
	d.SetId(retContainer.ID)

	if v, ok := d.GetOk("networks"); ok {
-		connectionOpts := dc.NetworkConnectionOptions{Container: retContainer.ID}
+		var connectionOpts dc.NetworkConnectionOptions
+		if v, ok := d.GetOk("network_alias"); ok {
+			endpointConfig := &dc.EndpointConfig{}
+			endpointConfig.Aliases = stringSetToStringSlice(v.(*schema.Set))
+			connectionOpts = dc.NetworkConnectionOptions{Container: retContainer.ID, EndpointConfig: endpointConfig}
+		} else {
+			connectionOpts = dc.NetworkConnectionOptions{Container: retContainer.ID}
+		}

		for _, rawNetwork := range v.(*schema.Set).List() {
			network := rawNetwork.(string)
View File
@@ -203,6 +203,10 @@ func TestAccDockerContainer_customized(t *testing.T) {
			return fmt.Errorf("Container has incorrect extra host string: %q", c.HostConfig.ExtraHosts[1])
		}

+		if _, ok := c.NetworkSettings.Networks["test"]; !ok {
+			return fmt.Errorf("Container is not connected to the right user defined network: test")
+		}
+
		return nil
	}

@@ -370,6 +374,9 @@ resource "docker_container" "foo" {
	}

	network_mode = "bridge"
+	networks = ["${docker_network.test_network.name}"]
+	network_alias = ["tftest"]

	host {
		host = "testhost"
		ip = "10.0.1.0"
@@ -380,6 +387,10 @@ resource "docker_container" "foo" {
		ip = "10.0.2.0"
	}
}
+
+resource "docker_network" "test_network" {
+  name = "test"
+}
`

const testAccDockerContainerUploadConfig = `
View File
@@ -0,0 +1,85 @@
package github
import (
"context"
"fmt"
"log"
"strconv"
"github.com/google/go-github/github"
"github.com/hashicorp/terraform/helper/schema"
)
func dataSourceGithubTeam() *schema.Resource {
return &schema.Resource{
Read: dataSourceGithubTeamRead,
Schema: map[string]*schema.Schema{
"slug": &schema.Schema{
Type: schema.TypeString,
Required: true,
},
"name": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
"description": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
"privacy": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
"permission": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
},
}
}
func dataSourceGithubTeamRead(d *schema.ResourceData, meta interface{}) error {
slug := d.Get("slug").(string)
log.Printf("[INFO] Refreshing Gitub Team: %s", slug)
client := meta.(*Organization).client
team, err := getGithubTeamBySlug(client, meta.(*Organization).name, slug)
if err != nil {
return err
}
d.SetId(strconv.Itoa(*team.ID))
d.Set("name", *team.Name)
d.Set("description", *team.Description)
d.Set("privacy", *team.Privacy)
d.Set("permission", *team.Permission)
d.Set("members_count", *team.MembersCount)
d.Set("repos_count", *team.ReposCount)
return nil
}
func getGithubTeamBySlug(client *github.Client, org string, slug string) (team *github.Team, err error) {
opt := &github.ListOptions{PerPage: 10}
for {
teams, resp, err := client.Organizations.ListTeams(context.TODO(), org, opt)
if err != nil {
return team, err
}
for _, t := range teams {
if *t.Slug == slug {
return t, nil
}
}
if resp.NextPage == 0 {
break
}
opt.Page = resp.NextPage
}
return team, fmt.Errorf("Could not find team with slug: %s", slug)
}
View File
@@ -0,0 +1,33 @@
package github
import (
"fmt"
"regexp"
"testing"
"github.com/hashicorp/terraform/helper/resource"
)
func TestAccGithubTeamDataSource_noMatchReturnsError(t *testing.T) {
slug := "non-existing"
resource.Test(t, resource.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
Providers: testAccProviders,
Steps: []resource.TestStep{
{
Config: testAccCheckGithubTeamDataSourceConfig(slug),
ExpectError: regexp.MustCompile(`Could not find team`),
},
},
})
}
func testAccCheckGithubTeamDataSourceConfig(slug string) string {
return fmt.Sprintf(`
data "github_team" "test" {
slug = "%s"
}
`, slug)
}
View File
@@ -46,6 +46,7 @@ func Provider() terraform.ResourceProvider {
		DataSourcesMap: map[string]*schema.Resource{
			"github_user": dataSourceGithubUser(),
+			"github_team": dataSourceGithubTeam(),
		},

		ConfigureFunc: providerConfigure,
View File
@@ -84,7 +84,9 @@ func resourceGithubIssueLabelCreateOrUpdate(d *schema.ResourceData, meta interfa
	} else {
		log.Printf("[DEBUG] Creating label: %s/%s (%s: %s)", o, r, n, c)
		_, resp, err := client.Issues.CreateLabel(context.TODO(), o, r, label)
+		if resp != nil {
			log.Printf("[DEBUG] Response from creating label: %s", *resp)
+		}
		if err != nil {
			return err
		}
View File
@@ -94,7 +94,7 @@ func resourceGithubOrganizationWebhookRead(d *schema.ResourceData, meta interfac
	hook, resp, err := client.Organizations.GetHook(context.TODO(), meta.(*Organization).name, hookID)
	if err != nil {
-		if resp.StatusCode == 404 {
+		if resp != nil && resp.StatusCode == 404 {
			d.SetId("")
			return nil
		}
View File
@@ -127,7 +127,7 @@ func resourceGithubRepositoryRead(d *schema.ResourceData, meta interface{}) erro
	log.Printf("[DEBUG] read github repository %s/%s", meta.(*Organization).name, repoName)
	repo, resp, err := client.Repositories.Get(context.TODO(), meta.(*Organization).name, repoName)
	if err != nil {
-		if resp.StatusCode == 404 {
+		if resp != nil && resp.StatusCode == 404 {
			log.Printf(
				"[WARN] removing %s/%s from state because it no longer exists in github",
				meta.(*Organization).name,
View File
@@ -89,7 +89,7 @@ func resourceGithubRepositoryWebhookRead(d *schema.ResourceData, meta interface{
	hook, resp, err := client.Repositories.GetHook(context.TODO(), meta.(*Organization).name, d.Get("repository").(string), hookID)
	if err != nil {
-		if resp.StatusCode == 404 {
+		if resp != nil && resp.StatusCode == 404 {
			d.SetId("")
			return nil
		}
View File
@@ -0,0 +1,32 @@
package google
import (
"testing"
"github.com/hashicorp/terraform/helper/acctest"
"github.com/hashicorp/terraform/helper/resource"
)
func TestAccGoogleSqlUser_importBasic(t *testing.T) {
resourceName := "google_sql_user.user"
user := acctest.RandString(10)
instance := acctest.RandString(10)
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccGoogleSqlUserDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testGoogleSqlUser_basic(instance, user),
},
resource.TestStep{
ResourceName: resourceName,
ImportState: true,
ImportStateVerify: true,
ImportStateVerifyIgnore: []string{"password"},
},
},
})
}
View File
@@ -0,0 +1,30 @@
package google
import (
"fmt"
"testing"
"github.com/hashicorp/terraform/helper/acctest"
"github.com/hashicorp/terraform/helper/resource"
)
func TestAccStorageBucket_import(t *testing.T) {
bucketName := fmt.Sprintf("tf-test-acl-bucket-%d", acctest.RandInt())
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccStorageBucketDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccStorageBucket_basic(bucketName),
},
resource.TestStep{
ResourceName: "google_storage_bucket.bucket",
ImportState: true,
ImportStateVerify: true,
ImportStateVerifyIgnore: []string{"force_destroy"},
},
},
})
}
View File
@@ -143,6 +143,10 @@ func resourceComputeDiskCreate(d *schema.ResourceData, meta interface{}) error {
	if v, ok := d.GetOk("snapshot"); ok {
		snapshotName := v.(string)
+		match, _ := regexp.MatchString("^https://www.googleapis.com/compute", snapshotName)
+		if match {
+			disk.SourceSnapshot = snapshotName
+		} else {
		log.Printf("[DEBUG] Loading snapshot: %s", snapshotName)
		snapshotData, err := config.clientCompute.Snapshots.Get(
			project, snapshotName).Do()
@@ -152,9 +156,9 @@ func resourceComputeDiskCreate(d *schema.ResourceData, meta interface{}) error {
				"Error loading snapshot '%s': %s",
				snapshotName, err)
		}

		disk.SourceSnapshot = snapshotData.SelfLink
	}
+	}

	if v, ok := d.GetOk("disk_encryption_key_raw"); ok {
		disk.DiskEncryptionKey = &compute.CustomerEncryptionKey{}
View File
@@ -2,6 +2,7 @@ package google

import (
	"fmt"
+	"os"
	"strconv"
	"testing"

@@ -31,6 +32,30 @@ func TestAccComputeDisk_basic(t *testing.T) {
	})
}
func TestAccComputeDisk_fromSnapshotURI(t *testing.T) {
diskName := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
firstDiskName := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
snapshotName := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
var xpn_host = os.Getenv("GOOGLE_XPN_HOST_PROJECT")
var disk compute.Disk
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckComputeDiskDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccComputeDisk_fromSnapshotURI(firstDiskName, snapshotName, diskName, xpn_host),
Check: resource.ComposeTestCheckFunc(
testAccCheckComputeDiskExists(
"google_compute_disk.seconddisk", &disk),
),
},
},
})
}
func TestAccComputeDisk_encryption(t *testing.T) { func TestAccComputeDisk_encryption(t *testing.T) {
diskName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) diskName := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
var disk compute.Disk var disk compute.Disk
@ -187,6 +212,31 @@ resource "google_compute_disk" "foobar" {
}`, diskName) }`, diskName)
} }
func testAccComputeDisk_fromSnapshotURI(firstDiskName, snapshotName, diskName, xpn_host string) string {
return fmt.Sprintf(`
resource "google_compute_disk" "foobar" {
name = "%s"
image = "debian-8-jessie-v20160803"
size = 50
type = "pd-ssd"
zone = "us-central1-a"
project = "%s"
}
resource "google_compute_snapshot" "snapdisk" {
name = "%s"
source_disk = "${google_compute_disk.foobar.name}"
zone = "us-central1-a"
project = "%s"
}
resource "google_compute_disk" "seconddisk" {
name = "%s"
snapshot = "${google_compute_snapshot.snapdisk.self_link}"
type = "pd-ssd"
zone = "us-central1-a"
}`, firstDiskName, xpn_host, snapshotName, xpn_host, diskName)
}
func testAccComputeDisk_encryption(diskName string) string { func testAccComputeDisk_encryption(diskName string) string {
return fmt.Sprintf(` return fmt.Sprintf(`
resource "google_compute_disk" "foobar" { resource "google_compute_disk" "foobar" {
View File
@@ -13,7 +13,7 @@ import (

func resourceComputeFirewallMigrateState(
	v int, is *terraform.InstanceState, meta interface{}) (*terraform.InstanceState, error) {
	if is.Empty() {
-		log.Println("[DEBUG] Empty FirewallState; nothing to migrate.")
+		log.Println("[DEBUG] Empty InstanceState; nothing to migrate.")
		return is, nil
	}
View File
@@ -243,6 +243,7 @@ func resourceSqlDatabaseInstance() *schema.Resource {
			"replica_configuration": &schema.Schema{
				Type:     schema.TypeList,
				Optional: true,
+				MaxItems: 1,
				Elem: &schema.Resource{
					Schema: map[string]*schema.Schema{
						"ca_certificate": &schema.Schema{
@@ -270,6 +271,11 @@ func resourceSqlDatabaseInstance() *schema.Resource {
							Optional: true,
							ForceNew: true,
						},
+						"failover_target": &schema.Schema{
+							Type:     schema.TypeBool,
+							Optional: true,
+							ForceNew: true,
+						},
						"master_heartbeat_period": &schema.Schema{
							Type:     schema.TypeInt,
							Optional: true,
@@ -517,15 +523,16 @@ func resourceSqlDatabaseInstanceCreate(d *schema.ResourceData, meta interface{})
	if v, ok := d.GetOk("replica_configuration"); ok {
		_replicaConfigurationList := v.([]interface{})
-		if len(_replicaConfigurationList) > 1 {
-			return fmt.Errorf("Only one replica_configuration block may be defined")
-		}

		if len(_replicaConfigurationList) == 1 && _replicaConfigurationList[0] != nil {
			replicaConfiguration := &sqladmin.ReplicaConfiguration{}
			mySqlReplicaConfiguration := &sqladmin.MySqlReplicaConfiguration{}
			_replicaConfiguration := _replicaConfigurationList[0].(map[string]interface{})

+			if vp, okp := _replicaConfiguration["failover_target"]; okp {
+				replicaConfiguration.FailoverTarget = vp.(bool)
+			}
+
			if vp, okp := _replicaConfiguration["ca_certificate"]; okp {
				mySqlReplicaConfiguration.CaCertificate = vp.(string)
			}
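
A hedged config sketch of the new attribute, in the same string-constant style the tests use (resource name hypothetical; a real replica also needs master_instance_name and the usual settings block):

// sketch only, not from this diff
const sqlReplicaFailoverTargetSketch = `
resource "google_sql_database_instance" "replica" {
  replica_configuration {
    failover_target = true
  }
}
`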
@@ -827,53 +834,16 @@ func resourceSqlDatabaseInstanceRead(d *schema.ResourceData, meta interface{}) error {
	if v, ok := d.GetOk("replica_configuration"); ok && v != nil {
		_replicaConfigurationList := v.([]interface{})
-		if len(_replicaConfigurationList) > 1 {
-			return fmt.Errorf("Only one replica_configuration block may be defined")
-		}

		if len(_replicaConfigurationList) == 1 && _replicaConfigurationList[0] != nil {
-			mySqlReplicaConfiguration := instance.ReplicaConfiguration.MysqlReplicaConfiguration
			_replicaConfiguration := _replicaConfigurationList[0].(map[string]interface{})

-			if vp, okp := _replicaConfiguration["ca_certificate"]; okp && vp != nil {
-				_replicaConfiguration["ca_certificate"] = mySqlReplicaConfiguration.CaCertificate
+			if vp, okp := _replicaConfiguration["failover_target"]; okp && vp != nil {
+				_replicaConfiguration["failover_target"] = instance.ReplicaConfiguration.FailoverTarget
			}

-			if vp, okp := _replicaConfiguration["client_certificate"]; okp && vp != nil {
-				_replicaConfiguration["client_certificate"] = mySqlReplicaConfiguration.ClientCertificate
-			}
-
-			if vp, okp := _replicaConfiguration["client_key"]; okp && vp != nil {
-				_replicaConfiguration["client_key"] = mySqlReplicaConfiguration.ClientKey
-			}
-
-			if vp, okp := _replicaConfiguration["connect_retry_interval"]; okp && vp != nil {
-				_replicaConfiguration["connect_retry_interval"] = mySqlReplicaConfiguration.ConnectRetryInterval
-			}
-
-			if vp, okp := _replicaConfiguration["dump_file_path"]; okp && vp != nil {
-				_replicaConfiguration["dump_file_path"] = mySqlReplicaConfiguration.DumpFilePath
-			}
-
-			if vp, okp := _replicaConfiguration["master_heartbeat_period"]; okp && vp != nil {
-				_replicaConfiguration["master_heartbeat_period"] = mySqlReplicaConfiguration.MasterHeartbeatPeriod
-			}
-
-			if vp, okp := _replicaConfiguration["password"]; okp && vp != nil {
-				_replicaConfiguration["password"] = mySqlReplicaConfiguration.Password
-			}
-
-			if vp, okp := _replicaConfiguration["ssl_cipher"]; okp && vp != nil {
-				_replicaConfiguration["ssl_cipher"] = mySqlReplicaConfiguration.SslCipher
-			}
-
-			if vp, okp := _replicaConfiguration["username"]; okp && vp != nil {
-				_replicaConfiguration["username"] = mySqlReplicaConfiguration.Username
-			}
-
-			if vp, okp := _replicaConfiguration["verify_server_certificate"]; okp && vp != nil {
-				_replicaConfiguration["verify_server_certificate"] = mySqlReplicaConfiguration.VerifyServerCertificate
-			}
+			// Don't attempt to assign anything from instance.ReplicaConfiguration.MysqlReplicaConfiguration,
+			// since those fields are set on create and then not stored. See description at
+			// https://cloud.google.com/sql/docs/mysql/admin-api/v1beta4/instances

			_replicaConfigurationList[0] = _replicaConfiguration
			d.Set("replica_configuration", _replicaConfigurationList)
View File
@@ -408,66 +408,11 @@ func testAccCheckGoogleSqlDatabaseInstanceEquals(n string,
		return fmt.Errorf("Error settings.pricing_plan mismatch, (%s, %s)", server, local)
	}

-	if instance.ReplicaConfiguration != nil &&
-		instance.ReplicaConfiguration.MysqlReplicaConfiguration != nil {
-		server = instance.ReplicaConfiguration.MysqlReplicaConfiguration.CaCertificate
-		local = attributes["replica_configuration.0.ca_certificate"]
+	if instance.ReplicaConfiguration != nil {
+		server = strconv.FormatBool(instance.ReplicaConfiguration.FailoverTarget)
+		local = attributes["replica_configuration.0.failover_target"]
		if server != local && len(server) > 0 && len(local) > 0 {
-			return fmt.Errorf("Error replica_configuration.ca_certificate mismatch, (%s, %s)", server, local)
-		}
-
-		server = instance.ReplicaConfiguration.MysqlReplicaConfiguration.ClientCertificate
-		local = attributes["replica_configuration.0.client_certificate"]
-		if server != local && len(server) > 0 && len(local) > 0 {
-			return fmt.Errorf("Error replica_configuration.client_certificate mismatch, (%s, %s)", server, local)
-		}
-
-		server = instance.ReplicaConfiguration.MysqlReplicaConfiguration.ClientKey
-		local = attributes["replica_configuration.0.client_key"]
-		if server != local && len(server) > 0 && len(local) > 0 {
-			return fmt.Errorf("Error replica_configuration.client_key mismatch, (%s, %s)", server, local)
-		}
-
-		server = strconv.FormatInt(instance.ReplicaConfiguration.MysqlReplicaConfiguration.ConnectRetryInterval, 10)
-		local = attributes["replica_configuration.0.connect_retry_interval"]
-		if server != local && len(server) > 0 && len(local) > 0 {
-			return fmt.Errorf("Error replica_configuration.connect_retry_interval mismatch, (%s, %s)", server, local)
-		}
-
-		server = instance.ReplicaConfiguration.MysqlReplicaConfiguration.DumpFilePath
-		local = attributes["replica_configuration.0.dump_file_path"]
-		if server != local && len(server) > 0 && len(local) > 0 {
-			return fmt.Errorf("Error replica_configuration.dump_file_path mismatch, (%s, %s)", server, local)
-		}
-
-		server = strconv.FormatInt(instance.ReplicaConfiguration.MysqlReplicaConfiguration.MasterHeartbeatPeriod, 10)
-		local = attributes["replica_configuration.0.master_heartbeat_period"]
-		if server != local && len(server) > 0 && len(local) > 0 {
-			return fmt.Errorf("Error replica_configuration.master_heartbeat_period mismatch, (%s, %s)", server, local)
-		}
-
-		server = instance.ReplicaConfiguration.MysqlReplicaConfiguration.Password
-		local = attributes["replica_configuration.0.password"]
-		if server != local && len(server) > 0 && len(local) > 0 {
-			return fmt.Errorf("Error replica_configuration.password mismatch, (%s, %s)", server, local)
-		}
-
-		server = instance.ReplicaConfiguration.MysqlReplicaConfiguration.SslCipher
-		local = attributes["replica_configuration.0.ssl_cipher"]
-		if server != local && len(server) > 0 && len(local) > 0 {
-			return fmt.Errorf("Error replica_configuration.ssl_cipher mismatch, (%s, %s)", server, local)
-		}
-
-		server = instance.ReplicaConfiguration.MysqlReplicaConfiguration.Username
-		local = attributes["replica_configuration.0.username"]
-		if server != local && len(server) > 0 && len(local) > 0 {
-			return fmt.Errorf("Error replica_configuration.username mismatch, (%s, %s)", server, local)
-		}
-
-		server = strconv.FormatBool(instance.ReplicaConfiguration.MysqlReplicaConfiguration.VerifyServerCertificate)
-		local = attributes["replica_configuration.0.verify_server_certificate"]
-		if server != local && len(server) > 0 && len(local) > 0 {
-			return fmt.Errorf("Error replica_configuration.verify_server_certificate mismatch, (%s, %s)", server, local)
+			return fmt.Errorf("Error replica_configuration.failover_target mismatch, (%s, %s)", server, local)
		}
	}
View File
@@ -3,9 +3,9 @@ package google

import (
	"fmt"
	"log"
+	"strings"

	"github.com/hashicorp/terraform/helper/schema"
	"google.golang.org/api/sqladmin/v1beta4"
)
@@ -15,6 +15,12 @@ func resourceSqlUser() *schema.Resource {
		Read:   resourceSqlUserRead,
		Update: resourceSqlUserUpdate,
		Delete: resourceSqlUserDelete,
+		Importer: &schema.ResourceImporter{
+			State: schema.ImportStatePassthrough,
+		},
+
+		SchemaVersion: 1,
+		MigrateState:  resourceSqlUserMigrateState,

		Schema: map[string]*schema.Schema{
			"host": &schema.Schema{
@@ -38,6 +44,7 @@ func resourceSqlUser() *schema.Resource {
			"password": &schema.Schema{
				Type:      schema.TypeString,
				Required:  true,
+				Sensitive: true,
			},

			"project": &schema.Schema{
@@ -77,6 +84,8 @@ func resourceSqlUserCreate(d *schema.ResourceData, meta interface{}) error {
			"user %s into instance %s: %s", name, instance, err)
	}

+	d.SetId(fmt.Sprintf("%s/%s", instance, name))
+
	err = sqladminOperationWait(config, op, "Insert User")
	if err != nil {
@@ -95,8 +104,16 @@ func resourceSqlUserRead(d *schema.ResourceData, meta interface{}) error {
		return err
	}

-	name := d.Get("name").(string)
-	instance := d.Get("instance").(string)
+	instanceAndName := strings.SplitN(d.Id(), "/", 2)
+	if len(instanceAndName) != 2 {
+		return fmt.Errorf(
+			"Wrong number of arguments when specifying imported id. Expected: 2. Saw: %d. Expected Input: $INSTANCENAME/$SQLUSERNAME Input: %s",
+			len(instanceAndName),
+			d.Id())
+	}
+
+	instance := instanceAndName[0]
+	name := instanceAndName[1]

	users, err := config.clientSqlAdmin.Users.List(project, instance).Do()
@@ -104,23 +121,24 @@ func resourceSqlUserRead(d *schema.ResourceData, meta interface{}) error {
		return handleNotFoundError(err, d, fmt.Sprintf("SQL User %q in instance %q", name, instance))
	}

-	found := false
-	for _, user := range users.Items {
-		if user.Name == name {
-			found = true
+	var user *sqladmin.User
+	for _, currentUser := range users.Items {
+		if currentUser.Name == name {
+			user = currentUser
			break
		}
	}

-	if !found {
+	if user == nil {
		log.Printf("[WARN] Removing SQL User %q because it's gone", d.Get("name").(string))
		d.SetId("")

		return nil
	}

-	d.SetId(name)
+	d.Set("host", user.Host)
+	d.Set("instance", user.Instance)
+	d.Set("name", user.Name)

	return nil
}
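
Since the ID now encodes both parts as $INSTANCENAME/$SQLUSERNAME, an existing user can be brought under management with the standard import command (names hypothetical):

terraform import google_sql_user.user master-instance/my-user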
View File
@@ -0,0 +1,39 @@
package google
import (
"fmt"
"log"
"github.com/hashicorp/terraform/terraform"
)
func resourceSqlUserMigrateState(
v int, is *terraform.InstanceState, meta interface{}) (*terraform.InstanceState, error) {
if is.Empty() {
log.Println("[DEBUG] Empty InstanceState; nothing to migrate.")
return is, nil
}
switch v {
case 0:
log.Println("[INFO] Found Google Sql User State v0; migrating to v1")
is, err := migrateSqlUserStateV0toV1(is)
if err != nil {
return is, err
}
return is, nil
default:
return is, fmt.Errorf("Unexpected schema version: %d", v)
}
}
func migrateSqlUserStateV0toV1(is *terraform.InstanceState) (*terraform.InstanceState, error) {
log.Printf("[DEBUG] Attributes before migration: %#v", is.Attributes)
name := is.Attributes["name"]
instance := is.Attributes["instance"]
is.ID = fmt.Sprintf("%s/%s", instance, name)
log.Printf("[DEBUG] Attributes after migration: %#v", is.Attributes)
return is, nil
}
View File
@@ -0,0 +1,81 @@
package google
import (
"testing"
"github.com/hashicorp/terraform/terraform"
)
func TestSqlUserMigrateState(t *testing.T) {
cases := map[string]struct {
StateVersion int
Attributes map[string]string
Expected map[string]string
Meta interface{}
ID string
ExpectedID string
}{
"change id from $NAME to $INSTANCENAME.$NAME": {
StateVersion: 0,
Attributes: map[string]string{
"name": "tf-user",
"instance": "tf-instance",
},
Expected: map[string]string{
"name": "tf-user",
"instance": "tf-instance",
},
Meta: &Config{},
ID: "tf-user",
ExpectedID: "tf-instance/tf-user",
},
}
for tn, tc := range cases {
is := &terraform.InstanceState{
ID: tc.ID,
Attributes: tc.Attributes,
}
is, err := resourceSqlUserMigrateState(
tc.StateVersion, is, tc.Meta)
if err != nil {
t.Fatalf("bad: %s, err: %#v", tn, err)
}
if is.ID != tc.ExpectedID {
t.Fatalf("bad ID.\n\n expected: %s\n got: %s", tc.ExpectedID, is.ID)
}
for k, v := range tc.Expected {
if is.Attributes[k] != v {
t.Fatalf(
"bad: %s\n\n expected: %#v -> %#v\n got: %#v -> %#v\n in: %#v",
tn, k, v, k, is.Attributes[k], is.Attributes)
}
}
}
}
func TestSqlUserMigrateState_empty(t *testing.T) {
var is *terraform.InstanceState
var meta *Config
// should handle nil
is, err := resourceSqlUserMigrateState(0, is, meta)
if err != nil {
t.Fatalf("err: %#v", err)
}
if is != nil {
t.Fatalf("expected nil instancestate, got: %#v", is)
}
// should handle non-nil but empty
is = &terraform.InstanceState{}
is, err = resourceSqlUserMigrateState(0, is, meta)
if err != nil {
t.Fatalf("err: %#v", err)
}
}
View File
@@ -3,6 +3,7 @@ package google

import (
	"bytes"
	"fmt"
+	"log"
	"testing"

	"github.com/hashicorp/terraform/helper/acctest"
@@ -13,19 +14,20 @@ import (
	storage "google.golang.org/api/storage/v1"
)
-func TestAccStorage_basic(t *testing.T) {
+func TestAccStorageBucket_basic(t *testing.T) {
+	var bucket storage.Bucket
	bucketName := fmt.Sprintf("tf-test-acl-bucket-%d", acctest.RandInt())

	resource.Test(t, resource.TestCase{
		PreCheck:     func() { testAccPreCheck(t) },
		Providers:    testAccProviders,
-		CheckDestroy: testAccGoogleStorageDestroy,
+		CheckDestroy: testAccStorageBucketDestroy,
		Steps: []resource.TestStep{
			resource.TestStep{
-				Config: testGoogleStorageBucketsReaderDefaults(bucketName),
+				Config: testAccStorageBucket_basic(bucketName),
				Check: resource.ComposeTestCheckFunc(
-					testAccCheckCloudStorageBucketExists(
-						"google_storage_bucket.bucket", bucketName),
+					testAccCheckStorageBucketExists(
+						"google_storage_bucket.bucket", bucketName, &bucket),
					resource.TestCheckResourceAttr(
						"google_storage_bucket.bucket", "location", "US"),
					resource.TestCheckResourceAttr(
@@ -36,19 +38,20 @@ func TestAccStorage_basic(t *testing.T) {
	})
}
-func TestAccStorageCustomAttributes(t *testing.T) {
+func TestAccStorageBucket_customAttributes(t *testing.T) {
+	var bucket storage.Bucket
	bucketName := fmt.Sprintf("tf-test-acl-bucket-%d", acctest.RandInt())

	resource.Test(t, resource.TestCase{
		PreCheck:     func() { testAccPreCheck(t) },
		Providers:    testAccProviders,
-		CheckDestroy: testAccGoogleStorageDestroy,
+		CheckDestroy: testAccStorageBucketDestroy,
		Steps: []resource.TestStep{
			resource.TestStep{
-				Config: testGoogleStorageBucketsReaderCustomAttributes(bucketName),
+				Config: testAccStorageBucket_customAttributes(bucketName),
				Check: resource.ComposeTestCheckFunc(
-					testAccCheckCloudStorageBucketExists(
-						"google_storage_bucket.bucket", bucketName),
+					testAccCheckStorageBucketExists(
+						"google_storage_bucket.bucket", bucketName, &bucket),
					resource.TestCheckResourceAttr(
						"google_storage_bucket.bucket", "location", "EU"),
					resource.TestCheckResourceAttr(
@@ -59,37 +62,38 @@ func TestAccStorageCustomAttributes(t *testing.T) {
	})
}
-func TestAccStorageStorageClass(t *testing.T) {
+func TestAccStorageBucket_storageClass(t *testing.T) {
+	var bucket storage.Bucket
	bucketName := fmt.Sprintf("tf-test-acc-bucket-%d", acctest.RandInt())

	resource.Test(t, resource.TestCase{
		PreCheck:     func() { testAccPreCheck(t) },
		Providers:    testAccProviders,
-		CheckDestroy: testAccGoogleStorageDestroy,
+		CheckDestroy: testAccStorageBucketDestroy,
		Steps: []resource.TestStep{
			{
-				Config: testGoogleStorageBucketsReaderStorageClass(bucketName, "MULTI_REGIONAL", ""),
+				Config: testAccStorageBucket_storageClass(bucketName, "MULTI_REGIONAL", ""),
				Check: resource.ComposeTestCheckFunc(
-					testAccCheckCloudStorageBucketExists(
-						"google_storage_bucket.bucket", bucketName),
+					testAccCheckStorageBucketExists(
+						"google_storage_bucket.bucket", bucketName, &bucket),
					resource.TestCheckResourceAttr(
						"google_storage_bucket.bucket", "storage_class", "MULTI_REGIONAL"),
				),
			},
			{
-				Config: testGoogleStorageBucketsReaderStorageClass(bucketName, "NEARLINE", ""),
+				Config: testAccStorageBucket_storageClass(bucketName, "NEARLINE", ""),
				Check: resource.ComposeTestCheckFunc(
-					testAccCheckCloudStorageBucketExists(
-						"google_storage_bucket.bucket", bucketName),
+					testAccCheckStorageBucketExists(
+						"google_storage_bucket.bucket", bucketName, &bucket),
					resource.TestCheckResourceAttr(
						"google_storage_bucket.bucket", "storage_class", "NEARLINE"),
				),
			},
			{
-				Config: testGoogleStorageBucketsReaderStorageClass(bucketName, "REGIONAL", "US-CENTRAL1"),
+				Config: testAccStorageBucket_storageClass(bucketName, "REGIONAL", "US-CENTRAL1"),
				Check: resource.ComposeTestCheckFunc(
-					testAccCheckCloudStorageBucketExists(
-						"google_storage_bucket.bucket", bucketName),
+					testAccCheckStorageBucketExists(
+						"google_storage_bucket.bucket", bucketName, &bucket),
					resource.TestCheckResourceAttr(
						"google_storage_bucket.bucket", "storage_class", "REGIONAL"),
					resource.TestCheckResourceAttr(
@@ -100,19 +104,20 @@ func TestAccStorageStorageClass(t *testing.T) {
	})
}
-func TestAccStorageBucketUpdate(t *testing.T) {
+func TestAccStorageBucket_update(t *testing.T) {
+	var bucket storage.Bucket
	bucketName := fmt.Sprintf("tf-test-acl-bucket-%d", acctest.RandInt())

	resource.Test(t, resource.TestCase{
		PreCheck:     func() { testAccPreCheck(t) },
		Providers:    testAccProviders,
-		CheckDestroy: testAccGoogleStorageDestroy,
+		CheckDestroy: testAccStorageBucketDestroy,
		Steps: []resource.TestStep{
			resource.TestStep{
-				Config: testGoogleStorageBucketsReaderDefaults(bucketName),
+				Config: testAccStorageBucket_basic(bucketName),
				Check: resource.ComposeTestCheckFunc(
-					testAccCheckCloudStorageBucketExists(
-						"google_storage_bucket.bucket", bucketName),
+					testAccCheckStorageBucketExists(
+						"google_storage_bucket.bucket", bucketName, &bucket),
					resource.TestCheckResourceAttr(
						"google_storage_bucket.bucket", "location", "US"),
					resource.TestCheckResourceAttr(
@@ -120,10 +125,10 @@ func TestAccStorageBucketUpdate(t *testing.T) {
				),
			},
			resource.TestStep{
-				Config: testGoogleStorageBucketsReaderCustomAttributes(bucketName),
+				Config: testAccStorageBucket_customAttributes(bucketName),
				Check: resource.ComposeTestCheckFunc(
-					testAccCheckCloudStorageBucketExists(
-						"google_storage_bucket.bucket", bucketName),
+					testAccCheckStorageBucketExists(
+						"google_storage_bucket.bucket", bucketName, &bucket),
					resource.TestCheckResourceAttr(
						"google_storage_bucket.bucket", "predefined_acl", "publicReadWrite"),
					resource.TestCheckResourceAttr(
@@ -136,59 +141,39 @@ func TestAccStorageBucketUpdate(t *testing.T) {
	})
}
-func TestAccStorageBucketImport(t *testing.T) {
+func TestAccStorageBucket_forceDestroy(t *testing.T) {
+	var bucket storage.Bucket
	bucketName := fmt.Sprintf("tf-test-acl-bucket-%d", acctest.RandInt())

	resource.Test(t, resource.TestCase{
		PreCheck:     func() { testAccPreCheck(t) },
		Providers:    testAccProviders,
-		CheckDestroy: testAccGoogleStorageDestroy,
+		CheckDestroy: testAccStorageBucketDestroy,
		Steps: []resource.TestStep{
			resource.TestStep{
-				Config: testGoogleStorageBucketsReaderDefaults(bucketName),
-			},
-			resource.TestStep{
-				ResourceName:            "google_storage_bucket.bucket",
-				ImportState:             true,
-				ImportStateVerify:       true,
-				ImportStateVerifyIgnore: []string{"force_destroy"},
-			},
-		},
-	})
-}
-
-func TestAccStorageForceDestroy(t *testing.T) {
-	bucketName := fmt.Sprintf("tf-test-acl-bucket-%d", acctest.RandInt())
-
-	resource.Test(t, resource.TestCase{
-		PreCheck:     func() { testAccPreCheck(t) },
-		Providers:    testAccProviders,
-		CheckDestroy: testAccGoogleStorageDestroy,
-		Steps: []resource.TestStep{
-			resource.TestStep{
-				Config: testGoogleStorageBucketsReaderCustomAttributes(bucketName),
+				Config: testAccStorageBucket_customAttributes(bucketName),
				Check: resource.ComposeTestCheckFunc(
-					testAccCheckCloudStorageBucketExists(
-						"google_storage_bucket.bucket", bucketName),
+					testAccCheckStorageBucketExists(
+						"google_storage_bucket.bucket", bucketName, &bucket),
				),
			},
			resource.TestStep{
-				Config: testGoogleStorageBucketsReaderCustomAttributes(bucketName),
+				Config: testAccStorageBucket_customAttributes(bucketName),
				Check: resource.ComposeTestCheckFunc(
-					testAccCheckCloudStorageBucketPutItem(bucketName),
+					testAccCheckStorageBucketPutItem(bucketName),
				),
			},
			resource.TestStep{
-				Config: testGoogleStorageBucketsReaderCustomAttributes("idontexist"),
+				Config: testAccStorageBucket_customAttributes("idontexist"),
				Check: resource.ComposeTestCheckFunc(
-					testAccCheckCloudStorageBucketMissing(bucketName),
+					testAccCheckStorageBucketMissing(bucketName),
				),
			},
		},
	})
}
-func testAccCheckCloudStorageBucketExists(n string, bucketName string) resource.TestCheckFunc {
+func testAccCheckStorageBucketExists(n string, bucketName string, bucket *storage.Bucket) resource.TestCheckFunc {
	return func(s *terraform.State) error {
		rs, ok := s.RootModule().Resources[n]
		if !ok {
@@ -213,11 +198,13 @@ func testAccCheckCloudStorageBucketExists(n string, bucketName string) resource.TestCheckFunc {
		if found.Name != bucketName {
			return fmt.Errorf("expected name %s, got %s", bucketName, found.Name)
		}

+		*bucket = *found
+
		return nil
	}
}
-func testAccCheckCloudStorageBucketPutItem(bucketName string) resource.TestCheckFunc {
+func testAccCheckStorageBucketPutItem(bucketName string) resource.TestCheckFunc {
	return func(s *terraform.State) error {
		config := testAccProvider.Meta().(*Config)
@@ -227,7 +214,7 @@ func testAccCheckCloudStorageBucketPutItem(bucketName string) resource.TestCheck
		// This needs to use Media(io.Reader) call, otherwise it does not go to /upload API and fails
		if res, err := config.clientStorage.Objects.Insert(bucketName, object).Media(dataReader).Do(); err == nil {
-			fmt.Printf("Created object %v at location %v\n\n", res.Name, res.SelfLink)
+			log.Printf("[INFO] Created object %v at location %v\n\n", res.Name, res.SelfLink)
		} else {
			return fmt.Errorf("Objects.Insert failed: %v", err)
		}
-func testAccCheckCloudStorageBucketMissing(bucketName string) resource.TestCheckFunc {
+func testAccCheckStorageBucketMissing(bucketName string) resource.TestCheckFunc {
	return func(s *terraform.State) error {
		config := testAccProvider.Meta().(*Config)
@@ -253,7 +240,7 @@ func testAccCheckCloudStorageBucketMissing(bucketName string) resource.TestCheck
	}
}
-func testAccGoogleStorageDestroy(s *terraform.State) error {
+func testAccStorageBucketDestroy(s *terraform.State) error {
	config := testAccProvider.Meta().(*Config)

	for _, rs := range s.RootModule().Resources {
@@ -270,7 +257,7 @@ func testAccGoogleStorageDestroy(s *terraform.State) error {
	return nil
}
-func testGoogleStorageBucketsReaderDefaults(bucketName string) string {
+func testAccStorageBucket_basic(bucketName string) string {
	return fmt.Sprintf(`
resource "google_storage_bucket" "bucket" {
	name = "%s"
@@ -278,7 +265,7 @@ resource "google_storage_bucket" "bucket" {
`, bucketName)
}

-func testGoogleStorageBucketsReaderCustomAttributes(bucketName string) string {
+func testAccStorageBucket_customAttributes(bucketName string) string {
	return fmt.Sprintf(`
resource "google_storage_bucket" "bucket" {
	name = "%s"
@@ -289,7 +276,7 @@ resource "google_storage_bucket" "bucket" {
`, bucketName)
}

-func testGoogleStorageBucketsReaderStorageClass(bucketName, storageClass, location string) string {
+func testAccStorageBucket_storageClass(bucketName, storageClass, location string) string {
	var locationBlock string
	if location != "" {
		locationBlock = fmt.Sprintf(`
View File
@@ -23,6 +23,8 @@ func canonicalizeServiceScope(scope string) string {
		"storage-ro":        "https://www.googleapis.com/auth/devstorage.read_only",
		"storage-rw":        "https://www.googleapis.com/auth/devstorage.read_write",
		"taskqueue":         "https://www.googleapis.com/auth/taskqueue",
+		"trace-append":      "https://www.googleapis.com/auth/trace.append",
+		"trace-ro":          "https://www.googleapis.com/auth/trace.readonly",
		"useraccounts-ro":   "https://www.googleapis.com/auth/cloud.useraccounts.readonly",
		"useraccounts-rw":   "https://www.googleapis.com/auth/cloud.useraccounts",
		"userinfo-email":    "https://www.googleapis.com/auth/userinfo.email",
View File
@@ -70,6 +70,7 @@ func TestIngnitionFileAppend(t *testing.T) {

func testIgnitionError(t *testing.T, input string, expectedErr *regexp.Regexp) {
	resource.Test(t, resource.TestCase{
+		IsUnitTest: true,
		Providers:  testProviders,
		Steps: []resource.TestStep{
			{
@@ -94,6 +95,7 @@ func testIgnition(t *testing.T, input string, assert func(*types.Config) error)
	}

	resource.Test(t, resource.TestCase{
+		IsUnitTest: true,
		Providers:  testProviders,
		Steps: []resource.TestStep{
			{
View File
@@ -152,6 +152,13 @@ func (c *Config) computeV2Client(region string) (*gophercloud.ServiceClient, err
	})
}

+func (c *Config) dnsV2Client(region string) (*gophercloud.ServiceClient, error) {
+	return openstack.NewDNSV2(c.osClient, gophercloud.EndpointOpts{
+		Region:       region,
+		Availability: c.getEndpointType(),
+	})
+}
+
func (c *Config) imageV2Client(region string) (*gophercloud.ServiceClient, error) {
	return openstack.NewImageServiceV2(c.osClient, gophercloud.EndpointOpts{
		Region:       region,
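
A hedged sketch of a call site in a resource function (region value and error text illustrative):

// minimal sketch, assuming the provider *Config from above
dnsClient, err := config.dnsV2Client("RegionOne")
if err != nil {
	return fmt.Errorf("Error creating OpenStack DNS client: %s", err)
}
// dnsClient is then passed to the zones API (e.g. zones.Create, zones.Get)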
View File
@@ -0,0 +1,31 @@
package openstack
import (
"fmt"
"testing"
"github.com/hashicorp/terraform/helper/acctest"
"github.com/hashicorp/terraform/helper/resource"
)
func TestAccDNSV2Zone_importBasic(t *testing.T) {
var zoneName = fmt.Sprintf("ACPTTEST%s.com.", acctest.RandString(5))
resourceName := "openstack_dns_zone_v2.zone_1"
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheckDNSZoneV2(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckDNSV2ZoneDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccDNSV2Zone_basic(zoneName),
},
resource.TestStep{
ResourceName: resourceName,
ImportState: true,
ImportStateVerify: true,
},
},
})
}
View File
@@ -150,6 +150,7 @@ func Provider() terraform.ResourceProvider {
			"openstack_compute_floatingip_v2":           resourceComputeFloatingIPV2(),
			"openstack_compute_floatingip_associate_v2": resourceComputeFloatingIPAssociateV2(),
			"openstack_compute_volume_attach_v2":        resourceComputeVolumeAttachV2(),
+			"openstack_dns_zone_v2":                     resourceDNSZoneV2(),
			"openstack_fw_firewall_v1":                  resourceFWFirewallV1(),
			"openstack_fw_policy_v1":                    resourceFWPolicyV1(),
			"openstack_fw_rule_v1":                      resourceFWRuleV1(),
View File
@ -1056,19 +1056,26 @@ func getInstanceNetworks(computeClient *gophercloud.ServiceClient, d *schema.Res
log.Printf("[DEBUG] os-tenant-networks disabled.") log.Printf("[DEBUG] os-tenant-networks disabled.")
tenantNetworkExt = false tenantNetworkExt = false
} else { } else {
return nil, err log.Printf("[DEBUG] unexpected os-tenant-networks error: %s", err)
tenantNetworkExt = false
} }
} }
} }
// In some cases, a call to os-tenant-networks might work,
// but the response is invalid. Catch this during extraction.
networkList := []tenantnetworks.Network{}
if tenantNetworkExt {
networkList, err = tenantnetworks.ExtractNetworks(allPages)
if err != nil {
log.Printf("[DEBUG] error extracting os-tenant-networks results: %s", err)
tenantNetworkExt = false
}
}
networkID := "" networkID := ""
networkName := "" networkName := ""
if tenantNetworkExt { if tenantNetworkExt {
networkList, err := tenantnetworks.ExtractNetworks(allPages)
if err != nil {
return nil, err
}
for _, network := range networkList {
if network.Name == rawMap["name"] {
tenantnet = network

View File

@@ -13,6 +13,7 @@ import (
"github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/secgroups"
"github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/volumeattach"
"github.com/gophercloud/gophercloud/openstack/compute/v2/servers"
"github.com/gophercloud/gophercloud/openstack/networking/v2/networks"
"github.com/gophercloud/gophercloud/pagination"
)
@@ -666,6 +667,27 @@ func TestAccComputeV2Instance_timeout(t *testing.T) {
})
}
func TestAccComputeV2Instance_networkNameToID(t *testing.T) {
var instance servers.Server
var network networks.Network
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckComputeV2InstanceDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccComputeV2Instance_networkNameToID,
Check: resource.ComposeTestCheckFunc(
testAccCheckComputeV2InstanceExists("openstack_compute_instance_v2.instance_1", &instance),
testAccCheckNetworkingV2NetworkExists("openstack_networking_network_v2.network_1", &network),
resource.TestCheckResourceAttrPtr(
"openstack_compute_instance_v2.instance_1", "network.1.uuid", &network.ID),
),
},
},
})
}
func testAccCheckComputeV2InstanceDestroy(s *terraform.State) error {
config := testAccProvider.Meta().(*Config)
computeClient, err := config.computeV2Client(OS_REGION_NAME)
@@ -1580,3 +1602,34 @@ resource "openstack_compute_instance_v2" "instance_1" {
}
}
`
var testAccComputeV2Instance_networkNameToID = fmt.Sprintf(`
resource "openstack_networking_network_v2" "network_1" {
name = "network_1"
}
resource "openstack_networking_subnet_v2" "subnet_1" {
name = "subnet_1"
network_id = "${openstack_networking_network_v2.network_1.id}"
cidr = "192.168.1.0/24"
ip_version = 4
enable_dhcp = true
no_gateway = true
}
resource "openstack_compute_instance_v2" "instance_1" {
depends_on = ["openstack_networking_subnet_v2.subnet_1"]
name = "instance_1"
security_groups = ["default"]
network {
uuid = "%s"
}
network {
name = "${openstack_networking_network_v2.network_1.name}"
}
}
`, OS_NETWORK_ID)

View File

@@ -0,0 +1,271 @@
package openstack
import (
"fmt"
"log"
"time"
"github.com/gophercloud/gophercloud"
"github.com/gophercloud/gophercloud/openstack/dns/v2/zones"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/helper/schema"
)
func resourceDNSZoneV2() *schema.Resource {
return &schema.Resource{
Create: resourceDNSZoneV2Create,
Read: resourceDNSZoneV2Read,
Update: resourceDNSZoneV2Update,
Delete: resourceDNSZoneV2Delete,
Importer: &schema.ResourceImporter{
State: schema.ImportStatePassthrough,
},
Timeouts: &schema.ResourceTimeout{
Create: schema.DefaultTimeout(10 * time.Minute),
Update: schema.DefaultTimeout(10 * time.Minute),
Delete: schema.DefaultTimeout(10 * time.Minute),
},
Schema: map[string]*schema.Schema{
"region": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
DefaultFunc: schema.EnvDefaultFunc("OS_REGION_NAME", ""),
},
"name": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"email": &schema.Schema{
Type: schema.TypeString,
Optional: true,
ForceNew: false,
},
"type": &schema.Schema{
Type: schema.TypeString,
Optional: true,
Computed: true,
ForceNew: true,
ValidateFunc: resourceDNSZoneV2ValidType,
},
"attributes": &schema.Schema{
Type: schema.TypeMap,
Optional: true,
ForceNew: true,
},
"ttl": &schema.Schema{
Type: schema.TypeInt,
Optional: true,
Computed: true,
ForceNew: false,
},
"description": &schema.Schema{
Type: schema.TypeString,
Optional: true,
ForceNew: false,
},
"masters": &schema.Schema{
Type: schema.TypeSet,
Optional: true,
ForceNew: false,
Elem: &schema.Schema{Type: schema.TypeString},
},
"value_specs": &schema.Schema{
Type: schema.TypeMap,
Optional: true,
ForceNew: true,
},
},
}
}
func resourceDNSZoneV2Create(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
dnsClient, err := config.dnsV2Client(GetRegion(d))
if err != nil {
return fmt.Errorf("Error creating OpenStack DNS client: %s", err)
}
mastersraw := d.Get("masters").(*schema.Set).List()
masters := make([]string, len(mastersraw))
for i, masterraw := range mastersraw {
masters[i] = masterraw.(string)
}
attrsraw := d.Get("attributes").(map[string]interface{})
attrs := make(map[string]string, len(attrsraw))
for k, v := range attrsraw {
attrs[k] = v.(string)
}
createOpts := ZoneCreateOpts{
zones.CreateOpts{
Name: d.Get("name").(string),
Type: d.Get("type").(string),
Attributes: attrs,
TTL: d.Get("ttl").(int),
Email: d.Get("email").(string),
Description: d.Get("description").(string),
Masters: masters,
},
MapValueSpecs(d),
}
log.Printf("[DEBUG] Create Options: %#v", createOpts)
n, err := zones.Create(dnsClient, createOpts).Extract()
if err != nil {
return fmt.Errorf("Error creating OpenStack DNS zone: %s", err)
}
log.Printf("[DEBUG] Waiting for DNS Zone (%s) to become available", n.ID)
stateConf := &resource.StateChangeConf{
Target: []string{"ACTIVE"},
Pending: []string{"PENDING"},
Refresh: waitForDNSZone(dnsClient, n.ID),
Timeout: d.Timeout(schema.TimeoutCreate),
Delay: 5 * time.Second,
MinTimeout: 3 * time.Second,
}
_, err = stateConf.WaitForState()
if err != nil {
return fmt.Errorf("Error waiting for DNS Zone (%s) to become active: %s", n.ID, err)
}
d.SetId(n.ID)
log.Printf("[DEBUG] Created OpenStack DNS Zone %s: %#v", n.ID, n)
return resourceDNSZoneV2Read(d, meta)
}
func resourceDNSZoneV2Read(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
dnsClient, err := config.dnsV2Client(GetRegion(d))
if err != nil {
return fmt.Errorf("Error creating OpenStack DNS client: %s", err)
}
n, err := zones.Get(dnsClient, d.Id()).Extract()
if err != nil {
return CheckDeleted(d, err, "zone")
}
log.Printf("[DEBUG] Retrieved Zone %s: %#v", d.Id(), n)
d.Set("name", n.Name)
d.Set("email", n.Email)
d.Set("description", n.Description)
d.Set("ttl", n.TTL)
d.Set("type", n.Type)
d.Set("attributes", n.Attributes)
d.Set("masters", n.Masters)
d.Set("region", GetRegion(d))
return nil
}
func resourceDNSZoneV2Update(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
dnsClient, err := config.dnsV2Client(GetRegion(d))
if err != nil {
return fmt.Errorf("Error creating OpenStack DNS client: %s", err)
}
var updateOpts zones.UpdateOpts
if d.HasChange("email") {
updateOpts.Email = d.Get("email").(string)
}
if d.HasChange("ttl") {
updateOpts.TTL = d.Get("ttl").(int)
}
if d.HasChange("masters") {
updateOpts.Masters = d.Get("masters").([]string)
}
if d.HasChange("description") {
updateOpts.Description = d.Get("description").(string)
}
log.Printf("[DEBUG] Updating Zone %s with options: %#v", d.Id(), updateOpts)
_, err = zones.Update(dnsClient, d.Id(), updateOpts).Extract()
if err != nil {
return fmt.Errorf("Error updating OpenStack DNS Zone: %s", err)
}
log.Printf("[DEBUG] Waiting for DNS Zone (%s) to update", d.Id())
stateConf := &resource.StateChangeConf{
Target: []string{"ACTIVE"},
Pending: []string{"PENDING"},
Refresh: waitForDNSZone(dnsClient, d.Id()),
Timeout: d.Timeout(schema.TimeoutUpdate),
Delay: 5 * time.Second,
MinTimeout: 3 * time.Second,
}
_, err = stateConf.WaitForState()
if err != nil {
return fmt.Errorf("Error waiting for DNS Zone (%s) to become active: %s", d.Id(), err)
}
return resourceDNSZoneV2Read(d, meta)
}
func resourceDNSZoneV2Delete(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
dnsClient, err := config.dnsV2Client(GetRegion(d))
if err != nil {
return fmt.Errorf("Error creating OpenStack DNS client: %s", err)
}
_, err = zones.Delete(dnsClient, d.Id()).Extract()
if err != nil {
return fmt.Errorf("Error deleting OpenStack DNS Zone: %s", err)
}
log.Printf("[DEBUG] Waiting for DNS Zone (%s) to become available", d.Id())
stateConf := &resource.StateChangeConf{
Target: []string{"DELETED"},
Pending: []string{"ACTIVE", "PENDING"},
Refresh: waitForDNSZone(dnsClient, d.Id()),
Timeout: d.Timeout(schema.TimeoutDelete),
Delay: 5 * time.Second,
MinTimeout: 3 * time.Second,
}
_, err = stateConf.WaitForState()
if err != nil {
return fmt.Errorf("Error waiting for DNS Zone (%s) to be deleted: %s", d.Id(), err)
}
d.SetId("")
return nil
}
func resourceDNSZoneV2ValidType(v interface{}, k string) (ws []string, errors []error) {
value := v.(string)
validTypes := []string{
"PRIMARY",
"SECONDARY",
}
for _, v := range validTypes {
if value == v {
return
}
}
err := fmt.Errorf("%s must be one of %s", k, validTypes)
errors = append(errors, err)
return
}
func waitForDNSZone(dnsClient *gophercloud.ServiceClient, zoneId string) resource.StateRefreshFunc {
return func() (interface{}, string, error) {
zone, err := zones.Get(dnsClient, zoneId).Extract()
if err != nil {
if _, ok := err.(gophercloud.ErrDefault404); ok {
return zone, "DELETED", nil
}
return nil, "", err
}
log.Printf("[DEBUG] OpenStack DNS Zone (%s) current status: %s", zone.ID, zone.Status)
return zone, zone.Status, nil
}
}

View File

@@ -0,0 +1,196 @@
package openstack
import (
"fmt"
"os"
"regexp"
"testing"
"github.com/hashicorp/terraform/helper/acctest"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
"github.com/gophercloud/gophercloud/openstack/dns/v2/zones"
)
func TestAccDNSV2Zone_basic(t *testing.T) {
var zone zones.Zone
var zoneName = fmt.Sprintf("ACPTTEST%s.com.", acctest.RandString(5))
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheckDNSZoneV2(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckDNSV2ZoneDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccDNSV2Zone_basic(zoneName),
Check: resource.ComposeTestCheckFunc(
testAccCheckDNSV2ZoneExists("openstack_dns_zone_v2.zone_1", &zone),
resource.TestCheckResourceAttr(
"openstack_dns_zone_v2.zone_1", "description", "a zone"),
),
},
resource.TestStep{
Config: testAccDNSV2Zone_update(zoneName),
Check: resource.ComposeTestCheckFunc(
resource.TestCheckResourceAttr("openstack_dns_zone_v2.zone_1", "name", zoneName),
resource.TestCheckResourceAttr("openstack_dns_zone_v2.zone_1", "email", "email2@example.com"),
resource.TestCheckResourceAttr("openstack_dns_zone_v2.zone_1", "ttl", "6000"),
resource.TestCheckResourceAttr("openstack_dns_zone_v2.zone_1", "type", "PRIMARY"),
resource.TestCheckResourceAttr(
"openstack_dns_zone_v2.zone_1", "description", "an updated zone"),
),
},
},
})
}
func TestAccDNSV2Zone_readTTL(t *testing.T) {
var zone zones.Zone
var zoneName = fmt.Sprintf("ACPTTEST%s.com.", acctest.RandString(5))
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheckDNSZoneV2(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckDNSV2ZoneDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccDNSV2Zone_readTTL(zoneName),
Check: resource.ComposeTestCheckFunc(
testAccCheckDNSV2ZoneExists("openstack_dns_zone_v2.zone_1", &zone),
resource.TestCheckResourceAttr("openstack_dns_zone_v2.zone_1", "type", "PRIMARY"),
resource.TestMatchResourceAttr(
"openstack_dns_zone_v2.zone_1", "ttl", regexp.MustCompile("^[0-9]+$")),
),
},
},
})
}
func TestAccDNSV2Zone_timeout(t *testing.T) {
var zone zones.Zone
var zoneName = fmt.Sprintf("ACPTTEST%s.com.", acctest.RandString(5))
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheckDNSZoneV2(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckDNSV2ZoneDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccDNSV2Zone_timeout(zoneName),
Check: resource.ComposeTestCheckFunc(
testAccCheckDNSV2ZoneExists("openstack_dns_zone_v2.zone_1", &zone),
),
},
},
})
}
func testAccCheckDNSV2ZoneDestroy(s *terraform.State) error {
config := testAccProvider.Meta().(*Config)
dnsClient, err := config.dnsV2Client(OS_REGION_NAME)
if err != nil {
return fmt.Errorf("Error creating OpenStack DNS client: %s", err)
}
for _, rs := range s.RootModule().Resources {
if rs.Type != "openstack_dns_zone_v2" {
continue
}
_, err := zones.Get(dnsClient, rs.Primary.ID).Extract()
if err == nil {
return fmt.Errorf("Zone still exists")
}
}
return nil
}
func testAccCheckDNSV2ZoneExists(n string, zone *zones.Zone) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.ID == "" {
return fmt.Errorf("No ID is set")
}
config := testAccProvider.Meta().(*Config)
dnsClient, err := config.dnsV2Client(OS_REGION_NAME)
if err != nil {
return fmt.Errorf("Error creating OpenStack DNS client: %s", err)
}
found, err := zones.Get(dnsClient, rs.Primary.ID).Extract()
if err != nil {
return err
}
if found.ID != rs.Primary.ID {
return fmt.Errorf("Zone not found")
}
*zone = *found
return nil
}
}
func testAccPreCheckDNSZoneV2(t *testing.T) {
v := os.Getenv("OS_AUTH_URL")
if v == "" {
t.Fatal("OS_AUTH_URL must be set for acceptance tests")
}
}
func testAccDNSV2Zone_basic(zoneName string) string {
return fmt.Sprintf(`
resource "openstack_dns_zone_v2" "zone_1" {
name = "%s"
email = "email1@example.com"
description = "a zone"
ttl = 3000
type = "PRIMARY"
}
`, zoneName)
}
func testAccDNSV2Zone_update(zoneName string) string {
return fmt.Sprintf(`
resource "openstack_dns_zone_v2" "zone_1" {
name = "%s"
email = "email2@example.com"
description = "an updated zone"
ttl = 6000
type = "PRIMARY"
}
`, zoneName)
}
func testAccDNSV2Zone_readTTL(zoneName string) string {
return fmt.Sprintf(`
resource "openstack_dns_zone_v2" "zone_1" {
name = "%s"
email = "email1@example.com"
}
`, zoneName)
}
func testAccDNSV2Zone_timeout(zoneName string) string {
return fmt.Sprintf(`
resource "openstack_dns_zone_v2" "zone_1" {
name = "%s"
email = "email@example.com"
ttl = 3000
timeouts {
create = "5m"
update = "5m"
delete = "5m"
}
}
`, zoneName)
}

View File

@@ -3,6 +3,7 @@ package openstack
import (
"bytes"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"log"
@@ -11,6 +12,7 @@
"github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/keypairs"
"github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/servergroups"
"github.com/gophercloud/gophercloud/openstack/dns/v2/zones"
"github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/fwaas/firewalls"
"github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/fwaas/policies"
"github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/fwaas/rules"
@@ -288,3 +290,28 @@ func (opts SubnetCreateOpts) ToSubnetCreateMap() (map[string]interface{}, error)
return b, nil
}
// ZoneCreateOpts represents the attributes used when creating a new DNS zone.
type ZoneCreateOpts struct {
zones.CreateOpts
ValueSpecs map[string]string `json:"value_specs,omitempty"`
}
// ToZoneCreateMap casts a CreateOpts struct to a map.
// It overrides zones.ToZoneCreateMap to add the ValueSpecs field.
func (opts ZoneCreateOpts) ToZoneCreateMap() (map[string]interface{}, error) {
b, err := BuildRequest(opts, "")
if err != nil {
return nil, err
}
if m, ok := b[""].(map[string]interface{}); ok {
if opts.TTL > 0 {
m["ttl"] = opts.TTL
}
return m, nil
}
return nil, fmt.Errorf("Expected map but got %T", b[""])
}

View File

@@ -1,7 +1,6 @@
package postgresql
import (
"bytes"
"fmt"
"github.com/hashicorp/errwrap"
@@ -109,13 +108,5 @@ func providerConfigure(d *schema.ResourceData) (interface{}, error) {
}
func tfAppName() string {
const VersionPrerelease = terraform.VersionPrerelease
return fmt.Sprintf("Terraform v%s", terraform.VersionString())
var versionString bytes.Buffer
fmt.Fprintf(&versionString, "Terraform v%s", terraform.Version)
if terraform.VersionPrerelease != "" {
fmt.Fprintf(&versionString, "-%s", terraform.VersionPrerelease)
}
return versionString.String()
}

View File

@@ -46,6 +46,11 @@ func testResource() *schema.Resource {
Computed: true,
ForceNew: true,
},
"computed_from_required": {
Type: schema.TypeString,
Computed: true,
ForceNew: true,
},
"computed_read_only_force_new": { "computed_read_only_force_new": {
Type: schema.TypeString, Type: schema.TypeString,
Computed: true, Computed: true,
@@ -129,6 +134,8 @@ func testResourceCreate(d *schema.ResourceData, meta interface{}) error {
return fmt.Errorf("Missing attribute 'required_map', but it's required!")
}
d.Set("computed_from_required", d.Get("required"))
return testResourceRead(d, meta)
}

View File

@@ -0,0 +1,79 @@
package test
import (
"errors"
"testing"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
)
// This is actually a test of some core functionality in conjunction with
// helper/schema, rather than of the test provider itself.
//
// Here we're just verifying that unknown splats get flattened when assigned
// to list and set attributes. A variety of other situations are tested in
// an apply context test in the core package, but for this part we lean on
// helper/schema and thus need to exercise it at a higher level.
func TestSplatFlatten(t *testing.T) {
resource.UnitTest(t, resource.TestCase{
Providers: testAccProviders,
CheckDestroy: testAccCheckResourceDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: `
resource "test_resource" "source" {
required = "foo ${count.index}"
required_map = {
key = "value"
}
count = 3
}
resource "test_resource" "splatted" {
# This legacy form of splatting into a list is still supported for
# backward-compatibility but no longer suggested.
set = ["${test_resource.source.*.computed_from_required}"]
list = ["${test_resource.source.*.computed_from_required}"]
required = "yep"
required_map = {
key = "value"
}
}
`,
Check: func(s *terraform.State) error {
gotAttrs := s.RootModule().Resources["test_resource.splatted"].Primary.Attributes
t.Logf("attrs %#v", gotAttrs)
wantAttrs := map[string]string{
"list.#": "3",
"list.0": "foo 0",
"list.1": "foo 1",
"list.2": "foo 2",
// This depends on the default set hash implementation.
// If that changes, these keys will need to be updated.
"set.#": "3",
"set.1136855734": "foo 0",
"set.885275168": "foo 1",
"set.2915920794": "foo 2",
}
errored := false
for k, want := range wantAttrs {
got := gotAttrs[k]
if got != want {
t.Errorf("Wrong %s value %q; want %q", k, got, want)
errored = true
}
}
if errored {
return errors.New("incorrect attribute values")
}
return nil
},
},
},
})
}

View File

@@ -1746,7 +1746,7 @@ process.
const successBackendLegacyUnset = `
Terraform has successfully migrated from legacy remote state to your
configured remote state.
configured backend (%q).
`
const successBackendReconfigureWithLegacy = `

View File

@@ -705,17 +705,6 @@ func (c *Config) Validate() error {
}
}
// Check that all variables are in the proper context
for source, rc := range c.rawConfigs() {
walker := &interpolationWalker{
ContextF: c.validateVarContextFn(source, &errs),
}
if err := reflectwalk.Walk(rc.Raw, walker); err != nil {
errs = append(errs, fmt.Errorf(
"%s: error reading config: %s", source, err))
}
}
// Validate the self variable
for source, rc := range c.rawConfigs() {
// Ignore provisioners. This is a pretty brittle way to do this,
@@ -787,57 +776,6 @@ func (c *Config) rawConfigs() map[string]*RawConfig {
return result
}
func (c *Config) validateVarContextFn(
source string, errs *[]error) interpolationWalkerContextFunc {
return func(loc reflectwalk.Location, node ast.Node) {
// If we're in a slice element, then its fine, since you can do
// anything in there.
if loc == reflectwalk.SliceElem {
return
}
// Otherwise, let's check if there is a splat resource variable
// at the top level in here. We do this by doing a transform that
// replaces everything with a noop node unless its a variable
// access or concat. This should turn the AST into a flat tree
// of Concat(Noop, ...). If there are any variables left that are
// multi-access, then its still broken.
node = node.Accept(func(n ast.Node) ast.Node {
// If it is a concat or variable access, we allow it.
switch n.(type) {
case *ast.Output:
return n
case *ast.VariableAccess:
return n
}
// Otherwise, noop
return &noopNode{}
})
vars, err := DetectVariables(node)
if err != nil {
// Ignore it since this will be caught during parse. This
// actually probably should never happen by the time this
// is called, but its okay.
return
}
for _, v := range vars {
rv, ok := v.(*ResourceVariable)
if !ok {
return
}
if rv.Multi && rv.Index == -1 {
*errs = append(*errs, fmt.Errorf(
"%s: use of the splat ('*') operator must be wrapped in a list declaration",
source))
}
}
}
}
func (c *Config) validateDependsOn(
n string,
v []string,

View File

@@ -593,20 +593,6 @@ func TestConfigValidate_varMultiExactNonSlice(t *testing.T) {
}
}
func TestConfigValidate_varMultiNonSlice(t *testing.T) {
c := testConfig(t, "validate-var-multi-non-slice")
if err := c.Validate(); err == nil {
t.Fatal("should not be valid")
}
}
func TestConfigValidate_varMultiNonSliceProvisioner(t *testing.T) {
c := testConfig(t, "validate-var-multi-non-slice-provisioner")
if err := c.Validate(); err == nil {
t.Fatal("should not be valid")
}
}
func TestConfigValidate_varMultiFunctionCall(t *testing.T) {
c := testConfig(t, "validate-var-multi-func")
if err := c.Validate(); err != nil {

View File

@ -1,9 +0,0 @@
resource "aws_instance" "foo" {
count = 3
}
resource "aws_instance" "bar" {
provisioner "local-exec" {
foo = "${aws_instance.foo.*.id}"
}
}

View File

@ -1,7 +0,0 @@
resource "aws_instance" "foo" {
count = 3
}
resource "aws_instance" "bar" {
foo = "${aws_instance.foo.*.id}"
}

View File

@@ -0,0 +1,44 @@
# Enable encryption on a running Linux VM
This Terraform template was based on [this](https://github.com/Azure/azure-quickstart-templates/tree/master/201-encrypt-running-linux-vm) Azure Quickstart Template. Changes to the ARM template that may have occurred since the creation of this example may not be reflected in this Terraform template.
This template enables encryption on a running Linux VM using an AAD client secret. This template assumes that the VM is located in the same region as the resource group. If not, please edit the template to pass an appropriate location for the VM sub-resources.
## Prerequisites:
Azure Disk Encryption securely stores the encryption secrets in a specified Azure Key Vault.
Create the Key Vault and assign appropriate access policies. You may use this script to ensure that your vault is properly configured: [AzureDiskEncryptionPreRequisiteSetup.ps1](https://github.com/Azure/azure-powershell/blob/10fc37e9141af3fde6f6f79b9d46339b73cf847d/src/ResourceManager/Compute/Commands.Compute/Extension/AzureDiskEncryption/Scripts/AzureDiskEncryptionPreRequisiteSetup.ps1)
Use the PowerShell cmdlet below to get the `key_vault_secret_url` and `key_vault_resource_id`.
```
Get-AzureRmKeyVault -VaultName $KeyVaultName -ResourceGroupName $rgname
```
References:
- [White paper](https://azure.microsoft.com/en-us/documentation/articles/azure-security-disk-encryption/)
- [Explore Azure Disk Encryption with Azure Powershell](https://blogs.msdn.microsoft.com/azuresecurity/2015/11/16/explore-azure-disk-encryption-with-azure-powershell/)
- [Explore Azure Disk Encryption with Azure PowerShell Part 2](http://blogs.msdn.com/b/azuresecurity/archive/2015/11/21/explore-azure-disk-encryption-with-azure-powershell-part-2.aspx)
## main.tf
The `main.tf` file contains the actual resources that will be deployed. It also contains the Azure Resource Group definition and any defined variables.
## outputs.tf
This data is output when `terraform apply` is called, and can be queried using the `terraform output` command.
## provider.tf
You may leave the provider block in the `main.tf`, as it is in this template, or you can create a file called `provider.tf` and add it to your `.gitignore` file.
Azure requires that an application is added to Azure Active Directory to generate the `client_id`, `client_secret`, and `tenant_id` needed by Terraform (`subscription_id` can be recovered from your Azure account details). Please go [here](https://www.terraform.io/docs/providers/azurerm/) for full instructions on how to create this to populate your `provider.tf` file.
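For reference, a minimal `provider.tf` is sketched below; the placeholder values are assumptions, so substitute the credentials generated for your AAD application:

```
provider "azurerm" {
  subscription_id = "REPLACE-WITH-YOUR-SUBSCRIPTION-ID"
  client_id       = "REPLACE-WITH-YOUR-CLIENT-ID"
  client_secret   = "REPLACE-WITH-YOUR-CLIENT-SECRET"
  tenant_id       = "REPLACE-WITH-YOUR-TENANT-ID"
}
```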
## terraform.tfvars
If a `terraform.tfvars` file is present in the current directory, Terraform automatically loads it to populate variables. We don't recommend saving usernames and passwords to version control, but you can create a local secret variables file and use `-var-file` to load it.
If you are committing this template to source control, please ensure that you add this file to your `.gitignore` file.
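For example, a local secret variables file might look like the sketch below; the variable names come from `variables.tf`, and all values are placeholders:

```
resource_group = "myresourcegroup"
hostname       = "myvm"
admin_username = "vmadmin"
admin_password = "REPLACE-WITH-A-STRONG-PASSWORD"
```

You would then load it with `terraform plan -var-file=<your-secret-file>.tfvars`.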
## variables.tf
The `variables.tf` file contains all of the input parameters that the user can specify when deploying this Terraform template.
![graph](/examples/azure-encrypt-running-linux-vm/graph.png)

View File

@@ -0,0 +1,60 @@
#!/bin/bash
set -o errexit -o nounset
docker run --rm -it \
-e ARM_CLIENT_ID \
-e ARM_CLIENT_SECRET \
-e ARM_SUBSCRIPTION_ID \
-e ARM_TENANT_ID \
-e AAD_CLIENT_ID \
-e AAD_CLIENT_SECRET \
-e KEY_ENCRYPTION_KEY_URL \
-e KEY_VAULT_RESOURCE_ID \
-v $(pwd):/data \
--workdir=/data \
--entrypoint "/bin/sh" \
hashicorp/terraform:light \
-c "/bin/terraform get; \
/bin/terraform validate; \
/bin/terraform plan -out=out.tfplan \
-var resource_group=$KEY \
-var hostname=$KEY \
-var admin_username=$KEY \
-var admin_password=$PASSWORD \
-var passphrase=$PASSWORD \
-var key_vault_name=$KEY_VAULT_NAME \
-var aad_client_id=$AAD_CLIENT_ID \
-var aad_client_secret=$AAD_CLIENT_SECRET \
-var key_encryption_key_url=$KEY_ENCRYPTION_KEY_URL \
-var key_vault_resource_id=$KEY_VAULT_RESOURCE_ID; \
/bin/terraform apply out.tfplan"
# check deployed azure resources via azure-cli
docker run --rm -it \
azuresdk/azure-cli-python \
sh -c "az login --service-principal -u $ARM_CLIENT_ID -p $ARM_CLIENT_SECRET --tenant $ARM_TENANT_ID > /dev/null; \
az vm show -g $KEY -n $KEY; \
az vm encryption show -g $KEY -n $KEY"
# cleanup deployed azure resources via terraform
docker run --rm -it \
-e ARM_CLIENT_ID \
-e ARM_CLIENT_SECRET \
-e ARM_SUBSCRIPTION_ID \
-e ARM_TENANT_ID \
-v $(pwd):/data \
--workdir=/data \
--entrypoint "/bin/sh" \
hashicorp/terraform:light \
-c "/bin/terraform destroy -force \
-var resource_group=$KEY \
-var hostname=$KEY \
-var admin_username=$KEY \
-var admin_password=$PASSWORD \
-var passphrase=$PASSWORD \
-var key_vault_name=$KEY_VAULT_NAME \
-var aad_client_id=$AAD_CLIENT_ID \
-var aad_client_secret=$AAD_CLIENT_SECRET \
-var key_encryption_key_url=$KEY_ENCRYPTION_KEY_URL \
-var key_vault_resource_id=$KEY_VAULT_RESOURCE_ID;"

View File

@@ -0,0 +1,17 @@
#!/bin/bash
set -o errexit -o nounset
if docker -v; then
# generate a unique string for CI deployment
export KEY=$(cat /dev/urandom | env LC_CTYPE=C tr -cd 'a-z' | head -c 12)
export PASSWORD=$KEY$(cat /dev/urandom | env LC_CTYPE=C tr -cd 'A-Z' | head -c 2)$(cat /dev/urandom | env LC_CTYPE=C tr -cd '0-9' | head -c 2)
export EXISTING_RESOURCE_GROUP=permanent
export KEY_VAULT_NAME=permanentkeyvault
/bin/sh ./deploy.ci.sh
else
echo "Docker is used to run terraform commands, please install before run: https://docs.docker.com/docker-for-mac/install/"
fi

Binary file not shown (new image added, 405 KiB).

View File

@@ -0,0 +1,223 @@
# provider "azurerm" {
# subscription_id = "REPLACE-WITH-YOUR-SUBSCRIPTION-ID"
# client_id = "REPLACE-WITH-YOUR-CLIENT-ID"
# client_secret = "REPLACE-WITH-YOUR-CLIENT-SECRET"
# tenant_id = "REPLACE-WITH-YOUR-TENANT-ID"
# }
resource "azurerm_resource_group" "rg" {
name = "${var.resource_group}"
location = "${var.location}"
}
resource "azurerm_virtual_network" "vnet" {
name = "${var.hostname}vnet"
location = "${var.location}"
address_space = ["${var.address_space}"]
resource_group_name = "${azurerm_resource_group.rg.name}"
}
resource "azurerm_subnet" "subnet" {
name = "${var.hostname}subnet"
virtual_network_name = "${azurerm_virtual_network.vnet.name}"
resource_group_name = "${azurerm_resource_group.rg.name}"
address_prefix = "${var.subnet_prefix}"
}
resource "azurerm_network_interface" "nic" {
name = "nic"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
ip_configuration {
name = "ipconfig"
subnet_id = "${azurerm_subnet.subnet.id}"
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_storage_account" "stor" {
name = "${var.hostname}stor"
resource_group_name = "${azurerm_resource_group.rg.name}"
location = "${azurerm_resource_group.rg.location}"
account_type = "${var.storage_account_type}"
}
resource "azurerm_virtual_machine" "vm" {
name = "${var.hostname}"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
vm_size = "${var.vm_size}"
network_interface_ids = ["${azurerm_network_interface.nic.id}"]
storage_image_reference {
publisher = "${var.image_publisher}"
offer = "${var.image_offer}"
sku = "${var.image_sku}"
version = "${var.image_version}"
}
storage_os_disk {
name = "${var.hostname}osdisk"
create_option = "FromImage"
disk_size_gb = "15"
}
os_profile {
computer_name = "${var.hostname}"
admin_username = "${var.admin_username}"
admin_password = "${var.admin_password}"
}
os_profile_linux_config {
disable_password_authentication = false
}
}
resource "azurerm_template_deployment" "linux_vm" {
name = "encrypt"
resource_group_name = "${azurerm_resource_group.rg.name}"
deployment_mode = "Incremental"
depends_on = ["azurerm_virtual_machine.vm"]
template_body = <<DEPLOY
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"aadClientID": {
"defaultValue": "${var.aad_client_id}",
"type": "string"
},
"aadClientSecret": {
"defaultValue": "${var.aad_client_secret}",
"type": "string"
},
"diskFormatQuery": {
"defaultValue": "",
"type": "string"
},
"encryptionOperation": {
"allowedValues": [ "EnableEncryption", "EnableEncryptionFormat" ],
"defaultValue": "${var.encryption_operation}",
"type": "string"
},
"volumeType": {
"allowedValues": [ "OS", "Data", "All" ],
"defaultValue": "${var.volume_type}",
"type": "string"
},
"keyEncryptionKeyURL": {
"defaultValue": "${var.key_encryption_key_url}",
"type": "string"
},
"keyVaultName": {
"defaultValue": "${var.key_vault_name}",
"type": "string"
},
"keyVaultResourceGroup": {
"defaultValue": "${azurerm_resource_group.rg.name}",
"type": "string"
},
"passphrase": {
"defaultValue": "${var.passphrase}",
"type": "string"
},
"sequenceVersion": {
"defaultValue": "${var.sequence_version}",
"type": "string"
},
"useKek": {
"allowedValues": [
"nokek",
"kek"
],
"defaultValue": "${var.use_kek}",
"type": "string"
},
"vmName": {
"defaultValue": "${azurerm_virtual_machine.vm.name}",
"type": "string"
},
"_artifactsLocation": {
"type": "string",
"defaultValue": "${var.artifacts_location}"
},
"_artifactsLocationSasToken": {
"type": "string",
"defaultValue": "${var.artifacts_location_sas_token}"
}
},
"variables": {
"extensionName": "${var.extension_name}",
"extensionVersion": "0.1",
"keyEncryptionAlgorithm": "RSA-OAEP",
"keyVaultURL": "https://${var.key_vault_name}.vault.azure.net/",
"keyVaultResourceID": "${var.key_vault_resource_id}",
"updateVmUrl": "${var.artifacts_location}/201-encrypt-running-linux-vm/updatevm-${var.use_kek}.json${var.artifacts_location_sas_token}"
},
"resources": [
{
"type": "Microsoft.Compute/virtualMachines/extensions",
"name": "[concat(parameters('vmName'),'/', variables('extensionName'))]",
"apiVersion": "2015-06-15",
"location": "[resourceGroup().location]",
"properties": {
"protectedSettings": {
"AADClientSecret": "[parameters('aadClientSecret')]",
"Passphrase": "[parameters('passphrase')]"
},
"publisher": "Microsoft.Azure.Security",
"settings": {
"AADClientID": "[parameters('aadClientID')]",
"DiskFormatQuery": "[parameters('diskFormatQuery')]",
"EncryptionOperation": "[parameters('encryptionOperation')]",
"KeyEncryptionAlgorithm": "[variables('keyEncryptionAlgorithm')]",
"KeyEncryptionKeyURL": "[parameters('keyEncryptionKeyURL')]",
"KeyVaultURL": "[variables('keyVaultURL')]",
"SequenceVersion": "[parameters('sequenceVersion')]",
"VolumeType": "[parameters('volumeType')]"
},
"type": "AzureDiskEncryptionForLinux",
"typeHandlerVersion": "[variables('extensionVersion')]"
}
},
{
"apiVersion": "2015-01-01",
"dependsOn": [
"[resourceId('Microsoft.Compute/virtualMachines/extensions', parameters('vmName'), variables('extensionName'))]"
],
"name": "[concat(parameters('vmName'), 'updateVm')]",
"type": "Microsoft.Resources/deployments",
"properties": {
"mode": "Incremental",
"parameters": {
"keyEncryptionKeyURL": {
"value": "[parameters('keyEncryptionKeyURL')]"
},
"keyVaultResourceID": {
"value": "[variables('keyVaultResourceID')]"
},
"keyVaultSecretUrl": {
"value": "[reference(resourceId('Microsoft.Compute/virtualMachines/extensions', parameters('vmName'), variables('extensionName'))).instanceView.statuses[0].message]"
},
"vmName": {
"value": "[parameters('vmName')]"
}
},
"templateLink": {
"contentVersion": "1.0.0.0",
"uri": "[variables('updateVmUrl')]"
}
}
}
],
"outputs": {
"BitLockerKey": {
"type": "string",
"value": "[reference(resourceId('Microsoft.Compute/virtualMachines/extensions', parameters('vmName'), variables('extensionName'))).instanceView.statuses[0].message]"
}
}
}
DEPLOY
}

View File

@@ -0,0 +1,8 @@
output "hostname" {
value = "${var.hostname}"
}
output "BitLockerKey" {
value = "${azurerm_template_deployment.linux_vm.outputs["BitLockerKey"]}"
sensitive = true
}

View File

@@ -0,0 +1,125 @@
variable "resource_group" {
description = "Resource group name into which your new virtual machine will go."
}
variable "location" {
description = "The location/region where the virtual network is created. Changing this forces a new resource to be created."
default = "southcentralus"
}
variable "hostname" {
description = "Used to form various names including the key vault, vm, and storage. Must be unique."
}
variable "address_space" {
description = "The address space that is used by the virtual network. You can supply more than one address space. Changing this forces a new resource to be created."
default = "10.0.0.0/24"
}
variable "subnet_prefix" {
description = "The address prefix to use for the subnet."
default = "10.0.0.0/24"
}
variable "storage_account_type" {
description = "Defines the type of storage account to be created. Valid options are Standard_LRS, Standard_ZRS, Standard_GRS, Standard_RAGRS, Premium_LRS. Changing this is sometimes valid - see the Azure documentation for more information on which types of accounts can be converted into other types."
default = "Standard_LRS"
}
variable "vm_size" {
description = "Specifies the size of the virtual machine. This must be the same as the vm image from which you are copying."
default = "Standard_A0"
}
variable "image_publisher" {
description = "name of the publisher of the image (az vm image list)"
default = "Canonical"
}
variable "image_offer" {
description = "the name of the offer (az vm image list)"
default = "UbuntuServer"
}
variable "image_sku" {
description = "image sku to apply (az vm image list)"
default = "16.04-LTS"
}
variable "image_version" {
description = "version of the image to apply (az vm image list)"
default = "latest"
}
variable "admin_username" {
description = "administrator user name for the vm"
default = "vmadmin"
}
variable "admin_password" {
description = "administrator password for the vm (recommended to disable password auth)"
}
variable "aad_client_id" {
description = "Client ID of AAD app which has permissions to KeyVault"
}
variable "aad_client_secret" {
description = "Client Secret of AAD app which has permissions to KeyVault"
}
variable "disk_format_query" {
description = "The query string used to identify the disks to format and encrypt. This parameter only works when you set the EncryptionOperation as EnableEncryptionFormat. For example, passing [{\"dev_path\":\"/dev/md0\",\"name\":\"encryptedraid\",\"file_system\":\"ext4\"}] will format /dev/md0, encrypt it and mount it at /mnt/dataraid. This parameter should only be used for RAID devices. The specified device must not have any existing filesystem on it."
default = ""
}
variable "encryption_operation" {
description = "EnableEncryption would encrypt the disks in place and EnableEncryptionFormat would format the disks directly"
default = "EnableEncryption"
}
variable "volume_type" {
description = "Defines which drives should be encrypted. OS encryption is supported on RHEL 7.2, CentOS 7.2 & Ubuntu 16.04. Allowed values: OS, Data, All"
default = "All"
}
variable "key_encryption_key_url" {
description = "URL of the KeyEncryptionKey used to encrypt the volume encryption key"
}
variable "key_vault_resource_id" {
description = "uri of Azure key vault resource"
}
variable "key_vault_name" {
description = "name of Azure key vault resource"
}
variable "passphrase" {
description = "The passphrase for the disks"
}
variable "extension_name" {
description = "the name of the vm extension"
default = "AzureDiskEncryptionForLinux"
}
variable "sequence_version" {
description = "sequence version of the bitlocker operation. Increment this everytime an operation is performed on the same VM"
default = 1
}
variable "use_kek" {
description = "Select kek if the secret should be encrypted with a key encryption key. Allowed values: kek, nokek"
default = "kek"
}
variable "artifacts_location" {
description = "The base URI where artifacts required by this template are located. When the template is deployed using the accompanying scripts, a private location in the subscription will be used and this value will be automatically generated."
default = "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master"
}
variable "artifacts_location_sas_token" {
description = "The sasToken required to access _artifactsLocation. When the template is deployed using the accompanying scripts, a sasToken will be automatically generated."
default = ""
}

View File

@@ -0,0 +1,22 @@
# Provision a SQL Database
This sample creates a SQL Database at the "Basic" service level. The template can support other tiers of service; details for each service level can be found here:
[SQL Database Pricing](https://azure.microsoft.com/en-us/pricing/details/sql-database/)
## main.tf
The `main.tf` file contains the actual resources that will be deployed. It also contains the Azure Resource Group definition and any defined variables.
## outputs.tf
This data is output when `terraform apply` is called, and can be queried using the `terraform output` command.
## provider.tf
Azure requires that an application is added to Azure Active Directory to generate the `client_id`, `client_secret`, and `tenant_id` needed by Terraform (`subscription_id` can be recovered from your Azure account details). Please go [here](https://www.terraform.io/docs/providers/azurerm/) for full instructions on how to create this to populate your `provider.tf` file.
## terraform.tfvars
If a `terraform.tfvars` file is present in the current directory, Terraform automatically loads it to populate variables. We don't recommend saving usernames and passwords to version control, but you can create a local secret variables file and use `-var-file` to load it.
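For example, a local secret variables file for this template might look like the sketch below; the variable names come from `variables.tf`, and all values are placeholders:

```
resource_group = "myresourcegroup"
sql_admin      = "sqladmin"
sql_password   = "REPLACE-WITH-A-STRONG-PASSWORD"
```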
## variables.tf
The `variables.tf` file contains all of the input parameters that the user can specify when deploying this Terraform template.
![graph](/examples/azure-sql-database/graph.png)

View File

@@ -0,0 +1,37 @@
#!/bin/bash
set -o errexit -o nounset
docker run --rm -it \
-e ARM_CLIENT_ID \
-e ARM_CLIENT_SECRET \
-e ARM_SUBSCRIPTION_ID \
-e ARM_TENANT_ID \
-v $(pwd):/data \
--workdir=/data \
--entrypoint "/bin/sh" \
hashicorp/terraform:light \
-c "/bin/terraform get; \
/bin/terraform validate; \
/bin/terraform plan -out=out.tfplan -var resource_group=$KEY -var sql_admin=$KEY -var sql_password=a!@abcd9753w0w@h@12; \
/bin/terraform apply out.tfplan; \
/bin/terraform show;"
# check that resources exist via azure cli
docker run --rm -it \
azuresdk/azure-cli-python \
sh -c "az login --service-principal -u $ARM_CLIENT_ID -p $ARM_CLIENT_SECRET --tenant $ARM_TENANT_ID > /dev/null; \
az sql db show -g $KEY -n MySQLDatabase -s $KEY-sqlsvr; \
az sql server show -g $KEY -n $KEY-sqlsvr;"
# cleanup deployed azure resources via terraform
docker run --rm -it \
-e ARM_CLIENT_ID \
-e ARM_CLIENT_SECRET \
-e ARM_SUBSCRIPTION_ID \
-e ARM_TENANT_ID \
-v $(pwd):/data \
--workdir=/data \
--entrypoint "/bin/sh" \
hashicorp/terraform:light \
-c "/bin/terraform destroy -force -var resource_group=$KEY -var sql_admin=$KEY -var sql_password=a!@abcd9753w0w@h@12;"

View File

@@ -0,0 +1,16 @@
#!/bin/bash
set -o errexit -o nounset
if docker -v; then
# generate a unique string for CI deployment
export KEY=$(cat /dev/urandom | env LC_CTYPE=C tr -cd 'a-z' | head -c 12)
export PASSWORD='a!@abcd9753w0w@h@12'
# =$KEY$(cat /dev/urandom | env LC_CTYPE=C tr -cd 'A-Z' | head -c 2)$(cat /dev/urandom | env LC_CTYPE=C tr -cd '0-9' | head -c 2)
/bin/sh ./deploy.ci.sh
else
echo "Docker is used to run terraform commands, please install before run: https://docs.docker.com/docker-for-mac/install/"
fi

Binary file not shown (new image added, 100 KiB).

View File

@@ -0,0 +1,39 @@
# provider "azurerm" {
# subscription_id = "REPLACE-WITH-YOUR-SUBSCRIPTION-ID"
# client_id = "REPLACE-WITH-YOUR-CLIENT-ID"
# client_secret = "REPLACE-WITH-YOUR-CLIENT-SECRET"
# tenant_id = "REPLACE-WITH-YOUR-TENANT-ID"
# }
resource "azurerm_resource_group" "rg" {
name = "${var.resource_group}"
location = "${var.location}"
}
resource "azurerm_sql_database" "db" {
name = "mysqldatabase"
resource_group_name = "${azurerm_resource_group.rg.name}"
location = "${var.location}"
edition = "Basic"
collation = "SQL_Latin1_General_CP1_CI_AS"
create_mode = "Default"
requested_service_objective_name = "Basic"
server_name = "${azurerm_sql_server.server.name}"
}
resource "azurerm_sql_server" "server" {
name = "${var.resource_group}-sqlsvr"
resource_group_name = "${azurerm_resource_group.rg.name}"
location = "${var.location}"
version = "12.0"
administrator_login = "${var.sql_admin}"
administrator_login_password = "${var.sql_password}"
}
resource "azurerm_sql_firewall_rule" "fw" {
name = "firewallrules"
resource_group_name = "${azurerm_resource_group.rg.name}"
server_name = "${azurerm_sql_server.server.name}"
start_ip_address = "0.0.0.0"
end_ip_address = "0.0.0.0"
}

View File

@@ -0,0 +1,7 @@
output "database_name" {
value = "${azurerm_sql_database.db.name}"
}
output "sql_server_fqdn" {
value = "${azurerm_sql_server.server.fully_qualified_domain_name}"
}

View File

@@ -0,0 +1,16 @@
variable "resource_group" {
description = "The name of the resource group in which to create the virtual network."
}
variable "location" {
description = "The location/region where the virtual network is created. Changing this forces a new resource to be created."
default = "southcentralus"
}
variable "sql_admin" {
description = "The administrator username of the SQL Server."
}
variable "sql_password" {
description = "The administrator password of the SQL Server."
}

View File

@@ -41,9 +41,10 @@ export CGO_ENABLED=0
# Allow LD_FLAGS to be appended during development compilations
LD_FLAGS="-X main.GitCommit=${GIT_COMMIT}${GIT_DIRTY} $LD_FLAGS"
# In relase mode we don't want debug information in the binary
# In release mode we don't want debug information in the binary
if [[ -n "${TF_RELEASE}" ]]; then
LD_FLAGS="-X main.GitCommit=${GIT_COMMIT}${GIT_DIRTY} -s -w"
LD_FLAGS="-X main.GitCommit=${GIT_COMMIT}${GIT_DIRTY} -X github.com/hashicorp/terraform/terraform.VersionPrerelease= -s -w"
fi
# Build!

Some files were not shown because too many files have changed in this diff.