diff --git a/.gitignore b/.gitignore index 314611940..66ea31701 100644 --- a/.gitignore +++ b/.gitignore @@ -18,3 +18,4 @@ website/node_modules *.bak *~ .*.swp +.idea diff --git a/CHANGELOG.md b/CHANGELOG.md index 7d7d3c8a3..9a7d91318 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -3,31 +3,63 @@ FEATURES: * **New provider: `rundeck`** [GH-2412] + * **New provider: `packet`** [GH-2260], [GH-3472] + * **New provider: `vsphere`**: Initial support for a VM resource [GH-3419] * **New resource: `cloudstack_loadbalancer_rule`** [GH-2934] * **New resource: `google_compute_project_metadata`** [GH-3065] - * **New resources: `aws_ami`, `aws_ami_copy`, `aws_ami_from_instance`** [GH-2874] + * **New resources: `aws_ami`, `aws_ami_copy`, `aws_ami_from_instance`** [GH-2784] + * **New resource: `aws_cloudwatch_log_group`** [GH-2415] * **New resource: `google_storage_bucket_object`** [GH-3192] * **New resources: `google_compute_vpn_gateway`, `google_compute_vpn_tunnel`** [GH-3213] + * **New resources: `google_storage_bucket_acl`, `google_storage_object_acl`** [GH-3272] + * **New resource: `aws_iam_saml_provider`** [GH-3156] + * **New resources: `aws_efs_file_system` and `aws_efs_mount_target`** [GH-2196] + * **New resources: `aws_opsworks_*`** [GH-2162] + * **New resource: `aws_elasticsearch_domain`** [GH-3443] + * **New resource: `aws_directory_service_directory`** [GH-3228] + * **New resource: `aws_autoscaling_lifecycle_hook`** [GH-3351] + * **New resource: `aws_placement_group`** [GH-3457] + * **New resource: `aws_glacier_vault`** [GH-3491] + * **New lifecycle flag: `ignore_changes`** [GH-2525] IMPROVEMENTS: * core: Add a function to find the index of an element in a list. [GH-2704] * core: Print all outputs when `terraform output` is called with no arguments [GH-2920] * core: In plan output summary, count resource replacement as Add/Remove instead of Change [GH-3173] + * core: Add interpolation functions for base64 encoding and decoding. [GH-3325] + * core: Expose parallelism as a CLI option instead of hard-coding the default of 10 [GH-3365] + * core: Add interpolation function `compact`, to remove empty elements from a list. [GH-3239], [GH-3479] + * core: Allow filtering of log output by level, using e.g. ``TF_LOG=INFO`` [GH-3380] * provider/aws: Add `instance_initiated_shutdown_behavior` to AWS Instance [GH-2887] * provider/aws: Support IAM role names (previously just ARNs) in `aws_ecs_service.iam_role` [GH-3061] * provider/aws: Add update method to RDS Subnet groups, can modify subnets without recreating [GH-3053] * provider/aws: Paginate notifications returned for ASG Notifications [GH-3043] + * provider/aws: Add additional S3 Bucket Object inputs [GH-3265] * provider/aws: add `ses_smtp_password` to `aws_iam_access_key` [GH-3165] * provider/aws: read `iam_instance_profile` for `aws_instance` and save to state [GH-3167] + * provider/aws: allow `instance` to be computed in `aws_eip` [GH-3036] * provider/aws: Add `versioning` option to `aws_s3_bucket` [GH-2942] * provider/aws: Add `configuration_endpoint` to `aws_elasticache_cluster` [GH-3250] + * provider/aws: Add validation for `app_cookie_stickiness_policy.name` [GH-3277] + * provider/aws: Add validation for `db_parameter_group.name` [GH-3279] + * provider/aws: Set DynamoDB Table ARN after creation [GH-3500] + * provider/aws: `aws_s3_bucket_object` allows interpolated content to be set with new `content` attribute. [GH-3200] + * provider/aws: Allow tags for `aws_kinesis_stream` resource.
[GH-3397] + * provider/aws: Configurable capacity waiting duration for ASGs [GH-3191] + * provider/aws: Allow non-persistent Spot Requests [GH-3311] + * provider/aws: Support tags for AWS DB subnet group [GH-3138] * provider/cloudstack: Add `project` parameter to `cloudstack_vpc`, `cloudstack_network`, `cloudstack_ipaddress` and `cloudstack_disk` [GH-3035] + * provider/openstack: add functionality to attach FloatingIP to Port [GH-1788] + * provider/google: Can now do multi-region deployments without using multiple providers [GH-3258] + * remote/s3: Allow canned ACLs to be set on state objects. [GH-3233] + * remote/s3: Remote state is stored in S3 with `Content-Type: application/json` [GH-3385] BUG FIXES: * core: Fix problems referencing list attributes in interpolations [GH-2157] * core: don't error on computed value during input walk [GH-2988] + * core: Ignore missing variables during destroy phase [GH-3393] * provider/google: Crashes with interface conversion in GCE Instance Template [GH-3027] * provider/google: Convert int to int64 when building the GKE cluster.NodeConfig struct [GH-2978] * provider/google: google_compute_instance_template.network_interface.network should be a URL [GH-3226] @@ -40,11 +72,26 @@ BUG FIXES: by AWS [GH-3120] * provider/aws: Read instance source_dest_check and save to state [GH-3152] * provider/aws: Allow `weight = 0` in Route53 records [GH-3196] + * provider/aws: Normalize aws_elasticache_cluster id to lowercase, allowing convergence. [GH-3235] + * provider/aws: Fix ValidateAccountId for IAM Instance Profiles [GH-3313] + * provider/aws: Update Security Group Rules to Version 2 [GH-3019] + * provider/aws: Migrate KeyPair to version 1, fixing issue with using `file()` [GH-3470] + * provider/aws: Fix force_delete on autoscaling groups [GH-3485] + * provider/aws: Fix crash with VPC Peering connections [GH-3490] + * provider/docker: Fix issue preventing private images from being referenced [GH-2619] + * provider/digitalocean: Fix issue causing unnecessary diffs based on droplet slugsize case [GH-3284] * provider/openstack: add state 'downloading' to list of expected states in `blockstorage_volume_v1` creation [GH-2866] * provider/openstack: remove security groups (by name) before adding security groups (by id) [GH-2008] +INTERNAL IMPROVEMENTS: + + * core: Makefile target "plugin-dev" for building just one plugin. [GH-3229] + * helper/schema: Don't allow ``Update`` func if no attributes can actually be updated, per schema. [GH-3288] + * helper/schema: Default hashing function for sets [GH-3018] + * helper/multierror: Remove in favor of [github.com/hashicorp/go-multierror](http://github.com/hashicorp/go-multierror). [GH-3336] + ## 0.6.3 (August 11, 2015) BUG FIXES: diff --git a/Makefile b/Makefile index 548b3ed2b..e53106307 100644 --- a/Makefile +++ b/Makefile @@ -15,6 +15,12 @@ dev: generate quickdev: generate @TF_QUICKDEV=1 TF_DEV=1 sh -c "'$(CURDIR)/scripts/build.sh'" +# Shorthand for building and installing just one plugin for local testing. 
+# Run as (for example): make plugin-dev PLUGIN=provider-aws +plugin-dev: generate + go install github.com/hashicorp/terraform/builtin/bins/$(PLUGIN) + mv $(GOPATH)/bin/$(PLUGIN) $(GOPATH)/bin/terraform-$(PLUGIN) + release: updatedeps gox -build-toolchain @$(MAKE) bin diff --git a/builtin/bins/provider-packet/main.go b/builtin/bins/provider-packet/main.go new file mode 100644 index 000000000..6d8198ef2 --- /dev/null +++ b/builtin/bins/provider-packet/main.go @@ -0,0 +1,12 @@ +package main + +import ( + "github.com/hashicorp/terraform/builtin/providers/packet" + "github.com/hashicorp/terraform/plugin" +) + +func main() { + plugin.Serve(&plugin.ServeOpts{ + ProviderFunc: packet.Provider, + }) +} diff --git a/builtin/bins/provider-vsphere/main.go b/builtin/bins/provider-vsphere/main.go new file mode 100644 index 000000000..99dba9584 --- /dev/null +++ b/builtin/bins/provider-vsphere/main.go @@ -0,0 +1,12 @@ +package main + +import ( + "github.com/hashicorp/terraform/builtin/providers/vsphere" + "github.com/hashicorp/terraform/plugin" +) + +func main() { + plugin.Serve(&plugin.ServeOpts{ + ProviderFunc: vsphere.Provider, + }) +} diff --git a/builtin/bins/provider-vsphere/main_test.go b/builtin/bins/provider-vsphere/main_test.go new file mode 100644 index 000000000..06ab7d0f9 --- /dev/null +++ b/builtin/bins/provider-vsphere/main_test.go @@ -0,0 +1 @@ +package main diff --git a/builtin/providers/atlas/resource_artifact.go b/builtin/providers/atlas/resource_artifact.go index f4d264a8a..b9ed5aea0 100644 --- a/builtin/providers/atlas/resource_artifact.go +++ b/builtin/providers/atlas/resource_artifact.go @@ -19,7 +19,6 @@ func resourceArtifact() *schema.Resource { return &schema.Resource{ Create: resourceArtifactRead, Read: resourceArtifactRead, - Update: resourceArtifactRead, Delete: resourceArtifactDelete, Schema: map[string]*schema.Schema{ diff --git a/builtin/providers/aws/config.go b/builtin/providers/aws/config.go index a57c65c1b..f8f443b73 100644 --- a/builtin/providers/aws/config.go +++ b/builtin/providers/aws/config.go @@ -5,21 +5,27 @@ import ( "log" "strings" - "github.com/hashicorp/terraform/helper/multierror" + "github.com/hashicorp/go-multierror" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/aws/credentials" "github.com/aws/aws-sdk-go/service/autoscaling" "github.com/aws/aws-sdk-go/service/cloudwatch" + "github.com/aws/aws-sdk-go/service/cloudwatchlogs" + "github.com/aws/aws-sdk-go/service/directoryservice" "github.com/aws/aws-sdk-go/service/dynamodb" "github.com/aws/aws-sdk-go/service/ec2" "github.com/aws/aws-sdk-go/service/ecs" + "github.com/aws/aws-sdk-go/service/efs" "github.com/aws/aws-sdk-go/service/elasticache" + elasticsearch "github.com/aws/aws-sdk-go/service/elasticsearchservice" "github.com/aws/aws-sdk-go/service/elb" + "github.com/aws/aws-sdk-go/service/glacier" "github.com/aws/aws-sdk-go/service/iam" "github.com/aws/aws-sdk-go/service/kinesis" "github.com/aws/aws-sdk-go/service/lambda" + "github.com/aws/aws-sdk-go/service/opsworks" "github.com/aws/aws-sdk-go/service/rds" "github.com/aws/aws-sdk-go/service/route53" "github.com/aws/aws-sdk-go/service/s3" @@ -41,22 +47,28 @@ type Config struct { } type AWSClient struct { - cloudwatchconn *cloudwatch.CloudWatch - dynamodbconn *dynamodb.DynamoDB - ec2conn *ec2.EC2 - ecsconn *ecs.ECS - elbconn *elb.ELB - autoscalingconn *autoscaling.AutoScaling - s3conn *s3.S3 - sqsconn *sqs.SQS - snsconn *sns.SNS - r53conn *route53.Route53 - region string - rdsconn *rds.RDS - iamconn *iam.IAM 
- kinesisconn *kinesis.Kinesis - elasticacheconn *elasticache.ElastiCache - lambdaconn *lambda.Lambda + cloudwatchconn *cloudwatch.CloudWatch + cloudwatchlogsconn *cloudwatchlogs.CloudWatchLogs + dsconn *directoryservice.DirectoryService + dynamodbconn *dynamodb.DynamoDB + ec2conn *ec2.EC2 + ecsconn *ecs.ECS + efsconn *efs.EFS + elbconn *elb.ELB + esconn *elasticsearch.ElasticsearchService + autoscalingconn *autoscaling.AutoScaling + s3conn *s3.S3 + sqsconn *sqs.SQS + snsconn *sns.SNS + r53conn *route53.Route53 + region string + rdsconn *rds.RDS + iamconn *iam.IAM + kinesisconn *kinesis.Kinesis + elasticacheconn *elasticache.ElastiCache + lambdaconn *lambda.Lambda + opsworksconn *opsworks.OpsWorks + glacierconn *glacier.Glacier } // Client configures and returns a fully initialized AWSClient @@ -102,6 +114,16 @@ func (c *Config) Client() (interface{}, error) { MaxRetries: aws.Int(c.MaxRetries), Endpoint: aws.String(c.DynamoDBEndpoint), } + // Some services exist only in us-east-1, e.g. because they manage + // resources that can span across multiple regions, or because + // signature format v4 requires region to be us-east-1 for global + // endpoints: + // http://docs.aws.amazon.com/general/latest/gr/sigv4_changes.html + usEast1AwsConfig := &aws.Config{ + Credentials: creds, + Region: aws.String("us-east-1"), + MaxRetries: aws.Int(c.MaxRetries), + } log.Println("[INFO] Initializing DynamoDB connection") client.dynamodbconn = dynamodb.New(awsDynamoDBConfig) @@ -138,15 +160,14 @@ func (c *Config) Client() (interface{}, error) { log.Println("[INFO] Initializing ECS Connection") client.ecsconn = ecs.New(awsConfig) - // aws-sdk-go uses v4 for signing requests, which requires all global - // endpoints to use 'us-east-1'. - // See http://docs.aws.amazon.com/general/latest/gr/sigv4_changes.html + log.Println("[INFO] Initializing EFS Connection") + client.efsconn = efs.New(awsConfig) + + log.Println("[INFO] Initializing ElasticSearch Connection") + client.esconn = elasticsearch.New(awsConfig) + log.Println("[INFO] Initializing Route 53 connection") - client.r53conn = route53.New(&aws.Config{ - Credentials: creds, - Region: aws.String("us-east-1"), - MaxRetries: aws.Int(c.MaxRetries), - }) + client.r53conn = route53.New(usEast1AwsConfig) log.Println("[INFO] Initializing Elasticache Connection") client.elasticacheconn = elasticache.New(awsConfig) @@ -156,6 +177,18 @@ func (c *Config) Client() (interface{}, error) { log.Println("[INFO] Initializing CloudWatch SDK connection") client.cloudwatchconn = cloudwatch.New(awsConfig) + + log.Println("[INFO] Initializing CloudWatch Logs connection") + client.cloudwatchlogsconn = cloudwatchlogs.New(awsConfig) + + log.Println("[INFO] Initializing OpsWorks Connection") + client.opsworksconn = opsworks.New(usEast1AwsConfig) + + log.Println("[INFO] Initializing Directory Service connection") + client.dsconn = directoryservice.New(awsConfig) + + log.Println("[INFO] Initializing Glacier connection") + client.glacierconn = glacier.New(awsConfig) } if len(errs) > 0 { @@ -221,6 +254,7 @@ func (c *Config) ValidateAccountId(iamconn *iam.IAM) error { // User may be an IAM instance profile, so fail silently. 
// If it is an IAM instance profile // validating account might be superfluous + return nil } else { return fmt.Errorf("Failed getting account ID from IAM: %s", err) // return error if the account id is explicitly not authorised diff --git a/builtin/providers/aws/conversions.go b/builtin/providers/aws/conversions.go new file mode 100644 index 000000000..1b69cee06 --- /dev/null +++ b/builtin/providers/aws/conversions.go @@ -0,0 +1,33 @@ +package aws + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/hashicorp/terraform/helper/schema" +) + +func makeAwsStringList(in []interface{}) []*string { + ret := make([]*string, len(in), len(in)) + for i := 0; i < len(in); i++ { + ret[i] = aws.String(in[i].(string)) + } + return ret +} + +func makeAwsStringSet(in *schema.Set) []*string { + inList := in.List() + ret := make([]*string, len(inList), len(inList)) + for i := 0; i < len(ret); i++ { + ret[i] = aws.String(inList[i].(string)) + } + return ret +} + +func unwrapAwsStringList(in []*string) []string { + ret := make([]string, len(in), len(in)) + for i := 0; i < len(in); i++ { + if in[i] != nil { + ret[i] = *in[i] + } + } + return ret +} diff --git a/builtin/providers/aws/network_acl_entry.go b/builtin/providers/aws/network_acl_entry.go index 299c9f8c5..22b909bce 100644 --- a/builtin/providers/aws/network_acl_entry.go +++ b/builtin/providers/aws/network_acl_entry.go @@ -29,7 +29,7 @@ func expandNetworkAclEntries(configured []interface{}, entryType string) ([]*ec2 From: aws.Int64(int64(data["from_port"].(int))), To: aws.Int64(int64(data["to_port"].(int))), }, - Egress: aws.Bool((entryType == "egress")), + Egress: aws.Bool(entryType == "egress"), RuleAction: aws.String(data["action"].(string)), RuleNumber: aws.Int64(int64(data["rule_no"].(int))), CidrBlock: aws.String(data["cidr_block"].(string)), diff --git a/builtin/providers/aws/opsworks_layers.go b/builtin/providers/aws/opsworks_layers.go new file mode 100644 index 000000000..4ad9382eb --- /dev/null +++ b/builtin/providers/aws/opsworks_layers.go @@ -0,0 +1,558 @@ +package aws + +import ( + "fmt" + "log" + "strconv" + + "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/schema" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/opsworks" +) + +// OpsWorks has a single concept of "layer" which represents several different +// layer types. The differences between these are in some extra properties that +// get packed into an "Attributes" map, but in the OpsWorks UI these are presented +// as first-class options, and so Terraform prefers to expose them this way and +// hide the implementation detail that they are all packed into a single type +// in the underlying API. +// +// This file contains utilities that are shared between all of the concrete +// layer resource types, which have names matching aws_opsworks_*_layer.
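To make the shared-type design concrete, here is a minimal sketch (not part of the diff) of how a concrete layer resource is expected to wrap this shared type. The `custom` layer shown matches the `aws_opsworks_custom_layer` entry registered in provider.go below, but the exact field values are illustrative assumptions:

```go
// Hypothetical sketch: a concrete layer resource delegates its schema and
// CRUD implementation to the shared opsworksLayerType defined in this file.
// The "custom" layer carries no type-specific attributes, so its Attributes
// map is empty; other layer types would enumerate their extra OpsWorks
// attributes here as opsworksLayerTypeAttribute entries.
func resourceAwsOpsworksCustomLayer() *schema.Resource {
	layerType := &opsworksLayerType{
		TypeName:        "custom",
		CustomShortName: true, // custom layers take a user-supplied short_name
		Attributes:      map[string]*opsworksLayerTypeAttribute{},
	}

	return layerType.SchemaResource()
}
```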
+ +type opsworksLayerTypeAttribute struct { + AttrName string + Type schema.ValueType + Default interface{} + Required bool + WriteOnly bool +} + +type opsworksLayerType struct { + TypeName string + DefaultLayerName string + Attributes map[string]*opsworksLayerTypeAttribute + CustomShortName bool +} + +var ( + opsworksTrueString = "1" + opsworksFalseString = "0" +) + +func (lt *opsworksLayerType) SchemaResource() *schema.Resource { + resourceSchema := map[string]*schema.Schema{ + "id": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "auto_assign_elastic_ips": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + + "auto_assign_public_ips": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + + "custom_instance_profile_arn": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + + "custom_setup_recipes": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "custom_configure_recipes": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "custom_deploy_recipes": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "custom_undeploy_recipes": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "custom_shutdown_recipes": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "custom_security_group_ids": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + + "auto_healing": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + + "install_updates_on_boot": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + + "instance_shutdown_timeout": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Default: 120, + }, + + "drain_elb_on_shutdown": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + + "system_packages": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + + "stack_id": &schema.Schema{ + Type: schema.TypeString, + ForceNew: true, + Required: true, + }, + + "use_ebs_optimized_instances": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + + "ebs_volume": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + + "iops": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Default: 0, + }, + + "mount_point": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "number_of_disks": &schema.Schema{ + Type: schema.TypeInt, + Required: true, + }, + + "raid_level": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "", + }, + + "size": &schema.Schema{ + Type: schema.TypeInt, + Required: true, + }, + + "type": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "standard", + }, + }, + }, + Set: func(v interface{}) int { + m := v.(map[string]interface{}) + return hashcode.String(m["mount_point"].(string)) + }, + }, + } + + if lt.CustomShortName { + resourceSchema["short_name"] = &schema.Schema{ + Type: schema.TypeString, + Required: true, + } + } + + if 
lt.DefaultLayerName != "" { + resourceSchema["name"] = &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: lt.DefaultLayerName, + } + } else { + resourceSchema["name"] = &schema.Schema{ + Type: schema.TypeString, + Required: true, + } + } + + for key, def := range lt.Attributes { + resourceSchema[key] = &schema.Schema{ + Type: def.Type, + Default: def.Default, + Required: def.Required, + Optional: !def.Required, + } + } + + return &schema.Resource{ + Read: func(d *schema.ResourceData, meta interface{}) error { + client := meta.(*AWSClient).opsworksconn + return lt.Read(d, client) + }, + Create: func(d *schema.ResourceData, meta interface{}) error { + client := meta.(*AWSClient).opsworksconn + return lt.Create(d, client) + }, + Update: func(d *schema.ResourceData, meta interface{}) error { + client := meta.(*AWSClient).opsworksconn + return lt.Update(d, client) + }, + Delete: func(d *schema.ResourceData, meta interface{}) error { + client := meta.(*AWSClient).opsworksconn + return lt.Delete(d, client) + }, + + Schema: resourceSchema, + } +} + +func (lt *opsworksLayerType) Read(d *schema.ResourceData, client *opsworks.OpsWorks) error { + + req := &opsworks.DescribeLayersInput{ + LayerIds: []*string{ + aws.String(d.Id()), + }, + } + + log.Printf("[DEBUG] Reading OpsWorks layer: %s", d.Id()) + + resp, err := client.DescribeLayers(req) + if err != nil { + if awserr, ok := err.(awserr.Error); ok { + if awserr.Code() == "ResourceNotFoundException" { + d.SetId("") + return nil + } + } + return err + } + + layer := resp.Layers[0] + d.Set("id", layer.LayerId) + d.Set("auto_assign_elastic_ips", layer.AutoAssignElasticIps) + d.Set("auto_assign_public_ips", layer.AutoAssignPublicIps) + d.Set("custom_instance_profile_arn", layer.CustomInstanceProfileArn) + d.Set("custom_security_group_ids", unwrapAwsStringList(layer.CustomSecurityGroupIds)) + d.Set("auto_healing", layer.EnableAutoHealing) + d.Set("install_updates_on_boot", layer.InstallUpdatesOnBoot) + d.Set("name", layer.Name) + d.Set("system_packages", unwrapAwsStringList(layer.Packages)) + d.Set("stack_id", layer.StackId) + d.Set("use_ebs_optimized_instances", layer.UseEbsOptimizedInstances) + + if lt.CustomShortName { + d.Set("short_name", layer.Shortname) + } + + lt.SetAttributeMap(d, layer.Attributes) + lt.SetLifecycleEventConfiguration(d, layer.LifecycleEventConfiguration) + lt.SetCustomRecipes(d, layer.CustomRecipes) + lt.SetVolumeConfigurations(d, layer.VolumeConfigurations) + + return nil +} + +func (lt *opsworksLayerType) Create(d *schema.ResourceData, client *opsworks.OpsWorks) error { + + req := &opsworks.CreateLayerInput{ + AutoAssignElasticIps: aws.Bool(d.Get("auto_assign_elastic_ips").(bool)), + AutoAssignPublicIps: aws.Bool(d.Get("auto_assign_public_ips").(bool)), + CustomInstanceProfileArn: aws.String(d.Get("custom_instance_profile_arn").(string)), + CustomRecipes: lt.CustomRecipes(d), + CustomSecurityGroupIds: makeAwsStringSet(d.Get("custom_security_group_ids").(*schema.Set)), + EnableAutoHealing: aws.Bool(d.Get("auto_healing").(bool)), + InstallUpdatesOnBoot: aws.Bool(d.Get("install_updates_on_boot").(bool)), + LifecycleEventConfiguration: lt.LifecycleEventConfiguration(d), + Name: aws.String(d.Get("name").(string)), + Packages: makeAwsStringSet(d.Get("system_packages").(*schema.Set)), + Type: aws.String(lt.TypeName), + StackId: aws.String(d.Get("stack_id").(string)), + UseEbsOptimizedInstances: aws.Bool(d.Get("use_ebs_optimized_instances").(bool)), + Attributes: lt.AttributeMap(d), + VolumeConfigurations: 
lt.VolumeConfigurations(d), + } + + if lt.CustomShortName { + req.Shortname = aws.String(d.Get("short_name").(string)) + } else { + req.Shortname = aws.String(lt.TypeName) + } + + log.Printf("[DEBUG] Creating OpsWorks layer: %s", d.Id()) + + resp, err := client.CreateLayer(req) + if err != nil { + return err + } + + layerId := *resp.LayerId + d.SetId(layerId) + d.Set("id", layerId) + + return lt.Read(d, client) +} + +func (lt *opsworksLayerType) Update(d *schema.ResourceData, client *opsworks.OpsWorks) error { + + req := &opsworks.UpdateLayerInput{ + LayerId: aws.String(d.Id()), + AutoAssignElasticIps: aws.Bool(d.Get("auto_assign_elastic_ips").(bool)), + AutoAssignPublicIps: aws.Bool(d.Get("auto_assign_public_ips").(bool)), + CustomInstanceProfileArn: aws.String(d.Get("custom_instance_profile_arn").(string)), + CustomRecipes: lt.CustomRecipes(d), + CustomSecurityGroupIds: makeAwsStringSet(d.Get("custom_security_group_ids").(*schema.Set)), + EnableAutoHealing: aws.Bool(d.Get("auto_healing").(bool)), + InstallUpdatesOnBoot: aws.Bool(d.Get("install_updates_on_boot").(bool)), + LifecycleEventConfiguration: lt.LifecycleEventConfiguration(d), + Name: aws.String(d.Get("name").(string)), + Packages: makeAwsStringSet(d.Get("system_packages").(*schema.Set)), + UseEbsOptimizedInstances: aws.Bool(d.Get("use_ebs_optimized_instances").(bool)), + Attributes: lt.AttributeMap(d), + VolumeConfigurations: lt.VolumeConfigurations(d), + } + + if lt.CustomShortName { + req.Shortname = aws.String(d.Get("short_name").(string)) + } else { + req.Shortname = aws.String(lt.TypeName) + } + + log.Printf("[DEBUG] Updating OpsWorks layer: %s", d.Id()) + + _, err := client.UpdateLayer(req) + if err != nil { + return err + } + + return lt.Read(d, client) +} + +func (lt *opsworksLayerType) Delete(d *schema.ResourceData, client *opsworks.OpsWorks) error { + req := &opsworks.DeleteLayerInput{ + LayerId: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Deleting OpsWorks layer: %s", d.Id()) + + _, err := client.DeleteLayer(req) + return err +} + +func (lt *opsworksLayerType) AttributeMap(d *schema.ResourceData) map[string]*string { + attrs := map[string]*string{} + + for key, def := range lt.Attributes { + value := d.Get(key) + switch def.Type { + case schema.TypeString: + strValue := value.(string) + attrs[def.AttrName] = &strValue + case schema.TypeInt: + intValue := value.(int) + strValue := strconv.Itoa(intValue) + attrs[def.AttrName] = &strValue + case schema.TypeBool: + boolValue := value.(bool) + if boolValue { + attrs[def.AttrName] = &opsworksTrueString + } else { + attrs[def.AttrName] = &opsworksFalseString + } + default: + // should never happen + panic(fmt.Errorf("Unsupported OpsWorks layer attribute type")) + } + } + + return attrs +} + +func (lt *opsworksLayerType) SetAttributeMap(d *schema.ResourceData, attrs map[string]*string) { + for key, def := range lt.Attributes { + // Ignore write-only attributes; we'll just keep what we already have stored. + // (The AWS API returns garbage placeholder values for these.) 
+ if def.WriteOnly { + continue + } + + if strPtr, ok := attrs[def.AttrName]; ok && strPtr != nil { + strValue := *strPtr + + switch def.Type { + case schema.TypeString: + d.Set(key, strValue) + case schema.TypeInt: + intValue, err := strconv.Atoi(strValue) + if err == nil { + d.Set(key, intValue) + } else { + // Got garbage from the AWS API + d.Set(key, nil) + } + case schema.TypeBool: + boolValue := true + if strValue == opsworksFalseString { + boolValue = false + } + d.Set(key, boolValue) + default: + // should never happen + panic(fmt.Errorf("Unsupported OpsWorks layer attribute type")) + } + + } else { + d.Set(key, nil) + } + } +} + +func (lt *opsworksLayerType) LifecycleEventConfiguration(d *schema.ResourceData) *opsworks.LifecycleEventConfiguration { + return &opsworks.LifecycleEventConfiguration{ + Shutdown: &opsworks.ShutdownEventConfiguration{ + DelayUntilElbConnectionsDrained: aws.Bool(d.Get("drain_elb_on_shutdown").(bool)), + ExecutionTimeout: aws.Int64(int64(d.Get("instance_shutdown_timeout").(int))), + }, + } +} + +func (lt *opsworksLayerType) SetLifecycleEventConfiguration(d *schema.ResourceData, v *opsworks.LifecycleEventConfiguration) { + if v == nil || v.Shutdown == nil { + d.Set("drain_elb_on_shutdown", nil) + d.Set("instance_shutdown_timeout", nil) + } else { + d.Set("drain_elb_on_shutdown", v.Shutdown.DelayUntilElbConnectionsDrained) + d.Set("instance_shutdown_timeout", v.Shutdown.ExecutionTimeout) + } +} + +func (lt *opsworksLayerType) CustomRecipes(d *schema.ResourceData) *opsworks.Recipes { + return &opsworks.Recipes{ + Configure: makeAwsStringList(d.Get("custom_configure_recipes").([]interface{})), + Deploy: makeAwsStringList(d.Get("custom_deploy_recipes").([]interface{})), + Setup: makeAwsStringList(d.Get("custom_setup_recipes").([]interface{})), + Shutdown: makeAwsStringList(d.Get("custom_shutdown_recipes").([]interface{})), + Undeploy: makeAwsStringList(d.Get("custom_undeploy_recipes").([]interface{})), + } +} + +func (lt *opsworksLayerType) SetCustomRecipes(d *schema.ResourceData, v *opsworks.Recipes) { + // Null out everything first, and then we'll consider what to put back.
+ d.Set("custom_configure_recipes", nil) + d.Set("custom_deploy_recipes", nil) + d.Set("custom_setup_recipes", nil) + d.Set("custom_shutdown_recipes", nil) + d.Set("custom_undeploy_recipes", nil) + + if v == nil { + return + } + + d.Set("custom_configure_recipes", unwrapAwsStringList(v.Configure)) + d.Set("custom_deploy_recipes", unwrapAwsStringList(v.Deploy)) + d.Set("custom_setup_recipes", unwrapAwsStringList(v.Setup)) + d.Set("custom_shutdown_recipes", unwrapAwsStringList(v.Shutdown)) + d.Set("custom_undeploy_recipes", unwrapAwsStringList(v.Undeploy)) +} + +func (lt *opsworksLayerType) VolumeConfigurations(d *schema.ResourceData) []*opsworks.VolumeConfiguration { + configuredVolumes := d.Get("ebs_volume").(*schema.Set).List() + result := make([]*opsworks.VolumeConfiguration, len(configuredVolumes)) + + for i := 0; i < len(configuredVolumes); i++ { + volumeData := configuredVolumes[i].(map[string]interface{}) + + result[i] = &opsworks.VolumeConfiguration{ + MountPoint: aws.String(volumeData["mount_point"].(string)), + NumberOfDisks: aws.Int64(int64(volumeData["number_of_disks"].(int))), + Size: aws.Int64(int64(volumeData["size"].(int))), + VolumeType: aws.String(volumeData["type"].(string)), + } + iops := int64(volumeData["iops"].(int)) + if iops != 0 { + result[i].Iops = aws.Int64(iops) + } + + raidLevelStr := volumeData["raid_level"].(string) + if raidLevelStr != "" { + raidLevel, err := strconv.Atoi(raidLevelStr) + if err == nil { + result[i].RaidLevel = aws.Int64(int64(raidLevel)) + } + } + } + + return result +} + +func (lt *opsworksLayerType) SetVolumeConfigurations(d *schema.ResourceData, v []*opsworks.VolumeConfiguration) { + newValue := make([]*map[string]interface{}, len(v)) + + for i := 0; i < len(v); i++ { + config := v[i] + data := make(map[string]interface{}) + newValue[i] = &data + + if config.Iops != nil { + data["iops"] = int(*config.Iops) + } else { + data["iops"] = 0 + } + if config.MountPoint != nil { + data["mount_point"] = *config.MountPoint + } + if config.NumberOfDisks != nil { + data["number_of_disks"] = int(*config.NumberOfDisks) + } + if config.RaidLevel != nil { + data["raid_level"] = strconv.Itoa(int(*config.RaidLevel)) + } + if config.Size != nil { + data["size"] = int(*config.Size) + } + if config.VolumeType != nil { + data["type"] = *config.VolumeType + } + } + + d.Set("ebs_volume", newValue) +} diff --git a/builtin/providers/aws/provider.go b/builtin/providers/aws/provider.go index 6b2c16c7a..f73580d0f 100644 --- a/builtin/providers/aws/provider.go +++ b/builtin/providers/aws/provider.go @@ -163,24 +163,31 @@ func Provider() terraform.ResourceProvider { "aws_autoscaling_group": resourceAwsAutoscalingGroup(), "aws_autoscaling_notification": resourceAwsAutoscalingNotification(), "aws_autoscaling_policy": resourceAwsAutoscalingPolicy(), + "aws_cloudwatch_log_group": resourceAwsCloudWatchLogGroup(), + "aws_autoscaling_lifecycle_hook": resourceAwsAutoscalingLifecycleHook(), "aws_cloudwatch_metric_alarm": resourceAwsCloudWatchMetricAlarm(), "aws_customer_gateway": resourceAwsCustomerGateway(), "aws_db_instance": resourceAwsDbInstance(), "aws_db_parameter_group": resourceAwsDbParameterGroup(), "aws_db_security_group": resourceAwsDbSecurityGroup(), "aws_db_subnet_group": resourceAwsDbSubnetGroup(), + "aws_directory_service_directory": resourceAwsDirectoryServiceDirectory(), "aws_dynamodb_table": resourceAwsDynamoDbTable(), "aws_ebs_volume": resourceAwsEbsVolume(), "aws_ecs_cluster": resourceAwsEcsCluster(), "aws_ecs_service": resourceAwsEcsService(), 
"aws_ecs_task_definition": resourceAwsEcsTaskDefinition(), + "aws_efs_file_system": resourceAwsEfsFileSystem(), + "aws_efs_mount_target": resourceAwsEfsMountTarget(), "aws_eip": resourceAwsEip(), "aws_elasticache_cluster": resourceAwsElasticacheCluster(), "aws_elasticache_parameter_group": resourceAwsElasticacheParameterGroup(), "aws_elasticache_security_group": resourceAwsElasticacheSecurityGroup(), "aws_elasticache_subnet_group": resourceAwsElasticacheSubnetGroup(), + "aws_elasticsearch_domain": resourceAwsElasticSearchDomain(), "aws_elb": resourceAwsElb(), "aws_flow_log": resourceAwsFlowLog(), + "aws_glacier_vault": resourceAwsGlacierVault(), "aws_iam_access_key": resourceAwsIamAccessKey(), "aws_iam_group_policy": resourceAwsIamGroupPolicy(), "aws_iam_group": resourceAwsIamGroup(), @@ -190,6 +197,7 @@ func Provider() terraform.ResourceProvider { "aws_iam_policy_attachment": resourceAwsIamPolicyAttachment(), "aws_iam_role_policy": resourceAwsIamRolePolicy(), "aws_iam_role": resourceAwsIamRole(), + "aws_iam_saml_provider": resourceAwsIamSamlProvider(), "aws_iam_server_certificate": resourceAwsIAMServerCertificate(), "aws_iam_user_policy": resourceAwsIamUserPolicy(), "aws_iam_user": resourceAwsIamUser(), @@ -203,7 +211,21 @@ func Provider() terraform.ResourceProvider { "aws_main_route_table_association": resourceAwsMainRouteTableAssociation(), "aws_network_acl": resourceAwsNetworkAcl(), "aws_network_interface": resourceAwsNetworkInterface(), + "aws_opsworks_stack": resourceAwsOpsworksStack(), + "aws_opsworks_java_app_layer": resourceAwsOpsworksJavaAppLayer(), + "aws_opsworks_haproxy_layer": resourceAwsOpsworksHaproxyLayer(), + "aws_opsworks_static_web_layer": resourceAwsOpsworksStaticWebLayer(), + "aws_opsworks_php_app_layer": resourceAwsOpsworksPhpAppLayer(), + "aws_opsworks_rails_app_layer": resourceAwsOpsworksRailsAppLayer(), + "aws_opsworks_nodejs_app_layer": resourceAwsOpsworksNodejsAppLayer(), + "aws_opsworks_memcached_layer": resourceAwsOpsworksMemcachedLayer(), + "aws_opsworks_mysql_layer": resourceAwsOpsworksMysqlLayer(), + "aws_opsworks_ganglia_layer": resourceAwsOpsworksGangliaLayer(), + "aws_opsworks_custom_layer": resourceAwsOpsworksCustomLayer(), + "aws_placement_group": resourceAwsPlacementGroup(), "aws_proxy_protocol_policy": resourceAwsProxyProtocolPolicy(), + "aws_rds_cluster": resourceAwsRDSCluster(), + "aws_rds_cluster_instance": resourceAwsRDSClusterInstance(), "aws_route53_delegation_set": resourceAwsRoute53DelegationSet(), "aws_route53_record": resourceAwsRoute53Record(), "aws_route53_zone_association": resourceAwsRoute53ZoneAssociation(), diff --git a/builtin/providers/aws/resource_aws_ami.go b/builtin/providers/aws/resource_aws_ami.go index ec3ce73b9..621881036 100644 --- a/builtin/providers/aws/resource_aws_ami.go +++ b/builtin/providers/aws/resource_aws_ami.go @@ -130,7 +130,7 @@ func resourceAwsAmiRead(d *schema.ResourceData, meta interface{}) error { } image := res.Images[0] - state := *(image.State) + state := *image.State if state == "pending" { // This could happen if a user manually adds an image we didn't create @@ -142,7 +142,7 @@ func resourceAwsAmiRead(d *schema.ResourceData, meta interface{}) error { if err != nil { return err } - state = *(image.State) + state = *image.State } if state == "deregistered" { @@ -170,22 +170,22 @@ func resourceAwsAmiRead(d *schema.ResourceData, meta interface{}) error { for _, blockDev := range image.BlockDeviceMappings { if blockDev.Ebs != nil { ebsBlockDev := map[string]interface{}{ - "device_name": 
*(blockDev.DeviceName), - "delete_on_termination": *(blockDev.Ebs.DeleteOnTermination), - "encrypted": *(blockDev.Ebs.Encrypted), + "device_name": *blockDev.DeviceName, + "delete_on_termination": *blockDev.Ebs.DeleteOnTermination, + "encrypted": *blockDev.Ebs.Encrypted, "iops": 0, - "snapshot_id": *(blockDev.Ebs.SnapshotId), - "volume_size": int(*(blockDev.Ebs.VolumeSize)), - "volume_type": *(blockDev.Ebs.VolumeType), + "snapshot_id": *blockDev.Ebs.SnapshotId, + "volume_size": int(*blockDev.Ebs.VolumeSize), + "volume_type": *blockDev.Ebs.VolumeType, } if blockDev.Ebs.Iops != nil { - ebsBlockDev["iops"] = int(*(blockDev.Ebs.Iops)) + ebsBlockDev["iops"] = int(*blockDev.Ebs.Iops) } ebsBlockDevs = append(ebsBlockDevs, ebsBlockDev) } else { ephemeralBlockDevs = append(ephemeralBlockDevs, map[string]interface{}{ - "device_name": *(blockDev.DeviceName), - "virtual_name": *(blockDev.VirtualName), + "device_name": *blockDev.DeviceName, + "virtual_name": *blockDev.VirtualName, }) } } @@ -301,7 +301,7 @@ func resourceAwsAmiWaitForAvailable(id string, client *ec2.EC2) (*ec2.Image, err return nil, fmt.Errorf("new AMI vanished while pending") } - state := *(res.Images[0].State) + state := *res.Images[0].State if state == "pending" { // Give it a few seconds before we poll again. @@ -316,7 +316,7 @@ func resourceAwsAmiWaitForAvailable(id string, client *ec2.EC2) (*ec2.Image, err // If we're not pending or available then we're in one of the invalid/error // states, so stop polling and bail out. - stateReason := *(res.Images[0].StateReason) + stateReason := *res.Images[0].StateReason return nil, fmt.Errorf("new AMI became %s while pending: %s", state, stateReason) } } diff --git a/builtin/providers/aws/resource_aws_app_cookie_stickiness_policy.go b/builtin/providers/aws/resource_aws_app_cookie_stickiness_policy.go index 3f7e1bf7f..0fe85f9e9 100644 --- a/builtin/providers/aws/resource_aws_app_cookie_stickiness_policy.go +++ b/builtin/providers/aws/resource_aws_app_cookie_stickiness_policy.go @@ -2,6 +2,7 @@ package aws import ( "fmt" + "regexp" "strings" "github.com/aws/aws-sdk-go/aws" @@ -15,8 +16,6 @@ func resourceAwsAppCookieStickinessPolicy() *schema.Resource { // There is no concept of "updating" an App Stickiness policy in // the AWS API. 
Create: resourceAwsAppCookieStickinessPolicyCreate, - Update: resourceAwsAppCookieStickinessPolicyCreate, - Read: resourceAwsAppCookieStickinessPolicyRead, Delete: resourceAwsAppCookieStickinessPolicyDelete, @@ -25,6 +24,14 @@ func resourceAwsAppCookieStickinessPolicy() *schema.Resource { Type: schema.TypeString, Required: true, ForceNew: true, + ValidateFunc: func(v interface{}, k string) (ws []string, es []error) { + value := v.(string) + if !regexp.MustCompile(`^[0-9A-Za-z-]+$`).MatchString(value) { + es = append(es, fmt.Errorf( + "only alphanumeric characters and hyphens allowed in %q", k)) + } + return + }, }, "load_balancer": &schema.Schema{ diff --git a/builtin/providers/aws/resource_aws_autoscaling_group.go b/builtin/providers/aws/resource_aws_autoscaling_group.go index 771bda2e3..43926f103 100644 --- a/builtin/providers/aws/resource_aws_autoscaling_group.go +++ b/builtin/providers/aws/resource_aws_autoscaling_group.go @@ -73,8 +73,7 @@ func resourceAwsAutoscalingGroup() *schema.Resource { "force_delete": &schema.Schema{ Type: schema.TypeBool, Optional: true, - Computed: true, - ForceNew: true, + Default: false, }, "health_check_grace_period": &schema.Schema{ @@ -120,6 +119,25 @@ func resourceAwsAutoscalingGroup() *schema.Resource { Set: schema.HashString, }, + "wait_for_capacity_timeout": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "10m", + ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + duration, err := time.ParseDuration(value) + if err != nil { + errors = append(errors, fmt.Errorf( + "%q cannot be parsed as a duration: %s", k, err)) + } + if duration < 0 { + errors = append(errors, fmt.Errorf( + "%q must be greater than zero", k)) + } + return + }, + }, + "tag": autoscalingTagsSchema(), }, } @@ -334,15 +352,9 @@ func resourceAwsAutoscalingGroupDelete(d *schema.ResourceData, meta interface{}) } log.Printf("[DEBUG] AutoScaling Group destroy: %v", d.Id()) - deleteopts := autoscaling.DeleteAutoScalingGroupInput{AutoScalingGroupName: aws.String(d.Id())} - - // You can force an autoscaling group to delete - // even if it's in the process of scaling a resource. - // Normally, you would set the min-size and max-size to 0,0 - // and then delete the group. This bypasses that and leaves - // resources potentially dangling. - if d.Get("force_delete").(bool) { - deleteopts.ForceDelete = aws.Bool(true) + deleteopts := autoscaling.DeleteAutoScalingGroupInput{ + AutoScalingGroupName: aws.String(d.Id()), + ForceDelete: aws.Bool(d.Get("force_delete").(bool)), } // We retry the delete operation to handle InUse/InProgress errors coming @@ -414,6 +426,11 @@ func getAwsAutoscalingGroup( func resourceAwsAutoscalingGroupDrain(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).autoscalingconn + if d.Get("force_delete").(bool) { + log.Printf("[DEBUG] Skipping ASG drain, force_delete was set.") + return nil + } + // First, set the capacity to zero so the group will drain log.Printf("[DEBUG] Reducing autoscaling group capacity to zero") opts := autoscaling.UpdateAutoScalingGroupInput{ @@ -445,8 +462,6 @@ func resourceAwsAutoscalingGroupDrain(d *schema.ResourceData, meta interface{}) }) } -var waitForASGCapacityTimeout = 10 * time.Minute - // Waits for a minimum number of healthy instances to show up as healthy in the // ASG before continuing. Waits up to `waitForASGCapacityTimeout` for // "desired_capacity", or "min_size" if desired capacity is not specified. 
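The hunk that follows replaces the hard-coded 10-minute cap with the user-supplied `wait_for_capacity_timeout`, parsed with `time.ParseDuration` and with a zero duration meaning "skip waiting entirely". A standalone sketch of that parse-and-skip logic (the function name and test harness are illustrative, not from the diff):

```go
package main

import (
	"fmt"
	"time"
)

// capacityWait mirrors the pattern used in waitForASGCapacity: parse the
// user-supplied duration string, and treat a zero duration as a request to
// skip capacity waiting entirely.
func capacityWait(raw string) (wait time.Duration, skip bool, err error) {
	wait, err = time.ParseDuration(raw)
	if err != nil {
		return 0, false, fmt.Errorf("%q cannot be parsed as a duration: %s", raw, err)
	}
	return wait, wait == 0, nil
}

func main() {
	for _, raw := range []string{"10m", "0", "90s"} {
		wait, skip, err := capacityWait(raw)
		fmt.Printf("raw=%q wait=%v skip=%v err=%v\n", raw, wait, skip, err)
	}
}
```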
@@ -461,9 +476,20 @@ func waitForASGCapacity(d *schema.ResourceData, meta interface{}) error { } wantELB := d.Get("min_elb_capacity").(int) - log.Printf("[DEBUG] Waiting for capacity: %d ASG, %d ELB", wantASG, wantELB) + wait, err := time.ParseDuration(d.Get("wait_for_capacity_timeout").(string)) + if err != nil { + return err + } - return resource.Retry(waitForASGCapacityTimeout, func() error { + if wait == 0 { + log.Printf("[DEBUG] Capacity timeout set to 0, skipping capacity waiting.") + return nil + } + + log.Printf("[DEBUG] Waiting %s for capacity: %d ASG, %d ELB", + wait, wantASG, wantELB) + + return resource.Retry(wait, func() error { g, err := getAwsAutoscalingGroup(d, meta) if err != nil { return resource.RetryError{Err: err} diff --git a/builtin/providers/aws/resource_aws_autoscaling_lifecycle_hook.go b/builtin/providers/aws/resource_aws_autoscaling_lifecycle_hook.go new file mode 100644 index 000000000..faacadb7a --- /dev/null +++ b/builtin/providers/aws/resource_aws_autoscaling_lifecycle_hook.go @@ -0,0 +1,175 @@ +package aws + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/autoscaling" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsAutoscalingLifecycleHook() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsAutoscalingLifecycleHookPut, + Read: resourceAwsAutoscalingLifecycleHookRead, + Update: resourceAwsAutoscalingLifecycleHookPut, + Delete: resourceAwsAutoscalingLifecycleHookDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "autoscaling_group_name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + "default_result": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + "heartbeat_timeout": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + }, + "lifecycle_transition": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + "notification_metadata": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + "notification_target_arn": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + "role_arn": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + }, + } +} + +func resourceAwsAutoscalingLifecycleHookPut(d *schema.ResourceData, meta interface{}) error { + autoscalingconn := meta.(*AWSClient).autoscalingconn + + params := getAwsAutoscalingPutLifecycleHookInput(d) + + log.Printf("[DEBUG] AutoScaling PutLifecycleHook: %#v", params) + _, err := autoscalingconn.PutLifecycleHook(&params) + if err != nil { + return fmt.Errorf("Error putting lifecycle hook: %s", err) + } + + d.SetId(d.Get("name").(string)) + + return resourceAwsAutoscalingLifecycleHookRead(d, meta) +} + +func resourceAwsAutoscalingLifecycleHookRead(d *schema.ResourceData, meta interface{}) error { + p, err := getAwsAutoscalingLifecycleHook(d, meta) + if err != nil { + return err + } + if p == nil { + d.SetId("") + return nil + } + + log.Printf("[DEBUG] Read Lifecycle Hook: ASG: %s, SH: %s, Obj: %#v", d.Get("autoscaling_group_name"), d.Get("name"), p) + + d.Set("default_result", p.DefaultResult) + d.Set("heartbeat_timeout", p.HeartbeatTimeout) + d.Set("lifecycle_transition", p.LifecycleTransition) + d.Set("notification_metadata", p.NotificationMetadata) + d.Set("notification_target_arn", p.NotificationTargetARN) + d.Set("name", p.LifecycleHookName) + d.Set("role_arn", p.RoleARN) + + return nil +} + +func resourceAwsAutoscalingLifecycleHookDelete(d
*schema.ResourceData, meta interface{}) error { + autoscalingconn := meta.(*AWSClient).autoscalingconn + p, err := getAwsAutoscalingLifecycleHook(d, meta) + if err != nil { + return err + } + if p == nil { + return nil + } + + params := autoscaling.DeleteLifecycleHookInput{ + AutoScalingGroupName: aws.String(d.Get("autoscaling_group_name").(string)), + LifecycleHookName: aws.String(d.Get("name").(string)), + } + if _, err := autoscalingconn.DeleteLifecycleHook(&params); err != nil { + return fmt.Errorf("Autoscaling Lifecycle Hook: %s ", err) + } + + d.SetId("") + return nil +} + +func getAwsAutoscalingPutLifecycleHookInput(d *schema.ResourceData) autoscaling.PutLifecycleHookInput { + var params = autoscaling.PutLifecycleHookInput{ + AutoScalingGroupName: aws.String(d.Get("autoscaling_group_name").(string)), + LifecycleHookName: aws.String(d.Get("name").(string)), + } + + if v, ok := d.GetOk("default_result"); ok { + params.DefaultResult = aws.String(v.(string)) + } + + if v, ok := d.GetOk("heartbeat_timeout"); ok { + params.HeartbeatTimeout = aws.Int64(int64(v.(int))) + } + + if v, ok := d.GetOk("lifecycle_transition"); ok { + params.LifecycleTransition = aws.String(v.(string)) + } + + if v, ok := d.GetOk("notification_metadata"); ok { + params.NotificationMetadata = aws.String(v.(string)) + } + + if v, ok := d.GetOk("notification_target_arn"); ok { + params.NotificationTargetARN = aws.String(v.(string)) + } + + if v, ok := d.GetOk("role_arn"); ok { + params.RoleARN = aws.String(v.(string)) + } + + return params +} + +func getAwsAutoscalingLifecycleHook(d *schema.ResourceData, meta interface{}) (*autoscaling.LifecycleHook, error) { + autoscalingconn := meta.(*AWSClient).autoscalingconn + + params := autoscaling.DescribeLifecycleHooksInput{ + AutoScalingGroupName: aws.String(d.Get("autoscaling_group_name").(string)), + LifecycleHookNames: []*string{aws.String(d.Get("name").(string))}, + } + + log.Printf("[DEBUG] AutoScaling Lifecycle Hook Describe Params: %#v", params) + resp, err := autoscalingconn.DescribeLifecycleHooks(&params) + if err != nil { + return nil, fmt.Errorf("Error retrieving lifecycle hooks: %s", err) + } + + // find lifecycle hooks + name := d.Get("name") + for idx, sp := range resp.LifecycleHooks { + if *sp.LifecycleHookName == name { + return resp.LifecycleHooks[idx], nil + } + } + + // lifecycle hook not found + return nil, nil +} diff --git a/builtin/providers/aws/resource_aws_autoscaling_lifecycle_hook_test.go b/builtin/providers/aws/resource_aws_autoscaling_lifecycle_hook_test.go new file mode 100644 index 000000000..f425570e9 --- /dev/null +++ b/builtin/providers/aws/resource_aws_autoscaling_lifecycle_hook_test.go @@ -0,0 +1,168 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/autoscaling" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSAutoscalingLifecycleHook_basic(t *testing.T) { + var hook autoscaling.LifecycleHook + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAutoscalingLifecycleHookDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSAutoscalingLifecycleHookConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckLifecycleHookExists("aws_autoscaling_lifecycle_hook.foobar", &hook), + resource.TestCheckResourceAttr("aws_autoscaling_lifecycle_hook.foobar", "autoscaling_group_name",
"terraform-test-foobar5"), + resource.TestCheckResourceAttr("aws_autoscaling_lifecycle_hook.foobar", "default_result", "CONTINUE"), + resource.TestCheckResourceAttr("aws_autoscaling_lifecycle_hook.foobar", "heartbeat_timeout", "2000"), + resource.TestCheckResourceAttr("aws_autoscaling_lifecycle_hook.foobar", "lifecycle_transition", "autoscaling:EC2_INSTANCE_LAUNCHING"), + ), + }, + }, + }) +} + +func testAccCheckLifecycleHookExists(n string, hook *autoscaling.LifecycleHook) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + rs = rs + return fmt.Errorf("Not found: %s", n) + } + + conn := testAccProvider.Meta().(*AWSClient).autoscalingconn + params := &autoscaling.DescribeLifecycleHooksInput{ + AutoScalingGroupName: aws.String(rs.Primary.Attributes["autoscaling_group_name"]), + LifecycleHookNames: []*string{aws.String(rs.Primary.ID)}, + } + resp, err := conn.DescribeLifecycleHooks(params) + if err != nil { + return err + } + if len(resp.LifecycleHooks) == 0 { + return fmt.Errorf("LifecycleHook not found") + } + + return nil + } +} + +func testAccCheckAWSAutoscalingLifecycleHookDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).autoscalingconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_autoscaling_group" { + continue + } + + params := autoscaling.DescribeLifecycleHooksInput{ + AutoScalingGroupName: aws.String(rs.Primary.Attributes["autoscaling_group_name"]), + LifecycleHookNames: []*string{aws.String(rs.Primary.ID)}, + } + + resp, err := conn.DescribeLifecycleHooks(¶ms) + + if err == nil { + if len(resp.LifecycleHooks) != 0 && + *resp.LifecycleHooks[0].LifecycleHookName == rs.Primary.ID { + return fmt.Errorf("Lifecycle Hook Still Exists: %s", rs.Primary.ID) + } + } + } + + return nil +} + +var testAccAWSAutoscalingLifecycleHookConfig = fmt.Sprintf(` +resource "aws_launch_configuration" "foobar" { + name = "terraform-test-foobar5" + image_id = "ami-21f78e11" + instance_type = "t1.micro" +} + +resource "aws_sqs_queue" "foobar" { + name = "foobar" + delay_seconds = 90 + max_message_size = 2048 + message_retention_seconds = 86400 + receive_wait_time_seconds = 10 +} + +resource "aws_iam_role" "foobar" { + name = "foobar" + assume_role_policy = < 255 { + errors = append(errors, fmt.Errorf( + "%q cannot be greater than 255 characters", k)) + } + return + +} diff --git a/builtin/providers/aws/resource_aws_db_parameter_group_test.go b/builtin/providers/aws/resource_aws_db_parameter_group_test.go index 93e74bb74..d0042df23 100644 --- a/builtin/providers/aws/resource_aws_db_parameter_group_test.go +++ b/builtin/providers/aws/resource_aws_db_parameter_group_test.go @@ -2,7 +2,9 @@ package aws import ( "fmt" + "math/rand" "testing" + "time" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" @@ -106,6 +108,46 @@ func TestAccAWSDBParameterGroupOnly(t *testing.T) { }) } +func TestResourceAWSDBParameterGroupName_validation(t *testing.T) { + cases := []struct { + Value string + ErrCount int + }{ + { + Value: "tEsting123", + ErrCount: 1, + }, + { + Value: "testing123!", + ErrCount: 1, + }, + { + Value: "1testing123", + ErrCount: 1, + }, + { + Value: "testing--123", + ErrCount: 1, + }, + { + Value: "testing123-", + ErrCount: 1, + }, + { + Value: randomString(256), + ErrCount: 1, + }, + } + + for _, tc := range cases { + _, errors := validateDbParamGroupName(tc.Value, "aws_db_parameter_group_name") + + if len(errors) != tc.ErrCount { + t.Fatalf("Expected the DB 
Parameter Group Name to trigger a validation error") + } + } +} + func testAccCheckAWSDBParameterGroupDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).rdsconn @@ -193,6 +235,16 @@ func testAccCheckAWSDBParameterGroupExists(n string, v *rds.DBParameterGroup) re } } +func randomString(strlen int) string { + rand.Seed(time.Now().UTC().UnixNano()) + const chars = "abcdefghijklmnopqrstuvwxyz" + result := make([]byte, strlen) + for i := 0; i < strlen; i++ { + result[i] = chars[rand.Intn(len(chars))] + } + return string(result) +} + const testAccAWSDBParameterGroupConfig = ` resource "aws_db_parameter_group" "bar" { name = "parameter-group-test-terraform" diff --git a/builtin/providers/aws/resource_aws_db_security_group.go b/builtin/providers/aws/resource_aws_db_security_group.go index 6932fc971..367400ae7 100644 --- a/builtin/providers/aws/resource_aws_db_security_group.go +++ b/builtin/providers/aws/resource_aws_db_security_group.go @@ -9,8 +9,8 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/rds" + "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform/helper/hashcode" - "github.com/hashicorp/terraform/helper/multierror" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" ) diff --git a/builtin/providers/aws/resource_aws_db_subnet_group.go b/builtin/providers/aws/resource_aws_db_subnet_group.go index 9c09b72d7..e6b17ea1f 100644 --- a/builtin/providers/aws/resource_aws_db_subnet_group.go +++ b/builtin/providers/aws/resource_aws_db_subnet_group.go @@ -9,6 +9,7 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/iam" "github.com/aws/aws-sdk-go/service/rds" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" @@ -56,12 +57,15 @@ func resourceAwsDbSubnetGroup() *schema.Resource { Elem: &schema.Schema{Type: schema.TypeString}, Set: schema.HashString, }, + + "tags": tagsSchema(), }, } } func resourceAwsDbSubnetGroupCreate(d *schema.ResourceData, meta interface{}) error { rdsconn := meta.(*AWSClient).rdsconn + tags := tagsFromMapRDS(d.Get("tags").(map[string]interface{})) subnetIdsSet := d.Get("subnet_ids").(*schema.Set) subnetIds := make([]*string, subnetIdsSet.Len()) @@ -73,6 +77,7 @@ func resourceAwsDbSubnetGroupCreate(d *schema.ResourceData, meta interface{}) er DBSubnetGroupName: aws.String(d.Get("name").(string)), DBSubnetGroupDescription: aws.String(d.Get("description").(string)), SubnetIds: subnetIds, + Tags: tags, } log.Printf("[DEBUG] Create DB Subnet Group: %#v", createOpts) @@ -130,6 +135,28 @@ func resourceAwsDbSubnetGroupRead(d *schema.ResourceData, meta interface{}) erro } d.Set("subnet_ids", subnets) + // list tags for resource + // set tags + conn := meta.(*AWSClient).rdsconn + arn, err := buildRDSsubgrpARN(d, meta) + if err != nil { + log.Printf("[DEBUG] Error building ARN for DB Subnet Group, not setting Tags for group %s", *subnetGroup.DBSubnetGroupName) + } else { + resp, err := conn.ListTagsForResource(&rds.ListTagsForResourceInput{ + ResourceName: aws.String(arn), + }) + + if err != nil { + log.Printf("[DEBUG] Error retrieving tags for ARN: %s", arn) + } + + var dt []*rds.Tag + if len(resp.TagList) > 0 { + dt = resp.TagList + } + d.Set("tags", tagsToMapRDS(dt)) + } + return nil } @@ -156,6 +183,15 @@ func resourceAwsDbSubnetGroupUpdate(d *schema.ResourceData, meta interface{}) er return err } } + + if arn, err :=
buildRDSsubgrpARN(d, meta); err == nil { + if err := setTagsRDS(conn, d, arn); err != nil { + return err + } else { + d.SetPartial("tags") + } + } + return resourceAwsDbSubnetGroupRead(d, meta) } @@ -196,3 +232,17 @@ func resourceAwsDbSubnetGroupDeleteRefreshFunc( return d, "destroyed", nil } } + +func buildRDSsubgrpARN(d *schema.ResourceData, meta interface{}) (string, error) { + iamconn := meta.(*AWSClient).iamconn + region := meta.(*AWSClient).region + // A zero value GetUserInput{} defers to the currently logged in user + resp, err := iamconn.GetUser(&iam.GetUserInput{}) + if err != nil { + return "", err + } + userARN := *resp.User.Arn + accountID := strings.Split(userARN, ":")[4] + arn := fmt.Sprintf("arn:aws:rds:%s:%s:subgrp:%s", region, accountID, d.Id()) + return arn, nil +} diff --git a/builtin/providers/aws/resource_aws_db_subnet_group_test.go b/builtin/providers/aws/resource_aws_db_subnet_group_test.go index cbf1f8497..e189b1e21 100644 --- a/builtin/providers/aws/resource_aws_db_subnet_group_test.go +++ b/builtin/providers/aws/resource_aws_db_subnet_group_test.go @@ -150,6 +150,9 @@ resource "aws_db_subnet_group" "foo" { name = "FOO" description = "foo description" subnet_ids = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}"] + tags { + Name = "tf-dbsubnet-group-test" + } } ` diff --git a/builtin/providers/aws/resource_aws_directory_service_directory.go b/builtin/providers/aws/resource_aws_directory_service_directory.go new file mode 100644 index 000000000..1fdb9491e --- /dev/null +++ b/builtin/providers/aws/resource_aws_directory_service_directory.go @@ -0,0 +1,291 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/hashicorp/terraform/helper/schema" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/directoryservice" + "github.com/hashicorp/terraform/helper/resource" +) + +func resourceAwsDirectoryServiceDirectory() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsDirectoryServiceDirectoryCreate, + Read: resourceAwsDirectoryServiceDirectoryRead, + Update: resourceAwsDirectoryServiceDirectoryUpdate, + Delete: resourceAwsDirectoryServiceDirectoryDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "password": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "size": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "alias": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + "description": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "short_name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + "vpc_settings": &schema.Schema{ + Type: schema.TypeList, + Required: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "subnet_ids": &schema.Schema{ + Type: schema.TypeSet, + Required: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + "vpc_id": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + }, + }, + }, + "enable_sso": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "access_url": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + "dns_ip_addresses": &schema.Schema{ + Type: schema.TypeSet, + Elem: &schema.Schema{Type: schema.TypeString}, + Set:
schema.HashString, + Computed: true, + }, + "type": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func resourceAwsDirectoryServiceDirectoryCreate(d *schema.ResourceData, meta interface{}) error { + dsconn := meta.(*AWSClient).dsconn + + input := directoryservice.CreateDirectoryInput{ + Name: aws.String(d.Get("name").(string)), + Password: aws.String(d.Get("password").(string)), + Size: aws.String(d.Get("size").(string)), + } + + if v, ok := d.GetOk("description"); ok { + input.Description = aws.String(v.(string)) + } + if v, ok := d.GetOk("short_name"); ok { + input.ShortName = aws.String(v.(string)) + } + + if v, ok := d.GetOk("vpc_settings"); ok { + settings := v.([]interface{}) + + if len(settings) > 1 { + return fmt.Errorf("Only a single vpc_settings block is expected") + } else if len(settings) == 1 { + s := settings[0].(map[string]interface{}) + var subnetIds []*string + for _, id := range s["subnet_ids"].(*schema.Set).List() { + subnetIds = append(subnetIds, aws.String(id.(string))) + } + + vpcSettings := directoryservice.DirectoryVpcSettings{ + SubnetIds: subnetIds, + VpcId: aws.String(s["vpc_id"].(string)), + } + input.VpcSettings = &vpcSettings + } + } + + log.Printf("[DEBUG] Creating Directory Service: %s", input) + out, err := dsconn.CreateDirectory(&input) + if err != nil { + return err + } + log.Printf("[DEBUG] Directory Service created: %s", out) + d.SetId(*out.DirectoryId) + + // Wait for creation + log.Printf("[DEBUG] Waiting for DS (%q) to become available", d.Id()) + stateConf := &resource.StateChangeConf{ + Pending: []string{"Requested", "Creating", "Created"}, + Target: "Active", + Refresh: func() (interface{}, string, error) { + resp, err := dsconn.DescribeDirectories(&directoryservice.DescribeDirectoriesInput{ + DirectoryIds: []*string{aws.String(d.Id())}, + }) + if err != nil { + log.Printf("Error during creation of DS: %q", err.Error()) + return nil, "", err + } + + ds := resp.DirectoryDescriptions[0] + log.Printf("[DEBUG] Creation of DS %q is in following stage: %q.", + d.Id(), *ds.Stage) + return ds, *ds.Stage, nil + }, + Timeout: 10 * time.Minute, + } + if _, err := stateConf.WaitForState(); err != nil { + return fmt.Errorf( + "Error waiting for Directory Service (%s) to become available: %#v", + d.Id(), err) + } + + if v, ok := d.GetOk("alias"); ok { + d.SetPartial("alias") + + input := directoryservice.CreateAliasInput{ + DirectoryId: aws.String(d.Id()), + Alias: aws.String(v.(string)), + } + + log.Printf("[DEBUG] Assigning alias %q to DS directory %q", + v.(string), d.Id()) + out, err := dsconn.CreateAlias(&input) + if err != nil { + return err + } + log.Printf("[DEBUG] Alias %q assigned to DS directory %q", + *out.Alias, *out.DirectoryId) + } + + return resourceAwsDirectoryServiceDirectoryUpdate(d, meta) +} + +func resourceAwsDirectoryServiceDirectoryUpdate(d *schema.ResourceData, meta interface{}) error { + dsconn := meta.(*AWSClient).dsconn + + if d.HasChange("enable_sso") { + d.SetPartial("enable_sso") + var err error + + if v, ok := d.GetOk("enable_sso"); ok && v.(bool) { + log.Printf("[DEBUG] Enabling SSO for DS directory %q", d.Id()) + _, err = dsconn.EnableSso(&directoryservice.EnableSsoInput{ + DirectoryId: aws.String(d.Id()), + }) + } else { + log.Printf("[DEBUG] Disabling SSO for DS directory %q", d.Id()) + _, err = dsconn.DisableSso(&directoryservice.DisableSsoInput{ + DirectoryId: aws.String(d.Id()), + }) + } + + if err != nil { + return err + } + } + + return resourceAwsDirectoryServiceDirectoryRead(d, meta) +} 
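// ---------------------------------------------------------------------------
// Illustrative sketch (not part of this change): the Create function above
// blocks on helper/resource.StateChangeConf until the directory's Stage
// reaches "Active". The generic polling pattern it relies on looks like the
// following self-contained example; describeStage is a hypothetical stand-in
// for the DescribeDirectories call used in Create.

package sketch

import (
	"time"

	"github.com/hashicorp/terraform/helper/resource"
)

// waitForStage polls describeStage until it reports target, an error occurs,
// or the timeout elapses; pending lists the intermediate states to tolerate.
func waitForStage(describeStage func() (string, error), pending []string, target string) error {
	stateConf := &resource.StateChangeConf{
		Pending: pending,
		Target:  target, // this version of helper/resource takes a single target state
		Refresh: func() (interface{}, string, error) {
			stage, err := describeStage()
			if err != nil {
				return nil, "", err
			}
			// Returning the stage as both result and state keeps the sketch small.
			return stage, stage, nil
		},
		Timeout: 10 * time.Minute,
	}
	_, err := stateConf.WaitForState()
	return err
}
// ---------------------------------------------------------------------------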
+ +func resourceAwsDirectoryServiceDirectoryRead(d *schema.ResourceData, meta interface{}) error { + dsconn := meta.(*AWSClient).dsconn + + input := directoryservice.DescribeDirectoriesInput{ + DirectoryIds: []*string{aws.String(d.Id())}, + } + out, err := dsconn.DescribeDirectories(&input) + if err != nil { + return err + } + + dir := out.DirectoryDescriptions[0] + log.Printf("[DEBUG] Received DS directory: %s", *dir) + + d.Set("access_url", *dir.AccessUrl) + d.Set("alias", *dir.Alias) + if dir.Description != nil { + d.Set("description", *dir.Description) + } + d.Set("dns_ip_addresses", schema.NewSet(schema.HashString, flattenStringList(dir.DnsIpAddrs))) + d.Set("name", *dir.Name) + if dir.ShortName != nil { + d.Set("short_name", *dir.ShortName) + } + d.Set("size", *dir.Size) + d.Set("type", *dir.Type) + d.Set("vpc_settings", flattenDSVpcSettings(dir.VpcSettings)) + d.Set("enable_sso", *dir.SsoEnabled) + + return nil +} + +func resourceAwsDirectoryServiceDirectoryDelete(d *schema.ResourceData, meta interface{}) error { + dsconn := meta.(*AWSClient).dsconn + + input := directoryservice.DeleteDirectoryInput{ + DirectoryId: aws.String(d.Id()), + } + _, err := dsconn.DeleteDirectory(&input) + if err != nil { + return err + } + + // Wait for deletion + log.Printf("[DEBUG] Waiting for DS (%q) to be deleted", d.Id()) + stateConf := &resource.StateChangeConf{ + Pending: []string{"Deleting"}, + Target: "", + Refresh: func() (interface{}, string, error) { + resp, err := dsconn.DescribeDirectories(&directoryservice.DescribeDirectoriesInput{ + DirectoryIds: []*string{aws.String(d.Id())}, + }) + if err != nil { + return nil, "", err + } + + if len(resp.DirectoryDescriptions) == 0 { + return nil, "", nil + } + + ds := resp.DirectoryDescriptions[0] + log.Printf("[DEBUG] Deletion of DS %q is in following stage: %q.", + d.Id(), *ds.Stage) + return ds, *ds.Stage, nil + }, + Timeout: 10 * time.Minute, + } + if _, err := stateConf.WaitForState(); err != nil { + return fmt.Errorf( + "Error waiting for Directory Service (%s) to be deleted: %q", + d.Id(), err.Error()) + } + + return nil +} diff --git a/builtin/providers/aws/resource_aws_directory_service_directory_test.go b/builtin/providers/aws/resource_aws_directory_service_directory_test.go new file mode 100644 index 000000000..b10174bdb --- /dev/null +++ b/builtin/providers/aws/resource_aws_directory_service_directory_test.go @@ -0,0 +1,283 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/directoryservice" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSDirectoryServiceDirectory_basic(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDirectoryServiceDirectoryDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccDirectoryServiceDirectoryConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckServiceDirectoryExists("aws_directory_service_directory.bar"), + ), + }, + }, + }) +} + +func TestAccAWSDirectoryServiceDirectory_withAliasAndSso(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDirectoryServiceDirectoryDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccDirectoryServiceDirectoryConfig_withAlias, + Check: resource.ComposeTestCheckFunc( + 
testAccCheckServiceDirectoryExists("aws_directory_service_directory.bar_a"), + testAccCheckServiceDirectoryAlias("aws_directory_service_directory.bar_a", + fmt.Sprintf("tf-d-%d", randomInteger)), + testAccCheckServiceDirectorySso("aws_directory_service_directory.bar_a", false), + ), + }, + resource.TestStep{ + Config: testAccDirectoryServiceDirectoryConfig_withSso, + Check: resource.ComposeTestCheckFunc( + testAccCheckServiceDirectoryExists("aws_directory_service_directory.bar_a"), + testAccCheckServiceDirectoryAlias("aws_directory_service_directory.bar_a", + fmt.Sprintf("tf-d-%d", randomInteger)), + testAccCheckServiceDirectorySso("aws_directory_service_directory.bar_a", true), + ), + }, + resource.TestStep{ + Config: testAccDirectoryServiceDirectoryConfig_withSso_modified, + Check: resource.ComposeTestCheckFunc( + testAccCheckServiceDirectoryExists("aws_directory_service_directory.bar_a"), + testAccCheckServiceDirectoryAlias("aws_directory_service_directory.bar_a", + fmt.Sprintf("tf-d-%d", randomInteger)), + testAccCheckServiceDirectorySso("aws_directory_service_directory.bar_a", false), + ), + }, + }, + }) +} + +func testAccCheckDirectoryServiceDirectoryDestroy(s *terraform.State) error { + if len(s.RootModule().Resources) > 0 { + return fmt.Errorf("Expected all resources to be gone, but found: %#v", + s.RootModule().Resources) + } + + return nil +} + +func testAccCheckServiceDirectoryExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + dsconn := testAccProvider.Meta().(*AWSClient).dsconn + out, err := dsconn.DescribeDirectories(&directoryservice.DescribeDirectoriesInput{ + DirectoryIds: []*string{aws.String(rs.Primary.ID)}, + }) + + if err != nil { + return err + } + + if len(out.DirectoryDescriptions) < 1 { + return fmt.Errorf("No DS directory found") + } + + if *out.DirectoryDescriptions[0].DirectoryId != rs.Primary.ID { + return fmt.Errorf("DS directory ID mismatch - existing: %q, state: %q", + *out.DirectoryDescriptions[0].DirectoryId, rs.Primary.ID) + } + + return nil + } +} + +func testAccCheckServiceDirectoryAlias(name, alias string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + dsconn := testAccProvider.Meta().(*AWSClient).dsconn + out, err := dsconn.DescribeDirectories(&directoryservice.DescribeDirectoriesInput{ + DirectoryIds: []*string{aws.String(rs.Primary.ID)}, + }) + + if err != nil { + return err + } + + if *out.DirectoryDescriptions[0].Alias != alias { + return fmt.Errorf("DS directory Alias mismatch - actual: %q, expected: %q", + *out.DirectoryDescriptions[0].Alias, alias) + } + + return nil + } +} + +func testAccCheckServiceDirectorySso(name string, ssoEnabled bool) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + dsconn := testAccProvider.Meta().(*AWSClient).dsconn + out, err := dsconn.DescribeDirectories(&directoryservice.DescribeDirectoriesInput{ + DirectoryIds: []*string{aws.String(rs.Primary.ID)}, + }) + + if err != nil { + return err + } + + if 
*out.DirectoryDescriptions[0].SsoEnabled != ssoEnabled { + return fmt.Errorf("DS directory SSO mismatch - actual: %t, expected: %t", + *out.DirectoryDescriptions[0].SsoEnabled, ssoEnabled) + } + + return nil + } +} + +const testAccDirectoryServiceDirectoryConfig = ` +resource "aws_directory_service_directory" "bar" { + name = "corp.notexample.com" + password = "SuperSecretPassw0rd" + size = "Small" + + vpc_settings { + vpc_id = "${aws_vpc.main.id}" + subnet_ids = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}"] + } +} + +resource "aws_vpc" "main" { + cidr_block = "10.0.0.0/16" +} + +resource "aws_subnet" "foo" { + vpc_id = "${aws_vpc.main.id}" + availability_zone = "us-west-2a" + cidr_block = "10.0.1.0/24" +} +resource "aws_subnet" "bar" { + vpc_id = "${aws_vpc.main.id}" + availability_zone = "us-west-2b" + cidr_block = "10.0.2.0/24" +} +` + +var randomInteger = genRandInt() +var testAccDirectoryServiceDirectoryConfig_withAlias = fmt.Sprintf(` +resource "aws_directory_service_directory" "bar_a" { + name = "corp.notexample.com" + password = "SuperSecretPassw0rd" + size = "Small" + alias = "tf-d-%d" + + vpc_settings { + vpc_id = "${aws_vpc.main.id}" + subnet_ids = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}"] + } +} + +resource "aws_vpc" "main" { + cidr_block = "10.0.0.0/16" +} + +resource "aws_subnet" "foo" { + vpc_id = "${aws_vpc.main.id}" + availability_zone = "us-west-2a" + cidr_block = "10.0.1.0/24" +} +resource "aws_subnet" "bar" { + vpc_id = "${aws_vpc.main.id}" + availability_zone = "us-west-2b" + cidr_block = "10.0.2.0/24" +} +`, randomInteger) + +var testAccDirectoryServiceDirectoryConfig_withSso = fmt.Sprintf(` +resource "aws_directory_service_directory" "bar_a" { + name = "corp.notexample.com" + password = "SuperSecretPassw0rd" + size = "Small" + alias = "tf-d-%d" + enable_sso = true + + vpc_settings { + vpc_id = "${aws_vpc.main.id}" + subnet_ids = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}"] + } +} + +resource "aws_vpc" "main" { + cidr_block = "10.0.0.0/16" +} + +resource "aws_subnet" "foo" { + vpc_id = "${aws_vpc.main.id}" + availability_zone = "us-west-2a" + cidr_block = "10.0.1.0/24" +} +resource "aws_subnet" "bar" { + vpc_id = "${aws_vpc.main.id}" + availability_zone = "us-west-2b" + cidr_block = "10.0.2.0/24" +} +`, randomInteger) + +var testAccDirectoryServiceDirectoryConfig_withSso_modified = fmt.Sprintf(` +resource "aws_directory_service_directory" "bar_a" { + name = "corp.notexample.com" + password = "SuperSecretPassw0rd" + size = "Small" + alias = "tf-d-%d" + enable_sso = false + + vpc_settings { + vpc_id = "${aws_vpc.main.id}" + subnet_ids = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}"] + } +} + +resource "aws_vpc" "main" { + cidr_block = "10.0.0.0/16" +} + +resource "aws_subnet" "foo" { + vpc_id = "${aws_vpc.main.id}" + availability_zone = "us-west-2a" + cidr_block = "10.0.1.0/24" +} +resource "aws_subnet" "bar" { + vpc_id = "${aws_vpc.main.id}" + availability_zone = "us-west-2b" + cidr_block = "10.0.2.0/24" +} +`, randomInteger) diff --git a/builtin/providers/aws/resource_aws_dynamodb_table.go b/builtin/providers/aws/resource_aws_dynamodb_table.go index 333686229..c88f50d8a 100644 --- a/builtin/providers/aws/resource_aws_dynamodb_table.go +++ b/builtin/providers/aws/resource_aws_dynamodb_table.go @@ -287,7 +287,11 @@ func resourceAwsDynamoDbTableCreate(d *schema.ResourceData, meta interface{}) er } else { // No error, set ID and return d.SetId(*output.TableDescription.TableName) - return nil + if err := d.Set("arn", *output.TableDescription.TableArn); err 
!= nil { + return err + } + + return resourceAwsDynamoDbTableRead(d, meta) } } @@ -384,7 +388,7 @@ func resourceAwsDynamoDbTableUpdate(d *schema.ResourceData, meta interface{}) er updates = append(updates, update) // Hash key is required, range key isn't - hashkey_type, err := getAttributeType(d, *(gsi.KeySchema[0].AttributeName)) + hashkey_type, err := getAttributeType(d, *gsi.KeySchema[0].AttributeName) if err != nil { return err } @@ -396,7 +400,7 @@ func resourceAwsDynamoDbTableUpdate(d *schema.ResourceData, meta interface{}) er // If there's a range key, there will be 2 elements in KeySchema if len(gsi.KeySchema) == 2 { - rangekey_type, err := getAttributeType(d, *(gsi.KeySchema[1].AttributeName)) + rangekey_type, err := getAttributeType(d, *gsi.KeySchema[1].AttributeName) if err != nil { return err } @@ -480,8 +484,8 @@ func resourceAwsDynamoDbTableUpdate(d *schema.ResourceData, meta interface{}) er capacityUpdated := false - if int64(gsiReadCapacity) != *(gsi.ProvisionedThroughput.ReadCapacityUnits) || - int64(gsiWriteCapacity) != *(gsi.ProvisionedThroughput.WriteCapacityUnits) { + if int64(gsiReadCapacity) != *gsi.ProvisionedThroughput.ReadCapacityUnits || + int64(gsiWriteCapacity) != *gsi.ProvisionedThroughput.WriteCapacityUnits { capacityUpdated = true } @@ -544,8 +548,8 @@ func resourceAwsDynamoDbTableRead(d *schema.ResourceData, meta interface{}) erro attributes := []interface{}{} for _, attrdef := range table.AttributeDefinitions { attribute := map[string]string{ - "name": *(attrdef.AttributeName), - "type": *(attrdef.AttributeType), + "name": *attrdef.AttributeName, + "type": *attrdef.AttributeType, } attributes = append(attributes, attribute) log.Printf("[DEBUG] Added Attribute: %s", attribute["name"]) @@ -556,9 +560,9 @@ func resourceAwsDynamoDbTableRead(d *schema.ResourceData, meta interface{}) erro gsiList := make([]map[string]interface{}, 0, len(table.GlobalSecondaryIndexes)) for _, gsiObject := range table.GlobalSecondaryIndexes { gsi := map[string]interface{}{ - "write_capacity": *(gsiObject.ProvisionedThroughput.WriteCapacityUnits), - "read_capacity": *(gsiObject.ProvisionedThroughput.ReadCapacityUnits), - "name": *(gsiObject.IndexName), + "write_capacity": *gsiObject.ProvisionedThroughput.WriteCapacityUnits, + "read_capacity": *gsiObject.ProvisionedThroughput.ReadCapacityUnits, + "name": *gsiObject.IndexName, } for _, attribute := range gsiObject.KeySchema { @@ -571,7 +575,7 @@ func resourceAwsDynamoDbTableRead(d *schema.ResourceData, meta interface{}) erro } } - gsi["projection_type"] = *(gsiObject.Projection.ProjectionType) + gsi["projection_type"] = *gsiObject.Projection.ProjectionType gsi["non_key_attributes"] = gsiObject.Projection.NonKeyAttributes gsiList = append(gsiList, gsi) @@ -647,7 +651,7 @@ func createGSIFromData(data *map[string]interface{}) dynamodb.GlobalSecondaryInd func getGlobalSecondaryIndex(indexName string, indexList []*dynamodb.GlobalSecondaryIndexDescription) (*dynamodb.GlobalSecondaryIndexDescription, error) { for _, gsi := range indexList { - if *(gsi.IndexName) == indexName { + if *gsi.IndexName == indexName { return gsi, nil } } @@ -726,7 +730,7 @@ func waitForTableToBeActive(tableName string, meta interface{}) error { return err } - activeState = *(result.Table.TableStatus) == "ACTIVE" + activeState = *result.Table.TableStatus == "ACTIVE" // Wait for a few seconds if !activeState { diff --git a/builtin/providers/aws/resource_aws_dynamodb_table_test.go b/builtin/providers/aws/resource_aws_dynamodb_table_test.go index 6c26efc73..adf457f0a 
100644 --- a/builtin/providers/aws/resource_aws_dynamodb_table_test.go +++ b/builtin/providers/aws/resource_aws_dynamodb_table_test.go @@ -211,7 +211,7 @@ func dynamoDbAttributesToMap(attributes *[]*dynamodb.AttributeDefinition) map[st attrmap := make(map[string]string) for _, attrdef := range *attributes { - attrmap[*(attrdef.AttributeName)] = *(attrdef.AttributeType) + attrmap[*attrdef.AttributeName] = *attrdef.AttributeType } return attrmap diff --git a/builtin/providers/aws/resource_aws_efs_file_system.go b/builtin/providers/aws/resource_aws_efs_file_system.go new file mode 100644 index 000000000..4beae81e0 --- /dev/null +++ b/builtin/providers/aws/resource_aws_efs_file_system.go @@ -0,0 +1,165 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/efs" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsEfsFileSystem() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsEfsFileSystemCreate, + Read: resourceAwsEfsFileSystemRead, + Update: resourceAwsEfsFileSystemUpdate, + Delete: resourceAwsEfsFileSystemDelete, + + Schema: map[string]*schema.Schema{ + "reference_name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "tags": tagsSchema(), + }, + } +} + +func resourceAwsEfsFileSystemCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).efsconn + + referenceName := "" + if v, ok := d.GetOk("reference_name"); ok { + referenceName = v.(string) + "-" + } + token := referenceName + resource.UniqueId() + fs, err := conn.CreateFileSystem(&efs.CreateFileSystemInput{ + CreationToken: aws.String(token), + }) + if err != nil { + return err + } + + log.Printf("[DEBUG] Creating EFS file system: %s", *fs) + d.SetId(*fs.FileSystemId) + + stateConf := &resource.StateChangeConf{ + Pending: []string{"creating"}, + Target: "available", + Refresh: func() (interface{}, string, error) { + resp, err := conn.DescribeFileSystems(&efs.DescribeFileSystemsInput{ + FileSystemId: aws.String(d.Id()), + }) + if err != nil { + return nil, "error", err + } + + if len(resp.FileSystems) < 1 { + return nil, "not-found", fmt.Errorf("EFS file system %q not found", d.Id()) + } + + fs := resp.FileSystems[0] + log.Printf("[DEBUG] current status of %q: %q", *fs.FileSystemId, *fs.LifeCycleState) + return fs, *fs.LifeCycleState, nil + }, + Timeout: 10 * time.Minute, + Delay: 2 * time.Second, + MinTimeout: 3 * time.Second, + } + + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf("Error waiting for EFS file system (%q) to create: %q", + d.Id(), err.Error()) + } + log.Printf("[DEBUG] EFS file system created: %q", *fs.FileSystemId) + + return resourceAwsEfsFileSystemUpdate(d, meta) +} + +func resourceAwsEfsFileSystemUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).efsconn + err := setTagsEFS(conn, d) + if err != nil { + return err + } + + return resourceAwsEfsFileSystemRead(d, meta) +} + +func resourceAwsEfsFileSystemRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).efsconn + + resp, err := conn.DescribeFileSystems(&efs.DescribeFileSystemsInput{ + FileSystemId: aws.String(d.Id()), + }) + if err != nil { + return err + } + if len(resp.FileSystems) < 1 { + return fmt.Errorf("EFS file system %q not found", d.Id()) + } + + tagsResp, err := conn.DescribeTags(&efs.DescribeTagsInput{ + 
FileSystemId: aws.String(d.Id()), + }) + if err != nil { + return err + } + + d.Set("tags", tagsToMapEFS(tagsResp.Tags)) + + return nil +} + +func resourceAwsEfsFileSystemDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).efsconn + + log.Printf("[DEBUG] Deleting EFS file system %s", d.Id()) + _, err := conn.DeleteFileSystem(&efs.DeleteFileSystemInput{ + FileSystemId: aws.String(d.Id()), + }) + if err != nil { + return err + } + + stateConf := &resource.StateChangeConf{ + Pending: []string{"available", "deleting"}, + Target: "", + Refresh: func() (interface{}, string, error) { + resp, err := conn.DescribeFileSystems(&efs.DescribeFileSystemsInput{ + FileSystemId: aws.String(d.Id()), + }) + if err != nil { + efsErr, ok := err.(awserr.Error) + if ok && efsErr.Code() == "FileSystemNotFound" { + return nil, "", nil + } + return nil, "error", err + } + + if len(resp.FileSystems) < 1 { + return nil, "", nil + } + + fs := resp.FileSystems[0] + log.Printf("[DEBUG] current status of %q: %q", + *fs.FileSystemId, *fs.LifeCycleState) + return fs, *fs.LifeCycleState, nil + }, + Timeout: 10 * time.Minute, + Delay: 2 * time.Second, + MinTimeout: 3 * time.Second, + } + + _, err = stateConf.WaitForState() + if err != nil { + return err + } + + log.Printf("[DEBUG] EFS file system %q deleted.", d.Id()) + + return nil +} diff --git a/builtin/providers/aws/resource_aws_efs_file_system_test.go b/builtin/providers/aws/resource_aws_efs_file_system_test.go new file mode 100644 index 000000000..03e683476 --- /dev/null +++ b/builtin/providers/aws/resource_aws_efs_file_system_test.go @@ -0,0 +1,133 @@ +package aws + +import ( + "fmt" + "reflect" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/efs" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSEFSFileSystem(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckEfsFileSystemDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSEFSFileSystemConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckEfsFileSystem( + "aws_efs_file_system.foo", + ), + ), + }, + resource.TestStep{ + Config: testAccAWSEFSFileSystemConfigWithTags, + Check: resource.ComposeTestCheckFunc( + testAccCheckEfsFileSystem( + "aws_efs_file_system.foo-with-tags", + ), + testAccCheckEfsFileSystemTags( + "aws_efs_file_system.foo-with-tags", + map[string]string{ + "Name": "foo-efs", + "Another": "tag", + }, + ), + ), + }, + }, + }) +} + +func testAccCheckEfsFileSystemDestroy(s *terraform.State) error { + if len(s.RootModule().Resources) > 0 { + return fmt.Errorf("Expected all resources to be gone, but found: %#v", s.RootModule().Resources) + } + + return nil +} + +func testAccCheckEfsFileSystem(resourceID string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceID] + if !ok { + return fmt.Errorf("Not found: %s", resourceID) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + fs, ok := s.RootModule().Resources[resourceID] + if !ok { + return fmt.Errorf("Not found: %s", resourceID) + } + + conn := testAccProvider.Meta().(*AWSClient).efsconn + _, err := conn.DescribeFileSystems(&efs.DescribeFileSystemsInput{ + FileSystemId: aws.String(fs.Primary.ID), + }) + + if err != nil { + return err + } + + return nil + } +} + +func testAccCheckEfsFileSystemTags(resourceID string, expectedTags 
map[string]string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceID] + if !ok { + return fmt.Errorf("Not found: %s", resourceID) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + fs, ok := s.RootModule().Resources[resourceID] + if !ok { + return fmt.Errorf("Not found: %s", resourceID) + } + + conn := testAccProvider.Meta().(*AWSClient).efsconn + resp, err := conn.DescribeTags(&efs.DescribeTagsInput{ + FileSystemId: aws.String(fs.Primary.ID), + }) + if err != nil { + return err + } + + if !reflect.DeepEqual(expectedTags, tagsToMapEFS(resp.Tags)) { + return fmt.Errorf("Tags mismatch.\nExpected: %#v\nGiven: %#v", + expectedTags, resp.Tags) + } + + return nil + } +} + +const testAccAWSEFSFileSystemConfig = ` +resource "aws_efs_file_system" "foo" { + reference_name = "radeksimko" +} +` + +const testAccAWSEFSFileSystemConfigWithTags = ` +resource "aws_efs_file_system" "foo-with-tags" { + reference_name = "yada_yada" + tags { + Name = "foo-efs" + Another = "tag" + } +} +` diff --git a/builtin/providers/aws/resource_aws_efs_mount_target.go b/builtin/providers/aws/resource_aws_efs_mount_target.go new file mode 100644 index 000000000..ca7656e63 --- /dev/null +++ b/builtin/providers/aws/resource_aws_efs_mount_target.go @@ -0,0 +1,223 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/efs" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsEfsMountTarget() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsEfsMountTargetCreate, + Read: resourceAwsEfsMountTargetRead, + Update: resourceAwsEfsMountTargetUpdate, + Delete: resourceAwsEfsMountTargetDelete, + + Schema: map[string]*schema.Schema{ + "file_system_id": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "ip_address": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Optional: true, + ForceNew: true, + }, + + "security_groups": &schema.Schema{ + Type: schema.TypeSet, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + Computed: true, + Optional: true, + }, + + "subnet_id": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "network_interface_id": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func resourceAwsEfsMountTargetCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).efsconn + + input := efs.CreateMountTargetInput{ + FileSystemId: aws.String(d.Get("file_system_id").(string)), + SubnetId: aws.String(d.Get("subnet_id").(string)), + } + + if v, ok := d.GetOk("ip_address"); ok { + input.IpAddress = aws.String(v.(string)) + } + if v, ok := d.GetOk("security_groups"); ok { + input.SecurityGroups = expandStringList(v.(*schema.Set).List()) + } + + log.Printf("[DEBUG] Creating EFS mount target: %#v", input) + + mt, err := conn.CreateMountTarget(&input) + if err != nil { + return err + } + + d.SetId(*mt.MountTargetId) + + stateConf := &resource.StateChangeConf{ + Pending: []string{"creating"}, + Target: "available", + Refresh: func() (interface{}, string, error) { + resp, err := conn.DescribeMountTargets(&efs.DescribeMountTargetsInput{ + MountTargetId: aws.String(d.Id()), + }) + if err != nil { + return nil, "error", err + } + + if len(resp.MountTargets) < 1 { + return nil, "error", 
fmt.Errorf("EFS mount target %q not found", d.Id()) + } + + mt := resp.MountTargets[0] + + log.Printf("[DEBUG] Current status of %q: %q", *mt.MountTargetId, *mt.LifeCycleState) + return mt, *mt.LifeCycleState, nil + }, + Timeout: 10 * time.Minute, + Delay: 2 * time.Second, + MinTimeout: 3 * time.Second, + } + + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf("Error waiting for EFS mount target (%s) to create: %s", d.Id(), err) + } + + log.Printf("[DEBUG] EFS mount target created: %s", *mt.MountTargetId) + + return resourceAwsEfsMountTargetRead(d, meta) +} + +func resourceAwsEfsMountTargetUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).efsconn + + if d.HasChange("security_groups") { + input := efs.ModifyMountTargetSecurityGroupsInput{ + MountTargetId: aws.String(d.Id()), + SecurityGroups: expandStringList(d.Get("security_groups").(*schema.Set).List()), + } + _, err := conn.ModifyMountTargetSecurityGroups(&input) + if err != nil { + return err + } + } + + return resourceAwsEfsMountTargetRead(d, meta) +} + +func resourceAwsEfsMountTargetRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).efsconn + resp, err := conn.DescribeMountTargets(&efs.DescribeMountTargetsInput{ + MountTargetId: aws.String(d.Id()), + }) + if err != nil { + return err + } + + if len(resp.MountTargets) < 1 { + return fmt.Errorf("EFS mount target %q not found", d.Id()) + } + + mt := resp.MountTargets[0] + + log.Printf("[DEBUG] Found EFS mount target: %#v", mt) + + d.SetId(*mt.MountTargetId) + d.Set("file_system_id", *mt.FileSystemId) + d.Set("ip_address", *mt.IpAddress) + d.Set("subnet_id", *mt.SubnetId) + d.Set("network_interface_id", *mt.NetworkInterfaceId) + + sgResp, err := conn.DescribeMountTargetSecurityGroups(&efs.DescribeMountTargetSecurityGroupsInput{ + MountTargetId: aws.String(d.Id()), + }) + if err != nil { + return err + } + + d.Set("security_groups", schema.NewSet(schema.HashString, flattenStringList(sgResp.SecurityGroups))) + + return nil +} + +func resourceAwsEfsMountTargetDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).efsconn + + log.Printf("[DEBUG] Deleting EFS mount target %q", d.Id()) + _, err := conn.DeleteMountTarget(&efs.DeleteMountTargetInput{ + MountTargetId: aws.String(d.Id()), + }) + if err != nil { + return err + } + + stateConf := &resource.StateChangeConf{ + Pending: []string{"available", "deleting", "deleted"}, + Target: "", + Refresh: func() (interface{}, string, error) { + resp, err := conn.DescribeMountTargets(&efs.DescribeMountTargetsInput{ + MountTargetId: aws.String(d.Id()), + }) + if err != nil { + awsErr, ok := err.(awserr.Error) + if !ok { + return nil, "error", err + } + + if awsErr.Code() == "MountTargetNotFound" { + return nil, "", nil + } + + return nil, "error", awsErr + } + + if len(resp.MountTargets) < 1 { + return nil, "", nil + } + + mt := resp.MountTargets[0] + + log.Printf("[DEBUG] Current status of %q: %q", *mt.MountTargetId, *mt.LifeCycleState) + return mt, *mt.LifeCycleState, nil + }, + Timeout: 10 * time.Minute, + Delay: 2 * time.Second, + MinTimeout: 3 * time.Second, + } + + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf("Error waiting for EFS mount target (%q) to delete: %q", + d.Id(), err.Error()) + } + + log.Printf("[DEBUG] EFS mount target %q deleted.", d.Id()) + + return nil +} diff --git a/builtin/providers/aws/resource_aws_efs_mount_target_test.go b/builtin/providers/aws/resource_aws_efs_mount_target_test.go 
new file mode 100644 index 000000000..e9d624e03 --- /dev/null +++ b/builtin/providers/aws/resource_aws_efs_mount_target_test.go @@ -0,0 +1,135 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/efs" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSEFSMountTarget(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckEfsMountTargetDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSEFSMountTargetConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckEfsMountTarget( + "aws_efs_mount_target.alpha", + ), + ), + }, + resource.TestStep{ + Config: testAccAWSEFSMountTargetConfigModified, + Check: resource.ComposeTestCheckFunc( + testAccCheckEfsMountTarget( + "aws_efs_mount_target.alpha", + ), + testAccCheckEfsMountTarget( + "aws_efs_mount_target.beta", + ), + ), + }, + }, + }) +} + +func testAccCheckEfsMountTargetDestroy(s *terraform.State) error { + if len(s.RootModule().Resources) > 0 { + return fmt.Errorf("Expected all resources to be gone, but found: %#v", s.RootModule().Resources) + } + + return nil +} + +func testAccCheckEfsMountTarget(resourceID string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceID] + if !ok { + return fmt.Errorf("Not found: %s", resourceID) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + fs, ok := s.RootModule().Resources[resourceID] + if !ok { + return fmt.Errorf("Not found: %s", resourceID) + } + + conn := testAccProvider.Meta().(*AWSClient).efsconn + mt, err := conn.DescribeMountTargets(&efs.DescribeMountTargetsInput{ + MountTargetId: aws.String(fs.Primary.ID), + }) + if err != nil { + return err + } + + if *mt.MountTargets[0].MountTargetId != fs.Primary.ID { + return fmt.Errorf("Mount target ID mismatch: %q != %q", + *mt.MountTargets[0].MountTargetId, fs.Primary.ID) + } + + return nil + } +} + +const testAccAWSEFSMountTargetConfig = ` +resource "aws_efs_file_system" "foo" { + reference_name = "radeksimko" +} + +resource "aws_efs_mount_target" "alpha" { + file_system_id = "${aws_efs_file_system.foo.id}" + subnet_id = "${aws_subnet.alpha.id}" +} + +resource "aws_vpc" "foo" { + cidr_block = "10.0.0.0/16" +} + +resource "aws_subnet" "alpha" { + vpc_id = "${aws_vpc.foo.id}" + availability_zone = "us-west-2a" + cidr_block = "10.0.1.0/24" +} +` + +const testAccAWSEFSMountTargetConfigModified = ` +resource "aws_efs_file_system" "foo" { + reference_name = "radeksimko" +} + +resource "aws_efs_mount_target" "alpha" { + file_system_id = "${aws_efs_file_system.foo.id}" + subnet_id = "${aws_subnet.alpha.id}" +} + +resource "aws_efs_mount_target" "beta" { + file_system_id = "${aws_efs_file_system.foo.id}" + subnet_id = "${aws_subnet.beta.id}" +} + +resource "aws_vpc" "foo" { + cidr_block = "10.0.0.0/16" +} + +resource "aws_subnet" "alpha" { + vpc_id = "${aws_vpc.foo.id}" + availability_zone = "us-west-2a" + cidr_block = "10.0.1.0/24" +} + +resource "aws_subnet" "beta" { + vpc_id = "${aws_vpc.foo.id}" + availability_zone = "us-west-2b" + cidr_block = "10.0.2.0/24" +} +` diff --git a/builtin/providers/aws/resource_aws_eip.go b/builtin/providers/aws/resource_aws_eip.go index 4729af537..4b369ee60 100644 --- a/builtin/providers/aws/resource_aws_eip.go +++ b/builtin/providers/aws/resource_aws_eip.go @@ -30,13 +30,13 @@ 
func resourceAwsEip() *schema.Resource { "instance": &schema.Schema{ Type: schema.TypeString, Optional: true, + Computed: true, }, "network_interface": &schema.Schema{ - Type: schema.TypeString, - Optional: true, - Computed: true, - ConflictsWith: []string{"instance"}, + Type: schema.TypeString, + Optional: true, + Computed: true, }, "allocation_id": &schema.Schema{ @@ -134,7 +134,7 @@ func resourceAwsEipRead(d *schema.ResourceData, meta interface{}) error { // Verify AWS returned our EIP if len(describeAddresses.Addresses) != 1 || - (domain == "vpc" && *describeAddresses.Addresses[0].AllocationId != id) || + domain == "vpc" && *describeAddresses.Addresses[0].AllocationId != id || *describeAddresses.Addresses[0].PublicIp != id { if err != nil { return fmt.Errorf("Unable to find EIP: %#v", describeAddresses.Addresses) diff --git a/builtin/providers/aws/resource_aws_elasticache_cluster.go b/builtin/providers/aws/resource_aws_elasticache_cluster.go index 093ea88f8..700b7ddc8 100644 --- a/builtin/providers/aws/resource_aws_elasticache_cluster.go +++ b/builtin/providers/aws/resource_aws_elasticache_cluster.go @@ -28,6 +28,12 @@ func resourceAwsElasticacheCluster() *schema.Resource { Type: schema.TypeString, Required: true, ForceNew: true, + StateFunc: func(val interface{}) string { + // Elasticache normalizes cluster ids to lowercase, + // so we have to do this too or else we can end up + // with non-converging diffs. + return strings.ToLower(val.(string)) + }, }, "configuration_endpoint": &schema.Schema{ Type: schema.TypeString, @@ -194,7 +200,11 @@ func resourceAwsElasticacheClusterCreate(d *schema.ResourceData, meta interface{ return fmt.Errorf("Error creating Elasticache: %s", err) } - d.SetId(*resp.CacheCluster.CacheClusterId) + // Assign the cluster id as the resource ID + // Elasticache always retains the id in lower case, so we have to + // mimic that or else we won't be able to refresh a resource whose + // name contained uppercase characters. + d.SetId(strings.ToLower(*resp.CacheCluster.CacheClusterId)) pending := []string{"creating"} stateConf := &resource.StateChangeConf{ diff --git a/builtin/providers/aws/resource_aws_elasticache_cluster_test.go b/builtin/providers/aws/resource_aws_elasticache_cluster_test.go index caa14a8df..173ca21ea 100644 --- a/builtin/providers/aws/resource_aws_elasticache_cluster_test.go +++ b/builtin/providers/aws/resource_aws_elasticache_cluster_test.go @@ -163,7 +163,10 @@ resource "aws_security_group" "bar" { } resource "aws_elasticache_cluster" "bar" { - cluster_id = "tf-test-%03d" + // Including uppercase letters in this name to ensure + // that we correctly handle the fact that the API + // normalizes names to lowercase. 
+ cluster_id = "tf-TEST-%03d" node_type = "cache.m1.small" num_cache_nodes = 1 engine = "redis" diff --git a/builtin/providers/aws/resource_aws_elasticsearch_domain.go b/builtin/providers/aws/resource_aws_elasticsearch_domain.go new file mode 100644 index 000000000..8f2d6c9c9 --- /dev/null +++ b/builtin/providers/aws/resource_aws_elasticsearch_domain.go @@ -0,0 +1,399 @@ +package aws + +import ( + "fmt" + "log" + "regexp" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + elasticsearch "github.com/aws/aws-sdk-go/service/elasticsearchservice" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsElasticSearchDomain() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsElasticSearchDomainCreate, + Read: resourceAwsElasticSearchDomainRead, + Update: resourceAwsElasticSearchDomainUpdate, + Delete: resourceAwsElasticSearchDomainDelete, + + Schema: map[string]*schema.Schema{ + "access_policies": &schema.Schema{ + Type: schema.TypeString, + StateFunc: normalizeJson, + Optional: true, + }, + "advanced_options": &schema.Schema{ + Type: schema.TypeMap, + Optional: true, + Computed: true, + }, + "domain_name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[0-9A-Za-z]+`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q must start with a letter or number", k)) + } + if !regexp.MustCompile(`^[0-9A-Za-z][0-9a-z-]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q can only contain lowercase characters, numbers and hyphens", k)) + } + return + }, + }, + "arn": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + "domain_id": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + "endpoint": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + "ebs_options": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "ebs_enabled": &schema.Schema{ + Type: schema.TypeBool, + Required: true, + }, + "iops": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + }, + "volume_size": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + }, + "volume_type": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + }, + }, + }, + "cluster_config": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "dedicated_master_count": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + }, + "dedicated_master_enabled": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "dedicated_master_type": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + "instance_count": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Default: 1, + }, + "instance_type": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "m3.medium.elasticsearch", + }, + "zone_awareness_enabled": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + }, + }, + }, + }, + "snapshot_options": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "automated_snapshot_start_hour": &schema.Schema{ + Type: schema.TypeInt, + Required: true, + }, + }, + }, + }, + }, + } +} + 
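// Illustrative sketch (not part of this change): the domain_name ValidateFunc
// above accepts names like "tf-test-1" while rejecting a leading hyphen or any
// uppercase character after the first position. A hypothetical table-driven
// test of that behaviour (it would live in the corresponding _test.go, which
// imports "testing"):
func TestResourceAwsElasticSearchDomain_domainNameValidation_sketch(t *testing.T) {
	validate := resourceAwsElasticSearchDomain().Schema["domain_name"].ValidateFunc
	cases := map[string]bool{ // input -> whether validation errors are expected
		"tf-test-1": false, // lowercase letters, digits and hyphens are accepted
		"-tf-test":  true,  // must start with a letter or number
		"tf-TEST":   true,  // uppercase beyond the first character is rejected
	}
	for name, wantErr := range cases {
		_, errs := validate(name, "domain_name")
		if (len(errs) > 0) != wantErr {
			t.Fatalf("%q: unexpected validation result: %v", name, errs)
		}
	}
}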
+func resourceAwsElasticSearchDomainCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).esconn + + input := elasticsearch.CreateElasticsearchDomainInput{ + DomainName: aws.String(d.Get("domain_name").(string)), + } + + if v, ok := d.GetOk("access_policies"); ok { + input.AccessPolicies = aws.String(v.(string)) + } + + if v, ok := d.GetOk("advanced_options"); ok { + input.AdvancedOptions = stringMapToPointers(v.(map[string]interface{})) + } + + if v, ok := d.GetOk("ebs_options"); ok { + options := v.([]interface{}) + + if len(options) > 1 { + return fmt.Errorf("Only a single ebs_options block is expected") + } else if len(options) == 1 { + if options[0] == nil { + return fmt.Errorf("At least one field is expected inside ebs_options") + } + + s := options[0].(map[string]interface{}) + input.EBSOptions = expandESEBSOptions(s) + } + } + + if v, ok := d.GetOk("cluster_config"); ok { + config := v.([]interface{}) + + if len(config) > 1 { + return fmt.Errorf("Only a single cluster_config block is expected") + } else if len(config) == 1 { + if config[0] == nil { + return fmt.Errorf("At least one field is expected inside cluster_config") + } + m := config[0].(map[string]interface{}) + input.ElasticsearchClusterConfig = expandESClusterConfig(m) + } + } + + if v, ok := d.GetOk("snapshot_options"); ok { + options := v.([]interface{}) + + if len(options) > 1 { + return fmt.Errorf("Only a single snapshot_options block is expected") + } else if len(options) == 1 { + if options[0] == nil { + return fmt.Errorf("At least one field is expected inside snapshot_options") + } + + o := options[0].(map[string]interface{}) + + snapshotOptions := elasticsearch.SnapshotOptions{ + AutomatedSnapshotStartHour: aws.Int64(int64(o["automated_snapshot_start_hour"].(int))), + } + + input.SnapshotOptions = &snapshotOptions + } + } + + log.Printf("[DEBUG] Creating ElasticSearch domain: %s", input) + out, err := conn.CreateElasticsearchDomain(&input) + if err != nil { + return err + } + + d.SetId(*out.DomainStatus.ARN) + + log.Printf("[DEBUG] Waiting for ElasticSearch domain %q to be created", d.Id()) + err = resource.Retry(15*time.Minute, func() error { + out, err := conn.DescribeElasticsearchDomain(&elasticsearch.DescribeElasticsearchDomainInput{ + DomainName: aws.String(d.Get("domain_name").(string)), + }) + if err != nil { + return resource.RetryError{Err: err} + } + + if !*out.DomainStatus.Processing && out.DomainStatus.Endpoint != nil { + return nil + } + + return fmt.Errorf("%q: Timeout while waiting for the domain to be created", d.Id()) + }) + if err != nil { + return err + } + + log.Printf("[DEBUG] ElasticSearch domain %q created", d.Id()) + + return resourceAwsElasticSearchDomainRead(d, meta) +} + +func resourceAwsElasticSearchDomainRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).esconn + + out, err := conn.DescribeElasticsearchDomain(&elasticsearch.DescribeElasticsearchDomainInput{ + DomainName: aws.String(d.Get("domain_name").(string)), + }) + if err != nil { + return err + } + + log.Printf("[DEBUG] Received ElasticSearch domain: %s", out) + + ds := out.DomainStatus + + d.Set("access_policies", *ds.AccessPolicies) + err = d.Set("advanced_options", pointersMapToStringList(ds.AdvancedOptions)) + if err != nil { + return err + } + d.Set("domain_id", *ds.DomainId) + d.Set("domain_name", *ds.DomainName) + if ds.Endpoint != nil { + d.Set("endpoint", *ds.Endpoint) + } + + err = d.Set("ebs_options", flattenESEBSOptions(ds.EBSOptions)) + if err != nil { + 
return err + } + err = d.Set("cluster_config", flattenESClusterConfig(ds.ElasticsearchClusterConfig)) + if err != nil { + return err + } + if ds.SnapshotOptions != nil { + d.Set("snapshot_options", map[string]interface{}{ + "automated_snapshot_start_hour": *ds.SnapshotOptions.AutomatedSnapshotStartHour, + }) + } + + d.Set("arn", *ds.ARN) + + return nil +} + +func resourceAwsElasticSearchDomainUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).esconn + + input := elasticsearch.UpdateElasticsearchDomainConfigInput{ + DomainName: aws.String(d.Get("domain_name").(string)), + } + + if d.HasChange("access_policies") { + input.AccessPolicies = aws.String(d.Get("access_policies").(string)) + } + + if d.HasChange("advanced_options") { + input.AdvancedOptions = stringMapToPointers(d.Get("advanced_options").(map[string]interface{})) + } + + if d.HasChange("ebs_options") { + options := d.Get("ebs_options").([]interface{}) + + if len(options) > 1 { + return fmt.Errorf("Only a single ebs_options block is expected") + } else if len(options) == 1 { + s := options[0].(map[string]interface{}) + input.EBSOptions = expandESEBSOptions(s) + } + } + + if d.HasChange("cluster_config") { + config := d.Get("cluster_config").([]interface{}) + + if len(config) > 1 { + return fmt.Errorf("Only a single cluster_config block is expected") + } else if len(config) == 1 { + m := config[0].(map[string]interface{}) + input.ElasticsearchClusterConfig = expandESClusterConfig(m) + } + } + + if d.HasChange("snapshot_options") { + options := d.Get("snapshot_options").([]interface{}) + + if len(options) > 1 { + return fmt.Errorf("Only a single snapshot_options block is expected") + } else if len(options) == 1 { + o := options[0].(map[string]interface{}) + + snapshotOptions := elasticsearch.SnapshotOptions{ + AutomatedSnapshotStartHour: aws.Int64(int64(o["automated_snapshot_start_hour"].(int))), + } + + input.SnapshotOptions = &snapshotOptions + } + } + + _, err := conn.UpdateElasticsearchDomainConfig(&input) + if err != nil { + return err + } + + err = resource.Retry(25*time.Minute, func() error { + out, err := conn.DescribeElasticsearchDomain(&elasticsearch.DescribeElasticsearchDomainInput{ + DomainName: aws.String(d.Get("domain_name").(string)), + }) + if err != nil { + return resource.RetryError{Err: err} + } + + if *out.DomainStatus.Processing == false { + return nil + } + + return fmt.Errorf("%q: Timeout while waiting for changes to be processed", d.Id()) + }) + if err != nil { + return err + } + + return resourceAwsElasticSearchDomainRead(d, meta) +} + +func resourceAwsElasticSearchDomainDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).esconn + + log.Printf("[DEBUG] Deleting ElasticSearch domain: %q", d.Get("domain_name").(string)) + _, err := conn.DeleteElasticsearchDomain(&elasticsearch.DeleteElasticsearchDomainInput{ + DomainName: aws.String(d.Get("domain_name").(string)), + }) + if err != nil { + return err + } + + log.Printf("[DEBUG] Waiting for ElasticSearch domain %q to be deleted", d.Get("domain_name").(string)) + err = resource.Retry(15*time.Minute, func() error { + out, err := conn.DescribeElasticsearchDomain(&elasticsearch.DescribeElasticsearchDomainInput{ + DomainName: aws.String(d.Get("domain_name").(string)), + }) + + if err != nil { + awsErr, ok := err.(awserr.Error) + if !ok { + return resource.RetryError{Err: err} + } + + if awsErr.Code() == "ResourceNotFoundException" { + return nil + } + + return resource.RetryError{Err: awsErr} + } + + 
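// Note (explanatory comment, not in the original change): reaching this point
// means DescribeElasticsearchDomain succeeded, i.e. the domain still exists.
// In this version of helper/resource, returning a plain error from the retry
// function asks resource.Retry to try again until the 15-minute window closes,
// wrapping an error in resource.RetryError (as above) aborts immediately, and
// returning nil stops the wait, which happens once AWS answers with
// ResourceNotFoundException.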
if !*out.DomainStatus.Processing { + return nil + } + + return fmt.Errorf("%q: Timeout while waiting for the domain to be deleted", d.Id()) + }) + + d.SetId("") + + return err +} diff --git a/builtin/providers/aws/resource_aws_elasticsearch_domain_test.go b/builtin/providers/aws/resource_aws_elasticsearch_domain_test.go new file mode 100644 index 000000000..dee675d0d --- /dev/null +++ b/builtin/providers/aws/resource_aws_elasticsearch_domain_test.go @@ -0,0 +1,122 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + elasticsearch "github.com/aws/aws-sdk-go/service/elasticsearchservice" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSElasticSearchDomain_basic(t *testing.T) { + var domain elasticsearch.ElasticsearchDomainStatus + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckESDomainDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccESDomainConfig_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckESDomainExists("aws_elasticsearch_domain.example", &domain), + ), + }, + }, + }) +} + +func TestAccAWSElasticSearchDomain_complex(t *testing.T) { + var domain elasticsearch.ElasticsearchDomainStatus + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckESDomainDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccESDomainConfig_complex, + Check: resource.ComposeTestCheckFunc( + testAccCheckESDomainExists("aws_elasticsearch_domain.example", &domain), + ), + }, + }, + }) +} + +func testAccCheckESDomainExists(n string, domain *elasticsearch.ElasticsearchDomainStatus) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ES Domain ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).esconn + opts := &elasticsearch.DescribeElasticsearchDomainInput{ + DomainName: aws.String(rs.Primary.Attributes["domain_name"]), + } + + resp, err := conn.DescribeElasticsearchDomain(opts) + if err != nil { + return fmt.Errorf("Error describing domain: %s", err.Error()) + } + + *domain = *resp.DomainStatus + + return nil + } +} + +func testAccCheckESDomainDestroy(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_elasticsearch_domain" { + continue + } + + conn := testAccProvider.Meta().(*AWSClient).esconn + opts := &elasticsearch.DescribeElasticsearchDomainInput{ + DomainName: aws.String(rs.Primary.Attributes["domain_name"]), + } + + _, err := conn.DescribeElasticsearchDomain(opts) + if err != nil { + return fmt.Errorf("Error describing ES domains: %q", err.Error()) + } + } + return nil +} + +const testAccESDomainConfig_basic = ` +resource "aws_elasticsearch_domain" "example" { + domain_name = "tf-test-1" +} +` + +const testAccESDomainConfig_complex = ` +resource "aws_elasticsearch_domain" "example" { + domain_name = "tf-test-2" + + advanced_options { + "indices.fielddata.cache.size" = 80 + } + + ebs_options { + ebs_enabled = false + } + + cluster_config { + instance_count = 2 + zone_awareness_enabled = true + } + + snapshot_options { + automated_snapshot_start_hour = 23 + } +} +` diff --git a/builtin/providers/aws/resource_aws_elb.go 
b/builtin/providers/aws/resource_aws_elb.go index a57fc840a..9955c7cf0 100644 --- a/builtin/providers/aws/resource_aws_elb.go +++ b/builtin/providers/aws/resource_aws_elb.go @@ -24,31 +24,11 @@ func resourceAwsElb() *schema.Resource { Schema: map[string]*schema.Schema{ "name": &schema.Schema{ - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, - ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { - value := v.(string) - if !regexp.MustCompile(`^[0-9A-Za-z-]+$`).MatchString(value) { - errors = append(errors, fmt.Errorf( - "only alphanumeric characters and hyphens allowed in %q: %q", - k, value)) - } - if len(value) > 32 { - errors = append(errors, fmt.Errorf( - "%q cannot be longer than 32 characters: %q", k, value)) - } - if regexp.MustCompile(`^-`).MatchString(value) { - errors = append(errors, fmt.Errorf( - "%q cannot begin with a hyphen: %q", k, value)) - } - if regexp.MustCompile(`-$`).MatchString(value) { - errors = append(errors, fmt.Errorf( - "%q cannot end with a hyphen: %q", k, value)) - } - return - }, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ValidateFunc: validateElbName, }, "internal": &schema.Schema{ @@ -591,3 +571,26 @@ func isLoadBalancerNotFound(err error) bool { elberr, ok := err.(awserr.Error) return ok && elberr.Code() == "LoadBalancerNotFound" } + +func validateElbName(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[0-9A-Za-z-]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only alphanumeric characters and hyphens allowed in %q: %q", + k, value)) + } + if len(value) > 32 { + errors = append(errors, fmt.Errorf( + "%q cannot be longer than 32 characters: %q", k, value)) + } + if regexp.MustCompile(`^-`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q cannot begin with a hyphen: %q", k, value)) + } + if regexp.MustCompile(`-$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q cannot end with a hyphen: %q", k, value)) + } + return + +} diff --git a/builtin/providers/aws/resource_aws_elb_test.go b/builtin/providers/aws/resource_aws_elb_test.go index 941b2fdef..dadf4aba3 100644 --- a/builtin/providers/aws/resource_aws_elb_test.go +++ b/builtin/providers/aws/resource_aws_elb_test.go @@ -431,12 +431,48 @@ func TestResourceAwsElbListenerHash(t *testing.T) { for tn, tc := range cases { leftHash := resourceAwsElbListenerHash(tc.Left) rightHash := resourceAwsElbListenerHash(tc.Right) - if (leftHash == rightHash) != tc.Match { + if leftHash == rightHash != tc.Match { t.Fatalf("%s: expected match: %t, but did not get it", tn, tc.Match) } } } +func TestResourceAWSELB_validateElbNameCannotBeginWithHyphen(t *testing.T) { + var elbName = "-Testing123" + _, errors := validateElbName(elbName, "SampleKey") + + if len(errors) != 1 { + t.Fatalf("Expected the ELB Name to trigger a validation error") + } +} + +func TestResourceAWSELB_validateElbNameCannotBeLongerThen32Characters(t *testing.T) { + var elbName = "Testing123dddddddddddddddddddvvvv" + _, errors := validateElbName(elbName, "SampleKey") + + if len(errors) != 1 { + t.Fatalf("Expected the ELB Name to trigger a validation error") + } +} + +func TestResourceAWSELB_validateElbNameCannotHaveSpecialCharacters(t *testing.T) { + var elbName = "Testing123%%" + _, errors := validateElbName(elbName, "SampleKey") + + if len(errors) != 1 { + t.Fatalf("Expected the ELB Name to trigger a validation error") + } +} + +func 
TestResourceAWSELB_validateElbNameCannotEndWithHyphen(t *testing.T) { + var elbName = "Testing123-" + _, errors := validateElbName(elbName, "SampleKey") + + if len(errors) != 1 { + t.Fatalf("Expected the ELB Name to trigger a validation error") + } +} + func testAccCheckAWSELBDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).elbconn diff --git a/builtin/providers/aws/resource_aws_glacier_vault.go b/builtin/providers/aws/resource_aws_glacier_vault.go new file mode 100644 index 000000000..21ac4d7cc --- /dev/null +++ b/builtin/providers/aws/resource_aws_glacier_vault.go @@ -0,0 +1,387 @@ +package aws + +import ( + "fmt" + "log" + "regexp" + + "github.com/hashicorp/terraform/helper/schema" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/glacier" +) + +func resourceAwsGlacierVault() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsGlacierVaultCreate, + Read: resourceAwsGlacierVaultRead, + Update: resourceAwsGlacierVaultUpdate, + Delete: resourceAwsGlacierVaultDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[.0-9A-Za-z-_]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only alphanumeric characters, hyphens, underscores, and periods allowed in %q", k)) + } + if len(value) > 255 { + errors = append(errors, fmt.Errorf( + "%q cannot be longer than 255 characters", k)) + } + return + }, + }, + + "location": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "arn": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "access_policy": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + StateFunc: normalizeJson, + }, + + "notification": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "events": &schema.Schema{ + Type: schema.TypeSet, + Required: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + "sns_topic": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + + "tags": tagsSchema(), + }, + } +} + +func resourceAwsGlacierVaultCreate(d *schema.ResourceData, meta interface{}) error { + glacierconn := meta.(*AWSClient).glacierconn + + input := &glacier.CreateVaultInput{ + VaultName: aws.String(d.Get("name").(string)), + } + + out, err := glacierconn.CreateVault(input) + if err != nil { + return fmt.Errorf("Error creating Glacier Vault: %s", err) + } + + d.SetId(d.Get("name").(string)) + d.Set("location", *out.Location) + + return resourceAwsGlacierVaultUpdate(d, meta) +} + +func resourceAwsGlacierVaultUpdate(d *schema.ResourceData, meta interface{}) error { + glacierconn := meta.(*AWSClient).glacierconn + + if err := setGlacierVaultTags(glacierconn, d); err != nil { + return err + } + + if d.HasChange("access_policy") { + if err := resourceAwsGlacierVaultPolicyUpdate(glacierconn, d); err != nil { + return err + } + } + + if d.HasChange("notification") { + if err := resourceAwsGlacierVaultNotificationUpdate(glacierconn, d); err != nil { + return err + } + } + + return resourceAwsGlacierVaultRead(d, meta) +} + +func resourceAwsGlacierVaultRead(d *schema.ResourceData, meta interface{}) error { + glacierconn := meta.(*AWSClient).glacierconn + + input := &glacier.DescribeVaultInput{ + 
	VaultName: aws.String(d.Id()),
+	}
+
+	out, err := glacierconn.DescribeVault(input)
+	if err != nil {
+		return fmt.Errorf("Error reading Glacier Vault: %s", err.Error())
+	}
+
+	d.Set("arn", *out.VaultARN)
+
+	tags, err := getGlacierVaultTags(glacierconn, d.Id())
+	if err != nil {
+		return err
+	}
+	d.Set("tags", tags)
+
+	log.Printf("[DEBUG] Getting the access_policy for Vault %s", d.Id())
+	pol, err := glacierconn.GetVaultAccessPolicy(&glacier.GetVaultAccessPolicyInput{
+		VaultName: aws.String(d.Id()),
+	})
+
+	if awserr, ok := err.(awserr.Error); ok && awserr.Code() == "ResourceNotFoundException" {
+		d.Set("access_policy", "")
+	} else if pol != nil {
+		d.Set("access_policy", normalizeJson(*pol.Policy.Policy))
+	} else {
+		return err
+	}
+
+	// A vault with no notification configuration is not an error; any other
+	// failure from the API is.
+	notifications, err := getGlacierVaultNotification(glacierconn, d.Id())
+	if awserr, ok := err.(awserr.Error); ok && awserr.Code() == "ResourceNotFoundException" {
+		d.Set("notification", "")
+	} else if err == nil {
+		d.Set("notification", notifications)
+	} else {
+		return err
+	}
+
+	return nil
+}
+
+func resourceAwsGlacierVaultDelete(d *schema.ResourceData, meta interface{}) error {
+	glacierconn := meta.(*AWSClient).glacierconn
+
+	log.Printf("[DEBUG] Glacier Delete Vault: %s", d.Id())
+	_, err := glacierconn.DeleteVault(&glacier.DeleteVaultInput{
+		VaultName: aws.String(d.Id()),
+	})
+	if err != nil {
+		return fmt.Errorf("Error deleting Glacier Vault: %s", err.Error())
+	}
+	return nil
+}
+
+func resourceAwsGlacierVaultNotificationUpdate(glacierconn *glacier.Glacier, d *schema.ResourceData) error {
+	if v, ok := d.GetOk("notification"); ok {
+		settings := v.([]interface{})
+
+		if len(settings) > 1 {
+			return fmt.Errorf("Only a single Notification Block is allowed for Glacier Vault")
+		} else if len(settings) == 1 {
+			s := settings[0].(map[string]interface{})
+			var events []*string
+			for _, id := range s["events"].(*schema.Set).List() {
+				events = append(events, aws.String(id.(string)))
+			}
+
+			_, err := glacierconn.SetVaultNotifications(&glacier.SetVaultNotificationsInput{
+				VaultName: aws.String(d.Id()),
+				VaultNotificationConfig: &glacier.VaultNotificationConfig{
+					SNSTopic: aws.String(s["sns_topic"].(string)),
+					Events:   events,
+				},
+			})
+
+			if err != nil {
+				return fmt.Errorf("Error Updating Glacier Vault Notifications: %s", err.Error())
+			}
+		}
+	} else {
+		_, err := glacierconn.DeleteVaultNotifications(&glacier.DeleteVaultNotificationsInput{
+			VaultName: aws.String(d.Id()),
+		})
+
+		if err != nil {
+			return fmt.Errorf("Error Removing Glacier Vault Notifications: %s", err.Error())
+		}
+	}
+
+	return nil
+}
+
+func resourceAwsGlacierVaultPolicyUpdate(glacierconn *glacier.Glacier, d *schema.ResourceData) error {
+	vaultName := d.Id()
+	policyContents := d.Get("access_policy").(string)
+
+	policy := &glacier.VaultAccessPolicy{
+		Policy: aws.String(policyContents),
+	}
+
+	if policyContents != "" {
+		log.Printf("[DEBUG] Glacier Vault: %s, put policy", vaultName)
+
+		_, err := glacierconn.SetVaultAccessPolicy(&glacier.SetVaultAccessPolicyInput{
+			VaultName: aws.String(d.Id()),
+			Policy:    policy,
+		})
+
+		if err != nil {
+			return fmt.Errorf("Error putting Glacier Vault policy: %s", err.Error())
+		}
+	} else {
+		log.Printf("[DEBUG] Glacier Vault: %s, delete policy: %s", vaultName, policy)
+		_, err := glacierconn.DeleteVaultAccessPolicy(&glacier.DeleteVaultAccessPolicyInput{
+			VaultName: aws.String(d.Id()),
+		})
+
+		if err != nil {
+			return fmt.Errorf("Error deleting Glacier Vault policy: %s", err.Error())
+		}
+	}
+
+	return nil
+}
+
+func setGlacierVaultTags(conn *glacier.Glacier, d *schema.ResourceData) error {
+	if d.HasChange("tags") {
+		oraw, nraw := d.GetChange("tags")
+		o := oraw.(map[string]interface{})
+		n := nraw.(map[string]interface{})
+		create, remove := diffGlacierVaultTags(mapGlacierVaultTags(o), mapGlacierVaultTags(n))
+
+		// Remove first, then create: "remove" holds keys that were deleted
+		// or whose value changed, and "create" re-applies the full new set.
+		if len(remove) > 0 {
+			tagsToRemove := &glacier.RemoveTagsFromVaultInput{
+				VaultName: aws.String(d.Id()),
+				TagKeys:   glacierStringsToPointyString(remove),
+			}
+
+			log.Printf("[DEBUG] Removing tags from %s", d.Id())
+			_, err := conn.RemoveTagsFromVault(tagsToRemove)
+			if err != nil {
+				return err
+			}
+		}
+		if len(create) > 0 {
+			tagsToAdd := &glacier.AddTagsToVaultInput{
+				VaultName: aws.String(d.Id()),
+				Tags:      glacierVaultTagsFromMap(create),
+			}
+
+			log.Printf("[DEBUG] Creating tags for %s", d.Id())
+			_, err := conn.AddTagsToVault(tagsToAdd)
+			if err != nil {
+				return err
+			}
+		}
+	}
+
+	return nil
+}
+
+func mapGlacierVaultTags(m map[string]interface{}) map[string]string {
+	results := make(map[string]string)
+	for k, v := range m {
+		results[k] = v.(string)
+	}
+
+	return results
+}
+
+func diffGlacierVaultTags(oldTags, newTags map[string]string) (map[string]string, []string) {
+	create := make(map[string]string)
+	for k, v := range newTags {
+		create[k] = v
+	}
+
+	// Build the list of what to remove
+	var remove []string
+	for k, v := range oldTags {
+		old, ok := create[k]
+		if !ok || old != v {
+			// Delete it!
+			remove = append(remove, k)
+		}
+	}
+
+	return create, remove
+}
+
+func getGlacierVaultTags(glacierconn *glacier.Glacier, vaultName string) (map[string]string, error) {
+	request := &glacier.ListTagsForVaultInput{
+		VaultName: aws.String(vaultName),
+	}
+
+	log.Printf("[DEBUG] Getting the tags for %s", vaultName)
+	response, err := glacierconn.ListTagsForVault(request)
+	if awserr, ok := err.(awserr.Error); ok && awserr.Code() == "NoSuchTagSet" {
+		return map[string]string{}, nil
+	} else if err != nil {
+		return nil, err
+	}
+
+	return glacierVaultTagsToMap(response.Tags), nil
+}
+
+func glacierVaultTagsToMap(responseTags map[string]*string) map[string]string {
+	results := make(map[string]string, len(responseTags))
+	for k, v := range responseTags {
+		results[k] = *v
+	}
+
+	return results
+}
+
+func glacierVaultTagsFromMap(responseTags map[string]string) map[string]*string {
+	results := make(map[string]*string, len(responseTags))
+	for k, v := range responseTags {
+		results[k] = aws.String(v)
+	}
+
+	return results
+}
+
+func glacierStringsToPointyString(s []string) []*string {
+	results := make([]*string, len(s))
+	for i, x := range s {
+		results[i] = aws.String(x)
+	}
+
+	return results
+}
+
+func glacierPointersToStringList(pointers []*string) []interface{} {
+	list := make([]interface{}, len(pointers))
+	for i, v := range pointers {
+		list[i] = *v
+	}
+	return list
+}
+
+func getGlacierVaultNotification(glacierconn *glacier.Glacier, vaultName string) ([]map[string]interface{}, error) {
+	request := &glacier.GetVaultNotificationsInput{
+		VaultName: aws.String(vaultName),
+	}
+
+	response, err := glacierconn.GetVaultNotifications(request)
+	if err != nil {
+		return nil, fmt.Errorf("Error reading Glacier Vault Notifications: %s", err.Error())
+	}
+
+	notifications := make(map[string]interface{})
+
+	log.Print("[DEBUG] Flattening Glacier Vault Notifications")
+
+	notifications["events"] = schema.NewSet(schema.HashString, glacierPointersToStringList(response.VaultNotificationConfig.Events))
+	notifications["sns_topic"] = 
*response.VaultNotificationConfig.SNSTopic + + return []map[string]interface{}{notifications}, nil +} diff --git a/builtin/providers/aws/resource_aws_glacier_vault_test.go b/builtin/providers/aws/resource_aws_glacier_vault_test.go new file mode 100644 index 000000000..4f5c26bf2 --- /dev/null +++ b/builtin/providers/aws/resource_aws_glacier_vault_test.go @@ -0,0 +1,227 @@ +package aws + +import ( + "fmt" + "reflect" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/glacier" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSGlacierVault_basic(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckGlacierVaultDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccGlacierVault_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckGlacierVaultExists("aws_glacier_vault.test"), + ), + }, + }, + }) +} + +func TestAccAWSGlacierVault_full(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckGlacierVaultDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccGlacierVault_full, + Check: resource.ComposeTestCheckFunc( + testAccCheckGlacierVaultExists("aws_glacier_vault.full"), + ), + }, + }, + }) +} + +func TestAccAWSGlacierVault_RemoveNotifications(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckGlacierVaultDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccGlacierVault_full, + Check: resource.ComposeTestCheckFunc( + testAccCheckGlacierVaultExists("aws_glacier_vault.full"), + ), + }, + resource.TestStep{ + Config: testAccGlacierVault_withoutNotification, + Check: resource.ComposeTestCheckFunc( + testAccCheckGlacierVaultExists("aws_glacier_vault.full"), + testAccCheckVaultNotificationsMissing("aws_glacier_vault.full"), + ), + }, + }, + }) +} + +func TestDiffGlacierVaultTags(t *testing.T) { + cases := []struct { + Old, New map[string]interface{} + Create map[string]string + Remove []string + }{ + // Basic add/remove + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "bar": "baz", + }, + Create: map[string]string{ + "bar": "baz", + }, + Remove: []string{ + "foo", + }, + }, + + // Modify + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "foo": "baz", + }, + Create: map[string]string{ + "foo": "baz", + }, + Remove: []string{ + "foo", + }, + }, + } + + for i, tc := range cases { + c, r := diffGlacierVaultTags(mapGlacierVaultTags(tc.Old), mapGlacierVaultTags(tc.New)) + + if !reflect.DeepEqual(c, tc.Create) { + t.Fatalf("%d: bad create: %#v", i, c) + } + if !reflect.DeepEqual(r, tc.Remove) { + t.Fatalf("%d: bad remove: %#v", i, r) + } + } +} + +func testAccCheckGlacierVaultExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + glacierconn := testAccProvider.Meta().(*AWSClient).glacierconn + out, err := glacierconn.DescribeVault(&glacier.DescribeVaultInput{ + VaultName: aws.String(rs.Primary.ID), + }) + + if 
err != nil { + return err + } + + if out.VaultARN == nil { + return fmt.Errorf("No Glacier Vault Found") + } + + if *out.VaultName != rs.Primary.ID { + return fmt.Errorf("Glacier Vault Mismatch - existing: %q, state: %q", + *out.VaultName, rs.Primary.ID) + } + + return nil + } +} + +func testAccCheckVaultNotificationsMissing(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + glacierconn := testAccProvider.Meta().(*AWSClient).glacierconn + out, err := glacierconn.GetVaultNotifications(&glacier.GetVaultNotificationsInput{ + VaultName: aws.String(rs.Primary.ID), + }) + + if awserr, ok := err.(awserr.Error); ok && awserr.Code() != "ResourceNotFoundException" { + return fmt.Errorf("Expected ResourceNotFoundException for Vault %s Notification Block but got %s", rs.Primary.ID, awserr.Code()) + } + + if out.VaultNotificationConfig != nil { + return fmt.Errorf("Vault Notification Block has been found for %s", rs.Primary.ID) + } + + return nil + } + +} + +func testAccCheckGlacierVaultDestroy(s *terraform.State) error { + if len(s.RootModule().Resources) > 0 { + return fmt.Errorf("Expected all resources to be gone, but found: %#v", + s.RootModule().Resources) + } + + return nil +} + +const testAccGlacierVault_basic = ` +resource "aws_glacier_vault" "test" { + name = "my_test_vault" +} +` + +const testAccGlacierVault_full = ` +resource "aws_sns_topic" "aws_sns_topic" { + name = "glacier-sns-topic" +} + +resource "aws_glacier_vault" "full" { + name = "my_test_vault" + notification { + sns_topic = "${aws_sns_topic.aws_sns_topic.arn}" + events = ["ArchiveRetrievalCompleted","InventoryRetrievalCompleted"] + } + tags { + Test="Test1" + } +} +` + +const testAccGlacierVault_withoutNotification = ` +resource "aws_sns_topic" "aws_sns_topic" { + name = "glacier-sns-topic" +} + +resource "aws_glacier_vault" "full" { + name = "my_test_vault" + tags { + Test="Test1" + } +} +` diff --git a/builtin/providers/aws/resource_aws_iam_policy_attachment_test.go b/builtin/providers/aws/resource_aws_iam_policy_attachment_test.go index a68d956a5..11e50b0d9 100644 --- a/builtin/providers/aws/resource_aws_iam_policy_attachment_test.go +++ b/builtin/providers/aws/resource_aws_iam_policy_attachment_test.go @@ -102,7 +102,7 @@ func testAccCheckAWSPolicyAttachmentAttributes(users []string, roles []string, g } } if uc != 0 || rc != 0 || gc != 0 { - return fmt.Errorf("Error: Number of attached users, roles, or groups was incorrect:\n expected %d users and found %d\nexpected %d roles and found %d\nexpected %d groups and found %d", len(users), (len(users) - uc), len(roles), (len(roles) - rc), len(groups), (len(groups) - gc)) + return fmt.Errorf("Error: Number of attached users, roles, or groups was incorrect:\n expected %d users and found %d\nexpected %d roles and found %d\nexpected %d groups and found %d", len(users), len(users)-uc, len(roles), len(roles)-rc, len(groups), len(groups)-gc) } return nil } diff --git a/builtin/providers/aws/resource_aws_iam_saml_provider.go b/builtin/providers/aws/resource_aws_iam_saml_provider.go new file mode 100644 index 000000000..6a166d711 --- /dev/null +++ b/builtin/providers/aws/resource_aws_iam_saml_provider.go @@ -0,0 +1,101 @@ +package aws + +import ( + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/iam" + + "github.com/hashicorp/terraform/helper/schema" +) + 
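+// resourceAwsIamSamlProvider manages an IAM SAML identity provider. The ARN returned on create doubles as the Terraform resource ID; changing the name forces a new resource, while the metadata document can be updated in place.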
+func resourceAwsIamSamlProvider() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsIamSamlProviderCreate, + Read: resourceAwsIamSamlProviderRead, + Update: resourceAwsIamSamlProviderUpdate, + Delete: resourceAwsIamSamlProviderDelete, + + Schema: map[string]*schema.Schema{ + "arn": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + "valid_until": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "saml_metadata_document": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + }, + } +} + +func resourceAwsIamSamlProviderCreate(d *schema.ResourceData, meta interface{}) error { + iamconn := meta.(*AWSClient).iamconn + + input := &iam.CreateSAMLProviderInput{ + Name: aws.String(d.Get("name").(string)), + SAMLMetadataDocument: aws.String(d.Get("saml_metadata_document").(string)), + } + + out, err := iamconn.CreateSAMLProvider(input) + if err != nil { + return err + } + + d.SetId(*out.SAMLProviderArn) + + return resourceAwsIamSamlProviderRead(d, meta) +} + +func resourceAwsIamSamlProviderRead(d *schema.ResourceData, meta interface{}) error { + iamconn := meta.(*AWSClient).iamconn + + input := &iam.GetSAMLProviderInput{ + SAMLProviderArn: aws.String(d.Id()), + } + out, err := iamconn.GetSAMLProvider(input) + if err != nil { + return err + } + + validUntil := out.ValidUntil.Format(time.RFC1123) + d.Set("valid_until", validUntil) + d.Set("saml_metadata_document", *out.SAMLMetadataDocument) + + return nil +} + +func resourceAwsIamSamlProviderUpdate(d *schema.ResourceData, meta interface{}) error { + iamconn := meta.(*AWSClient).iamconn + + input := &iam.UpdateSAMLProviderInput{ + SAMLProviderArn: aws.String(d.Id()), + SAMLMetadataDocument: aws.String(d.Get("saml_metadata_document").(string)), + } + _, err := iamconn.UpdateSAMLProvider(input) + if err != nil { + return err + } + + return resourceAwsIamSamlProviderRead(d, meta) +} + +func resourceAwsIamSamlProviderDelete(d *schema.ResourceData, meta interface{}) error { + iamconn := meta.(*AWSClient).iamconn + + input := &iam.DeleteSAMLProviderInput{ + SAMLProviderArn: aws.String(d.Id()), + } + _, err := iamconn.DeleteSAMLProvider(input) + + return err +} diff --git a/builtin/providers/aws/resource_aws_iam_saml_provider_test.go b/builtin/providers/aws/resource_aws_iam_saml_provider_test.go new file mode 100644 index 000000000..63ed39588 --- /dev/null +++ b/builtin/providers/aws/resource_aws_iam_saml_provider_test.go @@ -0,0 +1,79 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/iam" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSIAMSamlProvider_basic(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckIAMSamlProviderDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccIAMSamlProviderConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckIAMSamlProvider("aws_iam_saml_provider.salesforce"), + ), + }, + resource.TestStep{ + Config: testAccIAMSamlProviderConfigUpdate, + Check: resource.ComposeTestCheckFunc( + testAccCheckIAMSamlProvider("aws_iam_saml_provider.salesforce"), + ), + }, + }, + }) +} + +func testAccCheckIAMSamlProviderDestroy(s *terraform.State) error { + if len(s.RootModule().Resources) > 0 { + return 
fmt.Errorf("Expected all resources to be gone, but found: %#v", s.RootModule().Resources) + } + + return nil +} + +func testAccCheckIAMSamlProvider(id string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[id] + if !ok { + return fmt.Errorf("Not Found: %s", id) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + iamconn := testAccProvider.Meta().(*AWSClient).iamconn + _, err := iamconn.GetSAMLProvider(&iam.GetSAMLProviderInput{ + SAMLProviderArn: aws.String(rs.Primary.ID), + }) + + if err != nil { + return err + } + + return nil + } +} + +const testAccIAMSamlProviderConfig = ` +resource "aws_iam_saml_provider" "salesforce" { + name = "tf-salesforce-test" + saml_metadata_document = "${file("./test-fixtures/saml-metadata.xml")}" +} +` + +const testAccIAMSamlProviderConfigUpdate = ` +resource "aws_iam_saml_provider" "salesforce" { + name = "tf-salesforce-test" + saml_metadata_document = "${file("./test-fixtures/saml-metadata-modified.xml")}" +} +` diff --git a/builtin/providers/aws/resource_aws_instance.go b/builtin/providers/aws/resource_aws_instance.go index 093b6ae86..d096a45d6 100644 --- a/builtin/providers/aws/resource_aws_instance.go +++ b/builtin/providers/aws/resource_aws_instance.go @@ -414,11 +414,6 @@ func resourceAwsInstanceCreate(d *schema.ResourceData, meta interface{}) error { }) } - // Set our attributes - if err := resourceAwsInstanceRead(d, meta); err != nil { - return err - } - // Update if we need to return resourceAwsInstanceUpdate(d, meta) } @@ -548,16 +543,23 @@ func resourceAwsInstanceUpdate(d *schema.ResourceData, meta interface{}) error { } // SourceDestCheck can only be set on VPC instances - if d.Get("subnet_id").(string) != "" { - log.Printf("[INFO] Modifying instance %s", d.Id()) - _, err := conn.ModifyInstanceAttribute(&ec2.ModifyInstanceAttributeInput{ - InstanceId: aws.String(d.Id()), - SourceDestCheck: &ec2.AttributeBooleanValue{ - Value: aws.Bool(d.Get("source_dest_check").(bool)), - }, - }) - if err != nil { - return err + // AWS will return an error of InvalidParameterCombination if we attempt + // to modify the source_dest_check of an instance in EC2 Classic + log.Printf("[INFO] Modifying instance %s", d.Id()) + _, err := conn.ModifyInstanceAttribute(&ec2.ModifyInstanceAttributeInput{ + InstanceId: aws.String(d.Id()), + SourceDestCheck: &ec2.AttributeBooleanValue{ + Value: aws.Bool(d.Get("source_dest_check").(bool)), + }, + }) + if err != nil { + if ec2err, ok := err.(awserr.Error); ok { + // Toloerate InvalidParameterCombination error in Classic, otherwise + // return the error + if "InvalidParameterCombination" != ec2err.Code() { + return err + } + log.Printf("[WARN] Attempted to modify SourceDestCheck on non VPC instance: %s", ec2err.Message()) } } @@ -693,7 +695,7 @@ func readBlockDevicesFromInstance(instance *ec2.Instance, conn *ec2.EC2) (map[st instanceBlockDevices := make(map[string]*ec2.InstanceBlockDeviceMapping) for _, bd := range instance.BlockDeviceMappings { if bd.Ebs != nil { - instanceBlockDevices[*(bd.Ebs.VolumeId)] = bd + instanceBlockDevices[*bd.Ebs.VolumeId] = bd } } @@ -753,9 +755,9 @@ func readBlockDevicesFromInstance(instance *ec2.Instance, conn *ec2.EC2) (map[st } func blockDeviceIsRoot(bd *ec2.InstanceBlockDeviceMapping, instance *ec2.Instance) bool { - return (bd.DeviceName != nil && + return bd.DeviceName != nil && instance.RootDeviceName != nil && - *bd.DeviceName == *instance.RootDeviceName) + *bd.DeviceName == *instance.RootDeviceName } func 
fetchRootDeviceName(ami string, conn *ec2.EC2) (*string, error) { diff --git a/builtin/providers/aws/resource_aws_instance_test.go b/builtin/providers/aws/resource_aws_instance_test.go index 258320d54..3224f9b5e 100644 --- a/builtin/providers/aws/resource_aws_instance_test.go +++ b/builtin/providers/aws/resource_aws_instance_test.go @@ -190,6 +190,9 @@ func TestAccAWSInstance_sourceDestCheck(t *testing.T) { testCheck := func(enabled bool) resource.TestCheckFunc { return func(*terraform.State) error { + if v.SourceDestCheck == nil { + return fmt.Errorf("bad source_dest_check: got nil") + } if *v.SourceDestCheck != enabled { return fmt.Errorf("bad source_dest_check: %#v", *v.SourceDestCheck) } diff --git a/builtin/providers/aws/resource_aws_key_pair.go b/builtin/providers/aws/resource_aws_key_pair.go index e747fbfc5..0d6c51fcf 100644 --- a/builtin/providers/aws/resource_aws_key_pair.go +++ b/builtin/providers/aws/resource_aws_key_pair.go @@ -2,6 +2,7 @@ package aws import ( "fmt" + "strings" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" @@ -18,6 +19,9 @@ func resourceAwsKeyPair() *schema.Resource { Update: nil, Delete: resourceAwsKeyPairDelete, + SchemaVersion: 1, + MigrateState: resourceAwsKeyPairMigrateState, + Schema: map[string]*schema.Schema{ "key_name": &schema.Schema{ Type: schema.TypeString, @@ -29,6 +33,14 @@ func resourceAwsKeyPair() *schema.Resource { Type: schema.TypeString, Required: true, ForceNew: true, + StateFunc: func(v interface{}) string { + switch v.(type) { + case string: + return strings.TrimSpace(v.(string)) + default: + return "" + } + }, }, "fingerprint": &schema.Schema{ Type: schema.TypeString, @@ -45,6 +57,7 @@ func resourceAwsKeyPairCreate(d *schema.ResourceData, meta interface{}) error { if keyName == "" { keyName = resource.UniqueId() } + publicKey := d.Get("public_key").(string) req := &ec2.ImportKeyPairInput{ KeyName: aws.String(keyName), diff --git a/builtin/providers/aws/resource_aws_key_pair_migrate.go b/builtin/providers/aws/resource_aws_key_pair_migrate.go new file mode 100644 index 000000000..0d56123aa --- /dev/null +++ b/builtin/providers/aws/resource_aws_key_pair_migrate.go @@ -0,0 +1,38 @@ +package aws + +import ( + "fmt" + "log" + "strings" + + "github.com/hashicorp/terraform/terraform" +) + +func resourceAwsKeyPairMigrateState( + v int, is *terraform.InstanceState, meta interface{}) (*terraform.InstanceState, error) { + switch v { + case 0: + log.Println("[INFO] Found AWS Key Pair State v0; migrating to v1") + return migrateKeyPairStateV0toV1(is) + default: + return is, fmt.Errorf("Unexpected schema version: %d", v) + } + + return is, nil +} + +func migrateKeyPairStateV0toV1(is *terraform.InstanceState) (*terraform.InstanceState, error) { + if is.Empty() { + log.Println("[DEBUG] Empty InstanceState; nothing to migrate.") + return is, nil + } + + log.Printf("[DEBUG] Attributes before migration: %#v", is.Attributes) + + // replace public_key with a stripped version, removing `\n` from the end + // see https://github.com/hashicorp/terraform/issues/3455 + is.Attributes["public_key"] = strings.TrimSpace(is.Attributes["public_key"]) + + log.Printf("[DEBUG] Attributes after migration: %#v", is.Attributes) + return is, nil +} diff --git a/builtin/providers/aws/resource_aws_key_pair_migrate_test.go b/builtin/providers/aws/resource_aws_key_pair_migrate_test.go new file mode 100644 index 000000000..825d3c40f --- /dev/null +++ b/builtin/providers/aws/resource_aws_key_pair_migrate_test.go @@ -0,0 +1,55 @@ 
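+// Tests for the aws_key_pair v0 -> v1 state migration, which trims trailing whitespace (e.g. the newline appended by file() interpolations) from public_key; see GH-3455.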
+package aws + +import ( + "testing" + + "github.com/hashicorp/terraform/terraform" +) + +func TestAWSKeyPairMigrateState(t *testing.T) { + cases := map[string]struct { + StateVersion int + ID string + Attributes map[string]string + Expected string + Meta interface{} + }{ + "v0_1": { + StateVersion: 0, + ID: "tf-testing-file", + Attributes: map[string]string{ + "fingerprint": "1d:cd:46:31:a9:4a:e0:06:8a:a1:22:cb:3b:bf:8e:42", + "key_name": "tf-testing-file", + "public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4LBtwcFsQAYWw1cnOwRTZCJCzPSzq0dl3== ctshryock", + }, + Expected: "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4LBtwcFsQAYWw1cnOwRTZCJCzPSzq0dl3== ctshryock", + }, + "v0_2": { + StateVersion: 0, + ID: "tf-testing-file", + Attributes: map[string]string{ + "fingerprint": "1d:cd:46:31:a9:4a:e0:06:8a:a1:22:cb:3b:bf:8e:42", + "key_name": "tf-testing-file", + "public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4LBtwcFsQAYWw1cnOwRTZCJCzPSzq0dl3== ctshryock\n", + }, + Expected: "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4LBtwcFsQAYWw1cnOwRTZCJCzPSzq0dl3== ctshryock", + }, + } + + for tn, tc := range cases { + is := &terraform.InstanceState{ + ID: tc.ID, + Attributes: tc.Attributes, + } + is, err := resourceAwsKeyPairMigrateState( + tc.StateVersion, is, tc.Meta) + + if err != nil { + t.Fatalf("bad: %s, err: %#v", tn, err) + } + + if is.Attributes["public_key"] != tc.Expected { + t.Fatalf("Bad public_key migration: %s\n\n expected: %s", is.Attributes["public_key"], tc.Expected) + } + } +} diff --git a/builtin/providers/aws/resource_aws_kinesis_stream.go b/builtin/providers/aws/resource_aws_kinesis_stream.go index 45d685c1d..1abb9dbc3 100644 --- a/builtin/providers/aws/resource_aws_kinesis_stream.go +++ b/builtin/providers/aws/resource_aws_kinesis_stream.go @@ -2,6 +2,7 @@ package aws import ( "fmt" + "log" "time" "github.com/aws/aws-sdk-go/aws" @@ -15,6 +16,7 @@ func resourceAwsKinesisStream() *schema.Resource { return &schema.Resource{ Create: resourceAwsKinesisStreamCreate, Read: resourceAwsKinesisStreamRead, + Update: resourceAwsKinesisStreamUpdate, Delete: resourceAwsKinesisStreamDelete, Schema: map[string]*schema.Schema{ @@ -35,6 +37,7 @@ func resourceAwsKinesisStream() *schema.Resource { Optional: true, Computed: true, }, + "tags": tagsSchema(), }, } } @@ -75,13 +78,28 @@ func resourceAwsKinesisStreamCreate(d *schema.ResourceData, meta interface{}) er d.SetId(*s.StreamARN) d.Set("arn", s.StreamARN) - return nil + return resourceAwsKinesisStreamUpdate(d, meta) +} + +func resourceAwsKinesisStreamUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).kinesisconn + + d.Partial(true) + if err := setTagsKinesis(conn, d); err != nil { + return err + } + + d.SetPartial("tags") + d.Partial(false) + + return resourceAwsKinesisStreamRead(d, meta) } func resourceAwsKinesisStreamRead(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).kinesisconn + sn := d.Get("name").(string) describeOpts := &kinesis.DescribeStreamInput{ - StreamName: aws.String(d.Get("name").(string)), + StreamName: aws.String(sn), } resp, err := conn.DescribeStream(describeOpts) if err != nil { @@ -99,6 +117,17 @@ func resourceAwsKinesisStreamRead(d *schema.ResourceData, meta interface{}) erro d.Set("arn", *s.StreamARN) d.Set("shard_count", len(s.Shards)) + // set tags + describeTagsOpts := &kinesis.ListTagsForStreamInput{ + StreamName: aws.String(sn), + } + tagsResp, err := conn.ListTagsForStream(describeTagsOpts) + if err != nil { + log.Printf("[DEBUG] Error retrieving tags for Stream: %s. 
%s", sn, err) + } else { + d.Set("tags", tagsToMapKinesis(tagsResp.Tags)) + } + return nil } diff --git a/builtin/providers/aws/resource_aws_kinesis_stream_test.go b/builtin/providers/aws/resource_aws_kinesis_stream_test.go index c9580ad22..82c0b64fa 100644 --- a/builtin/providers/aws/resource_aws_kinesis_stream_test.go +++ b/builtin/providers/aws/resource_aws_kinesis_stream_test.go @@ -107,5 +107,8 @@ var testAccKinesisStreamConfig = fmt.Sprintf(` resource "aws_kinesis_stream" "test_stream" { name = "terraform-kinesis-test-%d" shard_count = 2 + tags { + Name = "tf-test" + } } `, rand.New(rand.NewSource(time.Now().UnixNano())).Int()) diff --git a/builtin/providers/aws/resource_aws_lb_cookie_stickiness_policy.go b/builtin/providers/aws/resource_aws_lb_cookie_stickiness_policy.go index 50c6186de..bed01aadd 100644 --- a/builtin/providers/aws/resource_aws_lb_cookie_stickiness_policy.go +++ b/builtin/providers/aws/resource_aws_lb_cookie_stickiness_policy.go @@ -15,8 +15,6 @@ func resourceAwsLBCookieStickinessPolicy() *schema.Resource { // There is no concept of "updating" an LB Stickiness policy in // the AWS API. Create: resourceAwsLBCookieStickinessPolicyCreate, - Update: resourceAwsLBCookieStickinessPolicyCreate, - Read: resourceAwsLBCookieStickinessPolicyRead, Delete: resourceAwsLBCookieStickinessPolicyDelete, diff --git a/builtin/providers/aws/resource_aws_opsworks_custom_layer.go b/builtin/providers/aws/resource_aws_opsworks_custom_layer.go new file mode 100644 index 000000000..59de60db6 --- /dev/null +++ b/builtin/providers/aws/resource_aws_opsworks_custom_layer.go @@ -0,0 +1,17 @@ +package aws + +import ( + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsOpsworksCustomLayer() *schema.Resource { + layerType := &opsworksLayerType{ + TypeName: "custom", + CustomShortName: true, + + // The "custom" layer type has no additional attributes + Attributes: map[string]*opsworksLayerTypeAttribute{}, + } + + return layerType.SchemaResource() +} diff --git a/builtin/providers/aws/resource_aws_opsworks_custom_layer_test.go b/builtin/providers/aws/resource_aws_opsworks_custom_layer_test.go new file mode 100644 index 000000000..14a65b106 --- /dev/null +++ b/builtin/providers/aws/resource_aws_opsworks_custom_layer_test.go @@ -0,0 +1,234 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +// These tests assume the existence of predefined Opsworks IAM roles named `aws-opsworks-ec2-role` +// and `aws-opsworks-service-role`. 
+ +func TestAccAwsOpsworksCustomLayer(t *testing.T) { + opsiam := testAccAwsOpsworksStackIam{} + testAccAwsOpsworksStackPopulateIam(t, &opsiam) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsOpsworksCustomLayerDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: fmt.Sprintf(testAccAwsOpsworksCustomLayerConfigCreate, opsiam.ServiceRoleArn, opsiam.InstanceProfileArn), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "name", "tf-ops-acc-custom-layer", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "auto_assign_elastic_ips", "false", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "auto_healing", "true", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "drain_elb_on_shutdown", "true", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "instance_shutdown_timeout", "300", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "custom_security_group_ids.#", "2", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "system_packages.#", "2", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "system_packages.1368285564", "git", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "system_packages.2937857443", "golang", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.#", "1", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.3575749636.type", "gp2", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.3575749636.number_of_disks", "2", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.3575749636.mount_point", "/home", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.3575749636.size", "100", + ), + ), + }, + resource.TestStep{ + Config: fmt.Sprintf(testAccAwsOpsworksCustomLayerConfigUpdate, opsiam.ServiceRoleArn, opsiam.InstanceProfileArn), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "name", "tf-ops-acc-custom-layer", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "drain_elb_on_shutdown", "false", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "instance_shutdown_timeout", "120", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "custom_security_group_ids.#", "3", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "system_packages.#", "3", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "system_packages.1368285564", "git", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "system_packages.2937857443", "golang", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "system_packages.4101929740", "subversion", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.#", "2", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.3575749636.type", "gp2", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.3575749636.number_of_disks", "2", + ), + resource.TestCheckResourceAttr( + 
"aws_opsworks_custom_layer.tf-acc", "ebs_volume.3575749636.mount_point", "/home", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.3575749636.size", "100", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.1266957920.type", "io1", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.1266957920.number_of_disks", "4", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.1266957920.mount_point", "/var", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.1266957920.size", "100", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.1266957920.raid_level", "1", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.1266957920.iops", "3000", + ), + ), + }, + }, + }) +} + +func testAccCheckAwsOpsworksCustomLayerDestroy(s *terraform.State) error { + if len(s.RootModule().Resources) > 0 { + return fmt.Errorf("Expected all resources to be gone, but found: %#v", s.RootModule().Resources) + } + + return nil +} + +var testAccAwsOpsworksCustomLayerSecurityGroups = ` +resource "aws_security_group" "tf-ops-acc-layer1" { + name = "tf-ops-acc-layer1" + ingress { + from_port = 8 + to_port = -1 + protocol = "icmp" + cidr_blocks = ["0.0.0.0/0"] + } +} +resource "aws_security_group" "tf-ops-acc-layer2" { + name = "tf-ops-acc-layer2" + ingress { + from_port = 8 + to_port = -1 + protocol = "icmp" + cidr_blocks = ["0.0.0.0/0"] + } +} +` + +var testAccAwsOpsworksCustomLayerConfigCreate = testAccAwsOpsworksStackConfigNoVpcCreate + testAccAwsOpsworksCustomLayerSecurityGroups + ` +resource "aws_opsworks_custom_layer" "tf-acc" { + stack_id = "${aws_opsworks_stack.tf-acc.id}" + name = "tf-ops-acc-custom-layer" + short_name = "tf-ops-acc-custom-layer" + auto_assign_public_ips = true + custom_security_group_ids = [ + "${aws_security_group.tf-ops-acc-layer1.id}", + "${aws_security_group.tf-ops-acc-layer2.id}", + ] + drain_elb_on_shutdown = true + instance_shutdown_timeout = 300 + system_packages = [ + "git", + "golang", + ] + ebs_volume { + type = "gp2" + number_of_disks = 2 + mount_point = "/home" + size = 100 + raid_level = 0 + } +} +` + +var testAccAwsOpsworksCustomLayerConfigUpdate = testAccAwsOpsworksStackConfigNoVpcCreate + testAccAwsOpsworksCustomLayerSecurityGroups + ` +resource "aws_security_group" "tf-ops-acc-layer3" { + name = "tf-ops-acc-layer3" + ingress { + from_port = 8 + to_port = -1 + protocol = "icmp" + cidr_blocks = ["0.0.0.0/0"] + } +} +resource "aws_opsworks_custom_layer" "tf-acc" { + stack_id = "${aws_opsworks_stack.tf-acc.id}" + name = "tf-ops-acc-custom-layer" + short_name = "tf-ops-acc-custom-layer" + auto_assign_public_ips = true + custom_security_group_ids = [ + "${aws_security_group.tf-ops-acc-layer1.id}", + "${aws_security_group.tf-ops-acc-layer2.id}", + "${aws_security_group.tf-ops-acc-layer3.id}", + ] + drain_elb_on_shutdown = false + instance_shutdown_timeout = 120 + system_packages = [ + "git", + "golang", + "subversion", + ] + ebs_volume { + type = "gp2" + number_of_disks = 2 + mount_point = "/home" + size = 100 + raid_level = 0 + } + ebs_volume { + type = "io1" + number_of_disks = 4 + mount_point = "/var" + size = 100 + raid_level = 1 + iops = 3000 + } +} +` diff --git a/builtin/providers/aws/resource_aws_opsworks_ganglia_layer.go b/builtin/providers/aws/resource_aws_opsworks_ganglia_layer.go new file mode 100644 index 
000000000..24778501c --- /dev/null +++ b/builtin/providers/aws/resource_aws_opsworks_ganglia_layer.go @@ -0,0 +1,33 @@ +package aws + +import ( + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsOpsworksGangliaLayer() *schema.Resource { + layerType := &opsworksLayerType{ + TypeName: "monitoring-master", + DefaultLayerName: "Ganglia", + + Attributes: map[string]*opsworksLayerTypeAttribute{ + "url": &opsworksLayerTypeAttribute{ + AttrName: "GangliaUrl", + Type: schema.TypeString, + Default: "/ganglia", + }, + "username": &opsworksLayerTypeAttribute{ + AttrName: "GangliaUser", + Type: schema.TypeString, + Default: "opsworks", + }, + "password": &opsworksLayerTypeAttribute{ + AttrName: "GangliaPassword", + Type: schema.TypeString, + Required: true, + WriteOnly: true, + }, + }, + } + + return layerType.SchemaResource() +} diff --git a/builtin/providers/aws/resource_aws_opsworks_haproxy_layer.go b/builtin/providers/aws/resource_aws_opsworks_haproxy_layer.go new file mode 100644 index 000000000..2b05dce05 --- /dev/null +++ b/builtin/providers/aws/resource_aws_opsworks_haproxy_layer.go @@ -0,0 +1,48 @@ +package aws + +import ( + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsOpsworksHaproxyLayer() *schema.Resource { + layerType := &opsworksLayerType{ + TypeName: "lb", + DefaultLayerName: "HAProxy", + + Attributes: map[string]*opsworksLayerTypeAttribute{ + "stats_enabled": &opsworksLayerTypeAttribute{ + AttrName: "EnableHaproxyStats", + Type: schema.TypeBool, + Default: true, + }, + "stats_url": &opsworksLayerTypeAttribute{ + AttrName: "HaproxyStatsUrl", + Type: schema.TypeString, + Default: "/haproxy?stats", + }, + "stats_user": &opsworksLayerTypeAttribute{ + AttrName: "HaproxyStatsUser", + Type: schema.TypeString, + Default: "opsworks", + }, + "stats_password": &opsworksLayerTypeAttribute{ + AttrName: "HaproxyStatsPassword", + Type: schema.TypeString, + WriteOnly: true, + Required: true, + }, + "healthcheck_url": &opsworksLayerTypeAttribute{ + AttrName: "HaproxyHealthCheckUrl", + Type: schema.TypeString, + Default: "/", + }, + "healthcheck_method": &opsworksLayerTypeAttribute{ + AttrName: "HaproxyHealthCheckMethod", + Type: schema.TypeString, + Default: "OPTIONS", + }, + }, + } + + return layerType.SchemaResource() +} diff --git a/builtin/providers/aws/resource_aws_opsworks_java_app_layer.go b/builtin/providers/aws/resource_aws_opsworks_java_app_layer.go new file mode 100644 index 000000000..2b79fcfad --- /dev/null +++ b/builtin/providers/aws/resource_aws_opsworks_java_app_layer.go @@ -0,0 +1,42 @@ +package aws + +import ( + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsOpsworksJavaAppLayer() *schema.Resource { + layerType := &opsworksLayerType{ + TypeName: "java-app", + DefaultLayerName: "Java App Server", + + Attributes: map[string]*opsworksLayerTypeAttribute{ + "jvm_type": &opsworksLayerTypeAttribute{ + AttrName: "Jvm", + Type: schema.TypeString, + Default: "openjdk", + }, + "jvm_version": &opsworksLayerTypeAttribute{ + AttrName: "JvmVersion", + Type: schema.TypeString, + Default: "7", + }, + "jvm_options": &opsworksLayerTypeAttribute{ + AttrName: "JvmOptions", + Type: schema.TypeString, + Default: "", + }, + "app_server": &opsworksLayerTypeAttribute{ + AttrName: "JavaAppServer", + Type: schema.TypeString, + Default: "tomcat", + }, + "app_server_version": &opsworksLayerTypeAttribute{ + AttrName: "JavaAppServerVersion", + Type: schema.TypeString, + Default: "7", + }, + }, + } + + return layerType.SchemaResource() +} diff --git 
a/builtin/providers/aws/resource_aws_opsworks_memcached_layer.go b/builtin/providers/aws/resource_aws_opsworks_memcached_layer.go new file mode 100644 index 000000000..626b428bb --- /dev/null +++ b/builtin/providers/aws/resource_aws_opsworks_memcached_layer.go @@ -0,0 +1,22 @@ +package aws + +import ( + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsOpsworksMemcachedLayer() *schema.Resource { + layerType := &opsworksLayerType{ + TypeName: "memcached", + DefaultLayerName: "Memcached", + + Attributes: map[string]*opsworksLayerTypeAttribute{ + "allocated_memory": &opsworksLayerTypeAttribute{ + AttrName: "MemcachedMemory", + Type: schema.TypeInt, + Default: 512, + }, + }, + } + + return layerType.SchemaResource() +} diff --git a/builtin/providers/aws/resource_aws_opsworks_mysql_layer.go b/builtin/providers/aws/resource_aws_opsworks_mysql_layer.go new file mode 100644 index 000000000..6ab4476a3 --- /dev/null +++ b/builtin/providers/aws/resource_aws_opsworks_mysql_layer.go @@ -0,0 +1,27 @@ +package aws + +import ( + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsOpsworksMysqlLayer() *schema.Resource { + layerType := &opsworksLayerType{ + TypeName: "db-master", + DefaultLayerName: "MySQL", + + Attributes: map[string]*opsworksLayerTypeAttribute{ + "root_password": &opsworksLayerTypeAttribute{ + AttrName: "MysqlRootPassword", + Type: schema.TypeString, + WriteOnly: true, + }, + "root_password_on_all_instances": &opsworksLayerTypeAttribute{ + AttrName: "MysqlRootPasswordUbiquitous", + Type: schema.TypeBool, + Default: true, + }, + }, + } + + return layerType.SchemaResource() +} diff --git a/builtin/providers/aws/resource_aws_opsworks_nodejs_app_layer.go b/builtin/providers/aws/resource_aws_opsworks_nodejs_app_layer.go new file mode 100644 index 000000000..24f3d0f3e --- /dev/null +++ b/builtin/providers/aws/resource_aws_opsworks_nodejs_app_layer.go @@ -0,0 +1,22 @@ +package aws + +import ( + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsOpsworksNodejsAppLayer() *schema.Resource { + layerType := &opsworksLayerType{ + TypeName: "nodejs-app", + DefaultLayerName: "Node.js App Server", + + Attributes: map[string]*opsworksLayerTypeAttribute{ + "nodejs_version": &opsworksLayerTypeAttribute{ + AttrName: "NodejsVersion", + Type: schema.TypeString, + Default: "0.10.38", + }, + }, + } + + return layerType.SchemaResource() +} diff --git a/builtin/providers/aws/resource_aws_opsworks_php_app_layer.go b/builtin/providers/aws/resource_aws_opsworks_php_app_layer.go new file mode 100644 index 000000000..c3176af5b --- /dev/null +++ b/builtin/providers/aws/resource_aws_opsworks_php_app_layer.go @@ -0,0 +1,16 @@ +package aws + +import ( + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsOpsworksPhpAppLayer() *schema.Resource { + layerType := &opsworksLayerType{ + TypeName: "php-app", + DefaultLayerName: "PHP App Server", + + Attributes: map[string]*opsworksLayerTypeAttribute{}, + } + + return layerType.SchemaResource() +} diff --git a/builtin/providers/aws/resource_aws_opsworks_rails_app_layer.go b/builtin/providers/aws/resource_aws_opsworks_rails_app_layer.go new file mode 100644 index 000000000..54a0084dd --- /dev/null +++ b/builtin/providers/aws/resource_aws_opsworks_rails_app_layer.go @@ -0,0 +1,47 @@ +package aws + +import ( + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsOpsworksRailsAppLayer() *schema.Resource { + layerType := &opsworksLayerType{ + TypeName: "rails-app", + DefaultLayerName: "Rails App 
Server", + + Attributes: map[string]*opsworksLayerTypeAttribute{ + "ruby_version": &opsworksLayerTypeAttribute{ + AttrName: "RubyVersion", + Type: schema.TypeString, + Default: "2.0.0", + }, + "app_server": &opsworksLayerTypeAttribute{ + AttrName: "RailsStack", + Type: schema.TypeString, + Default: "apache_passenger", + }, + "passenger_version": &opsworksLayerTypeAttribute{ + AttrName: "PassengerVersion", + Type: schema.TypeString, + Default: "4.0.46", + }, + "rubygems_version": &opsworksLayerTypeAttribute{ + AttrName: "RubygemsVersion", + Type: schema.TypeString, + Default: "2.2.2", + }, + "manage_bundler": &opsworksLayerTypeAttribute{ + AttrName: "ManageBundler", + Type: schema.TypeBool, + Default: true, + }, + "bundler_version": &opsworksLayerTypeAttribute{ + AttrName: "BundlerVersion", + Type: schema.TypeString, + Default: "1.5.3", + }, + }, + } + + return layerType.SchemaResource() +} diff --git a/builtin/providers/aws/resource_aws_opsworks_stack.go b/builtin/providers/aws/resource_aws_opsworks_stack.go new file mode 100644 index 000000000..8eeda3f05 --- /dev/null +++ b/builtin/providers/aws/resource_aws_opsworks_stack.go @@ -0,0 +1,456 @@ +package aws + +import ( + "fmt" + "log" + "strings" + "time" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/opsworks" +) + +func resourceAwsOpsworksStack() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsOpsworksStackCreate, + Read: resourceAwsOpsworksStackRead, + Update: resourceAwsOpsworksStackUpdate, + Delete: resourceAwsOpsworksStackDelete, + + Schema: map[string]*schema.Schema{ + "id": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "region": &schema.Schema{ + Type: schema.TypeString, + ForceNew: true, + Required: true, + }, + + "service_role_arn": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "default_instance_profile_arn": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "color": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + + "configuration_manager_name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "Chef", + }, + + "configuration_manager_version": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "11.4", + }, + + "manage_berkshelf": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + + "berkshelf_version": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "3.2.0", + }, + + "custom_cookbooks_source": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "type": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "url": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "username": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + + "password": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + + "revision": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + + "ssh_key": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + }, + }, + }, + + "custom_json": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + + "default_availability_zone": &schema.Schema{ + Type: schema.TypeString, + 
Optional: true, + Computed: true, + }, + + "default_os": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "Ubuntu 12.04 LTS", + }, + + "default_root_device_type": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "instance-store", + }, + + "default_ssh_key_name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + + "default_subnet_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + + "hostname_theme": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "Layer_Dependent", + }, + + "use_custom_cookbooks": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + + "use_opsworks_security_groups": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + + "vpc_id": &schema.Schema{ + Type: schema.TypeString, + ForceNew: true, + Optional: true, + }, + }, + } +} + +func resourceAwsOpsworksStackValidate(d *schema.ResourceData) error { + cookbooksSourceCount := d.Get("custom_cookbooks_source.#").(int) + if cookbooksSourceCount > 1 { + return fmt.Errorf("Only one custom_cookbooks_source is permitted") + } + + vpcId := d.Get("vpc_id").(string) + if vpcId != "" { + if d.Get("default_subnet_id").(string) == "" { + return fmt.Errorf("default_subnet_id must be set if vpc_id is set") + } + } else { + if d.Get("default_availability_zone").(string) == "" { + return fmt.Errorf("either vpc_id or default_availability_zone must be set") + } + } + + return nil +} + +func resourceAwsOpsworksStackCustomCookbooksSource(d *schema.ResourceData) *opsworks.Source { + count := d.Get("custom_cookbooks_source.#").(int) + if count == 0 { + return nil + } + + return &opsworks.Source{ + Type: aws.String(d.Get("custom_cookbooks_source.0.type").(string)), + Url: aws.String(d.Get("custom_cookbooks_source.0.url").(string)), + Username: aws.String(d.Get("custom_cookbooks_source.0.username").(string)), + Password: aws.String(d.Get("custom_cookbooks_source.0.password").(string)), + Revision: aws.String(d.Get("custom_cookbooks_source.0.revision").(string)), + SshKey: aws.String(d.Get("custom_cookbooks_source.0.ssh_key").(string)), + } +} + +func resourceAwsOpsworksSetStackCustomCookbooksSource(d *schema.ResourceData, v *opsworks.Source) { + nv := make([]interface{}, 0, 1) + if v != nil { + m := make(map[string]interface{}) + if v.Type != nil { + m["type"] = *v.Type + } + if v.Url != nil { + m["url"] = *v.Url + } + if v.Username != nil { + m["username"] = *v.Username + } + if v.Password != nil { + m["password"] = *v.Password + } + if v.Revision != nil { + m["revision"] = *v.Revision + } + if v.SshKey != nil { + m["ssh_key"] = *v.SshKey + } + nv = append(nv, m) + } + + err := d.Set("custom_cookbooks_source", nv) + if err != nil { + // should never happen + panic(err) + } +} + +func resourceAwsOpsworksStackRead(d *schema.ResourceData, meta interface{}) error { + client := meta.(*AWSClient).opsworksconn + + req := &opsworks.DescribeStacksInput{ + StackIds: []*string{ + aws.String(d.Id()), + }, + } + + log.Printf("[DEBUG] Reading OpsWorks stack: %s", d.Id()) + + resp, err := client.DescribeStacks(req) + if err != nil { + if awserr, ok := err.(awserr.Error); ok { + if awserr.Code() == "ResourceNotFoundException" { + d.SetId("") + return nil + } + } + return err + } + + stack := resp.Stacks[0] + d.Set("name", stack.Name) + d.Set("region", stack.Region) + d.Set("default_instance_profile_arn", stack.DefaultInstanceProfileArn) + d.Set("service_role_arn", stack.ServiceRoleArn) + 
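// Mirror the remaining top-level stack attributes into state so changes made outside Terraform show up on the next refresh. +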
d.Set("default_availability_zone", stack.DefaultAvailabilityZone) + d.Set("default_os", stack.DefaultOs) + d.Set("default_root_device_type", stack.DefaultRootDeviceType) + d.Set("default_ssh_key_name", stack.DefaultSshKeyName) + d.Set("default_subnet_id", stack.DefaultSubnetId) + d.Set("hostname_theme", stack.HostnameTheme) + d.Set("use_custom_cookbooks", stack.UseCustomCookbooks) + d.Set("use_opsworks_security_groups", stack.UseOpsworksSecurityGroups) + d.Set("vpc_id", stack.VpcId) + if color, ok := stack.Attributes["Color"]; ok { + d.Set("color", color) + } + if stack.ConfigurationManager != nil { + d.Set("configuration_manager_name", stack.ConfigurationManager.Name) + d.Set("configuration_manager_version", stack.ConfigurationManager.Version) + } + if stack.ChefConfiguration != nil { + d.Set("berkshelf_version", stack.ChefConfiguration.BerkshelfVersion) + d.Set("manage_berkshelf", stack.ChefConfiguration.ManageBerkshelf) + } + resourceAwsOpsworksSetStackCustomCookbooksSource(d, stack.CustomCookbooksSource) + + return nil +} + +func resourceAwsOpsworksStackCreate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*AWSClient).opsworksconn + + err := resourceAwsOpsworksStackValidate(d) + if err != nil { + return err + } + + req := &opsworks.CreateStackInput{ + DefaultInstanceProfileArn: aws.String(d.Get("default_instance_profile_arn").(string)), + Name: aws.String(d.Get("name").(string)), + Region: aws.String(d.Get("region").(string)), + ServiceRoleArn: aws.String(d.Get("service_role_arn").(string)), + } + inVpc := false + if vpcId, ok := d.GetOk("vpc_id"); ok { + req.VpcId = aws.String(vpcId.(string)) + inVpc = true + } + if defaultSubnetId, ok := d.GetOk("default_subnet_id"); ok { + req.DefaultSubnetId = aws.String(defaultSubnetId.(string)) + } + if defaultAvailabilityZone, ok := d.GetOk("default_availability_zone"); ok { + req.DefaultAvailabilityZone = aws.String(defaultAvailabilityZone.(string)) + } + + log.Printf("[DEBUG] Creating OpsWorks stack: %s", *req.Name) + + var resp *opsworks.CreateStackOutput + err = resource.Retry(20*time.Minute, func() error { + var cerr error + resp, cerr = client.CreateStack(req) + if cerr != nil { + if opserr, ok := cerr.(awserr.Error); ok { + // If Terraform is also managing the service IAM role, + // it may have just been created and not yet be + // propagated. + // AWS doesn't provide a machine-readable code for this + // specific error, so we're forced to do fragile message + // matching. + // The full error we're looking for looks something like + // the following: + // Service Role Arn: [...] is not yet propagated, please try again in a couple of minutes + if opserr.Code() == "ValidationException" && strings.Contains(opserr.Message(), "not yet propagated") { + log.Printf("[INFO] Waiting for service IAM role to propagate") + return cerr + } + } + return resource.RetryError{Err: cerr} + } + return nil + }) + if err != nil { + return err + } + + stackId := *resp.StackId + d.SetId(stackId) + d.Set("id", stackId) + + if inVpc { + // For VPC-based stacks, OpsWorks asynchronously creates some default + // security groups which must exist before layers can be created. + // Unfortunately it doesn't tell us what the ids of these are, so + // we can't actually check for them. Instead, we just wait a nominal + // amount of time for their creation to complete. 
+ log.Print("[INFO] Waiting for OpsWorks built-in security groups to be created") + time.Sleep(30 * time.Second) + } + + return resourceAwsOpsworksStackUpdate(d, meta) +} + +func resourceAwsOpsworksStackUpdate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*AWSClient).opsworksconn + + err := resourceAwsOpsworksStackValidate(d) + if err != nil { + return err + } + + req := &opsworks.UpdateStackInput{ + CustomJson: aws.String(d.Get("custom_json").(string)), + DefaultInstanceProfileArn: aws.String(d.Get("default_instance_profile_arn").(string)), + DefaultRootDeviceType: aws.String(d.Get("default_root_device_type").(string)), + DefaultSshKeyName: aws.String(d.Get("default_ssh_key_name").(string)), + Name: aws.String(d.Get("name").(string)), + ServiceRoleArn: aws.String(d.Get("service_role_arn").(string)), + StackId: aws.String(d.Id()), + UseCustomCookbooks: aws.Bool(d.Get("use_custom_cookbooks").(bool)), + UseOpsworksSecurityGroups: aws.Bool(d.Get("use_opsworks_security_groups").(bool)), + Attributes: make(map[string]*string), + CustomCookbooksSource: resourceAwsOpsworksStackCustomCookbooksSource(d), + } + if v, ok := d.GetOk("default_os"); ok { + req.DefaultOs = aws.String(v.(string)) + } + if v, ok := d.GetOk("default_subnet_id"); ok { + req.DefaultSubnetId = aws.String(v.(string)) + } + if v, ok := d.GetOk("default_availability_zone"); ok { + req.DefaultAvailabilityZone = aws.String(v.(string)) + } + if v, ok := d.GetOk("hostname_theme"); ok { + req.HostnameTheme = aws.String(v.(string)) + } + if v, ok := d.GetOk("color"); ok { + req.Attributes["Color"] = aws.String(v.(string)) + } + req.ChefConfiguration = &opsworks.ChefConfiguration{ + BerkshelfVersion: aws.String(d.Get("berkshelf_version").(string)), + ManageBerkshelf: aws.Bool(d.Get("manage_berkshelf").(bool)), + } + req.ConfigurationManager = &opsworks.StackConfigurationManager{ + Name: aws.String(d.Get("configuration_manager_name").(string)), + Version: aws.String(d.Get("configuration_manager_version").(string)), + } + + log.Printf("[DEBUG] Updating OpsWorks stack: %s", d.Id()) + + _, err = client.UpdateStack(req) + if err != nil { + return err + } + + return resourceAwsOpsworksStackRead(d, meta) +} + +func resourceAwsOpsworksStackDelete(d *schema.ResourceData, meta interface{}) error { + client := meta.(*AWSClient).opsworksconn + + req := &opsworks.DeleteStackInput{ + StackId: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Deleting OpsWorks stack: %s", d.Id()) + + _, err := client.DeleteStack(req) + if err != nil { + return err + } + + // For a stack in a VPC, OpsWorks has created some default security groups + // in the VPC, which it will now delete. + // Unfortunately, the security groups are deleted asynchronously and there + // is no robust way for us to determine when it is done. The VPC itself + // isn't deletable until the security groups are cleaned up, so this could + // make 'terraform destroy' fail if the VPC is also managed and we don't + // wait for the security groups to be deleted. + // There is no robust way to check for this, so we'll just wait a + // nominal amount of time. 
+ if _, ok := d.GetOk("vpc_id"); ok { + log.Print("[INFO] Waiting for Opsworks built-in security groups to be deleted") + time.Sleep(30 * time.Second) + } + + return nil +} diff --git a/builtin/providers/aws/resource_aws_opsworks_stack_test.go b/builtin/providers/aws/resource_aws_opsworks_stack_test.go new file mode 100644 index 000000000..b740b6a20 --- /dev/null +++ b/builtin/providers/aws/resource_aws_opsworks_stack_test.go @@ -0,0 +1,353 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/iam" + "github.com/aws/aws-sdk-go/service/opsworks" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +// These tests assume the existence of predefined Opsworks IAM roles named `aws-opsworks-ec2-role` +// and `aws-opsworks-service-role`. + +/////////////////////////////// +//// Tests for the No-VPC case +/////////////////////////////// + +var testAccAwsOpsworksStackConfigNoVpcCreate = ` +resource "aws_opsworks_stack" "tf-acc" { + name = "tf-opsworks-acc" + region = "us-west-2" + service_role_arn = "%s" + default_instance_profile_arn = "%s" + default_availability_zone = "us-west-2a" + default_os = "Amazon Linux 2014.09" + default_root_device_type = "ebs" + custom_json = "{\"key\": \"value\"}" + configuration_manager_version = "11.10" + use_opsworks_security_groups = false +} +` +var testAccAWSOpsworksStackConfigNoVpcUpdate = ` +resource "aws_opsworks_stack" "tf-acc" { + name = "tf-opsworks-acc" + region = "us-west-2" + service_role_arn = "%s" + default_instance_profile_arn = "%s" + default_availability_zone = "us-west-2a" + default_os = "Amazon Linux 2014.09" + default_root_device_type = "ebs" + custom_json = "{\"key\": \"value\"}" + configuration_manager_version = "11.10" + use_opsworks_security_groups = false + use_custom_cookbooks = true + manage_berkshelf = true + custom_cookbooks_source { + type = "git" + revision = "master" + url = "https://github.com/awslabs/opsworks-example-cookbooks.git" + } +} +` + +func TestAccAwsOpsworksStackNoVpc(t *testing.T) { + opsiam := testAccAwsOpsworksStackIam{} + testAccAwsOpsworksStackPopulateIam(t, &opsiam) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsOpsworksStackDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: fmt.Sprintf(testAccAwsOpsworksStackConfigNoVpcCreate, opsiam.ServiceRoleArn, opsiam.InstanceProfileArn), + Check: testAccAwsOpsworksStackCheckResourceAttrsCreate, + }, + resource.TestStep{ + Config: fmt.Sprintf(testAccAWSOpsworksStackConfigNoVpcUpdate, opsiam.ServiceRoleArn, opsiam.InstanceProfileArn), + Check: testAccAwsOpsworksStackCheckResourceAttrsUpdate, + }, + }, + }) +} + +//////////////////////////// +//// Tests for the VPC case +//////////////////////////// + +var testAccAwsOpsworksStackConfigVpcCreate = ` +resource "aws_vpc" "tf-acc" { + cidr_block = "10.3.5.0/24" +} +resource "aws_subnet" "tf-acc" { + vpc_id = "${aws_vpc.tf-acc.id}" + cidr_block = "${aws_vpc.tf-acc.cidr_block}" + availability_zone = "us-west-2a" +} +resource "aws_opsworks_stack" "tf-acc" { + name = "tf-opsworks-acc" + region = "us-west-2" + vpc_id = "${aws_vpc.tf-acc.id}" + default_subnet_id = "${aws_subnet.tf-acc.id}" + service_role_arn = "%s" + default_instance_profile_arn = "%s" + default_os = "Amazon Linux 2014.09" + default_root_device_type = "ebs" + custom_json = "{\"key\": \"value\"}" + configuration_manager_version = "11.10" + 
use_opsworks_security_groups = false +} +` + +var testAccAWSOpsworksStackConfigVpcUpdate = ` +resource "aws_vpc" "tf-acc" { + cidr_block = "10.3.5.0/24" +} +resource "aws_subnet" "tf-acc" { + vpc_id = "${aws_vpc.tf-acc.id}" + cidr_block = "${aws_vpc.tf-acc.cidr_block}" + availability_zone = "us-west-2a" +} +resource "aws_opsworks_stack" "tf-acc" { + name = "tf-opsworks-acc" + region = "us-west-2" + vpc_id = "${aws_vpc.tf-acc.id}" + default_subnet_id = "${aws_subnet.tf-acc.id}" + service_role_arn = "%s" + default_instance_profile_arn = "%s" + default_os = "Amazon Linux 2014.09" + default_root_device_type = "ebs" + custom_json = "{\"key\": \"value\"}" + configuration_manager_version = "11.10" + use_opsworks_security_groups = false + use_custom_cookbooks = true + manage_berkshelf = true + custom_cookbooks_source { + type = "git" + revision = "master" + url = "https://github.com/awslabs/opsworks-example-cookbooks.git" + } +} +` + +func TestAccAwsOpsworksStackVpc(t *testing.T) { + opsiam := testAccAwsOpsworksStackIam{} + testAccAwsOpsworksStackPopulateIam(t, &opsiam) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsOpsworksStackDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: fmt.Sprintf(testAccAwsOpsworksStackConfigVpcCreate, opsiam.ServiceRoleArn, opsiam.InstanceProfileArn), + Check: testAccAwsOpsworksStackCheckResourceAttrsCreate, + }, + resource.TestStep{ + Config: fmt.Sprintf(testAccAWSOpsworksStackConfigVpcUpdate, opsiam.ServiceRoleArn, opsiam.InstanceProfileArn), + Check: resource.ComposeTestCheckFunc( + testAccAwsOpsworksStackCheckResourceAttrsUpdate, + testAccAwsOpsworksCheckVpc, + ), + }, + }, + }) +} + +//////////////////////////// +//// Checkers and Utilities +//////////////////////////// + +var testAccAwsOpsworksStackCheckResourceAttrsCreate = resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "name", + "tf-opsworks-acc", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "default_availability_zone", + "us-west-2a", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "default_os", + "Amazon Linux 2014.09", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "default_root_device_type", + "ebs", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "custom_json", + `{"key": "value"}`, + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "configuration_manager_version", + "11.10", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "use_opsworks_security_groups", + "false", + ), +) + +var testAccAwsOpsworksStackCheckResourceAttrsUpdate = resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "name", + "tf-opsworks-acc", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "default_availability_zone", + "us-west-2a", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "default_os", + "Amazon Linux 2014.09", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "default_root_device_type", + "ebs", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "custom_json", + `{"key": "value"}`, + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "configuration_manager_version", + "11.10", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + 
"use_opsworks_security_groups", + "false", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "use_custom_cookbooks", + "true", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "manage_berkshelf", + "true", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "custom_cookbooks_source.0.type", + "git", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "custom_cookbooks_source.0.revision", + "master", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "custom_cookbooks_source.0.url", + "https://github.com/awslabs/opsworks-example-cookbooks.git", + ), +) + +func testAccAwsOpsworksCheckVpc(s *terraform.State) error { + rs, ok := s.RootModule().Resources["aws_opsworks_stack.tf-acc"] + if !ok { + return fmt.Errorf("Not found: %s", "aws_opsworks_stack.tf-acc") + } + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + p := rs.Primary + + opsworksconn := testAccProvider.Meta().(*AWSClient).opsworksconn + describeOpts := &opsworks.DescribeStacksInput{ + StackIds: []*string{aws.String(p.ID)}, + } + resp, err := opsworksconn.DescribeStacks(describeOpts) + if err != nil { + return err + } + if len(resp.Stacks) == 0 { + return fmt.Errorf("No stack %s not found", p.ID) + } + if p.Attributes["vpc_id"] != *resp.Stacks[0].VpcId { + return fmt.Errorf("VPCID Got %s, expected %s", *resp.Stacks[0].VpcId, p.Attributes["vpc_id"]) + } + if p.Attributes["default_subnet_id"] != *resp.Stacks[0].DefaultSubnetId { + return fmt.Errorf("VPCID Got %s, expected %s", *resp.Stacks[0].DefaultSubnetId, p.Attributes["default_subnet_id"]) + } + return nil +} + +func testAccCheckAwsOpsworksStackDestroy(s *terraform.State) error { + if len(s.RootModule().Resources) > 0 { + return fmt.Errorf("Expected all resources to be gone, but found: %#v", s.RootModule().Resources) + } + + return nil +} + +// Holds the two IAM object ARNs used in stack objects we'll create. 
+type testAccAwsOpsworksStackIam struct { + ServiceRoleArn string + InstanceProfileArn string +} + +func testAccAwsOpsworksStackPopulateIam(t *testing.T, opsiam *testAccAwsOpsworksStackIam) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccInstanceConfig_pre, // noop + Check: testAccCheckAwsOpsworksEnsureIam(t, opsiam), + }, + }, + }) +} + +func testAccCheckAwsOpsworksEnsureIam(t *testing.T, opsiam *testAccAwsOpsworksStackIam) func(*terraform.State) error { + return func(_ *terraform.State) error { + iamconn := testAccProvider.Meta().(*AWSClient).iamconn + + serviceRoleOpts := &iam.GetRoleInput{ + RoleName: aws.String("aws-opsworks-service-role"), + } + respServiceRole, err := iamconn.GetRole(serviceRoleOpts) + if err != nil { + return err + } + + instanceProfileOpts := &iam.GetInstanceProfileInput{ + InstanceProfileName: aws.String("aws-opsworks-ec2-role"), + } + respInstanceProfile, err := iamconn.GetInstanceProfile(instanceProfileOpts) + if err != nil { + return err + } + + opsiam.ServiceRoleArn = *respServiceRole.Role.Arn + opsiam.InstanceProfileArn = *respInstanceProfile.InstanceProfile.Arn + + t.Logf("[DEBUG] ServiceRoleARN for OpsWorks: %s", opsiam.ServiceRoleArn) + t.Logf("[DEBUG] Instance Profile ARN for OpsWorks: %s", opsiam.InstanceProfileArn) + + return nil + + } +} diff --git a/builtin/providers/aws/resource_aws_opsworks_static_web_layer.go b/builtin/providers/aws/resource_aws_opsworks_static_web_layer.go new file mode 100644 index 000000000..df91b1b1b --- /dev/null +++ b/builtin/providers/aws/resource_aws_opsworks_static_web_layer.go @@ -0,0 +1,16 @@ +package aws + +import ( + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsOpsworksStaticWebLayer() *schema.Resource { + layerType := &opsworksLayerType{ + TypeName: "web", + DefaultLayerName: "Static Web Server", + + Attributes: map[string]*opsworksLayerTypeAttribute{}, + } + + return layerType.SchemaResource() +} diff --git a/builtin/providers/aws/resource_aws_placement_group.go b/builtin/providers/aws/resource_aws_placement_group.go new file mode 100644 index 000000000..9f0452f75 --- /dev/null +++ b/builtin/providers/aws/resource_aws_placement_group.go @@ -0,0 +1,150 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsPlacementGroup() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsPlacementGroupCreate, + Read: resourceAwsPlacementGroupRead, + Delete: resourceAwsPlacementGroupDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "strategy": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + }, + } +} + +func resourceAwsPlacementGroupCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + + name := d.Get("name").(string) + input := ec2.CreatePlacementGroupInput{ + GroupName: aws.String(name), + Strategy: aws.String(d.Get("strategy").(string)), + } + log.Printf("[DEBUG] Creating EC2 Placement group: %s", input) + _, err := conn.CreatePlacementGroup(&input) + if err != nil { + return err + } + + wait := resource.StateChangeConf{ + Pending: []string{"pending"}, + 
Target: "available", + Timeout: 5 * time.Minute, + MinTimeout: 1 * time.Second, + Refresh: func() (interface{}, string, error) { + out, err := conn.DescribePlacementGroups(&ec2.DescribePlacementGroupsInput{ + GroupNames: []*string{aws.String(name)}, + }) + + if err != nil { + return out, "", err + } + + if len(out.PlacementGroups) == 0 { + return out, "", fmt.Errorf("Placement group not found (%q)", name) + } + pg := out.PlacementGroups[0] + + return out, *pg.State, nil + }, + } + + _, err = wait.WaitForState() + if err != nil { + return err + } + + log.Printf("[DEBUG] EC2 Placement group created: %q", name) + + d.SetId(name) + + return resourceAwsPlacementGroupRead(d, meta) +} + +func resourceAwsPlacementGroupRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + input := ec2.DescribePlacementGroupsInput{ + GroupNames: []*string{aws.String(d.Get("name").(string))}, + } + out, err := conn.DescribePlacementGroups(&input) + if err != nil { + return err + } + pg := out.PlacementGroups[0] + + log.Printf("[DEBUG] Received EC2 Placement Group: %s", pg) + + d.Set("name", pg.GroupName) + d.Set("strategy", pg.Strategy) + + return nil +} + +func resourceAwsPlacementGroupDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + + log.Printf("[DEBUG] Deleting EC2 Placement Group %q", d.Id()) + _, err := conn.DeletePlacementGroup(&ec2.DeletePlacementGroupInput{ + GroupName: aws.String(d.Id()), + }) + if err != nil { + return err + } + + wait := resource.StateChangeConf{ + Pending: []string{"deleting"}, + Target: "deleted", + Timeout: 5 * time.Minute, + MinTimeout: 1 * time.Second, + Refresh: func() (interface{}, string, error) { + out, err := conn.DescribePlacementGroups(&ec2.DescribePlacementGroupsInput{ + GroupNames: []*string{aws.String(d.Id())}, + }) + + if err != nil { + awsErr := err.(awserr.Error) + if awsErr.Code() == "InvalidPlacementGroup.Unknown" { + return out, "deleted", nil + } + return out, "", awsErr + } + + if len(out.PlacementGroups) == 0 { + return out, "deleted", nil + } + + pg := out.PlacementGroups[0] + + return out, *pg.State, nil + }, + } + + _, err = wait.WaitForState() + if err != nil { + return err + } + + d.SetId("") + return nil +} diff --git a/builtin/providers/aws/resource_aws_placement_group_test.go b/builtin/providers/aws/resource_aws_placement_group_test.go new file mode 100644 index 000000000..a68e43e92 --- /dev/null +++ b/builtin/providers/aws/resource_aws_placement_group_test.go @@ -0,0 +1,98 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/ec2" +) + +func TestAccAWSPlacementGroup_basic(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSPlacementGroupDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSPlacementGroupConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSPlacementGroupExists("aws_placement_group.pg"), + ), + }, + }, + }) +} + +func testAccCheckAWSPlacementGroupDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).ec2conn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_placement_group" { + continue + } + _, err := conn.DeletePlacementGroup(&ec2.DeletePlacementGroupInput{ + GroupName: aws.String(rs.Primary.ID), + }) + 
if err != nil { + return err + } + } + return nil +} + +func testAccCheckAWSPlacementGroupExists(n string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Placement Group ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).ec2conn + _, err := conn.DescribePlacementGroups(&ec2.DescribePlacementGroupsInput{ + GroupNames: []*string{aws.String(rs.Primary.ID)}, + }) + + if err != nil { + return fmt.Errorf("Placement Group error: %v", err) + } + return nil + } +} + +func testAccCheckAWSDestroyPlacementGroup(n string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Placement Group ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).ec2conn + _, err := conn.DeletePlacementGroup(&ec2.DeletePlacementGroupInput{ + GroupName: aws.String(rs.Primary.ID), + }) + + if err != nil { + return fmt.Errorf("Error destroying Placement Group (%s): %s", rs.Primary.ID, err) + } + return nil + } +} + +var testAccAWSPlacementGroupConfig = ` +resource "aws_placement_group" "pg" { + name = "tf-test-pg" + strategy = "cluster" +} +` diff --git a/builtin/providers/aws/resource_aws_rds_cluster.go b/builtin/providers/aws/resource_aws_rds_cluster.go new file mode 100644 index 000000000..57f3a27b3 --- /dev/null +++ b/builtin/providers/aws/resource_aws_rds_cluster.go @@ -0,0 +1,347 @@ +package aws + +import ( + "fmt" + "log" + "regexp" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/rds" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsRDSCluster() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsRDSClusterCreate, + Read: resourceAwsRDSClusterRead, + Update: resourceAwsRDSClusterUpdate, + Delete: resourceAwsRDSClusterDelete, + + Schema: map[string]*schema.Schema{ + + "availability_zones": &schema.Schema{ + Type: schema.TypeSet, + Elem: &schema.Schema{Type: schema.TypeString}, + Optional: true, + ForceNew: true, + Computed: true, + Set: schema.HashString, + }, + + "cluster_identifier": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validateRdsId, + }, + + "cluster_members": &schema.Schema{ + Type: schema.TypeSet, + Elem: &schema.Schema{Type: schema.TypeString}, + Optional: true, + Computed: true, + Set: schema.HashString, + }, + + "database_name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + + "db_subnet_group_name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + }, + + "endpoint": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "engine": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "final_snapshot_identifier": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ValidateFunc: func(v interface{}, k string) (ws []string, es []error) { + value := v.(string) + if !regexp.MustCompile(`^[0-9A-Za-z-]+$`).MatchString(value) { + es = append(es, fmt.Errorf( + "only alphanumeric characters and hyphens allowed in %q", k)) + } + if regexp.MustCompile(`--`).MatchString(value) { + es = append(es, fmt.Errorf("%q 
cannot contain two consecutive hyphens", k)) + } + if regexp.MustCompile(`-$`).MatchString(value) { + es = append(es, fmt.Errorf("%q cannot end in a hyphen", k)) + } + return + }, + }, + + "master_username": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "master_password": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "port": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Computed: true, + }, + + // apply_immediately is used to determine when the update modifications + // take place. + // See http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html + "apply_immediately": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Computed: true, + }, + + "vpc_security_group_ids": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + }, + } +} + +func resourceAwsRDSClusterCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).rdsconn + + createOpts := &rds.CreateDBClusterInput{ + DBClusterIdentifier: aws.String(d.Get("cluster_identifier").(string)), + Engine: aws.String("aurora"), + MasterUserPassword: aws.String(d.Get("master_password").(string)), + MasterUsername: aws.String(d.Get("master_username").(string)), + } + + if v := d.Get("database_name"); v.(string) != "" { + createOpts.DatabaseName = aws.String(v.(string)) + } + + if attr, ok := d.GetOk("port"); ok { + createOpts.Port = aws.Int64(int64(attr.(int))) + } + + if attr, ok := d.GetOk("db_subnet_group_name"); ok { + createOpts.DBSubnetGroupName = aws.String(attr.(string)) + } + + if attr := d.Get("vpc_security_group_ids").(*schema.Set); attr.Len() > 0 { + createOpts.VpcSecurityGroupIds = expandStringList(attr.List()) + } + + if attr := d.Get("availability_zones").(*schema.Set); attr.Len() > 0 { + createOpts.AvailabilityZones = expandStringList(attr.List()) + } + + log.Printf("[DEBUG] RDS Cluster create options: %s", createOpts) + resp, err := conn.CreateDBCluster(createOpts) + if err != nil { + log.Printf("[ERROR] Error creating RDS Cluster: %s", err) + return err + } + + log.Printf("[DEBUG]: Cluster create response: %s", resp) + d.SetId(*resp.DBCluster.DBClusterIdentifier) + stateConf := &resource.StateChangeConf{ + Pending: []string{"creating", "backing-up", "modifying"}, + Target: "available", + Refresh: resourceAwsRDSClusterStateRefreshFunc(d, meta), + Timeout: 5 * time.Minute, + MinTimeout: 3 * time.Second, + } + + // Wait, catching any errors + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf("[WARN] Error waiting for RDS Cluster state to be \"available\": %s", err) + } + + return resourceAwsRDSClusterRead(d, meta) +} + +func resourceAwsRDSClusterRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).rdsconn + + resp, err := conn.DescribeDBClusters(&rds.DescribeDBClustersInput{ + DBClusterIdentifier: aws.String(d.Id()), + }) + + if err != nil { + if awsErr, ok := err.(awserr.Error); ok { + if "DBClusterNotFoundFault" == awsErr.Code() { + d.SetId("") + log.Printf("[DEBUG] RDS Cluster (%s) not found", d.Id()) + return nil + } + } + log.Printf("[DEBUG] Error describing RDS Cluster (%s)", d.Id()) + return err + } + + var dbc *rds.DBCluster + for _, c := range resp.DBClusters { + if *c.DBClusterIdentifier == d.Id() { + dbc = c + } + } + + if dbc == nil { + log.Printf("[WARN] RDS Cluster (%s) not found", d.Id()) + d.SetId("") + return nil + } 
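+	// (Editor's note: clearing the ID above tells Terraform the remote
+	// cluster no longer exists, so the next plan proposes re-creation
+	// rather than failing the refresh; the d.Set calls below then mirror
+	// the cluster's remote attributes into state.)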
+ + if err := d.Set("availability_zones", aws.StringValueSlice(dbc.AvailabilityZones)); err != nil { + return fmt.Errorf("[DEBUG] Error saving AvailabilityZones to state for RDS Cluster (%s): %s", d.Id(), err) + } + d.Set("database_name", dbc.DatabaseName) + d.Set("db_subnet_group_name", dbc.DBSubnetGroup) + d.Set("endpoint", dbc.Endpoint) + d.Set("engine", dbc.Engine) + d.Set("master_username", dbc.MasterUsername) + d.Set("port", dbc.Port) + + var vpcg []string + for _, g := range dbc.VpcSecurityGroups { + vpcg = append(vpcg, *g.VpcSecurityGroupId) + } + if err := d.Set("vpc_security_group_ids", vpcg); err != nil { + return fmt.Errorf("[DEBUG] Error saving VPC Security Group IDs to state for RDS Cluster (%s): %s", d.Id(), err) + } + + var cm []string + for _, m := range dbc.DBClusterMembers { + cm = append(cm, *m.DBInstanceIdentifier) + } + if err := d.Set("cluster_members", cm); err != nil { + return fmt.Errorf("[DEBUG] Error saving RDS Cluster Members to state for RDS Cluster (%s): %s", d.Id(), err) + } + + return nil +} + +func resourceAwsRDSClusterUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).rdsconn + + req := &rds.ModifyDBClusterInput{ + ApplyImmediately: aws.Bool(d.Get("apply_immediately").(bool)), + DBClusterIdentifier: aws.String(d.Id()), + } + + if d.HasChange("master_password") { + req.MasterUserPassword = aws.String(d.Get("master_password").(string)) + } + + if d.HasChange("vpc_security_group_ids") { + if attr := d.Get("vpc_security_group_ids").(*schema.Set); attr.Len() > 0 { + req.VpcSecurityGroupIds = expandStringList(attr.List()) + } else { + req.VpcSecurityGroupIds = []*string{} + } + } + + _, err := conn.ModifyDBCluster(req) + if err != nil { + return fmt.Errorf("[WARN] Error modifying RDS Cluster (%s): %s", d.Id(), err) + } + + return resourceAwsRDSClusterRead(d, meta) +} + +func resourceAwsRDSClusterDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).rdsconn + log.Printf("[DEBUG] Destroying RDS Cluster (%s)", d.Id()) + + deleteOpts := rds.DeleteDBClusterInput{ + DBClusterIdentifier: aws.String(d.Id()), + } + + finalSnapshot := d.Get("final_snapshot_identifier").(string) + if finalSnapshot == "" { + deleteOpts.SkipFinalSnapshot = aws.Bool(true) + } else { + deleteOpts.FinalDBSnapshotIdentifier = aws.String(finalSnapshot) + deleteOpts.SkipFinalSnapshot = aws.Bool(false) + } + + log.Printf("[DEBUG] RDS Cluster delete options: %s", deleteOpts) + _, err := conn.DeleteDBCluster(&deleteOpts) + + stateConf := &resource.StateChangeConf{ + Pending: []string{"deleting", "backing-up", "modifying"}, + Target: "destroyed", + Refresh: resourceAwsRDSClusterStateRefreshFunc(d, meta), + Timeout: 5 * time.Minute, + MinTimeout: 3 * time.Second, + } + + // Wait, catching any errors + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf("[WARN] Error deleting RDS Cluster (%s): %s", d.Id(), err) + } + + return nil +} + +func resourceAwsRDSClusterStateRefreshFunc( + d *schema.ResourceData, meta interface{}) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + conn := meta.(*AWSClient).rdsconn + + resp, err := conn.DescribeDBClusters(&rds.DescribeDBClustersInput{ + DBClusterIdentifier: aws.String(d.Id()), + }) + + if err != nil { + if awsErr, ok := err.(awserr.Error); ok { + if "DBClusterNotFoundFault" == awsErr.Code() { + return 42, "destroyed", nil + } + } + log.Printf("[WARN] Error on retrieving DB Cluster (%s) when waiting: %s", d.Id(), err) + return nil, "", err + } + + var 
dbc *rds.DBCluster + + for _, c := range resp.DBClusters { + if *c.DBClusterIdentifier == d.Id() { + dbc = c + } + } + + if dbc == nil { + return 42, "destroyed", nil + } + + if dbc.Status != nil { + log.Printf("[DEBUG] DB Cluster status (%s): %s", d.Id(), *dbc.Status) + } + + return dbc, *dbc.Status, nil + } +} diff --git a/builtin/providers/aws/resource_aws_rds_cluster_instance.go b/builtin/providers/aws/resource_aws_rds_cluster_instance.go new file mode 100644 index 000000000..bdffd59d4 --- /dev/null +++ b/builtin/providers/aws/resource_aws_rds_cluster_instance.go @@ -0,0 +1,220 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/rds" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsRDSClusterInstance() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsRDSClusterInstanceCreate, + Read: resourceAwsRDSClusterInstanceRead, + Update: resourceAwsRDSClusterInstanceUpdate, + Delete: resourceAwsRDSClusterInstanceDelete, + + Schema: map[string]*schema.Schema{ + "identifier": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validateRdsId, + }, + + "db_subnet_group_name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + }, + + "writer": &schema.Schema{ + Type: schema.TypeBool, + Computed: true, + }, + + "cluster_identifier": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "endpoint": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "port": &schema.Schema{ + Type: schema.TypeInt, + Computed: true, + }, + + "publicly_accessible": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: false, + ForceNew: true, + }, + + "instance_class": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "tags": tagsSchema(), + }, + } +} + +func resourceAwsRDSClusterInstanceCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).rdsconn + tags := tagsFromMapRDS(d.Get("tags").(map[string]interface{})) + + createOpts := &rds.CreateDBInstanceInput{ + DBInstanceClass: aws.String(d.Get("instance_class").(string)), + DBClusterIdentifier: aws.String(d.Get("cluster_identifier").(string)), + Engine: aws.String("aurora"), + PubliclyAccessible: aws.Bool(d.Get("publicly_accessible").(bool)), + Tags: tags, + } + + if v := d.Get("identifier").(string); v != "" { + createOpts.DBInstanceIdentifier = aws.String(v) + } else { + createOpts.DBInstanceIdentifier = aws.String(resource.UniqueId()) + } + + if attr, ok := d.GetOk("db_subnet_group_name"); ok { + createOpts.DBSubnetGroupName = aws.String(attr.(string)) + } + + log.Printf("[DEBUG] Creating RDS DB Instance opts: %s", createOpts) + resp, err := conn.CreateDBInstance(createOpts) + if err != nil { + return err + } + + d.SetId(*resp.DBInstance.DBInstanceIdentifier) + + // reuse db_instance refresh func + stateConf := &resource.StateChangeConf{ + Pending: []string{"creating", "backing-up", "modifying"}, + Target: "available", + Refresh: resourceAwsDbInstanceStateRefreshFunc(d, meta), + Timeout: 40 * time.Minute, + MinTimeout: 10 * time.Second, + Delay: 10 * time.Second, + } + + // Wait, catching any errors + _, err = stateConf.WaitForState() + if err != nil { + return err + } + + return resourceAwsRDSClusterInstanceRead(d, meta) +} + +func resourceAwsRDSClusterInstanceRead(d 
*schema.ResourceData, meta interface{}) error { + db, err := resourceAwsDbInstanceRetrieve(d, meta) + if err != nil { + log.Printf("[WARN] Error on retrieving RDS Cluster Instance (%s): %s", d.Id(), err) + d.SetId("") + return nil + } + + // Retrieve DB Cluster information, to determine if this Instance is a writer + conn := meta.(*AWSClient).rdsconn + resp, err := conn.DescribeDBClusters(&rds.DescribeDBClustersInput{ + DBClusterIdentifier: db.DBClusterIdentifier, + }) + if err != nil { + return err + } + + var dbc *rds.DBCluster + for _, c := range resp.DBClusters { + if *c.DBClusterIdentifier == *db.DBClusterIdentifier { + dbc = c + } + } + + if dbc == nil { + return fmt.Errorf("[WARN] Error finding RDS Cluster (%s) for Cluster Instance (%s)", + *db.DBClusterIdentifier, *db.DBInstanceIdentifier) + } + + for _, m := range dbc.DBClusterMembers { + if *db.DBInstanceIdentifier == *m.DBInstanceIdentifier { + d.Set("writer", *m.IsClusterWriter) + } + } + + if db.Endpoint != nil { + d.Set("endpoint", db.Endpoint.Address) + d.Set("port", db.Endpoint.Port) + } + + d.Set("publicly_accessible", db.PubliclyAccessible) + + // Fetch and save tags + arn, err := buildRDSARN(d, meta) + if err != nil { + log.Printf("[DEBUG] Error building ARN for RDS Cluster Instance (%s), not setting Tags", *db.DBInstanceIdentifier) + } else { + if err := saveTagsRDS(conn, d, arn); err != nil { + log.Printf("[WARN] Failed to save tags for RDS Cluster Instance (%s): %s", *db.DBInstanceIdentifier, err) + } + } + + return nil +} + +func resourceAwsRDSClusterInstanceUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).rdsconn + + if arn, err := buildRDSARN(d, meta); err == nil { + if err := setTagsRDS(conn, d, arn); err != nil { + return err + } + } + + return resourceAwsRDSClusterInstanceRead(d, meta) +} + +func resourceAwsRDSClusterInstanceDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).rdsconn + + log.Printf("[DEBUG] RDS Cluster Instance destroy: %v", d.Id()) + + opts := rds.DeleteDBInstanceInput{DBInstanceIdentifier: aws.String(d.Id())} + + log.Printf("[DEBUG] RDS Cluster Instance destroy configuration: %s", opts) + if _, err := conn.DeleteDBInstance(&opts); err != nil { + return err + } + + // re-uses db_instance refresh func + log.Println("[INFO] Waiting for RDS Cluster Instance to be destroyed") + stateConf := &resource.StateChangeConf{ + Pending: []string{"modifying", "deleting"}, + Target: "", + Refresh: resourceAwsDbInstanceStateRefreshFunc(d, meta), + Timeout: 40 * time.Minute, + MinTimeout: 10 * time.Second, + } + + if _, err := stateConf.WaitForState(); err != nil { + return err + } + + return nil +} diff --git a/builtin/providers/aws/resource_aws_rds_cluster_instance_test.go b/builtin/providers/aws/resource_aws_rds_cluster_instance_test.go new file mode 100644 index 000000000..046132fad --- /dev/null +++ b/builtin/providers/aws/resource_aws_rds_cluster_instance_test.go @@ -0,0 +1,134 @@ +package aws + +import ( + "fmt" + "math/rand" + "strings" + "testing" + "time" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/rds" +) + +func TestAccAWSRDSClusterInstance_basic(t *testing.T) { + var v rds.DBInstance + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy:
testAccCheckAWSClusterInstanceDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSClusterInstanceConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterInstanceExists("aws_rds_cluster_instance.cluster_instances", &v), + testAccCheckAWSDBClusterInstanceAttributes(&v), + ), + }, + }, + }) +} + +func testAccCheckAWSClusterInstanceDestroy(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_rds_cluster_instance" { + continue + } + + // Try to find the Instance + conn := testAccProvider.Meta().(*AWSClient).rdsconn + var err error + resp, err := conn.DescribeDBInstances( + &rds.DescribeDBInstancesInput{ + DBInstanceIdentifier: aws.String(rs.Primary.ID), + }) + + if err == nil { + if len(resp.DBInstances) != 0 && + *resp.DBInstances[0].DBInstanceIdentifier == rs.Primary.ID { + return fmt.Errorf("DB Cluster Instance %s still exists", rs.Primary.ID) + } + } + + // Return nil if the Cluster Instance is already destroyed + if awsErr, ok := err.(awserr.Error); ok { + if awsErr.Code() == "DBInstanceNotFound" { + return nil + } + } + + return err + + } + + return nil +} + +func testAccCheckAWSDBClusterInstanceAttributes(v *rds.DBInstance) resource.TestCheckFunc { + return func(s *terraform.State) error { + + if *v.Engine != "aurora" { + return fmt.Errorf("bad engine, expected \"aurora\": %#v", *v.Engine) + } + + if !strings.HasPrefix(*v.DBClusterIdentifier, "tf-aurora-cluster") { + return fmt.Errorf("Bad Cluster Identifier prefix:\nexpected: %s\ngot: %s", "tf-aurora-cluster", *v.DBClusterIdentifier) + } + + return nil + } +} + +func testAccCheckAWSClusterInstanceExists(n string, v *rds.DBInstance) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No DB Instance ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).rdsconn + resp, err := conn.DescribeDBInstances(&rds.DescribeDBInstancesInput{ + DBInstanceIdentifier: aws.String(rs.Primary.ID), + }) + + if err != nil { + return err + } + + for _, d := range resp.DBInstances { + if *d.DBInstanceIdentifier == rs.Primary.ID { + *v = *d + return nil + } + } + + return fmt.Errorf("DB Cluster Instance (%s) not found", rs.Primary.ID) + } +} + +// Add some randomness to the name, to avoid collisions +var testAccAWSClusterInstanceConfig = fmt.Sprintf(` +resource "aws_rds_cluster" "default" { + cluster_identifier = "tf-aurora-cluster-test-%d" + availability_zones = ["us-west-2a","us-west-2b","us-west-2c"] + database_name = "mydb" + master_username = "foo" + master_password = "mustbeeightcharacters" +} + +resource "aws_rds_cluster_instance" "cluster_instances" { + identifier = "aurora-cluster-test-instance" + cluster_identifier = "${aws_rds_cluster.default.id}" + instance_class = "db.r3.large" +} + +`, rand.New(rand.NewSource(time.Now().UnixNano())).Int()) diff --git a/builtin/providers/aws/resource_aws_rds_cluster_test.go b/builtin/providers/aws/resource_aws_rds_cluster_test.go new file mode 100644 index 000000000..ffa2fa8e9 --- /dev/null +++ b/builtin/providers/aws/resource_aws_rds_cluster_test.go @@ -0,0 +1,108 @@ +package aws + +import ( + "fmt" + "math/rand" + "testing" + "time" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/rds" +) + +func TestAccAWSRDSCluster_basic(t
*testing.T) { + var v rds.DBCluster + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSClusterDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSClusterConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists("aws_rds_cluster.default", &v), + ), + }, + }, + }) +} + +func testAccCheckAWSClusterDestroy(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_rds_cluster" { + continue + } + + // Try to find the cluster + conn := testAccProvider.Meta().(*AWSClient).rdsconn + var err error + resp, err := conn.DescribeDBClusters( + &rds.DescribeDBClustersInput{ + DBClusterIdentifier: aws.String(rs.Primary.ID), + }) + + if err == nil { + if len(resp.DBClusters) != 0 && + *resp.DBClusters[0].DBClusterIdentifier == rs.Primary.ID { + return fmt.Errorf("DB Cluster %s still exists", rs.Primary.ID) + } + } + + // Return nil if the cluster is already destroyed + if awsErr, ok := err.(awserr.Error); ok { + if awsErr.Code() == "DBClusterNotFound" { + return nil + } + } + + return err + } + + return nil +} + +func testAccCheckAWSClusterExists(n string, v *rds.DBCluster) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No DB Cluster ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).rdsconn + resp, err := conn.DescribeDBClusters(&rds.DescribeDBClustersInput{ + DBClusterIdentifier: aws.String(rs.Primary.ID), + }) + + if err != nil { + return err + } + + for _, c := range resp.DBClusters { + if *c.DBClusterIdentifier == rs.Primary.ID { + *v = *c + return nil + } + } + + return fmt.Errorf("DB Cluster (%s) not found", rs.Primary.ID) + } +} + +// Add some randomness to the name, to avoid collisions +var testAccAWSClusterConfig = fmt.Sprintf(` +resource "aws_rds_cluster" "default" { + cluster_identifier = "tf-aurora-cluster-%d" + availability_zones = ["us-west-2a","us-west-2b","us-west-2c"] + database_name = "mydb" + master_username = "foo" + master_password = "mustbeeightcharacters" +}`, rand.New(rand.NewSource(time.Now().UnixNano())).Int()) diff --git a/builtin/providers/aws/resource_aws_route_table_test.go b/builtin/providers/aws/resource_aws_route_table_test.go index 6eb8951fd..17fd4087e 100644 --- a/builtin/providers/aws/resource_aws_route_table_test.go +++ b/builtin/providers/aws/resource_aws_route_table_test.go @@ -2,6 +2,7 @@ package aws import ( "fmt" + "os" "testing" "github.com/aws/aws-sdk-go/aws" @@ -212,12 +213,16 @@ func testAccCheckRouteTableExists(n string, v *ec2.RouteTable) resource.TestChec } } -// TODO: re-enable this test.
// VPC Peering connections are prefixed with pcx // Right now there is no VPC Peering resource -func _TestAccAWSRouteTable_vpcPeering(t *testing.T) { +func TestAccAWSRouteTable_vpcPeering(t *testing.T) { var v ec2.RouteTable + acctId := os.Getenv("TF_ACC_ID") + if acctId == "" && os.Getenv(resource.TestEnvVar) != "" { + t.Fatal("Error: Test TestAccAWSRouteTable_vpcPeering requires an Account ID in TF_ACC_ID ") + } + testCheck := func(*terraform.State) error { if len(v.Routes) != 2 { return fmt.Errorf("bad routes: %#v", v.Routes) @@ -243,7 +248,7 @@ func _TestAccAWSRouteTable_vpcPeering(t *testing.T) { CheckDestroy: testAccCheckRouteTableDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccRouteTableVpcPeeringConfig, + Config: testAccRouteTableVpcPeeringConfig(acctId), Check: resource.ComposeTestCheckFunc( testAccCheckRouteTableExists( "aws_route_table.foo", &v), @@ -395,11 +400,10 @@ resource "aws_route_table" "foo" { } ` -// TODO: re-enable this test. // VPC Peering connections are prefixed with pcx -// Right now there is no VPC Peering resource -const testAccRouteTableVpcPeeringConfig = ` -resource "aws_vpc" "foo" { +// This test requires an ENV var, TF_ACC_ID, with a valid AWS Account ID +func testAccRouteTableVpcPeeringConfig(acc string) string { + cfg := `resource "aws_vpc" "foo" { cidr_block = "10.1.0.0/16" } @@ -407,15 +411,34 @@ resource "aws_internet_gateway" "foo" { vpc_id = "${aws_vpc.foo.id}" } +resource "aws_vpc" "bar" { + cidr_block = "10.3.0.0/16" +} + +resource "aws_internet_gateway" "bar" { + vpc_id = "${aws_vpc.bar.id}" +} + +resource "aws_vpc_peering_connection" "foo" { + vpc_id = "${aws_vpc.foo.id}" + peer_vpc_id = "${aws_vpc.bar.id}" + peer_owner_id = "%s" + tags { + foo = "bar" + } +} + resource "aws_route_table" "foo" { vpc_id = "${aws_vpc.foo.id}" route { cidr_block = "10.2.0.0/16" - vpc_peering_connection_id = "pcx-12345" + vpc_peering_connection_id = "${aws_vpc_peering_connection.foo.id}" } } ` + return fmt.Sprintf(cfg, acc) +} const testAccRouteTableVgwRoutePropagationConfig = ` resource "aws_vpc" "foo" { diff --git a/builtin/providers/aws/resource_aws_s3_bucket.go b/builtin/providers/aws/resource_aws_s3_bucket.go index a329d4ff6..b45f69cc4 100644 --- a/builtin/providers/aws/resource_aws_s3_bucket.go +++ b/builtin/providers/aws/resource_aws_s3_bucket.go @@ -464,6 +464,9 @@ func resourceAwsS3BucketWebsiteDelete(s3conn *s3.S3, d *schema.ResourceData) err return fmt.Errorf("Error deleting S3 website: %s", err) } + d.Set("website_endpoint", "") + d.Set("website_domain", "") + return nil } diff --git a/builtin/providers/aws/resource_aws_s3_bucket_object.go b/builtin/providers/aws/resource_aws_s3_bucket_object.go index 9d46952d0..b1c399dd1 100644 --- a/builtin/providers/aws/resource_aws_s3_bucket_object.go +++ b/builtin/providers/aws/resource_aws_s3_bucket_object.go @@ -1,7 +1,9 @@ package aws import ( + "bytes" "fmt" + "io" "log" "os" @@ -16,7 +18,6 @@ func resourceAwsS3BucketObject() *schema.Resource { return &schema.Resource{ Create: resourceAwsS3BucketObjectPut, Read: resourceAwsS3BucketObjectRead, - Update: resourceAwsS3BucketObjectPut, Delete: resourceAwsS3BucketObjectDelete, Schema: map[string]*schema.Schema{ @@ -26,6 +27,37 @@ func resourceAwsS3BucketObject() *schema.Resource { ForceNew: true, }, + "cache_control": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "content_disposition": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "content_encoding": 
&schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "content_language": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "content_type": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + }, + "key": &schema.Schema{ Type: schema.TypeString, Required: true, @@ -33,9 +65,17 @@ func resourceAwsS3BucketObject() *schema.Resource { }, "source": &schema.Schema{ - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ConflictsWith: []string{"content"}, + }, + + "content": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ConflictsWith: []string{"source"}, }, "etag": &schema.Schema{ @@ -51,21 +91,50 @@ func resourceAwsS3BucketObjectPut(d *schema.ResourceData, meta interface{}) erro bucket := d.Get("bucket").(string) key := d.Get("key").(string) - source := d.Get("source").(string) + var body io.ReadSeeker - file, err := os.Open(source) + if v, ok := d.GetOk("source"); ok { + source := v.(string) + file, err := os.Open(source) + if err != nil { + return fmt.Errorf("Error opening S3 bucket object source (%s): %s", source, err) + } - if err != nil { - return fmt.Errorf("Error opening S3 bucket object source (%s): %s", source, err) + body = file + } else if v, ok := d.GetOk("content"); ok { + content := v.(string) + body = bytes.NewReader([]byte(content)) + } else { + + return fmt.Errorf("Must specify \"source\" or \"content\" field") + } + putInput := &s3.PutObjectInput{ + Bucket: aws.String(bucket), + Key: aws.String(key), + Body: body, } - resp, err := s3conn.PutObject( - &s3.PutObjectInput{ - Bucket: aws.String(bucket), - Key: aws.String(key), - Body: file, - }) + if v, ok := d.GetOk("cache_control"); ok { + putInput.CacheControl = aws.String(v.(string)) + } + if v, ok := d.GetOk("content_type"); ok { + putInput.ContentType = aws.String(v.(string)) + } + + if v, ok := d.GetOk("content_encoding"); ok { + putInput.ContentEncoding = aws.String(v.(string)) + } + + if v, ok := d.GetOk("content_language"); ok { + putInput.ContentLanguage = aws.String(v.(string)) + } + + if v, ok := d.GetOk("content_disposition"); ok { + putInput.ContentDisposition = aws.String(v.(string)) + } + + resp, err := s3conn.PutObject(putInput) if err != nil { return fmt.Errorf("Error putting object in S3 bucket (%s): %s", bucket, err) } @@ -99,6 +168,12 @@ func resourceAwsS3BucketObjectRead(d *schema.ResourceData, meta interface{}) err return err } + d.Set("cache_control", resp.CacheControl) + d.Set("content_disposition", resp.ContentDisposition) + d.Set("content_encoding", resp.ContentEncoding) + d.Set("content_language", resp.ContentLanguage) + d.Set("content_type", resp.ContentType) + log.Printf("[DEBUG] Reading S3 Bucket Object meta: %s", resp) return nil } diff --git a/builtin/providers/aws/resource_aws_s3_bucket_object_test.go b/builtin/providers/aws/resource_aws_s3_bucket_object_test.go index 4f947736a..ea28f9d37 100644 --- a/builtin/providers/aws/resource_aws_s3_bucket_object_test.go +++ b/builtin/providers/aws/resource_aws_s3_bucket_object_test.go @@ -15,7 +15,7 @@ import ( var tf, err = ioutil.TempFile("", "tf") -func TestAccAWSS3BucketObject_basic(t *testing.T) { +func TestAccAWSS3BucketObject_source(t *testing.T) { // first write some data to the tempfile just so it's not 0 bytes. 
ioutil.WriteFile(tf.Name(), []byte("{anything will do }"), 0644) resource.Test(t, resource.TestCase{ @@ -29,13 +29,57 @@ func TestAccAWSS3BucketObject_basic(t *testing.T) { CheckDestroy: testAccCheckAWSS3BucketObjectDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccAWSS3BucketObjectConfig, + Config: testAccAWSS3BucketObjectConfigSource, Check: testAccCheckAWSS3BucketObjectExists("aws_s3_bucket_object.object"), }, }, }) } +func TestAccAWSS3BucketObject_content(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { + if err != nil { + panic(err) + } + testAccPreCheck(t) + }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSS3BucketObjectDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSS3BucketObjectConfigContent, + Check: testAccCheckAWSS3BucketObjectExists("aws_s3_bucket_object.object"), + }, + }, + }) +} + +func TestAccAWSS3BucketObject_withContentCharacteristics(t *testing.T) { + // first write some data to the tempfile just so it's not 0 bytes. + ioutil.WriteFile(tf.Name(), []byte("{anything will do }"), 0644) + resource.Test(t, resource.TestCase{ + PreCheck: func() { + if err != nil { + panic(err) + } + testAccPreCheck(t) + }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSS3BucketObjectDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSS3BucketObjectConfig_withContentCharacteristics, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSS3BucketObjectExists("aws_s3_bucket_object.object"), + resource.TestCheckResourceAttr( + "aws_s3_bucket_object.object", "content_type", "binary/octet-stream"), + ), + }, + }, + }) +} + func testAccCheckAWSS3BucketObjectDestroy(s *terraform.State) error { s3conn := testAccProvider.Meta().(*AWSClient).s3conn @@ -86,14 +130,39 @@ func testAccCheckAWSS3BucketObjectExists(n string) resource.TestCheckFunc { } var randomBucket = randInt -var testAccAWSS3BucketObjectConfig = fmt.Sprintf(` +var testAccAWSS3BucketObjectConfigSource = fmt.Sprintf(` resource "aws_s3_bucket" "object_bucket" { - bucket = "tf-object-test-bucket-%d" + bucket = "tf-object-test-bucket-%d" } - resource "aws_s3_bucket_object" "object" { bucket = "${aws_s3_bucket.object_bucket.bucket}" key = "test-key" source = "%s" + content_type = "binary/octet-stream" } `, randomBucket, tf.Name()) + +var testAccAWSS3BucketObjectConfig_withContentCharacteristics = fmt.Sprintf(` +resource "aws_s3_bucket" "object_bucket_2" { + bucket = "tf-object-test-bucket-%d" +} + +resource "aws_s3_bucket_object" "object" { + bucket = "${aws_s3_bucket.object_bucket_2.bucket}" + key = "test-key" + source = "%s" + content_language = "en" + content_type = "binary/octet-stream" +} +`, randomBucket, tf.Name()) + +var testAccAWSS3BucketObjectConfigContent = fmt.Sprintf(` +resource "aws_s3_bucket" "object_bucket" { + bucket = "tf-object-test-bucket-%d" +} +resource "aws_s3_bucket_object" "object" { + bucket = "${aws_s3_bucket.object_bucket.bucket}" + key = "test-key" + content = "some_bucket_content" +} +`, randomBucket) diff --git a/builtin/providers/aws/resource_aws_s3_bucket_test.go b/builtin/providers/aws/resource_aws_s3_bucket_test.go index e494816b3..1ce05583c 100644 --- a/builtin/providers/aws/resource_aws_s3_bucket_test.go +++ b/builtin/providers/aws/resource_aws_s3_bucket_test.go @@ -64,7 +64,7 @@ func TestAccAWSS3Bucket_Policy(t *testing.T) { }) } -func TestAccAWSS3Bucket_Website(t *testing.T) { +func TestAccAWSS3Bucket_Website_Simple(t *testing.T) { resource.Test(t, resource.TestCase{ 
PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, diff --git a/builtin/providers/aws/resource_aws_security_group_rule.go b/builtin/providers/aws/resource_aws_security_group_rule.go index 97b6d4025..55499cfd5 100644 --- a/builtin/providers/aws/resource_aws_security_group_rule.go +++ b/builtin/providers/aws/resource_aws_security_group_rule.go @@ -20,7 +20,7 @@ func resourceAwsSecurityGroupRule() *schema.Resource { Read: resourceAwsSecurityGroupRuleRead, Delete: resourceAwsSecurityGroupRuleDelete, - SchemaVersion: 1, + SchemaVersion: 2, MigrateState: resourceAwsSecurityGroupRuleMigrateState, Schema: map[string]*schema.Schema{ @@ -67,14 +67,15 @@ func resourceAwsSecurityGroupRule() *schema.Resource { Optional: true, ForceNew: true, Computed: true, - ConflictsWith: []string{"cidr_blocks"}, + ConflictsWith: []string{"cidr_blocks", "self"}, }, "self": &schema.Schema{ - Type: schema.TypeBool, - Optional: true, - Default: false, - ForceNew: true, + Type: schema.TypeBool, + Optional: true, + Default: false, + ForceNew: true, + ConflictsWith: []string{"cidr_blocks"}, }, }, } @@ -142,7 +143,7 @@ information and instructions for recovery. Error message: %s`, awsErr.Message()) ruleType, autherr) } - d.SetId(ipPermissionIDHash(ruleType, perm)) + d.SetId(ipPermissionIDHash(sg_id, ruleType, perm)) return resourceAwsSecurityGroupRuleRead(d, meta) } @@ -158,24 +159,69 @@ func resourceAwsSecurityGroupRuleRead(d *schema.ResourceData, meta interface{}) } var rule *ec2.IpPermission + var rules []*ec2.IpPermission ruleType := d.Get("type").(string) - var rl []*ec2.IpPermission switch ruleType { case "ingress": - rl = sg.IpPermissions + rules = sg.IpPermissions default: - rl = sg.IpPermissionsEgress + rules = sg.IpPermissionsEgress } - for _, r := range rl { - if d.Id() == ipPermissionIDHash(ruleType, r) { - rule = r + p := expandIPPerm(d, sg) + + if len(rules) == 0 { + return fmt.Errorf( + "[WARN] No %s rules were found for Security Group (%s) looking for Security Group Rule (%s)", + ruleType, *sg.GroupName, d.Id()) + } + + for _, r := range rules { + if r.ToPort != nil && *p.ToPort != *r.ToPort { + continue } + + if r.FromPort != nil && *p.FromPort != *r.FromPort { + continue + } + + if r.IpProtocol != nil && *p.IpProtocol != *r.IpProtocol { + continue + } + + remaining := len(p.IpRanges) + for _, ip := range p.IpRanges { + for _, rip := range r.IpRanges { + if *ip.CidrIp == *rip.CidrIp { + remaining-- + } + } + } + + if remaining > 0 { + continue + } + + remaining = len(p.UserIdGroupPairs) + for _, ip := range p.UserIdGroupPairs { + for _, rip := range r.UserIdGroupPairs { + if *ip.GroupId == *rip.GroupId { + remaining-- + } + } + } + + if remaining > 0 { + continue + } + + log.Printf("[DEBUG] Found rule for Security Group Rule (%s): %s", d.Id(), r) + rule = r } if rule == nil { - log.Printf("[DEBUG] Unable to find matching %s Security Group Rule for Group %s", - ruleType, sg_id) + log.Printf("[DEBUG] Unable to find matching %s Security Group Rule (%s) for Group %s", + ruleType, d.Id(), sg_id) d.SetId("") return nil } @@ -186,14 +232,14 @@ func resourceAwsSecurityGroupRuleRead(d *schema.ResourceData, meta interface{}) d.Set("type", ruleType) var cb []string - for _, c := range rule.IpRanges { + for _, c := range p.IpRanges { cb = append(cb, *c.CidrIp) } d.Set("cidr_blocks", cb) - if len(rule.UserIdGroupPairs) > 0 { - s := rule.UserIdGroupPairs[0] + if len(p.UserIdGroupPairs) > 0 { + s := p.UserIdGroupPairs[0] d.Set("source_security_group_id", *s.GroupId) } @@ -285,8 +331,9 @@ func (b 
ByGroupPair) Less(i, j int) bool { panic("mismatched security group rules, may be a terraform bug") } -func ipPermissionIDHash(ruleType string, ip *ec2.IpPermission) string { +func ipPermissionIDHash(sg_id, ruleType string, ip *ec2.IpPermission) string { var buf bytes.Buffer + buf.WriteString(fmt.Sprintf("%s-", sg_id)) if ip.FromPort != nil && *ip.FromPort > 0 { buf.WriteString(fmt.Sprintf("%d-", *ip.FromPort)) } @@ -326,7 +373,7 @@ func ipPermissionIDHash(ruleType string, ip *ec2.IpPermission) string { } } - return fmt.Sprintf("sg-%d", hashcode.String(buf.String())) + return fmt.Sprintf("sgrule-%d", hashcode.String(buf.String())) } func expandIPPerm(d *schema.ResourceData, sg *ec2.SecurityGroup) *ec2.IpPermission { diff --git a/builtin/providers/aws/resource_aws_security_group_rule_migrate.go b/builtin/providers/aws/resource_aws_security_group_rule_migrate.go index 98ecced70..0b57f3f17 100644 --- a/builtin/providers/aws/resource_aws_security_group_rule_migrate.go +++ b/builtin/providers/aws/resource_aws_security_group_rule_migrate.go @@ -17,6 +17,12 @@ func resourceAwsSecurityGroupRuleMigrateState( case 0: log.Println("[INFO] Found AWS Security Group State v0; migrating to v1") return migrateSGRuleStateV0toV1(is) + case 1: + log.Println("[INFO] Found AWS Security Group State v1; migrating to v2") + // migrating to version 2 of the schema is the same as 0->1, since the + // method signature has changed now and will use the security group id in + // the hash + return migrateSGRuleStateV0toV1(is) default: return is, fmt.Errorf("Unexpected schema version: %d", v) } @@ -37,7 +43,7 @@ func migrateSGRuleStateV0toV1(is *terraform.InstanceState) (*terraform.InstanceS } log.Printf("[DEBUG] Attributes before migration: %#v", is.Attributes) - newID := ipPermissionIDHash(is.Attributes["type"], perm) + newID := ipPermissionIDHash(is.Attributes["security_group_id"], is.Attributes["type"], perm) is.Attributes["id"] = newID is.ID = newID log.Printf("[DEBUG] Attributes after migration: %#v, new id: %s", is.Attributes, newID) diff --git a/builtin/providers/aws/resource_aws_security_group_rule_migrate_test.go b/builtin/providers/aws/resource_aws_security_group_rule_migrate_test.go index 664f05039..496834b8c 100644 --- a/builtin/providers/aws/resource_aws_security_group_rule_migrate_test.go +++ b/builtin/providers/aws/resource_aws_security_group_rule_migrate_test.go @@ -27,7 +27,7 @@ func TestAWSSecurityGroupRuleMigrateState(t *testing.T) { "from_port": "0", "source_security_group_id": "sg-11877275", }, - Expected: "sg-3766347571", + Expected: "sgrule-2889201120", }, "v0_2": { StateVersion: 0, @@ -44,7 +44,7 @@ func TestAWSSecurityGroupRuleMigrateState(t *testing.T) { "cidr_blocks.2": "172.16.3.0/24", "cidr_blocks.3": "172.16.4.0/24", "cidr_blocks.#": "4"}, - Expected: "sg-4100229787", + Expected: "sgrule-1826358977", }, } diff --git a/builtin/providers/aws/resource_aws_security_group_rule_test.go b/builtin/providers/aws/resource_aws_security_group_rule_test.go index c160703f3..f06dd3e13 100644 --- a/builtin/providers/aws/resource_aws_security_group_rule_test.go +++ b/builtin/providers/aws/resource_aws_security_group_rule_test.go @@ -2,7 +2,7 @@ package aws import ( "fmt" - "reflect" + "log" "testing" "github.com/aws/aws-sdk-go/aws" @@ -90,15 +90,15 @@ func TestIpPermissionIDHash(t *testing.T) { Type string Output string }{ - {simple, "ingress", "sg-82613597"}, - {egress, "egress", "sg-363054720"}, - {egress_all, "egress", "sg-2766285362"}, - {vpc_security_group_source, "egress", "sg-2661404947"}, - 
{security_group_source, "egress", "sg-1841245863"}, + {simple, "ingress", "sgrule-3403497314"}, + {egress, "egress", "sgrule-1173186295"}, + {egress_all, "egress", "sgrule-766323498"}, + {vpc_security_group_source, "egress", "sgrule-351225364"}, + {security_group_source, "egress", "sgrule-2198807188"}, } for _, tc := range cases { - actual := ipPermissionIDHash(tc.Type, tc.Input) + actual := ipPermissionIDHash("sg-12345", tc.Type, tc.Input) if actual != tc.Output { t.Errorf("input: %s - %s\noutput: %s", tc.Type, tc.Input, actual) } @@ -132,7 +132,7 @@ func TestAccAWSSecurityGroupRule_Ingress_VPC(t *testing.T) { Config: testAccAWSSecurityGroupRuleIngressConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSSecurityGroupRuleExists("aws_security_group.web", &group), - testAccCheckAWSSecurityGroupRuleAttributes(&group, "ingress"), + testAccCheckAWSSecurityGroupRuleAttributes("aws_security_group_rule.ingress_1", &group, nil, "ingress"), resource.TestCheckResourceAttr( "aws_security_group_rule.ingress_1", "from_port", "80"), testRuleCount, @@ -169,7 +169,7 @@ func TestAccAWSSecurityGroupRule_Ingress_Classic(t *testing.T) { Config: testAccAWSSecurityGroupRuleIngressClassicConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSSecurityGroupRuleExists("aws_security_group.web", &group), - testAccCheckAWSSecurityGroupRuleAttributes(&group, "ingress"), + testAccCheckAWSSecurityGroupRuleAttributes("aws_security_group_rule.ingress_1", &group, nil, "ingress"), resource.TestCheckResourceAttr( "aws_security_group_rule.ingress_1", "from_port", "80"), testRuleCount, @@ -231,7 +231,7 @@ func TestAccAWSSecurityGroupRule_Egress(t *testing.T) { Config: testAccAWSSecurityGroupRuleEgressConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSSecurityGroupRuleExists("aws_security_group.web", &group), - testAccCheckAWSSecurityGroupRuleAttributes(&group, "egress"), + testAccCheckAWSSecurityGroupRuleAttributes("aws_security_group_rule.egress_1", &group, nil, "egress"), ), }, }, @@ -256,6 +256,92 @@ func TestAccAWSSecurityGroupRule_SelfReference(t *testing.T) { }) } +// testing partial match implementation +func TestAccAWSSecurityGroupRule_PartialMatching_basic(t *testing.T) { + var group ec2.SecurityGroup + + p := ec2.IpPermission{ + FromPort: aws.Int64(80), + ToPort: aws.Int64(80), + IpProtocol: aws.String("tcp"), + IpRanges: []*ec2.IpRange{ + &ec2.IpRange{CidrIp: aws.String("10.0.2.0/24")}, + &ec2.IpRange{CidrIp: aws.String("10.0.3.0/24")}, + &ec2.IpRange{CidrIp: aws.String("10.0.4.0/24")}, + }, + } + + o := ec2.IpPermission{ + FromPort: aws.Int64(80), + ToPort: aws.Int64(80), + IpProtocol: aws.String("tcp"), + IpRanges: []*ec2.IpRange{ + &ec2.IpRange{CidrIp: aws.String("10.0.5.0/24")}, + }, + } + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSecurityGroupRuleDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSSecurityGroupRulePartialMatching, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSecurityGroupRuleExists("aws_security_group.web", &group), + testAccCheckAWSSecurityGroupRuleAttributes("aws_security_group_rule.ingress", &group, &p, "ingress"), + testAccCheckAWSSecurityGroupRuleAttributes("aws_security_group_rule.other", &group, &o, "ingress"), + testAccCheckAWSSecurityGroupRuleAttributes("aws_security_group_rule.nat_ingress", &group, &o, "ingress"), + ), + }, + }, + }) +} + +func TestAccAWSSecurityGroupRule_PartialMatching_Source(t *testing.T) { + var group 
ec2.SecurityGroup + var nat ec2.SecurityGroup + var p ec2.IpPermission + + // This function creates the expected IPPermission with the group id from an + // external security group, needed because Security Group IDs are generated on + // AWS side and can't be known ahead of time. + setupSG := func(*terraform.State) error { + if nat.GroupId == nil { + return fmt.Errorf("Error: nat group has nil GroupID") + } + + p = ec2.IpPermission{ + FromPort: aws.Int64(80), + ToPort: aws.Int64(80), + IpProtocol: aws.String("tcp"), + UserIdGroupPairs: []*ec2.UserIdGroupPair{ + &ec2.UserIdGroupPair{GroupId: nat.GroupId}, + }, + } + + return nil + } + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSecurityGroupRuleDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSSecurityGroupRulePartialMatching_Source, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSecurityGroupRuleExists("aws_security_group.web", &group), + testAccCheckAWSSecurityGroupRuleExists("aws_security_group.nat", &nat), + setupSG, + testAccCheckAWSSecurityGroupRuleAttributes("aws_security_group_rule.source_ingress", &group, &p, "ingress"), + ), + }, + }, + }) + +} + func testAccCheckAWSSecurityGroupRuleDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).ec2conn @@ -319,14 +405,27 @@ func testAccCheckAWSSecurityGroupRuleExists(n string, group *ec2.SecurityGroup) } } -func testAccCheckAWSSecurityGroupRuleAttributes(group *ec2.SecurityGroup, ruleType string) resource.TestCheckFunc { +func testAccCheckAWSSecurityGroupRuleAttributes(n string, group *ec2.SecurityGroup, p *ec2.IpPermission, ruleType string) resource.TestCheckFunc { return func(s *terraform.State) error { - p := &ec2.IpPermission{ - FromPort: aws.Int64(80), - ToPort: aws.Int64(8000), - IpProtocol: aws.String("tcp"), - IpRanges: []*ec2.IpRange{&ec2.IpRange{CidrIp: aws.String("10.0.0.0/8")}}, + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Security Group Rule Not found: %s", n) } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Security Group Rule is set") + } + + if p == nil { + p = &ec2.IpPermission{ + FromPort: aws.Int64(80), + ToPort: aws.Int64(8000), + IpProtocol: aws.String("tcp"), + IpRanges: []*ec2.IpRange{&ec2.IpRange{CidrIp: aws.String("10.0.0.0/8")}}, + } + } + + var matchingRule *ec2.IpPermission var rules []*ec2.IpPermission if ruleType == "ingress" { rules = group.IpPermissions @@ -338,15 +437,53 @@ func testAccCheckAWSSecurityGroupRuleAttributes(group *ec2.SecurityGroup, ruleTy return fmt.Errorf("No IPPerms") } - // Compare our ingress - if !reflect.DeepEqual(rules[0], p) { - return fmt.Errorf( - "Got:\n\n%#v\n\nExpected:\n\n%#v\n", - rules[0], - p) + for _, r := range rules { + if r.ToPort != nil && *p.ToPort != *r.ToPort { + continue + } + + if r.FromPort != nil && *p.FromPort != *r.FromPort { + continue + } + + if r.IpProtocol != nil && *p.IpProtocol != *r.IpProtocol { + continue + } + + remaining := len(p.IpRanges) + for _, ip := range p.IpRanges { + for _, rip := range r.IpRanges { + if *ip.CidrIp == *rip.CidrIp { + remaining-- + } + } + } + + if remaining > 0 { + continue + } + + remaining = len(p.UserIdGroupPairs) + for _, ip := range p.UserIdGroupPairs { + for _, rip := range r.UserIdGroupPairs { + if *ip.GroupId == *rip.GroupId { + remaining-- + } + } + } + + if remaining > 0 { + continue + } + matchingRule = r } - return nil + if matchingRule != nil { + log.Printf("[DEBUG] 
Matching rule found: %s", matchingRule) + return nil + } + + return fmt.Errorf("Matching rule not found: looking for %s in %s", p, rules) } } @@ -480,3 +617,104 @@ resource "aws_security_group_rule" "self" { security_group_id = "${aws_security_group.web.id}" } ` + +const testAccAWSSecurityGroupRulePartialMatching = ` +resource "aws_vpc" "default" { + cidr_block = "10.0.0.0/16" + tags { + Name = "tf-sg-rule-bug" + } +} + +resource "aws_security_group" "web" { + name = "tf-other" + vpc_id = "${aws_vpc.default.id}" + tags { + Name = "tf-other-sg" + } +} + +resource "aws_security_group" "nat" { + name = "tf-nat" + vpc_id = "${aws_vpc.default.id}" + tags { + Name = "tf-nat-sg" + } +} + +resource "aws_security_group_rule" "ingress" { + type = "ingress" + from_port = 80 + to_port = 80 + protocol = "tcp" + cidr_blocks = ["10.0.2.0/24", "10.0.3.0/24", "10.0.4.0/24"] + + security_group_id = "${aws_security_group.web.id}" +} + +resource "aws_security_group_rule" "other" { + type = "ingress" + from_port = 80 + to_port = 80 + protocol = "tcp" + cidr_blocks = ["10.0.5.0/24"] + + security_group_id = "${aws_security_group.web.id}" +} + +// same as above, but in a different group, to guard against bad hashing +resource "aws_security_group_rule" "nat_ingress" { + type = "ingress" + from_port = 80 + to_port = 80 + protocol = "tcp" + cidr_blocks = ["10.0.2.0/24", "10.0.3.0/24", "10.0.4.0/24"] + + security_group_id = "${aws_security_group.nat.id}" +} + ` + +const testAccAWSSecurityGroupRulePartialMatching_Source = ` +resource "aws_vpc" "default" { + cidr_block = "10.0.0.0/16" + tags { + Name = "tf-sg-rule-bug" + } +} + +resource "aws_security_group" "web" { + name = "tf-other" + vpc_id = "${aws_vpc.default.id}" + tags { + Name = "tf-other-sg" + } +} + +resource "aws_security_group" "nat" { + name = "tf-nat" + vpc_id = "${aws_vpc.default.id}" + tags { + Name = "tf-nat-sg" + } +} + +resource "aws_security_group_rule" "source_ingress" { + type = "ingress" + from_port = 80 + to_port = 80 + protocol = "tcp" + + source_security_group_id = "${aws_security_group.nat.id}" + security_group_id = "${aws_security_group.web.id}" +} + +resource "aws_security_group_rule" "other_ingress" { + type = "ingress" + from_port = 80 + to_port = 80 + protocol = "tcp" + cidr_blocks = ["10.0.2.0/24", "10.0.3.0/24", "10.0.4.0/24"] + + security_group_id = "${aws_security_group.web.id}" +} +` diff --git a/builtin/providers/aws/resource_aws_spot_instance_request.go b/builtin/providers/aws/resource_aws_spot_instance_request.go index 56de8992c..89384246c 100644 --- a/builtin/providers/aws/resource_aws_spot_instance_request.go +++ b/builtin/providers/aws/resource_aws_spot_instance_request.go @@ -36,6 +36,11 @@ func resourceAwsSpotInstanceRequest() *schema.Resource { Required: true, ForceNew: true, } + s["spot_type"] = &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "persistent", + } s["wait_for_fulfillment"] = &schema.Schema{ Type: schema.TypeBool, Optional: true, @@ -69,10 +74,7 @@ func resourceAwsSpotInstanceRequestCreate(d *schema.ResourceData, meta interface spotOpts := &ec2.RequestSpotInstancesInput{ SpotPrice: aws.String(d.Get("spot_price").(string)), - - // We always set the type to "persistent", since the imperative-like - // behavior of "one-time" does not map well to TF's declarative domain.
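// The security group rule Read logic and test helper earlier in this diff now
// locate a rule by containment rather than by comparing hashes: ports and
// protocol must match, and every CIDR block (or source security group) of the
// managed rule must appear in the candidate API rule. Together with the new
// group-scoped "sgrule-" ID, this lets partially matching rules converge.
// A minimal standalone sketch of that containment test; names here are
// illustrative, not the provider's:

package main

import "fmt"

type sgRule struct {
	fromPort, toPort int64
	ipProtocol       string
	cidrBlocks       []string
}

// contains reports whether have covers want, mirroring the "remaining"
// countdown used in the resource code above.
func contains(have, want sgRule) bool {
	if have.fromPort != want.fromPort || have.toPort != want.toPort ||
		have.ipProtocol != want.ipProtocol {
		return false
	}
	remaining := len(want.cidrBlocks)
	for _, w := range want.cidrBlocks {
		for _, h := range have.cidrBlocks {
			if w == h {
				remaining--
			}
		}
	}
	return remaining <= 0
}

func main() {
	api := sgRule{80, 80, "tcp", []string{"10.0.2.0/24", "10.0.3.0/24", "10.0.4.0/24"}}
	fmt.Println(contains(api, sgRule{80, 80, "tcp", []string{"10.0.3.0/24"}})) // true: partial match found
	fmt.Println(contains(api, sgRule{80, 80, "tcp", []string{"10.0.5.0/24"}})) // false: distinct rules stay distinct
}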
- Type: aws.String("persistent"), + Type: aws.String(d.Get("spot_type").(string)), // Though the AWS API supports creating spot instance requests for multiple // instances, for TF purposes we fix this to one instance per request. diff --git a/builtin/providers/aws/resource_aws_vpc_peering_connection.go b/builtin/providers/aws/resource_aws_vpc_peering_connection.go index b279797f6..6b7c4dc52 100644 --- a/builtin/providers/aws/resource_aws_vpc_peering_connection.go +++ b/builtin/providers/aws/resource_aws_vpc_peering_connection.go @@ -127,6 +127,9 @@ func resourceVPCPeeringConnectionAccept(conn *ec2.EC2, id string) (string, error } resp, err := conn.AcceptVpcPeeringConnection(req) + if err != nil { + return "", err + } pc := resp.VpcPeeringConnection return *pc.Status.Code, err } @@ -153,16 +156,15 @@ func resourceAwsVPCPeeringUpdate(d *schema.ResourceData, meta interface{}) error } pc := pcRaw.(*ec2.VpcPeeringConnection) - if *pc.Status.Code == "pending-acceptance" { + if pc.Status != nil && *pc.Status.Code == "pending-acceptance" { status, err := resourceVPCPeeringConnectionAccept(conn, d.Id()) - - log.Printf( - "[DEBUG] VPC Peering connection accept status %s", - status) if err != nil { return err } + log.Printf( + "[DEBUG] VPC Peering connection accept status: %s", + status) } } diff --git a/builtin/providers/aws/resource_aws_vpc_peering_connection_test.go b/builtin/providers/aws/resource_aws_vpc_peering_connection_test.go index dc78a7082..ca92ce66a 100644 --- a/builtin/providers/aws/resource_aws_vpc_peering_connection_test.go +++ b/builtin/providers/aws/resource_aws_vpc_peering_connection_test.go @@ -36,6 +36,7 @@ func TestAccAWSVPCPeeringConnection_basic(t *testing.T) { func TestAccAWSVPCPeeringConnection_tags(t *testing.T) { var connection ec2.VpcPeeringConnection + peerId := os.Getenv("TF_PEER_ID") resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -43,7 +44,7 @@ func TestAccAWSVPCPeeringConnection_tags(t *testing.T) { CheckDestroy: testAccCheckVpcDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccVpcPeeringConfigTags, + Config: fmt.Sprintf(testAccVpcPeeringConfigTags, peerId), Check: resource.ComposeTestCheckFunc( testAccCheckAWSVpcPeeringConnectionExists("aws_vpc_peering_connection.foo", &connection), testAccCheckTags(&connection.Tags, "foo", "bar"), @@ -117,6 +118,7 @@ resource "aws_vpc" "bar" { resource "aws_vpc_peering_connection" "foo" { vpc_id = "${aws_vpc.foo.id}" peer_vpc_id = "${aws_vpc.bar.id}" + auto_accept = true } ` @@ -132,6 +134,7 @@ resource "aws_vpc" "bar" { resource "aws_vpc_peering_connection" "foo" { vpc_id = "${aws_vpc.foo.id}" peer_vpc_id = "${aws_vpc.bar.id}" + peer_owner_id = "%s" tags { foo = "bar" } diff --git a/builtin/providers/aws/resource_vpn_connection_route.go b/builtin/providers/aws/resource_vpn_connection_route.go index 580ecff19..e6863f721 100644 --- a/builtin/providers/aws/resource_vpn_connection_route.go +++ b/builtin/providers/aws/resource_vpn_connection_route.go @@ -17,8 +17,6 @@ func resourceAwsVpnConnectionRoute() *schema.Resource { // You can't update a route. You can just delete one and make // a new one. 
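// Because a route can only be deleted and recreated, the change just below
// drops the resource's Update function; every attribute change must instead
// force replacement. A sketch of the resulting shape, reusing this file's own
// handlers (the two attributes and their ForceNew flags are assumed from the
// existing schema, shown here for context only):
func resourceAwsVpnConnectionRouteSketch() *schema.Resource {
	return &schema.Resource{
		Create: resourceAwsVpnConnectionRouteCreate,
		Read:   resourceAwsVpnConnectionRouteRead,
		Delete: resourceAwsVpnConnectionRouteDelete,

		Schema: map[string]*schema.Schema{
			"destination_cidr_block": &schema.Schema{
				Type:     schema.TypeString,
				Required: true,
				ForceNew: true, // any change means delete + create
			},
			"vpn_connection_id": &schema.Schema{
				Type:     schema.TypeString,
				Required: true,
				ForceNew: true,
			},
		},
	}
}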
Create: resourceAwsVpnConnectionRouteCreate, - Update: resourceAwsVpnConnectionRouteCreate, - Read: resourceAwsVpnConnectionRouteRead, Delete: resourceAwsVpnConnectionRouteDelete, diff --git a/builtin/providers/aws/structure.go b/builtin/providers/aws/structure.go index 9b1c0ab79..5976a8ff0 100644 --- a/builtin/providers/aws/structure.go +++ b/builtin/providers/aws/structure.go @@ -4,13 +4,16 @@ import ( "bytes" "encoding/json" "fmt" + "regexp" "sort" "strings" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/directoryservice" "github.com/aws/aws-sdk-go/service/ec2" "github.com/aws/aws-sdk-go/service/ecs" "github.com/aws/aws-sdk-go/service/elasticache" + elasticsearch "github.com/aws/aws-sdk-go/service/elasticsearchservice" "github.com/aws/aws-sdk-go/service/elb" "github.com/aws/aws-sdk-go/service/rds" "github.com/aws/aws-sdk-go/service/route53" @@ -368,7 +371,7 @@ func flattenElastiCacheParameters(list []*elasticache.Parameter) []map[string]in } // Takes the result of flatmap.Expand for an array of strings -// and returns a []string +// and returns a []*string func expandStringList(configured []interface{}) []*string { vs := make([]*string, 0, len(configured)) for _, v := range configured { @@ -377,6 +380,17 @@ func expandStringList(configured []interface{}) []*string { return vs } +// Takes a list of string pointers, expands it into an array +// of raw strings, and returns a []interface{} +// to keep compatibility w/ schema.NewSet +func flattenStringList(list []*string) []interface{} { + vs := make([]interface{}, 0, len(list)) + for _, v := range list { + vs = append(vs, *v) + } + return vs +} + //Flattens an array of private ip addresses into a []string, where the elements returned are the IP strings e.g. "192.168.0.0" func flattenNetworkInterfacesPrivateIPAddresses(dtos []*ec2.NetworkInterfacePrivateIpAddress) []string { ips := make([]string, 0, len(dtos)) @@ -446,3 +460,144 @@ func expandResourceRecords(recs []interface{}, typeStr string) []*route53.Resour } return records } + +func validateRdsId(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[0-9a-z-]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only lowercase alphanumeric characters and hyphens allowed in %q", k)) + } + if !regexp.MustCompile(`^[a-z]`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "first character of %q must be a letter", k)) + } + if regexp.MustCompile(`--`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q cannot contain two consecutive hyphens", k)) + } + if regexp.MustCompile(`-$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q cannot end with a hyphen", k)) + } + return +} + +func expandESClusterConfig(m map[string]interface{}) *elasticsearch.ElasticsearchClusterConfig { + config := elasticsearch.ElasticsearchClusterConfig{} + + if v, ok := m["dedicated_master_enabled"]; ok { + isEnabled := v.(bool) + config.DedicatedMasterEnabled = aws.Bool(isEnabled) + + if isEnabled { + if v, ok := m["dedicated_master_count"]; ok && v.(int) > 0 { + config.DedicatedMasterCount = aws.Int64(int64(v.(int))) + } + if v, ok := m["dedicated_master_type"]; ok && v.(string) != "" { + config.DedicatedMasterType = aws.String(v.(string)) + } + } + } + + if v, ok := m["instance_count"]; ok { + config.InstanceCount = aws.Int64(int64(v.(int))) + } + if v, ok := m["instance_type"]; ok { + config.InstanceType = aws.String(v.(string)) + } + + if v, ok := m["zone_awareness_enabled"]; ok { +
config.ZoneAwarenessEnabled = aws.Bool(v.(bool)) + } + + return &config +} + +func flattenESClusterConfig(c *elasticsearch.ElasticsearchClusterConfig) []map[string]interface{} { + m := map[string]interface{}{} + + if c.DedicatedMasterCount != nil { + m["dedicated_master_count"] = *c.DedicatedMasterCount + } + if c.DedicatedMasterEnabled != nil { + m["dedicated_master_enabled"] = *c.DedicatedMasterEnabled + } + if c.DedicatedMasterType != nil { + m["dedicated_master_type"] = *c.DedicatedMasterType + } + if c.InstanceCount != nil { + m["instance_count"] = *c.InstanceCount + } + if c.InstanceType != nil { + m["instance_type"] = *c.InstanceType + } + if c.ZoneAwarenessEnabled != nil { + m["zone_awareness_enabled"] = *c.ZoneAwarenessEnabled + } + + return []map[string]interface{}{m} +} + +func flattenESEBSOptions(o *elasticsearch.EBSOptions) []map[string]interface{} { + m := map[string]interface{}{} + + if o.EBSEnabled != nil { + m["ebs_enabled"] = *o.EBSEnabled + } + if o.Iops != nil { + m["iops"] = *o.Iops + } + if o.VolumeSize != nil { + m["volume_size"] = *o.VolumeSize + } + if o.VolumeType != nil { + m["volume_type"] = *o.VolumeType + } + + return []map[string]interface{}{m} +} + +func expandESEBSOptions(m map[string]interface{}) *elasticsearch.EBSOptions { + options := elasticsearch.EBSOptions{} + + if v, ok := m["ebs_enabled"]; ok { + options.EBSEnabled = aws.Bool(v.(bool)) + } + if v, ok := m["iops"]; ok && v.(int) > 0 { + options.Iops = aws.Int64(int64(v.(int))) + } + if v, ok := m["volume_size"]; ok && v.(int) > 0 { + options.VolumeSize = aws.Int64(int64(v.(int))) + } + if v, ok := m["volume_type"]; ok && v.(string) != "" { + options.VolumeType = aws.String(v.(string)) + } + + return &options +} + +func pointersMapToStringList(pointers map[string]*string) map[string]interface{} { + list := make(map[string]interface{}, len(pointers)) + for i, v := range pointers { + list[i] = *v + } + return list +} + +func stringMapToPointers(m map[string]interface{}) map[string]*string { + list := make(map[string]*string, len(m)) + for i, v := range m { + list[i] = aws.String(v.(string)) + } + return list +} + +func flattenDSVpcSettings( + s *directoryservice.DirectoryVpcSettingsDescription) []map[string]interface{} { + settings := make(map[string]interface{}, 0) + + settings["subnet_ids"] = schema.NewSet(schema.HashString, flattenStringList(s.SubnetIds)) + settings["vpc_id"] = *s.VpcId + + return []map[string]interface{}{settings} +} diff --git a/builtin/providers/aws/tagsEFS.go b/builtin/providers/aws/tagsEFS.go new file mode 100644 index 000000000..8303d6888 --- /dev/null +++ b/builtin/providers/aws/tagsEFS.go @@ -0,0 +1,94 @@ +package aws + +import ( + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/efs" + "github.com/hashicorp/terraform/helper/schema" +) + +// setTags is a helper to set the tags for a resource. 
It expects the +// tags field to be named "tags" +func setTagsEFS(conn *efs.EFS, d *schema.ResourceData) error { + if d.HasChange("tags") { + oraw, nraw := d.GetChange("tags") + o := oraw.(map[string]interface{}) + n := nraw.(map[string]interface{}) + create, remove := diffTagsEFS(tagsFromMapEFS(o), tagsFromMapEFS(n)) + + // Set tags + if len(remove) > 0 { + log.Printf("[DEBUG] Removing tags: %#v", remove) + k := make([]*string, 0, len(remove)) + for _, t := range remove { + k = append(k, t.Key) + } + _, err := conn.DeleteTags(&efs.DeleteTagsInput{ + FileSystemId: aws.String(d.Id()), + TagKeys: k, + }) + if err != nil { + return err + } + } + if len(create) > 0 { + log.Printf("[DEBUG] Creating tags: %#v", create) + _, err := conn.CreateTags(&efs.CreateTagsInput{ + FileSystemId: aws.String(d.Id()), + Tags: create, + }) + if err != nil { + return err + } + } + } + + return nil +} + +// diffTags takes our tags locally and the ones remotely and returns +// the set of tags that must be created, and the set of tags that must +// be destroyed. +func diffTagsEFS(oldTags, newTags []*efs.Tag) ([]*efs.Tag, []*efs.Tag) { + // First, we're creating everything we have + create := make(map[string]interface{}) + for _, t := range newTags { + create[*t.Key] = *t.Value + } + + // Build the list of what to remove + var remove []*efs.Tag + for _, t := range oldTags { + old, ok := create[*t.Key] + if !ok || old != *t.Value { + // Delete it! + remove = append(remove, t) + } + } + + return tagsFromMapEFS(create), remove +} + +// tagsFromMap returns the tags for the given map of data. +func tagsFromMapEFS(m map[string]interface{}) []*efs.Tag { + var result []*efs.Tag + for k, v := range m { + result = append(result, &efs.Tag{ + Key: aws.String(k), + Value: aws.String(v.(string)), + }) + } + + return result +} + +// tagsToMap turns the list of tags into a map. +func tagsToMapEFS(ts []*efs.Tag) map[string]string { + result := make(map[string]string) + for _, t := range ts { + result[*t.Key] = *t.Value + } + + return result +} diff --git a/builtin/providers/aws/tagsEFS_test.go b/builtin/providers/aws/tagsEFS_test.go new file mode 100644 index 000000000..ca2ae8843 --- /dev/null +++ b/builtin/providers/aws/tagsEFS_test.go @@ -0,0 +1,85 @@ +package aws + +import ( + "fmt" + "reflect" + "testing" + + "github.com/aws/aws-sdk-go/service/efs" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestDiffEFSTags(t *testing.T) { + cases := []struct { + Old, New map[string]interface{} + Create, Remove map[string]string + }{ + // Basic add/remove + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "bar": "baz", + }, + Create: map[string]string{ + "bar": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + + // Modify + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "foo": "baz", + }, + Create: map[string]string{ + "foo": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + } + + for i, tc := range cases { + c, r := diffTagsEFS(tagsFromMapEFS(tc.Old), tagsFromMapEFS(tc.New)) + cm := tagsToMapEFS(c) + rm := tagsToMapEFS(r) + if !reflect.DeepEqual(cm, tc.Create) { + t.Fatalf("%d: bad create: %#v", i, cm) + } + if !reflect.DeepEqual(rm, tc.Remove) { + t.Fatalf("%d: bad remove: %#v", i, rm) + } + } +} + +// testAccCheckTags can be used to check the tags on a resource. 
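// For reference, diffTagsEFS above and diffTagsKinesis later in this diff
// share one pattern; reduced to plain maps it looks like the sketch below
// (illustrative only, not provider code): every desired tag is a create, and
// any old tag missing from the desired set or carrying a stale value is a
// remove, so a modified tag shows up in both sets, matching the "Modify"
// case in TestDiffEFSTags above.
package main

import "fmt"

func diffTags(oldTags, newTags map[string]string) (create, remove map[string]string) {
	create = make(map[string]string)
	for k, v := range newTags {
		create[k] = v
	}
	remove = make(map[string]string)
	for k, v := range oldTags {
		if want, ok := create[k]; !ok || want != v {
			remove[k] = v // removed entirely, or value changed
		}
	}
	return create, remove
}

func main() {
	create, remove := diffTags(
		map[string]string{"foo": "bar"},
		map[string]string{"foo": "baz"},
	)
	fmt.Println(create, remove) // map[foo:baz] map[foo:bar]
}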
+func testAccCheckEFSTags( + ts *[]*efs.Tag, key string, value string) resource.TestCheckFunc { + return func(s *terraform.State) error { + m := tagsToMapEFS(*ts) + v, ok := m[key] + if value != "" && !ok { + return fmt.Errorf("Missing tag: %s", key) + } else if value == "" && ok { + return fmt.Errorf("Extra tag: %s", key) + } + if value == "" { + return nil + } + + if v != value { + return fmt.Errorf("%s: bad value: %s", key, v) + } + + return nil + } +} diff --git a/builtin/providers/aws/tagsRDS.go b/builtin/providers/aws/tagsRDS.go index 3e4e0c700..bcc3eb9ea 100644 --- a/builtin/providers/aws/tagsRDS.go +++ b/builtin/providers/aws/tagsRDS.go @@ -1,6 +1,7 @@ package aws import ( + "fmt" "log" "github.com/aws/aws-sdk-go/aws" @@ -19,7 +20,7 @@ func setTagsRDS(conn *rds.RDS, d *schema.ResourceData, arn string) error { // Set tags if len(remove) > 0 { - log.Printf("[DEBUG] Removing tags: %#v", remove) + log.Printf("[DEBUG] Removing tags: %s", remove) k := make([]*string, len(remove), len(remove)) for i, t := range remove { k[i] = t.Key @@ -34,7 +35,7 @@ func setTagsRDS(conn *rds.RDS, d *schema.ResourceData, arn string) error { } } if len(create) > 0 { - log.Printf("[DEBUG] Creating tags: %#v", create) + log.Printf("[DEBUG] Creating tags: %s", create) _, err := conn.AddTagsToResource(&rds.AddTagsToResourceInput{ ResourceName: aws.String(arn), Tags: create, @@ -93,3 +94,20 @@ func tagsToMapRDS(ts []*rds.Tag) map[string]string { return result } + +func saveTagsRDS(conn *rds.RDS, d *schema.ResourceData, arn string) error { + resp, err := conn.ListTagsForResource(&rds.ListTagsForResourceInput{ + ResourceName: aws.String(arn), + }) + + if err != nil { + return fmt.Errorf("Error retrieving tags for ARN: %s", arn) + } + + var dt []*rds.Tag + if len(resp.TagList) > 0 { + dt = resp.TagList + } + + return d.Set("tags", tagsToMapRDS(dt)) +} diff --git a/builtin/providers/aws/tags_kinesis.go b/builtin/providers/aws/tags_kinesis.go new file mode 100644 index 000000000..c9562644d --- /dev/null +++ b/builtin/providers/aws/tags_kinesis.go @@ -0,0 +1,105 @@ +package aws + +import ( + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/kinesis" + "github.com/hashicorp/terraform/helper/schema" +) + +// setTags is a helper to set the tags for a resource. It expects the +// tags field to be named "tags" +func setTagsKinesis(conn *kinesis.Kinesis, d *schema.ResourceData) error { + + sn := d.Get("name").(string) + + if d.HasChange("tags") { + oraw, nraw := d.GetChange("tags") + o := oraw.(map[string]interface{}) + n := nraw.(map[string]interface{}) + create, remove := diffTagsKinesis(tagsFromMapKinesis(o), tagsFromMapKinesis(n)) + + // Set tags + if len(remove) > 0 { + log.Printf("[DEBUG] Removing tags: %#v", remove) + k := make([]*string, len(remove), len(remove)) + for i, t := range remove { + k[i] = t.Key + } + + _, err := conn.RemoveTagsFromStream(&kinesis.RemoveTagsFromStreamInput{ + StreamName: aws.String(sn), + TagKeys: k, + }) + if err != nil { + return err + } + } + + if len(create) > 0 { + + log.Printf("[DEBUG] Creating tags: %#v", create) + t := make(map[string]*string) + for _, tag := range create { + t[*tag.Key] = tag.Value + } + + _, err := conn.AddTagsToStream(&kinesis.AddTagsToStreamInput{ + StreamName: aws.String(sn), + Tags: t, + }) + if err != nil { + return err + } + } + } + + return nil +} + +// diffTags takes our tags locally and the ones remotely and returns +// the set of tags that must be created, and the set of tags that must +// be destroyed.
+func diffTagsKinesis(oldTags, newTags []*kinesis.Tag) ([]*kinesis.Tag, []*kinesis.Tag) { + // First, we're creating everything we have + create := make(map[string]interface{}) + for _, t := range newTags { + create[*t.Key] = *t.Value + } + + // Build the list of what to remove + var remove []*kinesis.Tag + for _, t := range oldTags { + old, ok := create[*t.Key] + if !ok || old != *t.Value { + // Delete it! + remove = append(remove, t) + } + } + + return tagsFromMapKinesis(create), remove +} + +// tagsFromMap returns the tags for the given map of data. +func tagsFromMapKinesis(m map[string]interface{}) []*kinesis.Tag { + var result []*kinesis.Tag + for k, v := range m { + result = append(result, &kinesis.Tag{ + Key: aws.String(k), + Value: aws.String(v.(string)), + }) + } + + return result +} + +// tagsToMap turns the list of tags into a map. +func tagsToMapKinesis(ts []*kinesis.Tag) map[string]string { + result := make(map[string]string) + for _, t := range ts { + result[*t.Key] = *t.Value + } + + return result +} diff --git a/builtin/providers/aws/tags_kinesis_test.go b/builtin/providers/aws/tags_kinesis_test.go new file mode 100644 index 000000000..d97365ad8 --- /dev/null +++ b/builtin/providers/aws/tags_kinesis_test.go @@ -0,0 +1,84 @@ +package aws + +import ( + "fmt" + "reflect" + "testing" + + "github.com/aws/aws-sdk-go/service/kinesis" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestDiffTagsKinesis(t *testing.T) { + cases := []struct { + Old, New map[string]interface{} + Create, Remove map[string]string + }{ + // Basic add/remove + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "bar": "baz", + }, + Create: map[string]string{ + "bar": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + + // Modify + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "foo": "baz", + }, + Create: map[string]string{ + "foo": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + } + + for i, tc := range cases { + c, r := diffTagsKinesis(tagsFromMapKinesis(tc.Old), tagsFromMapKinesis(tc.New)) + cm := tagsToMapKinesis(c) + rm := tagsToMapKinesis(r) + if !reflect.DeepEqual(cm, tc.Create) { + t.Fatalf("%d: bad create: %#v", i, cm) + } + if !reflect.DeepEqual(rm, tc.Remove) { + t.Fatalf("%d: bad remove: %#v", i, rm) + } + } +} + +// testAccCheckTags can be used to check the tags on a resource. 
+func testAccCheckKinesisTags(ts []*kinesis.Tag, key string, value string) resource.TestCheckFunc { + return func(s *terraform.State) error { + m := tagsToMapKinesis(ts) + v, ok := m[key] + if value != "" && !ok { + return fmt.Errorf("Missing tag: %s", key) + } else if value == "" && ok { + return fmt.Errorf("Extra tag: %s", key) + } + if value == "" { + return nil + } + + if v != value { + return fmt.Errorf("%s: bad value: %s", key, v) + } + + return nil + } +} diff --git a/builtin/providers/aws/test-fixtures/saml-metadata-modified.xml b/builtin/providers/aws/test-fixtures/saml-metadata-modified.xml new file mode 100644 index 000000000..aaca7afc0 --- /dev/null +++ b/builtin/providers/aws/test-fixtures/saml-metadata-modified.xml @@ -0,0 +1,14 @@ + + + + + + MIIErDCCA5SgAwIBAgIOAU+PT8RBAAAAAHxJXEcwDQYJKoZIhvcNAQELBQAwgZAxKDAmBgNVBAMMH1NlbGZTaWduZWRDZXJ0XzAyU2VwMjAxNV8xODI2NTMxGDAWBgNVBAsMDzAwRDI0MDAwMDAwcEFvQTEXMBUGA1UECgwOU2FsZXNmb3JjZS5jb20xFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28xCzAJBgNVBAgMAkNBMQwwCgYDVQQGEwNVU0EwHhcNMTUwOTAyMTgyNjUzWhcNMTcwOTAyMTIwMDAwWjCBkDEoMCYGA1UEAwwfU2VsZlNpZ25lZENlcnRfMDJTZXAyMDE1XzE4MjY1MzEYMBYGA1UECwwPMDBEMjQwMDAwMDBwQW9BMRcwFQYDVQQKDA5TYWxlc2ZvcmNlLmNvbTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzELMAkGA1UECAwCQ0ExDDAKBgNVBAYTA1VTQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAJp/wTRr9n1IWJpkRTjNpep47OKJrD2E6rGbJ18TG2RxtIz+zCn2JwH2aP3TULh0r0hhcg/pecv51RRcG7O19DBBaTQ5+KuoICQyKZy07/yDXSiZontTwkEYs06ssTwTHUcRXbcwTKv16L7omt0MjIhTTGfvtLOYiPwyvKvzAHg4eNuAcli0duVM78UIBORtdmy9C9ZcMh8yRJo5aPBq85wsE3JXU58ytyZzCHTBLH+2xFQrjYnUSEW+FOEEpI7o33MVdFBvWWg1R17HkWzcve4C30lqOHqvxBzyESZ/N1mMlmSt8gPFyB+mUXY99StJDJpnytbY8DwSzMQUo/sOVB0CAwEAAaOCAQAwgf0wHQYDVR0OBBYEFByu1EQqRQS0bYQBKS9K5qwKi+6IMA8GA1UdEwEB/wQFMAMBAf8wgcoGA1UdIwSBwjCBv4AUHK7URCpFBLRthAEpL0rmrAqL7oihgZakgZMwgZAxKDAmBgNVBAMMH1NlbGZTaWduZWRDZXJ0XzAyU2VwMjAxNV8xODI2NTMxGDAWBgNVBAsMDzAwRDI0MDAwMDAwcEFvQTEXMBUGA1UECgwOU2FsZXNmb3JjZS5jb20xFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28xCzAJBgNVBAgMAkNBMQwwCgYDVQQGEwNVU0GCDgFPj0/EQQAAAAB8SVxHMA0GCSqGSIb3DQEBCwUAA4IBAQA9O5o1tC71qJnkq+ABPo4A1aFKZVT/07GcBX4/wetcbYySL4Q2nR9pMgfPYYS1j+P2E3viPsQwPIWDUBwFkNsjjX5DSGEkLAioVGKRwJshRSCSynMcsVZbQkfBUiZXqhM0wzvoa/ALvGD+aSSb1m+x7lEpDYNwQKWaUW2VYcHWv9wjujMyy7dlj8E/jqM71mw7ThNl6k4+3RQ802dMa14txm8pkF0vZgfpV3tkqhBqtjBAicVCaveqr3r3iGqjvyilBgdY+0NR8szqzm7CD/Bkb22+/IgM/mXQuL9KHD/WADlSGmYKmG3SSahmcZxznYCnzcRNN9LVuXlz5cbljmBj + + + + urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified + + + + diff --git a/builtin/providers/aws/test-fixtures/saml-metadata.xml b/builtin/providers/aws/test-fixtures/saml-metadata.xml new file mode 100644 index 000000000..69e353b77 --- /dev/null +++ b/builtin/providers/aws/test-fixtures/saml-metadata.xml @@ -0,0 +1,14 @@ + + + + + + 
MIIErDCCA5SgAwIBAgIOAU+PT8RBAAAAAHxJXEcwDQYJKoZIhvcNAQELBQAwgZAxKDAmBgNVBAMMH1NlbGZTaWduZWRDZXJ0XzAyU2VwMjAxNV8xODI2NTMxGDAWBgNVBAsMDzAwRDI0MDAwMDAwcEFvQTEXMBUGA1UECgwOU2FsZXNmb3JjZS5jb20xFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28xCzAJBgNVBAgMAkNBMQwwCgYDVQQGEwNVU0EwHhcNMTUwOTAyMTgyNjUzWhcNMTcwOTAyMTIwMDAwWjCBkDEoMCYGA1UEAwwfU2VsZlNpZ25lZENlcnRfMDJTZXAyMDE1XzE4MjY1MzEYMBYGA1UECwwPMDBEMjQwMDAwMDBwQW9BMRcwFQYDVQQKDA5TYWxlc2ZvcmNlLmNvbTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzELMAkGA1UECAwCQ0ExDDAKBgNVBAYTA1VTQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAJp/wTRr9n1IWJpkRTjNpep47OKJrD2E6rGbJ18TG2RxtIz+zCn2JwH2aP3TULh0r0hhcg/pecv51RRcG7O19DBBaTQ5+KuoICQyKZy07/yDXSiZontTwkEYs06ssTwTHUcRXbcwTKv16L7omt0MjIhTTGfvtLOYiPwyvKvzAHg4eNuAcli0duVM78UIBORtdmy9C9ZcMh8yRJo5aPBq85wsE3JXU58ytyZzCHTBLH+2xFQrjYnUSEW+FOEEpI7o33MVdFBvWWg1R17HkWzcve4C30lqOHqvxBzyESZ/N1mMlmSt8gPFyB+mUXY99StJDJpnytbY8DwSzMQUo/sOVB0CAwEAAaOCAQAwgf0wHQYDVR0OBBYEFByu1EQqRQS0bYQBKS9K5qwKi+6IMA8GA1UdEwEB/wQFMAMBAf8wgcoGA1UdIwSBwjCBv4AUHK7URCpFBLRthAEpL0rmrAqL7oihgZakgZMwgZAxKDAmBgNVBAMMH1NlbGZTaWduZWRDZXJ0XzAyU2VwMjAxNV8xODI2NTMxGDAWBgNVBAsMDzAwRDI0MDAwMDAwcEFvQTEXMBUGA1UECgwOU2FsZXNmb3JjZS5jb20xFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28xCzAJBgNVBAgMAkNBMQwwCgYDVQQGEwNVU0GCDgFPj0/EQQAAAAB8SVxHMA0GCSqGSIb3DQEBCwUAA4IBAQA9O5o1tC71qJnkq+ABPo4A1aFKZVT/07GcBX4/wetcbYySL4Q2nR9pMgfPYYS1j+P2E3viPsQwPIWDUBwFkNsjjX5DSGEkLAioVGKRwJshRSCSynMcsVZbQkfBUiZXqhM0wzvoa/ALvGD+aSSb1m+x7lEpDYNwQKWaUW2VYcHWv9wjujMyy7dlj8E/jqM71mw7ThNl6k4+3RQ802dMa14txm8pkF0vZgfpV3tkqhBqtjBAicVCaveqr3r3iGqjvyilBgdY+0NR8szqzm7CD/Bkb22+/IgM/mXQuL9KHD/WADlSGmYKmG3SSahmcZxznYCnzcRNN9LVuXlz5cbljmBj + + + + urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified + + + + diff --git a/builtin/providers/azure/config.go b/builtin/providers/azure/config.go index cbb23d58b..b096a10c4 100644 --- a/builtin/providers/azure/config.go +++ b/builtin/providers/azure/config.go @@ -98,7 +98,7 @@ func (c Client) getStorageServiceQueueClient(serviceName string) (storage.QueueS func (c *Config) NewClientFromSettingsData() (*Client, error) { mc, err := management.ClientFromPublishSettingsData(c.Settings, c.SubscriptionID) if err != nil { - return nil, nil + return nil, err } return &Client{ diff --git a/builtin/providers/azure/provider.go b/builtin/providers/azure/provider.go index fe100be35..975a93b00 100644 --- a/builtin/providers/azure/provider.go +++ b/builtin/providers/azure/provider.go @@ -64,22 +64,12 @@ func providerConfigure(d *schema.ResourceData) (interface{}, error) { Certificate: []byte(d.Get("certificate").(string)), } - settings := d.Get("settings_file").(string) - - if settings != "" { - if ok, _ := isFile(settings); ok { - settingsFile, err := homedir.Expand(settings) - if err != nil { - return nil, fmt.Errorf("Error expanding the settings file path: %s", err) - } - publishSettingsContent, err := ioutil.ReadFile(settingsFile) - if err != nil { - return nil, fmt.Errorf("Error reading settings file: %s", err) - } - config.Settings = publishSettingsContent - } else { - config.Settings = []byte(settings) - } + settingsFile := d.Get("settings_file").(string) + if settingsFile != "" { + // any errors from readSettings would have been caught at the validate + // step, so we can avoid handling them now + settings, _, _ := readSettings(settingsFile) + config.Settings = settings return config.NewClientFromSettingsData() } @@ -92,31 +82,39 @@ func providerConfigure(d *schema.ResourceData) (interface{}, error) { "or both a 'subscription_id' and 'certificate'.") } -func validateSettingsFile(v interface{}, k string) (warnings 
[]string, errors []error) { +func validateSettingsFile(v interface{}, k string) ([]string, []error) { value := v.(string) - if value == "" { - return + return nil, nil } - var settings settingsData - if err := xml.Unmarshal([]byte(value), &settings); err != nil { - warnings = append(warnings, ` + _, warnings, errors := readSettings(value) + return warnings, errors +} + +const settingsPathWarnMsg = ` settings_file is not valid XML, so we are assuming it is a file path. This support will be removed in the future. Please update your configuration to use -${file("filename.publishsettings")} instead.`) - } else { +${file("filename.publishsettings")} instead.` + +func readSettings(pathOrContents string) (s []byte, ws []string, es []error) { + var settings settingsData + if err := xml.Unmarshal([]byte(pathOrContents), &settings); err == nil { + s = []byte(pathOrContents) return } - if ok, err := isFile(value); !ok { - errors = append(errors, - fmt.Errorf( - "account_file path could not be read from '%s': %s", - value, - err)) + ws = append(ws, settingsPathWarnMsg) + path, err := homedir.Expand(pathOrContents) + if err != nil { + es = append(es, fmt.Errorf("Error expanding path: %s", err)) + return } + s, err = ioutil.ReadFile(path) + if err != nil { + es = append(es, fmt.Errorf("Could not read file '%s': %s", path, err)) + } return } diff --git a/builtin/providers/azure/provider_test.go b/builtin/providers/azure/provider_test.go index 5c720640f..b3feb8392 100644 --- a/builtin/providers/azure/provider_test.go +++ b/builtin/providers/azure/provider_test.go @@ -3,12 +3,14 @@ package azure import ( "io" "io/ioutil" - "log" "os" + "strings" "testing" + "github.com/hashicorp/terraform/config" "github.com/hashicorp/terraform/helper/schema" "github.com/hashicorp/terraform/terraform" + "github.com/mitchellh/go-homedir" ) var testAccProviders map[string]terraform.ResourceProvider @@ -67,20 +69,33 @@ func TestAzure_validateSettingsFile(t *testing.T) { if err != nil { t.Fatalf("Error creating temporary file in TestAzure_validateSettingsFile: %s", err) } + defer os.Remove(f.Name()) fx, err := ioutil.TempFile("", "tf-test-xml") if err != nil { t.Fatalf("Error creating temporary file with XML in TestAzure_validateSettingsFile: %s", err) } + defer os.Remove(fx.Name()) + + home, err := homedir.Dir() + if err != nil { + t.Fatalf("Error fetching homedir: %s", err) + } + fh, err := ioutil.TempFile(home, "tf-test-home") + if err != nil { + t.Fatalf("Error creating homedir-based temporary file: %s", err) + } + defer os.Remove(fh.Name()) _, err = io.WriteString(fx, "") if err != nil { t.Fatalf("Error writing XML File: %s", err) } - - log.Printf("fx name: %s", fx.Name()) fx.Close() + r := strings.NewReplacer(home, "~") + homePath := r.Replace(fh.Name()) + cases := []struct { Input string // String of XML or a path to an XML file W int // expected count of warnings @@ -89,6 +104,7 @@ func TestAzure_validateSettingsFile(t *testing.T) { {"test", 1, 1}, {f.Name(), 1, 0}, {fx.Name(), 1, 0}, + {homePath, 1, 0}, {"", 0, 0}, } @@ -104,6 +120,53 @@ func TestAzure_validateSettingsFile(t *testing.T) { } } +func TestAzure_providerConfigure(t *testing.T) { + home, err := homedir.Dir() + if err != nil { + t.Fatalf("Error fetching homedir: %s", err) + } + fh, err := ioutil.TempFile(home, "tf-test-home") + if err != nil { + t.Fatalf("Error creating homedir-based temporary file: %s", err) + } + defer os.Remove(fh.Name()) + + _, err = io.WriteString(fh, testAzurePublishSettingsStr) + if err != nil { + t.Fatalf("err: %s", err) + } + 
fh.Close() + + r := strings.NewReplacer(home, "~") + homePath := r.Replace(fh.Name()) + + cases := []struct { + SettingsFile string // String of XML or a path to an XML file + NilMeta bool // whether meta is expected to be nil + }{ + {testAzurePublishSettingsStr, false}, + {homePath, false}, + } + + for _, tc := range cases { + rp := Provider() + raw := map[string]interface{}{ + "settings_file": tc.SettingsFile, + } + + rawConfig, err := config.NewRawConfig(raw) + if err != nil { + t.Fatalf("err: %s", err) + } + + err = rp.Configure(terraform.NewResourceConfig(rawConfig)) + meta := rp.(*schema.Provider).Meta() + if (meta == nil) != tc.NilMeta { + t.Fatalf("expected NilMeta: %t, got meta: %#v", tc.NilMeta, meta) + } + } +} + func TestAzure_isFile(t *testing.T) { f, err := ioutil.TempFile("", "tf-test-file") if err != nil { @@ -129,3 +192,19 @@ func TestAzure_isFile(t *testing.T) { } } } + +// testAzurePublishSettingsStr is a revoked publishsettings file +const testAzurePublishSettingsStr = ` + + + + + + +` diff --git a/builtin/providers/azure/resource_azure_instance.go b/builtin/providers/azure/resource_azure_instance.go index fb264f28e..c95285ec2 100644 --- a/builtin/providers/azure/resource_azure_instance.go +++ b/builtin/providers/azure/resource_azure_instance.go @@ -297,15 +297,15 @@ func resourceAzureInstanceCreate(d *schema.ResourceData, meta interface{}) (err if err != nil { return fmt.Errorf("Error configuring %s for Windows: %s", name, err) } - + if domain_name, ok := d.GetOk("domain_name"); ok { err = vmutils.ConfigureWindowsToJoinDomain( - &role, - d.Get("domain_username").(string), - d.Get("domain_password").(string), - domain_name.(string), + &role, + d.Get("domain_username").(string), + d.Get("domain_password").(string), + domain_name.(string), d.Get("domain_ou").(string), - ) + ) if err != nil { return fmt.Errorf("Error configuring %s for WindowsToJoinDomain: %s", name, err) } diff --git a/builtin/providers/azure/resource_azure_storage_blob.go b/builtin/providers/azure/resource_azure_storage_blob.go index 4e870e0ad..9a3dca1a9 100644 --- a/builtin/providers/azure/resource_azure_storage_blob.go +++ b/builtin/providers/azure/resource_azure_storage_blob.go @@ -13,7 +13,6 @@ func resourceAzureStorageBlob() *schema.Resource { return &schema.Resource{ Create: resourceAzureStorageBlobCreate, Read: resourceAzureStorageBlobRead, - Update: resourceAzureStorageBlobUpdate, Exists: resourceAzureStorageBlobExists, Delete: resourceAzureStorageBlobDelete, @@ -122,17 +121,6 @@ func resourceAzureStorageBlobRead(d *schema.ResourceData, meta interface{}) erro return nil } -// resourceAzureStorageBlobUpdate does all the necessary API calls to -// update a blob on Azure. -func resourceAzureStorageBlobUpdate(d *schema.ResourceData, meta interface{}) error { - // NOTE: although empty as most parameters have ForceNew set; this is - // still required in case of changes to the storage_service_key - - // run the ExistsFunc beforehand to ensure the resource's existence nonetheless: - _, err := resourceAzureStorageBlobExists(d, meta) - return err -} - // resourceAzureStorageBlobExists does all the necessary API calls to // check for the existence of the blob on Azure. 
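// Stepping back to the Azure settings_file handling above: readSettings
// centralizes the "XML contents or file path?" decision that used to be
// duplicated between validateSettingsFile and providerConfigure. Condensed
// into a standalone sketch (the settingsBytes name and stub settingsData type
// are hypothetical; the real function also accumulates deprecation warnings):
package main

import (
	"encoding/xml"
	"fmt"
	"io/ioutil"

	"github.com/mitchellh/go-homedir"
)

type settingsData struct{} // stand-in for the provider's publishsettings type

func settingsBytes(pathOrContents string) ([]byte, error) {
	var s settingsData
	if xml.Unmarshal([]byte(pathOrContents), &s) == nil {
		return []byte(pathOrContents), nil // valid XML: use it verbatim
	}
	// Not XML: treat it as a path, expanding "~" as the new test cases do.
	path, err := homedir.Expand(pathOrContents)
	if err != nil {
		return nil, err
	}
	return ioutil.ReadFile(path)
}

func main() {
	b, err := settingsBytes("~/test.publishsettings")
	fmt.Println(len(b), err)
}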
func resourceAzureStorageBlobExists(d *schema.ResourceData, meta interface{}) (bool, error) { diff --git a/builtin/providers/cloudstack/provider_test.go b/builtin/providers/cloudstack/provider_test.go index 1207fd085..b1b8442a5 100644 --- a/builtin/providers/cloudstack/provider_test.go +++ b/builtin/providers/cloudstack/provider_test.go @@ -32,18 +32,18 @@ func testSetValueOnResourceData(t *testing.T) { d := schema.ResourceData{} d.Set("id", "name") - setValueOrUUID(&d, "id", "name", "54711781-274e-41b2-83c0-17194d0108f7") + setValueOrID(&d, "id", "name", "54711781-274e-41b2-83c0-17194d0108f7") if d.Get("id").(string) != "name" { t.Fatal("err: 'id' does not match 'name'") } } -func testSetUUIDOnResourceData(t *testing.T) { +func testSetIDOnResourceData(t *testing.T) { d := schema.ResourceData{} d.Set("id", "54711781-274e-41b2-83c0-17194d0108f7") - setValueOrUUID(&d, "id", "name", "54711781-274e-41b2-83c0-17194d0108f7") + setValueOrID(&d, "id", "name", "54711781-274e-41b2-83c0-17194d0108f7") if d.Get("id").(string) != "54711781-274e-41b2-83c0-17194d0108f7" { t.Fatal("err: 'id' doest not match '54711781-274e-41b2-83c0-17194d0108f7'") diff --git a/builtin/providers/cloudstack/resource_cloudstack_disk.go b/builtin/providers/cloudstack/resource_cloudstack_disk.go index 30e7950da..63a788f66 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_disk.go +++ b/builtin/providers/cloudstack/resource_cloudstack_disk.go @@ -80,12 +80,12 @@ func resourceCloudStackDiskCreate(d *schema.ResourceData, meta interface{}) erro // Create a new parameter struct p := cs.Volume.NewCreateVolumeParams(name) - // Retrieve the disk_offering UUID - diskofferingid, e := retrieveUUID(cs, "disk_offering", d.Get("disk_offering").(string)) + // Retrieve the disk_offering ID + diskofferingid, e := retrieveID(cs, "disk_offering", d.Get("disk_offering").(string)) if e != nil { return e.Error() } - // Set the disk_offering UUID + // Set the disk_offering ID p.SetDiskofferingid(diskofferingid) if d.Get("size").(int) != 0 { @@ -95,8 +95,8 @@ func resourceCloudStackDiskCreate(d *schema.ResourceData, meta interface{}) erro // If there is a project supplied, we retrieve and set the project id if project, ok := d.GetOk("project"); ok { - // Retrieve the project UUID - projectid, e := retrieveUUID(cs, "project", project.(string)) + // Retrieve the project ID + projectid, e := retrieveID(cs, "project", project.(string)) if e != nil { return e.Error() } @@ -104,8 +104,8 @@ func resourceCloudStackDiskCreate(d *schema.ResourceData, meta interface{}) erro p.SetProjectid(projectid) } - // Retrieve the zone UUID - zoneid, e := retrieveUUID(cs, "zone", d.Get("zone").(string)) + // Retrieve the zone ID + zoneid, e := retrieveID(cs, "zone", d.Get("zone").(string)) if e != nil { return e.Error() } @@ -118,7 +118,7 @@ func resourceCloudStackDiskCreate(d *schema.ResourceData, meta interface{}) erro return fmt.Errorf("Error creating the new disk %s: %s", name, err) } - // Set the volume UUID and partials + // Set the volume ID and partials d.SetId(r.Id) d.SetPartial("name") d.SetPartial("device") @@ -160,9 +160,9 @@ func resourceCloudStackDiskRead(d *schema.ResourceData, meta interface{}) error d.Set("attach", v.Attached != "") // If attached this will contain a timestamp when attached d.Set("size", int(v.Size/(1024*1024*1024))) // Needed to get GB's again - setValueOrUUID(d, "disk_offering", v.Diskofferingname, v.Diskofferingid) - setValueOrUUID(d, "project", v.Project, v.Projectid) - setValueOrUUID(d, "zone", v.Zonename, v.Zoneid) + 
setValueOrID(d, "disk_offering", v.Diskofferingname, v.Diskofferingid) + setValueOrID(d, "project", v.Project, v.Projectid) + setValueOrID(d, "zone", v.Zonename, v.Zoneid) if v.Attached != "" { // Get the virtual machine details @@ -184,7 +184,7 @@ func resourceCloudStackDiskRead(d *schema.ResourceData, meta interface{}) error } d.Set("device", retrieveDeviceName(v.Deviceid, c.Name)) - setValueOrUUID(d, "virtual_machine", v.Vmname, v.Virtualmachineid) + setValueOrID(d, "virtual_machine", v.Vmname, v.Virtualmachineid) } return nil @@ -205,13 +205,13 @@ func resourceCloudStackDiskUpdate(d *schema.ResourceData, meta interface{}) erro // Create a new parameter struct p := cs.Volume.NewResizeVolumeParams(d.Id()) - // Retrieve the disk_offering UUID - diskofferingid, e := retrieveUUID(cs, "disk_offering", d.Get("disk_offering").(string)) + // Retrieve the disk_offering ID + diskofferingid, e := retrieveID(cs, "disk_offering", d.Get("disk_offering").(string)) if e != nil { return e.Error() } - // Set the disk_offering UUID + // Set the disk_offering ID p.SetDiskofferingid(diskofferingid) if d.Get("size").(int) != 0 { @@ -228,7 +228,7 @@ func resourceCloudStackDiskUpdate(d *schema.ResourceData, meta interface{}) erro return fmt.Errorf("Error changing disk offering/size for disk %s: %s", name, err) } - // Update the volume UUID and set partials + // Update the volume ID and set partials d.SetId(r.Id) d.SetPartial("disk_offering") d.SetPartial("size") @@ -278,7 +278,7 @@ func resourceCloudStackDiskDelete(d *schema.ResourceData, meta interface{}) erro // Delete the voluem if _, err := cs.Volume.DeleteVolume(p); err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", d.Id())) { @@ -299,8 +299,8 @@ func resourceCloudStackDiskAttach(d *schema.ResourceData, meta interface{}) erro return err } - // Retrieve the virtual_machine UUID - virtualmachineid, e := retrieveUUID(cs, "virtual_machine", d.Get("virtual_machine").(string)) + // Retrieve the virtual_machine ID + virtualmachineid, e := retrieveID(cs, "virtual_machine", d.Get("virtual_machine").(string)) if e != nil { return e.Error() } @@ -341,13 +341,13 @@ func resourceCloudStackDiskDetach(d *schema.ResourceData, meta interface{}) erro // Create a new parameter struct p := cs.Volume.NewDetachVolumeParams() - // Set the volume UUID + // Set the volume ID p.SetId(d.Id()) // Detach the currently attached volume if _, err := cs.Volume.DetachVolume(p); err != nil { - // Retrieve the virtual_machine UUID - virtualmachineid, e := retrieveUUID(cs, "virtual_machine", d.Get("virtual_machine").(string)) + // Retrieve the virtual_machine ID + virtualmachineid, e := retrieveID(cs, "virtual_machine", d.Get("virtual_machine").(string)) if e != nil { return e.Error() } diff --git a/builtin/providers/cloudstack/resource_cloudstack_egress_firewall.go b/builtin/providers/cloudstack/resource_cloudstack_egress_firewall.go index e61ac0173..55209eadf 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_egress_firewall.go +++ b/builtin/providers/cloudstack/resource_cloudstack_egress_firewall.go @@ -89,8 +89,8 @@ func resourceCloudStackEgressFirewallCreate(d *schema.ResourceData, meta interfa return err } - // Retrieve the network UUID - networkid, e := retrieveUUID(cs, "network", d.Get("network").(string)) + // Retrieve the 
network ID + networkid, e := retrieveID(cs, "network", d.Get("network").(string)) if e != nil { return e.Error() } @@ -222,7 +222,7 @@ func resourceCloudStackEgressFirewallRead(d *schema.ResourceData, meta interface // Get the rule r, count, err := cs.Firewall.GetEgressFirewallRuleByID(id.(string)) - // If the count == 0, there is no object found for this UUID + // If the count == 0, there is no object found for this ID if err != nil { if count == 0 { delete(uuids, "icmp") @@ -415,7 +415,7 @@ func resourceCloudStackEgressFirewallDeleteRule( // Delete the rule if _, err := cs.Firewall.DeleteEgressFirewallRule(p); err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", id.(string))) { diff --git a/builtin/providers/cloudstack/resource_cloudstack_egress_firewall_test.go b/builtin/providers/cloudstack/resource_cloudstack_egress_firewall_test.go index 05c3b985e..dbca8c32b 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_egress_firewall_test.go +++ b/builtin/providers/cloudstack/resource_cloudstack_egress_firewall_test.go @@ -123,13 +123,13 @@ func testAccCheckCloudStackEgressFirewallRulesExist(n string) resource.TestCheck return fmt.Errorf("No firewall ID is set") } - for k, uuid := range rs.Primary.Attributes { + for k, id := range rs.Primary.Attributes { if !strings.Contains(k, ".uuids.") || strings.HasSuffix(k, ".uuids.#") { continue } cs := testAccProvider.Meta().(*cloudstack.CloudStackClient) - _, count, err := cs.Firewall.GetEgressFirewallRuleByID(uuid) + _, count, err := cs.Firewall.GetEgressFirewallRuleByID(id) if err != nil { return err @@ -156,12 +156,12 @@ func testAccCheckCloudStackEgressFirewallDestroy(s *terraform.State) error { return fmt.Errorf("No instance ID is set") } - for k, uuid := range rs.Primary.Attributes { + for k, id := range rs.Primary.Attributes { if !strings.Contains(k, ".uuids.") || strings.HasSuffix(k, ".uuids.#") { continue } - _, _, err := cs.Firewall.GetEgressFirewallRuleByID(uuid) + _, _, err := cs.Firewall.GetEgressFirewallRuleByID(id) if err == nil { return fmt.Errorf("Egress rule %s still exists", rs.Primary.ID) } diff --git a/builtin/providers/cloudstack/resource_cloudstack_firewall.go b/builtin/providers/cloudstack/resource_cloudstack_firewall.go index 48a780545..1e7ff8e70 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_firewall.go +++ b/builtin/providers/cloudstack/resource_cloudstack_firewall.go @@ -89,8 +89,8 @@ func resourceCloudStackFirewallCreate(d *schema.ResourceData, meta interface{}) return err } - // Retrieve the ipaddress UUID - ipaddressid, e := retrieveUUID(cs, "ipaddress", d.Get("ipaddress").(string)) + // Retrieve the ipaddress ID + ipaddressid, e := retrieveID(cs, "ipaddress", d.Get("ipaddress").(string)) if e != nil { return e.Error() } @@ -222,7 +222,7 @@ func resourceCloudStackFirewallRead(d *schema.ResourceData, meta interface{}) er // Get the rule r, count, err := cs.Firewall.GetFirewallRuleByID(id.(string)) - // If the count == 0, there is no object found for this UUID + // If the count == 0, there is no object found for this ID if err != nil { if count == 0 { delete(uuids, "icmp") @@ -415,7 +415,7 @@ func resourceCloudStackFirewallDeleteRule( // Delete the rule if _, err := cs.Firewall.DeleteFirewallRule(p); err != nil { - // This is a very poor way to be told 
the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", id.(string))) { diff --git a/builtin/providers/cloudstack/resource_cloudstack_firewall_test.go b/builtin/providers/cloudstack/resource_cloudstack_firewall_test.go index 2be2cebea..a86cdc3b2 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_firewall_test.go +++ b/builtin/providers/cloudstack/resource_cloudstack_firewall_test.go @@ -110,13 +110,13 @@ func testAccCheckCloudStackFirewallRulesExist(n string) resource.TestCheckFunc { return fmt.Errorf("No firewall ID is set") } - for k, uuid := range rs.Primary.Attributes { + for k, id := range rs.Primary.Attributes { if !strings.Contains(k, ".uuids.") || strings.HasSuffix(k, ".uuids.#") { continue } cs := testAccProvider.Meta().(*cloudstack.CloudStackClient) - _, count, err := cs.Firewall.GetFirewallRuleByID(uuid) + _, count, err := cs.Firewall.GetFirewallRuleByID(id) if err != nil { return err @@ -143,12 +143,12 @@ func testAccCheckCloudStackFirewallDestroy(s *terraform.State) error { return fmt.Errorf("No instance ID is set") } - for k, uuid := range rs.Primary.Attributes { + for k, id := range rs.Primary.Attributes { if !strings.Contains(k, ".uuids.") || strings.HasSuffix(k, ".uuids.#") { continue } - _, _, err := cs.Firewall.GetFirewallRuleByID(uuid) + _, _, err := cs.Firewall.GetFirewallRuleByID(id) if err == nil { return fmt.Errorf("Firewall rule %s still exists", rs.Primary.ID) } diff --git a/builtin/providers/cloudstack/resource_cloudstack_instance.go b/builtin/providers/cloudstack/resource_cloudstack_instance.go index ea5d85caf..504a2dbbf 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_instance.go +++ b/builtin/providers/cloudstack/resource_cloudstack_instance.go @@ -100,14 +100,14 @@ func resourceCloudStackInstance() *schema.Resource { func resourceCloudStackInstanceCreate(d *schema.ResourceData, meta interface{}) error { cs := meta.(*cloudstack.CloudStackClient) - // Retrieve the service_offering UUID - serviceofferingid, e := retrieveUUID(cs, "service_offering", d.Get("service_offering").(string)) + // Retrieve the service_offering ID + serviceofferingid, e := retrieveID(cs, "service_offering", d.Get("service_offering").(string)) if e != nil { return e.Error() } - // Retrieve the zone UUID - zoneid, e := retrieveUUID(cs, "zone", d.Get("zone").(string)) + // Retrieve the zone ID + zoneid, e := retrieveID(cs, "zone", d.Get("zone").(string)) if e != nil { return e.Error() } @@ -118,8 +118,8 @@ func resourceCloudStackInstanceCreate(d *schema.ResourceData, meta interface{}) return err } - // Retrieve the template UUID - templateid, e := retrieveTemplateUUID(cs, zone.Id, d.Get("template").(string)) + // Retrieve the template ID + templateid, e := retrieveTemplateID(cs, zone.Id, d.Get("template").(string)) if e != nil { return e.Error() } @@ -139,8 +139,8 @@ func resourceCloudStackInstanceCreate(d *schema.ResourceData, meta interface{}) } if zone.Networktype == "Advanced" { - // Retrieve the network UUID - networkid, e := retrieveUUID(cs, "network", d.Get("network").(string)) + // Retrieve the network ID + networkid, e := retrieveID(cs, "network", d.Get("network").(string)) if e != nil { return e.Error() } @@ -155,8 +155,8 @@ func resourceCloudStackInstanceCreate(d *schema.ResourceData, meta interface{}) // If there is a project supplied, we retrieve and set the project 
id if project, ok := d.GetOk("project"); ok { - // Retrieve the project UUID - projectid, e := retrieveUUID(cs, "project", project.(string)) + // Retrieve the project ID + projectid, e := retrieveID(cs, "project", project.(string)) if e != nil { return e.Error() } @@ -229,11 +229,11 @@ func resourceCloudStackInstanceRead(d *schema.ResourceData, meta interface{}) er d.Set("ipaddress", vm.Nic[0].Ipaddress) //NB cloudstack sometimes sends back the wrong keypair name, so dont update it - setValueOrUUID(d, "network", vm.Nic[0].Networkname, vm.Nic[0].Networkid) - setValueOrUUID(d, "service_offering", vm.Serviceofferingname, vm.Serviceofferingid) - setValueOrUUID(d, "template", vm.Templatename, vm.Templateid) - setValueOrUUID(d, "project", vm.Project, vm.Projectid) - setValueOrUUID(d, "zone", vm.Zonename, vm.Zoneid) + setValueOrID(d, "network", vm.Nic[0].Networkname, vm.Nic[0].Networkid) + setValueOrID(d, "service_offering", vm.Serviceofferingname, vm.Serviceofferingid) + setValueOrID(d, "template", vm.Templatename, vm.Templateid) + setValueOrID(d, "project", vm.Project, vm.Projectid) + setValueOrID(d, "zone", vm.Zonename, vm.Zoneid) return nil } @@ -278,8 +278,8 @@ func resourceCloudStackInstanceUpdate(d *schema.ResourceData, meta interface{}) if d.HasChange("service_offering") { log.Printf("[DEBUG] Service offering changed for %s, starting update", name) - // Retrieve the service_offering UUID - serviceofferingid, e := retrieveUUID(cs, "service_offering", d.Get("service_offering").(string)) + // Retrieve the service_offering ID + serviceofferingid, e := retrieveID(cs, "service_offering", d.Get("service_offering").(string)) if e != nil { return e.Error() } @@ -335,7 +335,7 @@ func resourceCloudStackInstanceDelete(d *schema.ResourceData, meta interface{}) log.Printf("[INFO] Destroying instance: %s", d.Get("name").(string)) if _, err := cs.VirtualMachine.DestroyVirtualMachine(p); err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", d.Id())) { diff --git a/builtin/providers/cloudstack/resource_cloudstack_ipaddress.go b/builtin/providers/cloudstack/resource_cloudstack_ipaddress.go index 7d958d104..e2e590f6b 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_ipaddress.go +++ b/builtin/providers/cloudstack/resource_cloudstack_ipaddress.go @@ -53,8 +53,8 @@ func resourceCloudStackIPAddressCreate(d *schema.ResourceData, meta interface{}) p := cs.Address.NewAssociateIpAddressParams() if network, ok := d.GetOk("network"); ok { - // Retrieve the network UUID - networkid, e := retrieveUUID(cs, "network", network.(string)) + // Retrieve the network ID + networkid, e := retrieveID(cs, "network", network.(string)) if e != nil { return e.Error() } @@ -64,8 +64,8 @@ func resourceCloudStackIPAddressCreate(d *schema.ResourceData, meta interface{}) } if vpc, ok := d.GetOk("vpc"); ok { - // Retrieve the vpc UUID - vpcid, e := retrieveUUID(cs, "vpc", vpc.(string)) + // Retrieve the vpc ID + vpcid, e := retrieveID(cs, "vpc", vpc.(string)) if e != nil { return e.Error() } @@ -76,8 +76,8 @@ func resourceCloudStackIPAddressCreate(d *schema.ResourceData, meta interface{}) // If there is a project supplied, we retrieve and set the project id if project, ok := d.GetOk("project"); ok { - // Retrieve the project UUID - projectid, e := retrieveUUID(cs, "project", 
project.(string)) + // Retrieve the project ID + projectid, e := retrieveID(cs, "project", project.(string)) if e != nil { return e.Error() } @@ -122,7 +122,7 @@ func resourceCloudStackIPAddressRead(d *schema.ResourceData, meta interface{}) e return err } - setValueOrUUID(d, "network", n.Name, f.Associatednetworkid) + setValueOrID(d, "network", n.Name, f.Associatednetworkid) } if _, ok := d.GetOk("vpc"); ok { @@ -132,10 +132,10 @@ func resourceCloudStackIPAddressRead(d *schema.ResourceData, meta interface{}) e return err } - setValueOrUUID(d, "vpc", v.Name, f.Vpcid) + setValueOrID(d, "vpc", v.Name, f.Vpcid) } - setValueOrUUID(d, "project", f.Project, f.Projectid) + setValueOrID(d, "project", f.Project, f.Projectid) return nil } @@ -148,7 +148,7 @@ func resourceCloudStackIPAddressDelete(d *schema.ResourceData, meta interface{}) // Disassociate the IP address if _, err := cs.Address.DisassociateIpAddress(p); err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", d.Id())) { @@ -165,7 +165,7 @@ func verifyIPAddressParams(d *schema.ResourceData) error { _, network := d.GetOk("network") _, vpc := d.GetOk("vpc") - if (network && vpc) || (!network && !vpc) { + if network && vpc || !network && !vpc { return fmt.Errorf( "You must supply a value for either (so not both) the 'network' or 'vpc' parameter") } diff --git a/builtin/providers/cloudstack/resource_cloudstack_loadbalancer.go b/builtin/providers/cloudstack/resource_cloudstack_loadbalancer.go index 73bf1d493..6f8d5473f 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_loadbalancer.go +++ b/builtin/providers/cloudstack/resource_cloudstack_loadbalancer.go @@ -89,9 +89,9 @@ func resourceCloudStackLoadBalancerRuleCreate(d *schema.ResourceData, meta inter p.SetDescription(d.Get("name").(string)) } - // Retrieve the network and the UUID + // Retrieve the network and the ID if network, ok := d.GetOk("network"); ok { - networkid, e := retrieveUUID(cs, "network", network.(string)) + networkid, e := retrieveID(cs, "network", network.(string)) if e != nil { return e.Error() } @@ -100,8 +100,8 @@ func resourceCloudStackLoadBalancerRuleCreate(d *schema.ResourceData, meta inter p.SetNetworkid(networkid) } - // Retrieve the ipaddress UUID - ipaddressid, e := retrieveUUID(cs, "ipaddress", d.Get("ipaddress").(string)) + // Retrieve the ipaddress ID + ipaddressid, e := retrieveID(cs, "ipaddress", d.Get("ipaddress").(string)) if e != nil { return e.Error() } @@ -113,7 +113,7 @@ func resourceCloudStackLoadBalancerRuleCreate(d *schema.ResourceData, meta inter return err } - // Set the load balancer rule UUID and set partials + // Set the load balancer rule ID and set partials d.SetId(r.Id) d.SetPartial("name") d.SetPartial("description") @@ -163,7 +163,7 @@ func resourceCloudStackLoadBalancerRuleRead(d *schema.ResourceData, meta interfa d.Set("public_port", lb.Publicport) d.Set("private_port", lb.Privateport) - setValueOrUUID(d, "ipaddress", lb.Publicip, lb.Publicipid) + setValueOrID(d, "ipaddress", lb.Publicip, lb.Publicipid) // Only set network if user specified it to avoid spurious diffs if _, ok := d.GetOk("network"); ok { @@ -171,7 +171,7 @@ func resourceCloudStackLoadBalancerRuleRead(d *schema.ResourceData, meta interfa if err != nil { return err } - setValueOrUUID(d, "network", network.Name, lb.Networkid) 
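The setValueOrUUID → setValueOrID substitution running through these hunks is more than a rename. As defined later in this diff (builtin/providers/cloudstack/resources.go), the helper stores whichever form the user wrote in the configuration, and isID now also accepts the special unlimited-resource ID "-1". A condensed, self-contained sketch of that pair (the empty-string/UnlimitedResourceID special case is omitted for brevity):

package main

import (
	"fmt"
	"regexp"

	"github.com/hashicorp/terraform/helper/schema"
)

// isID reports whether s is already a CloudStack UUID or the special
// unlimited-resource ID "-1" introduced by this change.
func isID(s string) bool {
	re := regexp.MustCompile(
		`^([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}|-1)$`)
	return re.MatchString(s)
}

// setValueOrID mirrors the user's configuration: if the config already
// holds an ID, store the API-returned ID; otherwise store the
// human-readable name. Either way the stored value matches what the
// user wrote, avoiding spurious diffs on the next plan.
func setValueOrID(d *schema.ResourceData, key, value, id string) {
	if isID(d.Get(key).(string)) {
		d.Set(key, id)
	} else {
		d.Set(key, value)
	}
}

func main() {
	fmt.Println(isID("30af45cf-30a2-4a49-a1e5-3e0e54f0b97c")) // true
	fmt.Println(isID("-1"))                                   // true
	fmt.Println(isID("my-network"))                           // false
}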
+ setValueOrID(d, "network", network.Name, lb.Networkid) } return nil @@ -229,7 +229,7 @@ func resourceCloudStackLoadBalancerRuleDelete(d *schema.ResourceData, meta inter log.Printf("[INFO] Deleting load balancer rule: %s", d.Get("name").(string)) if _, err := cs.LoadBalancer.DeleteLoadBalancerRule(p); err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if !strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", d.Id())) { diff --git a/builtin/providers/cloudstack/resource_cloudstack_loadbalancer_test.go b/builtin/providers/cloudstack/resource_cloudstack_loadbalancer_test.go index 59e119b16..a316d5988 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_loadbalancer_test.go +++ b/builtin/providers/cloudstack/resource_cloudstack_loadbalancer_test.go @@ -223,12 +223,12 @@ func testAccCheckCloudStackLoadBalancerRuleDestroy(s *terraform.State) error { return fmt.Errorf("No Loadbalancer rule ID is set") } - for k, uuid := range rs.Primary.Attributes { + for k, id := range rs.Primary.Attributes { if !strings.Contains(k, "uuid") { continue } - _, _, err := cs.LoadBalancer.GetLoadBalancerRuleByID(uuid) + _, _, err := cs.LoadBalancer.GetLoadBalancerRuleByID(id) if err == nil { return fmt.Errorf("Loadbalancer rule %s still exists", rs.Primary.ID) } diff --git a/builtin/providers/cloudstack/resource_cloudstack_network.go b/builtin/providers/cloudstack/resource_cloudstack_network.go index 9be63b927..a76beae32 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_network.go +++ b/builtin/providers/cloudstack/resource_cloudstack_network.go @@ -72,14 +72,14 @@ func resourceCloudStackNetworkCreate(d *schema.ResourceData, meta interface{}) e name := d.Get("name").(string) - // Retrieve the network_offering UUID - networkofferingid, e := retrieveUUID(cs, "network_offering", d.Get("network_offering").(string)) + // Retrieve the network_offering ID + networkofferingid, e := retrieveID(cs, "network_offering", d.Get("network_offering").(string)) if e != nil { return e.Error() } - // Retrieve the zone UUID - zoneid, e := retrieveUUID(cs, "zone", d.Get("zone").(string)) + // Retrieve the zone ID + zoneid, e := retrieveID(cs, "zone", d.Get("zone").(string)) if e != nil { return e.Error() } @@ -108,27 +108,27 @@ func resourceCloudStackNetworkCreate(d *schema.ResourceData, meta interface{}) e // Check is this network needs to be created in a VPC vpc := d.Get("vpc").(string) if vpc != "" { - // Retrieve the vpc UUID - vpcid, e := retrieveUUID(cs, "vpc", vpc) + // Retrieve the vpc ID + vpcid, e := retrieveID(cs, "vpc", vpc) if e != nil { return e.Error() } - // Set the vpc UUID + // Set the vpc ID p.SetVpcid(vpcid) // Since we're in a VPC, check if we want to assiciate an ACL list aclid := d.Get("aclid").(string) if aclid != "" { - // Set the acl UUID + // Set the acl ID p.SetAclid(aclid) } } // If there is a project supplied, we retrieve and set the project id if project, ok := d.GetOk("project"); ok { - // Retrieve the project UUID - projectid, e := retrieveUUID(cs, "project", project.(string)) + // Retrieve the project ID + projectid, e := retrieveID(cs, "project", project.(string)) if e != nil { return e.Error() } @@ -167,9 +167,9 @@ func resourceCloudStackNetworkRead(d *schema.ResourceData, meta interface{}) err d.Set("display_text", n.Displaytext) d.Set("cidr", n.Cidr) - setValueOrUUID(d, "network_offering", 
n.Networkofferingname, n.Networkofferingid) - setValueOrUUID(d, "project", n.Project, n.Projectid) - setValueOrUUID(d, "zone", n.Zonename, n.Zoneid) + setValueOrID(d, "network_offering", n.Networkofferingname, n.Networkofferingid) + setValueOrID(d, "project", n.Project, n.Projectid) + setValueOrID(d, "zone", n.Zonename, n.Zoneid) return nil } @@ -200,8 +200,8 @@ func resourceCloudStackNetworkUpdate(d *schema.ResourceData, meta interface{}) e // Check if the network offering is changed if d.HasChange("network_offering") { - // Retrieve the network_offering UUID - networkofferingid, e := retrieveUUID(cs, "network_offering", d.Get("network_offering").(string)) + // Retrieve the network_offering ID + networkofferingid, e := retrieveID(cs, "network_offering", d.Get("network_offering").(string)) if e != nil { return e.Error() } @@ -228,7 +228,7 @@ func resourceCloudStackNetworkDelete(d *schema.ResourceData, meta interface{}) e // Delete the network _, err := cs.Network.DeleteNetwork(p) if err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", d.Id())) { diff --git a/builtin/providers/cloudstack/resource_cloudstack_network_acl.go b/builtin/providers/cloudstack/resource_cloudstack_network_acl.go index 7f073bbf8..2504b762b 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_network_acl.go +++ b/builtin/providers/cloudstack/resource_cloudstack_network_acl.go @@ -43,8 +43,8 @@ func resourceCloudStackNetworkACLCreate(d *schema.ResourceData, meta interface{} name := d.Get("name").(string) - // Retrieve the vpc UUID - vpcid, e := retrieveUUID(cs, "vpc", d.Get("vpc").(string)) + // Retrieve the vpc ID + vpcid, e := retrieveID(cs, "vpc", d.Get("vpc").(string)) if e != nil { return e.Error() } @@ -95,7 +95,7 @@ func resourceCloudStackNetworkACLRead(d *schema.ResourceData, meta interface{}) return err } - setValueOrUUID(d, "vpc", v.Name, v.Id) + setValueOrID(d, "vpc", v.Name, v.Id) return nil } @@ -109,7 +109,7 @@ func resourceCloudStackNetworkACLDelete(d *schema.ResourceData, meta interface{} // Delete the network ACL list _, err := cs.NetworkACL.DeleteNetworkACLList(p) if err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", d.Id())) { diff --git a/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule.go b/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule.go index fceeb7d45..ba2650484 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule.go +++ b/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule.go @@ -247,7 +247,7 @@ func resourceCloudStackNetworkACLRuleRead(d *schema.ResourceData, meta interface // Get the rule r, count, err := cs.NetworkACL.GetNetworkACLByID(id.(string)) - // If the count == 0, there is no object found for this UUID + // If the count == 0, there is no object found for this ID if err != nil { if count == 0 { delete(uuids, "icmp") @@ -275,7 +275,7 @@ func resourceCloudStackNetworkACLRuleRead(d *schema.ResourceData, meta interface // Get the rule r, count, err := cs.NetworkACL.GetNetworkACLByID(id.(string)) - // If the count == 
0, there is no object found for this UUID + // If the count == 0, there is no object found for this ID if err != nil { if count == 0 { delete(uuids, "all") @@ -469,7 +469,7 @@ func resourceCloudStackNetworkACLRuleDeleteRule( // Delete the rule if _, err := cs.NetworkACL.DeleteNetworkACL(p); err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", id.(string))) { diff --git a/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule_test.go b/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule_test.go index 5f450f931..6f2370f5b 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule_test.go +++ b/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule_test.go @@ -122,13 +122,13 @@ func testAccCheckCloudStackNetworkACLRulesExist(n string) resource.TestCheckFunc return fmt.Errorf("No network ACL rule ID is set") } - for k, uuid := range rs.Primary.Attributes { + for k, id := range rs.Primary.Attributes { if !strings.Contains(k, ".uuids.") || strings.HasSuffix(k, ".uuids.#") { continue } cs := testAccProvider.Meta().(*cloudstack.CloudStackClient) - _, count, err := cs.NetworkACL.GetNetworkACLByID(uuid) + _, count, err := cs.NetworkACL.GetNetworkACLByID(id) if err != nil { return err @@ -155,12 +155,12 @@ func testAccCheckCloudStackNetworkACLRuleDestroy(s *terraform.State) error { return fmt.Errorf("No network ACL rule ID is set") } - for k, uuid := range rs.Primary.Attributes { + for k, id := range rs.Primary.Attributes { if !strings.Contains(k, ".uuids.") || strings.HasSuffix(k, ".uuids.#") { continue } - _, _, err := cs.NetworkACL.GetNetworkACLByID(uuid) + _, _, err := cs.NetworkACL.GetNetworkACLByID(id) if err == nil { return fmt.Errorf("Network ACL rule %s still exists", rs.Primary.ID) } diff --git a/builtin/providers/cloudstack/resource_cloudstack_nic.go b/builtin/providers/cloudstack/resource_cloudstack_nic.go index 2eb89c80b..e118a5fe9 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_nic.go +++ b/builtin/providers/cloudstack/resource_cloudstack_nic.go @@ -41,14 +41,14 @@ func resourceCloudStackNIC() *schema.Resource { func resourceCloudStackNICCreate(d *schema.ResourceData, meta interface{}) error { cs := meta.(*cloudstack.CloudStackClient) - // Retrieve the network UUID - networkid, e := retrieveUUID(cs, "network", d.Get("network").(string)) + // Retrieve the network ID + networkid, e := retrieveID(cs, "network", d.Get("network").(string)) if e != nil { return e.Error() } - // Retrieve the virtual_machine UUID - virtualmachineid, e := retrieveUUID(cs, "virtual_machine", d.Get("virtual_machine").(string)) + // Retrieve the virtual_machine ID + virtualmachineid, e := retrieveID(cs, "virtual_machine", d.Get("virtual_machine").(string)) if e != nil { return e.Error() } @@ -103,8 +103,8 @@ func resourceCloudStackNICRead(d *schema.ResourceData, meta interface{}) error { for _, n := range vm.Nic { if n.Id == d.Id() { d.Set("ipaddress", n.Ipaddress) - setValueOrUUID(d, "network", n.Networkname, n.Networkid) - setValueOrUUID(d, "virtual_machine", vm.Name, vm.Id) + setValueOrID(d, "network", n.Networkname, n.Networkid) + setValueOrID(d, "virtual_machine", vm.Name, vm.Id) found = true break } @@ -121,8 +121,8 @@ func resourceCloudStackNICRead(d *schema.ResourceData, meta interface{}) error 
{ func resourceCloudStackNICDelete(d *schema.ResourceData, meta interface{}) error { cs := meta.(*cloudstack.CloudStackClient) - // Retrieve the virtual_machine UUID - virtualmachineid, e := retrieveUUID(cs, "virtual_machine", d.Get("virtual_machine").(string)) + // Retrieve the virtual_machine ID + virtualmachineid, e := retrieveID(cs, "virtual_machine", d.Get("virtual_machine").(string)) if e != nil { return e.Error() } @@ -133,7 +133,7 @@ func resourceCloudStackNICDelete(d *schema.ResourceData, meta interface{}) error // Remove the NIC _, err := cs.VirtualMachine.RemoveNicFromVirtualMachine(p) if err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", d.Id())) { diff --git a/builtin/providers/cloudstack/resource_cloudstack_port_forward.go b/builtin/providers/cloudstack/resource_cloudstack_port_forward.go index 3781fc1ae..0bec41af5 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_port_forward.go +++ b/builtin/providers/cloudstack/resource_cloudstack_port_forward.go @@ -72,8 +72,8 @@ func resourceCloudStackPortForward() *schema.Resource { func resourceCloudStackPortForwardCreate(d *schema.ResourceData, meta interface{}) error { cs := meta.(*cloudstack.CloudStackClient) - // Retrieve the ipaddress UUID - ipaddressid, e := retrieveUUID(cs, "ipaddress", d.Get("ipaddress").(string)) + // Retrieve the ipaddress ID + ipaddressid, e := retrieveID(cs, "ipaddress", d.Get("ipaddress").(string)) if e != nil { return e.Error() } @@ -115,8 +115,8 @@ func resourceCloudStackPortForwardCreateForward( return err } - // Retrieve the virtual_machine UUID - virtualmachineid, e := retrieveUUID(cs, "virtual_machine", forward["virtual_machine"].(string)) + // Retrieve the virtual_machine ID + virtualmachineid, e := retrieveID(cs, "virtual_machine", forward["virtual_machine"].(string)) if e != nil { return e.Error() } @@ -167,7 +167,7 @@ func resourceCloudStackPortForwardRead(d *schema.ResourceData, meta interface{}) // Get the forward r, count, err := cs.Firewall.GetPortForwardingRuleByID(id.(string)) - // If the count == 0, there is no object found for this UUID + // If the count == 0, there is no object found for this ID if err != nil { if count == 0 { forward["uuid"] = "" @@ -192,7 +192,7 @@ func resourceCloudStackPortForwardRead(d *schema.ResourceData, meta interface{}) forward["private_port"] = privPort forward["public_port"] = pubPort - if isUUID(forward["virtual_machine"].(string)) { + if isID(forward["virtual_machine"].(string)) { forward["virtual_machine"] = r.Virtualmachineid } else { forward["virtual_machine"] = r.Virtualmachinename @@ -317,7 +317,7 @@ func resourceCloudStackPortForwardDeleteForward( // Delete the forward if _, err := cs.Firewall.DeletePortForwardingRule(p); err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if !strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", forward["uuid"].(string))) { @@ -325,6 +325,7 @@ func resourceCloudStackPortForwardDeleteForward( } } + // Empty the UUID of this rule forward["uuid"] = "" return nil diff --git a/builtin/providers/cloudstack/resource_cloudstack_port_forward_test.go 
b/builtin/providers/cloudstack/resource_cloudstack_port_forward_test.go index 39ebfe8f6..b0851753f 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_port_forward_test.go +++ b/builtin/providers/cloudstack/resource_cloudstack_port_forward_test.go @@ -102,13 +102,13 @@ func testAccCheckCloudStackPortForwardsExist(n string) resource.TestCheckFunc { return fmt.Errorf("No port forward ID is set") } - for k, uuid := range rs.Primary.Attributes { + for k, id := range rs.Primary.Attributes { if !strings.Contains(k, "uuid") { continue } cs := testAccProvider.Meta().(*cloudstack.CloudStackClient) - _, count, err := cs.Firewall.GetPortForwardingRuleByID(uuid) + _, count, err := cs.Firewall.GetPortForwardingRuleByID(id) if err != nil { return err @@ -135,12 +135,12 @@ func testAccCheckCloudStackPortForwardDestroy(s *terraform.State) error { return fmt.Errorf("No port forward ID is set") } - for k, uuid := range rs.Primary.Attributes { + for k, id := range rs.Primary.Attributes { if !strings.Contains(k, "uuid") { continue } - _, _, err := cs.Firewall.GetPortForwardingRuleByID(uuid) + _, _, err := cs.Firewall.GetPortForwardingRuleByID(id) if err == nil { return fmt.Errorf("Port forward %s still exists", rs.Primary.ID) } diff --git a/builtin/providers/cloudstack/resource_cloudstack_secondary_ipaddress.go b/builtin/providers/cloudstack/resource_cloudstack_secondary_ipaddress.go index 1c491be44..697e55eb4 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_secondary_ipaddress.go +++ b/builtin/providers/cloudstack/resource_cloudstack_secondary_ipaddress.go @@ -44,8 +44,8 @@ func resourceCloudStackSecondaryIPAddressCreate(d *schema.ResourceData, meta int nicid := d.Get("nicid").(string) if nicid == "" { - // Retrieve the virtual_machine UUID - virtualmachineid, e := retrieveUUID(cs, "virtual_machine", d.Get("virtual_machine").(string)) + // Retrieve the virtual_machine ID + virtualmachineid, e := retrieveID(cs, "virtual_machine", d.Get("virtual_machine").(string)) if e != nil { return e.Error() } @@ -84,8 +84,8 @@ func resourceCloudStackSecondaryIPAddressCreate(d *schema.ResourceData, meta int func resourceCloudStackSecondaryIPAddressRead(d *schema.ResourceData, meta interface{}) error { cs := meta.(*cloudstack.CloudStackClient) - // Retrieve the virtual_machine UUID - virtualmachineid, e := retrieveUUID(cs, "virtual_machine", d.Get("virtual_machine").(string)) + // Retrieve the virtual_machine ID + virtualmachineid, e := retrieveID(cs, "virtual_machine", d.Get("virtual_machine").(string)) if e != nil { return e.Error() } @@ -146,7 +146,7 @@ func resourceCloudStackSecondaryIPAddressDelete(d *schema.ResourceData, meta int log.Printf("[INFO] Removing secondary IP address: %s", d.Get("ipaddress").(string)) if _, err := cs.Nic.RemoveIpFromNic(p); err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", d.Id())) { diff --git a/builtin/providers/cloudstack/resource_cloudstack_secondary_ipaddress_test.go b/builtin/providers/cloudstack/resource_cloudstack_secondary_ipaddress_test.go index e0c353e20..beedcd2cb 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_secondary_ipaddress_test.go +++ b/builtin/providers/cloudstack/resource_cloudstack_secondary_ipaddress_test.go @@ -64,8 +64,8 @@ func testAccCheckCloudStackSecondaryIPAddressExists( cs := 
testAccProvider.Meta().(*cloudstack.CloudStackClient) - // Retrieve the virtual_machine UUID - virtualmachineid, e := retrieveUUID( + // Retrieve the virtual_machine ID + virtualmachineid, e := retrieveID( cs, "virtual_machine", rs.Primary.Attributes["virtual_machine"]) if e != nil { return e.Error() @@ -136,8 +136,8 @@ func testAccCheckCloudStackSecondaryIPAddressDestroy(s *terraform.State) error { return fmt.Errorf("No IP address ID is set") } - // Retrieve the virtual_machine UUID - virtualmachineid, e := retrieveUUID( + // Retrieve the virtual_machine ID + virtualmachineid, e := retrieveID( cs, "virtual_machine", rs.Primary.Attributes["virtual_machine"]) if e != nil { return e.Error() diff --git a/builtin/providers/cloudstack/resource_cloudstack_ssh_keypair.go b/builtin/providers/cloudstack/resource_cloudstack_ssh_keypair.go index 9fb859a22..8f6f0f9c5 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_ssh_keypair.go +++ b/builtin/providers/cloudstack/resource_cloudstack_ssh_keypair.go @@ -116,7 +116,7 @@ func resourceCloudStackSSHKeyPairDelete(d *schema.ResourceData, meta interface{} // Remove the SSH Keypair _, err := cs.SSH.DeleteSSHKeyPair(p) if err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "A key pair with name '%s' does not exist for account", d.Id())) { return nil diff --git a/builtin/providers/cloudstack/resource_cloudstack_template.go b/builtin/providers/cloudstack/resource_cloudstack_template.go index 15c6ebec4..04aaca22e 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_template.go +++ b/builtin/providers/cloudstack/resource_cloudstack_template.go @@ -51,6 +51,12 @@ func resourceCloudStackTemplate() *schema.Resource { ForceNew: true, }, + "project": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "zone": &schema.Schema{ Type: schema.TypeString, Required: true, @@ -118,14 +124,14 @@ func resourceCloudStackTemplateCreate(d *schema.ResourceData, meta interface{}) displaytext = name } - // Retrieve the os_type UUID - ostypeid, e := retrieveUUID(cs, "os_type", d.Get("os_type").(string)) + // Retrieve the os_type ID + ostypeid, e := retrieveID(cs, "os_type", d.Get("os_type").(string)) if e != nil { return e.Error() } - // Retrieve the zone UUID - zoneid, e := retrieveUUID(cs, "zone", d.Get("zone").(string)) + // Retrieve the zone ID + zoneid, e := retrieveID(cs, "zone", d.Get("zone").(string)) if e != nil { return e.Error() } @@ -161,6 +167,17 @@ func resourceCloudStackTemplateCreate(d *schema.ResourceData, meta interface{}) p.SetPasswordenabled(v.(bool)) } + // If there is a project supplied, we retrieve and set the project id + if project, ok := d.GetOk("project"); ok { + // Retrieve the project ID + projectid, e := retrieveID(cs, "project", project.(string)) + if e != nil { + return e.Error() + } + // Set the default project ID + p.SetProjectid(projectid) + } + // Create the new template r, err := cs.Template.RegisterTemplate(p) if err != nil { @@ -219,8 +236,9 @@ func resourceCloudStackTemplateRead(d *schema.ResourceData, meta interface{}) er d.Set("password_enabled", t.Passwordenabled) d.Set("is_ready", t.Isready) - setValueOrUUID(d, "os_type", t.Ostypename, t.Ostypeid) - setValueOrUUID(d, "zone", t.Zonename, t.Zoneid) + setValueOrID(d, "os_type", t.Ostypename, t.Ostypeid) + setValueOrID(d, "project", t.Project, t.Projectid) + setValueOrID(d, "zone", t.Zonename, 
t.Zoneid) return nil } @@ -249,7 +267,7 @@ func resourceCloudStackTemplateUpdate(d *schema.ResourceData, meta interface{}) } if d.HasChange("os_type") { - ostypeid, e := retrieveUUID(cs, "os_type", d.Get("os_type").(string)) + ostypeid, e := retrieveID(cs, "os_type", d.Get("os_type").(string)) if e != nil { return e.Error() } @@ -278,7 +296,7 @@ func resourceCloudStackTemplateDelete(d *schema.ResourceData, meta interface{}) log.Printf("[INFO] Deleting template: %s", d.Get("name").(string)) _, err := cs.Template.DeleteTemplate(p) if err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", d.Id())) { diff --git a/builtin/providers/cloudstack/resource_cloudstack_vpc.go b/builtin/providers/cloudstack/resource_cloudstack_vpc.go index 4cc07d7b6..07502e58e 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_vpc.go +++ b/builtin/providers/cloudstack/resource_cloudstack_vpc.go @@ -52,6 +52,11 @@ func resourceCloudStackVPC() *schema.Resource { ForceNew: true, }, + "source_nat_ip": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + "zone": &schema.Schema{ Type: schema.TypeString, Required: true, @@ -66,14 +71,14 @@ func resourceCloudStackVPCCreate(d *schema.ResourceData, meta interface{}) error name := d.Get("name").(string) - // Retrieve the vpc_offering UUID - vpcofferingid, e := retrieveUUID(cs, "vpc_offering", d.Get("vpc_offering").(string)) + // Retrieve the vpc_offering ID + vpcofferingid, e := retrieveID(cs, "vpc_offering", d.Get("vpc_offering").(string)) if e != nil { return e.Error() } - // Retrieve the zone UUID - zoneid, e := retrieveUUID(cs, "zone", d.Get("zone").(string)) + // Retrieve the zone ID + zoneid, e := retrieveID(cs, "zone", d.Get("zone").(string)) if e != nil { return e.Error() } @@ -101,8 +106,8 @@ func resourceCloudStackVPCCreate(d *schema.ResourceData, meta interface{}) error // If there is a project supplied, we retrieve and set the project id if project, ok := d.GetOk("project"); ok { - // Retrieve the project UUID - projectid, e := retrieveUUID(cs, "project", project.(string)) + // Retrieve the project ID + projectid, e := retrieveID(cs, "project", project.(string)) if e != nil { return e.Error() } @@ -148,9 +153,30 @@ func resourceCloudStackVPCRead(d *schema.ResourceData, meta interface{}) error { return err } - setValueOrUUID(d, "vpc_offering", o.Name, v.Vpcofferingid) - setValueOrUUID(d, "project", v.Project, v.Projectid) - setValueOrUUID(d, "zone", v.Zonename, v.Zoneid) + setValueOrID(d, "vpc_offering", o.Name, v.Vpcofferingid) + setValueOrID(d, "project", v.Project, v.Projectid) + setValueOrID(d, "zone", v.Zonename, v.Zoneid) + + // Create a new parameter struct + p := cs.Address.NewListPublicIpAddressesParams() + p.SetVpcid(d.Id()) + p.SetIssourcenat(true) + + if _, ok := d.GetOk("project"); ok { + p.SetProjectid(v.Projectid) + } + + // Get the source NAT IP assigned to the VPC + l, err := cs.Address.ListPublicIpAddresses(p) + if err != nil { + return err + } + + if l.Count != 1 { + return fmt.Errorf("Unexpected number (%d) of source NAT IPs returned", l.Count) + } + + d.Set("source_nat_ip", l.PublicIpAddresses[0].Ipaddress) return nil } @@ -191,7 +217,7 @@ func resourceCloudStackVPCDelete(d *schema.ResourceData, meta interface{}) error // Delete the VPC _, err := cs.VPC.DeleteVPC(p) if err != nil { 
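Every Delete function in this provider, including the template delete in this hunk, detects an already-removed entity by sniffing the CloudStack error string. A hypothetical helper (not part of this diff) that captures the check each resource repeats:

import (
	"fmt"
	"strings"
)

// isEntityGone reports whether err is CloudStack's way of saying the
// entity behind id was already removed. The substring match mirrors the
// checks repeated in each Delete function; the helper itself is
// hypothetical and only illustrates how they could be centralized.
func isEntityGone(err error, id string) bool {
	return err != nil && strings.Contains(err.Error(), fmt.Sprintf(
		"Invalid parameter id value=%s due to incorrect long value format, "+
			"or entity does not exist", id))
}

Each delete body could then reduce to `if isEntityGone(err, d.Id()) { return nil }` before returning the error unchanged.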
- // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", d.Id())) { diff --git a/builtin/providers/cloudstack/resource_cloudstack_vpn_connection.go b/builtin/providers/cloudstack/resource_cloudstack_vpn_connection.go index b036890a5..322f07a2c 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_vpn_connection.go +++ b/builtin/providers/cloudstack/resource_cloudstack_vpn_connection.go @@ -81,7 +81,7 @@ func resourceCloudStackVPNConnectionDelete(d *schema.ResourceData, meta interfac // Delete the VPN Connection _, err := cs.VPN.DeleteVpnConnection(p) if err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", d.Id())) { diff --git a/builtin/providers/cloudstack/resource_cloudstack_vpn_customer_gateway.go b/builtin/providers/cloudstack/resource_cloudstack_vpn_customer_gateway.go index f27e28d38..b049c0319 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_vpn_customer_gateway.go +++ b/builtin/providers/cloudstack/resource_cloudstack_vpn_customer_gateway.go @@ -179,7 +179,7 @@ func resourceCloudStackVPNCustomerGatewayDelete(d *schema.ResourceData, meta int // Delete the VPN Customer Gateway _, err := cs.VPN.DeleteVpnCustomerGateway(p) if err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", d.Id())) { diff --git a/builtin/providers/cloudstack/resource_cloudstack_vpn_gateway.go b/builtin/providers/cloudstack/resource_cloudstack_vpn_gateway.go index 704511ca8..17533a3a6 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_vpn_gateway.go +++ b/builtin/providers/cloudstack/resource_cloudstack_vpn_gateway.go @@ -33,8 +33,8 @@ func resourceCloudStackVPNGateway() *schema.Resource { func resourceCloudStackVPNGatewayCreate(d *schema.ResourceData, meta interface{}) error { cs := meta.(*cloudstack.CloudStackClient) - // Retrieve the VPC UUID - vpcid, e := retrieveUUID(cs, "vpc", d.Get("vpc").(string)) + // Retrieve the VPC ID + vpcid, e := retrieveID(cs, "vpc", d.Get("vpc").(string)) if e != nil { return e.Error() } @@ -69,7 +69,7 @@ func resourceCloudStackVPNGatewayRead(d *schema.ResourceData, meta interface{}) return err } - setValueOrUUID(d, "vpc", d.Get("vpc").(string), v.Vpcid) + setValueOrID(d, "vpc", d.Get("vpc").(string), v.Vpcid) d.Set("public_ip", v.Publicip) @@ -85,7 +85,7 @@ func resourceCloudStackVPNGatewayDelete(d *schema.ResourceData, meta interface{} // Delete the VPN Gateway _, err := cs.VPN.DeleteVpnGateway(p) if err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", d.Id())) { diff --git a/builtin/providers/cloudstack/resources.go b/builtin/providers/cloudstack/resources.go index cc826492f..f7115e793 100644 
--- a/builtin/providers/cloudstack/resources.go +++ b/builtin/providers/cloudstack/resources.go @@ -10,6 +10,9 @@ import ( "github.com/xanzy/go-cloudstack/cloudstack" ) +// CloudStack uses a "special" ID of -1 to define an unlimited resource +const UnlimitedResourceID = "-1" + type retrieveError struct { name string value string @@ -17,43 +20,51 @@ type retrieveError struct { } func (e *retrieveError) Error() error { - return fmt.Errorf("Error retrieving UUID of %s %s: %s", e.name, e.value, e.err) + return fmt.Errorf("Error retrieving ID of %s %s: %s", e.name, e.value, e.err) } -func setValueOrUUID(d *schema.ResourceData, key string, value string, uuid string) { - if isUUID(d.Get(key).(string)) { - d.Set(key, uuid) +func setValueOrID(d *schema.ResourceData, key string, value string, id string) { + if isID(d.Get(key).(string)) { + // If the given id is an empty string, check if the configured value matches + // the UnlimitedResourceID in which case we set id to UnlimitedResourceID + if id == "" && d.Get(key).(string) == UnlimitedResourceID { + id = UnlimitedResourceID + } + + d.Set(key, id) } else { d.Set(key, value) } } -func retrieveUUID(cs *cloudstack.CloudStackClient, name, value string) (uuid string, e *retrieveError) { - // If the supplied value isn't a UUID, try to retrieve the UUID ourselves - if isUUID(value) { +func retrieveID(cs *cloudstack.CloudStackClient, name, value string) (id string, e *retrieveError) { + // If the supplied value isn't a ID, try to retrieve the ID ourselves + if isID(value) { return value, nil } - log.Printf("[DEBUG] Retrieving UUID of %s: %s", name, value) + log.Printf("[DEBUG] Retrieving ID of %s: %s", name, value) var err error switch name { case "disk_offering": - uuid, err = cs.DiskOffering.GetDiskOfferingID(value) + id, err = cs.DiskOffering.GetDiskOfferingID(value) case "virtual_machine": - uuid, err = cs.VirtualMachine.GetVirtualMachineID(value) + id, err = cs.VirtualMachine.GetVirtualMachineID(value) case "service_offering": - uuid, err = cs.ServiceOffering.GetServiceOfferingID(value) + id, err = cs.ServiceOffering.GetServiceOfferingID(value) case "network_offering": - uuid, err = cs.NetworkOffering.GetNetworkOfferingID(value) + id, err = cs.NetworkOffering.GetNetworkOfferingID(value) + case "project": + id, err = cs.Project.GetProjectID(value) case "vpc_offering": - uuid, err = cs.VPC.GetVPCOfferingID(value) + id, err = cs.VPC.GetVPCOfferingID(value) case "vpc": - uuid, err = cs.VPC.GetVPCID(value) + id, err = cs.VPC.GetVPCID(value) case "network": - uuid, err = cs.Network.GetNetworkID(value) + id, err = cs.Network.GetNetworkID(value) case "zone": - uuid, err = cs.Zone.GetZoneID(value) + id, err = cs.Zone.GetZoneID(value) case "ipaddress": p := cs.Address.NewListPublicIpAddressesParams() p.SetIpaddress(value) @@ -63,10 +74,10 @@ func retrieveUUID(cs *cloudstack.CloudStackClient, name, value string) (uuid str break } if l.Count == 1 { - uuid = l.PublicIpAddresses[0].Id + id = l.PublicIpAddresses[0].Id break } - err = fmt.Errorf("Could not find UUID of IP address: %s", value) + err = fmt.Errorf("Could not find ID of IP address: %s", value) case "os_type": p := cs.GuestOS.NewListOsTypesParams() p.SetDescription(value) @@ -76,43 +87,42 @@ func retrieveUUID(cs *cloudstack.CloudStackClient, name, value string) (uuid str break } if l.Count == 1 { - uuid = l.OsTypes[0].Id + id = l.OsTypes[0].Id break } - err = fmt.Errorf("Could not find UUID of OS Type: %s", value) - case "project": - uuid, err = cs.Project.GetProjectID(value) + err = fmt.Errorf("Could 
not find ID of OS Type: %s", value) default: - return uuid, &retrieveError{name: name, value: value, + return id, &retrieveError{name: name, value: value, err: fmt.Errorf("Unknown request: %s", name)} } if err != nil { - return uuid, &retrieveError{name: name, value: value, err: err} + return id, &retrieveError{name: name, value: value, err: err} } - return uuid, nil + return id, nil } -func retrieveTemplateUUID(cs *cloudstack.CloudStackClient, zoneid, value string) (uuid string, e *retrieveError) { - // If the supplied value isn't a UUID, try to retrieve the UUID ourselves - if isUUID(value) { +func retrieveTemplateID(cs *cloudstack.CloudStackClient, zoneid, value string) (id string, e *retrieveError) { + // If the supplied value isn't a ID, try to retrieve the ID ourselves + if isID(value) { return value, nil } - log.Printf("[DEBUG] Retrieving UUID of template: %s", value) + log.Printf("[DEBUG] Retrieving ID of template: %s", value) - uuid, err := cs.Template.GetTemplateID(value, "executable", zoneid) + id, err := cs.Template.GetTemplateID(value, "executable", zoneid) if err != nil { - return uuid, &retrieveError{name: "template", value: value, err: err} + return id, &retrieveError{name: "template", value: value, err: err} } - return uuid, nil + return id, nil } -func isUUID(s string) bool { - re := regexp.MustCompile(`^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$`) - return re.MatchString(s) +// ID can be either a UUID or a UnlimitedResourceID +func isID(id string) bool { + re := regexp.MustCompile(`^([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}|-1)$`) + return re.MatchString(id) } // RetryFunc is the function retried n times diff --git a/builtin/providers/digitalocean/resource_digitalocean_droplet.go b/builtin/providers/digitalocean/resource_digitalocean_droplet.go index 88c0c6d07..c14b248c6 100644 --- a/builtin/providers/digitalocean/resource_digitalocean_droplet.go +++ b/builtin/providers/digitalocean/resource_digitalocean_droplet.go @@ -39,6 +39,10 @@ func resourceDigitalOceanDroplet() *schema.Resource { "size": &schema.Schema{ Type: schema.TypeString, Required: true, + StateFunc: func(val interface{}) string { + // DO API V2 size slug is always lowercase + return strings.ToLower(val.(string)) + }, }, "status": &schema.Schema{ diff --git a/builtin/providers/docker/resource_docker_container_funcs.go b/builtin/providers/docker/resource_docker_container_funcs.go index 058a4411b..aa74a4e1d 100644 --- a/builtin/providers/docker/resource_docker_container_funcs.go +++ b/builtin/providers/docker/resource_docker_container_funcs.go @@ -148,7 +148,7 @@ func resourceDockerContainerRead(d *schema.ResourceData, meta interface{}) error } if container.State.Running || - (!container.State.Running && !d.Get("must_run").(bool)) { + !container.State.Running && !d.Get("must_run").(bool) { break } diff --git a/builtin/providers/docker/resource_docker_image_funcs.go b/builtin/providers/docker/resource_docker_image_funcs.go index f45dd2226..454113c5f 100644 --- a/builtin/providers/docker/resource_docker_image_funcs.go +++ b/builtin/providers/docker/resource_docker_image_funcs.go @@ -83,7 +83,7 @@ func pullImage(data *Data, client *dc.Client, image string) error { splitPortRepo := strings.Split(splitImageName[1], "/") pullOpts.Registry = splitImageName[0] + ":" + splitPortRepo[0] pullOpts.Tag = splitImageName[2] - pullOpts.Repository = strings.Join(splitPortRepo[1:], "/") + pullOpts.Repository = pullOpts.Registry + "/" + strings.Join(splitPortRepo[1:], "/") // It's either 
registry:port/username/repo, registry:port/repo, // or repo:tag with default registry @@ -98,7 +98,7 @@ func pullImage(data *Data, client *dc.Client, image string) error { // registry:port/username/repo or registry:port/repo default: pullOpts.Registry = splitImageName[0] + ":" + splitPortRepo[0] - pullOpts.Repository = strings.Join(splitPortRepo[1:], "/") + pullOpts.Repository = pullOpts.Registry + "/" + strings.Join(splitPortRepo[1:], "/") pullOpts.Tag = "latest" } diff --git a/builtin/providers/docker/resource_docker_image_test.go b/builtin/providers/docker/resource_docker_image_test.go index 14dfb29b7..0f0f0707a 100644 --- a/builtin/providers/docker/resource_docker_image_test.go +++ b/builtin/providers/docker/resource_docker_image_test.go @@ -24,9 +24,34 @@ func TestAccDockerImage_basic(t *testing.T) { }) } +func TestAddDockerImage_private(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAddDockerPrivateImageConfig, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr( + "docker_image.foobar", + "latest", + "2c40b0526b6358710fd09e7b8c022429268cc61703b4777e528ac9d469a07ca1"), + ), + }, + }, + }) +} + const testAccDockerImageConfig = ` resource "docker_image" "foo" { name = "ubuntu:trusty-20150320" keep_updated = true } ` + +const testAddDockerPrivateImageConfig = ` +resource "docker_image" "foobar" { + name = "gcr.io:443/google_containers/pause:0.8.0" + keep_updated = true +} +` diff --git a/builtin/providers/google/metadata.go b/builtin/providers/google/metadata.go index bc609ac88..e75c45022 100644 --- a/builtin/providers/google/metadata.go +++ b/builtin/providers/google/metadata.go @@ -23,7 +23,7 @@ func MetadataRetryWrapper(update func() error) error { } } - return fmt.Errorf("Failed to update metadata after %d retries", attempt); + return fmt.Errorf("Failed to update metadata after %d retries", attempt) } // Update the metadata (serverMD) according to the provided diff (oldMDMap v @@ -51,7 +51,7 @@ func MetadataUpdate(oldMDMap map[string]interface{}, newMDMap map[string]interfa // Reformat old metadata into a list serverMD.Items = nil for key, val := range curMDMap { - v := val; + v := val serverMD.Items = append(serverMD.Items, &compute.MetadataItems{ Key: key, Value: &v, @@ -60,7 +60,7 @@ func MetadataUpdate(oldMDMap map[string]interface{}, newMDMap map[string]interfa } // Format metadata from the server data format -> schema data format -func MetadataFormatSchema(md *compute.Metadata) (map[string]interface{}) { +func MetadataFormatSchema(md *compute.Metadata) map[string]interface{} { newMD := make(map[string]interface{}) for _, kv := range md.Items { diff --git a/builtin/providers/google/provider.go b/builtin/providers/google/provider.go index a023b81c9..7c9587219 100644 --- a/builtin/providers/google/provider.go +++ b/builtin/providers/google/provider.go @@ -54,7 +54,9 @@ func Provider() terraform.ResourceProvider { "google_dns_record_set": resourceDnsRecordSet(), "google_compute_instance_group_manager": resourceComputeInstanceGroupManager(), "google_storage_bucket": resourceStorageBucket(), + "google_storage_bucket_acl": resourceStorageBucketAcl(), "google_storage_bucket_object": resourceStorageBucketObject(), + "google_storage_object_acl": resourceStorageObjectAcl(), }, ConfigureFunc: providerConfigure, diff --git a/builtin/providers/google/resource_compute_backend_service.go 
b/builtin/providers/google/resource_compute_backend_service.go index cbd722d38..ead6e2402 100644 --- a/builtin/providers/google/resource_compute_backend_service.go +++ b/builtin/providers/google/resource_compute_backend_service.go @@ -66,6 +66,12 @@ func resourceComputeBackendService() *schema.Resource { Optional: true, }, + "region": &schema.Schema{ + Type: schema.TypeString, + ForceNew: true, + Optional: true, + }, + "health_checks": &schema.Schema{ Type: schema.TypeSet, Elem: &schema.Schema{Type: schema.TypeString}, diff --git a/builtin/providers/google/resource_compute_instance.go b/builtin/providers/google/resource_compute_instance.go index 987964641..68b8aed35 100644 --- a/builtin/providers/google/resource_compute_instance.go +++ b/builtin/providers/google/resource_compute_instance.go @@ -197,9 +197,10 @@ func resourceComputeInstance() *schema.Resource { }, "metadata": &schema.Schema{ - Type: schema.TypeMap, - Optional: true, - Elem: schema.TypeString, + Type: schema.TypeMap, + Optional: true, + Elem: schema.TypeString, + ValidateFunc: validateInstanceMetadata, }, "service_account": &schema.Schema{ @@ -507,15 +508,22 @@ func resourceComputeInstanceCreate(d *schema.ResourceData, meta interface{}) err func resourceComputeInstanceRead(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) - instance, err := getInstance(config, d); + instance, err := getInstance(config, d) if err != nil { return err } - // Synch metadata + // Synch metadata md := instance.Metadata - if err = d.Set("metadata", MetadataFormatSchema(md)); err != nil { + _md := MetadataFormatSchema(md) + delete(_md, "startup-script") + + if script, scriptExists := d.GetOk("metadata_startup_script"); scriptExists { + d.Set("metadata_startup_script", script) + } + + if err = d.Set("metadata", _md); err != nil { return fmt.Errorf("Error setting metadata: %s", err) } @@ -635,6 +643,7 @@ func resourceComputeInstanceRead(d *schema.ResourceData, meta interface{}) error } d.Set("self_link", instance.SelfLink) + d.SetId(instance.Name) return nil } @@ -644,7 +653,7 @@ func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err zone := d.Get("zone").(string) - instance, err := getInstance(config, d); + instance, err := getInstance(config, d) if err != nil { return err } @@ -655,10 +664,17 @@ func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err // If the Metadata has changed, then update that. 
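The read and update hunks in this file move startup scripts out of the synced metadata map and into the dedicated metadata_startup_script attribute. A minimal sketch of the rule they enforce, with the schema plumbing elided (validateInstanceMetadata below rejects a raw startup-script key at plan time; this shows the equivalent apply-time merge):

import "fmt"

// mergeStartupScript sketches the policy from these hunks: the script
// may only come from metadata_startup_script, never from a raw
// metadata.startup-script key, and is injected into the map sent to GCE.
func mergeStartupScript(md map[string]interface{}, script string, scriptSet bool) error {
	if _, ok := md["startup-script"]; ok {
		return fmt.Errorf(
			"Only one of metadata.startup-script and metadata_startup_script may be defined")
	}
	if scriptSet && script != "" {
		md["startup-script"] = script
	}
	return nil
}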
if d.HasChange("metadata") { o, n := d.GetChange("metadata") + if script, scriptExists := d.GetOk("metadata_startup_script"); scriptExists { + if _, ok := n.(map[string]interface{})["startup-script"]; ok { + return fmt.Errorf("Only one of metadata.startup-script and metadata_startup_script may be defined") + } + + n.(map[string]interface{})["startup-script"] = script + } updateMD := func() error { // Reload the instance in the case of a fingerprint mismatch - instance, err = getInstance(config, d); + instance, err = getInstance(config, d) if err != nil { return err } @@ -794,13 +810,8 @@ func resourceComputeInstanceDelete(d *schema.ResourceData, meta interface{}) err func resourceInstanceMetadata(d *schema.ResourceData) (*compute.Metadata, error) { m := &compute.Metadata{} mdMap := d.Get("metadata").(map[string]interface{}) - _, mapScriptExists := mdMap["startup-script"] - dScript, dScriptExists := d.GetOk("metadata_startup_script") - if mapScriptExists && dScriptExists { - return nil, fmt.Errorf("Not allowed to have both metadata_startup_script and metadata.startup-script") - } - if dScriptExists { - mdMap["startup-script"] = dScript + if v, ok := d.GetOk("metadata_startup_script"); ok && v.(string) != "" { + mdMap["startup-script"] = v } if len(mdMap) > 0 { m.Items = make([]*compute.MetadataItems, 0, len(mdMap)) @@ -836,3 +847,12 @@ func resourceInstanceTags(d *schema.ResourceData) *compute.Tags { return tags } + +func validateInstanceMetadata(v interface{}, k string) (ws []string, es []error) { + mdMap := v.(map[string]interface{}) + if _, ok := mdMap["startup-script"]; ok { + es = append(es, fmt.Errorf( + "Use metadata_startup_script instead of a startup-script key in %q.", k)) + } + return +} diff --git a/builtin/providers/google/resource_compute_instance_test.go b/builtin/providers/google/resource_compute_instance_test.go index 394e66dbf..f59da73ef 100644 --- a/builtin/providers/google/resource_compute_instance_test.go +++ b/builtin/providers/google/resource_compute_instance_test.go @@ -32,7 +32,7 @@ func TestAccComputeInstance_basic_deprecated_network(t *testing.T) { }) } -func TestAccComputeInstance_basic(t *testing.T) { +func TestAccComputeInstance_basic1(t *testing.T) { var instance compute.Instance resource.Test(t, resource.TestCase{ @@ -376,7 +376,7 @@ func testAccCheckComputeInstanceDisk(instance *compute.Instance, source string, } for _, disk := range instance.Disks { - if strings.LastIndex(disk.Source, "/"+source) == (len(disk.Source)-len(source)-1) && disk.AutoDelete == delete && disk.Boot == boot { + if strings.LastIndex(disk.Source, "/"+source) == len(disk.Source)-len(source)-1 && disk.AutoDelete == delete && disk.Boot == boot { return nil } } diff --git a/builtin/providers/google/resource_compute_project_metadata.go b/builtin/providers/google/resource_compute_project_metadata.go index 83b6fb0df..c2f8a4a5f 100644 --- a/builtin/providers/google/resource_compute_project_metadata.go +++ b/builtin/providers/google/resource_compute_project_metadata.go @@ -72,10 +72,10 @@ func resourceComputeProjectMetadataCreate(d *schema.ResourceData, meta interface err := MetadataRetryWrapper(createMD) if err != nil { - return err; + return err } - return resourceComputeProjectMetadataRead(d, meta); + return resourceComputeProjectMetadataRead(d, meta) } func resourceComputeProjectMetadataRead(d *schema.ResourceData, meta interface{}) error { @@ -115,7 +115,7 @@ func resourceComputeProjectMetadataUpdate(d *schema.ResourceData, meta interface md := project.CommonInstanceMetadata - 
MetadataUpdate(o.(map[string]interface{}), n.(map[string]interface{}), md) + MetadataUpdate(o.(map[string]interface{}), n.(map[string]interface{}), md) op, err := config.clientCompute.Projects.SetCommonInstanceMetadata(config.Project, md).Do() @@ -133,10 +133,10 @@ func resourceComputeProjectMetadataUpdate(d *schema.ResourceData, meta interface err := MetadataRetryWrapper(updateMD) if err != nil { - return err; + return err } - return resourceComputeProjectMetadataRead(d, meta); + return resourceComputeProjectMetadataRead(d, meta) } return nil diff --git a/builtin/providers/google/resource_compute_target_pool.go b/builtin/providers/google/resource_compute_target_pool.go index 37af4a1e7..91e83a46a 100644 --- a/builtin/providers/google/resource_compute_target_pool.go +++ b/builtin/providers/google/resource_compute_target_pool.go @@ -66,6 +66,12 @@ func resourceComputeTargetPool() *schema.Resource { Optional: true, ForceNew: true, }, + + "region": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, }, } } @@ -115,6 +121,7 @@ func convertInstances(config *Config, names []string) ([]string, error) { func resourceComputeTargetPoolCreate(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) + region := getOptionalRegion(d, config) hchkUrls, err := convertHealthChecks( config, convertStringArr(d.Get("health_checks").([]interface{}))) @@ -142,7 +149,7 @@ func resourceComputeTargetPoolCreate(d *schema.ResourceData, meta interface{}) e } log.Printf("[DEBUG] TargetPool insert request: %#v", tpool) op, err := config.clientCompute.TargetPools.Insert( - config.Project, config.Region, tpool).Do() + config.Project, region, tpool).Do() if err != nil { return fmt.Errorf("Error creating TargetPool: %s", err) } @@ -150,11 +157,10 @@ func resourceComputeTargetPoolCreate(d *schema.ResourceData, meta interface{}) e // It probably maybe worked, so store the ID now d.SetId(tpool.Name) - err = computeOperationWaitRegion(config, op, config.Region, "Creating Target Pool") + err = computeOperationWaitRegion(config, op, region, "Creating Target Pool") if err != nil { return err } - return resourceComputeTargetPoolRead(d, meta) } @@ -190,6 +196,7 @@ func calcAddRemove(from []string, to []string) ([]string, []string) { func resourceComputeTargetPoolUpdate(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) + region := getOptionalRegion(d, config) d.Partial(true) @@ -215,12 +222,12 @@ func resourceComputeTargetPoolUpdate(d *schema.ResourceData, meta interface{}) e removeReq.HealthChecks[i] = &compute.HealthCheckReference{HealthCheck: v} } op, err := config.clientCompute.TargetPools.RemoveHealthCheck( - config.Project, config.Region, d.Id(), removeReq).Do() + config.Project, region, d.Id(), removeReq).Do() if err != nil { return fmt.Errorf("Error updating health_check: %s", err) } - err = computeOperationWaitRegion(config, op, config.Region, "Updating Target Pool") + err = computeOperationWaitRegion(config, op, region, "Updating Target Pool") if err != nil { return err } @@ -231,12 +238,12 @@ func resourceComputeTargetPoolUpdate(d *schema.ResourceData, meta interface{}) e addReq.HealthChecks[i] = &compute.HealthCheckReference{HealthCheck: v} } op, err = config.clientCompute.TargetPools.AddHealthCheck( - config.Project, config.Region, d.Id(), addReq).Do() + config.Project, region, d.Id(), addReq).Do() if err != nil { return fmt.Errorf("Error updating health_check: %s", err) } - err = computeOperationWaitRegion(config, op, config.Region, "Updating 
Target Pool") + err = computeOperationWaitRegion(config, op, region, "Updating Target Pool") if err != nil { return err } @@ -265,12 +272,12 @@ func resourceComputeTargetPoolUpdate(d *schema.ResourceData, meta interface{}) e addReq.Instances[i] = &compute.InstanceReference{Instance: v} } op, err := config.clientCompute.TargetPools.AddInstance( - config.Project, config.Region, d.Id(), addReq).Do() + config.Project, region, d.Id(), addReq).Do() if err != nil { return fmt.Errorf("Error updating instances: %s", err) } - err = computeOperationWaitRegion(config, op, config.Region, "Updating Target Pool") + err = computeOperationWaitRegion(config, op, region, "Updating Target Pool") if err != nil { return err } @@ -281,12 +288,11 @@ func resourceComputeTargetPoolUpdate(d *schema.ResourceData, meta interface{}) e removeReq.Instances[i] = &compute.InstanceReference{Instance: v} } op, err = config.clientCompute.TargetPools.RemoveInstance( - config.Project, config.Region, d.Id(), removeReq).Do() + config.Project, region, d.Id(), removeReq).Do() if err != nil { return fmt.Errorf("Error updating instances: %s", err) } - - err = computeOperationWaitRegion(config, op, config.Region, "Updating Target Pool") + err = computeOperationWaitRegion(config, op, region, "Updating Target Pool") if err != nil { return err } @@ -299,12 +305,12 @@ func resourceComputeTargetPoolUpdate(d *schema.ResourceData, meta interface{}) e Target: bpool_name, } op, err := config.clientCompute.TargetPools.SetBackup( - config.Project, config.Region, d.Id(), tref).Do() + config.Project, region, d.Id(), tref).Do() if err != nil { return fmt.Errorf("Error updating backup_pool: %s", err) } - err = computeOperationWaitRegion(config, op, config.Region, "Updating Target Pool") + err = computeOperationWaitRegion(config, op, region, "Updating Target Pool") if err != nil { return err } @@ -318,9 +324,10 @@ func resourceComputeTargetPoolUpdate(d *schema.ResourceData, meta interface{}) e func resourceComputeTargetPoolRead(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) + region := getOptionalRegion(d, config) tpool, err := config.clientCompute.TargetPools.Get( - config.Project, config.Region, d.Id()).Do() + config.Project, region, d.Id()).Do() if err != nil { if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { // The resource doesn't exist anymore @@ -339,15 +346,16 @@ func resourceComputeTargetPoolRead(d *schema.ResourceData, meta interface{}) err func resourceComputeTargetPoolDelete(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) + region := getOptionalRegion(d, config) // Delete the TargetPool op, err := config.clientCompute.TargetPools.Delete( - config.Project, config.Region, d.Id()).Do() + config.Project, region, d.Id()).Do() if err != nil { return fmt.Errorf("Error deleting TargetPool: %s", err) } - err = computeOperationWaitRegion(config, op, config.Region, "Deleting Target Pool") + err = computeOperationWaitRegion(config, op, region, "Deleting Target Pool") if err != nil { return err } diff --git a/builtin/providers/google/resource_compute_vpn_gateway.go b/builtin/providers/google/resource_compute_vpn_gateway.go index ba25aeb1f..bd5350b9c 100644 --- a/builtin/providers/google/resource_compute_vpn_gateway.go +++ b/builtin/providers/google/resource_compute_vpn_gateway.go @@ -56,8 +56,8 @@ func resourceComputeVpnGatewayCreate(d *schema.ResourceData, meta interface{}) e vpnGatewaysService := compute.NewTargetVpnGatewaysService(config.clientCompute) vpnGateway := 
&compute.TargetVpnGateway{ - Name: name, - Network: network, + Name: name, + Network: network, } if v, ok := d.GetOk("description"); ok { diff --git a/builtin/providers/google/resource_storage_bucket.go b/builtin/providers/google/resource_storage_bucket.go index de03d5f6d..9118119a8 100644 --- a/builtin/providers/google/resource_storage_bucket.go +++ b/builtin/providers/google/resource_storage_bucket.go @@ -24,10 +24,10 @@ func resourceStorageBucket() *schema.Resource { ForceNew: true, }, "predefined_acl": &schema.Schema{ - Type: schema.TypeString, - Default: "projectPrivate", - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Deprecated: "Please use resource \"storage_bucket_acl.predefined_acl\" instead.", + Optional: true, + ForceNew: true, }, "location": &schema.Schema{ Type: schema.TypeString, @@ -69,7 +69,6 @@ func resourceStorageBucketCreate(d *schema.ResourceData, meta interface{}) error // Get the bucket and acl bucket := d.Get("name").(string) - acl := d.Get("predefined_acl").(string) location := d.Get("location").(string) // Create a bucket, setting the acl, location and name. @@ -95,7 +94,12 @@ func resourceStorageBucketCreate(d *schema.ResourceData, meta interface{}) error } } - res, err := config.clientStorage.Buckets.Insert(config.Project, sb).PredefinedAcl(acl).Do() + call := config.clientStorage.Buckets.Insert(config.Project, sb) + if v, ok := d.GetOk("predefined_acl"); ok { + call = call.PredefinedAcl(v.(string)) + } + + res, err := call.Do() if err != nil { fmt.Printf("Error creating bucket %s: %v", bucket, err) @@ -124,8 +128,8 @@ func resourceStorageBucketUpdate(d *schema.ResourceData, meta interface{}) error return fmt.Errorf("At most one website block is allowed") } - // Setting fields to "" to be explicit that the PATCH call will - // delete this field. + // Setting fields to "" to be explicit that the PATCH call will + // delete this field. 
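With predefined_acl on google_storage_bucket now deprecated, ACLs move to the standalone google_storage_bucket_acl resource introduced below. An acceptance-test style config sketching the replacement, in the same Go-const convention the test files in this diff use (the bucket name and entity are illustrative only):

const testAccStorageBucketWithAclConfig = `
resource "google_storage_bucket" "example" {
    name = "tf-acl-example-bucket"
}

resource "google_storage_bucket_acl" "example" {
    bucket      = "${google_storage_bucket.example.name}"
    role_entity = ["OWNER:user-alice@example.com"]
}
`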
if len(websites) == 0 { sb.Website.NotFoundPage = "" sb.Website.MainPageSuffix = "" diff --git a/builtin/providers/google/resource_storage_bucket_acl.go b/builtin/providers/google/resource_storage_bucket_acl.go new file mode 100644 index 000000000..3b866e0ad --- /dev/null +++ b/builtin/providers/google/resource_storage_bucket_acl.go @@ -0,0 +1,291 @@ +package google + +import ( + "fmt" + "log" + "strings" + + "github.com/hashicorp/terraform/helper/schema" + + "google.golang.org/api/storage/v1" +) + +func resourceStorageBucketAcl() *schema.Resource { + return &schema.Resource{ + Create: resourceStorageBucketAclCreate, + Read: resourceStorageBucketAclRead, + Update: resourceStorageBucketAclUpdate, + Delete: resourceStorageBucketAclDelete, + + Schema: map[string]*schema.Schema{ + "bucket": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "predefined_acl": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "role_entity": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "default_acl": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + }, + } +} + +type RoleEntity struct { + Role string + Entity string +} + +func getBucketAclId(bucket string) string { + return bucket + "-acl" +} + +func getRoleEntityPair(role_entity string) (*RoleEntity, error) { + split := strings.Split(role_entity, ":") + if len(split) != 2 { + return nil, fmt.Errorf("Error, each role entity pair must be " + + "formatted as ROLE:entity") + } + + return &RoleEntity{Role: split[0], Entity: split[1]}, nil +} + +func resourceStorageBucketAclCreate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + bucket := d.Get("bucket").(string) + predefined_acl := "" + default_acl := "" + role_entity := make([]interface{}, 0) + + if v, ok := d.GetOk("predefined_acl"); ok { + predefined_acl = v.(string) + } + + if v, ok := d.GetOk("role_entity"); ok { + role_entity = v.([]interface{}) + } + + if v, ok := d.GetOk("default_acl"); ok { + default_acl = v.(string) + } + + if len(predefined_acl) > 0 { + if len(role_entity) > 0 { + return fmt.Errorf("Error, you cannot specify both " + + "\"predefined_acl\" and \"role_entity\"") + } + + res, err := config.clientStorage.Buckets.Get(bucket).Do() + + if err != nil { + return fmt.Errorf("Error reading bucket %s: %v", bucket, err) + } + + res, err = config.clientStorage.Buckets.Update(bucket, + res).PredefinedAcl(predefined_acl).Do() + + if err != nil { + return fmt.Errorf("Error updating bucket %s: %v", bucket, err) + } + + return resourceStorageBucketAclRead(d, meta) + } else if len(role_entity) > 0 { + for _, v := range role_entity { + pair, err := getRoleEntityPair(v.(string)) + + bucketAccessControl := &storage.BucketAccessControl{ + Role: pair.Role, + Entity: pair.Entity, + } + + log.Printf("[DEBUG]: storing re %s-%s", pair.Role, pair.Entity) + + _, err = config.clientStorage.BucketAccessControls.Insert(bucket, bucketAccessControl).Do() + + if err != nil { + return fmt.Errorf("Error updating ACL for bucket %s: %v", bucket, err) + } + } + + return resourceStorageBucketAclRead(d, meta) + } + + if len(default_acl) > 0 { + res, err := config.clientStorage.Buckets.Get(bucket).Do() + + if err != nil { + return fmt.Errorf("Error reading bucket %s: %v", bucket, err) + } + + res, err = config.clientStorage.Buckets.Update(bucket, + res).PredefinedDefaultObjectAcl(default_acl).Do() + + if err != nil { + return fmt.Errorf("Error 
updating bucket %s: %v", bucket, err) + } + + return resourceStorageBucketAclRead(d, meta) + } + + return nil +} + +func resourceStorageBucketAclRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + bucket := d.Get("bucket").(string) + + // Predefined ACLs cannot easily be parsed once they have been processed + // by the GCP server + if _, ok := d.GetOk("predefined_acl"); !ok { + role_entity := make([]interface{}, 0) + re_local := d.Get("role_entity").([]interface{}) + re_local_map := make(map[string]string) + for _, v := range re_local { + res, err := getRoleEntityPair(v.(string)) + + if err != nil { + return fmt.Errorf( + "Old state has malformed Role/Entity pair: %v", err) + } + + re_local_map[res.Entity] = res.Role + } + + res, err := config.clientStorage.BucketAccessControls.List(bucket).Do() + + if err != nil { + return err + } + + for _, v := range res.Items { + log.Printf("[DEBUG]: examining re %s-%s", v.Role, v.Entity) + // We only store updates to the locally defined access controls + if _, in := re_local_map[v.Entity]; in { + role_entity = append(role_entity, fmt.Sprintf("%s:%s", v.Role, v.Entity)) + log.Printf("[DEBUG]: saving re %s-%s", v.Role, v.Entity) + } + } + + d.Set("role_entity", role_entity) + } + + d.SetId(getBucketAclId(bucket)) + return nil +} + +func resourceStorageBucketAclUpdate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + bucket := d.Get("bucket").(string) + + if d.HasChange("role_entity") { + o, n := d.GetChange("role_entity") + old_re, new_re := o.([]interface{}), n.([]interface{}) + + old_re_map := make(map[string]string) + for _, v := range old_re { + res, err := getRoleEntityPair(v.(string)) + + if err != nil { + return fmt.Errorf( + "Old state has malformed Role/Entity pair: %v", err) + } + + old_re_map[res.Entity] = res.Role + } + + for _, v := range new_re { + pair, err := getRoleEntityPair(v.(string)) + + bucketAccessControl := &storage.BucketAccessControl{ + Role: pair.Role, + Entity: pair.Entity, + } + + // If the old state is missing this entity, it needs to + // be created. 
Otherwise it is updated + if _, ok := old_re_map[pair.Entity]; ok { + _, err = config.clientStorage.BucketAccessControls.Update( + bucket, pair.Entity, bucketAccessControl).Do() + } else { + _, err = config.clientStorage.BucketAccessControls.Insert( + bucket, bucketAccessControl).Do() + } + + // Now we only store the keys that have to be removed + delete(old_re_map, pair.Entity) + + if err != nil { + return fmt.Errorf("Error updating ACL for bucket %s: %v", bucket, err) + } + } + + for entity, _ := range old_re_map { + log.Printf("[DEBUG]: removing entity %s", entity) + err := config.clientStorage.BucketAccessControls.Delete(bucket, entity).Do() + + if err != nil { + return fmt.Errorf("Error updating ACL for bucket %s: %v", bucket, err) + } + } + + return resourceStorageBucketAclRead(d, meta) + } + + if d.HasChange("default_acl") { + default_acl := d.Get("default_acl").(string) + + res, err := config.clientStorage.Buckets.Get(bucket).Do() + + if err != nil { + return fmt.Errorf("Error reading bucket %s: %v", bucket, err) + } + + res, err = config.clientStorage.Buckets.Update(bucket, + res).PredefinedDefaultObjectAcl(default_acl).Do() + + if err != nil { + return fmt.Errorf("Error updating bucket %s: %v", bucket, err) + } + + return resourceStorageBucketAclRead(d, meta) + } + + return nil +} + +func resourceStorageBucketAclDelete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + bucket := d.Get("bucket").(string) + + re_local := d.Get("role_entity").([]interface{}) + for _, v := range re_local { + res, err := getRoleEntityPair(v.(string)) + if err != nil { + return err + } + + log.Printf("[DEBUG]: removing entity %s", res.Entity) + + err = config.clientStorage.BucketAccessControls.Delete(bucket, res.Entity).Do() + + if err != nil { + return fmt.Errorf("Error deleting entity %s ACL: %s", res.Entity, err) + } + } + + return nil +} diff --git a/builtin/providers/google/resource_storage_bucket_acl_test.go b/builtin/providers/google/resource_storage_bucket_acl_test.go new file mode 100644 index 000000000..9cdc2b173 --- /dev/null +++ b/builtin/providers/google/resource_storage_bucket_acl_test.go @@ -0,0 +1,231 @@ +package google + +import ( + "fmt" + "math/rand" + "testing" + "time" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + + //"google.golang.org/api/storage/v1" +) + +var roleEntityBasic1 = "OWNER:user-omeemail@gmail.com" + +var roleEntityBasic2 = "READER:user-anotheremail@gmail.com" + +var roleEntityBasic3_owner = "OWNER:user-yetanotheremail@gmail.com" + +var roleEntityBasic3_reader = "READER:user-yetanotheremail@gmail.com" + +var testAclBucketName = fmt.Sprintf("%s-%d", "tf-test-acl-bucket", rand.New(rand.NewSource(time.Now().UnixNano())).Int()) + +func TestAccGoogleStorageBucketAcl_basic(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccGoogleStorageBucketAclDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testGoogleStorageBucketsAclBasic1, + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleStorageBucketAcl(testAclBucketName, roleEntityBasic1), + testAccCheckGoogleStorageBucketAcl(testAclBucketName, roleEntityBasic2), + ), + }, + }, + }) +} + +func TestAccGoogleStorageBucketAcl_upgrade(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccGoogleStorageBucketAclDestroy, + Steps: 
[]resource.TestStep{ + resource.TestStep{ + Config: testGoogleStorageBucketsAclBasic1, + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleStorageBucketAcl(testAclBucketName, roleEntityBasic1), + testAccCheckGoogleStorageBucketAcl(testAclBucketName, roleEntityBasic2), + ), + }, + + resource.TestStep{ + Config: testGoogleStorageBucketsAclBasic2, + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleStorageBucketAcl(testAclBucketName, roleEntityBasic2), + testAccCheckGoogleStorageBucketAcl(testAclBucketName, roleEntityBasic3_owner), + ), + }, + + resource.TestStep{ + Config: testGoogleStorageBucketsAclBasicDelete, + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleStorageBucketAclDelete(testAclBucketName, roleEntityBasic1), + testAccCheckGoogleStorageBucketAclDelete(testAclBucketName, roleEntityBasic2), + testAccCheckGoogleStorageBucketAclDelete(testAclBucketName, roleEntityBasic3_owner), + ), + }, + }, + }) +} + +func TestAccGoogleStorageBucketAcl_downgrade(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccGoogleStorageBucketAclDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testGoogleStorageBucketsAclBasic2, + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleStorageBucketAcl(testAclBucketName, roleEntityBasic2), + testAccCheckGoogleStorageBucketAcl(testAclBucketName, roleEntityBasic3_owner), + ), + }, + + resource.TestStep{ + Config: testGoogleStorageBucketsAclBasic3, + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleStorageBucketAcl(testAclBucketName, roleEntityBasic2), + testAccCheckGoogleStorageBucketAcl(testAclBucketName, roleEntityBasic3_reader), + ), + }, + + resource.TestStep{ + Config: testGoogleStorageBucketsAclBasicDelete, + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleStorageBucketAclDelete(testAclBucketName, roleEntityBasic1), + testAccCheckGoogleStorageBucketAclDelete(testAclBucketName, roleEntityBasic2), + testAccCheckGoogleStorageBucketAclDelete(testAclBucketName, roleEntityBasic3_owner), + ), + }, + }, + }) +} + +func TestAccGoogleStorageBucketAcl_predefined(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccGoogleStorageBucketAclDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testGoogleStorageBucketsAclPredefined, + }, + }, + }) +} + +func testAccCheckGoogleStorageBucketAclDelete(bucket, roleEntityS string) resource.TestCheckFunc { + return func(s *terraform.State) error { + roleEntity, _ := getRoleEntityPair(roleEntityS) + config := testAccProvider.Meta().(*Config) + + _, err := config.clientStorage.BucketAccessControls.Get(bucket, roleEntity.Entity).Do() + + if err != nil { + return nil + } + + return fmt.Errorf("Error, entity %s still exists", roleEntity.Entity) + } +} + +func testAccCheckGoogleStorageBucketAcl(bucket, roleEntityS string) resource.TestCheckFunc { + return func(s *terraform.State) error { + roleEntity, _ := getRoleEntityPair(roleEntityS) + config := testAccProvider.Meta().(*Config) + + res, err := config.clientStorage.BucketAccessControls.Get(bucket, roleEntity.Entity).Do() + + if err != nil { + return fmt.Errorf("Error retrieving contents of acl for bucket %s: %s", bucket, err) + } + + if res.Role != roleEntity.Role { + return fmt.Errorf("Error, Role mismatch %s != %s", res.Role, roleEntity.Role) + } + + return nil + } +} + +func 
testAccGoogleStorageBucketAclDestroy(s *terraform.State) error { + config := testAccProvider.Meta().(*Config) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_storage_bucket_acl" { + continue + } + + bucket := rs.Primary.Attributes["bucket"] + + _, err := config.clientStorage.BucketAccessControls.List(bucket).Do() + + if err == nil { + return fmt.Errorf("Acl for bucket %s still exists", bucket) + } + } + + return nil +} + +var testGoogleStorageBucketsAclBasic1 = fmt.Sprintf(` +resource "google_storage_bucket" "bucket" { + name = "%s" +} + +resource "google_storage_bucket_acl" "acl" { + bucket = "${google_storage_bucket.bucket.name}" + role_entity = ["%s", "%s"] +} +`, testAclBucketName, roleEntityBasic1, roleEntityBasic2) + +var testGoogleStorageBucketsAclBasic2 = fmt.Sprintf(` +resource "google_storage_bucket" "bucket" { + name = "%s" +} + +resource "google_storage_bucket_acl" "acl" { + bucket = "${google_storage_bucket.bucket.name}" + role_entity = ["%s", "%s"] +} +`, testAclBucketName, roleEntityBasic2, roleEntityBasic3_owner) + +var testGoogleStorageBucketsAclBasicDelete = fmt.Sprintf(` +resource "google_storage_bucket" "bucket" { + name = "%s" +} + +resource "google_storage_bucket_acl" "acl" { + bucket = "${google_storage_bucket.bucket.name}" + role_entity = [] +} +`, testAclBucketName) + +var testGoogleStorageBucketsAclBasic3 = fmt.Sprintf(` +resource "google_storage_bucket" "bucket" { + name = "%s" +} + +resource "google_storage_bucket_acl" "acl" { + bucket = "${google_storage_bucket.bucket.name}" + role_entity = ["%s", "%s"] +} +`, testAclBucketName, roleEntityBasic2, roleEntityBasic3_reader) + +var testGoogleStorageBucketsAclPredefined = fmt.Sprintf(` +resource "google_storage_bucket" "bucket" { + name = "%s" +} + +resource "google_storage_bucket_acl" "acl" { + bucket = "${google_storage_bucket.bucket.name}" + predefined_acl = "projectPrivate" + default_acl = "projectPrivate" +} +`, testAclBucketName) diff --git a/builtin/providers/google/resource_storage_bucket_object.go b/builtin/providers/google/resource_storage_bucket_object.go index cd5fe7d9c..231153a85 100644 --- a/builtin/providers/google/resource_storage_bucket_object.go +++ b/builtin/providers/google/resource_storage_bucket_object.go @@ -1,8 +1,8 @@ package google import ( - "os" "fmt" + "os" "github.com/hashicorp/terraform/helper/schema" @@ -13,7 +13,6 @@ func resourceStorageBucketObject() *schema.Resource { return &schema.Resource{ Create: resourceStorageBucketObjectCreate, Read: resourceStorageBucketObjectRead, - Update: resourceStorageBucketObjectUpdate, Delete: resourceStorageBucketObjectDelete, Schema: map[string]*schema.Schema{ @@ -33,10 +32,10 @@ func resourceStorageBucketObject() *schema.Resource { ForceNew: true, }, "predefined_acl": &schema.Schema{ - Type: schema.TypeString, - Default: "projectPrivate", - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Deprecated: "Please use resource \"storage_object_acl.predefined_acl\" instead.", + Optional: true, + ForceNew: true, }, "md5hash": &schema.Schema{ Type: schema.TypeString, @@ -60,7 +59,6 @@ func resourceStorageBucketObjectCreate(d *schema.ResourceData, meta interface{}) bucket := d.Get("bucket").(string) name := d.Get("name").(string) source := d.Get("source").(string) - acl := d.Get("predefined_acl").(string) file, err := os.Open(source) if err != nil { @@ -73,7 +71,9 @@ func resourceStorageBucketObjectCreate(d *schema.ResourceData, meta interface{}) insertCall := objectsService.Insert(bucket, object) 
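// The chained calls below attach the object name and the file media to the + // insert request; the predefined ACL is applied just after, and only when + // the now-deprecated attribute is actually set in the configuration.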
insertCall.Name(name) insertCall.Media(file) - insertCall.PredefinedAcl(acl) + if v, ok := d.GetOk("predefined_acl"); ok { + insertCall.PredefinedAcl(v.(string)) + } _, err = insertCall.Do() @@ -107,12 +107,6 @@ func resourceStorageBucketObjectRead(d *schema.ResourceData, meta interface{}) e return nil } -func resourceStorageBucketObjectUpdate(d *schema.ResourceData, meta interface{}) error { - // The Cloud storage API doesn't support updating object data contents, - // only metadata. So once we implement metadata we'll have work to do here - return nil -} - func resourceStorageBucketObjectDelete(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) diff --git a/builtin/providers/google/resource_storage_bucket_object_test.go b/builtin/providers/google/resource_storage_bucket_object_test.go index d7be902a1..e84822fdd 100644 --- a/builtin/providers/google/resource_storage_bucket_object_test.go +++ b/builtin/providers/google/resource_storage_bucket_object_test.go @@ -1,11 +1,11 @@ package google import ( - "fmt" - "testing" - "io/ioutil" "crypto/md5" "encoding/base64" + "fmt" + "io/ioutil" + "testing" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" @@ -48,7 +48,6 @@ func testAccCheckGoogleStorageObject(bucket, object, md5 string) resource.TestCh objectsService := storage.NewObjectsService(config.clientStorage) - getCall := objectsService.Get(bucket, object) res, err := getCall.Do() @@ -56,7 +55,7 @@ func testAccCheckGoogleStorageObject(bucket, object, md5 string) resource.TestCh return fmt.Errorf("Error retrieving contents of object %s: %s", object, err) } - if (md5 != res.Md5Hash) { + if md5 != res.Md5Hash { return fmt.Errorf("Error contents of %s garbled, md5 hashes don't match (%s, %s)", object, md5, res.Md5Hash) } diff --git a/builtin/providers/google/resource_storage_object_acl.go b/builtin/providers/google/resource_storage_object_acl.go new file mode 100644 index 000000000..5212f81db --- /dev/null +++ b/builtin/providers/google/resource_storage_object_acl.go @@ -0,0 +1,253 @@ +package google + +import ( + "fmt" + "log" + + "github.com/hashicorp/terraform/helper/schema" + + "google.golang.org/api/storage/v1" +) + +func resourceStorageObjectAcl() *schema.Resource { + return &schema.Resource{ + Create: resourceStorageObjectAclCreate, + Read: resourceStorageObjectAclRead, + Update: resourceStorageObjectAclUpdate, + Delete: resourceStorageObjectAclDelete, + + Schema: map[string]*schema.Schema{ + "bucket": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "object": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "role_entity": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "predefined_acl": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + }, + } +} + +func getObjectAclId(object string) string { + return object + "-acl" +} + +func resourceStorageObjectAclCreate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + bucket := d.Get("bucket").(string) + object := d.Get("object").(string) + + predefined_acl := "" + role_entity := make([]interface{}, 0) + + if v, ok := d.GetOk("predefined_acl"); ok { + predefined_acl = v.(string) + } + + if v, ok := d.GetOk("role_entity"); ok { + role_entity = v.([]interface{}) + } + + if len(predefined_acl) > 0 { + if len(role_entity) > 0 { + return fmt.Errorf("Error, you cannot specify both " + + 
"\"predefined_acl\" and \"role_entity\"") + } + + res, err := config.clientStorage.Objects.Get(bucket, object).Do() + + if err != nil { + return fmt.Errorf("Error reading object %s: %v", bucket, err) + } + + res, err = config.clientStorage.Objects.Update(bucket, object, + res).PredefinedAcl(predefined_acl).Do() + + if err != nil { + return fmt.Errorf("Error updating object %s: %v", bucket, err) + } + + return resourceStorageBucketAclRead(d, meta) + } else if len(role_entity) > 0 { + for _, v := range role_entity { + pair, err := getRoleEntityPair(v.(string)) + + objectAccessControl := &storage.ObjectAccessControl{ + Role: pair.Role, + Entity: pair.Entity, + } + + log.Printf("[DEBUG]: setting role = %s, entity = %s", pair.Role, pair.Entity) + + _, err = config.clientStorage.ObjectAccessControls.Insert(bucket, + object, objectAccessControl).Do() + + if err != nil { + return fmt.Errorf("Error setting ACL for %s on object %s: %v", pair.Entity, object, err) + } + } + + return resourceStorageObjectAclRead(d, meta) + } + + return fmt.Errorf("Error, you must specify either " + + "\"predefined_acl\" or \"role_entity\"") +} + +func resourceStorageObjectAclRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + bucket := d.Get("bucket").(string) + object := d.Get("object").(string) + + // Predefined ACLs cannot easily be parsed once they have been processed + // by the GCP server + if _, ok := d.GetOk("predefined_acl"); !ok { + role_entity := make([]interface{}, 0) + re_local := d.Get("role_entity").([]interface{}) + re_local_map := make(map[string]string) + for _, v := range re_local { + res, err := getRoleEntityPair(v.(string)) + + if err != nil { + return fmt.Errorf( + "Old state has malformed Role/Entity pair: %v", err) + } + + re_local_map[res.Entity] = res.Role + } + + res, err := config.clientStorage.ObjectAccessControls.List(bucket, object).Do() + + if err != nil { + return err + } + + for _, v := range res.Items { + role := "" + entity := "" + for key, val := range v.(map[string]interface{}) { + if key == "role" { + role = val.(string) + } else if key == "entity" { + entity = val.(string) + } + } + if _, in := re_local_map[entity]; in { + role_entity = append(role_entity, fmt.Sprintf("%s:%s", role, entity)) + log.Printf("[DEBUG]: saving re %s-%s", role, entity) + } + } + + d.Set("role_entity", role_entity) + } + + d.SetId(getObjectAclId(object)) + return nil +} + +func resourceStorageObjectAclUpdate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + bucket := d.Get("bucket").(string) + object := d.Get("object").(string) + + if d.HasChange("role_entity") { + o, n := d.GetChange("role_entity") + old_re, new_re := o.([]interface{}), n.([]interface{}) + + old_re_map := make(map[string]string) + for _, v := range old_re { + res, err := getRoleEntityPair(v.(string)) + + if err != nil { + return fmt.Errorf( + "Old state has malformed Role/Entity pair: %v", err) + } + + old_re_map[res.Entity] = res.Role + } + + for _, v := range new_re { + pair, err := getRoleEntityPair(v.(string)) + + objectAccessControl := &storage.ObjectAccessControl{ + Role: pair.Role, + Entity: pair.Entity, + } + + // If the old state is missing this entity, it needs to + // be created. 
Otherwise it is updated + if _, ok := old_re_map[pair.Entity]; ok { + _, err = config.clientStorage.ObjectAccessControls.Update( + bucket, object, pair.Entity, objectAccessControl).Do() + } else { + _, err = config.clientStorage.ObjectAccessControls.Insert( + bucket, object, objectAccessControl).Do() + } + + // Now we only store the keys that have to be removed + delete(old_re_map, pair.Entity) + + if err != nil { + return fmt.Errorf("Error updating ACL for object %s: %v", bucket, err) + } + } + + for entity, _ := range old_re_map { + log.Printf("[DEBUG]: removing entity %s", entity) + err := config.clientStorage.ObjectAccessControls.Delete(bucket, object, entity).Do() + + if err != nil { + return fmt.Errorf("Error updating ACL for object %s: %v", bucket, err) + } + } + + return resourceStorageObjectAclRead(d, meta) + } + + return nil +} + +func resourceStorageObjectAclDelete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + bucket := d.Get("bucket").(string) + object := d.Get("object").(string) + + re_local := d.Get("role_entity").([]interface{}) + for _, v := range re_local { + res, err := getRoleEntityPair(v.(string)) + if err != nil { + return err + } + + entity := res.Entity + + log.Printf("[DEBUG]: removing entity %s", entity) + + err = config.clientStorage.ObjectAccessControls.Delete(bucket, object, + entity).Do() + + if err != nil { + return fmt.Errorf("Error deleting entity %s ACL: %s", + entity, err) + } + } + + return nil +} diff --git a/builtin/providers/google/resource_storage_object_acl_test.go b/builtin/providers/google/resource_storage_object_acl_test.go new file mode 100644 index 000000000..ff14f683c --- /dev/null +++ b/builtin/providers/google/resource_storage_object_acl_test.go @@ -0,0 +1,310 @@ +package google + +import ( + "fmt" + "io/ioutil" + "math/rand" + "testing" + "time" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + + //"google.golang.org/api/storage/v1" +) + +var tfObjectAcl, errObjectAcl = ioutil.TempFile("", "tf-gce-test") +var testAclObjectName = fmt.Sprintf("%s-%d", "tf-test-acl-object", + rand.New(rand.NewSource(time.Now().UnixNano())).Int()) + +func TestAccGoogleStorageObjectAcl_basic(t *testing.T) { + objectData := []byte("data data data") + ioutil.WriteFile(tfObjectAcl.Name(), objectData, 0644) + resource.Test(t, resource.TestCase{ + PreCheck: func() { + if errObjectAcl != nil { + panic(errObjectAcl) + } + testAccPreCheck(t) + }, + Providers: testAccProviders, + CheckDestroy: testAccGoogleStorageObjectAclDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testGoogleStorageObjectsAclBasic1, + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleStorageObjectAcl(testAclBucketName, + testAclObjectName, roleEntityBasic1), + testAccCheckGoogleStorageObjectAcl(testAclBucketName, + testAclObjectName, roleEntityBasic2), + ), + }, + }, + }) +} + +func TestAccGoogleStorageObjectAcl_upgrade(t *testing.T) { + objectData := []byte("data data data") + ioutil.WriteFile(tfObjectAcl.Name(), objectData, 0644) + resource.Test(t, resource.TestCase{ + PreCheck: func() { + if errObjectAcl != nil { + panic(errObjectAcl) + } + testAccPreCheck(t) + }, + Providers: testAccProviders, + CheckDestroy: testAccGoogleStorageObjectAclDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testGoogleStorageObjectsAclBasic1, + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleStorageObjectAcl(testAclBucketName, + testAclObjectName, roleEntityBasic1), + 
testAccCheckGoogleStorageObjectAcl(testAclBucketName, + testAclObjectName, roleEntityBasic2), + ), + }, + + resource.TestStep{ + Config: testGoogleStorageObjectsAclBasic2, + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleStorageObjectAcl(testAclBucketName, + testAclObjectName, roleEntityBasic2), + testAccCheckGoogleStorageObjectAcl(testAclBucketName, + testAclObjectName, roleEntityBasic3_owner), + ), + }, + + resource.TestStep{ + Config: testGoogleStorageObjectsAclBasicDelete, + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleStorageObjectAclDelete(testAclBucketName, + testAclObjectName, roleEntityBasic1), + testAccCheckGoogleStorageObjectAclDelete(testAclBucketName, + testAclObjectName, roleEntityBasic2), + testAccCheckGoogleStorageObjectAclDelete(testAclBucketName, + testAclObjectName, roleEntityBasic3_reader), + ), + }, + }, + }) +} + +func TestAccGoogleStorageObjectAcl_downgrade(t *testing.T) { + objectData := []byte("data data data") + ioutil.WriteFile(tfObjectAcl.Name(), objectData, 0644) + resource.Test(t, resource.TestCase{ + PreCheck: func() { + if errObjectAcl != nil { + panic(errObjectAcl) + } + testAccPreCheck(t) + }, + Providers: testAccProviders, + CheckDestroy: testAccGoogleStorageObjectAclDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testGoogleStorageObjectsAclBasic2, + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleStorageObjectAcl(testAclBucketName, + testAclObjectName, roleEntityBasic2), + testAccCheckGoogleStorageObjectAcl(testAclBucketName, + testAclObjectName, roleEntityBasic3_owner), + ), + }, + + resource.TestStep{ + Config: testGoogleStorageObjectsAclBasic3, + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleStorageObjectAcl(testAclBucketName, + testAclObjectName, roleEntityBasic2), + testAccCheckGoogleStorageObjectAcl(testAclBucketName, + testAclObjectName, roleEntityBasic3_reader), + ), + }, + + resource.TestStep{ + Config: testGoogleStorageObjectsAclBasicDelete, + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleStorageObjectAclDelete(testAclBucketName, + testAclObjectName, roleEntityBasic1), + testAccCheckGoogleStorageObjectAclDelete(testAclBucketName, + testAclObjectName, roleEntityBasic2), + testAccCheckGoogleStorageObjectAclDelete(testAclBucketName, + testAclObjectName, roleEntityBasic3_reader), + ), + }, + }, + }) +} + +func TestAccGoogleStorageObjectAcl_predefined(t *testing.T) { + objectData := []byte("data data data") + ioutil.WriteFile(tfObjectAcl.Name(), objectData, 0644) + resource.Test(t, resource.TestCase{ + PreCheck: func() { + if errObjectAcl != nil { + panic(errObjectAcl) + } + testAccPreCheck(t) + }, + Providers: testAccProviders, + CheckDestroy: testAccGoogleStorageObjectAclDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testGoogleStorageObjectsAclPredefined, + }, + }, + }) +} + +func testAccCheckGoogleStorageObjectAcl(bucket, object, roleEntityS string) resource.TestCheckFunc { + return func(s *terraform.State) error { + roleEntity, _ := getRoleEntityPair(roleEntityS) + config := testAccProvider.Meta().(*Config) + + res, err := config.clientStorage.ObjectAccessControls.Get(bucket, + object, roleEntity.Entity).Do() + + if err != nil { + return fmt.Errorf("Error retrieving contents of acl for bucket %s: %s", bucket, err) + } + + if res.Role != roleEntity.Role { + return fmt.Errorf("Error, Role mismatch %s != %s", res.Role, roleEntity.Role) + } + + return nil + } +} + +func testAccCheckGoogleStorageObjectAclDelete(bucket, object, roleEntityS string) 
resource.TestCheckFunc { + return func(s *terraform.State) error { + roleEntity, _ := getRoleEntityPair(roleEntityS) + config := testAccProvider.Meta().(*Config) + + _, err := config.clientStorage.ObjectAccessControls.Get(bucket, + object, roleEntity.Entity).Do() + + if err != nil { + return nil + } + + return fmt.Errorf("Error, Entity still exists %s", roleEntity.Entity) + } +} + +func testAccGoogleStorageObjectAclDestroy(s *terraform.State) error { + config := testAccProvider.Meta().(*Config) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_storage_object_acl" { + continue + } + + bucket := rs.Primary.Attributes["bucket"] + object := rs.Primary.Attributes["object"] + + _, err := config.clientStorage.ObjectAccessControls.List(bucket, object).Do() + + if err == nil { + return fmt.Errorf("Acl for bucket %s still exists", bucket) + } + } + + return nil +} + +var testGoogleStorageObjectsAclBasicDelete = fmt.Sprintf(` +resource "google_storage_bucket" "bucket" { + name = "%s" +} + +resource "google_storage_bucket_object" "object" { + name = "%s" + bucket = "${google_storage_bucket.bucket.name}" + source = "%s" +} + +resource "google_storage_object_acl" "acl" { + object = "${google_storage_bucket_object.object.name}" + bucket = "${google_storage_bucket.bucket.name}" + role_entity = [] +} +`, testAclBucketName, testAclObjectName, tfObjectAcl.Name()) + +var testGoogleStorageObjectsAclBasic1 = fmt.Sprintf(` +resource "google_storage_bucket" "bucket" { + name = "%s" +} + +resource "google_storage_bucket_object" "object" { + name = "%s" + bucket = "${google_storage_bucket.bucket.name}" + source = "%s" +} + +resource "google_storage_object_acl" "acl" { + object = "${google_storage_bucket_object.object.name}" + bucket = "${google_storage_bucket.bucket.name}" + role_entity = ["%s", "%s"] +} +`, testAclBucketName, testAclObjectName, tfObjectAcl.Name(), + roleEntityBasic1, roleEntityBasic2) + +var testGoogleStorageObjectsAclBasic2 = fmt.Sprintf(` +resource "google_storage_bucket" "bucket" { + name = "%s" +} + +resource "google_storage_bucket_object" "object" { + name = "%s" + bucket = "${google_storage_bucket.bucket.name}" + source = "%s" +} + +resource "google_storage_object_acl" "acl" { + object = "${google_storage_bucket_object.object.name}" + bucket = "${google_storage_bucket.bucket.name}" + role_entity = ["%s", "%s"] +} +`, testAclBucketName, testAclObjectName, tfObjectAcl.Name(), + roleEntityBasic2, roleEntityBasic3_owner) + +var testGoogleStorageObjectsAclBasic3 = fmt.Sprintf(` +resource "google_storage_bucket" "bucket" { + name = "%s" +} + +resource "google_storage_bucket_object" "object" { + name = "%s" + bucket = "${google_storage_bucket.bucket.name}" + source = "%s" +} + +resource "google_storage_object_acl" "acl" { + object = "${google_storage_bucket_object.object.name}" + bucket = "${google_storage_bucket.bucket.name}" + role_entity = ["%s", "%s"] +} +`, testAclBucketName, testAclObjectName, tfObjectAcl.Name(), + roleEntityBasic2, roleEntityBasic3_reader) + +var testGoogleStorageObjectsAclPredefined = fmt.Sprintf(` +resource "google_storage_bucket" "bucket" { + name = "%s" +} + +resource "google_storage_bucket_object" "object" { + name = "%s" + bucket = "${google_storage_bucket.bucket.name}" + source = "%s" +} + +resource "google_storage_object_acl" "acl" { + object = "${google_storage_bucket_object.object.name}" + bucket = "${google_storage_bucket.bucket.name}" + predefined_acl = "projectPrivate" +} +`, testAclBucketName, testAclObjectName, tfObjectAcl.Name())
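Both new ACL resources reconcile role_entity changes with the same map-based algorithm: index the old pairs by entity, update or insert each new pair, and delete whatever is left in the old index afterwards. A minimal, self-contained sketch of that algorithm follows, reusing the RoleEntity type introduced above; the update, insert and remove callbacks are hypothetical stand-ins for the BucketAccessControls and ObjectAccessControls calls.

package main

import "fmt"

// RoleEntity mirrors the type introduced in resource_storage_bucket_acl.go.
type RoleEntity struct {
	Role   string
	Entity string
}

// reconcile applies newREs on top of oldREs: entities present in both lists
// are updated, entities only in newREs are inserted, and entities left over
// in the old index afterwards are deleted.
func reconcile(oldREs, newREs []RoleEntity,
	update, insert func(RoleEntity) error,
	remove func(entity string) error) error {

	oldMap := make(map[string]string, len(oldREs))
	for _, re := range oldREs {
		oldMap[re.Entity] = re.Role
	}

	for _, re := range newREs {
		var err error
		if _, ok := oldMap[re.Entity]; ok {
			err = update(re)
		} else {
			err = insert(re)
		}
		if err != nil {
			return err
		}
		// Only entities that were not re-asserted stay in oldMap.
		delete(oldMap, re.Entity)
	}

	for entity := range oldMap {
		if err := remove(entity); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	oldREs := []RoleEntity{{"OWNER", "user-a"}, {"READER", "user-b"}}
	newREs := []RoleEntity{{"OWNER", "user-a"}, {"READER", "user-c"}}
	op := func(name string) func(RoleEntity) error {
		return func(re RoleEntity) error { fmt.Println(name, re.Entity); return nil }
	}
	// Prints (map order aside): update user-a, insert user-c, delete user-b.
	reconcile(oldREs, newREs, op("update"), op("insert"), func(e string) error {
		fmt.Println("delete", e)
		return nil
	})
}

Deleting each re-asserted entity from the old index inside the loop is what leaves exactly the orphaned entities behind for removal, which is the same trick the Update functions in this diff rely on.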
diff --git a/builtin/providers/heroku/resource_heroku_app.go b/builtin/providers/heroku/resource_heroku_app.go index 52954aa5d..4c2f3bf97 100644 --- a/builtin/providers/heroku/resource_heroku_app.go +++ b/builtin/providers/heroku/resource_heroku_app.go @@ -5,7 +5,7 @@ import ( "log" "github.com/cyberdelia/heroku-go/v3" - "github.com/hashicorp/terraform/helper/multierror" + "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform/helper/schema" ) diff --git a/builtin/providers/null/resource.go b/builtin/providers/null/resource.go index 0badf346c..ad56ba4bd 100644 --- a/builtin/providers/null/resource.go +++ b/builtin/providers/null/resource.go @@ -16,7 +16,6 @@ func resource() *schema.Resource { return &schema.Resource{ Create: resourceCreate, Read: resourceRead, - Update: resourceUpdate, Delete: resourceDelete, Schema: map[string]*schema.Schema{}, @@ -32,10 +31,6 @@ func resourceRead(d *schema.ResourceData, meta interface{}) error { return nil } -func resourceUpdate(d *schema.ResourceData, meta interface{}) error { - return nil -} - func resourceDelete(d *schema.ResourceData, meta interface{}) error { d.SetId("") return nil diff --git a/builtin/providers/openstack/resource_openstack_blockstorage_volume_v1.go b/builtin/providers/openstack/resource_openstack_blockstorage_volume_v1.go index cd5a5d567..e049269a9 100644 --- a/builtin/providers/openstack/resource_openstack_blockstorage_volume_v1.go +++ b/builtin/providers/openstack/resource_openstack_blockstorage_volume_v1.go @@ -136,7 +136,7 @@ func resourceBlockStorageVolumeV1Create(d *schema.ResourceData, meta interface{} v.ID) stateConf := &resource.StateChangeConf{ - Pending: []string{"downloading"}, + Pending: []string{"downloading"}, Target: "available", Refresh: VolumeV1StateRefreshFunc(blockStorageClient, v.ID), Timeout: 10 * time.Minute, diff --git a/builtin/providers/openstack/resource_openstack_compute_instance_v2.go b/builtin/providers/openstack/resource_openstack_compute_instance_v2.go index 75014cc75..3101f41bc 100644 --- a/builtin/providers/openstack/resource_openstack_compute_instance_v2.go +++ b/builtin/providers/openstack/resource_openstack_compute_instance_v2.go @@ -610,7 +610,6 @@ func resourceComputeInstanceV2Update(d *schema.ResourceData, meta interface{}) e log.Printf("[DEBUG] Security groups to remove: %v", secgroupsToRemove) - for _, g := range secgroupsToRemove.List() { err := secgroups.RemoveServerFromGroup(computeClient, d.Id(), g.(string)).ExtractErr() if err != nil { diff --git a/builtin/providers/openstack/resource_openstack_compute_secgroup_v2.go b/builtin/providers/openstack/resource_openstack_compute_secgroup_v2.go index e6d8be8ea..18812cb59 100644 --- a/builtin/providers/openstack/resource_openstack_compute_secgroup_v2.go +++ b/builtin/providers/openstack/resource_openstack_compute_secgroup_v2.go @@ -219,7 +219,7 @@ func resourceComputeSecGroupV2Delete(d *schema.ResourceData, meta interface{}) e } func resourceSecGroupRulesV2(d *schema.ResourceData) []secgroups.CreateRuleOpts { - rawRules := (d.Get("rule")).([]interface{}) + rawRules := d.Get("rule").([]interface{}) createRuleOptsList := make([]secgroups.CreateRuleOpts, len(rawRules)) for i, raw := range rawRules { rawMap := raw.(map[string]interface{}) diff --git a/builtin/providers/openstack/resource_openstack_lb_pool_v1.go b/builtin/providers/openstack/resource_openstack_lb_pool_v1.go index 1384796d5..64e0436db 100644 --- a/builtin/providers/openstack/resource_openstack_lb_pool_v1.go +++ b/builtin/providers/openstack/resource_openstack_lb_pool_v1.go
@@ -292,7 +292,7 @@ func resourcePoolMonitorIDsV1(d *schema.ResourceData) []string { } func resourcePoolMembersV1(d *schema.ResourceData) []members.CreateOpts { - memberOptsRaw := (d.Get("member")).(*schema.Set) + memberOptsRaw := d.Get("member").(*schema.Set) memberOpts := make([]members.CreateOpts, memberOptsRaw.Len()) for i, raw := range memberOptsRaw.List() { rawMap := raw.(map[string]interface{}) diff --git a/builtin/providers/openstack/resource_openstack_networking_floatingip_v2.go b/builtin/providers/openstack/resource_openstack_networking_floatingip_v2.go index 1b81c6a96..37f1ca7cf 100644 --- a/builtin/providers/openstack/resource_openstack_networking_floatingip_v2.go +++ b/builtin/providers/openstack/resource_openstack_networking_floatingip_v2.go @@ -14,6 +14,7 @@ func resourceNetworkingFloatingIPV2() *schema.Resource { return &schema.Resource{ Create: resourceNetworkFloatingIPV2Create, Read: resourceNetworkFloatingIPV2Read, + Update: resourceNetworkFloatingIPV2Update, Delete: resourceNetworkFloatingIPV2Delete, Schema: map[string]*schema.Schema{ @@ -33,6 +34,11 @@ func resourceNetworkingFloatingIPV2() *schema.Resource { ForceNew: true, DefaultFunc: envDefaultFunc("OS_POOL_NAME"), }, + "port_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "", + }, }, } } @@ -53,6 +59,7 @@ func resourceNetworkFloatingIPV2Create(d *schema.ResourceData, meta interface{}) } createOpts := floatingips.CreateOpts{ FloatingNetworkID: poolID, + PortID: d.Get("port_id").(string), } log.Printf("[DEBUG] Create Options: %#v", createOpts) floatingIP, err := floatingips.Create(networkClient, createOpts).Extract() @@ -78,6 +85,7 @@ func resourceNetworkFloatingIPV2Read(d *schema.ResourceData, meta interface{}) e } d.Set("address", floatingIP.FloatingIP) + d.Set("port_id", floatingIP.PortID) poolName, err := getNetworkName(d, meta, floatingIP.FloatingNetworkID) if err != nil { return fmt.Errorf("Error retrieving floating IP pool name: %s", err) @@ -87,6 +95,29 @@ func resourceNetworkFloatingIPV2Read(d *schema.ResourceData, meta interface{}) e return nil } +func resourceNetworkFloatingIPV2Update(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack network client: %s", err) + } + + var updateOpts floatingips.UpdateOpts + + if d.HasChange("port_id") { + updateOpts.PortID = d.Get("port_id").(string) + } + + log.Printf("[DEBUG] Update Options: %#v", updateOpts) + + _, err = floatingips.Update(networkClient, d.Id(), updateOpts).Extract() + if err != nil { + return fmt.Errorf("Error updating floating IP: %s", err) + } + + return resourceNetworkFloatingIPV2Read(d, meta) +} + func resourceNetworkFloatingIPV2Delete(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) networkClient, err := config.networkingV2Client(d.Get("region").(string)) diff --git a/builtin/providers/packet/config.go b/builtin/providers/packet/config.go new file mode 100644 index 000000000..659ee9ebc --- /dev/null +++ b/builtin/providers/packet/config.go @@ -0,0 +1,18 @@ +package packet + +import ( + "github.com/packethost/packngo" +) + +const ( + consumerToken = "aZ9GmqHTPtxevvFq9SK3Pi2yr9YCbRzduCSXF2SNem5sjB91mDq7Th3ZwTtRqMWZ" +) + +type Config struct { + AuthToken string +} + +// Client() returns a new client for accessing packet. 
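// The consumer token is the fixed constant defined above; only the per-user + // auth token varies, and since packngo.NewClient returns no error value, + // constructing the client cannot fail here.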
+func (c *Config) Client() *packngo.Client { + return packngo.NewClient(consumerToken, c.AuthToken) +} diff --git a/builtin/providers/packet/provider.go b/builtin/providers/packet/provider.go new file mode 100644 index 000000000..c1efd6e83 --- /dev/null +++ b/builtin/providers/packet/provider.go @@ -0,0 +1,36 @@ +package packet + +import ( + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +// Provider returns a schema.Provider for Packet. +func Provider() terraform.ResourceProvider { + return &schema.Provider{ + Schema: map[string]*schema.Schema{ + "auth_token": &schema.Schema{ + Type: schema.TypeString, + Required: true, + DefaultFunc: schema.EnvDefaultFunc("PACKET_AUTH_TOKEN", nil), + Description: "The API auth key for API operations.", + }, + }, + + ResourcesMap: map[string]*schema.Resource{ + "packet_device": resourcePacketDevice(), + "packet_ssh_key": resourcePacketSSHKey(), + "packet_project": resourcePacketProject(), + }, + + ConfigureFunc: providerConfigure, + } +} + +func providerConfigure(d *schema.ResourceData) (interface{}, error) { + config := Config{ + AuthToken: d.Get("auth_token").(string), + } + + return config.Client(), nil +} diff --git a/builtin/providers/packet/provider_test.go b/builtin/providers/packet/provider_test.go new file mode 100644 index 000000000..5483c4fb0 --- /dev/null +++ b/builtin/providers/packet/provider_test.go @@ -0,0 +1,35 @@ +package packet + +import ( + "os" + "testing" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +var testAccProviders map[string]terraform.ResourceProvider +var testAccProvider *schema.Provider + +func init() { + testAccProvider = Provider().(*schema.Provider) + testAccProviders = map[string]terraform.ResourceProvider{ + "packet": testAccProvider, + } +} + +func TestProvider(t *testing.T) { + if err := Provider().(*schema.Provider).InternalValidate(); err != nil { + t.Fatalf("err: %s", err) + } +} + +func TestProvider_impl(t *testing.T) { + var _ terraform.ResourceProvider = Provider() +} + +func testAccPreCheck(t *testing.T) { + if v := os.Getenv("PACKET_AUTH_TOKEN"); v == "" { + t.Fatal("PACKET_AUTH_TOKEN must be set for acceptance tests") + } +} diff --git a/builtin/providers/packet/resource_packet_device.go b/builtin/providers/packet/resource_packet_device.go new file mode 100644 index 000000000..56fc7afe5 --- /dev/null +++ b/builtin/providers/packet/resource_packet_device.go @@ -0,0 +1,302 @@ +package packet + +import ( + "fmt" + "log" + "time" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" + "github.com/packethost/packngo" +) + +func resourcePacketDevice() *schema.Resource { + return &schema.Resource{ + Create: resourcePacketDeviceCreate, + Read: resourcePacketDeviceRead, + Update: resourcePacketDeviceUpdate, + Delete: resourcePacketDeviceDelete, + + Schema: map[string]*schema.Schema{ + "project_id": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "hostname": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "operating_system": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "facility": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "plan": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "billing_cycle": &schema.Schema{ + Type: schema.TypeString, + Required: true, + 
ForceNew: true, + }, + + "state": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "locked": &schema.Schema{ + Type: schema.TypeBool, + Computed: true, + }, + + "network": &schema.Schema{ + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "address": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "gateway": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "family": &schema.Schema{ + Type: schema.TypeInt, + Computed: true, + }, + + "cidr": &schema.Schema{ + Type: schema.TypeInt, + Computed: true, + }, + + "public": &schema.Schema{ + Type: schema.TypeBool, + Computed: true, + }, + }, + }, + }, + + "created": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "updated": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "user_data": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + + "tags": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + }, + } +} + +func resourcePacketDeviceCreate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*packngo.Client) + + createRequest := &packngo.DeviceCreateRequest{ + HostName: d.Get("hostname").(string), + Plan: d.Get("plan").(string), + Facility: d.Get("facility").(string), + OS: d.Get("operating_system").(string), + BillingCycle: d.Get("billing_cycle").(string), + ProjectID: d.Get("project_id").(string), + } + + if attr, ok := d.GetOk("user_data"); ok { + createRequest.UserData = attr.(string) + } + + tags := d.Get("tags.#").(int) + if tags > 0 { + createRequest.Tags = make([]string, 0, tags) + for i := 0; i < tags; i++ { + key := fmt.Sprintf("tags.%d", i) + createRequest.Tags = append(createRequest.Tags, d.Get(key).(string)) + } + } + + log.Printf("[DEBUG] Device create configuration: %#v", createRequest) + + newDevice, _, err := client.Devices.Create(createRequest) + if err != nil { + return fmt.Errorf("Error creating device: %s", err) + } + + // Assign the device id + d.SetId(newDevice.ID) + + log.Printf("[INFO] Device ID: %s", d.Id()) + + _, err = WaitForDeviceAttribute(d, "active", []string{"provisioning"}, "state", meta) + if err != nil { + return fmt.Errorf( + "Error waiting for device (%s) to become ready: %s", d.Id(), err) + } + + return resourcePacketDeviceRead(d, meta) +} + +func resourcePacketDeviceRead(d *schema.ResourceData, meta interface{}) error { + client := meta.(*packngo.Client) + + // Retrieve the device properties for updating the state + device, _, err := client.Devices.Get(d.Id()) + if err != nil { + return fmt.Errorf("Error retrieving device: %s", err) + } + + d.Set("hostname", device.Hostname) + d.Set("plan", device.Plan.Slug) + d.Set("facility", device.Facility.Code) + d.Set("operating_system", device.OS.Slug) + d.Set("state", device.State) + d.Set("billing_cycle", device.BillingCycle) + d.Set("locked", device.Locked) + d.Set("created", device.Created) + d.Set("updated", device.Updated) + + tags := make([]string, 0) + for _, tag := range device.Tags { + tags = append(tags, tag) + } + d.Set("tags", tags) + + networks := make([]map[string]interface{}, 0, 1) + for _, ip := range device.Network { + network := make(map[string]interface{}) + network["address"] = ip.Address + network["gateway"] = ip.Gateway + network["family"] = ip.Family + network["cidr"] = ip.Cidr + network["public"] = ip.Public + networks = append(networks, network) + } + d.Set("network", networks) + + return nil
+} + +func resourcePacketDeviceUpdate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*packngo.Client) + + if d.HasChange("locked") && d.Get("locked").(bool) { + _, err := client.Devices.Lock(d.Id()) + + if err != nil { + return fmt.Errorf( + "Error locking device (%s): %s", d.Id(), err) + } + } else if d.HasChange("locked") { + _, err := client.Devices.Unlock(d.Id()) + + if err != nil { + return fmt.Errorf( + "Error unlocking device (%s): %s", d.Id(), err) + } + } + + return resourcePacketDeviceRead(d, meta) +} + +func resourcePacketDeviceDelete(d *schema.ResourceData, meta interface{}) error { + client := meta.(*packngo.Client) + + log.Printf("[INFO] Deleting device: %s", d.Id()) + if _, err := client.Devices.Delete(d.Id()); err != nil { + return fmt.Errorf("Error deleting device: %s", err) + } + + return nil +} + +func WaitForDeviceAttribute( + d *schema.ResourceData, target string, pending []string, attribute string, meta interface{}) (interface{}, error) { + // Wait for the device so we can get the networking attributes + // that show up after a while + log.Printf( + "[INFO] Waiting for device (%s) to have %s of %s", + d.Id(), attribute, target) + + stateConf := &resource.StateChangeConf{ + Pending: pending, + Target: target, + Refresh: newDeviceStateRefreshFunc(d, attribute, meta), + Timeout: 60 * time.Minute, + Delay: 10 * time.Second, + MinTimeout: 3 * time.Second, + } + + return stateConf.WaitForState() +} + +func newDeviceStateRefreshFunc( + d *schema.ResourceData, attribute string, meta interface{}) resource.StateRefreshFunc { + client := meta.(*packngo.Client) + return func() (interface{}, string, error) { + err := resourcePacketDeviceRead(d, meta) + if err != nil { + return nil, "", err + } + + // See if we can access our attribute + if attr, ok := d.GetOk(attribute); ok { + // Retrieve the device properties + device, _, err := client.Devices.Get(d.Id()) + if err != nil { + return nil, "", fmt.Errorf("Error retrieving device: %s", err) + } + + return &device, attr.(string), nil + } + + return nil, "", nil + } +} + +// Powers on the device and waits for it to be active +func powerOnAndWait(d *schema.ResourceData, meta interface{}) error { + client := meta.(*packngo.Client) + _, err := client.Devices.PowerOn(d.Id()) + if err != nil { + return err + } + + // Wait for power on + _, err = WaitForDeviceAttribute(d, "active", []string{"off"}, "state", client) + if err != nil { + return err + } + + return nil +} diff --git a/builtin/providers/packet/resource_packet_project.go b/builtin/providers/packet/resource_packet_project.go new file mode 100644 index 000000000..e41ef1381 --- /dev/null +++ b/builtin/providers/packet/resource_packet_project.go @@ -0,0 +1,123 @@ +package packet + +import ( + "fmt" + "log" + "strings" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/packethost/packngo" +) + +func resourcePacketProject() *schema.Resource { + return &schema.Resource{ + Create: resourcePacketProjectCreate, + Read: resourcePacketProjectRead, + Update: resourcePacketProjectUpdate, + Delete: resourcePacketProjectDelete, + + Schema: map[string]*schema.Schema{ + "id": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "payment_method": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + + "created": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "updated": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + 
}, + } +} + +func resourcePacketProjectCreate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*packngo.Client) + + createRequest := &packngo.ProjectCreateRequest{ + Name: d.Get("name").(string), + PaymentMethod: d.Get("payment_method").(string), + } + + log.Printf("[DEBUG] Project create configuration: %#v", createRequest) + project, _, err := client.Projects.Create(createRequest) + if err != nil { + return fmt.Errorf("Error creating Project: %s", err) + } + + d.SetId(project.ID) + log.Printf("[INFO] Project created: %s", project.ID) + + return resourcePacketProjectRead(d, meta) +} + +func resourcePacketProjectRead(d *schema.ResourceData, meta interface{}) error { + client := meta.(*packngo.Client) + + key, _, err := client.Projects.Get(d.Id()) + if err != nil { + // If the project was somehow already destroyed, mark it as + // successfully gone + if strings.Contains(err.Error(), "404") { + d.SetId("") + return nil + } + + return fmt.Errorf("Error retrieving Project: %s", err) + } + + d.Set("id", key.ID) + d.Set("name", key.Name) + d.Set("created", key.Created) + d.Set("updated", key.Updated) + + return nil +} + +func resourcePacketProjectUpdate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*packngo.Client) + + updateRequest := &packngo.ProjectUpdateRequest{ + ID: d.Get("id").(string), + Name: d.Get("name").(string), + } + + if attr, ok := d.GetOk("payment_method"); ok { + updateRequest.PaymentMethod = attr.(string) + } + + log.Printf("[DEBUG] Project update: %#v", d.Get("id")) + _, _, err := client.Projects.Update(updateRequest) + if err != nil { + return fmt.Errorf("Failed to update Project: %s", err) + } + + return resourcePacketProjectRead(d, meta) +} + +func resourcePacketProjectDelete(d *schema.ResourceData, meta interface{}) error { + client := meta.(*packngo.Client) + + log.Printf("[INFO] Deleting Project: %s", d.Id()) + _, err := client.Projects.Delete(d.Id()) + if err != nil { + return fmt.Errorf("Error deleting project: %s", err) + } + + d.SetId("") + return nil +} diff --git a/builtin/providers/packet/resource_packet_project_test.go b/builtin/providers/packet/resource_packet_project_test.go new file mode 100644 index 000000000..b0179cfbe --- /dev/null +++ b/builtin/providers/packet/resource_packet_project_test.go @@ -0,0 +1,95 @@ +package packet + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + "github.com/packethost/packngo" +) + +func TestAccPacketProject_Basic(t *testing.T) { + var project packngo.Project + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckPacketProjectDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccCheckPacketProjectConfig_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckPacketProjectExists("packet_project.foobar", &project), + testAccCheckPacketProjectAttributes(&project), + resource.TestCheckResourceAttr( + "packet_project.foobar", "name", "foobar"), + ), + }, + }, + }) +} + +func testAccCheckPacketProjectDestroy(s *terraform.State) error { + client := testAccProvider.Meta().(*packngo.Client) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "packet_project" { + continue + } + + _, _, err := client.Projects.Get(rs.Primary.ID) + + if err == nil { + return fmt.Errorf("Project still exists") + } + } + + return nil +} + +func testAccCheckPacketProjectAttributes(project *packngo.Project)
resource.TestCheckFunc { + return func(s *terraform.State) error { + + if project.Name != "foobar" { + return fmt.Errorf("Bad name: %s", project.Name) + } + + return nil + } +} + +func testAccCheckPacketProjectExists(n string, project *packngo.Project) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Record ID is set") + } + + client := testAccProvider.Meta().(*packngo.Client) + + foundProject, _, err := client.Projects.Get(rs.Primary.ID) + + if err != nil { + return err + } + + if foundProject.ID != rs.Primary.ID { + return fmt.Errorf("Record not found: %v - %v", rs.Primary.ID, foundProject) + } + + *project = *foundProject + + return nil + } +} + +var testAccCheckPacketProjectConfig_basic = ` +resource "packet_project" "foobar" { + name = "foobar" +}` diff --git a/builtin/providers/packet/resource_packet_ssh_key.go b/builtin/providers/packet/resource_packet_ssh_key.go new file mode 100644 index 000000000..95e04bd8c --- /dev/null +++ b/builtin/providers/packet/resource_packet_ssh_key.go @@ -0,0 +1,128 @@ +package packet + +import ( + "fmt" + "log" + "strings" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/packethost/packngo" +) + +func resourcePacketSSHKey() *schema.Resource { + return &schema.Resource{ + Create: resourcePacketSSHKeyCreate, + Read: resourcePacketSSHKeyRead, + Update: resourcePacketSSHKeyUpdate, + Delete: resourcePacketSSHKeyDelete, + + Schema: map[string]*schema.Schema{ + "id": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "public_key": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "fingerprint": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "created": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "updated": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func resourcePacketSSHKeyCreate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*packngo.Client) + + createRequest := &packngo.SSHKeyCreateRequest{ + Label: d.Get("name").(string), + Key: d.Get("public_key").(string), + } + + log.Printf("[DEBUG] SSH Key create configuration: %#v", createRequest) + key, _, err := client.SSHKeys.Create(createRequest) + if err != nil { + return fmt.Errorf("Error creating SSH Key: %s", err) + } + + d.SetId(key.ID) + log.Printf("[INFO] SSH Key: %s", key.ID) + + return resourcePacketSSHKeyRead(d, meta) +} + +func resourcePacketSSHKeyRead(d *schema.ResourceData, meta interface{}) error { + client := meta.(*packngo.Client) + + key, _, err := client.SSHKeys.Get(d.Id()) + if err != nil { + // If the key was somehow already destroyed, mark it as + // successfully gone + if strings.Contains(err.Error(), "404") { + d.SetId("") + return nil + } + + return fmt.Errorf("Error retrieving SSH key: %s", err) + } + + d.Set("id", key.ID) + d.Set("name", key.Label) + d.Set("public_key", key.Key) + d.Set("fingerprint", key.FingerPrint) + d.Set("created", key.Created) + d.Set("updated", key.Updated) + + return nil +} + +func resourcePacketSSHKeyUpdate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*packngo.Client) + + updateRequest := &packngo.SSHKeyUpdateRequest{ + ID: d.Get("id").(string), + Label: d.Get("name").(string), + Key: d.Get("public_key").(string),
+	}
+
+	log.Printf("[DEBUG] SSH key update: %#v", d.Get("id"))
+	_, _, err := client.SSHKeys.Update(updateRequest)
+	if err != nil {
+		return fmt.Errorf("Failed to update SSH key: %s", err)
+	}
+
+	return resourcePacketSSHKeyRead(d, meta)
+}
+
+func resourcePacketSSHKeyDelete(d *schema.ResourceData, meta interface{}) error {
+	client := meta.(*packngo.Client)
+
+	log.Printf("[INFO] Deleting SSH key: %s", d.Id())
+	_, err := client.SSHKeys.Delete(d.Id())
+	if err != nil {
+		return fmt.Errorf("Error deleting SSH key: %s", err)
+	}
+
+	d.SetId("")
+	return nil
+}
diff --git a/builtin/providers/packet/resource_packet_ssh_key_test.go b/builtin/providers/packet/resource_packet_ssh_key_test.go
new file mode 100644
index 000000000..765086d4f
--- /dev/null
+++ b/builtin/providers/packet/resource_packet_ssh_key_test.go
@@ -0,0 +1,104 @@
+package packet
+
+import (
+	"fmt"
+	"strings"
+	"testing"
+
+	"github.com/hashicorp/terraform/helper/resource"
+	"github.com/hashicorp/terraform/terraform"
+	"github.com/packethost/packngo"
+)
+
+func TestAccPacketSSHKey_Basic(t *testing.T) {
+	var key packngo.SSHKey
+
+	resource.Test(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testAccCheckPacketSSHKeyDestroy,
+		Steps: []resource.TestStep{
+			resource.TestStep{
+				Config: testAccCheckPacketSSHKeyConfig_basic,
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckPacketSSHKeyExists("packet_ssh_key.foobar", &key),
+					testAccCheckPacketSSHKeyAttributes(&key),
+					resource.TestCheckResourceAttr(
+						"packet_ssh_key.foobar", "name", "foobar"),
+					resource.TestCheckResourceAttr(
+						"packet_ssh_key.foobar", "public_key", testAccValidPublicKey),
+				),
+			},
+		},
+	})
+}
+
+func testAccCheckPacketSSHKeyDestroy(s *terraform.State) error {
+	client := testAccProvider.Meta().(*packngo.Client)
+
+	for _, rs := range s.RootModule().Resources {
+		if rs.Type != "packet_ssh_key" {
+			continue
+		}
+
+		_, _, err := client.SSHKeys.Get(rs.Primary.ID)
+
+		if err == nil {
+			return fmt.Errorf("SSH key still exists")
+		}
+	}
+
+	return nil
+}
+
+func testAccCheckPacketSSHKeyAttributes(key *packngo.SSHKey) resource.TestCheckFunc {
+	return func(s *terraform.State) error {
+
+		if key.Label != "foobar" {
+			return fmt.Errorf("Bad name: %s", key.Label)
+		}
+
+		return nil
+	}
+}
+
+func testAccCheckPacketSSHKeyExists(n string, key *packngo.SSHKey) resource.TestCheckFunc {
+	return func(s *terraform.State) error {
+		rs, ok := s.RootModule().Resources[n]
+
+		if !ok {
+			return fmt.Errorf("Not found: %s", n)
+		}
+
+		if rs.Primary.ID == "" {
+			return fmt.Errorf("No Record ID is set")
+		}
+
+		client := testAccProvider.Meta().(*packngo.Client)
+
+		foundKey, _, err := client.SSHKeys.Get(rs.Primary.ID)
+
+		if err != nil {
+			return err
+		}
+
+		if foundKey.ID != rs.Primary.ID {
+			return fmt.Errorf("SSH Key not found: %v - %v", rs.Primary.ID, foundKey)
+		}
+
+		*key = *foundKey
+
+		return nil
+	}
+}
+
+var testAccCheckPacketSSHKeyConfig_basic = fmt.Sprintf(`
+resource "packet_ssh_key" "foobar" {
+    name = "foobar"
+    public_key = "%s"
+}`, testAccValidPublicKey)
+
+var testAccValidPublicKey = strings.TrimSpace(`
+ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABAQCKVmnMOlHKcZK8tpt3MP1lqOLAcqcJzhsvJcjscgVERRN7/9484SOBJ3HSKxxNG5JN8owAjy5f9yYwcUg+JaUVuytn5Pv3aeYROHGGg+5G346xaq3DAwX6Y5ykr2fvjObgncQBnuU5KHWCECO/4h8uWuwh/kfniXPVjFToc+gnkqA+3RKpAecZhFXwfalQ9mMuYGFxn+fwn8cYEApsJbsEmb0iJwPiZ5hjFC8wREuiTlhPHDgkBLOiycd20op2nXzDbHfCHInquEe/gYxEitALONxm0swBOwJZwlTDOB7C6y2dzlrtxr1L59m7pCkWI4EtTRLvleehBoj3u7jB4usR +`) diff --git a/builtin/providers/rundeck/resource_job.go b/builtin/providers/rundeck/resource_job.go index 7411d5746..c9af25b0b 100644 --- a/builtin/providers/rundeck/resource_job.go +++ b/builtin/providers/rundeck/resource_job.go @@ -340,10 +340,10 @@ func jobFromResourceData(d *schema.ResourceData) (*rundeck.JobDetail, error) { LogLevel: d.Get("log_level").(string), AllowConcurrentExecutions: d.Get("allow_concurrent_executions").(bool), Dispatch: &rundeck.JobDispatch{ - MaxThreadCount: d.Get("max_thread_count").(int), - ContinueOnError: d.Get("continue_on_error").(bool), - RankAttribute: d.Get("rank_attribute").(string), - RankOrder: d.Get("rank_order").(string), + MaxThreadCount: d.Get("max_thread_count").(int), + ContinueOnError: d.Get("continue_on_error").(bool), + RankAttribute: d.Get("rank_attribute").(string), + RankOrder: d.Get("rank_order").(string), }, } diff --git a/builtin/providers/vsphere/config.go b/builtin/providers/vsphere/config.go new file mode 100644 index 000000000..1f6af7ffd --- /dev/null +++ b/builtin/providers/vsphere/config.go @@ -0,0 +1,39 @@ +package vsphere + +import ( + "fmt" + "log" + "net/url" + + "github.com/vmware/govmomi" + "golang.org/x/net/context" +) + +const ( + defaultInsecureFlag = true +) + +type Config struct { + User string + Password string + VCenterServer string +} + +// Client() returns a new client for accessing VMWare vSphere. +func (c *Config) Client() (*govmomi.Client, error) { + u, err := url.Parse("https://" + c.VCenterServer + "/sdk") + if err != nil { + return nil, fmt.Errorf("Error parse url: %s", err) + } + + u.User = url.UserPassword(c.User, c.Password) + + client, err := govmomi.NewClient(context.TODO(), u, defaultInsecureFlag) + if err != nil { + return nil, fmt.Errorf("Error setting up client: %s", err) + } + + log.Printf("[INFO] VMWare vSphere Client configured for URL: %s", u) + + return client, nil +} diff --git a/builtin/providers/vsphere/provider.go b/builtin/providers/vsphere/provider.go new file mode 100644 index 000000000..4dce81a9d --- /dev/null +++ b/builtin/providers/vsphere/provider.go @@ -0,0 +1,50 @@ +package vsphere + +import ( + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +// Provider returns a terraform.ResourceProvider. 
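+// Each provider argument falls back to an environment variable
+// (VSPHERE_USER, VSPHERE_PASSWORD, VSPHERE_VCENTER) via
+// schema.EnvDefaultFunc, so an empty provider block is valid when
+// those variables are exported.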
+func Provider() terraform.ResourceProvider { + return &schema.Provider{ + Schema: map[string]*schema.Schema{ + "user": &schema.Schema{ + Type: schema.TypeString, + Required: true, + DefaultFunc: schema.EnvDefaultFunc("VSPHERE_USER", nil), + Description: "The user name for vSphere API operations.", + }, + + "password": &schema.Schema{ + Type: schema.TypeString, + Required: true, + DefaultFunc: schema.EnvDefaultFunc("VSPHERE_PASSWORD", nil), + Description: "The user password for vSphere API operations.", + }, + + "vcenter_server": &schema.Schema{ + Type: schema.TypeString, + Required: true, + DefaultFunc: schema.EnvDefaultFunc("VSPHERE_VCENTER", nil), + Description: "The vCenter Server name for vSphere API operations.", + }, + }, + + ResourcesMap: map[string]*schema.Resource{ + "vsphere_virtual_machine": resourceVSphereVirtualMachine(), + }, + + ConfigureFunc: providerConfigure, + } +} + +func providerConfigure(d *schema.ResourceData) (interface{}, error) { + config := Config{ + User: d.Get("user").(string), + Password: d.Get("password").(string), + VCenterServer: d.Get("vcenter_server").(string), + } + + return config.Client() +} diff --git a/builtin/providers/vsphere/provider_test.go b/builtin/providers/vsphere/provider_test.go new file mode 100644 index 000000000..bb8e4dc55 --- /dev/null +++ b/builtin/providers/vsphere/provider_test.go @@ -0,0 +1,43 @@ +package vsphere + +import ( + "os" + "testing" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +var testAccProviders map[string]terraform.ResourceProvider +var testAccProvider *schema.Provider + +func init() { + testAccProvider = Provider().(*schema.Provider) + testAccProviders = map[string]terraform.ResourceProvider{ + "vsphere": testAccProvider, + } +} + +func TestProvider(t *testing.T) { + if err := Provider().(*schema.Provider).InternalValidate(); err != nil { + t.Fatalf("err: %s", err) + } +} + +func TestProvider_impl(t *testing.T) { + var _ terraform.ResourceProvider = Provider() +} + +func testAccPreCheck(t *testing.T) { + if v := os.Getenv("VSPHERE_USER"); v == "" { + t.Fatal("VSPHERE_USER must be set for acceptance tests") + } + + if v := os.Getenv("VSPHERE_PASSWORD"); v == "" { + t.Fatal("VSPHERE_PASSWORD must be set for acceptance tests") + } + + if v := os.Getenv("VSPHERE_VCENTER"); v == "" { + t.Fatal("VSPHERE_VCENTER must be set for acceptance tests") + } +} diff --git a/builtin/providers/vsphere/resource_vsphere_virtual_machine.go b/builtin/providers/vsphere/resource_vsphere_virtual_machine.go new file mode 100644 index 000000000..c6b1292ac --- /dev/null +++ b/builtin/providers/vsphere/resource_vsphere_virtual_machine.go @@ -0,0 +1,1061 @@ +package vsphere + +import ( + "fmt" + "log" + "net" + "strings" + "time" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" + "github.com/vmware/govmomi" + "github.com/vmware/govmomi/find" + "github.com/vmware/govmomi/object" + "github.com/vmware/govmomi/property" + "github.com/vmware/govmomi/vim25/mo" + "github.com/vmware/govmomi/vim25/types" + "golang.org/x/net/context" +) + +var DefaultDNSSuffixes = []string{ + "vsphere.local", +} + +var DefaultDNSServers = []string{ + "8.8.8.8", + "8.8.4.4", +} + +type networkInterface struct { + deviceName string + label string + ipAddress string + subnetMask string + adapterType string // TODO: Make "adapter_type" argument +} + +type hardDisk struct { + size int64 + iops int64 +} + +type virtualMachine struct { + name string + datacenter string + 
cluster string + resourcePool string + datastore string + vcpu int + memoryMb int64 + template string + networkInterfaces []networkInterface + hardDisks []hardDisk + gateway string + domain string + timeZone string + dnsSuffixes []string + dnsServers []string +} + +func resourceVSphereVirtualMachine() *schema.Resource { + return &schema.Resource{ + Create: resourceVSphereVirtualMachineCreate, + Read: resourceVSphereVirtualMachineRead, + Delete: resourceVSphereVirtualMachineDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "vcpu": &schema.Schema{ + Type: schema.TypeInt, + Required: true, + ForceNew: true, + }, + + "memory": &schema.Schema{ + Type: schema.TypeInt, + Required: true, + ForceNew: true, + }, + + "datacenter": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "cluster": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "resource_pool": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "gateway": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "domain": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Default: "vsphere.local", + }, + + "time_zone": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Default: "Etc/UTC", + }, + + "dns_suffixes": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + ForceNew: true, + }, + + "dns_servers": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + ForceNew: true, + }, + + "network_interface": &schema.Schema{ + Type: schema.TypeList, + Required: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "label": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "ip_address": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + + "subnet_mask": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + + "adapter_type": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + }, + }, + }, + + "disk": &schema.Schema{ + Type: schema.TypeList, + Required: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "template": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "datastore": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "size": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + }, + + "iops": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + }, + }, + }, + }, + + "boot_delay": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + }, + }, + } +} + +func resourceVSphereVirtualMachineCreate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*govmomi.Client) + + vm := virtualMachine{ + name: d.Get("name").(string), + vcpu: d.Get("vcpu").(int), + memoryMb: int64(d.Get("memory").(int)), + } + + if v, ok := d.GetOk("datacenter"); ok { + vm.datacenter = v.(string) + } + + if v, ok := d.GetOk("cluster"); ok { + vm.cluster = v.(string) + } + + if v, ok := d.GetOk("resource_pool"); ok { + vm.resourcePool = v.(string) + } + + if v, ok := 
d.GetOk("gateway"); ok { + vm.gateway = v.(string) + } + + if v, ok := d.GetOk("domain"); ok { + vm.domain = v.(string) + } + + if v, ok := d.GetOk("time_zone"); ok { + vm.timeZone = v.(string) + } + + if raw, ok := d.GetOk("dns_suffixes"); ok { + for _, v := range raw.([]interface{}) { + vm.dnsSuffixes = append(vm.dnsSuffixes, v.(string)) + } + } else { + vm.dnsSuffixes = DefaultDNSSuffixes + } + + if raw, ok := d.GetOk("dns_servers"); ok { + for _, v := range raw.([]interface{}) { + vm.dnsServers = append(vm.dnsServers, v.(string)) + } + } else { + vm.dnsServers = DefaultDNSServers + } + + if vL, ok := d.GetOk("network_interface"); ok { + networks := make([]networkInterface, len(vL.([]interface{}))) + for i, v := range vL.([]interface{}) { + network := v.(map[string]interface{}) + networks[i].label = network["label"].(string) + if v, ok := network["ip_address"].(string); ok && v != "" { + networks[i].ipAddress = v + } + if v, ok := network["subnet_mask"].(string); ok && v != "" { + networks[i].subnetMask = v + } + } + vm.networkInterfaces = networks + log.Printf("[DEBUG] network_interface init: %v", networks) + } + + if vL, ok := d.GetOk("disk"); ok { + disks := make([]hardDisk, len(vL.([]interface{}))) + for i, v := range vL.([]interface{}) { + disk := v.(map[string]interface{}) + if i == 0 { + if v, ok := disk["template"].(string); ok && v != "" { + vm.template = v + } else { + if v, ok := disk["size"].(int); ok && v != 0 { + disks[i].size = int64(v) + } else { + return fmt.Errorf("If template argument is not specified, size argument is required.") + } + } + if v, ok := disk["datastore"].(string); ok && v != "" { + vm.datastore = v + } + } else { + if v, ok := disk["size"].(int); ok && v != 0 { + disks[i].size = int64(v) + } else { + return fmt.Errorf("Size argument is required.") + } + } + if v, ok := disk["iops"].(int); ok && v != 0 { + disks[i].iops = int64(v) + } + } + vm.hardDisks = disks + log.Printf("[DEBUG] disk init: %v", disks) + } + + if vm.template != "" { + err := vm.deployVirtualMachine(client) + if err != nil { + return err + } + } else { + err := vm.createVirtualMachine(client) + if err != nil { + return err + } + } + + if _, ok := d.GetOk("network_interface.0.ip_address"); !ok { + if v, ok := d.GetOk("boot_delay"); ok { + stateConf := &resource.StateChangeConf{ + Pending: []string{"pending"}, + Target: "active", + Refresh: waitForNetworkingActive(client, vm.datacenter, vm.name), + Timeout: 600 * time.Second, + Delay: time.Duration(v.(int)) * time.Second, + MinTimeout: 2 * time.Second, + } + + _, err := stateConf.WaitForState() + if err != nil { + return err + } + } + } + d.SetId(vm.name) + log.Printf("[INFO] Created virtual machine: %s", d.Id()) + + return resourceVSphereVirtualMachineRead(d, meta) +} + +func resourceVSphereVirtualMachineRead(d *schema.ResourceData, meta interface{}) error { + client := meta.(*govmomi.Client) + dc, err := getDatacenter(client, d.Get("datacenter").(string)) + if err != nil { + return err + } + finder := find.NewFinder(client.Client, true) + finder = finder.SetDatacenter(dc) + + vm, err := finder.VirtualMachine(context.TODO(), d.Get("name").(string)) + if err != nil { + log.Printf("[ERROR] Virtual machine not found: %s", d.Get("name").(string)) + d.SetId("") + return nil + } + + var mvm mo.VirtualMachine + + collector := property.DefaultCollector(client.Client) + if err := collector.RetrieveOne(context.TODO(), vm.Reference(), []string{"guest", "summary", "datastore"}, &mvm); err != nil { + return err + } + + log.Printf("[DEBUG] %#v", dc) 
+ log.Printf("[DEBUG] %#v", mvm.Summary.Config) + log.Printf("[DEBUG] %#v", mvm.Guest.Net) + + networkInterfaces := make([]map[string]interface{}, 0) + for _, v := range mvm.Guest.Net { + if v.DeviceConfigId >= 0 { + log.Printf("[DEBUG] %#v", v.Network) + networkInterface := make(map[string]interface{}) + networkInterface["label"] = v.Network + if len(v.IpAddress) > 0 { + log.Printf("[DEBUG] %#v", v.IpAddress[0]) + networkInterface["ip_address"] = v.IpAddress[0] + + m := net.CIDRMask(v.IpConfig.IpAddress[0].PrefixLength, 32) + subnetMask := net.IPv4(m[0], m[1], m[2], m[3]) + networkInterface["subnet_mask"] = subnetMask.String() + log.Printf("[DEBUG] %#v", subnetMask.String()) + } + networkInterfaces = append(networkInterfaces, networkInterface) + } + } + err = d.Set("network_interface", networkInterfaces) + if err != nil { + return fmt.Errorf("Invalid network interfaces to set: %#v", networkInterfaces) + } + + var rootDatastore string + for _, v := range mvm.Datastore { + var md mo.Datastore + if err := collector.RetrieveOne(context.TODO(), v, []string{"name", "parent"}, &md); err != nil { + return err + } + if md.Parent.Type == "StoragePod" { + var msp mo.StoragePod + if err := collector.RetrieveOne(context.TODO(), *md.Parent, []string{"name"}, &msp); err != nil { + return err + } + rootDatastore = msp.Name + log.Printf("[DEBUG] %#v", msp.Name) + } else { + rootDatastore = md.Name + log.Printf("[DEBUG] %#v", md.Name) + } + break + } + + d.Set("datacenter", dc) + d.Set("memory", mvm.Summary.Config.MemorySizeMB) + d.Set("cpu", mvm.Summary.Config.NumCpu) + d.Set("datastore", rootDatastore) + + // Initialize the connection info + d.SetConnInfo(map[string]string{ + "type": "ssh", + "host": networkInterfaces[0]["ip_address"].(string), + }) + + return nil +} + +func resourceVSphereVirtualMachineDelete(d *schema.ResourceData, meta interface{}) error { + client := meta.(*govmomi.Client) + dc, err := getDatacenter(client, d.Get("datacenter").(string)) + if err != nil { + return err + } + finder := find.NewFinder(client.Client, true) + finder = finder.SetDatacenter(dc) + + vm, err := finder.VirtualMachine(context.TODO(), d.Get("name").(string)) + if err != nil { + return err + } + + log.Printf("[INFO] Deleting virtual machine: %s", d.Id()) + + task, err := vm.PowerOff(context.TODO()) + if err != nil { + return err + } + + err = task.Wait(context.TODO()) + if err != nil { + return err + } + + task, err = vm.Destroy(context.TODO()) + if err != nil { + return err + } + + err = task.Wait(context.TODO()) + if err != nil { + return err + } + + d.SetId("") + return nil +} + +func waitForNetworkingActive(client *govmomi.Client, datacenter, name string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + dc, err := getDatacenter(client, datacenter) + if err != nil { + log.Printf("[ERROR] %#v", err) + return nil, "", err + } + finder := find.NewFinder(client.Client, true) + finder = finder.SetDatacenter(dc) + + vm, err := finder.VirtualMachine(context.TODO(), name) + if err != nil { + log.Printf("[ERROR] %#v", err) + return nil, "", err + } + + var mvm mo.VirtualMachine + collector := property.DefaultCollector(client.Client) + if err := collector.RetrieveOne(context.TODO(), vm.Reference(), []string{"summary"}, &mvm); err != nil { + log.Printf("[ERROR] %#v", err) + return nil, "", err + } + + if mvm.Summary.Guest.IpAddress != "" { + log.Printf("[DEBUG] IP address with DHCP: %v", mvm.Summary.Guest.IpAddress) + return mvm.Summary, "active", err + } else { + log.Printf("[DEBUG] Waiting 
for IP address") + return nil, "pending", err + } + } +} + +// getDatacenter gets datacenter object +func getDatacenter(c *govmomi.Client, dc string) (*object.Datacenter, error) { + finder := find.NewFinder(c.Client, true) + if dc != "" { + d, err := finder.Datacenter(context.TODO(), dc) + return d, err + } else { + d, err := finder.DefaultDatacenter(context.TODO()) + return d, err + } +} + +// addHardDisk adds a new Hard Disk to the VirtualMachine. +func addHardDisk(vm *object.VirtualMachine, size, iops int64, diskType string) error { + devices, err := vm.Device(context.TODO()) + if err != nil { + return err + } + log.Printf("[DEBUG] vm devices: %#v\n", devices) + + controller, err := devices.FindDiskController("scsi") + if err != nil { + return err + } + log.Printf("[DEBUG] disk controller: %#v\n", controller) + + disk := devices.CreateDisk(controller, "") + existing := devices.SelectByBackingInfo(disk.Backing) + log.Printf("[DEBUG] disk: %#v\n", disk) + + if len(existing) == 0 { + disk.CapacityInKB = int64(size * 1024 * 1024) + if iops != 0 { + disk.StorageIOAllocation = &types.StorageIOAllocationInfo{ + Limit: iops, + } + } + backing := disk.Backing.(*types.VirtualDiskFlatVer2BackingInfo) + + if diskType == "eager_zeroed" { + // eager zeroed thick virtual disk + backing.ThinProvisioned = types.NewBool(false) + backing.EagerlyScrub = types.NewBool(true) + } else if diskType == "thin" { + // thin provisioned virtual disk + backing.ThinProvisioned = types.NewBool(true) + } + + log.Printf("[DEBUG] addHardDisk: %#v\n", disk) + log.Printf("[DEBUG] addHardDisk: %#v\n", disk.CapacityInKB) + + return vm.AddDevice(context.TODO(), disk) + } else { + log.Printf("[DEBUG] addHardDisk: Disk already present.\n") + + return nil + } +} + +// createNetworkDevice creates VirtualDeviceConfigSpec for Network Device. +func createNetworkDevice(f *find.Finder, label, adapterType string) (*types.VirtualDeviceConfigSpec, error) { + network, err := f.Network(context.TODO(), "*"+label) + if err != nil { + return nil, err + } + + backing, err := network.EthernetCardBackingInfo(context.TODO()) + if err != nil { + return nil, err + } + + if adapterType == "vmxnet3" { + return &types.VirtualDeviceConfigSpec{ + Operation: types.VirtualDeviceConfigSpecOperationAdd, + Device: &types.VirtualVmxnet3{ + types.VirtualVmxnet{ + types.VirtualEthernetCard{ + VirtualDevice: types.VirtualDevice{ + Key: -1, + Backing: backing, + }, + AddressType: string(types.VirtualEthernetCardMacTypeGenerated), + }, + }, + }, + }, nil + } else if adapterType == "e1000" { + return &types.VirtualDeviceConfigSpec{ + Operation: types.VirtualDeviceConfigSpecOperationAdd, + Device: &types.VirtualE1000{ + types.VirtualEthernetCard{ + VirtualDevice: types.VirtualDevice{ + Key: -1, + Backing: backing, + }, + AddressType: string(types.VirtualEthernetCardMacTypeGenerated), + }, + }, + }, nil + } else { + return nil, fmt.Errorf("Invalid network adapter type.") + } +} + +// createVMRelocateSpec creates VirtualMachineRelocateSpec to set a place for a new VirtualMachine. 
+func createVMRelocateSpec(rp *object.ResourcePool, ds *object.Datastore, vm *object.VirtualMachine) (types.VirtualMachineRelocateSpec, error) { + var key int + + devices, err := vm.Device(context.TODO()) + if err != nil { + return types.VirtualMachineRelocateSpec{}, err + } + for _, d := range devices { + if devices.Type(d) == "disk" { + key = d.GetVirtualDevice().Key + } + } + + rpr := rp.Reference() + dsr := ds.Reference() + return types.VirtualMachineRelocateSpec{ + Datastore: &dsr, + Pool: &rpr, + Disk: []types.VirtualMachineRelocateSpecDiskLocator{ + types.VirtualMachineRelocateSpecDiskLocator{ + Datastore: dsr, + DiskBackingInfo: &types.VirtualDiskFlatVer2BackingInfo{ + DiskMode: "persistent", + ThinProvisioned: types.NewBool(false), + EagerlyScrub: types.NewBool(true), + }, + DiskId: key, + }, + }, + }, nil +} + +// getDatastoreObject gets datastore object. +func getDatastoreObject(client *govmomi.Client, f *object.DatacenterFolders, name string) (types.ManagedObjectReference, error) { + s := object.NewSearchIndex(client.Client) + ref, err := s.FindChild(context.TODO(), f.DatastoreFolder, name) + if err != nil { + return types.ManagedObjectReference{}, err + } + if ref == nil { + return types.ManagedObjectReference{}, fmt.Errorf("Datastore '%s' not found.", name) + } + log.Printf("[DEBUG] getDatastoreObject: reference: %#v", ref) + return ref.Reference(), nil +} + +// createStoragePlacementSpecCreate creates StoragePlacementSpec for create action. +func createStoragePlacementSpecCreate(f *object.DatacenterFolders, rp *object.ResourcePool, storagePod object.StoragePod, configSpec types.VirtualMachineConfigSpec) types.StoragePlacementSpec { + vmfr := f.VmFolder.Reference() + rpr := rp.Reference() + spr := storagePod.Reference() + + sps := types.StoragePlacementSpec{ + Type: "create", + ConfigSpec: &configSpec, + PodSelectionSpec: types.StorageDrsPodSelectionSpec{ + StoragePod: &spr, + }, + Folder: &vmfr, + ResourcePool: &rpr, + } + log.Printf("[DEBUG] findDatastore: StoragePlacementSpec: %#v\n", sps) + return sps +} + +// createStoragePlacementSpecClone creates StoragePlacementSpec for clone action. 
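+// The resulting spec is passed to findDatastore so Storage DRS can
+// recommend a datastore from the given storage pod.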
+func createStoragePlacementSpecClone(c *govmomi.Client, f *object.DatacenterFolders, vm *object.VirtualMachine, rp *object.ResourcePool, storagePod object.StoragePod) types.StoragePlacementSpec {
+	vmr := vm.Reference()
+	vmfr := f.VmFolder.Reference()
+	rpr := rp.Reference()
+	spr := storagePod.Reference()
+
+	var o mo.VirtualMachine
+	err := vm.Properties(context.TODO(), vmr, []string{"datastore"}, &o)
+	if err != nil {
+		return types.StoragePlacementSpec{}
+	}
+	ds := object.NewDatastore(c.Client, o.Datastore[0])
+	log.Printf("[DEBUG] findDatastore: datastore: %#v\n", ds)
+
+	devices, err := vm.Device(context.TODO())
+	if err != nil {
+		return types.StoragePlacementSpec{}
+	}
+
+	var key int
+	for _, d := range devices.SelectByType((*types.VirtualDisk)(nil)) {
+		key = d.GetVirtualDevice().Key
+		log.Printf("[DEBUG] findDatastore: virtual devices: %#v\n", d.GetVirtualDevice())
+	}
+
+	sps := types.StoragePlacementSpec{
+		Type: "clone",
+		Vm:   &vmr,
+		PodSelectionSpec: types.StorageDrsPodSelectionSpec{
+			StoragePod: &spr,
+		},
+		CloneSpec: &types.VirtualMachineCloneSpec{
+			Location: types.VirtualMachineRelocateSpec{
+				Disk: []types.VirtualMachineRelocateSpecDiskLocator{
+					types.VirtualMachineRelocateSpecDiskLocator{
+						Datastore:       ds.Reference(),
+						DiskBackingInfo: &types.VirtualDiskFlatVer2BackingInfo{},
+						DiskId:          key,
+					},
+				},
+				Pool: &rpr,
+			},
+			PowerOn:  false,
+			Template: false,
+		},
+		CloneName: "dummy",
+		Folder:    &vmfr,
+	}
+	return sps
+}
+
+// findDatastore returns the Datastore recommended for the given placement spec.
+func findDatastore(c *govmomi.Client, sps types.StoragePlacementSpec) (*object.Datastore, error) {
+	var datastore *object.Datastore
+	log.Printf("[DEBUG] findDatastore: StoragePlacementSpec: %#v\n", sps)
+
+	srm := object.NewStorageResourceManager(c.Client)
+	rds, err := srm.RecommendDatastores(context.TODO(), sps)
+	if err != nil {
+		return nil, err
+	}
+	log.Printf("[DEBUG] findDatastore: recommendDatastores: %#v\n", rds)
+
+	// Guard against an empty recommendation list before indexing into it.
+	if len(rds.Recommendations) == 0 || len(rds.Recommendations[0].Action) == 0 {
+		return nil, fmt.Errorf("no datastore recommendations returned")
+	}
+
+	spa := rds.Recommendations[0].Action[0].(*types.StoragePlacementAction)
+	datastore = object.NewDatastore(c.Client, spa.Destination)
+	log.Printf("[DEBUG] findDatastore: datastore: %#v", datastore)
+
+	return datastore, nil
+}
+
+// createVirtualMachine creates a new VirtualMachine.
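+// It resolves a resource pool and datastore (consulting storage pod
+// recommendations when needed), creates the VM with an e1000 NIC per
+// configured interface plus a SCSI controller, and then attaches the
+// requested disks thin-provisioned.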
+func (vm *virtualMachine) createVirtualMachine(c *govmomi.Client) error {
+	dc, err := getDatacenter(c, vm.datacenter)
+	if err != nil {
+		return err
+	}
+	finder := find.NewFinder(c.Client, true)
+	finder = finder.SetDatacenter(dc)
+
+	var resourcePool *object.ResourcePool
+	if vm.resourcePool == "" {
+		if vm.cluster == "" {
+			resourcePool, err = finder.DefaultResourcePool(context.TODO())
+			if err != nil {
+				return err
+			}
+		} else {
+			resourcePool, err = finder.ResourcePool(context.TODO(), "*"+vm.cluster+"/Resources")
+			if err != nil {
+				return err
+			}
+		}
+	} else {
+		resourcePool, err = finder.ResourcePool(context.TODO(), vm.resourcePool)
+		if err != nil {
+			return err
+		}
+	}
+	log.Printf("[DEBUG] resource pool: %#v", resourcePool)
+
+	dcFolders, err := dc.Folders(context.TODO())
+	if err != nil {
+		return err
+	}
+
+	// network
+	networkDevices := []types.BaseVirtualDeviceConfigSpec{}
+	for _, network := range vm.networkInterfaces {
+		// network device
+		nd, err := createNetworkDevice(finder, network.label, "e1000")
+		if err != nil {
+			return err
+		}
+		networkDevices = append(networkDevices, nd)
+	}
+
+	// make config spec
+	configSpec := types.VirtualMachineConfigSpec{
+		GuestId:           "otherLinux64Guest",
+		Name:              vm.name,
+		NumCPUs:           vm.vcpu,
+		NumCoresPerSocket: 1,
+		MemoryMB:          vm.memoryMb,
+		DeviceChange:      networkDevices,
+	}
+	log.Printf("[DEBUG] virtual machine config spec: %v", configSpec)
+
+	var datastore *object.Datastore
+	if vm.datastore == "" {
+		datastore, err = finder.DefaultDatastore(context.TODO())
+		if err != nil {
+			return err
+		}
+	} else {
+		datastore, err = finder.Datastore(context.TODO(), vm.datastore)
+		if err != nil {
+			// TODO: datastore cluster support in govmomi finder function
+			d, err := getDatastoreObject(c, dcFolders, vm.datastore)
+			if err != nil {
+				return err
+			}
+
+			if d.Type == "StoragePod" {
+				sp := object.StoragePod{
+					object.NewFolder(c.Client, d),
+				}
+				sps := createStoragePlacementSpecCreate(dcFolders, resourcePool, sp, configSpec)
+				datastore, err = findDatastore(c, sps)
+				if err != nil {
+					return err
+				}
+			} else {
+				datastore = object.NewDatastore(c.Client, d)
+			}
+		}
+	}
+
+	log.Printf("[DEBUG] datastore: %#v", datastore)
+
+	var mds mo.Datastore
+	if err = datastore.Properties(context.TODO(), datastore.Reference(), []string{"name"}, &mds); err != nil {
+		return err
+	}
+	log.Printf("[DEBUG] datastore: %#v", mds.Name)
+	scsi, err := object.SCSIControllerTypes().CreateSCSIController("scsi")
+	if err != nil {
+		log.Printf("[ERROR] %s", err)
+		return err
+	}
+
+	configSpec.DeviceChange = append(configSpec.DeviceChange, &types.VirtualDeviceConfigSpec{
+		Operation: types.VirtualDeviceConfigSpecOperationAdd,
+		Device:    scsi,
+	})
+	configSpec.Files = &types.VirtualMachineFileInfo{VmPathName: fmt.Sprintf("[%s]", mds.Name)}
+
+	task, err := dcFolders.VmFolder.CreateVM(context.TODO(), configSpec, resourcePool, nil)
+	if err != nil {
+		log.Printf("[ERROR] %s", err)
+		return err
+	}
+
+	err = task.Wait(context.TODO())
+	if err != nil {
+		log.Printf("[ERROR] %s", err)
+		return err
+	}
+
+	newVM, err := finder.VirtualMachine(context.TODO(), vm.name)
+	if err != nil {
+		return err
+	}
+	log.Printf("[DEBUG] new vm: %v", newVM)
+
+	log.Printf("[DEBUG] add hard disk: %v", vm.hardDisks)
+	for _, hd := range vm.hardDisks {
+		log.Printf("[DEBUG] add hard disk: %v", hd.size)
+		log.Printf("[DEBUG] add hard disk: %v", hd.iops)
+		err = addHardDisk(newVM, hd.size, hd.iops, "thin")
+		if err != nil {
+			return err
+		}
+	}
+	return nil
+}
+
+// deployVirtualMachine deploys a new VirtualMachine.
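+// It clones the named template with a Linux customization spec (fixed IP
+// or DHCP per interface), powers the clone on, waits for an IP address,
+// and then attaches any additional disks eager-zeroed.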
+func (vm *virtualMachine) deployVirtualMachine(c *govmomi.Client) error { + dc, err := getDatacenter(c, vm.datacenter) + if err != nil { + return err + } + finder := find.NewFinder(c.Client, true) + finder = finder.SetDatacenter(dc) + + template, err := finder.VirtualMachine(context.TODO(), vm.template) + if err != nil { + return err + } + log.Printf("[DEBUG] template: %#v", template) + + var resourcePool *object.ResourcePool + if vm.resourcePool == "" { + if vm.cluster == "" { + resourcePool, err = finder.DefaultResourcePool(context.TODO()) + if err != nil { + return err + } + } else { + resourcePool, err = finder.ResourcePool(context.TODO(), "*"+vm.cluster+"/Resources") + if err != nil { + return err + } + } + } else { + resourcePool, err = finder.ResourcePool(context.TODO(), vm.resourcePool) + if err != nil { + return err + } + } + log.Printf("[DEBUG] resource pool: %#v", resourcePool) + + dcFolders, err := dc.Folders(context.TODO()) + if err != nil { + return err + } + + var datastore *object.Datastore + if vm.datastore == "" { + datastore, err = finder.DefaultDatastore(context.TODO()) + if err != nil { + return err + } + } else { + datastore, err = finder.Datastore(context.TODO(), vm.datastore) + if err != nil { + // TODO: datastore cluster support in govmomi finder function + d, err := getDatastoreObject(c, dcFolders, vm.datastore) + if err != nil { + return err + } + + if d.Type == "StoragePod" { + sp := object.StoragePod{ + object.NewFolder(c.Client, d), + } + sps := createStoragePlacementSpecClone(c, dcFolders, template, resourcePool, sp) + datastore, err = findDatastore(c, sps) + if err != nil { + return err + } + } else { + datastore = object.NewDatastore(c.Client, d) + } + } + } + log.Printf("[DEBUG] datastore: %#v", datastore) + + relocateSpec, err := createVMRelocateSpec(resourcePool, datastore, template) + if err != nil { + return err + } + log.Printf("[DEBUG] relocate spec: %v", relocateSpec) + + // network + networkDevices := []types.BaseVirtualDeviceConfigSpec{} + networkConfigs := []types.CustomizationAdapterMapping{} + for _, network := range vm.networkInterfaces { + // network device + nd, err := createNetworkDevice(finder, network.label, "vmxnet3") + if err != nil { + return err + } + networkDevices = append(networkDevices, nd) + + var ipSetting types.CustomizationIPSettings + if network.ipAddress == "" { + ipSetting = types.CustomizationIPSettings{ + Ip: &types.CustomizationDhcpIpGenerator{}, + } + } else { + log.Printf("[DEBUG] gateway: %v", vm.gateway) + log.Printf("[DEBUG] ip address: %v", network.ipAddress) + log.Printf("[DEBUG] subnet mask: %v", network.subnetMask) + ipSetting = types.CustomizationIPSettings{ + Gateway: []string{ + vm.gateway, + }, + Ip: &types.CustomizationFixedIp{ + IpAddress: network.ipAddress, + }, + SubnetMask: network.subnetMask, + } + } + + // network config + config := types.CustomizationAdapterMapping{ + Adapter: ipSetting, + } + networkConfigs = append(networkConfigs, config) + } + log.Printf("[DEBUG] network configs: %v", networkConfigs[0].Adapter) + + // make config spec + configSpec := types.VirtualMachineConfigSpec{ + NumCPUs: vm.vcpu, + NumCoresPerSocket: 1, + MemoryMB: vm.memoryMb, + DeviceChange: networkDevices, + } + log.Printf("[DEBUG] virtual machine config spec: %v", configSpec) + + // create CustomizationSpec + customSpec := types.CustomizationSpec{ + Identity: &types.CustomizationLinuxPrep{ + HostName: &types.CustomizationFixedName{ + Name: strings.Split(vm.name, ".")[0], + }, + Domain: vm.domain, + TimeZone: vm.timeZone, 
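+			// Keep the guest hardware clock on UTC.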
+ HwClockUTC: types.NewBool(true), + }, + GlobalIPSettings: types.CustomizationGlobalIPSettings{ + DnsSuffixList: vm.dnsSuffixes, + DnsServerList: vm.dnsServers, + }, + NicSettingMap: networkConfigs, + } + log.Printf("[DEBUG] custom spec: %v", customSpec) + + // make vm clone spec + cloneSpec := types.VirtualMachineCloneSpec{ + Location: relocateSpec, + Template: false, + Config: &configSpec, + Customization: &customSpec, + PowerOn: true, + } + log.Printf("[DEBUG] clone spec: %v", cloneSpec) + + task, err := template.Clone(context.TODO(), dcFolders.VmFolder, vm.name, cloneSpec) + if err != nil { + return err + } + + _, err = task.WaitForResult(context.TODO(), nil) + if err != nil { + return err + } + + newVM, err := finder.VirtualMachine(context.TODO(), vm.name) + if err != nil { + return err + } + log.Printf("[DEBUG] new vm: %v", newVM) + + ip, err := newVM.WaitForIP(context.TODO()) + if err != nil { + return err + } + log.Printf("[DEBUG] ip address: %v", ip) + + for i := 1; i < len(vm.hardDisks); i++ { + err = addHardDisk(newVM, vm.hardDisks[i].size, vm.hardDisks[i].iops, "eager_zeroed") + if err != nil { + return err + } + } + return nil +} diff --git a/builtin/providers/vsphere/resource_vsphere_virtual_machine_test.go b/builtin/providers/vsphere/resource_vsphere_virtual_machine_test.go new file mode 100644 index 000000000..75bc339e8 --- /dev/null +++ b/builtin/providers/vsphere/resource_vsphere_virtual_machine_test.go @@ -0,0 +1,240 @@ +package vsphere + +import ( + "fmt" + "os" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + "github.com/vmware/govmomi" + "github.com/vmware/govmomi/find" + "github.com/vmware/govmomi/object" + "golang.org/x/net/context" +) + +func TestAccVSphereVirtualMachine_basic(t *testing.T) { + var vm virtualMachine + datacenter := os.Getenv("VSPHERE_DATACENTER") + cluster := os.Getenv("VSPHERE_CLUSTER") + datastore := os.Getenv("VSPHERE_DATASTORE") + template := os.Getenv("VSPHERE_TEMPLATE") + gateway := os.Getenv("VSPHERE_NETWORK_GATEWAY") + label := os.Getenv("VSPHERE_NETWORK_LABEL") + ip_address := os.Getenv("VSPHERE_NETWORK_IP_ADDRESS") + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckVSphereVirtualMachineDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: fmt.Sprintf( + testAccCheckVSphereVirtualMachineConfig_basic, + datacenter, + cluster, + gateway, + label, + ip_address, + datastore, + template, + ), + Check: resource.ComposeTestCheckFunc( + testAccCheckVSphereVirtualMachineExists("vsphere_virtual_machine.foo", &vm), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.foo", "name", "terraform-test"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.foo", "datacenter", datacenter), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.foo", "vcpu", "2"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.foo", "memory", "4096"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.foo", "disk.#", "2"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.foo", "disk.0.datastore", datastore), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.foo", "disk.0.template", template), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.foo", "network_interface.#", "1"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.foo", "network_interface.0.label", label), + ), + }, + }, + }) +} + +func 
TestAccVSphereVirtualMachine_dhcp(t *testing.T) { + var vm virtualMachine + datacenter := os.Getenv("VSPHERE_DATACENTER") + cluster := os.Getenv("VSPHERE_CLUSTER") + datastore := os.Getenv("VSPHERE_DATASTORE") + template := os.Getenv("VSPHERE_TEMPLATE") + label := os.Getenv("VSPHERE_NETWORK_LABEL_DHCP") + password := os.Getenv("VSPHERE_VM_PASSWORD") + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckVSphereVirtualMachineDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: fmt.Sprintf( + testAccCheckVSphereVirtualMachineConfig_dhcp, + datacenter, + cluster, + label, + datastore, + template, + password, + ), + Check: resource.ComposeTestCheckFunc( + testAccCheckVSphereVirtualMachineExists("vsphere_virtual_machine.bar", &vm), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.bar", "name", "terraform-test"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.bar", "datacenter", datacenter), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.bar", "vcpu", "2"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.bar", "memory", "4096"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.bar", "disk.#", "1"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.bar", "disk.0.datastore", datastore), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.bar", "disk.0.template", template), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.bar", "network_interface.#", "1"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.bar", "network_interface.0.label", label), + ), + }, + }, + }) +} + +func testAccCheckVSphereVirtualMachineDestroy(s *terraform.State) error { + client := testAccProvider.Meta().(*govmomi.Client) + finder := find.NewFinder(client.Client, true) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "vsphere_virtual_machine" { + continue + } + + dc, err := finder.Datacenter(context.TODO(), rs.Primary.Attributes["datacenter"]) + if err != nil { + return fmt.Errorf("error %s", err) + } + + dcFolders, err := dc.Folders(context.TODO()) + if err != nil { + return fmt.Errorf("error %s", err) + } + + _, err = object.NewSearchIndex(client.Client).FindChild(context.TODO(), dcFolders.VmFolder, rs.Primary.Attributes["name"]) + if err == nil { + return fmt.Errorf("Record still exists") + } + } + + return nil +} + +func testAccCheckVSphereVirtualMachineExists(n string, vm *virtualMachine) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + client := testAccProvider.Meta().(*govmomi.Client) + finder := find.NewFinder(client.Client, true) + + dc, err := finder.Datacenter(context.TODO(), rs.Primary.Attributes["datacenter"]) + if err != nil { + return fmt.Errorf("error %s", err) + } + + dcFolders, err := dc.Folders(context.TODO()) + if err != nil { + return fmt.Errorf("error %s", err) + } + + _, err = object.NewSearchIndex(client.Client).FindChild(context.TODO(), dcFolders.VmFolder, rs.Primary.Attributes["name"]) + /* + vmRef, err := client.SearchIndex().FindChild(dcFolders.VmFolder, rs.Primary.Attributes["name"]) + if err != nil { + return fmt.Errorf("error %s", err) + } + + found := govmomi.NewVirtualMachine(client, vmRef.Reference()) + fmt.Printf("%v", found) + + if found.Name != rs.Primary.ID { + return 
fmt.Errorf("Instance not found") + } + *instance = *found + */ + + *vm = virtualMachine{ + name: rs.Primary.ID, + } + + return nil + } +} + +const testAccCheckVSphereVirtualMachineConfig_basic = ` +resource "vsphere_virtual_machine" "foo" { + name = "terraform-test" + datacenter = "%s" + cluster = "%s" + vcpu = 2 + memory = 4096 + gateway = "%s" + network_interface { + label = "%s" + ip_address = "%s" + subnet_mask = "255.255.255.0" + } + disk { + datastore = "%s" + template = "%s" + iops = 500 + } + disk { + size = 1 + iops = 500 + } +} +` + +const testAccCheckVSphereVirtualMachineConfig_dhcp = ` +resource "vsphere_virtual_machine" "bar" { + name = "terraform-test" + datacenter = "%s" + cluster = "%s" + vcpu = 2 + memory = 4096 + network_interface { + label = "%s" + } + disk { + datastore = "%s" + template = "%s" + } + + connection { + host = "${self.network_interface.0.ip_address}" + user = "root" + password = "%s" + } +} +` diff --git a/builtin/provisioners/chef/resource_provisioner.go b/builtin/provisioners/chef/resource_provisioner.go index d7dbd718d..7b94486d2 100644 --- a/builtin/provisioners/chef/resource_provisioner.go +++ b/builtin/provisioners/chef/resource_provisioner.go @@ -326,7 +326,6 @@ func (p *Provisioner) runChefClientFunc( cmd = fmt.Sprintf("%s -j %q -E %q", chefCmd, fb, p.Environment) } - if p.LogToFile { if err := os.MkdirAll(logfileDir, 0755); err != nil { return fmt.Errorf("Error creating logfile directory %s: %v", logfileDir, err) diff --git a/command/apply.go b/command/apply.go index f924c65ca..8001cfe07 100644 --- a/command/apply.go +++ b/command/apply.go @@ -39,6 +39,8 @@ func (c *ApplyCommand) Run(args []string) int { cmdFlags.BoolVar(&destroyForce, "force", false, "force") } cmdFlags.BoolVar(&refresh, "refresh", true, "refresh") + cmdFlags.IntVar( + &c.Meta.parallelism, "parallelism", DefaultParallelism, "parallelism") cmdFlags.StringVar(&c.Meta.statePath, "state", DefaultStateFilename, "path") cmdFlags.StringVar(&c.Meta.stateOutPath, "state-out", "", "path") cmdFlags.StringVar(&c.Meta.backupPath, "backup", "", "path") @@ -94,9 +96,10 @@ func (c *ApplyCommand) Run(args []string) int { // Build the context based on the arguments given ctx, planned, err := c.Context(contextOpts{ - Destroy: c.Destroy, - Path: configPath, - StatePath: c.Meta.statePath, + Destroy: c.Destroy, + Path: configPath, + StatePath: c.Meta.statePath, + Parallelism: c.Meta.parallelism, }) if err != nil { c.Ui.Error(err.Error()) @@ -278,6 +281,9 @@ Options: -no-color If specified, output won't contain any color. + -parallelism=n Limit the number of concurrent operations. + Defaults to 10. + -refresh=true Update state prior to checking for differences. This has no effect if a plan file is given to apply. @@ -320,6 +326,9 @@ Options: -no-color If specified, output won't contain any color. + -parallelism=n Limit the number of concurrent operations. + Defaults to 10. + -refresh=true Update state prior to checking for differences. This has no effect if a plan file is given to apply. 
diff --git a/command/apply_test.go b/command/apply_test.go index 052bd592c..c5379c4f3 100644 --- a/command/apply_test.go +++ b/command/apply_test.go @@ -58,6 +58,82 @@ func TestApply(t *testing.T) { } } +func TestApply_parallelism1(t *testing.T) { + statePath := testTempFile(t) + + ui := new(cli.MockUi) + p := testProvider() + pr := new(terraform.MockResourceProvisioner) + + pr.ApplyFn = func(*terraform.InstanceState, *terraform.ResourceConfig) error { + time.Sleep(time.Second) + return nil + } + + args := []string{ + "-state", statePath, + "-parallelism=1", + testFixturePath("parallelism"), + } + + c := &ApplyCommand{ + Meta: Meta{ + ContextOpts: testCtxConfigWithShell(p, pr), + Ui: ui, + }, + } + + start := time.Now() + if code := c.Run(args); code != 0 { + t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String()) + } + elapsed := time.Since(start).Seconds() + + // This test should take exactly two seconds, plus some minor amount of execution time. + if elapsed < 2 || elapsed > 2.2 { + t.Fatalf("bad: %f\n\n%s", elapsed, ui.ErrorWriter.String()) + } + +} + +func TestApply_parallelism2(t *testing.T) { + statePath := testTempFile(t) + + ui := new(cli.MockUi) + p := testProvider() + pr := new(terraform.MockResourceProvisioner) + + pr.ApplyFn = func(*terraform.InstanceState, *terraform.ResourceConfig) error { + time.Sleep(time.Second) + return nil + } + + args := []string{ + "-state", statePath, + "-parallelism=2", + testFixturePath("parallelism"), + } + + c := &ApplyCommand{ + Meta: Meta{ + ContextOpts: testCtxConfigWithShell(p, pr), + Ui: ui, + }, + } + + start := time.Now() + if code := c.Run(args); code != 0 { + t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String()) + } + elapsed := time.Since(start).Seconds() + + // This test should take exactly one second, plus some minor amount of execution time. + if elapsed < 1 || elapsed > 1.2 { + t.Fatalf("bad: %f\n\n%s", elapsed, ui.ErrorWriter.String()) + } + +} + func TestApply_configInvalid(t *testing.T) { p := testProvider() ui := new(cli.MockUi) diff --git a/command/command.go b/command/command.go index c9a87230b..80e82e78c 100644 --- a/command/command.go +++ b/command/command.go @@ -26,6 +26,10 @@ const DefaultBackupExtension = ".backup" // by default. const DefaultDataDirectory = ".terraform" +// DefaultParallelism is the limit Terraform places on total parallel +// operations as it walks the dependency graph. 
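+// It can be overridden per run via the -parallelism flag on the plan,
+// apply and refresh commands.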
+const DefaultParallelism = 10 + func validateContext(ctx *terraform.Context, ui cli.Ui) bool { if ws, es := ctx.Validate(); len(ws) > 0 || len(es) > 0 { ui.Output( diff --git a/command/command_test.go b/command/command_test.go index 2544cf531..2b9f93dd1 100644 --- a/command/command_test.go +++ b/command/command_test.go @@ -52,6 +52,21 @@ func testCtxConfig(p terraform.ResourceProvider) *terraform.ContextOpts { } } +func testCtxConfigWithShell(p terraform.ResourceProvider, pr terraform.ResourceProvisioner) *terraform.ContextOpts { + return &terraform.ContextOpts{ + Providers: map[string]terraform.ResourceProviderFactory{ + "test": func() (terraform.ResourceProvider, error) { + return p, nil + }, + }, + Provisioners: map[string]terraform.ResourceProvisionerFactory{ + "shell": func() (terraform.ResourceProvisioner, error) { + return pr, nil + }, + }, + } +} + func testModule(t *testing.T, name string) *module.Tree { mod, err := module.NewTreeModule("", filepath.Join(fixtureDir, name)) if err != nil { diff --git a/command/flag_kv_test.go b/command/flag_kv_test.go index b17266cc6..a134a1692 100644 --- a/command/flag_kv_test.go +++ b/command/flag_kv_test.go @@ -45,7 +45,7 @@ func TestFlagKV(t *testing.T) { for _, tc := range cases { f := new(FlagKV) err := f.Set(tc.Input) - if (err != nil) != tc.Error { + if err != nil != tc.Error { t.Fatalf("bad error. Input: %#v", tc.Input) } @@ -95,7 +95,7 @@ foo = "bar" f := new(FlagKVFile) err := f.Set(path) - if (err != nil) != tc.Error { + if err != nil != tc.Error { t.Fatalf("bad error. Input: %#v", tc.Input) } diff --git a/command/meta.go b/command/meta.go index 4c1c09afe..af4a52302 100644 --- a/command/meta.go +++ b/command/meta.go @@ -59,9 +59,13 @@ type Meta struct { // // backupPath is used to backup the state file before writing a modified // version. It defaults to stateOutPath + DefaultBackupExtension + // + // parallelism is used to control the number of concurrent operations + // allowed when walking the graph statePath string stateOutPath string backupPath string + parallelism int } // initStatePaths is used to initialize the default values for @@ -151,6 +155,7 @@ func (m *Meta) Context(copts contextOpts) (*terraform.Context, bool, error) { } opts.Module = mod + opts.Parallelism = copts.Parallelism opts.State = state.State() ctx := terraform.NewContext(opts) return ctx, false, nil @@ -430,4 +435,7 @@ type contextOpts struct { // Set to true when running a destroy plan/apply. 
Destroy bool + + // Number of concurrent operations allowed + Parallelism int } diff --git a/command/plan.go b/command/plan.go index 15c2b505f..cd1aeaec6 100644 --- a/command/plan.go +++ b/command/plan.go @@ -27,6 +27,8 @@ func (c *PlanCommand) Run(args []string) int { cmdFlags.BoolVar(&refresh, "refresh", true, "refresh") c.addModuleDepthFlag(cmdFlags, &moduleDepth) cmdFlags.StringVar(&outPath, "out", "", "path") + cmdFlags.IntVar( + &c.Meta.parallelism, "parallelism", DefaultParallelism, "parallelism") cmdFlags.StringVar(&c.Meta.statePath, "state", DefaultStateFilename, "path") cmdFlags.StringVar(&c.Meta.backupPath, "backup", "", "path") cmdFlags.BoolVar(&detailed, "detailed-exitcode", false, "detailed-exitcode") @@ -57,9 +59,10 @@ func (c *PlanCommand) Run(args []string) int { c.Meta.extraHooks = []terraform.Hook{countHook} ctx, _, err := c.Context(contextOpts{ - Destroy: destroy, - Path: path, - StatePath: c.Meta.statePath, + Destroy: destroy, + Path: path, + StatePath: c.Meta.statePath, + Parallelism: c.Meta.parallelism, }) if err != nil { c.Ui.Error(err.Error()) @@ -183,6 +186,8 @@ Options: -out=path Write a plan file to the given path. This can be used as input to the "apply" command. + -parallelism=n Limit the number of concurrent operations. Defaults to 10. + -refresh=true Update state prior to checking for differences. -state=statefile Path to a Terraform state file to use to look diff --git a/command/refresh.go b/command/refresh.go index ee3cd7007..99190bf87 100644 --- a/command/refresh.go +++ b/command/refresh.go @@ -18,6 +18,7 @@ func (c *RefreshCommand) Run(args []string) int { cmdFlags := c.Meta.flagSet("refresh") cmdFlags.StringVar(&c.Meta.statePath, "state", DefaultStateFilename, "path") + cmdFlags.IntVar(&c.Meta.parallelism, "parallelism", 0, "parallelism") cmdFlags.StringVar(&c.Meta.stateOutPath, "state-out", "", "path") cmdFlags.StringVar(&c.Meta.backupPath, "backup", "", "path") cmdFlags.Usage = func() { c.Ui.Error(c.Help()) } @@ -78,8 +79,9 @@ func (c *RefreshCommand) Run(args []string) int { // Build the context based on the arguments given ctx, _, err := c.Context(contextOpts{ - Path: configPath, - StatePath: c.Meta.statePath, + Path: configPath, + StatePath: c.Meta.statePath, + Parallelism: c.Meta.parallelism, }) if err != nil { c.Ui.Error(err.Error()) diff --git a/command/test-fixtures/parallelism/main.tf b/command/test-fixtures/parallelism/main.tf new file mode 100644 index 000000000..7708209c1 --- /dev/null +++ b/command/test-fixtures/parallelism/main.tf @@ -0,0 +1,13 @@ +resource "test_instance" "foo1" { + ami = "bar" + + // shell has been configured to sleep for one second + provisioner "shell" {} +} + +resource "test_instance" "foo2" { + ami = "bar" + + // shell has been configured to sleep for one second + provisioner "shell" {} +} diff --git a/communicator/winrm/provisioner.go b/communicator/winrm/provisioner.go index 59c0ba7dd..d1562998c 100644 --- a/communicator/winrm/provisioner.go +++ b/communicator/winrm/provisioner.go @@ -99,7 +99,7 @@ func safeDuration(dur string, defaultDur time.Duration) time.Duration { func formatDuration(duration time.Duration) string { h := int(duration.Hours()) - m := int(duration.Minutes()) - (h * 60) + m := int(duration.Minutes()) - h*60 s := int(duration.Seconds()) - (h*3600 + m*60) res := "PT" diff --git a/config/append_test.go b/config/append_test.go index adeb7835b..8d6258ecd 100644 --- a/config/append_test.go +++ b/config/append_test.go @@ -91,7 +91,7 @@ func TestAppend(t *testing.T) { for i, tc := range cases { actual, 
err := Append(tc.c1, tc.c2) - if (err != nil) != tc.err { + if err != nil != tc.err { t.Fatalf("%d: error fail", i) } diff --git a/config/config.go b/config/config.go index 811b77ec7..d31777f6e 100644 --- a/config/config.go +++ b/config/config.go @@ -8,10 +8,10 @@ import ( "strconv" "strings" + "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform/config/lang" "github.com/hashicorp/terraform/config/lang/ast" "github.com/hashicorp/terraform/flatmap" - "github.com/hashicorp/terraform/helper/multierror" "github.com/mitchellh/mapstructure" "github.com/mitchellh/reflectwalk" ) @@ -84,8 +84,9 @@ type Resource struct { // ResourceLifecycle is used to store the lifecycle tuning parameters // to allow customized behavior type ResourceLifecycle struct { - CreateBeforeDestroy bool `mapstructure:"create_before_destroy"` - PreventDestroy bool `mapstructure:"prevent_destroy"` + CreateBeforeDestroy bool `mapstructure:"create_before_destroy"` + PreventDestroy bool `mapstructure:"prevent_destroy"` + IgnoreChanges []string `mapstructure:"ignore_changes"` } // Provisioner is a configured provisioner step on a resource. diff --git a/config/interpolate_funcs.go b/config/interpolate_funcs.go index bbe2b8434..5322e46c4 100644 --- a/config/interpolate_funcs.go +++ b/config/interpolate_funcs.go @@ -2,6 +2,7 @@ package config import ( "bytes" + "encoding/base64" "errors" "fmt" "io/ioutil" @@ -19,16 +20,35 @@ var Funcs map[string]ast.Function func init() { Funcs = map[string]ast.Function{ - "concat": interpolationFuncConcat(), - "element": interpolationFuncElement(), - "file": interpolationFuncFile(), - "format": interpolationFuncFormat(), - "formatlist": interpolationFuncFormatList(), - "index": interpolationFuncIndex(), - "join": interpolationFuncJoin(), - "length": interpolationFuncLength(), - "replace": interpolationFuncReplace(), - "split": interpolationFuncSplit(), + "compact": interpolationFuncCompact(), + "concat": interpolationFuncConcat(), + "element": interpolationFuncElement(), + "file": interpolationFuncFile(), + "format": interpolationFuncFormat(), + "formatlist": interpolationFuncFormatList(), + "index": interpolationFuncIndex(), + "join": interpolationFuncJoin(), + "length": interpolationFuncLength(), + "replace": interpolationFuncReplace(), + "split": interpolationFuncSplit(), + "base64encode": interpolationFuncBase64Encode(), + "base64decode": interpolationFuncBase64Decode(), + } +} + +// interpolationFuncCompact strips a list of multi-variable values +// (e.g. as returned by "split") of any empty strings. +func interpolationFuncCompact() ast.Function { + return ast.Function{ + ArgTypes: []ast.Type{ast.TypeString}, + ReturnType: ast.TypeString, + Variadic: false, + Callback: func(args []interface{}) (interface{}, error) { + if !IsStringList(args[0].(string)) { + return args[0].(string), nil + } + return StringList(args[0].(string)).Compact().String(), nil + }, } } @@ -392,3 +412,33 @@ func interpolationFuncValues(vs map[string]ast.Variable) ast.Function { }, } } + +// interpolationFuncBase64Encode implements the "base64encode" function that +// allows Base64 encoding. +func interpolationFuncBase64Encode() ast.Function { + return ast.Function{ + ArgTypes: []ast.Type{ast.TypeString}, + ReturnType: ast.TypeString, + Callback: func(args []interface{}) (interface{}, error) { + s := args[0].(string) + return base64.StdEncoding.EncodeToString([]byte(s)), nil + }, + } +} + +// interpolationFuncBase64Decode implements the "base64decode" function that +// allows Base64 decoding. 
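+// Malformed input yields an error rather than a partially decoded string.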
+func interpolationFuncBase64Decode() ast.Function { + return ast.Function{ + ArgTypes: []ast.Type{ast.TypeString}, + ReturnType: ast.TypeString, + Callback: func(args []interface{}) (interface{}, error) { + s := args[0].(string) + sDec, err := base64.StdEncoding.DecodeString(s) + if err != nil { + return "", fmt.Errorf("failed to decode base64 data '%s'", s) + } + return string(sDec), nil + }, + } +} diff --git a/config/interpolate_funcs_test.go b/config/interpolate_funcs_test.go index 05f84c201..cafdf0564 100644 --- a/config/interpolate_funcs_test.go +++ b/config/interpolate_funcs_test.go @@ -11,6 +11,33 @@ import ( "github.com/hashicorp/terraform/config/lang/ast" ) +func TestInterpolateFuncCompact(t *testing.T) { + testFunction(t, testFunctionConfig{ + Cases: []testFunctionCase{ + // empty string within array + { + `${compact(split(",", "a,,b"))}`, + NewStringList([]string{"a", "b"}).String(), + false, + }, + + // empty string at the end of array + { + `${compact(split(",", "a,b,"))}`, + NewStringList([]string{"a", "b"}).String(), + false, + }, + + // single empty string + { + `${compact(split(",", ""))}`, + NewStringList([]string{}).String(), + false, + }, + }, + }) +} + func TestInterpolateFuncDeprecatedConcat(t *testing.T) { testFunction(t, testFunctionConfig{ Cases: []testFunctionCase{ @@ -584,6 +611,39 @@ func TestInterpolateFuncElement(t *testing.T) { }) } +func TestInterpolateFuncBase64Encode(t *testing.T) { + testFunction(t, testFunctionConfig{ + Cases: []testFunctionCase{ + // Regular base64 encoding + { + `${base64encode("abc123!?$*&()'-=@~")}`, + "YWJjMTIzIT8kKiYoKSctPUB+", + false, + }, + }, + }) +} + +func TestInterpolateFuncBase64Decode(t *testing.T) { + testFunction(t, testFunctionConfig{ + Cases: []testFunctionCase{ + // Regular base64 decoding + { + `${base64decode("YWJjMTIzIT8kKiYoKSctPUB+")}`, + "abc123!?$*&()'-=@~", + false, + }, + + // Invalid base64 data decoding + { + `${base64decode("this-is-an-invalid-base64-data")}`, + nil, + true, + }, + }, + }) +} + type testFunctionConfig struct { Cases []testFunctionCase Vars map[string]ast.Variable @@ -603,7 +663,7 @@ func testFunction(t *testing.T, config testFunctionConfig) { } out, _, err := lang.Eval(ast, langEvalConfig(config.Vars)) - if (err != nil) != tc.Error { + if err != nil != tc.Error { t.Fatalf("Case #%d:\ninput: %#v\nerr: %s", i, tc.Input, err) } diff --git a/config/interpolate_test.go b/config/interpolate_test.go index 69a6ca229..3328571cc 100644 --- a/config/interpolate_test.go +++ b/config/interpolate_test.go @@ -66,7 +66,7 @@ func TestNewInterpolatedVariable(t *testing.T) { for i, tc := range cases { actual, err := NewInterpolatedVariable(tc.Input) - if (err != nil) != tc.Error { + if err != nil != tc.Error { t.Fatalf("%d. 
Error: %s", i, err) } if !reflect.DeepEqual(actual, tc.Result) { diff --git a/config/lang/check_identifier_test.go b/config/lang/check_identifier_test.go index 1ed52580e..fe76be1d4 100644 --- a/config/lang/check_identifier_test.go +++ b/config/lang/check_identifier_test.go @@ -134,7 +134,7 @@ func TestIdentifierCheck(t *testing.T) { visitor := &IdentifierCheck{Scope: tc.Scope} err = visitor.Visit(node) - if (err != nil) != tc.Error { + if err != nil != tc.Error { t.Fatalf("Error: %s\n\nInput: %s", err, tc.Input) } } diff --git a/config/lang/check_types_test.go b/config/lang/check_types_test.go index eb108044e..6087f98d5 100644 --- a/config/lang/check_types_test.go +++ b/config/lang/check_types_test.go @@ -169,7 +169,7 @@ func TestTypeCheck(t *testing.T) { visitor := &TypeCheck{Scope: tc.Scope} err = visitor.Visit(node) - if (err != nil) != tc.Error { + if err != nil != tc.Error { t.Fatalf("Error: %s\n\nInput: %s", err, tc.Input) } } @@ -247,7 +247,7 @@ func TestTypeCheck_implicit(t *testing.T) { // Do the first pass... visitor := &TypeCheck{Scope: tc.Scope, Implicit: implicitMap} err = visitor.Visit(node) - if (err != nil) != tc.Error { + if err != nil != tc.Error { t.Fatalf("Error: %s\n\nInput: %s", err, tc.Input) } if err != nil { diff --git a/config/lang/eval_test.go b/config/lang/eval_test.go index 44f25d6fd..122f44d1f 100644 --- a/config/lang/eval_test.go +++ b/config/lang/eval_test.go @@ -260,7 +260,7 @@ func TestEval(t *testing.T) { } out, outType, err := Eval(node, &EvalConfig{GlobalScope: tc.Scope}) - if (err != nil) != tc.Error { + if err != nil != tc.Error { t.Fatalf("Error: %s\n\nInput: %s", err, tc.Input) } if outType != tc.ResultType { diff --git a/config/lang/parse_test.go b/config/lang/parse_test.go index 8d705dccb..dc75424bc 100644 --- a/config/lang/parse_test.go +++ b/config/lang/parse_test.go @@ -353,7 +353,7 @@ func TestParse(t *testing.T) { for _, tc := range cases { actual, err := Parse(tc.Input) - if (err != nil) != tc.Error { + if err != nil != tc.Error { t.Fatalf("Error: %s\n\nInput: %s", err, tc.Input) } if !reflect.DeepEqual(actual, tc.Result) { diff --git a/config/loader.go b/config/loader.go index 4f5d8e765..5711ce8ef 100644 --- a/config/loader.go +++ b/config/loader.go @@ -210,5 +210,5 @@ func dirFiles(dir string) ([]string, []string, error) { func isIgnoredFile(name string) bool { return strings.HasPrefix(name, ".") || // Unix-like hidden files strings.HasSuffix(name, "~") || // vim - (strings.HasPrefix(name, "#") && strings.HasSuffix(name, "#")) // emacs + strings.HasPrefix(name, "#") && strings.HasSuffix(name, "#") // emacs } diff --git a/config/loader_test.go b/config/loader_test.go index d239bd0b9..eaf4f10aa 100644 --- a/config/loader_test.go +++ b/config/loader_test.go @@ -440,6 +440,54 @@ func TestLoadFile_createBeforeDestroy(t *testing.T) { } } +func TestLoadFile_ignoreChanges(t *testing.T) { + c, err := LoadFile(filepath.Join(fixtureDir, "ignore-changes.tf")) + if err != nil { + t.Fatalf("err: %s", err) + } + + if c == nil { + t.Fatal("config should not be nil") + } + + actual := resourcesStr(c.Resources) + print(actual) + if actual != strings.TrimSpace(ignoreChangesResourcesStr) { + t.Fatalf("bad:\n%s", actual) + } + + // Check for the flag value + r := c.Resources[0] + if r.Name != "web" && r.Type != "aws_instance" { + t.Fatalf("Bad: %#v", r) + } + + // Should populate ignore changes + if len(r.Lifecycle.IgnoreChanges) == 0 { + t.Fatalf("Bad: %#v", r) + } + + r = c.Resources[1] + if r.Name != "bar" && r.Type != "aws_instance" { + t.Fatalf("Bad: 
%#v", r) + } + + // Should not populate ignore changes + if len(r.Lifecycle.IgnoreChanges) > 0 { + t.Fatalf("Bad: %#v", r) + } + + r = c.Resources[2] + if r.Name != "baz" && r.Type != "aws_instance" { + t.Fatalf("Bad: %#v", r) + } + + // Should not populate ignore changes + if len(r.Lifecycle.IgnoreChanges) > 0 { + t.Fatalf("Bad: %#v", r) + } +} + func TestLoad_preventDestroyString(t *testing.T) { c, err := LoadFile(filepath.Join(fixtureDir, "prevent-destroy-string.tf")) if err != nil { @@ -676,3 +724,12 @@ aws_instance[bar] (x1) aws_instance[web] (x1) ami ` + +const ignoreChangesResourcesStr = ` +aws_instance[bar] (x1) + ami +aws_instance[baz] (x1) + ami +aws_instance[web] (x1) + ami +` diff --git a/config/merge_test.go b/config/merge_test.go index 40144f0c7..6fe55a2d5 100644 --- a/config/merge_test.go +++ b/config/merge_test.go @@ -157,7 +157,7 @@ func TestMerge(t *testing.T) { for i, tc := range cases { actual, err := Merge(tc.c1, tc.c2) - if (err != nil) != tc.err { + if err != nil != tc.err { t.Fatalf("%d: error fail", i) } diff --git a/config/module/detect_file_test.go b/config/module/detect_file_test.go index 4c75ce83d..3e9db8bba 100644 --- a/config/module/detect_file_test.go +++ b/config/module/detect_file_test.go @@ -74,7 +74,7 @@ func TestFileDetector_noPwd(t *testing.T) { f := new(FileDetector) for i, tc := range noPwdFileTests { out, ok, err := f.Detect(tc.in, tc.pwd) - if (err != nil) != tc.err { + if err != nil != tc.err { t.Fatalf("%d: err: %s", i, err) } if !ok { diff --git a/config/module/detect_test.go b/config/module/detect_test.go index e1e3b4372..d2ee8ea1a 100644 --- a/config/module/detect_test.go +++ b/config/module/detect_test.go @@ -41,7 +41,7 @@ func TestDetect(t *testing.T) { for i, tc := range cases { output, err := Detect(tc.Input, tc.Pwd) - if (err != nil) != tc.Err { + if err != nil != tc.Err { t.Fatalf("%d: bad err: %s", i, err) } if output != tc.Output { diff --git a/config/string_list.go b/config/string_list.go index 70d43d1e4..e3caea70b 100644 --- a/config/string_list.go +++ b/config/string_list.go @@ -24,6 +24,20 @@ type StringList string // ["", ""] => SLDSLDSLD const stringListDelim = `B780FFEC-B661-4EB8-9236-A01737AD98B6` +// Takes a Stringlist and returns one without empty strings in it +func (sl StringList) Compact() StringList { + parts := sl.Slice() + + newlist := []string{} + // drop the empty strings + for i := range parts { + if parts[i] != "" { + newlist = append(newlist, parts[i]) + } + } + return NewStringList(newlist) +} + // Build a StringList from a slice func NewStringList(parts []string) StringList { // We have to special case the empty list representation @@ -55,11 +69,10 @@ func (sl StringList) Length() int { func (sl StringList) Slice() []string { parts := strings.Split(string(sl), stringListDelim) - switch len(parts) { - case 0, 1: + // split on an empty StringList will have a length of 2, since there is + // always at least one deliminator + if len(parts) <= 2 { return []string{} - case 2: - return []string{""} } // strip empty elements generated by leading and trailing delimiters diff --git a/config/string_list_test.go b/config/string_list_test.go index 64049eb50..3fe57dfe2 100644 --- a/config/string_list_test.go +++ b/config/string_list_test.go @@ -27,3 +27,26 @@ func TestStringList_element(t *testing.T) { list, expected, actual) } } + +func TestStringList_empty_slice(t *testing.T) { + expected := []string{} + l := NewStringList(expected) + actual := l.Slice() + + if !reflect.DeepEqual(expected, actual) { + t.Fatalf("Expected %q, 
got %q", expected, actual) + } +} + +func TestStringList_empty_slice_length(t *testing.T) { + list := []string{} + l := NewStringList([]string{}) + actual := l.Length() + + expected := 0 + + if actual != expected { + t.Fatalf("Expected length of %q to be %d, got %d", + list, expected, actual) + } +} diff --git a/config/test-fixtures/ignore-changes.tf b/config/test-fixtures/ignore-changes.tf new file mode 100644 index 000000000..765a05798 --- /dev/null +++ b/config/test-fixtures/ignore-changes.tf @@ -0,0 +1,17 @@ +resource "aws_instance" "web" { + ami = "foo" + lifecycle { + ignore_changes = ["ami"] + } +} + +resource "aws_instance" "bar" { + ami = "foo" + lifecycle { + ignore_changes = [] + } +} + +resource "aws_instance" "baz" { + ami = "foo" +} diff --git a/config_unix.go b/config_unix.go index c51ea5ec4..69d76278a 100644 --- a/config_unix.go +++ b/config_unix.go @@ -33,7 +33,7 @@ func configDir() (string, error) { func homeDir() (string, error) { // First prefer the HOME environmental variable if home := os.Getenv("HOME"); home != "" { - log.Printf("Detected home directory from env var: %s", home) + log.Printf("[DEBUG] Detected home directory from env var: %s", home) return home, nil } diff --git a/examples/openstack-with-networking/README.md b/examples/openstack-with-networking/README.md new file mode 100644 index 000000000..2f9d381ca --- /dev/null +++ b/examples/openstack-with-networking/README.md @@ -0,0 +1,63 @@ +# Basic OpenStack architecture with networking + +This provides a template for running a simple architecture on an OpenStack +cloud. + +To simplify the example, this intentionally ignores deploying and +getting your application onto the servers. However, you could do so either via +[provisioners](https://www.terraform.io/docs/provisioners/) and a configuration +management tool, or by pre-baking configured images with +[Packer](http://www.packer.io). + +After you run `terraform apply` on this configuration, it will output the +floating IP address assigned to the instance. After your instance started, +this should respond with the default nginx web page. + +First set the required environment variables for the OpenStack provider by +sourcing the [credentials file](http://docs.openstack.org/cli-reference/content/cli_openrc.html). 
+ +``` +source openrc +``` + +Afterwards run with a command like this: + +``` +terraform apply \ + -var 'external_gateway=c1901f39-f76e-498a-9547-c29ba45f64df' \ + -var 'pool=public' +``` + +To get a list of usable floating IP pools run this command: + +``` +$ nova floating-ip-pool-list ++--------+ +| name | ++--------+ +| public | ++--------+ +``` + +To get the UUID of the external gateway run this command: + +``` +$ neutron net-show FLOATING_IP_POOL ++---------------------------+--------------------------------------+ +| Field | Value | ++---------------------------+--------------------------------------+ +| admin_state_up | True | +| id | c1901f39-f76e-498a-9547-c29ba45f64df | +| mtu | 0 | +| name | public | +| port_security_enabled | True | +| provider:network_type | vxlan | +| provider:physical_network | | +| provider:segmentation_id | 1092 | +| router:external | True | +| shared | False | +| status | ACTIVE | +| subnets | 42b672ae-8d51-4a18-a028-ddae7859ec4c | +| tenant_id | 1bde0a49d2ff44ffb44e6339a8cefe3a | ++---------------------------+--------------------------------------+ +``` diff --git a/examples/openstack-with-networking/main.tf b/examples/openstack-with-networking/main.tf new file mode 100644 index 000000000..d57925263 --- /dev/null +++ b/examples/openstack-with-networking/main.tf @@ -0,0 +1,79 @@ +resource "openstack_compute_keypair_v2" "terraform" { + name = "terraform" + public_key = "${file("${var.ssh_key_file}.pub")}" +} + +resource "openstack_networking_network_v2" "terraform" { + name = "terraform" + admin_state_up = "true" +} + +resource "openstack_networking_subnet_v2" "terraform" { + name = "terraform" + network_id = "${openstack_networking_network_v2.terraform.id}" + cidr = "10.0.0.0/24" + ip_version = 4 + dns_nameservers = ["8.8.8.8","8.8.4.4"] +} + +resource "openstack_networking_router_v2" "terraform" { + name = "terraform" + admin_state_up = "true" + external_gateway = "${var.external_gateway}" +} + +resource "openstack_networking_router_interface_v2" "terraform" { + router_id = "${openstack_networking_router_v2.terraform.id}" + subnet_id = "${openstack_networking_subnet_v2.terraform.id}" +} + +resource "openstack_compute_secgroup_v2" "terraform" { + name = "terraform" + description = "Security group for the Terraform example instances" + rule { + from_port = 22 + to_port = 22 + ip_protocol = "tcp" + cidr = "0.0.0.0/0" + } + rule { + from_port = 80 + to_port = 80 + ip_protocol = "tcp" + cidr = "0.0.0.0/0" + } + rule { + from_port = -1 + to_port = -1 + ip_protocol = "icmp" + cidr = "0.0.0.0/0" + } +} + +resource "openstack_compute_floatingip_v2" "terraform" { + pool = "${var.pool}" + depends_on = ["openstack_networking_router_interface_v2.terraform"] +} + +resource "openstack_compute_instance_v2" "terraform" { + name = "terraform" + image_name = "${var.image}" + flavor_name = "${var.flavor}" + key_pair = "${openstack_compute_keypair_v2.terraform.name}" + security_groups = [ "${openstack_compute_secgroup_v2.terraform.name}" ] + floating_ip = "${openstack_compute_floatingip_v2.terraform.address}" + network { + uuid = "${openstack_networking_network_v2.terraform.id}" + } + provisioner "remote-exec" { + connection { + user = "${var.ssh_user_name}" + key_file = "${var.ssh_key_file}" + } + inline = [ + "sudo apt-get -y update", + "sudo apt-get -y install nginx", + "sudo service nginx start" + ] + } +} diff --git a/examples/openstack-with-networking/openrc.sample b/examples/openstack-with-networking/openrc.sample new file mode 100644 index 000000000..c9a38e0a1 --- 
/dev/null +++ b/examples/openstack-with-networking/openrc.sample @@ -0,0 +1,7 @@ +#!/usr/bin/env bash + +export OS_AUTH_URL=http://KEYSTONE.ENDPOINT.URL:5000/v2.0 +export OS_TENANT_NAME=YOUR_TENANT_NAME +export OS_USERNAME=YOUR_USERNAME +export OS_PASSWORD=YOUR_PASSWORD +export OS_REGION_NAME=YOUR_REGION_NAME diff --git a/examples/openstack-with-networking/outputs.tf b/examples/openstack-with-networking/outputs.tf new file mode 100644 index 000000000..42f923fe2 --- /dev/null +++ b/examples/openstack-with-networking/outputs.tf @@ -0,0 +1,3 @@ +output "address" { + value = "${openstack_compute_floatingip_v2.terraform.address}" +} diff --git a/examples/openstack-with-networking/variables.tf b/examples/openstack-with-networking/variables.tf new file mode 100644 index 000000000..3477cf67e --- /dev/null +++ b/examples/openstack-with-networking/variables.tf @@ -0,0 +1,22 @@ +variable "image" { + default = "Ubuntu 14.04" +} + +variable "flavor" { + default = "m1.small" +} + +variable "ssh_key_file" { + default = "~/.ssh/id_rsa.terraform" +} + +variable "ssh_user_name" { + default = "ubuntu" +} + +variable "external_gateway" { +} + +variable "pool" { + default = "public" +} diff --git a/helper/multierror/error.go b/helper/multierror/error.go deleted file mode 100644 index ae21e4366..000000000 --- a/helper/multierror/error.go +++ /dev/null @@ -1,54 +0,0 @@ -package multierror - -import ( - "fmt" - "strings" -) - -// Error is an error type to track multiple errors. This is used to -// accumulate errors in cases such as configuration parsing, and returning -// them as a single error. -type Error struct { - Errors []error -} - -func (e *Error) Error() string { - points := make([]string, len(e.Errors)) - for i, err := range e.Errors { - points[i] = fmt.Sprintf("* %s", err) - } - - return fmt.Sprintf( - "%d error(s) occurred:\n\n%s", - len(e.Errors), strings.Join(points, "\n")) -} - -func (e *Error) GoString() string { - return fmt.Sprintf("*%#v", *e) -} - -// ErrorAppend is a helper function that will append more errors -// onto an Error in order to create a larger multi-error. If the -// original error is not an Error, it will be turned into one. -func ErrorAppend(err error, errs ...error) *Error { - if err == nil { - err = new(Error) - } - - switch err := err.(type) { - case *Error: - if err == nil { - err = new(Error) - } - - err.Errors = append(err.Errors, errs...) 
- return err - default: - newErrs := make([]error, len(errs)+1) - newErrs[0] = err - copy(newErrs[1:], errs) - return &Error{ - Errors: newErrs, - } - } -} diff --git a/helper/multierror/error_test.go b/helper/multierror/error_test.go deleted file mode 100644 index 207c00465..000000000 --- a/helper/multierror/error_test.go +++ /dev/null @@ -1,56 +0,0 @@ -package multierror - -import ( - "errors" - "testing" -) - -func TestError_Impl(t *testing.T) { - var raw interface{} - raw = &Error{} - if _, ok := raw.(error); !ok { - t.Fatal("Error must implement error") - } -} - -func TestErrorError(t *testing.T) { - expected := `2 error(s) occurred: - -* foo -* bar` - - errors := []error{ - errors.New("foo"), - errors.New("bar"), - } - - multi := &Error{errors} - if multi.Error() != expected { - t.Fatalf("bad: %s", multi.Error()) - } -} - -func TestErrorAppend_Error(t *testing.T) { - original := &Error{ - Errors: []error{errors.New("foo")}, - } - - result := ErrorAppend(original, errors.New("bar")) - if len(result.Errors) != 2 { - t.Fatalf("wrong len: %d", len(result.Errors)) - } - - original = &Error{} - result = ErrorAppend(original, errors.New("bar")) - if len(result.Errors) != 1 { - t.Fatalf("wrong len: %d", len(result.Errors)) - } -} - -func TestErrorAppend_NonError(t *testing.T) { - original := errors.New("foo") - result := ErrorAppend(original, errors.New("bar")) - if len(result.Errors) != 2 { - t.Fatalf("wrong len: %d", len(result.Errors)) - } -} diff --git a/helper/schema/field_reader.go b/helper/schema/field_reader.go index fc2a1e090..c3a6c76fa 100644 --- a/helper/schema/field_reader.go +++ b/helper/schema/field_reader.go @@ -38,15 +38,7 @@ func (r *FieldReadResult) ValueOrZero(s *Schema) interface{} { return r.Value } - result := s.Type.Zero() - - // The zero value of a set is nil, but we want it - // to actually be an empty set object... 
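The deletion above retires `helper/multierror` in favor of the external `github.com/hashicorp/go-multierror` package (note the matching import swap in `config/config.go` earlier in this diff). Call sites move from `multierror.ErrorAppend` to the external package's `Append`, which behaves the same way; a small sketch, assuming go-multierror's documented API:

```go
package main

import (
	"errors"
	"fmt"

	"github.com/hashicorp/go-multierror"
)

func main() {
	// Append wraps a plain (or nil) error into a *multierror.Error and
	// accumulates further errors onto it, just like the removed ErrorAppend.
	var result error
	result = multierror.Append(result, errors.New("foo"))
	result = multierror.Append(result, errors.New("bar"))

	// Prints a "2 errors occurred"-style summary followed by each error,
	// analogous to the removed Error() implementation.
	fmt.Println(result)
}
```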
- if set, ok := result.(*Set); ok && set.F == nil { - set.F = s.Set - } - - return result + return s.ZeroValue() } // addrToSchema finds the final element schema for the given address diff --git a/helper/schema/field_reader_config.go b/helper/schema/field_reader_config.go index 69b63eac7..76aeed2bd 100644 --- a/helper/schema/field_reader_config.go +++ b/helper/schema/field_reader_config.go @@ -201,7 +201,7 @@ func (r *ConfigFieldReader) readSet( address []string, schema *Schema) (FieldReadResult, map[int]int, error) { indexMap := make(map[int]int) // Create the set that will be our result - set := &Set{F: schema.Set} + set := schema.ZeroValue().(*Set) raw, err := readListField(&nestedConfigFieldReader{r}, address, schema) if err != nil { diff --git a/helper/schema/field_reader_config_test.go b/helper/schema/field_reader_config_test.go index 96028a89c..be37fcef9 100644 --- a/helper/schema/field_reader_config_test.go +++ b/helper/schema/field_reader_config_test.go @@ -122,7 +122,7 @@ func TestConfigFieldReader_DefaultHandling(t *testing.T) { Config: tc.Config, } out, err := r.ReadField(tc.Addr) - if (err != nil) != tc.Err { + if err != nil != tc.Err { t.Fatalf("%s: err: %s", name, err) } if s, ok := out.Value.(*Set); ok { @@ -192,7 +192,7 @@ func TestConfigFieldReader_ComputedMap(t *testing.T) { Config: tc.Config, } out, err := r.ReadField(tc.Addr) - if (err != nil) != tc.Err { + if err != nil != tc.Err { t.Fatalf("%s: err: %s", name, err) } if s, ok := out.Value.(*Set); ok { @@ -283,7 +283,7 @@ func TestConfigFieldReader_ComputedSet(t *testing.T) { Config: tc.Config, } out, err := r.ReadField(tc.Addr) - if (err != nil) != tc.Err { + if err != nil != tc.Err { t.Fatalf("%s: err: %s", name, err) } if s, ok := out.Value.(*Set); ok { diff --git a/helper/schema/field_reader_diff.go b/helper/schema/field_reader_diff.go index e17a6685e..dcb379436 100644 --- a/helper/schema/field_reader_diff.go +++ b/helper/schema/field_reader_diff.go @@ -141,7 +141,7 @@ func (r *DiffFieldReader) readSet( prefix := strings.Join(address, ".") + "." 
// Create the set that will be our result - set := &Set{F: schema.Set} + set := schema.ZeroValue().(*Set) // Go through the map and find all the set items for k, d := range r.Diff.Attributes { diff --git a/helper/schema/field_reader_diff_test.go b/helper/schema/field_reader_diff_test.go index 205b254f4..a763e4702 100644 --- a/helper/schema/field_reader_diff_test.go +++ b/helper/schema/field_reader_diff_test.go @@ -237,7 +237,7 @@ func TestDiffFieldReader_extra(t *testing.T) { for name, tc := range cases { out, err := r.ReadField(tc.Addr) - if (err != nil) != tc.Err { + if err != nil != tc.Err { t.Fatalf("%s: err: %s", name, err) } if s, ok := out.Value.(*Set); ok { diff --git a/helper/schema/field_reader_map.go b/helper/schema/field_reader_map.go index 6dc76c474..feb3fcc0a 100644 --- a/helper/schema/field_reader_map.go +++ b/helper/schema/field_reader_map.go @@ -105,7 +105,7 @@ func (r *MapFieldReader) readSet( } // Create the set that will be our result - set := &Set{F: schema.Set} + set := schema.ZeroValue().(*Set) // If we have an empty list, then return an empty list if countRaw.Computed || countRaw.Value.(int) == 0 { diff --git a/helper/schema/field_reader_map_test.go b/helper/schema/field_reader_map_test.go index e2d5342ee..61ffd4484 100644 --- a/helper/schema/field_reader_map_test.go +++ b/helper/schema/field_reader_map_test.go @@ -86,7 +86,7 @@ func TestMapFieldReader_extra(t *testing.T) { for name, tc := range cases { out, err := r.ReadField(tc.Addr) - if (err != nil) != tc.OutErr { + if err != nil != tc.OutErr { t.Fatalf("%s: err: %s", name, err) } if out.Computed != tc.OutComputed { diff --git a/helper/schema/field_reader_test.go b/helper/schema/field_reader_test.go index 7d4690762..c61fb8eb7 100644 --- a/helper/schema/field_reader_test.go +++ b/helper/schema/field_reader_test.go @@ -387,7 +387,7 @@ func testFieldReader(t *testing.T, f func(map[string]*Schema) FieldReader) { for name, tc := range cases { r := f(schema) out, err := r.ReadField(tc.Addr) - if (err != nil) != tc.Err { + if err != nil != tc.Err { t.Fatalf("%s: err: %s", name, err) } if s, ok := out.Value.(*Set); ok { diff --git a/helper/schema/field_writer_map_test.go b/helper/schema/field_writer_map_test.go index 8cf8100f2..3f54f8303 100644 --- a/helper/schema/field_writer_map_test.go +++ b/helper/schema/field_writer_map_test.go @@ -242,7 +242,7 @@ func TestMapFieldWriter(t *testing.T) { for name, tc := range cases { w := &MapFieldWriter{Schema: schema} err := w.WriteField(tc.Addr, tc.Value) - if (err != nil) != tc.Err { + if err != nil != tc.Err { t.Fatalf("%s: err: %s", name, err) } diff --git a/helper/schema/provider_test.go b/helper/schema/provider_test.go index e1f4b93b0..5701520a2 100644 --- a/helper/schema/provider_test.go +++ b/helper/schema/provider_test.go @@ -79,7 +79,7 @@ func TestProviderConfigure(t *testing.T) { } err = tc.P.Configure(terraform.NewResourceConfig(c)) - if (err != nil) != tc.Err { + if err != nil != tc.Err { t.Fatalf("%d: %s", i, err) } } @@ -141,7 +141,7 @@ func TestProviderValidate(t *testing.T) { } _, es := tc.P.Validate(terraform.NewResourceConfig(c)) - if (len(es) > 0) != tc.Err { + if len(es) > 0 != tc.Err { t.Fatalf("%d: %#v", i, es) } } @@ -180,7 +180,7 @@ func TestProviderValidateResource(t *testing.T) { } _, es := tc.P.ValidateResource(tc.Type, terraform.NewResourceConfig(c)) - if (len(es) > 0) != tc.Err { + if len(es) > 0 != tc.Err { t.Fatalf("%d: %#v", i, es) } } diff --git a/helper/schema/resource.go b/helper/schema/resource.go index 571fe18a6..a7b8cfe1e 100644 --- 
a/helper/schema/resource.go +++ b/helper/schema/resource.go @@ -244,7 +244,20 @@ func (r *Resource) InternalValidate(topSchemaMap schemaMap) error { return fmt.Errorf( "No Update defined, must set ForceNew on: %#v", nonForceNewAttrs) } + } else { + nonUpdateableAttrs := make([]string, 0) + for k, v := range r.Schema { + if v.ForceNew || v.Computed && !v.Optional { + nonUpdateableAttrs = append(nonUpdateableAttrs, k) + } + } + updateableAttrs := len(r.Schema) - len(nonUpdateableAttrs) + if updateableAttrs == 0 { + return fmt.Errorf( + "All fields are ForceNew or Computed w/out Optional, Update is superfluous") + } } + tsm = schemaMap(r.Schema) } diff --git a/helper/schema/resource_data_test.go b/helper/schema/resource_data_test.go index 95479cfbf..dc62a8a19 100644 --- a/helper/schema/resource_data_test.go +++ b/helper/schema/resource_data_test.go @@ -1736,7 +1736,7 @@ func TestResourceDataSet(t *testing.T) { } err = d.Set(tc.Key, tc.Value) - if (err != nil) != tc.Err { + if err != nil != tc.Err { t.Fatalf("%d err: %s", i, err) } diff --git a/helper/schema/resource_test.go b/helper/schema/resource_test.go index e35979eb2..ecfede51b 100644 --- a/helper/schema/resource_test.go +++ b/helper/schema/resource_test.go @@ -335,11 +335,41 @@ func TestResourceInternalValidate(t *testing.T) { }, true, }, + + // Update undefined for non-ForceNew field + { + &Resource{ + Create: func(d *ResourceData, meta interface{}) error { return nil }, + Schema: map[string]*Schema{ + "boo": &Schema{ + Type: TypeInt, + Optional: true, + }, + }, + }, + true, + }, + + // Update defined for ForceNew field + { + &Resource{ + Create: func(d *ResourceData, meta interface{}) error { return nil }, + Update: func(d *ResourceData, meta interface{}) error { return nil }, + Schema: map[string]*Schema{ + "goo": &Schema{ + Type: TypeInt, + Optional: true, + ForceNew: true, + }, + }, + }, + true, + }, } for i, tc := range cases { err := tc.In.InternalValidate(schemaMap{}) - if (err != nil) != tc.Err { + if err != nil != tc.Err { t.Fatalf("%d: bad: %s", i, err) } } @@ -555,7 +585,7 @@ func TestResourceRefresh_needsMigration(t *testing.T) { if err != nil { t.Fatalf("err: %#v", err) } - s.Attributes["newfoo"] = strconv.Itoa((int(oldfoo * 10))) + s.Attributes["newfoo"] = strconv.Itoa(int(oldfoo * 10)) delete(s.Attributes, "oldfoo") return s, nil diff --git a/helper/schema/schema.go b/helper/schema/schema.go index 59a3260fb..f4d860995 100644 --- a/helper/schema/schema.go +++ b/helper/schema/schema.go @@ -207,6 +207,30 @@ func (s *Schema) DefaultValue() (interface{}, error) { return nil, nil } +// Returns a zero value for the schema. +func (s *Schema) ZeroValue() interface{} { + // If it's a set then we'll do a bit of extra work to provide the + // right hashing function in our empty value. 
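One subtlety in the `InternalValidate` change above: with the parentheses dropped, `v.ForceNew || v.Computed && !v.Optional` still parses as intended because `&&` binds tighter than `||` in Go. A self-contained sketch of the predicate (a hypothetical helper, not part of the diff):

```go
package main

import "fmt"

// nonUpdatable mirrors the condition used when counting attributes that an
// Update function could never touch: the attribute forces a new resource,
// or it is computed-only (computed and not optional).
func nonUpdatable(forceNew, computed, optional bool) bool {
	return forceNew || (computed && !optional) // parentheses match Go's precedence
}

func main() {
	fmt.Println(nonUpdatable(true, false, false)) // true: ForceNew
	fmt.Println(nonUpdatable(false, true, false)) // true: computed-only
	fmt.Println(nonUpdatable(false, true, true))  // false: Optional+Computed is settable
}
```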
+ if s.Type == TypeSet { + setFunc := s.Set + if setFunc == nil { + // Default set function uses the schema to hash the whole value + elem := s.Elem + switch t := elem.(type) { + case *Schema: + setFunc = HashSchema(t) + case *Resource: + setFunc = HashResource(t) + default: + panic("invalid set element type") + } + } + return &Set{F: setFunc} + } else { + return s.Type.Zero() + } +} + func (s *Schema) finalizeDiff( d *terraform.ResourceAttrDiff) *terraform.ResourceAttrDiff { if d == nil { @@ -496,10 +520,8 @@ func (m schemaMap) InternalValidate(topSchemaMap schemaMap) error { return fmt.Errorf("%s: Default is not valid for lists or sets", k) } - if v.Type == TypeList && v.Set != nil { + if v.Type != TypeSet && v.Set != nil { return fmt.Errorf("%s: Set can only be set for TypeSet", k) - } else if v.Type == TypeSet && v.Set == nil { - return fmt.Errorf("%s: Set must be set", k) } switch t := v.Elem.(type) { @@ -518,8 +540,8 @@ func (m schemaMap) InternalValidate(topSchemaMap schemaMap) error { if v.ValidateFunc != nil { switch v.Type { - case TypeList, TypeSet, TypeMap: - return fmt.Errorf("ValidateFunc is only supported on primitives.") + case TypeList, TypeSet: + return fmt.Errorf("ValidateFunc is not yet supported on lists or sets.") } } } @@ -782,10 +804,10 @@ func (m schemaMap) diffSet( } if o == nil { - o = &Set{F: schema.Set} + o = schema.ZeroValue().(*Set) } if n == nil { - n = &Set{F: schema.Set} + n = schema.ZeroValue().(*Set) } os := o.(*Set) ns := n.(*Set) @@ -805,7 +827,7 @@ func (m schemaMap) diffSet( newStr := strconv.Itoa(newLen) // If the set computed then say that the # is computed - if computedSet || (schema.Computed && !nSet) { + if computedSet || schema.Computed && !nSet { // If # already exists, equals 0 and no new set is supplied, there // is nothing to record in the diff count, ok := d.GetOk(k + ".#") @@ -1096,6 +1118,17 @@ func (m schemaMap) validateMap( } } + if schema.ValidateFunc != nil { + validatableMap := make(map[string]interface{}) + for _, raw := range raws { + for k, v := range raw.(map[string]interface{}) { + validatableMap[k] = v + } + } + + return schema.ValidateFunc(validatableMap, k) + } + return nil, nil } diff --git a/helper/schema/schema_test.go b/helper/schema/schema_test.go index 83aa72a5c..09eeef119 100644 --- a/helper/schema/schema_test.go +++ b/helper/schema/schema_test.go @@ -2437,7 +2437,7 @@ func TestSchemaMap_Diff(t *testing.T) { d, err := schemaMap(tc.Schema).Diff( tc.State, terraform.NewResourceConfig(c)) - if (err != nil) != tc.Err { + if err != nil != tc.Err { t.Fatalf("#%d err: %s", i, err) } @@ -2595,7 +2595,7 @@ func TestSchemaMap_Input(t *testing.T) { rc.Config = make(map[string]interface{}) actual, err := schemaMap(tc.Schema).Input(input, rc) - if (err != nil) != tc.Err { + if err != nil != tc.Err { t.Fatalf("#%v err: %s", i, err) } @@ -2789,7 +2789,7 @@ func TestSchemaMap_InternalValidate(t *testing.T) { Optional: true, }, }, - true, + false, }, // Required but computed @@ -2903,7 +2903,7 @@ func TestSchemaMap_InternalValidate(t *testing.T) { { map[string]*Schema{ "foo": &Schema{ - Type: TypeMap, + Type: TypeSet, Required: true, ValidateFunc: func(v interface{}, k string) (ws []string, es []error) { return @@ -2916,7 +2916,7 @@ func TestSchemaMap_InternalValidate(t *testing.T) { for i, tc := range cases { err := schemaMap(tc.In).InternalValidate(schemaMap{}) - if (err != nil) != tc.Err { + if err != nil != tc.Err { if tc.Err { t.Fatalf("%d: Expected error did not occur:\n\n%#v", i, tc.In) } @@ -3652,7 +3652,7 @@ func 
TestSchemaMap_Validate(t *testing.T) { } ws, es := schemaMap(tc.Schema).Validate(terraform.NewResourceConfig(c)) - if (len(es) > 0) != tc.Err { + if len(es) > 0 != tc.Err { if len(es) == 0 { t.Errorf("%q: no errors", tn) } diff --git a/helper/schema/serialize.go b/helper/schema/serialize.go new file mode 100644 index 000000000..78f5bfbd6 --- /dev/null +++ b/helper/schema/serialize.go @@ -0,0 +1,105 @@ +package schema + +import ( + "bytes" + "sort" + "strconv" +) + +func SerializeValueForHash(buf *bytes.Buffer, val interface{}, schema *Schema) { + if val == nil { + buf.WriteRune(';') + return + } + + switch schema.Type { + case TypeBool: + if val.(bool) { + buf.WriteRune('1') + } else { + buf.WriteRune('0') + } + case TypeInt: + buf.WriteString(strconv.Itoa(val.(int))) + case TypeFloat: + buf.WriteString(strconv.FormatFloat(val.(float64), 'g', -1, 64)) + case TypeString: + buf.WriteString(val.(string)) + case TypeList: + buf.WriteRune('(') + l := val.([]interface{}) + for _, innerVal := range l { + serializeCollectionMemberForHash(buf, innerVal, schema.Elem) + } + buf.WriteRune(')') + case TypeMap: + m := val.(map[string]interface{}) + var keys []string + for k := range m { + keys = append(keys, k) + } + sort.Strings(keys) + buf.WriteRune('[') + for _, k := range keys { + innerVal := m[k] + buf.WriteString(k) + buf.WriteRune(':') + serializeCollectionMemberForHash(buf, innerVal, schema.Elem) + } + buf.WriteRune(']') + case TypeSet: + buf.WriteRune('{') + s := val.(*Set) + for _, innerVal := range s.List() { + serializeCollectionMemberForHash(buf, innerVal, schema.Elem) + } + buf.WriteRune('}') + default: + panic("unknown schema type to serialize") + } + buf.WriteRune(';') +} + +// SerializeResourceForHash appends a serialization of the given resource config +// to the given buffer, guaranteeing deterministic results given the same value +// and schema. +// +// Its primary purpose is as input into a hashing function in order +// to hash complex substructures when used in sets, and so the serialization +// is not reversible. +func SerializeResourceForHash(buf *bytes.Buffer, val interface{}, resource *Resource) { + sm := resource.Schema + m := val.(map[string]interface{}) + var keys []string + for k := range sm { + keys = append(keys, k) + } + sort.Strings(keys) + for _, k := range keys { + innerSchema := sm[k] + // Skip attributes that are not user-provided. Computed attributes + // do not contribute to the hash since their ultimate value cannot + // be known at plan/diff time.
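Sorting map and resource keys before writing them is what makes the serialization above deterministic, and determinism is the whole point: the string is meant to be fed into a hash function. A sketch of the intended pairing, using the `hashcode` helper exactly as the new `HashSchema`/`HashResource` functions in `helper/schema/set.go` later in this diff do:

```go
package schema

import (
	"bytes"

	"github.com/hashicorp/terraform/helper/hashcode"
)

// hashForSchema is a sketch of the HashSchema wiring found in set.go:
// serialize the value deterministically, then hash the resulting string.
func hashForSchema(s *Schema, v interface{}) int {
	var buf bytes.Buffer
	SerializeValueForHash(&buf, v, s)
	return hashcode.String(buf.String())
}
```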
+ if !(innerSchema.Required || innerSchema.Optional) { + continue + } + + buf.WriteString(k) + buf.WriteRune(':') + innerVal := m[k] + SerializeValueForHash(buf, innerVal, innerSchema) + } +} + +func serializeCollectionMemberForHash(buf *bytes.Buffer, val interface{}, elem interface{}) { + switch tElem := elem.(type) { + case *Schema: + SerializeValueForHash(buf, val, tElem) + case *Resource: + buf.WriteRune('<') + SerializeResourceForHash(buf, val, tElem) + buf.WriteString(">;") + default: + panic("invalid element type") + } +} diff --git a/helper/schema/serialize_test.go b/helper/schema/serialize_test.go new file mode 100644 index 000000000..7fe9e20bf --- /dev/null +++ b/helper/schema/serialize_test.go @@ -0,0 +1,214 @@ +package schema + +import ( + "bytes" + "testing" +) + +func TestSerializeForHash(t *testing.T) { + type testCase struct { + Schema interface{} + Value interface{} + Expected string + } + + tests := []testCase{ + + testCase{ + Schema: &Schema{ + Type: TypeInt, + }, + Value: 0, + Expected: "0;", + }, + + testCase{ + Schema: &Schema{ + Type: TypeInt, + }, + Value: 200, + Expected: "200;", + }, + + testCase{ + Schema: &Schema{ + Type: TypeBool, + }, + Value: true, + Expected: "1;", + }, + + testCase{ + Schema: &Schema{ + Type: TypeBool, + }, + Value: false, + Expected: "0;", + }, + + testCase{ + Schema: &Schema{ + Type: TypeFloat, + }, + Value: 1.0, + Expected: "1;", + }, + + testCase{ + Schema: &Schema{ + Type: TypeFloat, + }, + Value: 1.54, + Expected: "1.54;", + }, + + testCase{ + Schema: &Schema{ + Type: TypeFloat, + }, + Value: 0.1, + Expected: "0.1;", + }, + + testCase{ + Schema: &Schema{ + Type: TypeString, + }, + Value: "hello", + Expected: "hello;", + }, + + testCase{ + Schema: &Schema{ + Type: TypeString, + }, + Value: "1", + Expected: "1;", + }, + + testCase{ + Schema: &Schema{ + Type: TypeList, + Elem: &Schema{ + Type: TypeString, + }, + }, + Value: []interface{}{}, + Expected: "();", + }, + + testCase{ + Schema: &Schema{ + Type: TypeList, + Elem: &Schema{ + Type: TypeString, + }, + }, + Value: []interface{}{"hello", "world"}, + Expected: "(hello;world;);", + }, + + testCase{ + Schema: &Schema{ + Type: TypeList, + Elem: &Resource{ + Schema: map[string]*Schema{ + "fo": &Schema{ + Type: TypeString, + Required: true, + }, + "fum": &Schema{ + Type: TypeString, + Required: true, + }, + }, + }, + }, + Value: []interface{}{ + map[string]interface{}{ + "fo": "bar", + }, + map[string]interface{}{ + "fo": "baz", + "fum": "boz", + }, + }, + Expected: "(<fo:bar;fum:;>;<fo:baz;fum:boz;>;);", + }, + + testCase{ + Schema: &Schema{ + Type: TypeSet, + Elem: &Schema{ + Type: TypeString, + }, + }, + Value: NewSet(func(i interface{}) int { return len(i.(string)) }, []interface{}{ + "hello", + "woo", + }), + Expected: "{woo;hello;};", + }, + + testCase{ + Schema: &Schema{ + Type: TypeMap, + Elem: &Schema{ + Type: TypeString, + }, + }, + Value: map[string]interface{}{ + "foo": "bar", + "baz": "foo", + }, + Expected: "[baz:foo;foo:bar;];", + }, + + testCase{ + Schema: &Resource{ + Schema: map[string]*Schema{ + "name": &Schema{ + Type: TypeString, + Required: true, + }, + "size": &Schema{ + Type: TypeInt, + Optional: true, + }, + "green": &Schema{ + Type: TypeBool, + Optional: true, + Computed: true, + }, + "upside_down": &Schema{ + Type: TypeBool, + Computed: true, + }, + }, + }, + Value: map[string]interface{}{ + "name": "my-fun-database", + "size": 12, + "green": true, + }, + Expected: "green:1;name:my-fun-database;size:12;", + }, + } + + for _, test := range tests { + var gotBuf bytes.Buffer + schema := 
test.Schema + + switch s := schema.(type) { + case *Schema: + SerializeValueForHash(&gotBuf, test.Value, s) + case *Resource: + SerializeResourceForHash(&gotBuf, test.Value, s) + } + + got := gotBuf.String() + if got != test.Expected { + t.Errorf("hash(%#v) got %#v, but want %#v", test.Value, got, test.Expected) + } + } +} diff --git a/helper/schema/set.go b/helper/schema/set.go index 8d21866df..e070a1eb9 100644 --- a/helper/schema/set.go +++ b/helper/schema/set.go @@ -1,6 +1,7 @@ package schema import ( + "bytes" "fmt" "reflect" "sort" @@ -15,6 +16,28 @@ func HashString(v interface{}) int { return hashcode.String(v.(string)) } +// HashResource hashes complex structures that are described using +// a *Resource. This is the default set implementation used when a set's +// element type is a full resource. +func HashResource(resource *Resource) SchemaSetFunc { + return func(v interface{}) int { + var buf bytes.Buffer + SerializeResourceForHash(&buf, v, resource) + return hashcode.String(buf.String()) + } +} + +// HashSchema hashes values that are described using a *Schema. This is the +// default set implementation used when a set's element type is a single +// schema. +func HashSchema(schema *Schema) SchemaSetFunc { + return func(v interface{}) int { + var buf bytes.Buffer + SerializeValueForHash(&buf, v, schema) + return hashcode.String(buf.String()) + } +} + // Set is a set data structure that is returned for elements of type // TypeSet. type Set struct { diff --git a/log.go b/log.go index 70046b347..1077c3e55 100644 --- a/log.go +++ b/log.go @@ -2,28 +2,65 @@ package main import ( "io" + "log" "os" + "strings" + + "github.com/hashicorp/logutils" ) // These are the environmental variables that determine if we log, and if // we log whether or not the log should go to a file. -const EnvLog = "TF_LOG" //Set to True -const EnvLogFile = "TF_LOG_PATH" //Set to a file +const ( + EnvLog = "TF_LOG" // Set to True + EnvLogFile = "TF_LOG_PATH" // Set to a file +) -// logOutput determines where we should send logs (if anywhere). +var validLevels = []logutils.LogLevel{"TRACE", "DEBUG", "INFO", "WARN", "ERROR"} + +// logOutput determines where we should send logs (if anywhere) and the log level. func logOutput() (logOutput io.Writer, err error) { logOutput = nil - if os.Getenv(EnvLog) != "" { - logOutput = os.Stderr + envLevel := os.Getenv(EnvLog) + if envLevel == "" { + return + } - if logPath := os.Getenv(EnvLogFile); logPath != "" { - var err error - logOutput, err = os.Create(logPath) - if err != nil { - return nil, err - } + logOutput = os.Stderr + if logPath := os.Getenv(EnvLogFile); logPath != "" { + var err error + logOutput, err = os.Create(logPath) + if err != nil { + return nil, err } } + // This was the default since the beginning + logLevel := logutils.LogLevel("TRACE") + + if isValidLogLevel(envLevel) { + // allow following for better ux: info, Info or INFO + logLevel = logutils.LogLevel(strings.ToUpper(envLevel)) + } else { + log.Printf("[WARN] Invalid log level: %q. Defaulting to level: TRACE. 
Valid levels are: %+v", + envLevel, validLevels) + } + + logOutput = &logutils.LevelFilter{ + Levels: validLevels, + MinLevel: logLevel, + Writer: logOutput, + } + return } + +func isValidLogLevel(level string) bool { + for _, l := range validLevels { + if strings.ToUpper(level) == string(l) { + return true + } + } + + return false +} diff --git a/plugin/client.go b/plugin/client.go index be54526c7..8a3b03fc0 100644 --- a/plugin/client.go +++ b/plugin/client.go @@ -88,7 +88,7 @@ func CleanupClients() { }(client) } - log.Println("waiting for all plugin processes to complete...") + log.Println("[DEBUG] waiting for all plugin processes to complete...") wg.Wait() } @@ -326,7 +326,7 @@ func (c *Client) logStderr(r io.Reader) { c.config.Stderr.Write([]byte(line)) line = strings.TrimRightFunc(line, unicode.IsSpace) - log.Printf("%s: %s", filepath.Base(c.config.Cmd.Path), line) + log.Printf("[DEBUG] %s: %s", filepath.Base(c.config.Cmd.Path), line) } if err == io.EOF { diff --git a/scripts/website_push.sh b/scripts/website_push.sh index 36b62f1be..fa58fd694 100755 --- a/scripts/website_push.sh +++ b/scripts/website_push.sh @@ -16,7 +16,8 @@ while [ -h "$SOURCE" ] ; do SOURCE="$(readlink "$SOURCE")"; done DIR="$( cd -P "$( dirname "$SOURCE" )/.." && pwd )" # Copy into tmpdir -cp -R $DIR/website/ $DEPLOY/ +shopt -s dotglob +cp -r $DIR/website/* $DEPLOY/ # Change into that directory pushd $DEPLOY &>/dev/null @@ -25,6 +26,7 @@ pushd $DEPLOY &>/dev/null touch .gitignore echo ".sass-cache" >> .gitignore echo "build" >> .gitignore +echo "vendor" >> .gitignore # Add everything git init -q . diff --git a/state/remote/s3.go b/state/remote/s3.go index c2d897dd0..bdc6a63cf 100644 --- a/state/remote/s3.go +++ b/state/remote/s3.go @@ -4,6 +4,7 @@ import ( "bytes" "fmt" "io" + "log" "os" "strconv" @@ -45,6 +46,11 @@ func s3Factory(conf map[string]string) (Client, error) { serverSideEncryption = v } + acl := "" + if raw, ok := conf["acl"]; ok { + acl = raw + } + accessKeyId := conf["access_key"] secretAccessKey := conf["secret_key"] @@ -77,6 +83,7 @@ func s3Factory(conf map[string]string) (Client, error) { bucketName: bucketName, keyName: keyName, serverSideEncryption: serverSideEncryption, + acl: acl, }, nil } @@ -85,6 +92,7 @@ type S3Client struct { bucketName string keyName string serverSideEncryption bool + acl string } func (c *S3Client) Get() (*Payload, error) { @@ -125,7 +133,7 @@ func (c *S3Client) Get() (*Payload, error) { } func (c *S3Client) Put(data []byte) error { - contentType := "application/octet-stream" + contentType := "application/json" contentLength := int64(len(data)) i := &s3.PutObjectInput{ @@ -140,6 +148,12 @@ func (c *S3Client) Put(data []byte) error { i.ServerSideEncryption = aws.String("AES256") } + if c.acl != "" { + i.ACL = aws.String(c.acl) + } + + log.Printf("[DEBUG] Uploading remote state to S3: %#v", i) + if _, err := c.nativeClient.PutObject(i); err == nil { return nil } else { diff --git a/terraform/context.go b/terraform/context.go index be01a492a..d91a85176 100644 --- a/terraform/context.go +++ b/terraform/context.go @@ -292,7 +292,11 @@ func (c *Context) Apply() (*State, error) { } // Do the walk - _, err = c.walk(graph, walkApply) + if c.destroy { + _, err = c.walk(graph, walkDestroy) + } else { + _, err = c.walk(graph, walkApply) + } // Clean out any unused things c.state.prune() @@ -509,7 +513,7 @@ func (c *Context) releaseRun(ch chan<- struct{}) { func (c *Context) walk( graph *Graph, operation walkOperation) (*ContextGraphWalker, error) { // Walk the graph - 
log.Printf("[INFO] Starting graph walk: %s", operation.String()) + log.Printf("[DEBUG] Starting graph walk: %s", operation.String()) walker := &ContextGraphWalker{Context: c, Operation: operation} return walker, graph.Walk(walker) } diff --git a/terraform/context_apply_test.go b/terraform/context_apply_test.go index 4b2113d63..1fd069db0 100644 --- a/terraform/context_apply_test.go +++ b/terraform/context_apply_test.go @@ -10,6 +10,8 @@ import ( "sync/atomic" "testing" "time" + + "github.com/hashicorp/terraform/config/module" ) func TestContext2Apply(t *testing.T) { @@ -298,6 +300,88 @@ func TestContext2Apply_destroyComputed(t *testing.T) { } } +// https://github.com/hashicorp/terraform/issues/2892 +func TestContext2Apply_destroyCrossProviders(t *testing.T) { + m := testModule(t, "apply-destroy-cross-providers") + + p_aws := testProvider("aws") + p_aws.ApplyFn = testApplyFn + p_aws.DiffFn = testDiffFn + + p_tf := testProvider("terraform") + p_tf.ApplyFn = testApplyFn + p_tf.DiffFn = testDiffFn + + providers := map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p_aws), + "terraform": testProviderFuncFixed(p_tf), + } + + // Bug only appears from time to time, + // so we run this test multiple times + // to check for the race-condition + for i := 0; i <= 10; i++ { + ctx := getContextForApply_destroyCrossProviders( + t, m, providers) + + if p, err := ctx.Plan(); err != nil { + t.Fatalf("err: %s", err) + } else { + t.Logf(p.String()) + } + + if _, err := ctx.Apply(); err != nil { + t.Fatalf("err: %s", err) + } + } +} + +func getContextForApply_destroyCrossProviders( + t *testing.T, + m *module.Tree, + providers map[string]ResourceProviderFactory) *Context { + state := &State{ + Modules: []*ModuleState{ + &ModuleState{ + Path: rootModulePath, + Resources: map[string]*ResourceState{ + "terraform_remote_state.shared": &ResourceState{ + Type: "terraform_remote_state", + Primary: &InstanceState{ + ID: "remote-2652591293", + Attributes: map[string]string{ + "output.env_name": "test", + }, + }, + }, + }, + }, + &ModuleState{ + Path: []string{"root", "example"}, + Resources: map[string]*ResourceState{ + "aws_vpc.bar": &ResourceState{ + Type: "aws_vpc", + Primary: &InstanceState{ + ID: "vpc-aaabbb12", + Attributes: map[string]string{ + "value": "test", + }, + }, + }, + }, + }, + }, + } + ctx := testContext2(t, &ContextOpts{ + Module: m, + Providers: providers, + State: state, + Destroy: true, + }) + + return ctx +} + func TestContext2Apply_minimal(t *testing.T) { m := testModule(t, "apply-minimal") p := testProvider("aws") diff --git a/terraform/context_plan_test.go b/terraform/context_plan_test.go index 50f2bb471..db6f24577 100644 --- a/terraform/context_plan_test.go +++ b/terraform/context_plan_test.go @@ -1672,3 +1672,49 @@ func TestContext2Plan_varListErr(t *testing.T) { t.Fatal("should error") } } + +func TestContext2Plan_ignoreChanges(t *testing.T) { + m := testModule(t, "plan-ignore-changes") + p := testProvider("aws") + p.DiffFn = testDiffFn + s := &State{ + Modules: []*ModuleState{ + &ModuleState{ + Path: rootModulePath, + Resources: map[string]*ResourceState{ + "aws_instance.foo": &ResourceState{ + Primary: &InstanceState{ + ID: "bar", + Attributes: map[string]string{"ami": "ami-abcd1234"}, + }, + }, + }, + }, + }, + } + ctx := testContext2(t, &ContextOpts{ + Module: m, + Providers: map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p), + }, + Variables: map[string]string{ + "foo": "ami-1234abcd", + }, + State: s, + }) + + plan, err := ctx.Plan() + if err != nil 
{ + t.Fatalf("err: %s", err) + } + + if len(plan.Diff.RootModule().Resources) < 1 { + t.Fatalf("bad: %#v", plan.Diff.RootModule().Resources) + } + + actual := strings.TrimSpace(plan.String()) + expected := strings.TrimSpace(testTerraformPlanIgnoreChangesStr) + if actual != expected { + t.Fatalf("bad:\n%s\n\nexpected\n\n%s", actual, expected) + } +} diff --git a/terraform/eval_apply.go b/terraform/eval_apply.go index c22a6ca4e..6314baa86 100644 --- a/terraform/eval_apply.go +++ b/terraform/eval_apply.go @@ -49,7 +49,7 @@ func (n *EvalApply) Eval(ctx EvalContext) (interface{}, error) { // Flag if we're creating a new instance if n.CreateNew != nil { - *n.CreateNew = (state.ID == "" && !diff.Destroy) || diff.RequiresNew() + *n.CreateNew = state.ID == "" && !diff.Destroy || diff.RequiresNew() } { diff --git a/terraform/eval_ignore_changes.go b/terraform/eval_ignore_changes.go new file mode 100644 index 000000000..1a44089a9 --- /dev/null +++ b/terraform/eval_ignore_changes.go @@ -0,0 +1,32 @@ +package terraform +import ( + "github.com/hashicorp/terraform/config" + "strings" +) + +// EvalIgnoreChanges is an EvalNode implementation that removes diff +// attributes if their name matches names provided by the resource's +// IgnoreChanges lifecycle. +type EvalIgnoreChanges struct { + Resource *config.Resource + Diff **InstanceDiff +} + +func (n *EvalIgnoreChanges) Eval(ctx EvalContext) (interface{}, error) { + if n.Diff == nil || *n.Diff == nil || n.Resource == nil || n.Resource.Id() == "" { + return nil, nil + } + + diff := *n.Diff + ignoreChanges := n.Resource.Lifecycle.IgnoreChanges + + for _, ignoredName := range ignoreChanges { + for name := range diff.Attributes { + if strings.HasPrefix(name, ignoredName) { + delete(diff.Attributes, name) + } + } + } + + return nil, nil +} diff --git a/terraform/eval_validate.go b/terraform/eval_validate.go index e808240a0..533788230 100644 --- a/terraform/eval_validate.go +++ b/terraform/eval_validate.go @@ -49,9 +49,12 @@ func (n *EvalValidateCount) Eval(ctx EvalContext) (interface{}, error) { } RETURN: - return nil, &EvalValidateError{ - Errors: errs, + if len(errs) != 0 { + err = &EvalValidateError{ + Errors: errs, + } } + return nil, err } // EvalValidateProvider is an EvalNode implementation that validates diff --git a/terraform/evaltree_provider.go b/terraform/evaltree_provider.go index 99e3ccb1e..9ec6ea0c5 100644 --- a/terraform/evaltree_provider.go +++ b/terraform/evaltree_provider.go @@ -71,7 +71,7 @@ func ProviderEvalTree(n string, config *config.RawConfig) EvalNode { // Apply stuff seq = append(seq, &EvalOpFilter{ - Ops: []walkOperation{walkRefresh, walkPlan, walkApply}, + Ops: []walkOperation{walkRefresh, walkPlan, walkApply, walkDestroy}, Node: &EvalSequence{ Nodes: []EvalNode{ &EvalGetProvider{ @@ -98,7 +98,7 @@ func ProviderEvalTree(n string, config *config.RawConfig) EvalNode { // We configure on everything but validate, since validate may // not have access to all the variables. 
seq = append(seq, &EvalOpFilter{ - Ops: []walkOperation{walkRefresh, walkPlan, walkApply}, + Ops: []walkOperation{walkRefresh, walkPlan, walkApply, walkDestroy}, Node: &EvalSequence{ Nodes: []EvalNode{ &EvalConfigProvider{ diff --git a/terraform/graph_builder.go b/terraform/graph_builder.go index 2190be15e..ca9966701 100644 --- a/terraform/graph_builder.go +++ b/terraform/graph_builder.go @@ -107,7 +107,7 @@ func (b *BuiltinGraphBuilder) Steps(path []string) []GraphTransformer { &OrphanTransformer{ State: b.State, Module: b.Root, - Targeting: (len(b.Targets) > 0), + Targeting: len(b.Targets) > 0, }, // Output-related transformations diff --git a/terraform/graph_config_node_output.go b/terraform/graph_config_node_output.go index 5b2d95fdc..d4f00451c 100644 --- a/terraform/graph_config_node_output.go +++ b/terraform/graph_config_node_output.go @@ -44,7 +44,7 @@ func (n *GraphNodeConfigOutput) DependentOn() []string { // GraphNodeEvalable impl. func (n *GraphNodeConfigOutput) EvalTree() EvalNode { return &EvalOpFilter{ - Ops: []walkOperation{walkRefresh, walkPlan, walkApply}, + Ops: []walkOperation{walkRefresh, walkPlan, walkApply, walkDestroy}, Node: &EvalSequence{ Nodes: []EvalNode{ &EvalWriteOutput{ diff --git a/terraform/graph_config_node_resource.go b/terraform/graph_config_node_resource.go index dfc958714..2bf0e4568 100644 --- a/terraform/graph_config_node_resource.go +++ b/terraform/graph_config_node_resource.go @@ -165,7 +165,7 @@ func (n *GraphNodeConfigResource) DynamicExpand(ctx EvalContext) (*Graph, error) steps = append(steps, &OrphanTransformer{ State: state, View: n.Resource.Id(), - Targeting: (len(n.Targets) > 0), + Targeting: len(n.Targets) > 0, }) steps = append(steps, &DeposedTransformer{ diff --git a/terraform/graph_dot_test.go b/terraform/graph_dot_test.go index da0e1f55e..ecef1984d 100644 --- a/terraform/graph_dot_test.go +++ b/terraform/graph_dot_test.go @@ -210,13 +210,13 @@ digraph { for tn, tc := range cases { actual, err := GraphDot(tc.Graph(), &tc.Opts) - if (err == nil) && tc.Error != "" { + if err == nil && tc.Error != "" { t.Fatalf("%s: expected err: %s, got none", tn, tc.Error) } - if (err != nil) && (tc.Error == "") { + if err != nil && tc.Error == "" { t.Fatalf("%s: unexpected err: %s", tn, err) } - if (err != nil) && (tc.Error != "") { + if err != nil && tc.Error != "" { if !strings.Contains(err.Error(), tc.Error) { t.Fatalf("%s: expected err: %s\nto contain: %s", tn, err, tc.Error) } diff --git a/terraform/graph_walk_operation.go b/terraform/graph_walk_operation.go index c2143fbd8..f2de24134 100644 --- a/terraform/graph_walk_operation.go +++ b/terraform/graph_walk_operation.go @@ -13,4 +13,5 @@ const ( walkPlanDestroy walkRefresh walkValidate + walkDestroy ) diff --git a/terraform/interpolate.go b/terraform/interpolate.go index d8e5288ec..31c366eab 100644 --- a/terraform/interpolate.go +++ b/terraform/interpolate.go @@ -342,7 +342,7 @@ func (i *Interpolater) computeResourceVariable( // TODO: test by creating a state and configuration that is referencing // a non-existent variable "foo.bar" where the state only has "foo" // and verify plan works, but apply doesn't. - if i.Operation == walkApply { + if i.Operation == walkApply || i.Operation == walkDestroy { goto MISSING } @@ -384,7 +384,7 @@ MISSING: // // For an input walk, computed values are okay to return because we're only // looking for missing variables to prompt the user for. 
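The condition being widened just below recurs twice in `interpolate.go`; condensed into a helper for readability (hypothetical, not part of the diff):

```go
package terraform

// computedValueIsUnknown is a sketch of the widened condition in
// Interpolater: on these walk operations a computed value interpolates to
// config.UnknownVariableValue instead of raising a "missing value" error,
// with the new walkDestroy treated the same as the other non-apply walks.
func computedValueIsUnknown(op walkOperation) bool {
	switch op {
	case walkRefresh, walkPlanDestroy, walkDestroy, walkInput:
		return true
	default:
		return false
	}
}
```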
- if i.Operation == walkRefresh || i.Operation == walkPlanDestroy || i.Operation == walkInput { + if i.Operation == walkRefresh || i.Operation == walkPlanDestroy || i.Operation == walkDestroy || i.Operation == walkInput { return config.UnknownVariableValue, nil } @@ -481,7 +481,7 @@ func (i *Interpolater) computeResourceMultiVariable( // // For an input walk, computed values are okay to return because we're only // looking for missing variables to prompt the user for. - if i.Operation == walkRefresh || i.Operation == walkPlanDestroy || i.Operation == walkInput { + if i.Operation == walkRefresh || i.Operation == walkPlanDestroy || i.Operation == walkDestroy || i.Operation == walkInput { return config.UnknownVariableValue, nil } diff --git a/terraform/interpolate_test.go b/terraform/interpolate_test.go index bbbb1024a..fbce848ea 100644 --- a/terraform/interpolate_test.go +++ b/terraform/interpolate_test.go @@ -330,11 +330,6 @@ func TestInterpolator_resourceMultiAttributesWithResourceCount(t *testing.T) { Value: config.NewStringList([]string{}).String(), Type: ast.TypeString, }) - // Zero + zero elements - testInterpolate(t, i, scope, "aws_route53_zone.terra.*.nothing", ast.Variable{ - Value: config.NewStringList([]string{"", ""}).String(), - Type: ast.TypeString, - }) // Zero + 1 element testInterpolate(t, i, scope, "aws_route53_zone.terra.*.special", ast.Variable{ Value: config.NewStringList([]string{"extra"}).String(), diff --git a/terraform/resource_address.go b/terraform/resource_address.go index 583cdd2a0..f7dd94074 100644 --- a/terraform/resource_address.go +++ b/terraform/resource_address.go @@ -53,26 +53,26 @@ func (addr *ResourceAddress) Equals(raw interface{}) bool { return false } - pathMatch := ((len(addr.Path) == 0 && len(other.Path) == 0) || - reflect.DeepEqual(addr.Path, other.Path)) + pathMatch := len(addr.Path) == 0 && len(other.Path) == 0 || + reflect.DeepEqual(addr.Path, other.Path) - indexMatch := (addr.Index == -1 || + indexMatch := addr.Index == -1 || other.Index == -1 || - addr.Index == other.Index) + addr.Index == other.Index - nameMatch := (addr.Name == "" || + nameMatch := addr.Name == "" || other.Name == "" || - addr.Name == other.Name) + addr.Name == other.Name - typeMatch := (addr.Type == "" || + typeMatch := addr.Type == "" || other.Type == "" || - addr.Type == other.Type) + addr.Type == other.Type - return (pathMatch && + return pathMatch && indexMatch && addr.InstanceType == other.InstanceType && nameMatch && - typeMatch) + typeMatch } func ParseResourceIndex(s string) (int, error) { diff --git a/terraform/terraform_test.go b/terraform/terraform_test.go index c84e9803c..02d4de2a2 100644 --- a/terraform/terraform_test.go +++ b/terraform/terraform_test.go @@ -1286,3 +1286,16 @@ STATE: ` + +const testTerraformPlanIgnoreChangesStr = ` +DIFF: + +UPDATE: aws_instance.foo + type: "" => "aws_instance" + +STATE: + +aws_instance.foo: + ID = bar + ami = ami-abcd1234 +` diff --git a/terraform/test-fixtures/apply-destroy-cross-providers/child/main.tf b/terraform/test-fixtures/apply-destroy-cross-providers/child/main.tf new file mode 100644 index 000000000..048b26dec --- /dev/null +++ b/terraform/test-fixtures/apply-destroy-cross-providers/child/main.tf @@ -0,0 +1,5 @@ +variable "value" {} + +resource "aws_vpc" "bar" { + value = "${var.value}" +} diff --git a/terraform/test-fixtures/apply-destroy-cross-providers/main.tf b/terraform/test-fixtures/apply-destroy-cross-providers/main.tf new file mode 100644 index 000000000..b0595b9e8 --- /dev/null +++ 
b/terraform/test-fixtures/apply-destroy-cross-providers/main.tf @@ -0,0 +1,6 @@ +resource "terraform_remote_state" "shared" {} + +module "child" { + source = "./child" + value = "${terraform_remote_state.shared.output.env_name}" +} diff --git a/terraform/test-fixtures/plan-ignore-changes/main.tf b/terraform/test-fixtures/plan-ignore-changes/main.tf new file mode 100644 index 000000000..056256a1d --- /dev/null +++ b/terraform/test-fixtures/plan-ignore-changes/main.tf @@ -0,0 +1,9 @@ +variable "foo" {} + +resource "aws_instance" "foo" { + ami = "${var.foo}" + + lifecycle { + ignore_changes = ["ami"] + } +} diff --git a/terraform/transform_deposed.go b/terraform/transform_deposed.go index 6ae1695f0..fa3143c3c 100644 --- a/terraform/transform_deposed.go +++ b/terraform/transform_deposed.go @@ -110,7 +110,7 @@ func (n *graphNodeDeposedResource) EvalTree() EvalNode { var diff *InstanceDiff var err error seq.Nodes = append(seq.Nodes, &EvalOpFilter{ - Ops: []walkOperation{walkApply}, + Ops: []walkOperation{walkApply, walkDestroy}, Node: &EvalSequence{ Nodes: []EvalNode{ &EvalGetProvider{ diff --git a/terraform/transform_orphan.go b/terraform/transform_orphan.go index bb381c823..45ea050ba 100644 --- a/terraform/transform_orphan.go +++ b/terraform/transform_orphan.go @@ -263,7 +263,7 @@ func (n *graphNodeOrphanResource) EvalTree() EvalNode { // Apply var err error seq.Nodes = append(seq.Nodes, &EvalOpFilter{ - Ops: []walkOperation{walkApply}, + Ops: []walkOperation{walkApply, walkDestroy}, Node: &EvalSequence{ Nodes: []EvalNode{ &EvalReadDiff{ diff --git a/terraform/transform_output.go b/terraform/transform_output.go index 5ea48a016..d3e839ce1 100644 --- a/terraform/transform_output.go +++ b/terraform/transform_output.go @@ -62,7 +62,7 @@ func (n *graphNodeOrphanOutput) Name() string { func (n *graphNodeOrphanOutput) EvalTree() EvalNode { return &EvalOpFilter{ - Ops: []walkOperation{walkApply, walkRefresh}, + Ops: []walkOperation{walkApply, walkDestroy, walkRefresh}, Node: &EvalDeleteOutput{ Name: n.OutputName, }, @@ -90,7 +90,7 @@ func (n *graphNodeOrphanOutputFlat) Name() string { func (n *graphNodeOrphanOutputFlat) EvalTree() EvalNode { return &EvalOpFilter{ - Ops: []walkOperation{walkApply, walkRefresh}, + Ops: []walkOperation{walkApply, walkDestroy, walkRefresh}, Node: &EvalDeleteOutput{ Name: n.OutputName, }, diff --git a/terraform/transform_provider.go b/terraform/transform_provider.go index 8a6655182..0ea226713 100644 --- a/terraform/transform_provider.go +++ b/terraform/transform_provider.go @@ -255,7 +255,7 @@ func (n *graphNodeDisabledProvider) EvalTree() EvalNode { var resourceConfig *ResourceConfig return &EvalOpFilter{ - Ops: []walkOperation{walkInput, walkValidate, walkRefresh, walkPlan, walkApply}, + Ops: []walkOperation{walkInput, walkValidate, walkRefresh, walkPlan, walkApply, walkDestroy}, Node: &EvalSequence{ Nodes: []EvalNode{ &EvalInterpolate{ diff --git a/terraform/transform_resource.go b/terraform/transform_resource.go index 0b56721b0..81ff158d9 100644 --- a/terraform/transform_resource.go +++ b/terraform/transform_resource.go @@ -318,6 +318,10 @@ func (n *graphNodeExpandedResource) EvalTree() EvalNode { Resource: n.Resource, Diff: &diff, }, + &EvalIgnoreChanges{ + Resource: n.Resource, + Diff: &diff, + }, &EvalWriteState{ Name: n.stateId(), ResourceType: n.Resource.Type, @@ -369,7 +373,7 @@ func (n *graphNodeExpandedResource) EvalTree() EvalNode { var createNew, tainted bool var createBeforeDestroyEnabled bool seq.Nodes = append(seq.Nodes, &EvalOpFilter{ - Ops: 
[]walkOperation{walkApply}, + Ops: []walkOperation{walkApply, walkDestroy}, Node: &EvalSequence{ Nodes: []EvalNode{ // Get the saved diff for apply @@ -591,7 +595,7 @@ func (n *graphNodeExpandedResourceDestroy) EvalTree() EvalNode { var state *InstanceState var err error return &EvalOpFilter{ - Ops: []walkOperation{walkApply}, + Ops: []walkOperation{walkApply, walkDestroy}, Node: &EvalSequence{ Nodes: []EvalNode{ // Get the saved diff for apply diff --git a/terraform/transform_tainted.go b/terraform/transform_tainted.go index fdc1ae6bc..37e25df32 100644 --- a/terraform/transform_tainted.go +++ b/terraform/transform_tainted.go @@ -114,7 +114,7 @@ func (n *graphNodeTaintedResource) EvalTree() EvalNode { // Apply var diff *InstanceDiff seq.Nodes = append(seq.Nodes, &EvalOpFilter{ - Ops: []walkOperation{walkApply}, + Ops: []walkOperation{walkApply, walkDestroy}, Node: &EvalSequence{ Nodes: []EvalNode{ &EvalGetProvider{ diff --git a/terraform/walkoperation_string.go b/terraform/walkoperation_string.go index 423793c3c..1ce3661c4 100644 --- a/terraform/walkoperation_string.go +++ b/terraform/walkoperation_string.go @@ -4,9 +4,9 @@ package terraform import "fmt" -const _walkOperation_name = "walkInvalidwalkInputwalkApplywalkPlanwalkPlanDestroywalkRefreshwalkValidate" +const _walkOperation_name = "walkInvalidwalkInputwalkApplywalkPlanwalkPlanDestroywalkRefreshwalkValidatewalkDestroy" -var _walkOperation_index = [...]uint8{0, 11, 20, 29, 37, 52, 63, 75} +var _walkOperation_index = [...]uint8{0, 11, 20, 29, 37, 52, 63, 75, 86} func (i walkOperation) String() string { if i >= walkOperation(len(_walkOperation_index)-1) { diff --git a/website/Gemfile.lock b/website/Gemfile.lock index fac790740..325143cfd 100644 --- a/website/Gemfile.lock +++ b/website/Gemfile.lock @@ -1,6 +1,6 @@ GIT remote: https://github.com/hashicorp/middleman-hashicorp - revision: fc131cfce2a1d5c8671812d9844a944ebb4bd92f + revision: b152b6436348e8e1f9990436228b25b4c5c6fcb8 specs: middleman-hashicorp (0.1.0) bootstrap-sass (~> 3.3) @@ -76,7 +76,7 @@ GEM http_parser.rb (0.6.0) i18n (0.7.0) json (1.8.3) - kramdown (1.8.0) + kramdown (1.9.0) less (2.6.0) commonjs (~> 0.2.7) libv8 (3.16.14.11) @@ -148,10 +148,10 @@ GEM rb-fsevent (0.9.6) rb-inotify (0.9.5) ffi (>= 0.5.0) - redcarpet (3.3.2) + redcarpet (3.3.3) ref (2.0.0) rouge (1.10.1) - sass (3.4.18) + sass (3.4.19) sprockets (2.12.4) hike (~> 1.2) multi_json (~> 1.0) diff --git a/website/config.rb b/website/config.rb index 80bbb6443..5d0d0ec16 100644 --- a/website/config.rb +++ b/website/config.rb @@ -10,4 +10,5 @@ activate :hashicorp do |h| h.bintray_repo = "mitchellh/terraform" h.bintray_user = "mitchellh" h.bintray_key = ENV["BINTRAY_API_KEY"] + h.github_slug = "hashicorp/terraform" end diff --git a/website/source/assets/stylesheets/_docs.scss b/website/source/assets/stylesheets/_docs.scss index 799b631a0..6849f9106 100755 --- a/website/source/assets/stylesheets/_docs.scss +++ b/website/source/assets/stylesheets/_docs.scss @@ -20,6 +20,7 @@ body.layout-google, body.layout-heroku, body.layout-mailgun, body.layout-openstack, +body.layout-packet, body.layout-rundeck, body.layout-template, body.layout-docs, diff --git a/website/source/docs/commands/apply.html.markdown b/website/source/docs/commands/apply.html.markdown index dec4ea19d..770d41c95 100644 --- a/website/source/docs/commands/apply.html.markdown +++ b/website/source/docs/commands/apply.html.markdown @@ -35,6 +35,9 @@ The command-line flags are all optional. 
The list of available flags are:
* `-no-color` - Disables output with coloring.
+* `-parallelism=n` - Limit the number of concurrent operations as Terraform
+ [walks the graph](/docs/internals/graph.html#walking-the-graph).
+
* `-refresh=true` - Update the state for each resource prior to planning and applying. This has no effect if a plan file is given directly to apply.
diff --git a/website/source/docs/commands/graph.html.markdown b/website/source/docs/commands/graph.html.markdown
index c7c426142..d24005fcb 100644
--- a/website/source/docs/commands/graph.html.markdown
+++ b/website/source/docs/commands/graph.html.markdown
@@ -46,9 +46,6 @@ by GraphViz: ``` $ terraform graph | dot -Tpng > graph.png ```
-Alternatively, the web-based [GraphViz Workspace](http://graphviz-dev.appspot.com)
-can be used to quickly render DOT file inputs as well.
-
Here is an example graph output: ![Graph Example](graph-example.png)
diff --git a/website/source/docs/commands/plan.html.markdown b/website/source/docs/commands/plan.html.markdown
index 1c0b1b68a..e4a48ab5b 100644
--- a/website/source/docs/commands/plan.html.markdown
+++ b/website/source/docs/commands/plan.html.markdown
@@ -48,6 +48,9 @@ The command-line flags are all optional. The list of available flags are: changes shown in this plan are applied. Read the warning on saved plans below.
+* `-parallelism=n` - Limit the number of concurrent operations as Terraform
+ [walks the graph](/docs/internals/graph.html#walking-the-graph).
+
* `-refresh=true` - Update the state prior to checking for differences.
* `-state=path` - Path to the state file. Defaults to "terraform.tfstate".
diff --git a/website/source/docs/commands/remote-config.html.markdown b/website/source/docs/commands/remote-config.html.markdown
index c7586ac0e..73a06f821 100644
--- a/website/source/docs/commands/remote-config.html.markdown
+++ b/website/source/docs/commands/remote-config.html.markdown
@@ -57,6 +57,13 @@ The following backends are supported: in the `access_key`, `secret_key` and `region` variables respectively, but passing credentials this way is not recommended since they will be included in cleartext inside the persisted state.
+ Other supported parameters include:
+ * `bucket` - the name of the S3 bucket
+ * `key` - the path where the state file is placed and looked up inside the bucket
+ * `encrypt` - whether to enable [server side encryption](http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html)
+ of the state file
+ * `acl` - [Canned ACL](http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl)
+ to be applied to the state file.
* HTTP - Stores the state using a simple REST client. State will be fetched via GET, updated via POST, and purged with DELETE. Requires the `address` variable.
diff --git a/website/source/docs/configuration/interpolation.html.md b/website/source/docs/configuration/interpolation.html.md
index f4730acd6..28d03790d 100644
--- a/website/source/docs/configuration/interpolation.html.md
+++ b/website/source/docs/configuration/interpolation.html.md
@@ -74,6 +74,17 @@ are documented below. The supported built-in functions are:
+ * `base64decode(string)` - Given a base64-encoded string, decodes it and
+ returns the original string.
+
+ * `base64encode(string)` - Returns a base64-encoded representation of the
+ given string.
+
+ * `compact(list)` - Removes empty string elements from a list. This can be
+ useful in some cases, for example when passing joined lists as module
+ variables or when parsing module outputs.
+ Example: `compact(module.my_asg.load_balancer_names)`
+
* `concat(list1, list2)` - Combines two or more lists into a single list. Example: `concat(aws_instance.db.*.tags.Name, aws_instance.web.*.tags.Name)`
diff --git a/website/source/docs/configuration/resources.html.md b/website/source/docs/configuration/resources.html.md
index f099c5f25..d5e087fec 100644
--- a/website/source/docs/configuration/resources.html.md
+++ b/website/source/docs/configuration/resources.html.md
@@ -68,11 +68,20 @@ The `lifecycle` block allows the following keys to be set: destruction of a given resource. When this is set to `true`, any plan that includes a destroy of this resource will return an error message.
+ * `ignore_changes` (list of strings) - Customizes how diffs are evaluated for
+ resources, allowing changes to individual attributes to be ignored.
+ As an example, this can be used to ignore dynamic changes made to the
+ resource from outside of Terraform. Other meta-parameters cannot be ignored.
+
~> **NOTE on create\_before\_destroy and dependencies:** Resources that utilize the `create_before_destroy` key can only depend on other resources that also include `create_before_destroy`. Referencing a resource that does not include `create_before_destroy` will result in a dependency graph cycle.
+~> **NOTE on ignore\_changes:** Ignored attributes are matched by their
+attribute name, not their state ID. For example, if an `aws_route_table` has
+two routes defined and the `ignore_changes` list contains "route", both routes
+will be ignored.
+
-------------
Within a resource, you can optionally have a **connection block**.
@@ -191,6 +200,8 @@ where `LIFECYCLE` is: ``` lifecycle { [create_before_destroy = true|false]
+ [prevent_destroy = true|false]
+ [ignore_changes = [ATTRIBUTE NAME, ...]]
} ```
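As an illustration of the new `ignore_changes` flag described above, a minimal configuration might look like the following sketch (the resource and AMI value are hypothetical, not part of this change):

```
resource "aws_instance" "web" {
  ami           = "ami-abcd1234"
  instance_type = "t2.micro"

  lifecycle {
    # Diffs on "ami" are ignored: replacing the AMI outside of
    # Terraform no longer causes a planned update for this resource.
    ignore_changes = ["ami"]
  }
}
```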
+ Example: `compact(module.my_asg.load_balancer_names)` + * `concat(list1, list2)` - Combines two or more lists into a single list. Example: `concat(aws_instance.db.*.tags.Name, aws_instance.web.*.tags.Name)` diff --git a/website/source/docs/configuration/resources.html.md b/website/source/docs/configuration/resources.html.md index f099c5f25..d5e087fec 100644 --- a/website/source/docs/configuration/resources.html.md +++ b/website/source/docs/configuration/resources.html.md @@ -68,11 +68,20 @@ The `lifecycle` block allows the following keys to be set: destruction of a given resource. When this is set to `true`, any plan that includes a destroy of this resource will return an error message. + * `ignore_changes` (list of strings) - Customizes how diffs are evaluated for + resources, allowing individual attributes to be ignored through changes. + As an example, this can be used to ignore dynamic changes to the + resource from external resources. Other meta-parameters cannot be ignored. + ~> **NOTE on create\_before\_destroy and dependencies:** Resources that utilize the `create_before_destroy` key can only depend on other resources that also include `create_before_destroy`. Referencing a resource that does not include `create_before_destroy` will result in a dependency graph cycle. +~> **NOTE on ignore\_changes:** Ignored attribute names can be matched by their +name, not state ID. For example, if an `aws_route_table` has two routes defined +and the `ignore_changes` list contains "route", both routes will be ignored. + ------------- Within a resource, you can optionally have a **connection block**. @@ -191,6 +200,8 @@ where `LIFECYCLE` is: ``` lifecycle { [create_before_destroy = true|false] + [prevent_destroy = true|false] + [ignore_changes = [ATTRIBUTE NAME, ...]] } ``` diff --git a/website/source/docs/configuration/syntax.html.md b/website/source/docs/configuration/syntax.html.md index 2f0e7d547..8fcc6c68c 100644 --- a/website/source/docs/configuration/syntax.html.md +++ b/website/source/docs/configuration/syntax.html.md @@ -54,6 +54,11 @@ Basic bullet point reference: is [documented here](/docs/configuration/interpolation.html). + * Multiline strings can use shell-style "here doc" syntax, with + the string starting with a marker like `< To walk the graph, a standard depth-first traversal is done. Graph -walking is done with as much parallelism as possible: a node is walked -as soon as all of its dependencies are walked. +walking is done in parallel: a node is walked as soon as all of its +dependencies are walked. + +The amount of parallelism is limited using a semaphore to prevent too many +concurrent operations from overwhelming the resources of the machine running +Terraform. By default, up to 10 nodes in the graph will be processed +concurrently. This number can be set using the `-parallelism` flag on the +[plan](/docs/commands/plan.html), [apply](/docs/commands/apply.html), and +[destroy](/docs/commands/destroy.html) commands. + +Setting `-parallelism` is considered an advanced operation and should not be +necessary for normal usage of Terraform. It may be helpful in certain special +use cases or to help debug Terraform issues. + +Note that some providers (AWS, for example), handle API rate limiting issues at +a lower level by implementing graceful backoff/retry in their respective API +clients. For this reason, Terraform does not use this `parallelism` feature to +address API rate limits directly. 
diff --git a/website/source/docs/modules/sources.html.markdown b/website/source/docs/modules/sources.html.markdown
index b0a2b4d0c..d9e6a1316 100644
--- a/website/source/docs/modules/sources.html.markdown
+++ b/website/source/docs/modules/sources.html.markdown
@@ -81,6 +81,30 @@ You can use the same parameters to GitHub repositories as you can generic Git repositories (such as tags or branches). See the documentation for generic Git repositories for more information.
+#### Private GitHub Repos
+
+If you need Terraform to be able to fetch modules from private GitHub repos on
+a remote machine (like Atlas or a CI server), you'll need to provide
+Terraform with credentials that can be used to authenticate as a user with read
+access to the private repo.
+
+First, create a [machine
+user](https://developer.github.com/guides/managing-deploy-keys/#machine-users)
+with access to read from the private repo in question, then embed this user's
+credentials into the source field:
+
+```
+module "private-infra" {
+  source = "git::https://MACHINE-USER:MACHINE-PASS@github.com/org/privatemodules//modules/foo"
+}
+```
+
+Note that Terraform does not yet support interpolations in the `source` field,
+so the machine username and password will have to be embedded directly into the
+source string. You can track
+[GH-1439](https://github.com/hashicorp/terraform/issues/1439) to learn when this
+limitation is lifted.
+
## BitBucket
Terraform will automatically recognize BitBucket URLs and turn them into
diff --git a/website/source/docs/providers/atlas/r/artifact.html.markdown b/website/source/docs/providers/atlas/r/artifact.html.markdown
index 7c8be2985..08dae8fd9 100644
--- a/website/source/docs/providers/atlas/r/artifact.html.markdown
+++ b/website/source/docs/providers/atlas/r/artifact.html.markdown
@@ -32,6 +32,7 @@ resource "atlas_artifact" "web" { }
# Start our instance with the dynamic ami value
+# Remember to include the AWS region as it is part of the full ID
resource "aws_instance" "app" { ami = "${atlas_artifact.web.metadata_full.region-us-east-1}" ...
@@ -82,4 +83,3 @@ The following attributes are exported: For example, the "region.us-east-1" key will become "region-us-east-1".
* `version_real` - The matching version of the artifact
* `slug` - The artifact slug in Atlas
-
diff --git a/website/source/docs/providers/aws/r/app_cookie_stickiness_policy.html.markdown b/website/source/docs/providers/aws/r/app_cookie_stickiness_policy.html.markdown
index 6d6215f22..c15f09d66 100644
--- a/website/source/docs/providers/aws/r/app_cookie_stickiness_policy.html.markdown
+++ b/website/source/docs/providers/aws/r/app_cookie_stickiness_policy.html.markdown
@@ -15,20 +15,20 @@ Provides an application cookie stickiness policy, which allows an ELB to wed its
```
resource "aws_elb" "lb" {
  name = "test-lb"
- availability_zones = ["us-east-1a"]
- listener {
- instance_port = 8000
- instance_protocol = "http"
- lb_port = 80
- lb_protocol = "http"
- }
+ availability_zones = ["us-east-1a"]
+ listener {
+ instance_port = 8000
+ instance_protocol = "http"
+ lb_port = 80
+ lb_protocol = "http"
+ }
}
resource "aws_app_cookie_stickiness_policy" "foo" {
- name = "foo_policy"
- load_balancer = "${aws_elb.lb.id}"
- lb_port = 80
- cookie_name = "MyAppCookie"
+ name = "foo_policy"
+ load_balancer = "${aws_elb.lb.name}"
+ lb_port = 80
+ cookie_name = "MyAppCookie"
}
```
@@ -37,7 +37,7 @@ resource "aws_app_cookie_stickiness_policy" "foo" { The following arguments are supported:
* `name` - (Required) The name of the stickiness policy.
-* `load_balancer` - (Required) The load balancer to which the policy
+* `load_balancer` - (Required) The name of the load balancer to which the policy
should be attached.
* `lb_port` - (Required) The load balancer port to which the policy should be applied. This must be an active listener on the load
@@ -50,6 +50,6 @@ The following attributes are exported:
* `id` - The ID of the policy.
* `name` - The name of the stickiness policy.
-* `load_balancer` - The load balancer to which the policy is attached.
+* `load_balancer` - The name of the load balancer to which the policy is attached.
* `lb_port` - The load balancer port to which the policy is applied.
* `cookie_name` - The application cookie whose lifetime the ELB's cookie should follow.
diff --git a/website/source/docs/providers/aws/r/autoscaling_group.html.markdown b/website/source/docs/providers/aws/r/autoscaling_group.html.markdown
index 022b1cf71..1c7641a2d 100644
--- a/website/source/docs/providers/aws/r/autoscaling_group.html.markdown
+++ b/website/source/docs/providers/aws/r/autoscaling_group.html.markdown
@@ -57,12 +57,20 @@ The following arguments are supported: for this number of healthy instances all attached load balancers. (See also [Waiting for Capacity](#waiting-for-capacity) below.)
* `force_delete` - (Optional) Allows deleting the autoscaling group without waiting
- for all instances in the pool to terminate.
+ for all instances in the pool to terminate. You can force an autoscaling group to delete
+ even if it's in the process of scaling a resource. Normally, Terraform
+ drains all the instances before deleting the group. This bypasses that
+ behavior and potentially leaves resources dangling.
* `load_balancers` (Optional) A list of load balancer names to add to the autoscaling group names.
* `vpc_zone_identifier` (Optional) A list of subnet IDs to launch resources in.
* `termination_policies` (Optional) A list of policies to decide how the instances in the auto scale group should be terminated.
* `tag` (Optional) A list of tag blocks. Tags documented below.
+* `wait_for_capacity_timeout` (Default: "10m") A maximum
+ [duration](https://golang.org/pkg/time/#ParseDuration) that Terraform should
+ wait for ASG instances to be healthy before timing out. (See also [Waiting
+ for Capacity](#waiting-for-capacity) below.) Setting this to "0" causes
+ Terraform to skip all Capacity Waiting behavior.
Tags support the following:
@@ -110,9 +118,12 @@ Terraform considers an instance "healthy" when the ASG reports `HealthStatus: Docs](https://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingGroupLifecycle.html) for more information on an ASG's lifecycle.
-Terraform will wait for healthy instances for up to 10 minutes. If ASG creation
-is taking more than a few minutes, it's worth investigating for scaling activity
-errors, which can be caused by problems with the selected Launch Configuration.
+Terraform will wait for healthy instances for up to
+`wait_for_capacity_timeout`. If ASG creation is taking more than a few minutes,
+it's worth investigating for scaling activity errors, which can be caused by
+problems with the selected Launch Configuration.
+
+Setting `wait_for_capacity_timeout` to `"0"` disables ASG Capacity waiting.
#### Waiting for ELB Capacity
@@ -121,8 +132,9 @@ Balancers. If `min_elb_capacity` is set, Terraform will wait for that number of Instances to be `"InService"` in all attached `load_balancers`. This can be used to ensure that service is being provided before Terraform moves on.
-As with ASG Capacity, Terraform will wait for up to 10 minutes for
-`"InService"` instances. If ASG creation takes more than a few minutes, this
-could indicate one of a number of configuration problems. See the [AWS Docs on
-Load Balancer Troubleshooting](https://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-troubleshooting.html)
+As with ASG Capacity, Terraform will wait for up to `wait_for_capacity_timeout`
+for `"InService"` instances. If ASG creation takes more than a few minutes,
+this could indicate one of a number of configuration problems. See the [AWS
+Docs on Load Balancer
+Troubleshooting](https://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-troubleshooting.html)
for more information.
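A sketch of how these capacity settings fit together (the resource names and values are illustrative, not part of the change above):

```
resource "aws_autoscaling_group" "web" {
  name                 = "web-asg"
  availability_zones   = ["us-east-1a"]
  launch_configuration = "${aws_launch_configuration.web.name}"
  min_size             = 2
  max_size             = 4

  # Wait for two instances to report "InService" in the attached ELB...
  load_balancers   = ["${aws_elb.web.name}"]
  min_elb_capacity = 2

  # ...but give up after 15 minutes instead of the default "10m".
  # Setting this to "0" skips all capacity waiting.
  wait_for_capacity_timeout = "15m"
}
```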
diff --git a/website/source/docs/providers/aws/r/autoscaling_lifecycle_hooks.html.markdown b/website/source/docs/providers/aws/r/autoscaling_lifecycle_hooks.html.markdown
new file mode 100644
index 000000000..a753c864b
--- /dev/null
+++ b/website/source/docs/providers/aws/r/autoscaling_lifecycle_hooks.html.markdown
@@ -0,0 +1,55 @@
+---
+layout: "aws"
+page_title: "AWS: aws_autoscaling_lifecycle_hook"
+sidebar_current: "docs-aws-resource-autoscaling-lifecycle-hook"
+description: |-
+  Provides an AutoScaling Lifecycle Hooks resource.
+---
+
+# aws\_autoscaling\_lifecycle\_hook
+
+Provides an AutoScaling Lifecycle Hook resource.
+
+## Example Usage
+
+```
+resource "aws_autoscaling_group" "foobar" {
+  availability_zones = ["us-west-2a"]
+  name = "terraform-test-foobar5"
+  health_check_type = "EC2"
+  termination_policies = ["OldestInstance"]
+  tag {
+    key = "Foo"
+    value = "foo-bar"
+    propagate_at_launch = true
+  }
+}
+
+resource "aws_autoscaling_lifecycle_hook" "foobar" {
+  name = "foobar"
+  autoscaling_group_name = "${aws_autoscaling_group.foobar.name}"
+  default_result = "CONTINUE"
+  heartbeat_timeout = 2000
+  lifecycle_transition = "autoscaling:EC2_INSTANCE_LAUNCHING"
+  notification_metadata = <<EOF
+…
+EOF
+}
+```
diff --git a/website/source/docs/providers/aws/r/eip.html.markdown b/website/source/docs/providers/aws/r/eip.html.markdown
+~> **NOTE:** You can specify either the `instance` ID or the `network_interface` ID,
+but not both. Including both will **not** return an error from the AWS API, but will
+have undefined behavior. See the relevant [AssociateAddress API Call][1] for
+more information.
+
## Attributes Reference
The following attributes are exported:
@@ -36,3 +41,5 @@ The following attributes are exported:
* `instance` - Contains the ID of the attached instance.
* `network_interface` - Contains the ID of the attached network interface.
+
+[1]: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_AssociateAddress.html
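To make the `instance`-versus-`network_interface` note above concrete, a minimal sketch (hypothetical names, not part of the diff):

```
resource "aws_instance" "web" {
  ami           = "ami-abcd1234"
  instance_type = "t2.micro"
}

resource "aws_eip" "ip" {
  # Set instance OR network_interface, never both: the API accepts
  # both without error, but the resulting behavior is undefined.
  instance = "${aws_instance.web.id}"
}
```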
diff --git a/website/source/docs/providers/aws/r/elasticache_cluster.html.markdown b/website/source/docs/providers/aws/r/elasticache_cluster.html.markdown
index 2fe6a8dcf..f2d0e5363 100644
--- a/website/source/docs/providers/aws/r/elasticache_cluster.html.markdown
+++ b/website/source/docs/providers/aws/r/elasticache_cluster.html.markdown
@@ -27,8 +27,8 @@ resource "aws_elasticache_cluster" "bar" { The following arguments are supported:
-* `cluster_id` – (Required) Group identifier. This parameter is stored as a
-lowercase string
+* `cluster_id` – (Required) Group identifier. ElastiCache converts
+ this name to lowercase.
* `engine` – (Required) Name of the cache engine to be used for this cache cluster. Valid values for this parameter are `memcached` or `redis`
diff --git a/website/source/docs/providers/aws/r/elasticsearch_domain.html.markdown b/website/source/docs/providers/aws/r/elasticsearch_domain.html.markdown
new file mode 100644
index 000000000..373edd59b
--- /dev/null
+++ b/website/source/docs/providers/aws/r/elasticsearch_domain.html.markdown
@@ -0,0 +1,83 @@
+---
+layout: "aws"
+page_title: "AWS: aws_elasticsearch_domain"
+sidebar_current: "docs-aws-elasticsearch-domain"
+description: |-
+  Provides an ElasticSearch Domain.
+---
+
+# aws\_elasticsearch\_domain
+
+
+## Example Usage
+
+```
+resource "aws_elasticsearch_domain" "es" {
+  domain_name = "tf-test"
+  advanced_options {
+    "rest.action.multi.allow_explicit_index" = true
+  }
+
+  access_policies = <<EOF
+…
+EOF
+}
+```
diff --git a/website/source/docs/providers/aws/r/glacier_vault.html.markdown b/website/source/docs/providers/aws/r/glacier_vault.html.markdown
+~> **NOTE:** When trying to remove a Glacier Vault, the Vault must be empty.
+
+## Example Usage
+
+```
+resource "aws_sns_topic" "aws_sns_topic" {
+  name = "glacier-sns-topic"
+}
+
+resource "aws_glacier_vault" "my_archive" {
+  name = "MyArchive"
+
+  notification {
+    sns_topic = "${aws_sns_topic.aws_sns_topic.arn}"
+    events = ["ArchiveRetrievalCompleted","InventoryRetrievalCompleted"]
+  }
+
+  access_policy = <<EOF
+…
+EOF
+}
+```
diff --git a/website/source/docs/providers/aws/r/instance.html.markdown b/website/source/docs/providers/aws/r/instance.html.markdown
<a id="block-devices"></a> ## Block devices
@@ -134,3 +169,4 @@ The following attributes are exported:
[1]: /docs/providers/aws/r/autoscaling_group.html
[2]: /docs/configuration/resources.html#lifecycle
+[3]: /docs/providers/aws/r/spot_instance_request.html
diff --git a/website/source/docs/providers/aws/r/lb_cookie_stickiness_policy.html.markdown b/website/source/docs/providers/aws/r/lb_cookie_stickiness_policy.html.markdown
index bb4ad524e..59e581c12 100644
--- a/website/source/docs/providers/aws/r/lb_cookie_stickiness_policy.html.markdown
+++ b/website/source/docs/providers/aws/r/lb_cookie_stickiness_policy.html.markdown
@@ -25,7 +25,7 @@ resource "aws_elb" "lb" { }
resource "aws_lb_cookie_stickiness_policy" "foo" {
- name = "foo_policy"
+ name = "foo-policy"
load_balancer = "${aws_elb.lb.id}"
lb_port = 80
cookie_expiration_period = 600
diff --git a/website/source/docs/providers/aws/r/opsworks_custom_layer.html.markdown b/website/source/docs/providers/aws/r/opsworks_custom_layer.html.markdown
new file mode 100644
index 000000000..8bab63692
--- /dev/null
+++ b/website/source/docs/providers/aws/r/opsworks_custom_layer.html.markdown
@@ -0,0 +1,65 @@
+---
+layout: "aws"
+page_title: "AWS: aws_opsworks_custom_layer"
+sidebar_current: "docs-aws-resource-opsworks-custom-layer"
+description: |-
+  Provides an OpsWorks custom layer resource.
+---
+
+# aws\_opsworks\_custom\_layer
+
+Provides an OpsWorks custom layer resource.
+
+## Example Usage
+
+```
+resource "aws_opsworks_custom_layer" "custlayer" {
+  name = "My Awesome Custom Layer"
+  short_name = "awesome"
+  stack_id = "${aws_opsworks_stack.main.id}"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) A human-readable name for the layer.
+* `short_name` - (Required) A short, machine-readable name for the layer, which will be used to identify it in the Chef node JSON.
+* `stack_id` - (Required) The id of the stack the layer will belong to.
+* `auto_assign_elastic_ips` - (Optional) Whether to automatically assign an elastic IP address to the layer's instances.
+* `auto_assign_public_ips` - (Optional) For stacks belonging to a VPC, whether to automatically assign a public IP address to each of the layer's instances.
+* `custom_instance_profile_arn` - (Optional) The ARN of an IAM profile that will be used for the layer's instances.
+* `custom_security_group_ids` - (Optional) Ids for a set of security groups to apply to the layer's instances.
+* `auto_healing` - (Optional) Whether to enable auto-healing for the layer.
+* `install_updates_on_boot` - (Optional) Whether to install OS and package updates on each instance when it boots.
+* `instance_shutdown_timeout` - (Optional) The time, in seconds, that OpsWorks will wait for Chef to complete after triggering the Shutdown event.
+* `drain_elb_on_shutdown` - (Optional) Whether to enable Elastic Load Balancing connection draining.
+* `system_packages` - (Optional) Names of a set of system packages to install on the layer's instances.
+* `use_ebs_optimized_instances` - (Optional) Whether to use EBS-optimized instances.
+* `ebs_volume` - (Optional) `ebs_volume` blocks, as described below, will each create an EBS volume and connect it to the layer's instances. A sample block is sketched after this section.
+
+The following extra optional arguments, all lists of Chef recipe names, allow
+custom Chef recipes to be applied to layer instances at the five different
+lifecycle events, if custom cookbooks are enabled on the layer's stack:
+
+* `custom_configure_recipes`
+* `custom_deploy_recipes`
+* `custom_setup_recipes`
+* `custom_shutdown_recipes`
+* `custom_undeploy_recipes`
+
+An `ebs_volume` block supports the following arguments:
+
+* `mount_point` - (Required) The path to mount the EBS volume on the layer's instances.
+* `size` - (Required) The size of the volume in gigabytes.
+* `number_of_disks` - (Required) The number of disks to use for the EBS volume.
+* `raid_level` - (Required) The RAID level to use for the volume.
+* `type` - (Optional) The type of volume to create. This may be `standard` (the default), `io1` or `gp2`.
+* `iops` - (Optional) For PIOPS volumes, the IOPS per disk.
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `id` - The id of the layer.
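The `ebs_volume` block is described here and in the other layer resources below without a sample, so here is a sketch of one attached to the custom layer above (the volume values are illustrative assumptions):

```
resource "aws_opsworks_custom_layer" "custlayer" {
  name       = "My Awesome Custom Layer"
  short_name = "awesome"
  stack_id   = "${aws_opsworks_stack.main.id}"

  # Two 100 GB gp2 disks striped as RAID 0, mounted at /data
  # on every instance that joins this layer.
  ebs_volume {
    mount_point     = "/data"
    size            = 100
    number_of_disks = 2
    raid_level      = 0
    type            = "gp2"
  }
}
```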
diff --git a/website/source/docs/providers/aws/r/opsworks_ganglia_layer.html.markdown b/website/source/docs/providers/aws/r/opsworks_ganglia_layer.html.markdown
new file mode 100644
index 000000000..2137e0bf2
--- /dev/null
+++ b/website/source/docs/providers/aws/r/opsworks_ganglia_layer.html.markdown
@@ -0,0 +1,66 @@
+---
+layout: "aws"
+page_title: "AWS: aws_opsworks_ganglia_layer"
+sidebar_current: "docs-aws-resource-opsworks-ganglia-layer"
+description: |-
+  Provides an OpsWorks Ganglia layer resource.
+---
+
+# aws\_opsworks\_ganglia\_layer
+
+Provides an OpsWorks Ganglia layer resource.
+
+## Example Usage
+
+```
+resource "aws_opsworks_ganglia_layer" "monitor" {
+  stack_id = "${aws_opsworks_stack.main.id}"
+  password = "foobarbaz"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `stack_id` - (Required) The id of the stack the layer will belong to.
+* `password` - (Required) The password to use for Ganglia.
+* `name` - (Optional) A human-readable name for the layer.
+* `auto_assign_elastic_ips` - (Optional) Whether to automatically assign an elastic IP address to the layer's instances.
+* `auto_assign_public_ips` - (Optional) For stacks belonging to a VPC, whether to automatically assign a public IP address to each of the layer's instances.
+* `custom_instance_profile_arn` - (Optional) The ARN of an IAM profile that will be used for the layer's instances.
+* `custom_security_group_ids` - (Optional) Ids for a set of security groups to apply to the layer's instances.
+* `auto_healing` - (Optional) Whether to enable auto-healing for the layer.
+* `install_updates_on_boot` - (Optional) Whether to install OS and package updates on each instance when it boots.
+* `instance_shutdown_timeout` - (Optional) The time, in seconds, that OpsWorks will wait for Chef to complete after triggering the Shutdown event.
+* `drain_elb_on_shutdown` - (Optional) Whether to enable Elastic Load Balancing connection draining.
+* `system_packages` - (Optional) Names of a set of system packages to install on the layer's instances.
+* `url` - (Optional) The URL path to use for Ganglia. Defaults to "/ganglia".
+* `username` - (Optional) The username to use for Ganglia. Defaults to "opsworks".
+* `use_ebs_optimized_instances` - (Optional) Whether to use EBS-optimized instances.
+* `ebs_volume` - (Optional) `ebs_volume` blocks, as described below, will each create an EBS volume and connect it to the layer's instances.
+
+The following extra optional arguments, all lists of Chef recipe names, allow
+custom Chef recipes to be applied to layer instances at the five different
+lifecycle events, if custom cookbooks are enabled on the layer's stack:
+
+* `custom_configure_recipes`
+* `custom_deploy_recipes`
+* `custom_setup_recipes`
+* `custom_shutdown_recipes`
+* `custom_undeploy_recipes`
+
+An `ebs_volume` block supports the following arguments:
+
+* `mount_point` - (Required) The path to mount the EBS volume on the layer's instances.
+* `size` - (Required) The size of the volume in gigabytes.
+* `number_of_disks` - (Required) The number of disks to use for the EBS volume.
+* `raid_level` - (Required) The RAID level to use for the volume.
+* `type` - (Optional) The type of volume to create. This may be `standard` (the default), `io1` or `gp2`.
+* `iops` - (Optional) For PIOPS volumes, the IOPS per disk.
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `id` - The id of the layer.
diff --git a/website/source/docs/providers/aws/r/opsworks_haproxy_layer.html.markdown b/website/source/docs/providers/aws/r/opsworks_haproxy_layer.html.markdown
new file mode 100644
index 000000000..a921a8135
--- /dev/null
+++ b/website/source/docs/providers/aws/r/opsworks_haproxy_layer.html.markdown
@@ -0,0 +1,69 @@
+---
+layout: "aws"
+page_title: "AWS: aws_opsworks_haproxy_layer"
+sidebar_current: "docs-aws-resource-opsworks-haproxy-layer"
+description: |-
+  Provides an OpsWorks HAProxy layer resource.
+---
+
+# aws\_opsworks\_haproxy\_layer
+
+Provides an OpsWorks HAProxy layer resource.
+
+## Example Usage
+
+```
+resource "aws_opsworks_haproxy_layer" "lb" {
+  stack_id = "${aws_opsworks_stack.main.id}"
+  stats_password = "foobarbaz"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `stack_id` - (Required) The id of the stack the layer will belong to.
+* `stats_password` - (Required) The password to use for HAProxy stats.
+* `name` - (Optional) A human-readable name for the layer.
+* `auto_assign_elastic_ips` - (Optional) Whether to automatically assign an elastic IP address to the layer's instances.
+* `auto_assign_public_ips` - (Optional) For stacks belonging to a VPC, whether to automatically assign a public IP address to each of the layer's instances.
+* `custom_instance_profile_arn` - (Optional) The ARN of an IAM profile that will be used for the layer's instances.
+* `custom_security_group_ids` - (Optional) Ids for a set of security groups to apply to the layer's instances.
+* `auto_healing` - (Optional) Whether to enable auto-healing for the layer.
+* `healthcheck_method` - (Optional) HTTP method to use for instance healthchecks. Defaults to "OPTIONS". +* `healthcheck_url` - (Optional) URL path to use for instance healthchecks. Defaults to "/". +* `install_updates_on_boot` - (Optional) Whether to install OS and package updates on each instance when it boots. +* `instance_shutdown_timeout` - (Optional) The time, in seconds, that OpsWorks will wait for Chef to complete after triggering the Shutdown event. +* `drain_elb_on_shutdown` - (Optional) Whether to enable Elastic Load Balancing connection draining. +* `stats_enabled` - (Optional) Whether to enable HAProxy stats. +* `stats_url` - (Optional) The HAProxy stats URL. Defaults to "/haproxy?stats". +* `stats_user` - (Optional) The username for HAProxy stats. Defaults to "opsworks". +* `system_packages` - (Optional) Names of a set of system packages to install on the layer's instances. +* `use_ebs_optimized_instances` - (Optional) Whether to use EBS-optimized instances. +* `ebs_volume` - (Optional) `ebs_volume` blocks, as described below, will each create an EBS volume and connect it to the layer's instances. + +The following extra optional arguments, all lists of Chef recipe names, allow +custom Chef recipes to be applied to layer instances at the five different +lifecycle events, if custom cookbooks are enabled on the layer's stack: + +* `custom_configure_recipes` +* `custom_deploy_recipes` +* `custom_setup_recipes` +* `custom_shutdown_recipes` +* `custom_undeploy_recipes` + +An `ebs_volume` block supports the following arguments: + +* `mount_point` - (Required) The path to mount the EBS volume on the layer's instances. +* `size` - (Required) The size of the volume in gigabytes. +* `number_of_disks` - (Required) The number of disks to use for the EBS volume. +* `raid_level` - (Required) The RAID level to use for the volume. +* `type` - (Optional) The type of volume to create. This may be `standard` (the default), `io1` or `gp2`. +* `iops` - (Optional) For PIOPS volumes, the IOPS per disk. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The id of the layer. diff --git a/website/source/docs/providers/aws/r/opsworks_java_app_layer.html.markdown b/website/source/docs/providers/aws/r/opsworks_java_app_layer.html.markdown new file mode 100644 index 000000000..c9a4823fe --- /dev/null +++ b/website/source/docs/providers/aws/r/opsworks_java_app_layer.html.markdown @@ -0,0 +1,67 @@ +--- +layout: "aws" +page_title: "AWS: aws_opsworks_java_app_layer" +sidebar_current: "docs-aws-resource-opsworks-java-app-layer" +description: |- + Provides an OpsWorks Java application layer resource. +--- + +# aws\_opsworks\_java\_app\_layer + +Provides an OpsWorks Java application layer resource. + +## Example Usage + +``` +resource "aws_opsworks_java_app_layer" "app" { + stack_id = "${aws_opsworks_stack.main.id}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `stack_id` - (Required) The id of the stack the layer will belong to. +* `name` - (Optional) A human-readable name for the layer. +* `app_server` - (Optional) Keyword for the application container to use. Defaults to "tomcat". +* `app_server_version` - (Optional) Version of the selected application container to use. Defaults to "7". +* `auto_assign_elastic_ips` - (Optional) Whether to automatically assign an elastic IP address to the layer's instances. 
+* `auto_assign_public_ips` - (Optional) For stacks belonging to a VPC, whether to automatically assign a public IP address to each of the layer's instances. +* `custom_instance_profile_arn` - (Optional) The ARN of an IAM profile that will be used for the layer's instances. +* `custom_security_group_ids` - (Optional) Ids for a set of security groups to apply to the layer's instances. +* `auto_healing` - (Optional) Whether to enable auto-healing for the layer. +* `install_updates_on_boot` - (Optional) Whether to install OS and package updates on each instance when it boots. +* `instance_shutdown_timeout` - (Optional) The time, in seconds, that OpsWorks will wait for Chef to complete after triggering the Shutdown event. +* `jvm_type` - (Optional) Keyword for the type of JVM to use. Defaults to `openjdk`. +* `jvm_options` - (Optional) Options to set for the JVM. +* `jvm_version` - (Optional) Version of JVM to use. Defaults to "7". +* `drain_elb_on_shutdown` - (Optional) Whether to enable Elastic Load Balancing connection draining. +* `system_packages` - (Optional) Names of a set of system packages to install on the layer's instances. +* `use_ebs_optimized_instances` - (Optional) Whether to use EBS-optimized instances. +* `ebs_volume` - (Optional) `ebs_volume` blocks, as described below, will each create an EBS volume and connect it to the layer's instances. + +The following extra optional arguments, all lists of Chef recipe names, allow +custom Chef recipes to be applied to layer instances at the five different +lifecycle events, if custom cookbooks are enabled on the layer's stack: + +* `custom_configure_recipes` +* `custom_deploy_recipes` +* `custom_setup_recipes` +* `custom_shutdown_recipes` +* `custom_undeploy_recipes` + +An `ebs_volume` block supports the following arguments: + +* `mount_point` - (Required) The path to mount the EBS volume on the layer's instances. +* `size` - (Required) The size of the volume in gigabytes. +* `number_of_disks` - (Required) The number of disks to use for the EBS volume. +* `raid_level` - (Required) The RAID level to use for the volume. +* `type` - (Optional) The type of volume to create. This may be `standard` (the default), `io1` or `gp2`. +* `iops` - (Optional) For PIOPS volumes, the IOPS per disk. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The id of the layer. diff --git a/website/source/docs/providers/aws/r/opsworks_memcached_layer.html.markdown b/website/source/docs/providers/aws/r/opsworks_memcached_layer.html.markdown new file mode 100644 index 000000000..4a725bd7c --- /dev/null +++ b/website/source/docs/providers/aws/r/opsworks_memcached_layer.html.markdown @@ -0,0 +1,63 @@ +--- +layout: "aws" +page_title: "AWS: aws_opsworks_memcached_layer" +sidebar_current: "docs-aws-resource-opsworks-memcached-layer" +description: |- + Provides an OpsWorks memcached layer resource. +--- + +# aws\_opsworks\_memcached\_layer + +Provides an OpsWorks memcached layer resource. + +## Example Usage + +``` +resource "aws_opsworks_memcached_layer" "cache" { + stack_id = "${aws_opsworks_stack.main.id}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `stack_id` - (Required) The id of the stack the layer will belong to. +* `name` - (Optional) A human-readable name for the layer. +* `allocated_memory` - (Optional) Amount of memory to allocate for the cache on each instance, in megabytes. Defaults to 512MB. 
+* `auto_assign_elastic_ips` - (Optional) Whether to automatically assign an elastic IP address to the layer's instances. +* `auto_assign_public_ips` - (Optional) For stacks belonging to a VPC, whether to automatically assign a public IP address to each of the layer's instances. +* `custom_instance_profile_arn` - (Optional) The ARN of an IAM profile that will be used for the layer's instances. +* `custom_security_group_ids` - (Optional) Ids for a set of security groups to apply to the layer's instances. +* `auto_healing` - (Optional) Whether to enable auto-healing for the layer. +* `install_updates_on_boot` - (Optional) Whether to install OS and package updates on each instance when it boots. +* `instance_shutdown_timeout` - (Optional) The time, in seconds, that OpsWorks will wait for Chef to complete after triggering the Shutdown event. +* `drain_elb_on_shutdown` - (Optional) Whether to enable Elastic Load Balancing connection draining. +* `system_packages` - (Optional) Names of a set of system packages to install on the layer's instances. +* `use_ebs_optimized_instances` - (Optional) Whether to use EBS-optimized instances. +* `ebs_volume` - (Optional) `ebs_volume` blocks, as described below, will each create an EBS volume and connect it to the layer's instances. + +The following extra optional arguments, all lists of Chef recipe names, allow +custom Chef recipes to be applied to layer instances at the five different +lifecycle events, if custom cookbooks are enabled on the layer's stack: + +* `custom_configure_recipes` +* `custom_deploy_recipes` +* `custom_setup_recipes` +* `custom_shutdown_recipes` +* `custom_undeploy_recipes` + +An `ebs_volume` block supports the following arguments: + +* `mount_point` - (Required) The path to mount the EBS volume on the layer's instances. +* `size` - (Required) The size of the volume in gigabytes. +* `number_of_disks` - (Required) The number of disks to use for the EBS volume. +* `raid_level` - (Required) The RAID level to use for the volume. +* `type` - (Optional) The type of volume to create. This may be `standard` (the default), `io1` or `gp2`. +* `iops` - (Optional) For PIOPS volumes, the IOPS per disk. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The id of the layer. diff --git a/website/source/docs/providers/aws/r/opsworks_mysql_layer.html.markdown b/website/source/docs/providers/aws/r/opsworks_mysql_layer.html.markdown new file mode 100644 index 000000000..fcbcef97d --- /dev/null +++ b/website/source/docs/providers/aws/r/opsworks_mysql_layer.html.markdown @@ -0,0 +1,64 @@ +--- +layout: "aws" +page_title: "AWS: aws_opsworks_mysql_layer" +sidebar_current: "docs-aws-resource-opsworks-mysql-layer" +description: |- + Provides an OpsWorks MySQL layer resource. +--- + +# aws\_opsworks\_mysql\_layer + +Provides an OpsWorks MySQL layer resource. + +## Example Usage + +``` +resource "aws_opsworks_mysql_layer" "db" { + stack_id = "${aws_opsworks_stack.main.id}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `stack_id` - (Required) The id of the stack the layer will belong to. +* `name` - (Optional) A human-readable name for the layer. +* `auto_assign_elastic_ips` - (Optional) Whether to automatically assign an elastic IP address to the layer's instances. +* `auto_assign_public_ips` - (Optional) For stacks belonging to a VPC, whether to automatically assign a public IP address to each of the layer's instances. 
+* `custom_instance_profile_arn` - (Optional) The ARN of an IAM profile that will be used for the layer's instances. +* `custom_security_group_ids` - (Optional) Ids for a set of security groups to apply to the layer's instances. +* `auto_healing` - (Optional) Whether to enable auto-healing for the layer. +* `install_updates_on_boot` - (Optional) Whether to install OS and package updates on each instance when it boots. +* `instance_shutdown_timeout` - (Optional) The time, in seconds, that OpsWorks will wait for Chef to complete after triggering the Shutdown event. +* `drain_elb_on_shutdown` - (Optional) Whether to enable Elastic Load Balancing connection draining. +* `root_password` - (Optional) Root password to use for MySQL. +* `root_password_on_all_instances` - (Optional) Whether to set the root user password to all instances in the stack so they can access the instances in this layer. +* `system_packages` - (Optional) Names of a set of system packages to install on the layer's instances. +* `use_ebs_optimized_instances` - (Optional) Whether to use EBS-optimized instances. +* `ebs_volume` - (Optional) `ebs_volume` blocks, as described below, will each create an EBS volume and connect it to the layer's instances. + +The following extra optional arguments, all lists of Chef recipe names, allow +custom Chef recipes to be applied to layer instances at the five different +lifecycle events, if custom cookbooks are enabled on the layer's stack: + +* `custom_configure_recipes` +* `custom_deploy_recipes` +* `custom_setup_recipes` +* `custom_shutdown_recipes` +* `custom_undeploy_recipes` + +An `ebs_volume` block supports the following arguments: + +* `mount_point` - (Required) The path to mount the EBS volume on the layer's instances. +* `size` - (Required) The size of the volume in gigabytes. +* `number_of_disks` - (Required) The number of disks to use for the EBS volume. +* `raid_level` - (Required) The RAID level to use for the volume. +* `type` - (Optional) The type of volume to create. This may be `standard` (the default), `io1` or `gp2`. +* `iops` - (Optional) For PIOPS volumes, the IOPS per disk. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The id of the layer. diff --git a/website/source/docs/providers/aws/r/opsworks_nodejs_app_layer.html.markdown b/website/source/docs/providers/aws/r/opsworks_nodejs_app_layer.html.markdown new file mode 100644 index 000000000..e5a0f5b8a --- /dev/null +++ b/website/source/docs/providers/aws/r/opsworks_nodejs_app_layer.html.markdown @@ -0,0 +1,63 @@ +--- +layout: "aws" +page_title: "AWS: aws_opsworks_nodejs_app_layer" +sidebar_current: "docs-aws-resource-opsworks-nodejs-app-layer" +description: |- + Provides an OpsWorks NodeJS application layer resource. +--- + +# aws\_opsworks\_nodejs\_app\_layer + +Provides an OpsWorks NodeJS application layer resource. + +## Example Usage + +``` +resource "aws_opsworks_nodejs_app_layer" "app" { + stack_id = "${aws_opsworks_stack.main.id}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `stack_id` - (Required) The id of the stack the layer will belong to. +* `name` - (Optional) A human-readable name for the layer. +* `auto_assign_elastic_ips` - (Optional) Whether to automatically assign an elastic IP address to the layer's instances. +* `auto_assign_public_ips` - (Optional) For stacks belonging to a VPC, whether to automatically assign a public IP address to each of the layer's instances. 
+* `custom_instance_profile_arn` - (Optional) The ARN of an IAM profile that will be used for the layer's instances. +* `custom_security_group_ids` - (Optional) Ids for a set of security groups to apply to the layer's instances. +* `auto_healing` - (Optional) Whether to enable auto-healing for the layer. +* `install_updates_on_boot` - (Optional) Whether to install OS and package updates on each instance when it boots. +* `instance_shutdown_timeout` - (Optional) The time, in seconds, that OpsWorks will wait for Chef to complete after triggering the Shutdown event. +* `drain_elb_on_shutdown` - (Optional) Whether to enable Elastic Load Balancing connection draining. +* `nodejs_version` - (Optional) The version of NodeJS to use. Defaults to "0.10.38". +* `system_packages` - (Optional) Names of a set of system packages to install on the layer's instances. +* `use_ebs_optimized_instances` - (Optional) Whether to use EBS-optimized instances. +* `ebs_volume` - (Optional) `ebs_volume` blocks, as described below, will each create an EBS volume and connect it to the layer's instances. + +The following extra optional arguments, all lists of Chef recipe names, allow +custom Chef recipes to be applied to layer instances at the five different +lifecycle events, if custom cookbooks are enabled on the layer's stack: + +* `custom_configure_recipes` +* `custom_deploy_recipes` +* `custom_setup_recipes` +* `custom_shutdown_recipes` +* `custom_undeploy_recipes` + +An `ebs_volume` block supports the following arguments: + +* `mount_point` - (Required) The path to mount the EBS volume on the layer's instances. +* `size` - (Required) The size of the volume in gigabytes. +* `number_of_disks` - (Required) The number of disks to use for the EBS volume. +* `raid_level` - (Required) The RAID level to use for the volume. +* `type` - (Optional) The type of volume to create. This may be `standard` (the default), `io1` or `gp2`. +* `iops` - (Optional) For PIOPS volumes, the IOPS per disk. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The id of the layer. diff --git a/website/source/docs/providers/aws/r/opsworks_php_app_layer.html.markdown b/website/source/docs/providers/aws/r/opsworks_php_app_layer.html.markdown new file mode 100644 index 000000000..ec91e4ed3 --- /dev/null +++ b/website/source/docs/providers/aws/r/opsworks_php_app_layer.html.markdown @@ -0,0 +1,62 @@ +--- +layout: "aws" +page_title: "AWS: aws_opsworks_php_app_layer" +sidebar_current: "docs-aws-resource-opsworks-php-app-layer" +description: |- + Provides an OpsWorks PHP application layer resource. +--- + +# aws\_opsworks\_php\_app\_layer + +Provides an OpsWorks PHP application layer resource. + +## Example Usage + +``` +resource "aws_opsworks_php_app_layer" "app" { + stack_id = "${aws_opsworks_stack.main.id}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `stack_id` - (Required) The id of the stack the layer will belong to. +* `name` - (Optional) A human-readable name for the layer. +* `auto_assign_elastic_ips` - (Optional) Whether to automatically assign an elastic IP address to the layer's instances. +* `auto_assign_public_ips` - (Optional) For stacks belonging to a VPC, whether to automatically assign a public IP address to each of the layer's instances. +* `custom_instance_profile_arn` - (Optional) The ARN of an IAM profile that will be used for the layer's instances. +* `custom_security_group_ids` - (Optional) Ids for a set of security groups to apply to the layer's instances. 
+* `auto_healing` - (Optional) Whether to enable auto-healing for the layer. +* `install_updates_on_boot` - (Optional) Whether to install OS and package updates on each instance when it boots. +* `instance_shutdown_timeout` - (Optional) The time, in seconds, that OpsWorks will wait for Chef to complete after triggering the Shutdown event. +* `drain_elb_on_shutdown` - (Optional) Whether to enable Elastic Load Balancing connection draining. +* `system_packages` - (Optional) Names of a set of system packages to install on the layer's instances. +* `use_ebs_optimized_instances` - (Optional) Whether to use EBS-optimized instances. +* `ebs_volume` - (Optional) `ebs_volume` blocks, as described below, will each create an EBS volume and connect it to the layer's instances. + +The following extra optional arguments, all lists of Chef recipe names, allow +custom Chef recipes to be applied to layer instances at the five different +lifecycle events, if custom cookbooks are enabled on the layer's stack: + +* `custom_configure_recipes` +* `custom_deploy_recipes` +* `custom_setup_recipes` +* `custom_shutdown_recipes` +* `custom_undeploy_recipes` + +An `ebs_volume` block supports the following arguments: + +* `mount_point` - (Required) The path to mount the EBS volume on the layer's instances. +* `size` - (Required) The size of the volume in gigabytes. +* `number_of_disks` - (Required) The number of disks to use for the EBS volume. +* `raid_level` - (Required) The RAID level to use for the volume. +* `type` - (Optional) The type of volume to create. This may be `standard` (the default), `io1` or `gp2`. +* `iops` - (Optional) For PIOPS volumes, the IOPS per disk. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The id of the layer. diff --git a/website/source/docs/providers/aws/r/opsworks_rails_app_layer.html.markdown b/website/source/docs/providers/aws/r/opsworks_rails_app_layer.html.markdown new file mode 100644 index 000000000..ee4f85ed4 --- /dev/null +++ b/website/source/docs/providers/aws/r/opsworks_rails_app_layer.html.markdown @@ -0,0 +1,68 @@ +--- +layout: "aws" +page_title: "AWS: aws_opsworks_rails_app_layer" +sidebar_current: "docs-aws-resource-opsworks-rails-app-layer" +description: |- + Provides an OpsWorks Ruby on Rails application layer resource. +--- + +# aws\_opsworks\_rails\_app\_layer + +Provides an OpsWorks Ruby on Rails application layer resource. + +## Example Usage + +``` +resource "aws_opsworks_rails_app_layer" "app" { + stack_id = "${aws_opsworks_stack.main.id}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `stack_id` - (Required) The id of the stack the layer will belong to. +* `name` - (Optional) A human-readable name for the layer. +* `app_server` - (Optional) Keyword for the app server to use. Defaults to "apache_passenger". +* `auto_assign_elastic_ips` - (Optional) Whether to automatically assign an elastic IP address to the layer's instances. +* `auto_assign_public_ips` - (Optional) For stacks belonging to a VPC, whether to automatically assign a public IP address to each of the layer's instances. +* `bundler_version` - (Optional) When OpsWorks is managing Bundler, which version to use. Defaults to "1.5.3". +* `custom_instance_profile_arn` - (Optional) The ARN of an IAM profile that will be used for the layer's instances. +* `custom_security_group_ids` - (Optional) Ids for a set of security groups to apply to the layer's instances. +* `auto_healing` - (Optional) Whether to enable auto-healing for the layer. 
+* `install_updates_on_boot` - (Optional) Whether to install OS and package updates on each instance when it boots. +* `instance_shutdown_timeout` - (Optional) The time, in seconds, that OpsWorks will wait for Chef to complete after triggering the Shutdown event. +* `drain_elb_on_shutdown` - (Optional) Whether to enable Elastic Load Balancing connection draining. +* `manage_bundler` - (Optional) Whether OpsWorks should manage bundler. On by default. +* `passenger_version` - (Optional) The version of Passenger to use. Defaults to "4.0.46". +* `ruby_version` - (Optional) The version of Ruby to use. Defaults to "2.0.0". +* `rubygems_version` - (Optional) The version of RubyGems to use. Defaults to "2.2.2". +* `system_packages` - (Optional) Names of a set of system packages to install on the layer's instances. +* `use_ebs_optimized_instances` - (Optional) Whether to use EBS-optimized instances. +* `ebs_volume` - (Optional) `ebs_volume` blocks, as described below, will each create an EBS volume and connect it to the layer's instances. + +The following extra optional arguments, all lists of Chef recipe names, allow +custom Chef recipes to be applied to layer instances at the five different +lifecycle events, if custom cookbooks are enabled on the layer's stack: + +* `custom_configure_recipes` +* `custom_deploy_recipes` +* `custom_setup_recipes` +* `custom_shutdown_recipes` +* `custom_undeploy_recipes` + +An `ebs_volume` block supports the following arguments: + +* `mount_point` - (Required) The path to mount the EBS volume on the layer's instances. +* `size` - (Required) The size of the volume in gigabytes. +* `number_of_disks` - (Required) The number of disks to use for the EBS volume. +* `raid_level` - (Required) The RAID level to use for the volume. +* `type` - (Optional) The type of volume to create. This may be `standard` (the default), `io1` or `gp2`. +* `iops` - (Optional) For PIOPS volumes, the IOPS per disk. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The id of the layer. diff --git a/website/source/docs/providers/aws/r/opsworks_stack.html.markdown b/website/source/docs/providers/aws/r/opsworks_stack.html.markdown new file mode 100644 index 000000000..d664ca1a9 --- /dev/null +++ b/website/source/docs/providers/aws/r/opsworks_stack.html.markdown @@ -0,0 +1,68 @@ +--- +layout: "aws" +page_title: "AWS: aws_opsworks_stack" +sidebar_current: "docs-aws-resource-opsworks-stack" +description: |- + Provides an OpsWorks stack resource. +--- + +# aws\_opsworks\_stack + +Provides an OpsWorks stack resource. + +## Example Usage + +``` +resource "aws_opsworks_stack" "main" { + name = "awesome-stack" + region = "us-west-1" + service_role_arn = "${aws_iam_role.opsworks.arn}" + default_instance_profile_arn = "${aws_iam_instance_profile.opsworks.arn}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name of the stack. +* `region` - (Required) The name of the region where the stack will exist. +* `service_role_arn` - (Required) The ARN of an IAM role that the OpsWorks service will act as. +* `default_instance_profile_arn` - (Required) The ARN of an IAM Instance Profile that created instances + will have by default. +* `berkshelf_version` - (Optional) If `manage_berkshelf` is enabled, the version of Berkshelf to use. +* `color` - (Optional) Color to paint next to the stack's resources in the OpsWorks console. 
+* `default_availability_zone` - (Optional) Name of the availability zone where instances will be created
+ by default. This is required unless you set `vpc_id`.
+* `configuration_manager_name` - (Optional) Name of the configuration manager to use. Defaults to "Chef".
+* `configuration_manager_version` - (Optional) Version of the configuration manager to use. Defaults to "11.4".
+* `custom_cookbooks_source` - (Optional) When `use_custom_cookbooks` is set, provide this sub-object as
+ described below.
+* `default_os` - (Optional) Name of OS that will be installed on instances by default.
+* `default_root_device_type` - (Optional) Name of the type of root device instances will have by default.
+* `default_ssh_key_name` - (Optional) Name of the SSH keypair that instances will have by default.
+* `default_subnet_id` - (Optional) Id of the subnet in which instances will be created by default. Mandatory
+ if `vpc_id` is set, and forbidden if it isn't.
+* `hostname_theme` - (Optional) Keyword representing the naming scheme that will be used for instance hostnames
+ within this stack.
+* `manage_berkshelf` - (Optional) Boolean value controlling whether OpsWorks will run Berkshelf for this stack.
+* `use_custom_cookbooks` - (Optional) Boolean value controlling whether the custom cookbook settings are
+ enabled.
+* `use_opsworks_security_groups` - (Optional) Boolean value controlling whether the standard OpsWorks
+ security groups apply to created instances.
+* `vpc_id` - (Optional) The id of the VPC that this stack belongs to.
+
+The `custom_cookbooks_source` block supports the following arguments:
+
+* `type` - (Required) The type of source to use. For example, "archive".
+* `url` - (Required) The URL where the cookbooks resource can be found.
+* `username` - (Optional) Username to use when authenticating to the source.
+* `password` - (Optional) Password to use when authenticating to the source.
+* `ssh_key` - (Optional) SSH key to use when authenticating to the source.
+* `revision` - (Optional) For sources that are version-aware, the revision to use.
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `id` - The id of the stack.
diff --git a/website/source/docs/providers/aws/r/opsworks_static_web_layer.html.markdown b/website/source/docs/providers/aws/r/opsworks_static_web_layer.html.markdown
new file mode 100644
index 000000000..70272e8d0
--- /dev/null
+++ b/website/source/docs/providers/aws/r/opsworks_static_web_layer.html.markdown
@@ -0,0 +1,62 @@
+---
+layout: "aws"
+page_title: "AWS: aws_opsworks_static_web_layer"
+sidebar_current: "docs-aws-resource-opsworks-static-web-layer"
+description: |-
+  Provides an OpsWorks static web server layer resource.
+---
+
+# aws\_opsworks\_static\_web\_layer
+
+Provides an OpsWorks static web server layer resource.
+
+## Example Usage
+
+```
+resource "aws_opsworks_static_web_layer" "web" {
+  stack_id = "${aws_opsworks_stack.main.id}"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `stack_id` - (Required) The id of the stack the layer will belong to.
+* `name` - (Optional) A human-readable name for the layer.
+* `auto_assign_elastic_ips` - (Optional) Whether to automatically assign an elastic IP address to the layer's instances.
+* `auto_assign_public_ips` - (Optional) For stacks belonging to a VPC, whether to automatically assign a public IP address to each of the layer's instances.
+* `custom_instance_profile_arn` - (Optional) The ARN of an IAM profile that will be used for the layer's instances.
+* `custom_security_group_ids` - (Optional) Ids for a set of security groups to apply to the layer's instances. +* `auto_healing` - (Optional) Whether to enable auto-healing for the layer. +* `install_updates_on_boot` - (Optional) Whether to install OS and package updates on each instance when it boots. +* `instance_shutdown_timeout` - (Optional) The time, in seconds, that OpsWorks will wait for Chef to complete after triggering the Shutdown event. +* `drain_elb_on_shutdown` - (Optional) Whether to enable Elastic Load Balancing connection draining. +* `system_packages` - (Optional) Names of a set of system packages to install on the layer's instances. +* `use_ebs_optimized_instances` - (Optional) Whether to use EBS-optimized instances. +* `ebs_volume` - (Optional) `ebs_volume` blocks, as described below, will each create an EBS volume and connect it to the layer's instances. + +The following extra optional arguments, all lists of Chef recipe names, allow +custom Chef recipes to be applied to layer instances at the five different +lifecycle events, if custom cookbooks are enabled on the layer's stack: + +* `custom_configure_recipes` +* `custom_deploy_recipes` +* `custom_setup_recipes` +* `custom_shutdown_recipes` +* `custom_undeploy_recipes` + +An `ebs_volume` block supports the following arguments: + +* `mount_point` - (Required) The path to mount the EBS volume on the layer's instances. +* `size` - (Required) The size of the volume in gigabytes. +* `number_of_disks` - (Required) The number of disks to use for the EBS volume. +* `raid_level` - (Required) The RAID level to use for the volume. +* `type` - (Optional) The type of volume to create. This may be `standard` (the default), `io1` or `gp2`. +* `iops` - (Optional) For PIOPS volumes, the IOPS per disk. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The id of the layer. diff --git a/website/source/docs/providers/aws/r/placement_group.html.markdown b/website/source/docs/providers/aws/r/placement_group.html.markdown new file mode 100644 index 000000000..e4ad98df8 --- /dev/null +++ b/website/source/docs/providers/aws/r/placement_group.html.markdown @@ -0,0 +1,34 @@ +--- +layout: "aws" +page_title: "AWS: aws_placement_group" +sidebar_current: "docs-aws-resource-placement-group" +description: |- + Provides an EC2 placement group. +--- + +# aws\_placement\_group + +Provides an EC2 placement group. Read more about placement groups +in [AWS Docs](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html). + +## Example Usage + +``` +resource "aws_placement_group" "web" { + name = "hunky-dory-pg" + strategy = "cluster" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name of the placement group. +* `strategy` - (Required) The placement strategy. The only supported value is `cluster`. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The name of the placement group. diff --git a/website/source/docs/providers/aws/r/rds_cluster.html.markdown b/website/source/docs/providers/aws/r/rds_cluster.html.markdown new file mode 100644 index 000000000..c45814f46 --- /dev/null +++ b/website/source/docs/providers/aws/r/rds_cluster.html.markdown @@ -0,0 +1,87 @@ +--- +layout: "aws" +page_title: "AWS: aws_rds_cluster" +sidebar_current: "docs-aws-resource-rds-cluster" +description: |- + Provides an RDS Cluster Resource +--- + +# aws\_rds\_cluster + +Provides an RDS Cluster Resource.
A Cluster Resource defines attributes that are +applied to the entire cluster of [RDS Cluster Instances][3]. Use the RDS Cluster +resource and RDS Cluster Instances to create and use Amazon Aurora, a MySQL-compatible +database engine. + +For more information on Amazon Aurora, see [Aurora on Amazon RDS][2] in the Amazon RDS User Guide. + +## Example Usage + +``` +resource "aws_rds_cluster" "default" { + cluster_identifier = "aurora-cluster-demo" + availability_zones = ["us-west-2a","us-west-2b","us-west-2c"] + database_name = "mydb" + master_username = "foo" + master_password = "bar" +} +``` + +~> **NOTE:** RDS Cluster resources that are created without any matching +RDS Cluster Instances do not currently display in the AWS Console. + +## Argument Reference + +For more detailed documentation about each argument, refer to +the [AWS official documentation](http://docs.aws.amazon.com/AmazonRDS/latest/CommandLineReference/CLIReference-cmd-ModifyDBInstance.html). + +The following arguments are supported: + +* `cluster_identifier` - (Required) The Cluster Identifier. Must be a lower case +string. +* `database_name` - (Optional) The name for your database of up to 8 alpha-numeric + characters. If you do not provide a name, Amazon RDS will not create a + database in the DB cluster you are creating +* `master_password` - (Required) Password for the master DB user. Note that this may + show up in logs, and it will be stored in the state file +* `master_username` - (Required) Username for the master DB user +* `final_snapshot_identifier` - (Optional) The name of your final DB snapshot + when this DB cluster is deleted. If omitted, no final snapshot will be + made. +* `availability_zones` - (Optional) A list of EC2 Availability Zones that + instances in the DB cluster can be created in +* `backup_retention_period` - (Optional) The days to retain backups for. Default +1 +* `port` - (Optional) The port on which the DB accepts connections +* `vpc_security_group_ids` - (Optional) List of VPC security groups to associate + with the Cluster +* `apply_immediately` - (Optional) Specifies whether any cluster modifications + are applied immediately, or during the next maintenance window. Default is + `false`. See the [Amazon RDS Documentation](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html) for more information. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The RDS Cluster Identifier +* `cluster_identifier` - The RDS Cluster Identifier +* `cluster_members` - List of RDS Instances that are a part of this cluster +* `address` - The address of the RDS instance.
+* `allocated_storage` - The amount of allocated storage +* `availability_zones` - The availability zone of the instance +* `backup_retention_period` - The backup retention period +* `backup_window` - The backup window +* `endpoint` - The primary, writeable connection endpoint +* `engine` - The database engine +* `engine_version` - The database engine version +* `maintenance_window` - The instance maintenance window +* `database_name` - The database name +* `port` - The database port +* `status` - The RDS instance status +* `username` - The master username for the database +* `storage_encrypted` - Specifies whether the DB instance is encrypted + +[1]: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Replication.html + +[2]: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Aurora.html +[3]: /docs/providers/aws/r/rds_cluster_instance.html diff --git a/website/source/docs/providers/aws/r/rds_cluster_instance.html.markdown b/website/source/docs/providers/aws/r/rds_cluster_instance.html.markdown new file mode 100644 index 000000000..782339a34 --- /dev/null +++ b/website/source/docs/providers/aws/r/rds_cluster_instance.html.markdown @@ -0,0 +1,89 @@ +--- +layout: "aws" +page_title: "AWS: aws_rds_cluster_instance" +sidebar_current: "docs-aws-resource-rds-cluster-instance" +description: |- + Provides an RDS Cluster Resource Instance +--- + +# aws\_rds\_cluster\_instance + +Provides an RDS Cluster Resource Instance. A Cluster Instance Resource defines +attributes that are specific to a single instance in an [RDS Cluster][3], +specifically running Amazon Aurora. + +Unlike other RDS resources that support replication, with Amazon Aurora you do +not designate a primary and subsequent replicas. Instead, you simply add RDS +Instances and Aurora manages the replication. You can use the [count][5] +meta-parameter to make multiple instances and join them all to the same RDS +Cluster, or you may specify different Cluster Instance resources with various +`instance_class` sizes. + +For more information on Amazon Aurora, see [Aurora on Amazon RDS][2] in the Amazon RDS User Guide. + +## Example Usage + +``` +resource "aws_rds_cluster_instance" "cluster_instances" { + count = 2 + identifier = "aurora-cluster-demo" + cluster_identifier = "${aws_rds_cluster.default.id}" + instance_class = "db.r3.large" +} + +resource "aws_rds_cluster" "default" { + cluster_identifier = "aurora-cluster-demo" + availability_zones = ["us-west-2a","us-west-2b","us-west-2c"] + database_name = "mydb" + master_username = "foo" + master_password = "bar" +} +``` + +## Argument Reference + +For more detailed documentation about each argument, refer to +the [AWS official documentation](http://docs.aws.amazon.com/AmazonRDS/latest/CommandLineReference/CLIReference-cmd-ModifyDBInstance.html). + +The following arguments are supported: + +* `identifier` - (Optional) The Instance Identifier. Must be a lower case +string. If omitted, a unique identifier will be generated. +* `cluster_identifier` - (Required) The Cluster Identifier for this Instance to +join. Must be a lower case +string. +* `instance_class` - (Required) The instance class to use. For details on CPU +and memory, see [Scaling Aurora DB Instances][4]. Aurora currently + supports the instance classes below. + - db.r3.large + - db.r3.xlarge + - db.r3.2xlarge + - db.r3.4xlarge + - db.r3.8xlarge +* `publicly_accessible` - (Optional) Bool to control if instance is publicly accessible. +Default `false`.
See the documentation on [Creating DB Instances][6] for more +details on controlling this property. + +## Attributes Reference + +The following attributes are exported: + +* `cluster_identifier` - The RDS Cluster Identifier +* `identifier` - The Instance identifier +* `id` - The Instance identifier +* `writer` - Boolean indicating if this instance is writable. `False` indicates +this instance is a read replica +* `allocated_storage` - The amount of allocated storage +* `availability_zones` - The availability zone of the instance +* `endpoint` - The DNS address for this instance. May not be writable +* `engine` - The database engine +* `engine_version` - The database engine version +* `database_name` - The database name +* `port` - The database port +* `status` - The RDS instance status + +[2]: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Aurora.html +[3]: /docs/providers/aws/r/rds_cluster.html +[4]: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Managing.html +[5]: /docs/configuration/resources.html#count +[6]: http://docs.aws.amazon.com/fr_fr/AmazonRDS/latest/APIReference/API_CreateDBInstance.html diff --git a/website/source/docs/providers/aws/r/s3_bucket_object.html.markdown b/website/source/docs/providers/aws/r/s3_bucket_object.html.markdown index 63d201b82..bd54047d6 100644 --- a/website/source/docs/providers/aws/r/s3_bucket_object.html.markdown +++ b/website/source/docs/providers/aws/r/s3_bucket_object.html.markdown @@ -29,6 +29,15 @@ The following arguments are supported: * `bucket` - (Required) The name of the bucket to put the file in. * `key` - (Required) The name of the object once it is in the bucket. * `source` - (Required unless `content` given) The path to the source file being uploaded to the bucket. +* `content` - (Required unless `source` given) The literal content being uploaded to the bucket. +* `cache_control` - (Optional) Specifies caching behavior along the request/reply chain. Read [w3c cache_control](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9) for further details. +* `content_disposition` - (Optional) Specifies presentational information for the object. Read [w3c content_disposition](http://www.w3.org/Protocols/rfc2616/rfc2616-sec19.html#sec19.5.1) for further information. +* `content_encoding` - (Optional) Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field. Read [w3c content encoding](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.11) for further information. +* `content_language` - (Optional) The language the content is in, e.g. en-US or en-GB. +* `content_type` - (Optional) A standard MIME type describing the format of the object data, e.g. application/octet-stream. All valid MIME types are valid for this input. + +Either `source` or `content` must be provided to specify the bucket content. +These two arguments are mutually exclusive. ## Attributes Reference diff --git a/website/source/docs/providers/aws/r/security_group_rule.html.markdown b/website/source/docs/providers/aws/r/security_group_rule.html.markdown index b77286296..4a772d1a7 100644 --- a/website/source/docs/providers/aws/r/security_group_rule.html.markdown +++ b/website/source/docs/providers/aws/r/security_group_rule.html.markdown @@ -49,6 +49,7 @@ or `egress` (outbound). depending on the `type`. * `self` - (Optional) If true, the security group itself will be added as a source to this ingress rule.
+* `to_port` - (Required) The end of the port range. ## Attributes Reference diff --git a/website/source/docs/providers/aws/r/spot_instance_request.html.markdown b/website/source/docs/providers/aws/r/spot_instance_request.html.markdown index abb1f4705..5ca6daf9f 100644 --- a/website/source/docs/providers/aws/r/spot_instance_request.html.markdown +++ b/website/source/docs/providers/aws/r/spot_instance_request.html.markdown @@ -51,6 +51,9 @@ Spot Instance Requests support all the same arguments as * `wait_for_fulfillment` - (Optional; Default: false) If set, Terraform will wait for the Spot Request to be fulfilled, and will throw an error if the timeout of 10m is reached. +* `spot_type` - (Optional; Default: "persistent") If set to "one-time", after + the instance is terminated, the spot request will be closed. Note that Terraform + can't manage one-time spot requests; it can only launch them. ## Attributes Reference diff --git a/website/source/docs/providers/aws/r/vpn_connection_route.html.markdown b/website/source/docs/providers/aws/r/vpn_connection_route.html.markdown index a0d2f2ccc..0f4782f2d 100644 --- a/website/source/docs/providers/aws/r/vpn_connection_route.html.markdown +++ b/website/source/docs/providers/aws/r/vpn_connection_route.html.markdown @@ -24,7 +24,7 @@ resource "aws_vpn_gateway" "vpn_gateway" { resource "aws_customer_gateway" "customer_gateway" { bgp_asn = 60000 ip_address = "172.0.0.1" - type = ipsec.1 + type = "ipsec.1" } resource "aws_vpn_connection" "main" { diff --git a/website/source/docs/providers/cloudstack/r/vpc.html.markdown b/website/source/docs/providers/cloudstack/r/vpc.html.markdown index d77357631..4610feb0d 100644 --- a/website/source/docs/providers/cloudstack/r/vpc.html.markdown +++ b/website/source/docs/providers/cloudstack/r/vpc.html.markdown @@ -39,7 +39,7 @@ The following arguments are supported: * `project` - (Optional) The name or ID of the project to deploy this instance to. Changing this forces a new resource to be created. - + * `zone` - (Required) The name or ID of the zone where this disk volume will be available. Changing this forces a new resource to be created. @@ -49,3 +49,4 @@ The following attributes are exported: * `id` - The ID of the VPC. * `display_text` - The display text of the VPC. +* `source_nat_ip` - The source NAT IP assigned to the VPC. diff --git a/website/source/docs/providers/docker/r/container.html.markdown b/website/source/docs/providers/docker/r/container.html.markdown index 5653f139a..91a4714b7 100644 --- a/website/source/docs/providers/docker/r/container.html.markdown +++ b/website/source/docs/providers/docker/r/container.html.markdown @@ -35,7 +35,8 @@ The following arguments are supported: as is shown in the example above. * `command` - (Optional, list of strings) The command to use to start the - container. + container. For example, to run `/usr/bin/myprogram -f baz.conf` set the + command to be `["/usr/bin/myprogram", "-f", "baz.conf"]`. * `dns` - (Optional, set of strings) Set of DNS servers. * `env` - (Optional, set of strings) Environmental variables to set. * `links` - (Optional, set of strings) Set of links for link based @@ -76,7 +77,7 @@ the following: volume will be mounted. * `host_path` - (Optional, string) The path on the host where the volume is coming from. -* `read_only` - (Optinal, bool) If true, this volume will be readonly. +* `read_only` - (Optional, bool) If true, this volume will be readonly. Defaults to false. A combined example follows.
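+
+As a minimal sketch (the image, program path, config file, and host
+directory here are hypothetical), a container that combines the `command`
+argument with a `volumes` block might look like:
+
+```
+resource "docker_image" "ubuntu" {
+    name = "ubuntu:latest"
+}
+
+resource "docker_container" "app" {
+    name    = "myapp"
+    image   = "${docker_image.ubuntu.latest}"
+
+    # Equivalent to running: /usr/bin/myprogram -f baz.conf
+    command = ["/usr/bin/myprogram", "-f", "baz.conf"]
+
+    # Mount a host directory into the container read-only
+    volumes {
+        container_path = "/etc/myapp"
+        host_path      = "/opt/myapp/config"
+        read_only      = true
+    }
+}
+```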
## Attributes Reference diff --git a/website/source/docs/providers/google/r/compute_backend_service.html.markdown b/website/source/docs/providers/google/r/compute_backend_service.html.markdown index c9d9396c5..5a862e238 100644 --- a/website/source/docs/providers/google/r/compute_backend_service.html.markdown +++ b/website/source/docs/providers/google/r/compute_backend_service.html.markdown @@ -19,6 +19,7 @@ resource "google_compute_backend_service" "foobar" { port_name = "http" protocol = "HTTP" timeout_sec = 10 + region = "us-central1" backend { group = "${google_compute_instance_group_manager.foo.instance_group}" @@ -67,6 +68,7 @@ The following arguments are supported: for checking the health of the backend service. * `description` - (Optional) The textual description for the backend service. * `backend` - (Optional) The list of backends that serve this BackendService. See *Backend* below. +* `region` - (Optional) The region the service sits in. If not specified, the project region is used. * `port_name` - (Optional) The name of a service that has been added to an instance group in this backend. See [related docs](https://cloud.google.com/compute/docs/instance-groups/#specifying_service_endpoints) for details. Defaults to http. diff --git a/website/source/docs/providers/google/r/compute_target_pool.html.markdown b/website/source/docs/providers/google/r/compute_target_pool.html.markdown index 1efc5905e..82bc4a7d1 100644 --- a/website/source/docs/providers/google/r/compute_target_pool.html.markdown +++ b/website/source/docs/providers/google/r/compute_target_pool.html.markdown @@ -49,6 +49,7 @@ The following arguments are supported: * `session_affinity` - (Optional) How to distribute load. Options are "NONE" (no affinity), "CLIENT\_IP" (hash of the source/dest addresses / ports), and "CLIENT\_IP\_PROTO" also includes the protocol (default "NONE"). +* `region` - (Optional) Where the target pool resides. Defaults to project region. ## Attributes Reference diff --git a/website/source/docs/providers/google/r/storage_bucket.html.markdown b/website/source/docs/providers/google/r/storage_bucket.html.markdown index a7eea21b1..2821e5588 100644 --- a/website/source/docs/providers/google/r/storage_bucket.html.markdown +++ b/website/source/docs/providers/google/r/storage_bucket.html.markdown @@ -17,9 +17,8 @@ Example creating a private bucket in standard storage, in the EU region. ``` resource "google_storage_bucket" "image-store" { - name = "image-store-bucket" - predefined_acl = "projectPrivate" - location = "EU" + name = "image-store-bucket" + location = "EU" website { main_page_suffix = "index.html" not_found_page = "404.html" @@ -33,7 +32,8 @@ resource "google_storage_bucket" "image-store" { The following arguments are supported: * `name` - (Required) The name of the bucket. -* `predefined_acl` - (Optional, Default: 'private') The [canned GCS ACL](https://cloud.google.com/storage/docs/access-control#predefined-acl) to apply. +* `predefined_acl` - (Optional, Deprecated) The [canned GCS ACL](https://cloud.google.com/storage/docs/access-control#predefined-acl) to apply. Please switch +to `google_storage_bucket_acl.predefined_acl`, as sketched below. * `location` - (Optional, Default: 'US') The [GCS location](https://cloud.google.com/storage/docs/bucket-locations) * `force_destroy` - (Optional, Default: false) When deleting a bucket, this boolean option will delete all contained objects. If you try to delete a bucket that contains objects, Terraform will fail that run.
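+
+Because `predefined_acl` on this resource is deprecated, a minimal sketch of
+the replacement (assuming the `private` canned ACL that was previously the
+default is desired) looks like:
+
+```
+resource "google_storage_bucket" "image-store" {
+    name     = "image-store-bucket"
+    location = "EU"
+}
+
+# The ACL is now managed as a separate resource
+resource "google_storage_bucket_acl" "image-store-acl" {
+    bucket         = "${google_storage_bucket.image-store.name}"
+    predefined_acl = "private"
+}
+```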
diff --git a/website/source/docs/providers/google/r/storage_bucket_acl.html.markdown b/website/source/docs/providers/google/r/storage_bucket_acl.html.markdown new file mode 100644 index 000000000..b7734b065 --- /dev/null +++ b/website/source/docs/providers/google/r/storage_bucket_acl.html.markdown @@ -0,0 +1,36 @@ +--- +layout: "google" +page_title: "Google: google_storage_bucket_acl" +sidebar_current: "docs-google-resource-storage-acl" +description: |- + Creates a new bucket ACL in Google Cloud Storage. +--- + +# google\_storage\_bucket\_acl + +Creates a new bucket ACL in Google Cloud Storage (GCS). + +## Example Usage + +Example creating an ACL on a bucket with one owner and one reader. + +``` +resource "google_storage_bucket" "image-store" { + name = "image-store-bucket" + location = "EU" +} + +resource "google_storage_bucket_acl" "image-store-acl" { + bucket = "${google_storage_bucket.image-store.name}" + role_entity = ["OWNER:user-my.email@gmail.com", + "READER:group-mygroup"] +} +``` + +## Argument Reference + +* `bucket` - (Required) The name of the bucket it applies to. +* `predefined_acl` - (Optional) The [canned GCS ACL](https://cloud.google.com/storage/docs/access-control#predefined-acl) to apply. Must be set if both `role_entity` and `default_acl` are not. +* `default_acl` - (Optional) The [canned GCS ACL](https://cloud.google.com/storage/docs/access-control#predefined-acl) to apply to future buckets. Must be set if both `role_entity` and `predefined_acl` are not. +* `role_entity` - (Optional) List of role/entity pairs in the form `ROLE:entity`. See [GCS Bucket ACL documentation](https://cloud.google.com/storage/docs/json_api/v1/bucketAccessControls) for more details. Must be set if both `predefined_acl` and `default_acl` are not. diff --git a/website/source/docs/providers/google/r/storage_bucket_object.html.markdown b/website/source/docs/providers/google/r/storage_bucket_object.html.markdown index 76e4b7c5d..61b32823f 100644 --- a/website/source/docs/providers/google/r/storage_bucket_object.html.markdown +++ b/website/source/docs/providers/google/r/storage_bucket_object.html.markdown @@ -20,7 +20,6 @@ resource "google_storage_bucket_object" "picture" { name = "butterfly01" source = "/images/nature/garden-tiger-moth.jpg" bucket = "image-store" - predefined_acl = "publicRead" } ``` @@ -32,7 +31,8 @@ The following arguments are supported: * `name` - (Required) The name of the object. * `bucket` - (Required) The name of the containing bucket. * `source` - (Required) A path to the data you want to upload. -* `predefined_acl` - (Optional, Default: 'projectPrivate') The [canned GCS ACL](https://cloud.google.com/storage/docs/access-control#predefined-acl) apply. +* `predefined_acl` - (Optional, Deprecated) The [canned GCS ACL](https://cloud.google.com/storage/docs/access-control#predefined-acl) to apply. Please switch +to `google_storage_object_acl.predefined_acl`. ## Attributes Reference diff --git a/website/source/docs/providers/google/r/storage_object_acl.html.markdown b/website/source/docs/providers/google/r/storage_object_acl.html.markdown new file mode 100644 index 000000000..9f04d4844 --- /dev/null +++ b/website/source/docs/providers/google/r/storage_object_acl.html.markdown @@ -0,0 +1,43 @@ +--- +layout: "google" +page_title: "Google: google_storage_object_acl" +sidebar_current: "docs-google-resource-storage-acl" +description: |- + Creates a new object ACL in Google Cloud Storage.
+--- + +# google\_storage\_object\_acl + +Creates a new object ACL in Google Cloud Storage (GCS). + +## Example Usage + +Create an object ACL with one owner and one reader. + +``` +resource "google_storage_bucket" "image-store" { + name = "image-store-bucket" + location = "EU" +} + +resource "google_storage_bucket_object" "image" { + name = "image1" + bucket = "${google_storage_bucket.image-store.name}" + source = "image1.jpg" +} + +resource "google_storage_object_acl" "image-store-acl" { + bucket = "${google_storage_bucket.image-store.name}" + object = "${google_storage_bucket_object.image.name}" + role_entity = ["OWNER:user-my.email@gmail.com", + "READER:group-mygroup"] +} +``` + +## Argument Reference + +* `bucket` - (Required) The name of the bucket it applies to. +* `object` - (Required) The name of the object it applies to. +* `predefined_acl` - (Optional) The [canned GCS ACL](https://cloud.google.com/storage/docs/access-control#predefined-acl) to apply. Must be set if `role_entity` is not. +* `role_entity` - (Optional) List of role/entity pairs in the form `ROLE:entity`. See [GCS Object ACL documentation](https://cloud.google.com/storage/docs/json_api/v1/objectAccessControls) for more details. Must be set if `predefined_acl` is not. diff --git a/website/source/docs/providers/packet/index.html.markdown b/website/source/docs/providers/packet/index.html.markdown new file mode 100644 index 000000000..bbe9f5d1e --- /dev/null +++ b/website/source/docs/providers/packet/index.html.markdown @@ -0,0 +1,47 @@ +--- +layout: "packet" +page_title: "Provider: Packet" +sidebar_current: "docs-packet-index" +description: |- + The Packet provider is used to interact with the resources supported by Packet. The provider needs to be configured with the proper credentials before it can be used. +--- + +# Packet Provider + +The Packet provider is used to interact with the resources supported by Packet. +The provider needs to be configured with the proper credentials before it can be used. + +Use the navigation to the left to read about the available resources. + +## Example Usage + +``` +# Configure the Packet Provider +provider "packet" { + auth_token = "${var.auth_token}" +} + +# Create a project +resource "packet_project" "tf_project_1" { + name = "My First Terraform Project" + payment_method = "PAYMENT_METHOD_ID" +} + +# Create a device and add it to tf_project_1 +resource "packet_device" "web1" { + hostname = "tf.coreos2" + plan = "baremetal_1" + facility = "ewr1" + operating_system = "coreos_stable" + billing_cycle = "hourly" + project_id = "${packet_project.tf_project_1.id}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `auth_token` - (Required) This is your Packet API Auth token. This can also be specified + with the `PACKET_AUTH_TOKEN` shell environment variable. + diff --git a/website/source/docs/providers/packet/r/device.html.markdown b/website/source/docs/providers/packet/r/device.html.markdown new file mode 100644 index 000000000..6d57dcbb5 --- /dev/null +++ b/website/source/docs/providers/packet/r/device.html.markdown @@ -0,0 +1,55 @@ +--- +layout: "packet" +page_title: "Packet: packet_device" +sidebar_current: "docs-packet-resource-device" +description: |- + Provides a Packet device resource. This can be used to create, modify, and delete devices. +--- + +# packet\_device + +Provides a Packet device resource. This can be used to create, +modify, and delete devices.
+ +## Example Usage + +``` +# Create a device and add it to tf_project_1 +resource "packet_device" "web1" { + hostname = "tf.coreos2" + plan = "baremetal_1" + facility = "ewr1" + operating_system = "coreos_stable" + billing_cycle = "hourly" + project_id = "${packet_project.tf_project_1.id}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `hostname` - (Required) The device name +* `project_id` - (Required) The id of the project in which to create the device +* `operating_system` - (Required) The operating system slug +* `facility` - (Required) The facility in which to create the device +* `plan` - (Required) The config type slug +* `billing_cycle` - (Required) monthly or hourly +* `user_data` - (Optional) A string of the desired User Data for the device. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The ID of the device +* `hostname` - The hostname of the device +* `project_id` - The id of the project the device belongs to +* `facility` - The facility the device is in +* `plan` - The config type of the device +* `network` - The private and public v4 and v6 IPs assigned to the device +* `locked` - Is the device locked +* `billing_cycle` - The billing cycle of the device (monthly or hourly) +* `operating_system` - The operating system running on the device +* `status` - The status of the device +* `created` - The timestamp for when the device was created +* `updated` - The timestamp for the last time the device was updated diff --git a/website/source/docs/providers/packet/r/project.html.markdown b/website/source/docs/providers/packet/r/project.html.markdown new file mode 100644 index 000000000..b008f864f --- /dev/null +++ b/website/source/docs/providers/packet/r/project.html.markdown @@ -0,0 +1,40 @@ +--- +layout: "packet" +page_title: "Packet: packet_project" +sidebar_current: "docs-packet-resource-project" +description: |- + Provides a Packet Project resource. +--- + +# packet\_project + +Provides a Packet Project resource to allow you to manage devices +in your projects. + +## Example Usage + +``` +# Create a new Project +resource "packet_project" "tf_project_1" { + name = "Terraform Fun" + payment_method = "payment-method-id" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name of the project +* `payment_method` - (Required) The id of the payment method on file to use for services created +on this project. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The unique ID of the project +* `payment_method` - The id of the payment method on file to use for services created +on this project. +* `created` - The timestamp for when the project was created +* `updated` - The timestamp for the last time the project was updated diff --git a/website/source/docs/providers/packet/r/ssh_key.html.markdown b/website/source/docs/providers/packet/r/ssh_key.html.markdown new file mode 100644 index 000000000..cb27aaa77 --- /dev/null +++ b/website/source/docs/providers/packet/r/ssh_key.html.markdown @@ -0,0 +1,43 @@ +--- +layout: "packet" +page_title: "Packet: packet_ssh_key" +sidebar_current: "docs-packet-resource-ssh-key" +description: |- + Provides a Packet SSH key resource. +--- + +# packet\_ssh\_key + +Provides a Packet SSH key resource to allow you to manage SSH +keys on your account. All SSH keys on your account are loaded on +all new devices; they do not have to be explicitly declared on +device creation.
+ +## Example Usage + +``` +# Create a new SSH key +resource "packet_ssh_key" "key1" { + name = "terraform-1" + public_key = "${file("/home/terraform/.ssh/id_rsa.pub")}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name of the SSH key for identification +* `public_key` - (Required) The public key. If this is a file, it +can be read using the file interpolation function + +## Attributes Reference + +The following attributes are exported: + +* `id` - The unique ID of the key +* `name` - The name of the SSH key +* `public_key` - The text of the public key +* `fingerprint` - The fingerprint of the SSH key +* `created` - The timestamp for when the SSH key was created +* `updated` - The timestamp for the last time the SSH key was updated diff --git a/website/source/docs/provisioners/remote-exec.html.markdown b/website/source/docs/provisioners/remote-exec.html.markdown index d771e5586..7ce46c684 100644 --- a/website/source/docs/provisioners/remote-exec.html.markdown +++ b/website/source/docs/provisioners/remote-exec.html.markdown @@ -63,7 +63,10 @@ resource "aws_instance" "web" { } provisioner "remote-exec" { - inline = ["/tmp/script.sh args"] + inline = [ + "chmod +x /tmp/script.sh", + "/tmp/script.sh args" + ] } } ``` diff --git a/website/source/intro/getting-started/variables.html.md b/website/source/intro/getting-started/variables.html.md index 691c91f38..41e828a72 100644 --- a/website/source/intro/getting-started/variables.html.md +++ b/website/source/intro/getting-started/variables.html.md @@ -95,8 +95,17 @@ files. And like Terraform configuration files, these files can also be JSON. in the form of `TF_VAR_name` to find the value for a variable. For example, the `TF_VAR_access_key` variable can be set to set the `access_key` variable. -We recommend using the "terraform.tfvars" file, and ignoring it from -version control. +We don't recommend saving usernames and passwords to version control, but you +can create a local secret variables file and use `-var-file` to load it. + +You can use multiple `-var-file` arguments in a single command, with some +checked in to version control and others not checked in. For example: + +``` +$ terraform plan \ + -var-file="secret.tfvars" \ + -var-file="production.tfvars" +``` ## Mappings diff --git a/website/source/layouts/_footer.erb b/website/source/layouts/_footer.erb index 791861e42..d42c55cac 100644 --- a/website/source/layouts/_footer.erb +++ b/website/source/layouts/_footer.erb @@ -8,7 +8,7 @@
            Docs
            Community
            <% if current_page.url != '/' %>
-            Edit this page
+            Edit this page
            <% end %>
diff --git a/website/source/layouts/aws.erb b/website/source/layouts/aws.erb
index 22801a507..4b34da23a 100644
--- a/website/source/layouts/aws.erb
+++ b/website/source/layouts/aws.erb
@@ -15,14 +15,28 @@
        CloudWatch Resources
+
+        Directory Service Resources
+
        DynamoDB Resources