diff --git a/.gitignore b/.gitignore
index 314611940..5a230d5ca 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,5 +1,6 @@
 *.dll
 *.exe
+.DS_Store
 example.tf
 terraform.tfplan
 terraform.tfstate
@@ -18,3 +19,4 @@ website/node_modules
 *.bak
 *~
 .*.swp
+.idea
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 4623ef61e..075cb0f44 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,34 +1,125 @@
-## 0.6.4 (unreleased)
+## 0.6.7 (Unreleased)
+
+FEATURES:
+
+  * **New resource: `aws_cloudformation_stack`** [GH-2636]
+
+IMPROVEMENTS:
+
+  * provider/google: Accurate Terraform Version [GH-3554]
+  * provider/google: Simplified auth (DefaultClient support) [GH-3553]
+  * provider/google: automatic_restart, preemptible, on_host_maintenance options [GH-3643]
+  * null_resource: enhance and document [GH-3244, GH-3659]
+  * provider/aws: Add CORS settings to S3 bucket [GH-3387]
+
+BUG FIXES:
+
+  * `terraform remote config`: update `--help` output [GH-3632]
+  * core: modules on Git branches now update properly [GH-1568]
+  * provider/google: Timeout when deleting large instance_group_manager [GH-3591]
+  * provider/aws: Fix issue with order of Termination Policies in AutoScaling Groups.
+    This will introduce plans on upgrade to this version, in order to correct the ordering [GH-2890]
+  * provider/aws: Allow cluster name, not only ARN for `aws_ecs_service` [GH-3668]
+
+## 0.6.6 (October 23, 2015)
+
+FEATURES:
+
+  * New interpolation functions: `cidrhost`, `cidrnetmask` and `cidrsubnet` [GH-3127]
+
+IMPROVEMENTS:
+
+  * "forces new resource" now highlighted in plan output [GH-3136]
+
+BUG FIXES:
+
+  * helper/schema: Better error message for assigning list/map to string [GH-3009]
+  * remote/state/atlas: Additional remote state conflict handling for semantically neutral state changes [GH-3603]
+
+## 0.6.5 (October 21, 2015)
+
+FEATURES:
+
+  * **New resources: `aws_codedeploy_app` and `aws_codedeploy_deployment_group`** [GH-2783]
+  * New remote state backend: `etcd` [GH-3487]
+  * New interpolation functions: `upper` and `lower` [GH-3558]
+
+BUG FIXES:
+
+  * core: Fix remote state conflicts caused by ambiguity in ordering of deeply nested modules [GH-3573]
+  * core: Fix remote state conflicts caused by state metadata differences [GH-3569]
+  * core: Avoid using http.DefaultClient [GH-3532]
+
+INTERNAL IMPROVEMENTS:
+
+  * provider/digitalocean: use official Go client [GH-3333]
+  * core: extract module fetching to external library [GH-3516]
+
+## 0.6.4 (October 15, 2015)
 
 FEATURES:
 
   * **New provider: `rundeck`** [GH-2412]
+  * **New provider: `packet`** [GH-2260], [GH-3472]
+  * **New provider: `vsphere`**: Initial support for a VM resource [GH-3419]
   * **New resource: `cloudstack_loadbalancer_rule`** [GH-2934]
   * **New resource: `google_compute_project_metadata`** [GH-3065]
-  * **New resources: `aws_ami`, `aws_ami_copy`, `aws_ami_from_instance`** [GH-2874]
+  * **New resources: `aws_ami`, `aws_ami_copy`, `aws_ami_from_instance`** [GH-2784]
+  * **New resource: `aws_cloudwatch_log_group`** [GH-2415]
   * **New resource: `google_storage_bucket_object`** [GH-3192]
   * **New resources: `google_compute_vpn_gateway`, `google_compute_vpn_tunnel`** [GH-3213]
+  * **New resources: `google_storage_bucket_acl`, `google_storage_object_acl`** [GH-3272]
+  * **New resource: `aws_iam_saml_provider`** [GH-3156]
+  * **New resources: `aws_efs_file_system` and `aws_efs_mount_target`** [GH-2196]
+  * **New resources: `aws_opsworks_*`** [GH-2162]
+  * **New resource: `aws_elasticsearch_domain`** [GH-3443]
+  * **New resource: `aws_directory_service_directory`** [GH-3228]
+  * **New resource: `aws_autoscaling_lifecycle_hook`** [GH-3351]
+  * **New resource: `aws_placement_group`** [GH-3457]
+  * **New resource: `aws_glacier_vault`** [GH-3491]
+  * **New lifecycle flag: `ignore_changes`** [GH-2525]
 
 IMPROVEMENTS:
 
   * core: Add a function to find the index of an element in a list. [GH-2704]
   * core: Print all outputs when `terraform output` is called with no arguments [GH-2920]
   * core: In plan output summary, count resource replacement as Add/Remove instead of Change [GH-3173]
+  * core: Add interpolation functions for base64 encoding and decoding. [GH-3325]
+  * core: Expose parallelism as a CLI option instead of hard-coding the default of 10 [GH-3365]
+  * core: Add interpolation function `compact`, to remove empty elements from a list. [GH-3239], [GH-3479]
+  * core: Allow filtering of log output by level, using e.g. ``TF_LOG=INFO`` [GH-3380]
   * provider/aws: Add `instance_initiated_shutdown_behavior` to AWS Instance [GH-2887]
   * provider/aws: Support IAM role names (previously just ARNs) in `aws_ecs_service.iam_role` [GH-3061]
   * provider/aws: Add update method to RDS Subnet groups, can modify subnets without recreating [GH-3053]
   * provider/aws: Paginate notifications returned for ASG Notifications [GH-3043]
+  * provider/aws: Adds additional S3 Bucket Object inputs [GH-3265]
   * provider/aws: add `ses_smtp_password` to `aws_iam_access_key` [GH-3165]
   * provider/aws: read `iam_instance_profile` for `aws_instance` and save to state [GH-3167]
+  * provider/aws: allow `instance` to be computed in `aws_eip` [GH-3036]
   * provider/aws: Add `versioning` option to `aws_s3_bucket` [GH-2942]
   * provider/aws: Add `configuration_endpoint` to `aws_elasticache_cluster` [GH-3250]
+  * provider/aws: Add validation for `app_cookie_stickiness_policy.name` [GH-3277]
+  * provider/aws: Add validation for `db_parameter_group.name` [GH-3279]
+  * provider/aws: Set DynamoDB Table ARN after creation [GH-3500]
+  * provider/aws: `aws_s3_bucket_object` allows interpolated content to be set with new `content` attribute. [GH-3200]
+  * provider/aws: Allow tags for `aws_kinesis_stream` resource. [GH-3397]
+  * provider/aws: Configurable capacity waiting duration for ASGs [GH-3191]
+  * provider/aws: Allow non-persistent Spot Requests [GH-3311]
+  * provider/aws: Support tags for AWS DB subnet group [GH-3138]
   * provider/cloudstack: Add `project` parameter to `cloudstack_vpc`, `cloudstack_network`, `cloudstack_ipaddress` and `cloudstack_disk` [GH-3035]
+  * provider/openstack: add functionality to attach FloatingIP to Port [GH-1788]
+  * provider/google: Can now do multi-region deployments without using multiple providers [GH-3258]
+  * remote/s3: Allow canned ACLs to be set on state objects.
[GH-3233] + * remote/s3: Remote state is stored in S3 with `Content-Type: application/json` [GH-3385] BUG FIXES: * core: Fix problems referencing list attributes in interpolations [GH-2157] + * core: don't error on computed value during input walk [GH-2988] + * core: Ignore missing variables during destroy phase [GH-3393] * provider/google: Crashes with interface conversion in GCE Instance Template [GH-3027] * provider/google: Convert int to int64 when building the GKE cluster.NodeConfig struct [GH-2978] + * provider/google: google_compute_instance_template.network_interface.network should be a URL [GH-3226] * provider/aws: Retry creation of `aws_ecs_service` if IAM policy isn't ready yet [GH-3061] * provider/aws: Fix issue with mixed capitalization for RDS Instances [GH-3053] * provider/aws: Fix issue with RDS to allow major version upgrades [GH-3053] @@ -38,8 +129,26 @@ BUG FIXES: by AWS [GH-3120] * provider/aws: Read instance source_dest_check and save to state [GH-3152] * provider/aws: Allow `weight = 0` in Route53 records [GH-3196] + * provider/aws: Normalize aws_elasticache_cluster id to lowercase, allowing convergence. [GH-3235] + * provider/aws: Fix ValidateAccountId for IAM Instance Profiles [GH-3313] + * provider/aws: Update Security Group Rules to Version 2 [GH-3019] + * provider/aws: Migrate KeyPair to version 1, fixing issue with using `file()` [GH-3470] + * provider/aws: Fix force_delete on autoscaling groups [GH-3485] + * provider/aws: Fix crash with VPC Peering connections [GH-3490] + * provider/aws: fix bug with reading GSIs from dynamodb [GH-3300] + * provider/docker: Fix issue preventing private images from being referenced [GH-2619] + * provider/digitalocean: Fix issue causing unnecessary diffs based on droplet slugsize case [GH-3284] * provider/openstack: add state 'downloading' to list of expected states in `blockstorage_volume_v1` creation [GH-2866] + * provider/openstack: remove security groups (by name) before adding security + groups (by id) [GH-2008] + +INTERNAL IMPROVEMENTS: + + * core: Makefile target "plugin-dev" for building just one plugin. [GH-3229] + * helper/schema: Don't allow ``Update`` func if no attributes can actually be updated, per schema. [GH-3288] + * helper/schema: Default hashing function for sets [GH-3018] + * helper/multierror: Remove in favor of [github.com/hashicorp/go-multierror](http://github.com/hashicorp/go-multierror). [GH-3336] ## 0.6.3 (August 11, 2015) diff --git a/Makefile b/Makefile index 548b3ed2b..e53106307 100644 --- a/Makefile +++ b/Makefile @@ -15,6 +15,12 @@ dev: generate quickdev: generate @TF_QUICKDEV=1 TF_DEV=1 sh -c "'$(CURDIR)/scripts/build.sh'" +# Shorthand for building and installing just one plugin for local testing. 
+# Run as (for example): make plugin-dev PLUGIN=provider-aws +plugin-dev: generate + go install github.com/hashicorp/terraform/builtin/bins/$(PLUGIN) + mv $(GOPATH)/bin/$(PLUGIN) $(GOPATH)/bin/terraform-$(PLUGIN) + release: updatedeps gox -build-toolchain @$(MAKE) bin diff --git a/Vagrantfile b/Vagrantfile index 5b2d70bcc..59709339d 100644 --- a/Vagrantfile +++ b/Vagrantfile @@ -13,6 +13,7 @@ ARCH=`uname -m | sed 's|i686|386|' | sed 's|x86_64|amd64|'` # Install Prereq Packages sudo apt-get update +sudo apt-get upgrade -y sudo apt-get install -y build-essential curl git-core libpcre3-dev mercurial pkg-config zip # Install Go @@ -41,7 +42,7 @@ source /etc/profile.d/gopath.sh SCRIPT Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| - config.vm.box = "chef/ubuntu-12.04" + config.vm.box = "bento/ubuntu-12.04" config.vm.hostname = "terraform" config.vm.provision "shell", inline: $script, privileged: false @@ -53,4 +54,9 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| v.vmx["numvcpus"] = "2" end end + + config.vm.provider "virtualbox" do |v| + v.memory = 4096 + v.cpus = 2 + end end diff --git a/builtin/bins/provider-packet/main.go b/builtin/bins/provider-packet/main.go new file mode 100644 index 000000000..6d8198ef2 --- /dev/null +++ b/builtin/bins/provider-packet/main.go @@ -0,0 +1,12 @@ +package main + +import ( + "github.com/hashicorp/terraform/builtin/providers/packet" + "github.com/hashicorp/terraform/plugin" +) + +func main() { + plugin.Serve(&plugin.ServeOpts{ + ProviderFunc: packet.Provider, + }) +} diff --git a/builtin/bins/provider-vsphere/main.go b/builtin/bins/provider-vsphere/main.go new file mode 100644 index 000000000..99dba9584 --- /dev/null +++ b/builtin/bins/provider-vsphere/main.go @@ -0,0 +1,12 @@ +package main + +import ( + "github.com/hashicorp/terraform/builtin/providers/vsphere" + "github.com/hashicorp/terraform/plugin" +) + +func main() { + plugin.Serve(&plugin.ServeOpts{ + ProviderFunc: vsphere.Provider, + }) +} diff --git a/builtin/bins/provider-vsphere/main_test.go b/builtin/bins/provider-vsphere/main_test.go new file mode 100644 index 000000000..06ab7d0f9 --- /dev/null +++ b/builtin/bins/provider-vsphere/main_test.go @@ -0,0 +1 @@ +package main diff --git a/builtin/providers/atlas/resource_artifact.go b/builtin/providers/atlas/resource_artifact.go index f4d264a8a..b9ed5aea0 100644 --- a/builtin/providers/atlas/resource_artifact.go +++ b/builtin/providers/atlas/resource_artifact.go @@ -19,7 +19,6 @@ func resourceArtifact() *schema.Resource { return &schema.Resource{ Create: resourceArtifactRead, Read: resourceArtifactRead, - Update: resourceArtifactRead, Delete: resourceArtifactDelete, Schema: map[string]*schema.Schema{ diff --git a/builtin/providers/aws/config.go b/builtin/providers/aws/config.go index a57c65c1b..b8fc9fa47 100644 --- a/builtin/providers/aws/config.go +++ b/builtin/providers/aws/config.go @@ -5,21 +5,30 @@ import ( "log" "strings" - "github.com/hashicorp/terraform/helper/multierror" + "github.com/hashicorp/go-cleanhttp" + "github.com/hashicorp/go-multierror" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/aws/credentials" "github.com/aws/aws-sdk-go/service/autoscaling" + "github.com/aws/aws-sdk-go/service/cloudformation" "github.com/aws/aws-sdk-go/service/cloudwatch" + "github.com/aws/aws-sdk-go/service/cloudwatchlogs" + "github.com/aws/aws-sdk-go/service/codedeploy" + "github.com/aws/aws-sdk-go/service/directoryservice" "github.com/aws/aws-sdk-go/service/dynamodb" 
"github.com/aws/aws-sdk-go/service/ec2" "github.com/aws/aws-sdk-go/service/ecs" + "github.com/aws/aws-sdk-go/service/efs" "github.com/aws/aws-sdk-go/service/elasticache" + elasticsearch "github.com/aws/aws-sdk-go/service/elasticsearchservice" "github.com/aws/aws-sdk-go/service/elb" + "github.com/aws/aws-sdk-go/service/glacier" "github.com/aws/aws-sdk-go/service/iam" "github.com/aws/aws-sdk-go/service/kinesis" "github.com/aws/aws-sdk-go/service/lambda" + "github.com/aws/aws-sdk-go/service/opsworks" "github.com/aws/aws-sdk-go/service/rds" "github.com/aws/aws-sdk-go/service/route53" "github.com/aws/aws-sdk-go/service/s3" @@ -41,22 +50,30 @@ type Config struct { } type AWSClient struct { - cloudwatchconn *cloudwatch.CloudWatch - dynamodbconn *dynamodb.DynamoDB - ec2conn *ec2.EC2 - ecsconn *ecs.ECS - elbconn *elb.ELB - autoscalingconn *autoscaling.AutoScaling - s3conn *s3.S3 - sqsconn *sqs.SQS - snsconn *sns.SNS - r53conn *route53.Route53 - region string - rdsconn *rds.RDS - iamconn *iam.IAM - kinesisconn *kinesis.Kinesis - elasticacheconn *elasticache.ElastiCache - lambdaconn *lambda.Lambda + cfconn *cloudformation.CloudFormation + cloudwatchconn *cloudwatch.CloudWatch + cloudwatchlogsconn *cloudwatchlogs.CloudWatchLogs + dsconn *directoryservice.DirectoryService + dynamodbconn *dynamodb.DynamoDB + ec2conn *ec2.EC2 + ecsconn *ecs.ECS + efsconn *efs.EFS + elbconn *elb.ELB + esconn *elasticsearch.ElasticsearchService + autoscalingconn *autoscaling.AutoScaling + s3conn *s3.S3 + sqsconn *sqs.SQS + snsconn *sns.SNS + r53conn *route53.Route53 + region string + rdsconn *rds.RDS + iamconn *iam.IAM + kinesisconn *kinesis.Kinesis + elasticacheconn *elasticache.ElastiCache + lambdaconn *lambda.Lambda + opsworksconn *opsworks.OpsWorks + glacierconn *glacier.Glacier + codedeployconn *codedeploy.CodeDeploy } // Client configures and returns a fully initialized AWSClient @@ -86,6 +103,7 @@ func (c *Config) Client() (interface{}, error) { Credentials: creds, Region: aws.String(c.Region), MaxRetries: aws.Int(c.MaxRetries), + HTTPClient: cleanhttp.DefaultClient(), } log.Println("[INFO] Initializing IAM Connection") @@ -102,6 +120,17 @@ func (c *Config) Client() (interface{}, error) { MaxRetries: aws.Int(c.MaxRetries), Endpoint: aws.String(c.DynamoDBEndpoint), } + // Some services exist only in us-east-1, e.g. because they manage + // resources that can span across multiple regions, or because + // signature format v4 requires region to be us-east-1 for global + // endpoints: + // http://docs.aws.amazon.com/general/latest/gr/sigv4_changes.html + usEast1AwsConfig := &aws.Config{ + Credentials: creds, + Region: aws.String("us-east-1"), + MaxRetries: aws.Int(c.MaxRetries), + HTTPClient: cleanhttp.DefaultClient(), + } log.Println("[INFO] Initializing DynamoDB connection") client.dynamodbconn = dynamodb.New(awsDynamoDBConfig) @@ -138,15 +167,14 @@ func (c *Config) Client() (interface{}, error) { log.Println("[INFO] Initializing ECS Connection") client.ecsconn = ecs.New(awsConfig) - // aws-sdk-go uses v4 for signing requests, which requires all global - // endpoints to use 'us-east-1'. 
-	// See http://docs.aws.amazon.com/general/latest/gr/sigv4_changes.html
+		log.Println("[INFO] Initializing EFS Connection")
+		client.efsconn = efs.New(awsConfig)
+
+		log.Println("[INFO] Initializing ElasticSearch Connection")
+		client.esconn = elasticsearch.New(awsConfig)
+
 		log.Println("[INFO] Initializing Route 53 connection")
-		client.r53conn = route53.New(&aws.Config{
-			Credentials: creds,
-			Region:      aws.String("us-east-1"),
-			MaxRetries:  aws.Int(c.MaxRetries),
-		})
+		client.r53conn = route53.New(usEast1AwsConfig)
 
 		log.Println("[INFO] Initializing Elasticache Connection")
 		client.elasticacheconn = elasticache.New(awsConfig)
@@ -154,8 +182,26 @@ func (c *Config) Client() (interface{}, error) {
 		log.Println("[INFO] Initializing Lambda Connection")
 		client.lambdaconn = lambda.New(awsConfig)
 
+		log.Println("[INFO] Initializing Cloudformation Connection")
+		client.cfconn = cloudformation.New(awsConfig)
+
 		log.Println("[INFO] Initializing CloudWatch SDK connection")
 		client.cloudwatchconn = cloudwatch.New(awsConfig)
+
+		log.Println("[INFO] Initializing CloudWatch Logs connection")
+		client.cloudwatchlogsconn = cloudwatchlogs.New(awsConfig)
+
+		log.Println("[INFO] Initializing OpsWorks Connection")
+		client.opsworksconn = opsworks.New(usEast1AwsConfig)
+
+		log.Println("[INFO] Initializing Directory Service connection")
+		client.dsconn = directoryservice.New(awsConfig)
+
+		log.Println("[INFO] Initializing Glacier connection")
+		client.glacierconn = glacier.New(awsConfig)
+
+		log.Println("[INFO] Initializing CodeDeploy Connection")
+		client.codedeployconn = codedeploy.New(awsConfig)
 	}
 
 	if len(errs) > 0 {
@@ -221,6 +267,7 @@ func (c *Config) ValidateAccountId(iamconn *iam.IAM) error {
 		// User may be an IAM instance profile, so fail silently.
 		// If it is an IAM instance profile
 		// validating account might be superfluous
+		return nil
 	} else {
 		return fmt.Errorf("Failed getting account ID from IAM: %s", err)
 		// return error if the account id is explicitly not authorised
diff --git a/builtin/providers/aws/conversions.go b/builtin/providers/aws/conversions.go
new file mode 100644
index 000000000..1b69cee06
--- /dev/null
+++ b/builtin/providers/aws/conversions.go
@@ -0,0 +1,33 @@
+package aws
+
+import (
+	"github.com/aws/aws-sdk-go/aws"
+	"github.com/hashicorp/terraform/helper/schema"
+)
+
+func makeAwsStringList(in []interface{}) []*string {
+	ret := make([]*string, len(in), len(in))
+	for i := 0; i < len(in); i++ {
+		ret[i] = aws.String(in[i].(string))
+	}
+	return ret
+}
+
+func makeAwsStringSet(in *schema.Set) []*string {
+	inList := in.List()
+	ret := make([]*string, len(inList), len(inList))
+	for i := 0; i < len(ret); i++ {
+		ret[i] = aws.String(inList[i].(string))
+	}
+	return ret
+}
+
+func unwrapAwsStringList(in []*string) []string {
+	ret := make([]string, len(in), len(in))
+	for i := 0; i < len(in); i++ {
+		if in[i] != nil {
+			ret[i] = *in[i]
+		}
+	}
+	return ret
+}
diff --git a/builtin/providers/aws/network_acl_entry.go b/builtin/providers/aws/network_acl_entry.go
index 299c9f8c5..22b909bce 100644
--- a/builtin/providers/aws/network_acl_entry.go
+++ b/builtin/providers/aws/network_acl_entry.go
@@ -29,7 +29,7 @@ func expandNetworkAclEntries(configured []interface{}, entryType string) ([]*ec2
 			From: aws.Int64(int64(data["from_port"].(int))),
 			To:   aws.Int64(int64(data["to_port"].(int))),
 		},
-		Egress:     aws.Bool((entryType == "egress")),
+		Egress:     aws.Bool(entryType == "egress"),
 		RuleAction: aws.String(data["action"].(string)),
 		RuleNumber: aws.Int64(int64(data["rule_no"].(int))),
 		CidrBlock:
aws.String(data["cidr_block"].(string)), diff --git a/builtin/providers/aws/opsworks_layers.go b/builtin/providers/aws/opsworks_layers.go new file mode 100644 index 000000000..4ad9382eb --- /dev/null +++ b/builtin/providers/aws/opsworks_layers.go @@ -0,0 +1,558 @@ +package aws + +import ( + "fmt" + "log" + "strconv" + + "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/schema" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/opsworks" +) + +// OpsWorks has a single concept of "layer" which represents several different +// layer types. The differences between these are in some extra properties that +// get packed into an "Attributes" map, but in the OpsWorks UI these are presented +// as first-class options, and so Terraform prefers to expose them this way and +// hide the implementation detail that they are all packed into a single type +// in the underlying API. +// +// This file contains utilities that are shared between all of the concrete +// layer resource types, which have names matching aws_opsworks_*_layer . + +type opsworksLayerTypeAttribute struct { + AttrName string + Type schema.ValueType + Default interface{} + Required bool + WriteOnly bool +} + +type opsworksLayerType struct { + TypeName string + DefaultLayerName string + Attributes map[string]*opsworksLayerTypeAttribute + CustomShortName bool +} + +var ( + opsworksTrueString = "1" + opsworksFalseString = "0" +) + +func (lt *opsworksLayerType) SchemaResource() *schema.Resource { + resourceSchema := map[string]*schema.Schema{ + "id": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "auto_assign_elastic_ips": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + + "auto_assign_public_ips": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + + "custom_instance_profile_arn": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + + "custom_setup_recipes": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "custom_configure_recipes": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "custom_deploy_recipes": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "custom_undeploy_recipes": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "custom_shutdown_recipes": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "custom_security_group_ids": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + + "auto_healing": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + + "install_updates_on_boot": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + + "instance_shutdown_timeout": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Default: 120, + }, + + "drain_elb_on_shutdown": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + + "system_packages": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + + "stack_id": &schema.Schema{ + Type: 
schema.TypeString, + ForceNew: true, + Required: true, + }, + + "use_ebs_optimized_instances": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + + "ebs_volume": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + + "iops": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Default: 0, + }, + + "mount_point": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "number_of_disks": &schema.Schema{ + Type: schema.TypeInt, + Required: true, + }, + + "raid_level": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "", + }, + + "size": &schema.Schema{ + Type: schema.TypeInt, + Required: true, + }, + + "type": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "standard", + }, + }, + }, + Set: func(v interface{}) int { + m := v.(map[string]interface{}) + return hashcode.String(m["mount_point"].(string)) + }, + }, + } + + if lt.CustomShortName { + resourceSchema["short_name"] = &schema.Schema{ + Type: schema.TypeString, + Required: true, + } + } + + if lt.DefaultLayerName != "" { + resourceSchema["name"] = &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: lt.DefaultLayerName, + } + } else { + resourceSchema["name"] = &schema.Schema{ + Type: schema.TypeString, + Required: true, + } + } + + for key, def := range lt.Attributes { + resourceSchema[key] = &schema.Schema{ + Type: def.Type, + Default: def.Default, + Required: def.Required, + Optional: !def.Required, + } + } + + return &schema.Resource{ + Read: func(d *schema.ResourceData, meta interface{}) error { + client := meta.(*AWSClient).opsworksconn + return lt.Read(d, client) + }, + Create: func(d *schema.ResourceData, meta interface{}) error { + client := meta.(*AWSClient).opsworksconn + return lt.Create(d, client) + }, + Update: func(d *schema.ResourceData, meta interface{}) error { + client := meta.(*AWSClient).opsworksconn + return lt.Update(d, client) + }, + Delete: func(d *schema.ResourceData, meta interface{}) error { + client := meta.(*AWSClient).opsworksconn + return lt.Delete(d, client) + }, + + Schema: resourceSchema, + } +} + +func (lt *opsworksLayerType) Read(d *schema.ResourceData, client *opsworks.OpsWorks) error { + + req := &opsworks.DescribeLayersInput{ + LayerIds: []*string{ + aws.String(d.Id()), + }, + } + + log.Printf("[DEBUG] Reading OpsWorks layer: %s", d.Id()) + + resp, err := client.DescribeLayers(req) + if err != nil { + if awserr, ok := err.(awserr.Error); ok { + if awserr.Code() == "ResourceNotFoundException" { + d.SetId("") + return nil + } + } + return err + } + + layer := resp.Layers[0] + d.Set("id", layer.LayerId) + d.Set("auto_assign_elastic_ips", layer.AutoAssignElasticIps) + d.Set("auto_assign_public_ips", layer.AutoAssignPublicIps) + d.Set("custom_instance_profile_arn", layer.CustomInstanceProfileArn) + d.Set("custom_security_group_ids", unwrapAwsStringList(layer.CustomSecurityGroupIds)) + d.Set("auto_healing", layer.EnableAutoHealing) + d.Set("install_updates_on_boot", layer.InstallUpdatesOnBoot) + d.Set("name", layer.Name) + d.Set("system_packages", unwrapAwsStringList(layer.Packages)) + d.Set("stack_id", layer.StackId) + d.Set("use_ebs_optimized_instances", layer.UseEbsOptimizedInstances) + + if lt.CustomShortName { + d.Set("short_name", layer.Shortname) + } + + lt.SetAttributeMap(d, layer.Attributes) + lt.SetLifecycleEventConfiguration(d, layer.LifecycleEventConfiguration) + lt.SetCustomRecipes(d, layer.CustomRecipes) 
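+	// ebs_volume is stored as a set hashed on mount_point, so the
+	// SetVolumeConfigurations call below must emit one map per volume using
+	// exactly the keys the schema declares.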
+ lt.SetVolumeConfigurations(d, layer.VolumeConfigurations) + + return nil +} + +func (lt *opsworksLayerType) Create(d *schema.ResourceData, client *opsworks.OpsWorks) error { + + req := &opsworks.CreateLayerInput{ + AutoAssignElasticIps: aws.Bool(d.Get("auto_assign_elastic_ips").(bool)), + AutoAssignPublicIps: aws.Bool(d.Get("auto_assign_public_ips").(bool)), + CustomInstanceProfileArn: aws.String(d.Get("custom_instance_profile_arn").(string)), + CustomRecipes: lt.CustomRecipes(d), + CustomSecurityGroupIds: makeAwsStringSet(d.Get("custom_security_group_ids").(*schema.Set)), + EnableAutoHealing: aws.Bool(d.Get("auto_healing").(bool)), + InstallUpdatesOnBoot: aws.Bool(d.Get("install_updates_on_boot").(bool)), + LifecycleEventConfiguration: lt.LifecycleEventConfiguration(d), + Name: aws.String(d.Get("name").(string)), + Packages: makeAwsStringSet(d.Get("system_packages").(*schema.Set)), + Type: aws.String(lt.TypeName), + StackId: aws.String(d.Get("stack_id").(string)), + UseEbsOptimizedInstances: aws.Bool(d.Get("use_ebs_optimized_instances").(bool)), + Attributes: lt.AttributeMap(d), + VolumeConfigurations: lt.VolumeConfigurations(d), + } + + if lt.CustomShortName { + req.Shortname = aws.String(d.Get("short_name").(string)) + } else { + req.Shortname = aws.String(lt.TypeName) + } + + log.Printf("[DEBUG] Creating OpsWorks layer: %s", d.Id()) + + resp, err := client.CreateLayer(req) + if err != nil { + return err + } + + layerId := *resp.LayerId + d.SetId(layerId) + d.Set("id", layerId) + + return lt.Read(d, client) +} + +func (lt *opsworksLayerType) Update(d *schema.ResourceData, client *opsworks.OpsWorks) error { + + req := &opsworks.UpdateLayerInput{ + LayerId: aws.String(d.Id()), + AutoAssignElasticIps: aws.Bool(d.Get("auto_assign_elastic_ips").(bool)), + AutoAssignPublicIps: aws.Bool(d.Get("auto_assign_public_ips").(bool)), + CustomInstanceProfileArn: aws.String(d.Get("custom_instance_profile_arn").(string)), + CustomRecipes: lt.CustomRecipes(d), + CustomSecurityGroupIds: makeAwsStringSet(d.Get("custom_security_group_ids").(*schema.Set)), + EnableAutoHealing: aws.Bool(d.Get("auto_healing").(bool)), + InstallUpdatesOnBoot: aws.Bool(d.Get("install_updates_on_boot").(bool)), + LifecycleEventConfiguration: lt.LifecycleEventConfiguration(d), + Name: aws.String(d.Get("name").(string)), + Packages: makeAwsStringSet(d.Get("system_packages").(*schema.Set)), + UseEbsOptimizedInstances: aws.Bool(d.Get("use_ebs_optimized_instances").(bool)), + Attributes: lt.AttributeMap(d), + VolumeConfigurations: lt.VolumeConfigurations(d), + } + + if lt.CustomShortName { + req.Shortname = aws.String(d.Get("short_name").(string)) + } else { + req.Shortname = aws.String(lt.TypeName) + } + + log.Printf("[DEBUG] Updating OpsWorks layer: %s", d.Id()) + + _, err := client.UpdateLayer(req) + if err != nil { + return err + } + + return lt.Read(d, client) +} + +func (lt *opsworksLayerType) Delete(d *schema.ResourceData, client *opsworks.OpsWorks) error { + req := &opsworks.DeleteLayerInput{ + LayerId: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Deleting OpsWorks layer: %s", d.Id()) + + _, err := client.DeleteLayer(req) + return err +} + +func (lt *opsworksLayerType) AttributeMap(d *schema.ResourceData) map[string]*string { + attrs := map[string]*string{} + + for key, def := range lt.Attributes { + value := d.Get(key) + switch def.Type { + case schema.TypeString: + strValue := value.(string) + attrs[def.AttrName] = &strValue + case schema.TypeInt: + intValue := value.(int) + strValue := strconv.Itoa(intValue) + 
			attrs[def.AttrName] = &strValue
+		case schema.TypeBool:
+			boolValue := value.(bool)
+			if boolValue {
+				attrs[def.AttrName] = &opsworksTrueString
+			} else {
+				attrs[def.AttrName] = &opsworksFalseString
+			}
+		default:
+			// should never happen
+			panic(fmt.Errorf("Unsupported OpsWorks layer attribute type"))
+		}
+	}
+
+	return attrs
+}
+
+func (lt *opsworksLayerType) SetAttributeMap(d *schema.ResourceData, attrs map[string]*string) {
+	for key, def := range lt.Attributes {
+		// Ignore write-only attributes; we'll just keep what we already have stored.
+		// (The AWS API returns garbage placeholder values for these.)
+		if def.WriteOnly {
+			continue
+		}
+
+		if strPtr, ok := attrs[def.AttrName]; ok && strPtr != nil {
+			strValue := *strPtr
+
+			switch def.Type {
+			case schema.TypeString:
+				d.Set(key, strValue)
+			case schema.TypeInt:
+				intValue, err := strconv.Atoi(strValue)
+				if err == nil {
+					d.Set(key, intValue)
+				} else {
+					// Got garbage from the AWS API
+					d.Set(key, nil)
+				}
+			case schema.TypeBool:
+				boolValue := true
+				if strValue == opsworksFalseString {
+					boolValue = false
+				}
+				d.Set(key, boolValue)
+			default:
+				// should never happen
+				panic(fmt.Errorf("Unsupported OpsWorks layer attribute type"))
+			}
+		} else {
+			d.Set(key, nil)
+		}
+	}
+}
+
+func (lt *opsworksLayerType) LifecycleEventConfiguration(d *schema.ResourceData) *opsworks.LifecycleEventConfiguration {
+	return &opsworks.LifecycleEventConfiguration{
+		Shutdown: &opsworks.ShutdownEventConfiguration{
+			DelayUntilElbConnectionsDrained: aws.Bool(d.Get("drain_elb_on_shutdown").(bool)),
+			ExecutionTimeout:                aws.Int64(int64(d.Get("instance_shutdown_timeout").(int))),
+		},
+	}
+}
+
+func (lt *opsworksLayerType) SetLifecycleEventConfiguration(d *schema.ResourceData, v *opsworks.LifecycleEventConfiguration) {
+	if v == nil || v.Shutdown == nil {
+		d.Set("drain_elb_on_shutdown", nil)
+		d.Set("instance_shutdown_timeout", nil)
+	} else {
+		d.Set("drain_elb_on_shutdown", v.Shutdown.DelayUntilElbConnectionsDrained)
+		d.Set("instance_shutdown_timeout", v.Shutdown.ExecutionTimeout)
+	}
+}
+
+func (lt *opsworksLayerType) CustomRecipes(d *schema.ResourceData) *opsworks.Recipes {
+	return &opsworks.Recipes{
+		Configure: makeAwsStringList(d.Get("custom_configure_recipes").([]interface{})),
+		Deploy:    makeAwsStringList(d.Get("custom_deploy_recipes").([]interface{})),
+		Setup:     makeAwsStringList(d.Get("custom_setup_recipes").([]interface{})),
+		Shutdown:  makeAwsStringList(d.Get("custom_shutdown_recipes").([]interface{})),
+		Undeploy:  makeAwsStringList(d.Get("custom_undeploy_recipes").([]interface{})),
+	}
+}
+
+func (lt *opsworksLayerType) SetCustomRecipes(d *schema.ResourceData, v *opsworks.Recipes) {
+	// Null out everything first, and then we'll consider what to put back.
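+	// A nil write clears recipes that were removed remotely, so state
+	// converges instead of keeping a stale list.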
+ d.Set("custom_configure_recipes", nil) + d.Set("custom_deploy_recipes", nil) + d.Set("custom_setup_recipes", nil) + d.Set("custom_shutdown_recipes", nil) + d.Set("custom_undeploy_recipes", nil) + + if v == nil { + return + } + + d.Set("custom_configure_recipes", unwrapAwsStringList(v.Configure)) + d.Set("custom_deploy_recipes", unwrapAwsStringList(v.Deploy)) + d.Set("custom_setup_recipes", unwrapAwsStringList(v.Setup)) + d.Set("custom_shutdown_recipes", unwrapAwsStringList(v.Shutdown)) + d.Set("custom_undeploy_recipes", unwrapAwsStringList(v.Undeploy)) +} + +func (lt *opsworksLayerType) VolumeConfigurations(d *schema.ResourceData) []*opsworks.VolumeConfiguration { + configuredVolumes := d.Get("ebs_volume").(*schema.Set).List() + result := make([]*opsworks.VolumeConfiguration, len(configuredVolumes)) + + for i := 0; i < len(configuredVolumes); i++ { + volumeData := configuredVolumes[i].(map[string]interface{}) + + result[i] = &opsworks.VolumeConfiguration{ + MountPoint: aws.String(volumeData["mount_point"].(string)), + NumberOfDisks: aws.Int64(int64(volumeData["number_of_disks"].(int))), + Size: aws.Int64(int64(volumeData["size"].(int))), + VolumeType: aws.String(volumeData["type"].(string)), + } + iops := int64(volumeData["iops"].(int)) + if iops != 0 { + result[i].Iops = aws.Int64(iops) + } + + raidLevelStr := volumeData["raid_level"].(string) + if raidLevelStr != "" { + raidLevel, err := strconv.Atoi(raidLevelStr) + if err == nil { + result[i].RaidLevel = aws.Int64(int64(raidLevel)) + } + } + } + + return result +} + +func (lt *opsworksLayerType) SetVolumeConfigurations(d *schema.ResourceData, v []*opsworks.VolumeConfiguration) { + newValue := make([]*map[string]interface{}, len(v)) + + for i := 0; i < len(v); i++ { + config := v[i] + data := make(map[string]interface{}) + newValue[i] = &data + + if config.Iops != nil { + data["iops"] = int(*config.Iops) + } else { + data["iops"] = 0 + } + if config.MountPoint != nil { + data["mount_point"] = *config.MountPoint + } + if config.NumberOfDisks != nil { + data["number_of_disks"] = int(*config.NumberOfDisks) + } + if config.RaidLevel != nil { + data["raid_level"] = strconv.Itoa(int(*config.RaidLevel)) + } + if config.Size != nil { + data["size"] = int(*config.Size) + } + if config.VolumeType != nil { + data["type"] = *config.VolumeType + } + } + + d.Set("ebs_volume", newValue) +} diff --git a/builtin/providers/aws/provider.go b/builtin/providers/aws/provider.go index 6b2c16c7a..5b02d4a70 100644 --- a/builtin/providers/aws/provider.go +++ b/builtin/providers/aws/provider.go @@ -163,24 +163,34 @@ func Provider() terraform.ResourceProvider { "aws_autoscaling_group": resourceAwsAutoscalingGroup(), "aws_autoscaling_notification": resourceAwsAutoscalingNotification(), "aws_autoscaling_policy": resourceAwsAutoscalingPolicy(), + "aws_cloudformation_stack": resourceAwsCloudFormationStack(), + "aws_cloudwatch_log_group": resourceAwsCloudWatchLogGroup(), + "aws_autoscaling_lifecycle_hook": resourceAwsAutoscalingLifecycleHook(), "aws_cloudwatch_metric_alarm": resourceAwsCloudWatchMetricAlarm(), + "aws_codedeploy_app": resourceAwsCodeDeployApp(), + "aws_codedeploy_deployment_group": resourceAwsCodeDeployDeploymentGroup(), "aws_customer_gateway": resourceAwsCustomerGateway(), "aws_db_instance": resourceAwsDbInstance(), "aws_db_parameter_group": resourceAwsDbParameterGroup(), "aws_db_security_group": resourceAwsDbSecurityGroup(), "aws_db_subnet_group": resourceAwsDbSubnetGroup(), + "aws_directory_service_directory": resourceAwsDirectoryServiceDirectory(), 
"aws_dynamodb_table": resourceAwsDynamoDbTable(), "aws_ebs_volume": resourceAwsEbsVolume(), "aws_ecs_cluster": resourceAwsEcsCluster(), "aws_ecs_service": resourceAwsEcsService(), "aws_ecs_task_definition": resourceAwsEcsTaskDefinition(), + "aws_efs_file_system": resourceAwsEfsFileSystem(), + "aws_efs_mount_target": resourceAwsEfsMountTarget(), "aws_eip": resourceAwsEip(), "aws_elasticache_cluster": resourceAwsElasticacheCluster(), "aws_elasticache_parameter_group": resourceAwsElasticacheParameterGroup(), "aws_elasticache_security_group": resourceAwsElasticacheSecurityGroup(), "aws_elasticache_subnet_group": resourceAwsElasticacheSubnetGroup(), + "aws_elasticsearch_domain": resourceAwsElasticSearchDomain(), "aws_elb": resourceAwsElb(), "aws_flow_log": resourceAwsFlowLog(), + "aws_glacier_vault": resourceAwsGlacierVault(), "aws_iam_access_key": resourceAwsIamAccessKey(), "aws_iam_group_policy": resourceAwsIamGroupPolicy(), "aws_iam_group": resourceAwsIamGroup(), @@ -190,6 +200,7 @@ func Provider() terraform.ResourceProvider { "aws_iam_policy_attachment": resourceAwsIamPolicyAttachment(), "aws_iam_role_policy": resourceAwsIamRolePolicy(), "aws_iam_role": resourceAwsIamRole(), + "aws_iam_saml_provider": resourceAwsIamSamlProvider(), "aws_iam_server_certificate": resourceAwsIAMServerCertificate(), "aws_iam_user_policy": resourceAwsIamUserPolicy(), "aws_iam_user": resourceAwsIamUser(), @@ -203,7 +214,21 @@ func Provider() terraform.ResourceProvider { "aws_main_route_table_association": resourceAwsMainRouteTableAssociation(), "aws_network_acl": resourceAwsNetworkAcl(), "aws_network_interface": resourceAwsNetworkInterface(), + "aws_opsworks_stack": resourceAwsOpsworksStack(), + "aws_opsworks_java_app_layer": resourceAwsOpsworksJavaAppLayer(), + "aws_opsworks_haproxy_layer": resourceAwsOpsworksHaproxyLayer(), + "aws_opsworks_static_web_layer": resourceAwsOpsworksStaticWebLayer(), + "aws_opsworks_php_app_layer": resourceAwsOpsworksPhpAppLayer(), + "aws_opsworks_rails_app_layer": resourceAwsOpsworksRailsAppLayer(), + "aws_opsworks_nodejs_app_layer": resourceAwsOpsworksNodejsAppLayer(), + "aws_opsworks_memcached_layer": resourceAwsOpsworksMemcachedLayer(), + "aws_opsworks_mysql_layer": resourceAwsOpsworksMysqlLayer(), + "aws_opsworks_ganglia_layer": resourceAwsOpsworksGangliaLayer(), + "aws_opsworks_custom_layer": resourceAwsOpsworksCustomLayer(), + "aws_placement_group": resourceAwsPlacementGroup(), "aws_proxy_protocol_policy": resourceAwsProxyProtocolPolicy(), + "aws_rds_cluster": resourceAwsRDSCluster(), + "aws_rds_cluster_instance": resourceAwsRDSClusterInstance(), "aws_route53_delegation_set": resourceAwsRoute53DelegationSet(), "aws_route53_record": resourceAwsRoute53Record(), "aws_route53_zone_association": resourceAwsRoute53ZoneAssociation(), diff --git a/builtin/providers/aws/resource_aws_ami.go b/builtin/providers/aws/resource_aws_ami.go index ec3ce73b9..621881036 100644 --- a/builtin/providers/aws/resource_aws_ami.go +++ b/builtin/providers/aws/resource_aws_ami.go @@ -130,7 +130,7 @@ func resourceAwsAmiRead(d *schema.ResourceData, meta interface{}) error { } image := res.Images[0] - state := *(image.State) + state := *image.State if state == "pending" { // This could happen if a user manually adds an image we didn't create @@ -142,7 +142,7 @@ func resourceAwsAmiRead(d *schema.ResourceData, meta interface{}) error { if err != nil { return err } - state = *(image.State) + state = *image.State } if state == "deregistered" { @@ -170,22 +170,22 @@ func resourceAwsAmiRead(d *schema.ResourceData, 
meta interface{}) error { for _, blockDev := range image.BlockDeviceMappings { if blockDev.Ebs != nil { ebsBlockDev := map[string]interface{}{ - "device_name": *(blockDev.DeviceName), - "delete_on_termination": *(blockDev.Ebs.DeleteOnTermination), - "encrypted": *(blockDev.Ebs.Encrypted), + "device_name": *blockDev.DeviceName, + "delete_on_termination": *blockDev.Ebs.DeleteOnTermination, + "encrypted": *blockDev.Ebs.Encrypted, "iops": 0, - "snapshot_id": *(blockDev.Ebs.SnapshotId), - "volume_size": int(*(blockDev.Ebs.VolumeSize)), - "volume_type": *(blockDev.Ebs.VolumeType), + "snapshot_id": *blockDev.Ebs.SnapshotId, + "volume_size": int(*blockDev.Ebs.VolumeSize), + "volume_type": *blockDev.Ebs.VolumeType, } if blockDev.Ebs.Iops != nil { - ebsBlockDev["iops"] = int(*(blockDev.Ebs.Iops)) + ebsBlockDev["iops"] = int(*blockDev.Ebs.Iops) } ebsBlockDevs = append(ebsBlockDevs, ebsBlockDev) } else { ephemeralBlockDevs = append(ephemeralBlockDevs, map[string]interface{}{ - "device_name": *(blockDev.DeviceName), - "virtual_name": *(blockDev.VirtualName), + "device_name": *blockDev.DeviceName, + "virtual_name": *blockDev.VirtualName, }) } } @@ -301,7 +301,7 @@ func resourceAwsAmiWaitForAvailable(id string, client *ec2.EC2) (*ec2.Image, err return nil, fmt.Errorf("new AMI vanished while pending") } - state := *(res.Images[0].State) + state := *res.Images[0].State if state == "pending" { // Give it a few seconds before we poll again. @@ -316,7 +316,7 @@ func resourceAwsAmiWaitForAvailable(id string, client *ec2.EC2) (*ec2.Image, err // If we're not pending or available then we're in one of the invalid/error // states, so stop polling and bail out. - stateReason := *(res.Images[0].StateReason) + stateReason := *res.Images[0].StateReason return nil, fmt.Errorf("new AMI became %s while pending: %s", state, stateReason) } } diff --git a/builtin/providers/aws/resource_aws_app_cookie_stickiness_policy.go b/builtin/providers/aws/resource_aws_app_cookie_stickiness_policy.go index 3f7e1bf7f..0fe85f9e9 100644 --- a/builtin/providers/aws/resource_aws_app_cookie_stickiness_policy.go +++ b/builtin/providers/aws/resource_aws_app_cookie_stickiness_policy.go @@ -2,6 +2,7 @@ package aws import ( "fmt" + "regexp" "strings" "github.com/aws/aws-sdk-go/aws" @@ -15,8 +16,6 @@ func resourceAwsAppCookieStickinessPolicy() *schema.Resource { // There is no concept of "updating" an App Stickiness policy in // the AWS API. 
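 		// With no Update function, any change to the arguments must be
 		// modeled as ForceNew so the policy is destroyed and recreated.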
 		Create: resourceAwsAppCookieStickinessPolicyCreate,
-		Update: resourceAwsAppCookieStickinessPolicyCreate,
-
 		Read:   resourceAwsAppCookieStickinessPolicyRead,
 		Delete: resourceAwsAppCookieStickinessPolicyDelete,
 
@@ -25,6 +24,14 @@ func resourceAwsAppCookieStickinessPolicy() *schema.Resource {
 				Type:     schema.TypeString,
 				Required: true,
 				ForceNew: true,
+				ValidateFunc: func(v interface{}, k string) (ws []string, es []error) {
+					value := v.(string)
+					if !regexp.MustCompile(`^[0-9A-Za-z-]+$`).MatchString(value) {
+						es = append(es, fmt.Errorf(
+							"only alphanumeric characters and hyphens allowed in %q", k))
+					}
+					return
+				},
 			},
 
 			"load_balancer": &schema.Schema{
diff --git a/builtin/providers/aws/resource_aws_autoscaling_group.go b/builtin/providers/aws/resource_aws_autoscaling_group.go
index 771bda2e3..f457e6dcd 100644
--- a/builtin/providers/aws/resource_aws_autoscaling_group.go
+++ b/builtin/providers/aws/resource_aws_autoscaling_group.go
@@ -73,8 +73,7 @@ func resourceAwsAutoscalingGroup() *schema.Resource {
 			"force_delete": &schema.Schema{
 				Type:     schema.TypeBool,
 				Optional: true,
-				Computed: true,
-				ForceNew: true,
+				Default:  false,
 			},
 
 			"health_check_grace_period": &schema.Schema{
@@ -112,12 +111,28 @@ func resourceAwsAutoscalingGroup() *schema.Resource {
 			},
 
 			"termination_policies": &schema.Schema{
-				Type:     schema.TypeSet,
+				Type:     schema.TypeList,
 				Optional: true,
-				Computed: true,
-				ForceNew: true,
 				Elem:     &schema.Schema{Type: schema.TypeString},
-				Set:      schema.HashString,
+			},
+
+			"wait_for_capacity_timeout": &schema.Schema{
+				Type:     schema.TypeString,
+				Optional: true,
+				Default:  "10m",
+				ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) {
+					value := v.(string)
+					duration, err := time.ParseDuration(value)
+					if err != nil {
+						errors = append(errors, fmt.Errorf(
+							"%q cannot be parsed as a duration: %s", k, err))
+					}
+					if duration < 0 {
+						errors = append(errors, fmt.Errorf(
+							"%q must not be negative", k))
+					}
+					return
+				},
 			},
 
 			"tag": autoscalingTagsSchema(),
@@ -169,9 +184,8 @@ func resourceAwsAutoscalingGroupCreate(d *schema.ResourceData, meta interface{})
 		autoScalingGroupOpts.VPCZoneIdentifier = expandVpcZoneIdentifiers(v.(*schema.Set).List())
 	}
 
-	if v, ok := d.GetOk("termination_policies"); ok && v.(*schema.Set).Len() > 0 {
-		autoScalingGroupOpts.TerminationPolicies = expandStringList(
-			v.(*schema.Set).List())
+	if v, ok := d.GetOk("termination_policies"); ok && len(v.([]interface{})) > 0 {
+		autoScalingGroupOpts.TerminationPolicies = expandStringList(v.([]interface{}))
 	}
 
 	log.Printf("[DEBUG] AutoScaling Group create configuration: %#v", autoScalingGroupOpts)
@@ -262,6 +276,24 @@ func resourceAwsAutoscalingGroupUpdate(d *schema.ResourceData, meta interface{})
 		}
 	}
 
+	if d.HasChange("termination_policies") {
+		// If the termination policy is set to null, we need to explicitly set
+		// it back to "Default", or the API won't reset it for us.
+		// This means GetOk() will fail us on the zero check.
+		v := d.Get("termination_policies")
+		if len(v.([]interface{})) > 0 {
+			opts.TerminationPolicies = expandStringList(v.([]interface{}))
+		} else {
+			// Policies is a slice of string pointers, so build one.
+			// Maybe there's a better idiom for this?
+			log.Printf("[DEBUG] Explicitly setting null termination policy to 'Default'")
+			pol := "Default"
+			s := make([]*string, 1, 1)
+			s[0] = &pol
+			opts.TerminationPolicies = s
+		}
+	}
+
 	if err := setAutoscalingTags(conn, d); err != nil {
 		return err
 	} else {
@@ -334,15 +366,9 @@ func resourceAwsAutoscalingGroupDelete(d *schema.ResourceData, meta interface{})
 	}
 
 	log.Printf("[DEBUG] AutoScaling Group destroy: %v", d.Id())
-	deleteopts := autoscaling.DeleteAutoScalingGroupInput{AutoScalingGroupName: aws.String(d.Id())}
-
-	// You can force an autoscaling group to delete
-	// even if it's in the process of scaling a resource.
-	// Normally, you would set the min-size and max-size to 0,0
-	// and then delete the group. This bypasses that and leaves
-	// resources potentially dangling.
-	if d.Get("force_delete").(bool) {
-		deleteopts.ForceDelete = aws.Bool(true)
+	deleteopts := autoscaling.DeleteAutoScalingGroupInput{
+		AutoScalingGroupName: aws.String(d.Id()),
+		ForceDelete:          aws.Bool(d.Get("force_delete").(bool)),
 	}
 
 	// We retry the delete operation to handle InUse/InProgress errors coming
@@ -414,6 +440,11 @@ func getAwsAutoscalingGroup(
 func resourceAwsAutoscalingGroupDrain(d *schema.ResourceData, meta interface{}) error {
 	conn := meta.(*AWSClient).autoscalingconn
 
+	if d.Get("force_delete").(bool) {
+		log.Printf("[DEBUG] Skipping ASG drain, force_delete was set.")
+		return nil
+	}
+
 	// First, set the capacity to zero so the group will drain
 	log.Printf("[DEBUG] Reducing autoscaling group capacity to zero")
 	opts := autoscaling.UpdateAutoScalingGroupInput{
@@ -445,8 +476,6 @@ func resourceAwsAutoscalingGroupDrain(d *schema.ResourceData, meta interface{})
 	})
 }
 
-var waitForASGCapacityTimeout = 10 * time.Minute
-
 // Waits for a minimum number of healthy instances to show up as healthy in the
 // ASG before continuing. Waits up to `waitForASGCapacityTimeout` for
 // "desired_capacity", or "min_size" if desired capacity is not specified.
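The `wait_for_capacity_timeout` value validated above is an ordinary Go duration string, and the hunk that follows treats a parsed value of zero as an instruction to skip capacity waiting entirely. A minimal standalone sketch of that parse-and-branch logic (the `capacityWait` helper and the sample values are illustrative, not code from this PR):

```go
package main

import (
	"fmt"
	"time"
)

// capacityWait mirrors the logic in waitForASGCapacity: an unparseable value
// is an error, "0" disables waiting, and anything else becomes the timeout
// handed to the retry loop.
func capacityWait(raw string) (time.Duration, bool, error) {
	wait, err := time.ParseDuration(raw)
	if err != nil {
		return 0, false, fmt.Errorf("%q cannot be parsed as a duration: %s", raw, err)
	}
	if wait == 0 {
		return 0, false, nil // capacity waiting disabled
	}
	return wait, true, nil
}

func main() {
	for _, v := range []string{"10m", "0", "bogus"} {
		d, doWait, err := capacityWait(v)
		fmt.Println(v, d, doWait, err)
	}
}
```

Using `time.ParseDuration` keeps the configuration surface to a single string attribute while still allowing both "disabled" and arbitrary timeouts.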
@@ -461,9 +490,20 @@ func waitForASGCapacity(d *schema.ResourceData, meta interface{}) error { } wantELB := d.Get("min_elb_capacity").(int) - log.Printf("[DEBUG] Waiting for capacity: %d ASG, %d ELB", wantASG, wantELB) + wait, err := time.ParseDuration(d.Get("wait_for_capacity_timeout").(string)) + if err != nil { + return err + } - return resource.Retry(waitForASGCapacityTimeout, func() error { + if wait == 0 { + log.Printf("[DEBUG] Capacity timeout set to 0, skipping capacity waiting.") + return nil + } + + log.Printf("[DEBUG] Waiting %s for capacity: %d ASG, %d ELB", + wait, wantASG, wantELB) + + return resource.Retry(wait, func() error { g, err := getAwsAutoscalingGroup(d, meta) if err != nil { return resource.RetryError{Err: err} diff --git a/builtin/providers/aws/resource_aws_autoscaling_group_test.go b/builtin/providers/aws/resource_aws_autoscaling_group_test.go index c1d3c2b24..1a25c9dea 100644 --- a/builtin/providers/aws/resource_aws_autoscaling_group_test.go +++ b/builtin/providers/aws/resource_aws_autoscaling_group_test.go @@ -45,7 +45,9 @@ func TestAccAWSAutoScalingGroup_basic(t *testing.T) { resource.TestCheckResourceAttr( "aws_autoscaling_group.bar", "force_delete", "true"), resource.TestCheckResourceAttr( - "aws_autoscaling_group.bar", "termination_policies.912102603", "OldestInstance"), + "aws_autoscaling_group.bar", "termination_policies.0", "OldestInstance"), + resource.TestCheckResourceAttr( + "aws_autoscaling_group.bar", "termination_policies.1", "ClosestToNextInstanceHour"), ), }, @@ -56,6 +58,8 @@ func TestAccAWSAutoScalingGroup_basic(t *testing.T) { testAccCheckAWSLaunchConfigurationExists("aws_launch_configuration.new", &lc), resource.TestCheckResourceAttr( "aws_autoscaling_group.bar", "desired_capacity", "5"), + resource.TestCheckResourceAttr( + "aws_autoscaling_group.bar", "termination_policies.0", "ClosestToNextInstanceHour"), testLaunchConfigurationName("aws_autoscaling_group.bar", &lc), testAccCheckAutoscalingTags(&group.Tags, "Bar", map[string]interface{}{ "value": "bar-foo", @@ -359,7 +363,7 @@ resource "aws_autoscaling_group" "bar" { health_check_type = "ELB" desired_capacity = 4 force_delete = true - termination_policies = ["OldestInstance"] + termination_policies = ["OldestInstance","ClosestToNextInstanceHour"] launch_configuration = "${aws_launch_configuration.foobar.name}" @@ -391,6 +395,7 @@ resource "aws_autoscaling_group" "bar" { health_check_type = "ELB" desired_capacity = 5 force_delete = true + termination_policies = ["ClosestToNextInstanceHour"] launch_configuration = "${aws_launch_configuration.new.name}" diff --git a/builtin/providers/aws/resource_aws_autoscaling_lifecycle_hook.go b/builtin/providers/aws/resource_aws_autoscaling_lifecycle_hook.go new file mode 100644 index 000000000..faacadb7a --- /dev/null +++ b/builtin/providers/aws/resource_aws_autoscaling_lifecycle_hook.go @@ -0,0 +1,175 @@ +package aws + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/autoscaling" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsAutoscalingLifecycleHook() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsAutoscalingLifecycleHookPut, + Read: resourceAwsAutoscalingLifecycleHookRead, + Update: resourceAwsAutoscalingLifecycleHookPut, + Delete: resourceAwsAutoscalingLifecycleHookDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "autoscaling_group_name": &schema.Schema{ + Type: 
schema.TypeString,
+				Required: true,
+			},
+			"default_result": &schema.Schema{
+				Type:     schema.TypeString,
+				Optional: true,
+			},
+			"heartbeat_timeout": &schema.Schema{
+				Type:     schema.TypeInt,
+				Optional: true,
+			},
+			"lifecycle_transition": &schema.Schema{
+				Type:     schema.TypeString,
+				Required: true,
+			},
+			"notification_metadata": &schema.Schema{
+				Type:     schema.TypeString,
+				Optional: true,
+			},
+			"notification_target_arn": &schema.Schema{
+				Type:     schema.TypeString,
+				Required: true,
+			},
+			"role_arn": &schema.Schema{
+				Type:     schema.TypeString,
+				Required: true,
+			},
+		},
+	}
+}
+
+func resourceAwsAutoscalingLifecycleHookPut(d *schema.ResourceData, meta interface{}) error {
+	autoscalingconn := meta.(*AWSClient).autoscalingconn
+
+	params := getAwsAutoscalingPutLifecycleHookInput(d)
+
+	log.Printf("[DEBUG] AutoScaling PutLifecycleHook: %#v", params)
+	_, err := autoscalingconn.PutLifecycleHook(&params)
+	if err != nil {
+		return fmt.Errorf("Error putting lifecycle hook: %s", err)
+	}
+
+	d.SetId(d.Get("name").(string))
+
+	return resourceAwsAutoscalingLifecycleHookRead(d, meta)
+}
+
+func resourceAwsAutoscalingLifecycleHookRead(d *schema.ResourceData, meta interface{}) error {
+	p, err := getAwsAutoscalingLifecycleHook(d, meta)
+	if err != nil {
+		return err
+	}
+	if p == nil {
+		d.SetId("")
+		return nil
+	}
+
+	log.Printf("[DEBUG] Read Lifecycle Hook: ASG: %s, SH: %s, Obj: %#v", d.Get("autoscaling_group_name"), d.Get("name"), p)
+
+	d.Set("default_result", p.DefaultResult)
+	d.Set("heartbeat_timeout", p.HeartbeatTimeout)
+	d.Set("lifecycle_transition", p.LifecycleTransition)
+	d.Set("notification_metadata", p.NotificationMetadata)
+	d.Set("notification_target_arn", p.NotificationTargetARN)
+	d.Set("name", p.LifecycleHookName)
+	d.Set("role_arn", p.RoleARN)
+
+	return nil
+}
+
+func resourceAwsAutoscalingLifecycleHookDelete(d *schema.ResourceData, meta interface{}) error {
+	autoscalingconn := meta.(*AWSClient).autoscalingconn
+	p, err := getAwsAutoscalingLifecycleHook(d, meta)
+	if err != nil {
+		return err
+	}
+	if p == nil {
+		return nil
+	}
+
+	params := autoscaling.DeleteLifecycleHookInput{
+		AutoScalingGroupName: aws.String(d.Get("autoscaling_group_name").(string)),
+		LifecycleHookName:    aws.String(d.Get("name").(string)),
+	}
+	if _, err := autoscalingconn.DeleteLifecycleHook(&params); err != nil {
+		return fmt.Errorf("Autoscaling Lifecycle Hook: %s", err)
+	}
+
+	d.SetId("")
+	return nil
+}
+
+func getAwsAutoscalingPutLifecycleHookInput(d *schema.ResourceData) autoscaling.PutLifecycleHookInput {
+	var params = autoscaling.PutLifecycleHookInput{
+		AutoScalingGroupName: aws.String(d.Get("autoscaling_group_name").(string)),
+		LifecycleHookName:    aws.String(d.Get("name").(string)),
+	}
+
+	if v, ok := d.GetOk("default_result"); ok {
+		params.DefaultResult = aws.String(v.(string))
+	}
+
+	if v, ok := d.GetOk("heartbeat_timeout"); ok {
+		params.HeartbeatTimeout = aws.Int64(int64(v.(int)))
+	}
+
+	if v, ok := d.GetOk("lifecycle_transition"); ok {
+		params.LifecycleTransition = aws.String(v.(string))
+	}
+
+	if v, ok := d.GetOk("notification_metadata"); ok {
+		params.NotificationMetadata = aws.String(v.(string))
+	}
+
+	if v, ok := d.GetOk("notification_target_arn"); ok {
+		params.NotificationTargetARN = aws.String(v.(string))
+	}
+
+	if v, ok := d.GetOk("role_arn"); ok {
+		params.RoleARN = aws.String(v.(string))
+	}
+
+	return params
+}
+
+func getAwsAutoscalingLifecycleHook(d *schema.ResourceData, meta interface{}) (*autoscaling.LifecycleHook, error) {
+	autoscalingconn := meta.(*AWSClient).autoscalingconn
+
+	params := autoscaling.DescribeLifecycleHooksInput{
+		AutoScalingGroupName: aws.String(d.Get("autoscaling_group_name").(string)),
+		LifecycleHookNames:   []*string{aws.String(d.Get("name").(string))},
+	}
+
+	log.Printf("[DEBUG] AutoScaling Lifecycle Hook Describe Params: %#v", params)
+	resp, err := autoscalingconn.DescribeLifecycleHooks(&params)
+	if err != nil {
+		return nil, fmt.Errorf("Error retrieving lifecycle hooks: %s", err)
+	}
+
+	// find lifecycle hooks
+	name := d.Get("name")
+	for idx, sp := range resp.LifecycleHooks {
+		if *sp.LifecycleHookName == name {
+			return resp.LifecycleHooks[idx], nil
+		}
+	}
+
+	// lifecycle hook not found
+	return nil, nil
+}
diff --git a/builtin/providers/aws/resource_aws_autoscaling_lifecycle_hook_test.go b/builtin/providers/aws/resource_aws_autoscaling_lifecycle_hook_test.go
new file mode 100644
index 000000000..f425570e9
--- /dev/null
+++ b/builtin/providers/aws/resource_aws_autoscaling_lifecycle_hook_test.go
@@ -0,0 +1,168 @@
+package aws
+
+import (
+	"fmt"
+	"testing"
+
+	"github.com/aws/aws-sdk-go/aws"
+	"github.com/aws/aws-sdk-go/service/autoscaling"
+	"github.com/hashicorp/terraform/helper/resource"
+	"github.com/hashicorp/terraform/terraform"
+)
+
+func TestAccAWSAutoscalingLifecycleHook_basic(t *testing.T) {
+	var hook autoscaling.LifecycleHook
+
+	resource.Test(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testAccCheckAWSAutoscalingLifecycleHookDestroy,
+		Steps: []resource.TestStep{
+			resource.TestStep{
+				Config: testAccAWSAutoscalingLifecycleHookConfig,
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckLifecycleHookExists("aws_autoscaling_lifecycle_hook.foobar", &hook),
+					resource.TestCheckResourceAttr("aws_autoscaling_lifecycle_hook.foobar", "autoscaling_group_name", "terraform-test-foobar5"),
+					resource.TestCheckResourceAttr("aws_autoscaling_lifecycle_hook.foobar", "default_result", "CONTINUE"),
+					resource.TestCheckResourceAttr("aws_autoscaling_lifecycle_hook.foobar", "heartbeat_timeout", "2000"),
+					resource.TestCheckResourceAttr("aws_autoscaling_lifecycle_hook.foobar", "lifecycle_transition", "autoscaling:EC2_INSTANCE_LAUNCHING"),
+				),
+			},
+		},
+	})
+}
+
+func testAccCheckLifecycleHookExists(n string, hook *autoscaling.LifecycleHook) resource.TestCheckFunc {
+	return func(s *terraform.State) error {
+		rs, ok := s.RootModule().Resources[n]
+		if !ok {
+			return fmt.Errorf("Not found: %s", n)
+		}
+
+		conn := testAccProvider.Meta().(*AWSClient).autoscalingconn
+		params := &autoscaling.DescribeLifecycleHooksInput{
+			AutoScalingGroupName: aws.String(rs.Primary.Attributes["autoscaling_group_name"]),
+			LifecycleHookNames:   []*string{aws.String(rs.Primary.ID)},
+		}
+		resp, err := conn.DescribeLifecycleHooks(params)
+		if err != nil {
+			return err
+		}
+		if len(resp.LifecycleHooks) == 0 {
+			return fmt.Errorf("LifecycleHook not found")
+		}
+
+		return nil
+	}
+}
+
+func testAccCheckAWSAutoscalingLifecycleHookDestroy(s *terraform.State) error {
+	conn := testAccProvider.Meta().(*AWSClient).autoscalingconn
+
+	for _, rs := range s.RootModule().Resources {
+		if rs.Type != "aws_autoscaling_group" {
+			continue
+		}
+
+		params := autoscaling.DescribeLifecycleHooksInput{
+			AutoScalingGroupName: aws.String(rs.Primary.Attributes["autoscaling_group_name"]),
+			LifecycleHookNames:   []*string{aws.String(rs.Primary.ID)},
+		}
+
+		resp, err := conn.DescribeLifecycleHooks(&params)
+
+		if err == nil {
+			if len(resp.LifecycleHooks) != 0 &&
*resp.LifecycleHooks[0].LifecycleHookName == rs.Primary.ID { + return fmt.Errorf("Lifecycle Hook Still Exists: %s", rs.Primary.ID) + } + } + } + + return nil +} + +var testAccAWSAutoscalingLifecycleHookConfig = fmt.Sprintf(` +resource "aws_launch_configuration" "foobar" { + name = "terraform-test-foobar5" + image_id = "ami-21f78e11" + instance_type = "t1.micro" +} + +resource "aws_sqs_queue" "foobar" { + name = "foobar" + delay_seconds = 90 + max_message_size = 2048 + message_retention_seconds = 86400 + receive_wait_time_seconds = 10 +} + +resource "aws_iam_role" "foobar" { + name = "foobar" + assume_role_policy = < 0 { + err = d.Set("notification_arns", schema.NewSet(schema.HashString, flattenStringList(stack.NotificationARNs))) + if err != nil { + return err + } + } + + originalParams := d.Get("parameters").(map[string]interface{}) + err = d.Set("parameters", flattenCloudFormationParameters(stack.Parameters, originalParams)) + if err != nil { + return err + } + + err = d.Set("tags", flattenCloudFormationTags(stack.Tags)) + if err != nil { + return err + } + + err = d.Set("outputs", flattenCloudFormationOutputs(stack.Outputs)) + if err != nil { + return err + } + + if len(stack.Capabilities) > 0 { + err = d.Set("capabilities", schema.NewSet(schema.HashString, flattenStringList(stack.Capabilities))) + if err != nil { + return err + } + } + + return nil +} + +func resourceAwsCloudFormationStackUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).cfconn + + input := &cloudformation.UpdateStackInput{ + StackName: aws.String(d.Get("name").(string)), + } + + if d.HasChange("template_body") { + input.TemplateBody = aws.String(normalizeJson(d.Get("template_body").(string))) + } + if d.HasChange("template_url") { + input.TemplateURL = aws.String(d.Get("template_url").(string)) + } + if d.HasChange("capabilities") { + input.Capabilities = expandStringList(d.Get("capabilities").(*schema.Set).List()) + } + if d.HasChange("notification_arns") { + input.NotificationARNs = expandStringList(d.Get("notification_arns").(*schema.Set).List()) + } + if d.HasChange("parameters") { + input.Parameters = expandCloudFormationParameters(d.Get("parameters").(map[string]interface{})) + } + if d.HasChange("policy_body") { + input.StackPolicyBody = aws.String(normalizeJson(d.Get("policy_body").(string))) + } + if d.HasChange("policy_url") { + input.StackPolicyURL = aws.String(d.Get("policy_url").(string)) + } + + log.Printf("[DEBUG] Updating CloudFormation stack: %s", input) + stack, err := conn.UpdateStack(input) + if err != nil { + return err + } + + lastUpdatedTime, err := getLastCfEventTimestamp(d.Get("name").(string), conn) + if err != nil { + return err + } + + wait := resource.StateChangeConf{ + Pending: []string{ + "UPDATE_COMPLETE_CLEANUP_IN_PROGRESS", + "UPDATE_IN_PROGRESS", + "UPDATE_ROLLBACK_IN_PROGRESS", + "UPDATE_ROLLBACK_COMPLETE_CLEANUP_IN_PROGRESS", + "UPDATE_ROLLBACK_COMPLETE", + }, + Target: "UPDATE_COMPLETE", + Timeout: 15 * time.Minute, + MinTimeout: 5 * time.Second, + Refresh: func() (interface{}, string, error) { + resp, err := conn.DescribeStacks(&cloudformation.DescribeStacksInput{ + StackName: aws.String(d.Get("name").(string)), + }) + if err != nil { + return nil, "", err + } + stack := resp.Stacks[0] + status := *stack.StackStatus + log.Printf("[DEBUG] Current CloudFormation stack status: %q", status) + + if status == "UPDATE_ROLLBACK_COMPLETE" { + failures, err := getCloudFormationFailures(stack.StackName, *lastUpdatedTime, conn) + if err != nil { + return resp, "", fmt.Errorf( + "Failed getting 
details about rollback: %q", err.Error()) + } + + return resp, "", fmt.Errorf( + "UPDATE_ROLLBACK_COMPLETE:\n%q", failures) + } + + return resp, status, err + }, + } + + _, err = wait.WaitForState() + if err != nil { + return err + } + + log.Printf("[DEBUG] CloudFormation stack %q has been updated", *stack.StackId) + + return resourceAwsCloudFormationStackRead(d, meta) +} + +func resourceAwsCloudFormationStackDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).cfconn + + input := &cloudformation.DeleteStackInput{ + StackName: aws.String(d.Get("name").(string)), + } + log.Printf("[DEBUG] Deleting CloudFormation stack %s", input) + _, err := conn.DeleteStack(input) + if err != nil { + awsErr, ok := err.(awserr.Error) + if !ok { + return err + } + + if awsErr.Code() == "ValidationError" { + // Ignore stack which has been already deleted + return nil + } + return err + } + + wait := resource.StateChangeConf{ + Pending: []string{"DELETE_IN_PROGRESS", "ROLLBACK_IN_PROGRESS"}, + Target: "DELETE_COMPLETE", + Timeout: 30 * time.Minute, + MinTimeout: 5 * time.Second, + Refresh: func() (interface{}, string, error) { + resp, err := conn.DescribeStacks(&cloudformation.DescribeStacksInput{ + StackName: aws.String(d.Get("name").(string)), + }) + + if err != nil { + awsErr, ok := err.(awserr.Error) + if !ok { + return resp, "DELETE_FAILED", err + } + + log.Printf("[DEBUG] Error when deleting CloudFormation stack: %s: %s", + awsErr.Code(), awsErr.Message()) + + if awsErr.Code() == "ValidationError" { + return resp, "DELETE_COMPLETE", nil + } + } + + if len(resp.Stacks) == 0 { + log.Printf("[DEBUG] CloudFormation stack %q is already gone", d.Get("name")) + return resp, "DELETE_COMPLETE", nil + } + + status := *resp.Stacks[0].StackStatus + log.Printf("[DEBUG] Current CloudFormation stack status: %q", status) + + return resp, status, err + }, + } + + _, err = wait.WaitForState() + if err != nil { + return err + } + + log.Printf("[DEBUG] CloudFormation stack %q has been deleted", d.Id()) + + d.SetId("") + + return nil +} + +// getLastCfEventTimestamp takes the first event in a list +// of events ordered from the newest to the oldest +// and extracts timestamp from it +// LastUpdatedTime only provides last >successful< updated time +func getLastCfEventTimestamp(stackName string, conn *cloudformation.CloudFormation) ( + *time.Time, error) { + output, err := conn.DescribeStackEvents(&cloudformation.DescribeStackEventsInput{ + StackName: aws.String(stackName), + }) + if err != nil { + return nil, err + } + + return output.StackEvents[0].Timestamp, nil +} + +// getCloudFormationFailures returns ResourceStatusReason(s) +// of events that should be failures based on regexp match of status +func getCloudFormationFailures(stackName *string, afterTime time.Time, + conn *cloudformation.CloudFormation) ([]string, error) { + var failures []string + // Only catching failures from last 100 events + // Some extra iteration logic via NextToken could be added + // but in reality it's nearly impossible to generate >100 + // events by a single stack update + events, err := conn.DescribeStackEvents(&cloudformation.DescribeStackEventsInput{ + StackName: stackName, + }) + + if err != nil { + return nil, err + } + + failRe := regexp.MustCompile("_FAILED$") + rollbackRe := regexp.MustCompile("^ROLLBACK_") + + for _, e := range events.StackEvents { + if (failRe.MatchString(*e.ResourceStatus) || rollbackRe.MatchString(*e.ResourceStatus)) && + e.Timestamp.After(afterTime) && e.ResourceStatusReason != nil 
{ + failures = append(failures, *e.ResourceStatusReason) + } + } + + return failures, nil +} diff --git a/builtin/providers/aws/resource_aws_cloudformation_stack_test.go b/builtin/providers/aws/resource_aws_cloudformation_stack_test.go new file mode 100644 index 000000000..7ad24be34 --- /dev/null +++ b/builtin/providers/aws/resource_aws_cloudformation_stack_test.go @@ -0,0 +1,228 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/cloudformation" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSCloudFormation_basic(t *testing.T) { + var stack cloudformation.Stack + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCloudFormationDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSCloudFormationConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckCloudFormationStackExists("aws_cloudformation_stack.network", &stack), + ), + }, + }, + }) +} + +func TestAccAWSCloudFormation_defaultParams(t *testing.T) { + var stack cloudformation.Stack + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCloudFormationDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSCloudFormationConfig_defaultParams, + Check: resource.ComposeTestCheckFunc( + testAccCheckCloudFormationStackExists("aws_cloudformation_stack.asg-demo", &stack), + ), + }, + }, + }) +} + +func TestAccAWSCloudFormation_allAttributes(t *testing.T) { + var stack cloudformation.Stack + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCloudFormationDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSCloudFormationConfig_allAttributes, + Check: resource.ComposeTestCheckFunc( + testAccCheckCloudFormationStackExists("aws_cloudformation_stack.full", &stack), + ), + }, + }, + }) +} + +func testAccCheckCloudFormationStackExists(n string, stack *cloudformation.Stack) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + conn := testAccProvider.Meta().(*AWSClient).cfconn + params := &cloudformation.DescribeStacksInput{ + StackName: aws.String(rs.Primary.ID), + } + resp, err := conn.DescribeStacks(params) + if err != nil { + return err + } + if len(resp.Stacks) == 0 { + return fmt.Errorf("CloudFormation stack not found") + } + + return nil + } +} + +func testAccCheckAWSCloudFormationDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).cfconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_cloudformation_stack" { + continue + } + + params := cloudformation.DescribeStacksInput{ + StackName: aws.String(rs.Primary.ID), + } + + resp, err := conn.DescribeStacks(&params) + + if err == nil { + if len(resp.Stacks) != 0 && + *resp.Stacks[0].StackId == rs.Primary.ID { + return fmt.Errorf("CloudFormation stack still exists: %q", rs.Primary.ID) + } + } + } + + return nil +} + +var testAccAWSCloudFormationConfig = ` +resource "aws_cloudformation_stack" "network" { + name = "tf-networking-stack" + template_body = < 100 { + errors = append(errors, fmt.Errorf( + "%q cannot exceed 100 characters", k)) + } + 
return + }, + }, + + "deployment_group_name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if len(value) > 100 { + errors = append(errors, fmt.Errorf( + "%q cannot exceed 100 characters", k)) + } + return + }, + }, + + "service_role_arn": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "autoscaling_groups": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + + "deployment_config_name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "CodeDeployDefault.OneAtATime", + ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if len(value) > 100 { + errors = append(errors, fmt.Errorf( + "%q cannot exceed 100 characters", k)) + } + return + }, + }, + + "ec2_tag_filter": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "key": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + + "type": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ValidateFunc: validateTagFilters, + }, + + "value": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + }, + }, + Set: resourceAwsCodeDeployTagFilterHash, + }, + + "on_premises_instance_tag_filter": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "key": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + + "type": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ValidateFunc: validateTagFilters, + }, + + "value": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + }, + }, + Set: resourceAwsCodeDeployTagFilterHash, + }, + }, + } +} + +func resourceAwsCodeDeployDeploymentGroupCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).codedeployconn + + application := d.Get("app_name").(string) + deploymentGroup := d.Get("deployment_group_name").(string) + + input := codedeploy.CreateDeploymentGroupInput{ + ApplicationName: aws.String(application), + DeploymentGroupName: aws.String(deploymentGroup), + ServiceRoleArn: aws.String(d.Get("service_role_arn").(string)), + } + if attr, ok := d.GetOk("deployment_config_name"); ok { + input.DeploymentConfigName = aws.String(attr.(string)) + } + if attr, ok := d.GetOk("autoscaling_groups"); ok { + input.AutoScalingGroups = expandStringList(attr.(*schema.Set).List()) + } + if attr, ok := d.GetOk("on_premises_instance_tag_filter"); ok { + onPremFilters := buildOnPremTagFilters(attr.(*schema.Set).List()) + input.OnPremisesInstanceTagFilters = onPremFilters + } + if attr, ok := d.GetOk("ec2_tag_filter"); ok { + ec2TagFilters := buildEC2TagFilters(attr.(*schema.Set).List()) + input.Ec2TagFilters = ec2TagFilters + } + + // Retry to handle IAM role eventual consistency. 
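+ // A freshly created IAM role is only eventually visible to CodeDeploy, so the + // first CreateDeploymentGroup call may fail with InvalidRoleException even though + // the role already exists; that error alone is retried below, for up to two minutes.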
+ var resp *codedeploy.CreateDeploymentGroupOutput + var err error + err = resource.Retry(2*time.Minute, func() error { + resp, err = conn.CreateDeploymentGroup(&input) + if err != nil { + codedeployErr, ok := err.(awserr.Error) + if !ok { + return &resource.RetryError{Err: err} + } + if codedeployErr.Code() == "InvalidRoleException" { + log.Printf("[DEBUG] Trying to create deployment group again: %q", + codedeployErr.Message()) + return err + } + + return &resource.RetryError{Err: err} + } + return nil + }) + if err != nil { + return err + } + + d.SetId(*resp.DeploymentGroupId) + + return resourceAwsCodeDeployDeploymentGroupRead(d, meta) +} + +func resourceAwsCodeDeployDeploymentGroupRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).codedeployconn + + log.Printf("[DEBUG] Reading CodeDeploy DeploymentGroup %s", d.Id()) + resp, err := conn.GetDeploymentGroup(&codedeploy.GetDeploymentGroupInput{ + ApplicationName: aws.String(d.Get("app_name").(string)), + DeploymentGroupName: aws.String(d.Get("deployment_group_name").(string)), + }) + if err != nil { + return err + } + + d.Set("app_name", *resp.DeploymentGroupInfo.ApplicationName) + d.Set("autoscaling_groups", resp.DeploymentGroupInfo.AutoScalingGroups) + d.Set("deployment_config_name", *resp.DeploymentGroupInfo.DeploymentConfigName) + d.Set("deployment_group_name", *resp.DeploymentGroupInfo.DeploymentGroupName) + d.Set("service_role_arn", *resp.DeploymentGroupInfo.ServiceRoleArn) + if err := d.Set("ec2_tag_filter", ec2TagFiltersToMap(resp.DeploymentGroupInfo.Ec2TagFilters)); err != nil { + return err + } + if err := d.Set("on_premises_instance_tag_filter", onPremisesTagFiltersToMap(resp.DeploymentGroupInfo.OnPremisesInstanceTagFilters)); err != nil { + return err + } + + return nil +} + +func resourceAwsCodeDeployDeploymentGroupUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).codedeployconn + + input := codedeploy.UpdateDeploymentGroupInput{ + ApplicationName: aws.String(d.Get("app_name").(string)), + CurrentDeploymentGroupName: aws.String(d.Get("deployment_group_name").(string)), + } + + if d.HasChange("autoscaling_groups") { + _, n := d.GetChange("autoscaling_groups") + input.AutoScalingGroups = expandStringList(n.(*schema.Set).List()) + } + if d.HasChange("deployment_config_name") { + _, n := d.GetChange("deployment_config_name") + input.DeploymentConfigName = aws.String(n.(string)) + } + if d.HasChange("deployment_group_name") { + _, n := d.GetChange("deployment_group_name") + input.NewDeploymentGroupName = aws.String(n.(string)) + } + + // TagFilters aren't like tags. They don't append. They simply replace. 
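+ // An update therefore always sends the complete desired filter set expanded from + // state; individual filters are never diffed against what is currently configured.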
+ if d.HasChange("on_premises_instance_tag_filter") { + _, n := d.GetChange("on_premises_instance_tag_filter") + onPremFilters := buildOnPremTagFilters(n.(*schema.Set).List()) + input.OnPremisesInstanceTagFilters = onPremFilters + } + if d.HasChange("ec2_tag_filter") { + _, n := d.GetChange("ec2_tag_filter") + ec2Filters := buildEC2TagFilters(n.(*schema.Set).List()) + input.Ec2TagFilters = ec2Filters + } + + log.Printf("[DEBUG] Updating CodeDeploy DeploymentGroup %s", d.Id()) + _, err := conn.UpdateDeploymentGroup(&input) + if err != nil { + return err + } + + return resourceAwsCodeDeployDeploymentGroupRead(d, meta) +} + +func resourceAwsCodeDeployDeploymentGroupDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).codedeployconn + + log.Printf("[DEBUG] Deleting CodeDeploy DeploymentGroup %s", d.Id()) + _, err := conn.DeleteDeploymentGroup(&codedeploy.DeleteDeploymentGroupInput{ + ApplicationName: aws.String(d.Get("app_name").(string)), + DeploymentGroupName: aws.String(d.Get("deployment_group_name").(string)), + }) + if err != nil { + return err + } + + d.SetId("") + + return nil +} + +// buildOnPremTagFilters converts raw schema lists into a list of +// codedeploy.TagFilters. +func buildOnPremTagFilters(configured []interface{}) []*codedeploy.TagFilter { + filters := make([]*codedeploy.TagFilter, 0) + for _, raw := range configured { + var filter codedeploy.TagFilter + m := raw.(map[string]interface{}) + + filter.Key = aws.String(m["key"].(string)) + filter.Type = aws.String(m["type"].(string)) + filter.Value = aws.String(m["value"].(string)) + + filters = append(filters, &filter) + } + + return filters +} + +// buildEC2TagFilters converts raw schema lists into a list of +// codedeploy.EC2TagFilters. +func buildEC2TagFilters(configured []interface{}) []*codedeploy.EC2TagFilter { + filters := make([]*codedeploy.EC2TagFilter, 0) + for _, raw := range configured { + var filter codedeploy.EC2TagFilter + m := raw.(map[string]interface{}) + + filter.Key = aws.String(m["key"].(string)) + filter.Type = aws.String(m["type"].(string)) + filter.Value = aws.String(m["value"].(string)) + + filters = append(filters, &filter) + } + + return filters +} + +// ec2TagFiltersToMap converts lists of tag filters into a []map[string]string. +func ec2TagFiltersToMap(list []*codedeploy.EC2TagFilter) []map[string]string { + result := make([]map[string]string, 0, len(list)) + for _, tf := range list { + l := make(map[string]string) + if *tf.Key != "" { + l["key"] = *tf.Key + } + if *tf.Value != "" { + l["value"] = *tf.Value + } + if *tf.Type != "" { + l["type"] = *tf.Type + } + result = append(result, l) + } + return result +} + +// onPremisesTagFiltersToMap converts lists of on-prem tag filters into a []map[string]string. +func onPremisesTagFiltersToMap(list []*codedeploy.TagFilter) []map[string]string { + result := make([]map[string]string, 0, len(list)) + for _, tf := range list { + l := make(map[string]string) + if *tf.Key != "" { + l["key"] = *tf.Key + } + if *tf.Value != "" { + l["value"] = *tf.Value + } + if *tf.Type != "" { + l["type"] = *tf.Type + } + result = append(result, l) + } + return result +} + +// validateTagFilters confirms the "type" component of a tag filter is one of +// AWS's three allowed filter types. 
+func validateTagFilters(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if value != "KEY_ONLY" && value != "VALUE_ONLY" && value != "KEY_AND_VALUE" { + errors = append(errors, fmt.Errorf( + "%q must be one of \"KEY_ONLY\", \"VALUE_ONLY\", or \"KEY_AND_VALUE\"", k)) + } + return +} + +func resourceAwsCodeDeployTagFilterHash(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + + // Nothing's actually required in tag filters, so we must check the + // presence of all values before attempting a hash. + if v, ok := m["key"]; ok { + buf.WriteString(fmt.Sprintf("%s-", v.(string))) + } + if v, ok := m["type"]; ok { + buf.WriteString(fmt.Sprintf("%s-", v.(string))) + } + if v, ok := m["value"]; ok { + buf.WriteString(fmt.Sprintf("%s-", v.(string))) + } + + return hashcode.String(buf.String()) +} diff --git a/builtin/providers/aws/resource_aws_codedeploy_deployment_group_test.go b/builtin/providers/aws/resource_aws_codedeploy_deployment_group_test.go new file mode 100644 index 000000000..7608b1f58 --- /dev/null +++ b/builtin/providers/aws/resource_aws_codedeploy_deployment_group_test.go @@ -0,0 +1,199 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/codedeploy" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSCodeDeployDeploymentGroup_basic(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSCodeDeployDeploymentGroup, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo"), + ), + }, + resource.TestStep{ + Config: testAccAWSCodeDeployDeploymentGroupModifier, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo"), + ), + }, + }, + }) +} + +func testAccCheckAWSCodeDeployDeploymentGroupDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).codedeployconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_codedeploy_deployment_group" { + continue + } + + resp, err := conn.GetDeploymentGroup(&codedeploy.GetDeploymentGroupInput{ + ApplicationName: aws.String(rs.Primary.Attributes["app_name"]), + DeploymentGroupName: aws.String(rs.Primary.Attributes["deployment_group_name"]), + }) + + if err == nil { + if resp.DeploymentGroupInfo.DeploymentGroupName != nil { + return fmt.Errorf("CodeDeploy deployment group still exists:\n%#v", *resp.DeploymentGroupInfo.DeploymentGroupName) + } + } + + return err + } + + return nil +} + +func testAccCheckAWSCodeDeployDeploymentGroupExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + _, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + return nil + } +} + +var testAccAWSCodeDeployDeploymentGroup = ` +resource "aws_codedeploy_app" "foo_app" { + name = "foo_app" +} + +resource "aws_iam_role_policy" "foo_policy" { + name = "foo_policy" + role = "${aws_iam_role.foo_role.id}" + policy = < 255 { + errors = append(errors, fmt.Errorf( + "%q cannot be greater than 255 characters", k)) + } + return + +} diff --git a/builtin/providers/aws/resource_aws_db_parameter_group_test.go 
b/builtin/providers/aws/resource_aws_db_parameter_group_test.go index 93e74bb74..d0042df23 100644 --- a/builtin/providers/aws/resource_aws_db_parameter_group_test.go +++ b/builtin/providers/aws/resource_aws_db_parameter_group_test.go @@ -2,7 +2,9 @@ package aws import ( "fmt" + "math/rand" "testing" + "time" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" @@ -106,6 +108,46 @@ func TestAccAWSDBParameterGroupOnly(t *testing.T) { }) } +func TestResourceAWSDBParameterGroupName_validation(t *testing.T) { + cases := []struct { + Value string + ErrCount int + }{ + { + Value: "tEsting123", + ErrCount: 1, + }, + { + Value: "testing123!", + ErrCount: 1, + }, + { + Value: "1testing123", + ErrCount: 1, + }, + { + Value: "testing--123", + ErrCount: 1, + }, + { + Value: "testing123-", + ErrCount: 1, + }, + { + Value: randomString(256), + ErrCount: 1, + }, + } + + for _, tc := range cases { + _, errors := validateDbParamGroupName(tc.Value, "aws_db_parameter_group_name") + + if len(errors) != tc.ErrCount { + t.Fatalf("Expected the DB Parameter Group Name to trigger a validation error") + } + } +} + func testAccCheckAWSDBParameterGroupDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).rdsconn @@ -193,6 +235,16 @@ func testAccCheckAWSDBParameterGroupExists(n string, v *rds.DBParameterGroup) re } } +func randomString(strlen int) string { + rand.Seed(time.Now().UTC().UnixNano()) + const chars = "abcdefghijklmnopqrstuvwxyz" + result := make([]byte, strlen) + for i := 0; i < strlen; i++ { + result[i] = chars[rand.Intn(len(chars))] + } + return string(result) +} + const testAccAWSDBParameterGroupConfig = ` resource "aws_db_parameter_group" "bar" { name = "parameter-group-test-terraform" diff --git a/builtin/providers/aws/resource_aws_db_security_group.go b/builtin/providers/aws/resource_aws_db_security_group.go index 6932fc971..367400ae7 100644 --- a/builtin/providers/aws/resource_aws_db_security_group.go +++ b/builtin/providers/aws/resource_aws_db_security_group.go @@ -9,8 +9,8 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/rds" + "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform/helper/hashcode" - "github.com/hashicorp/terraform/helper/multierror" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" ) diff --git a/builtin/providers/aws/resource_aws_db_subnet_group.go b/builtin/providers/aws/resource_aws_db_subnet_group.go index 9c09b72d7..e6b17ea1f 100644 --- a/builtin/providers/aws/resource_aws_db_subnet_group.go +++ b/builtin/providers/aws/resource_aws_db_subnet_group.go @@ -9,6 +9,7 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/iam" "github.com/aws/aws-sdk-go/service/rds" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" @@ -56,12 +57,15 @@ func resourceAwsDbSubnetGroup() *schema.Resource { Elem: &schema.Schema{Type: schema.TypeString}, Set: schema.HashString, }, + + "tags": tagsSchema(), }, } } func resourceAwsDbSubnetGroupCreate(d *schema.ResourceData, meta interface{}) error { rdsconn := meta.(*AWSClient).rdsconn + tags := tagsFromMapRDS(d.Get("tags").(map[string]interface{})) subnetIdsSet := d.Get("subnet_ids").(*schema.Set) subnetIds := make([]*string, subnetIdsSet.Len()) @@ -73,6 +77,7 @@ func resourceAwsDbSubnetGroupCreate(d *schema.ResourceData, meta interface{}) er DBSubnetGroupName: 
aws.String(d.Get("name").(string)), DBSubnetGroupDescription: aws.String(d.Get("description").(string)), SubnetIds: subnetIds, + Tags: tags, } log.Printf("[DEBUG] Create DB Subnet Group: %#v", createOpts) @@ -130,6 +135,28 @@ func resourceAwsDbSubnetGroupRead(d *schema.ResourceData, meta interface{}) erro } d.Set("subnet_ids", subnets) + // list tags for resource + // set tags + conn := meta.(*AWSClient).rdsconn + arn, err := buildRDSsubgrpARN(d, meta) + if err != nil { + log.Printf("[DEBUG] Error building ARN for DB Subnet Group, not setting Tags for group %s", *subnetGroup.DBSubnetGroupName) + } else { + resp, err := conn.ListTagsForResource(&rds.ListTagsForResourceInput{ + ResourceName: aws.String(arn), + }) + + if err != nil { + log.Printf("[DEBUG] Error retrieving tags for ARN: %s", arn) + } + + var dt []*rds.Tag + if len(resp.TagList) > 0 { + dt = resp.TagList + } + d.Set("tags", tagsToMapRDS(dt)) + } + return nil } @@ -156,6 +183,15 @@ func resourceAwsDbSubnetGroupUpdate(d *schema.ResourceData, meta interface{}) er return err } } + + if arn, err := buildRDSsubgrpARN(d, meta); err == nil { + if err := setTagsRDS(conn, d, arn); err != nil { + return err + } else { + d.SetPartial("tags") + } + } + return resourceAwsDbSubnetGroupRead(d, meta) } @@ -196,3 +232,17 @@ func resourceAwsDbSubnetGroupDeleteRefreshFunc( return d, "destroyed", nil } } + +func buildRDSsubgrpARN(d *schema.ResourceData, meta interface{}) (string, error) { + iamconn := meta.(*AWSClient).iamconn + region := meta.(*AWSClient).region + // A zero-value GetUserInput{} defers to the currently logged-in user + resp, err := iamconn.GetUser(&iam.GetUserInput{}) + if err != nil { + return "", err + } + userARN := *resp.User.Arn + accountID := strings.Split(userARN, ":")[4] + arn := fmt.Sprintf("arn:aws:rds:%s:%s:subgrp:%s", region, accountID, d.Id()) + return arn, nil +} diff --git a/builtin/providers/aws/resource_aws_db_subnet_group_test.go b/builtin/providers/aws/resource_aws_db_subnet_group_test.go index cbf1f8497..e189b1e21 100644 --- a/builtin/providers/aws/resource_aws_db_subnet_group_test.go +++ b/builtin/providers/aws/resource_aws_db_subnet_group_test.go @@ -150,6 +150,9 @@ resource "aws_db_subnet_group" "foo" { name = "FOO" description = "foo description" subnet_ids = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}"] + tags { + Name = "tf-dbsubnet-group-test" + } } ` diff --git a/builtin/providers/aws/resource_aws_directory_service_directory.go b/builtin/providers/aws/resource_aws_directory_service_directory.go new file mode 100644 index 000000000..1fdb9491e --- /dev/null +++ b/builtin/providers/aws/resource_aws_directory_service_directory.go @@ -0,0 +1,291 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/hashicorp/terraform/helper/schema" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/directoryservice" + "github.com/hashicorp/terraform/helper/resource" +) + +func resourceAwsDirectoryServiceDirectory() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsDirectoryServiceDirectoryCreate, + Read: resourceAwsDirectoryServiceDirectoryRead, + Update: resourceAwsDirectoryServiceDirectoryUpdate, + Delete: resourceAwsDirectoryServiceDirectoryDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "password": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "size": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + 
}, + "alias": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + "description": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "short_name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + "vpc_settings": &schema.Schema{ + Type: schema.TypeList, + Required: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "subnet_ids": &schema.Schema{ + Type: schema.TypeSet, + Required: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + "vpc_id": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + }, + }, + }, + "enable_sso": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "access_url": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + "dns_ip_addresses": &schema.Schema{ + Type: schema.TypeSet, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + Computed: true, + }, + "type": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func resourceAwsDirectoryServiceDirectoryCreate(d *schema.ResourceData, meta interface{}) error { + dsconn := meta.(*AWSClient).dsconn + + input := directoryservice.CreateDirectoryInput{ + Name: aws.String(d.Get("name").(string)), + Password: aws.String(d.Get("password").(string)), + Size: aws.String(d.Get("size").(string)), + } + + if v, ok := d.GetOk("description"); ok { + input.Description = aws.String(v.(string)) + } + if v, ok := d.GetOk("short_name"); ok { + input.ShortName = aws.String(v.(string)) + } + + if v, ok := d.GetOk("vpc_settings"); ok { + settings := v.([]interface{}) + + if len(settings) > 1 { + return fmt.Errorf("Only a single vpc_settings block is expected") + } else if len(settings) == 1 { + s := settings[0].(map[string]interface{}) + var subnetIds []*string + for _, id := range s["subnet_ids"].(*schema.Set).List() { + subnetIds = append(subnetIds, aws.String(id.(string))) + } + + vpcSettings := directoryservice.DirectoryVpcSettings{ + SubnetIds: subnetIds, + VpcId: aws.String(s["vpc_id"].(string)), + } + input.VpcSettings = &vpcSettings + } + } + + log.Printf("[DEBUG] Creating Directory Service: %s", input) + out, err := dsconn.CreateDirectory(&input) + if err != nil { + return err + } + log.Printf("[DEBUG] Directory Service created: %s", out) + d.SetId(*out.DirectoryId) + + // Wait for creation + log.Printf("[DEBUG] Waiting for DS (%q) to become available", d.Id()) + stateConf := &resource.StateChangeConf{ + Pending: []string{"Requested", "Creating", "Created"}, + Target: "Active", + Refresh: func() (interface{}, string, error) { + resp, err := dsconn.DescribeDirectories(&directoryservice.DescribeDirectoriesInput{ + DirectoryIds: []*string{aws.String(d.Id())}, + }) + if err != nil { + log.Printf("Error during creation of DS: %q", err.Error()) + return nil, "", err + } + + ds := resp.DirectoryDescriptions[0] + log.Printf("[DEBUG] Creation of DS %q is in following stage: %q.", + d.Id(), *ds.Stage) + return ds, *ds.Stage, nil + }, + Timeout: 10 * time.Minute, + } + if _, err := stateConf.WaitForState(); err != nil { + return fmt.Errorf( + "Error waiting for Directory Service (%s) to become available: %#v", + d.Id(), err) + } + + if v, ok := d.GetOk("alias"); ok { + d.SetPartial("alias") + + input := directoryservice.CreateAliasInput{ + DirectoryId: aws.String(d.Id()), + Alias: aws.String(v.(string)), + } 
+ + log.Printf("[DEBUG] Assigning alias %q to DS directory %q", + v.(string), d.Id()) + out, err := dsconn.CreateAlias(&input) + if err != nil { + return err + } + log.Printf("[DEBUG] Alias %q assigned to DS directory %q", + *out.Alias, *out.DirectoryId) + } + + return resourceAwsDirectoryServiceDirectoryUpdate(d, meta) +} + +func resourceAwsDirectoryServiceDirectoryUpdate(d *schema.ResourceData, meta interface{}) error { + dsconn := meta.(*AWSClient).dsconn + + if d.HasChange("enable_sso") { + d.SetPartial("enable_sso") + var err error + + if v, ok := d.GetOk("enable_sso"); ok && v.(bool) { + log.Printf("[DEBUG] Enabling SSO for DS directory %q", d.Id()) + _, err = dsconn.EnableSso(&directoryservice.EnableSsoInput{ + DirectoryId: aws.String(d.Id()), + }) + } else { + log.Printf("[DEBUG] Disabling SSO for DS directory %q", d.Id()) + _, err = dsconn.DisableSso(&directoryservice.DisableSsoInput{ + DirectoryId: aws.String(d.Id()), + }) + } + + if err != nil { + return err + } + } + + return resourceAwsDirectoryServiceDirectoryRead(d, meta) +} + +func resourceAwsDirectoryServiceDirectoryRead(d *schema.ResourceData, meta interface{}) error { + dsconn := meta.(*AWSClient).dsconn + + input := directoryservice.DescribeDirectoriesInput{ + DirectoryIds: []*string{aws.String(d.Id())}, + } + out, err := dsconn.DescribeDirectories(&input) + if err != nil { + return err + } + + dir := out.DirectoryDescriptions[0] + log.Printf("[DEBUG] Received DS directory: %s", *dir) + + d.Set("access_url", *dir.AccessUrl) + d.Set("alias", *dir.Alias) + if dir.Description != nil { + d.Set("description", *dir.Description) + } + d.Set("dns_ip_addresses", schema.NewSet(schema.HashString, flattenStringList(dir.DnsIpAddrs))) + d.Set("name", *dir.Name) + if dir.ShortName != nil { + d.Set("short_name", *dir.ShortName) + } + d.Set("size", *dir.Size) + d.Set("type", *dir.Type) + d.Set("vpc_settings", flattenDSVpcSettings(dir.VpcSettings)) + d.Set("enable_sso", *dir.SsoEnabled) + + return nil +} + +func resourceAwsDirectoryServiceDirectoryDelete(d *schema.ResourceData, meta interface{}) error { + dsconn := meta.(*AWSClient).dsconn + + input := directoryservice.DeleteDirectoryInput{ + DirectoryId: aws.String(d.Id()), + } + _, err := dsconn.DeleteDirectory(&input) + if err != nil { + return err + } + + // Wait for deletion + log.Printf("[DEBUG] Waiting for DS (%q) to be deleted", d.Id()) + stateConf := &resource.StateChangeConf{ + Pending: []string{"Deleting"}, + Target: "", + Refresh: func() (interface{}, string, error) { + resp, err := dsconn.DescribeDirectories(&directoryservice.DescribeDirectoriesInput{ + DirectoryIds: []*string{aws.String(d.Id())}, + }) + if err != nil { + return nil, "", err + } + + if len(resp.DirectoryDescriptions) == 0 { + return nil, "", nil + } + + ds := resp.DirectoryDescriptions[0] + log.Printf("[DEBUG] Deletion of DS %q is in following stage: %q.", + d.Id(), *ds.Stage) + return ds, *ds.Stage, nil + }, + Timeout: 10 * time.Minute, + } + if _, err := stateConf.WaitForState(); err != nil { + return fmt.Errorf( + "Error waiting for Directory Service (%s) to be deleted: %q", + d.Id(), err.Error()) + } + + return nil +} diff --git a/builtin/providers/aws/resource_aws_directory_service_directory_test.go b/builtin/providers/aws/resource_aws_directory_service_directory_test.go new file mode 100644 index 000000000..b10174bdb --- /dev/null +++ b/builtin/providers/aws/resource_aws_directory_service_directory_test.go @@ -0,0 +1,283 @@ +package aws + +import ( + "fmt" + "testing" + + 
"github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/directoryservice" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSDirectoryServiceDirectory_basic(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDirectoryServiceDirectoryDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccDirectoryServiceDirectoryConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckServiceDirectoryExists("aws_directory_service_directory.bar"), + ), + }, + }, + }) +} + +func TestAccAWSDirectoryServiceDirectory_withAliasAndSso(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDirectoryServiceDirectoryDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccDirectoryServiceDirectoryConfig_withAlias, + Check: resource.ComposeTestCheckFunc( + testAccCheckServiceDirectoryExists("aws_directory_service_directory.bar_a"), + testAccCheckServiceDirectoryAlias("aws_directory_service_directory.bar_a", + fmt.Sprintf("tf-d-%d", randomInteger)), + testAccCheckServiceDirectorySso("aws_directory_service_directory.bar_a", false), + ), + }, + resource.TestStep{ + Config: testAccDirectoryServiceDirectoryConfig_withSso, + Check: resource.ComposeTestCheckFunc( + testAccCheckServiceDirectoryExists("aws_directory_service_directory.bar_a"), + testAccCheckServiceDirectoryAlias("aws_directory_service_directory.bar_a", + fmt.Sprintf("tf-d-%d", randomInteger)), + testAccCheckServiceDirectorySso("aws_directory_service_directory.bar_a", true), + ), + }, + resource.TestStep{ + Config: testAccDirectoryServiceDirectoryConfig_withSso_modified, + Check: resource.ComposeTestCheckFunc( + testAccCheckServiceDirectoryExists("aws_directory_service_directory.bar_a"), + testAccCheckServiceDirectoryAlias("aws_directory_service_directory.bar_a", + fmt.Sprintf("tf-d-%d", randomInteger)), + testAccCheckServiceDirectorySso("aws_directory_service_directory.bar_a", false), + ), + }, + }, + }) +} + +func testAccCheckDirectoryServiceDirectoryDestroy(s *terraform.State) error { + if len(s.RootModule().Resources) > 0 { + return fmt.Errorf("Expected all resources to be gone, but found: %#v", + s.RootModule().Resources) + } + + return nil +} + +func testAccCheckServiceDirectoryExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + dsconn := testAccProvider.Meta().(*AWSClient).dsconn + out, err := dsconn.DescribeDirectories(&directoryservice.DescribeDirectoriesInput{ + DirectoryIds: []*string{aws.String(rs.Primary.ID)}, + }) + + if err != nil { + return err + } + + if len(out.DirectoryDescriptions) < 1 { + return fmt.Errorf("No DS directory found") + } + + if *out.DirectoryDescriptions[0].DirectoryId != rs.Primary.ID { + return fmt.Errorf("DS directory ID mismatch - existing: %q, state: %q", + *out.DirectoryDescriptions[0].DirectoryId, rs.Primary.ID) + } + + return nil + } +} + +func testAccCheckServiceDirectoryAlias(name, alias string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + if rs.Primary.ID == 
"" { + return fmt.Errorf("No ID is set") + } + + dsconn := testAccProvider.Meta().(*AWSClient).dsconn + out, err := dsconn.DescribeDirectories(&directoryservice.DescribeDirectoriesInput{ + DirectoryIds: []*string{aws.String(rs.Primary.ID)}, + }) + + if err != nil { + return err + } + + if *out.DirectoryDescriptions[0].Alias != alias { + return fmt.Errorf("DS directory Alias mismatch - actual: %q, expected: %q", + *out.DirectoryDescriptions[0].Alias, alias) + } + + return nil + } +} + +func testAccCheckServiceDirectorySso(name string, ssoEnabled bool) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + dsconn := testAccProvider.Meta().(*AWSClient).dsconn + out, err := dsconn.DescribeDirectories(&directoryservice.DescribeDirectoriesInput{ + DirectoryIds: []*string{aws.String(rs.Primary.ID)}, + }) + + if err != nil { + return err + } + + if *out.DirectoryDescriptions[0].SsoEnabled != ssoEnabled { + return fmt.Errorf("DS directory SSO mismatch - actual: %t, expected: %t", + *out.DirectoryDescriptions[0].SsoEnabled, ssoEnabled) + } + + return nil + } +} + +const testAccDirectoryServiceDirectoryConfig = ` +resource "aws_directory_service_directory" "bar" { + name = "corp.notexample.com" + password = "SuperSecretPassw0rd" + size = "Small" + + vpc_settings { + vpc_id = "${aws_vpc.main.id}" + subnet_ids = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}"] + } +} + +resource "aws_vpc" "main" { + cidr_block = "10.0.0.0/16" +} + +resource "aws_subnet" "foo" { + vpc_id = "${aws_vpc.main.id}" + availability_zone = "us-west-2a" + cidr_block = "10.0.1.0/24" +} +resource "aws_subnet" "bar" { + vpc_id = "${aws_vpc.main.id}" + availability_zone = "us-west-2b" + cidr_block = "10.0.2.0/24" +} +` + +var randomInteger = genRandInt() +var testAccDirectoryServiceDirectoryConfig_withAlias = fmt.Sprintf(` +resource "aws_directory_service_directory" "bar_a" { + name = "corp.notexample.com" + password = "SuperSecretPassw0rd" + size = "Small" + alias = "tf-d-%d" + + vpc_settings { + vpc_id = "${aws_vpc.main.id}" + subnet_ids = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}"] + } +} + +resource "aws_vpc" "main" { + cidr_block = "10.0.0.0/16" +} + +resource "aws_subnet" "foo" { + vpc_id = "${aws_vpc.main.id}" + availability_zone = "us-west-2a" + cidr_block = "10.0.1.0/24" +} +resource "aws_subnet" "bar" { + vpc_id = "${aws_vpc.main.id}" + availability_zone = "us-west-2b" + cidr_block = "10.0.2.0/24" +} +`, randomInteger) + +var testAccDirectoryServiceDirectoryConfig_withSso = fmt.Sprintf(` +resource "aws_directory_service_directory" "bar_a" { + name = "corp.notexample.com" + password = "SuperSecretPassw0rd" + size = "Small" + alias = "tf-d-%d" + enable_sso = true + + vpc_settings { + vpc_id = "${aws_vpc.main.id}" + subnet_ids = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}"] + } +} + +resource "aws_vpc" "main" { + cidr_block = "10.0.0.0/16" +} + +resource "aws_subnet" "foo" { + vpc_id = "${aws_vpc.main.id}" + availability_zone = "us-west-2a" + cidr_block = "10.0.1.0/24" +} +resource "aws_subnet" "bar" { + vpc_id = "${aws_vpc.main.id}" + availability_zone = "us-west-2b" + cidr_block = "10.0.2.0/24" +} +`, randomInteger) + +var testAccDirectoryServiceDirectoryConfig_withSso_modified = fmt.Sprintf(` +resource "aws_directory_service_directory" "bar_a" { + name = "corp.notexample.com" + password = "SuperSecretPassw0rd" + size = "Small" + 
alias = "tf-d-%d" + enable_sso = false + + vpc_settings { + vpc_id = "${aws_vpc.main.id}" + subnet_ids = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}"] + } +} + +resource "aws_vpc" "main" { + cidr_block = "10.0.0.0/16" +} + +resource "aws_subnet" "foo" { + vpc_id = "${aws_vpc.main.id}" + availability_zone = "us-west-2a" + cidr_block = "10.0.1.0/24" +} +resource "aws_subnet" "bar" { + vpc_id = "${aws_vpc.main.id}" + availability_zone = "us-west-2b" + cidr_block = "10.0.2.0/24" +} +`, randomInteger) diff --git a/builtin/providers/aws/resource_aws_dynamodb_table.go b/builtin/providers/aws/resource_aws_dynamodb_table.go index 333686229..88146662b 100644 --- a/builtin/providers/aws/resource_aws_dynamodb_table.go +++ b/builtin/providers/aws/resource_aws_dynamodb_table.go @@ -287,7 +287,11 @@ func resourceAwsDynamoDbTableCreate(d *schema.ResourceData, meta interface{}) er } else { // No error, set ID and return d.SetId(*output.TableDescription.TableName) - return nil + if err := d.Set("arn", *output.TableDescription.TableArn); err != nil { + return err + } + + return resourceAwsDynamoDbTableRead(d, meta) } } @@ -384,7 +388,7 @@ func resourceAwsDynamoDbTableUpdate(d *schema.ResourceData, meta interface{}) er updates = append(updates, update) // Hash key is required, range key isn't - hashkey_type, err := getAttributeType(d, *(gsi.KeySchema[0].AttributeName)) + hashkey_type, err := getAttributeType(d, *gsi.KeySchema[0].AttributeName) if err != nil { return err } @@ -396,7 +400,7 @@ func resourceAwsDynamoDbTableUpdate(d *schema.ResourceData, meta interface{}) er // If there's a range key, there will be 2 elements in KeySchema if len(gsi.KeySchema) == 2 { - rangekey_type, err := getAttributeType(d, *(gsi.KeySchema[1].AttributeName)) + rangekey_type, err := getAttributeType(d, *gsi.KeySchema[1].AttributeName) if err != nil { return err } @@ -480,8 +484,8 @@ func resourceAwsDynamoDbTableUpdate(d *schema.ResourceData, meta interface{}) er capacityUpdated := false - if int64(gsiReadCapacity) != *(gsi.ProvisionedThroughput.ReadCapacityUnits) || - int64(gsiWriteCapacity) != *(gsi.ProvisionedThroughput.WriteCapacityUnits) { + if int64(gsiReadCapacity) != *gsi.ProvisionedThroughput.ReadCapacityUnits || + int64(gsiWriteCapacity) != *gsi.ProvisionedThroughput.WriteCapacityUnits { capacityUpdated = true } @@ -544,8 +548,8 @@ func resourceAwsDynamoDbTableRead(d *schema.ResourceData, meta interface{}) erro attributes := []interface{}{} for _, attrdef := range table.AttributeDefinitions { attribute := map[string]string{ - "name": *(attrdef.AttributeName), - "type": *(attrdef.AttributeType), + "name": *attrdef.AttributeName, + "type": *attrdef.AttributeType, } attributes = append(attributes, attribute) log.Printf("[DEBUG] Added Attribute: %s", attribute["name"]) @@ -556,9 +560,9 @@ func resourceAwsDynamoDbTableRead(d *schema.ResourceData, meta interface{}) erro gsiList := make([]map[string]interface{}, 0, len(table.GlobalSecondaryIndexes)) for _, gsiObject := range table.GlobalSecondaryIndexes { gsi := map[string]interface{}{ - "write_capacity": *(gsiObject.ProvisionedThroughput.WriteCapacityUnits), - "read_capacity": *(gsiObject.ProvisionedThroughput.ReadCapacityUnits), - "name": *(gsiObject.IndexName), + "write_capacity": *gsiObject.ProvisionedThroughput.WriteCapacityUnits, + "read_capacity": *gsiObject.ProvisionedThroughput.ReadCapacityUnits, + "name": *gsiObject.IndexName, } for _, attribute := range gsiObject.KeySchema { @@ -572,13 +576,22 @@ func resourceAwsDynamoDbTableRead(d *schema.ResourceData, meta 
interface{}) erro } gsi["projection_type"] = *(gsiObject.Projection.ProjectionType) - gsi["non_key_attributes"] = gsiObject.Projection.NonKeyAttributes + + nonKeyAttrs := make([]string, 0, len(gsiObject.Projection.NonKeyAttributes)) + for _, nonKeyAttr := range gsiObject.Projection.NonKeyAttributes { + nonKeyAttrs = append(nonKeyAttrs, *nonKeyAttr) + } + gsi["non_key_attributes"] = nonKeyAttrs gsiList = append(gsiList, gsi) log.Printf("[DEBUG] Added GSI: %s - Read: %d / Write: %d", gsi["name"], gsi["read_capacity"], gsi["write_capacity"]) } - d.Set("global_secondary_index", gsiList) + err = d.Set("global_secondary_index", gsiList) + if err != nil { + return err + } + d.Set("arn", table.TableArn) return nil @@ -647,7 +660,7 @@ func createGSIFromData(data *map[string]interface{}) dynamodb.GlobalSecondaryInd func getGlobalSecondaryIndex(indexName string, indexList []*dynamodb.GlobalSecondaryIndexDescription) (*dynamodb.GlobalSecondaryIndexDescription, error) { for _, gsi := range indexList { - if *(gsi.IndexName) == indexName { + if *gsi.IndexName == indexName { return gsi, nil } } @@ -726,7 +739,7 @@ func waitForTableToBeActive(tableName string, meta interface{}) error { return err } - activeState = *(result.Table.TableStatus) == "ACTIVE" + activeState = *result.Table.TableStatus == "ACTIVE" // Wait for a few seconds if !activeState { diff --git a/builtin/providers/aws/resource_aws_dynamodb_table_test.go b/builtin/providers/aws/resource_aws_dynamodb_table_test.go index 6c26efc73..adf457f0a 100644 --- a/builtin/providers/aws/resource_aws_dynamodb_table_test.go +++ b/builtin/providers/aws/resource_aws_dynamodb_table_test.go @@ -211,7 +211,7 @@ func dynamoDbAttributesToMap(attributes *[]*dynamodb.AttributeDefinition) map[st attrmap := make(map[string]string) for _, attrdef := range *attributes { - attrmap[*(attrdef.AttributeName)] = *(attrdef.AttributeType) + attrmap[*attrdef.AttributeName] = *attrdef.AttributeType } return attrmap diff --git a/builtin/providers/aws/resource_aws_ecs_service.go b/builtin/providers/aws/resource_aws_ecs_service.go index 9d3a36ab2..ab8562acb 100644 --- a/builtin/providers/aws/resource_aws_ecs_service.go +++ b/builtin/providers/aws/resource_aws_ecs_service.go @@ -137,7 +137,6 @@ func resourceAwsEcsServiceCreate(d *schema.ResourceData, meta interface{}) error log.Printf("[DEBUG] ECS service created: %s", *service.ServiceArn) d.SetId(*service.ServiceArn) - d.Set("cluster", *service.ClusterArn) return resourceAwsEcsServiceUpdate(d, meta) } @@ -175,14 +174,21 @@ func resourceAwsEcsServiceRead(d *schema.ResourceData, meta interface{}) error { } d.Set("desired_count", *service.DesiredCount) - d.Set("cluster", *service.ClusterArn) + + // Save cluster in the same format + if strings.HasPrefix(d.Get("cluster").(string), "arn:aws:ecs:") { + d.Set("cluster", *service.ClusterArn) + } else { + clusterARN := getNameFromARN(*service.ClusterArn) + d.Set("cluster", clusterARN) + } // Save IAM role in the same format if service.RoleArn != nil { if strings.HasPrefix(d.Get("iam_role").(string), "arn:aws:iam:") { d.Set("iam_role", *service.RoleArn) } else { - roleARN := buildIamRoleNameFromARN(*service.RoleArn) + roleARN := getNameFromARN(*service.RoleArn) d.Set("iam_role", roleARN) } } @@ -306,8 +312,10 @@ func buildFamilyAndRevisionFromARN(arn string) string { return strings.Split(arn, "/")[1] } -func buildIamRoleNameFromARN(arn string) string { - // arn:aws:iam::0123456789:role/EcsService +// Expects the following ARNs: +// arn:aws:iam::0123456789:role/EcsService +// 
arn:aws:ecs:us-west-2:0123456789:cluster/radek-cluster +func getNameFromARN(arn string) string { return strings.Split(arn, "/")[1] } diff --git a/builtin/providers/aws/resource_aws_ecs_service_test.go b/builtin/providers/aws/resource_aws_ecs_service_test.go index 2f9b8fedb..7f88f1536 100644 --- a/builtin/providers/aws/resource_aws_ecs_service_test.go +++ b/builtin/providers/aws/resource_aws_ecs_service_test.go @@ -178,6 +178,26 @@ func TestAccAWSEcsService_withIamRole(t *testing.T) { }) } +// Regression for https://github.com/hashicorp/terraform/issues/3361 +func TestAccAWSEcsService_withEcsClusterName(t *testing.T) { + clusterName := regexp.MustCompile("^terraformecstestcluster$") + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEcsServiceDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSEcsServiceWithEcsClusterName, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEcsServiceExists("aws_ecs_service.jenkins"), + resource.TestMatchResourceAttr( + "aws_ecs_service.jenkins", "cluster", clusterName), + ), + }, + }, + }) +} + func testAccCheckAWSEcsServiceDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).ecsconn @@ -471,3 +491,31 @@ resource "aws_ecs_service" "ghost" { desired_count = 1 } ` + +var testAccAWSEcsServiceWithEcsClusterName = ` +resource "aws_ecs_cluster" "default" { + name = "terraformecstestcluster" +} + +resource "aws_ecs_task_definition" "jenkins" { + family = "jenkins" + container_definitions = < 0 { + return fmt.Errorf("Expected all resources to be gone, but found: %#v", s.RootModule().Resources) + } + + return nil +} + +func testAccCheckEfsFileSystem(resourceID string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceID] + if !ok { + return fmt.Errorf("Not found: %s", resourceID) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + fs, ok := s.RootModule().Resources[resourceID] + if !ok { + return fmt.Errorf("Not found: %s", resourceID) + } + + conn := testAccProvider.Meta().(*AWSClient).efsconn + _, err := conn.DescribeFileSystems(&efs.DescribeFileSystemsInput{ + FileSystemId: aws.String(fs.Primary.ID), + }) + + if err != nil { + return err + } + + return nil + } +} + +func testAccCheckEfsFileSystemTags(resourceID string, expectedTags map[string]string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceID] + if !ok { + return fmt.Errorf("Not found: %s", resourceID) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + fs, ok := s.RootModule().Resources[resourceID] + if !ok { + return fmt.Errorf("Not found: %s", resourceID) + } + + conn := testAccProvider.Meta().(*AWSClient).efsconn + resp, err := conn.DescribeTags(&efs.DescribeTagsInput{ + FileSystemId: aws.String(fs.Primary.ID), + }) + if err != nil { + return err + } + + if !reflect.DeepEqual(expectedTags, tagsToMapEFS(resp.Tags)) { + return fmt.Errorf("Tags mismatch.\nExpected: %#v\nGiven: %#v", + expectedTags, resp.Tags) + } + + return nil + } +} + +const testAccAWSEFSFileSystemConfig = ` +resource "aws_efs_file_system" "foo" { + reference_name = "radeksimko" +} +` + +const testAccAWSEFSFileSystemConfigWithTags = ` +resource "aws_efs_file_system" "foo-with-tags" { + reference_name = "yada_yada" + tags { + Name = "foo-efs" + Another = "tag" + } +} +` diff --git 
a/builtin/providers/aws/resource_aws_efs_mount_target.go b/builtin/providers/aws/resource_aws_efs_mount_target.go new file mode 100644 index 000000000..ca7656e63 --- /dev/null +++ b/builtin/providers/aws/resource_aws_efs_mount_target.go @@ -0,0 +1,223 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/efs" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsEfsMountTarget() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsEfsMountTargetCreate, + Read: resourceAwsEfsMountTargetRead, + Update: resourceAwsEfsMountTargetUpdate, + Delete: resourceAwsEfsMountTargetDelete, + + Schema: map[string]*schema.Schema{ + "file_system_id": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "ip_address": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Optional: true, + ForceNew: true, + }, + + "security_groups": &schema.Schema{ + Type: schema.TypeSet, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + Computed: true, + Optional: true, + }, + + "subnet_id": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "network_interface_id": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func resourceAwsEfsMountTargetCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).efsconn + + input := efs.CreateMountTargetInput{ + FileSystemId: aws.String(d.Get("file_system_id").(string)), + SubnetId: aws.String(d.Get("subnet_id").(string)), + } + + if v, ok := d.GetOk("ip_address"); ok { + input.IpAddress = aws.String(v.(string)) + } + if v, ok := d.GetOk("security_groups"); ok { + input.SecurityGroups = expandStringList(v.(*schema.Set).List()) + } + + log.Printf("[DEBUG] Creating EFS mount target: %#v", input) + + mt, err := conn.CreateMountTarget(&input) + if err != nil { + return err + } + + d.SetId(*mt.MountTargetId) + + stateConf := &resource.StateChangeConf{ + Pending: []string{"creating"}, + Target: "available", + Refresh: func() (interface{}, string, error) { + resp, err := conn.DescribeMountTargets(&efs.DescribeMountTargetsInput{ + MountTargetId: aws.String(d.Id()), + }) + if err != nil { + return nil, "error", err + } + + if len(resp.MountTargets) < 1 { + return nil, "error", fmt.Errorf("EFS mount target %q not found", d.Id()) + } + + mt := resp.MountTargets[0] + + log.Printf("[DEBUG] Current status of %q: %q", *mt.MountTargetId, *mt.LifeCycleState) + return mt, *mt.LifeCycleState, nil + }, + Timeout: 10 * time.Minute, + Delay: 2 * time.Second, + MinTimeout: 3 * time.Second, + } + + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf("Error waiting for EFS mount target (%s) to create: %s", d.Id(), err) + } + + log.Printf("[DEBUG] EFS mount target created: %s", *mt.MountTargetId) + + return resourceAwsEfsMountTargetRead(d, meta) +} + +func resourceAwsEfsMountTargetUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).efsconn + + if d.HasChange("security_groups") { + input := efs.ModifyMountTargetSecurityGroupsInput{ + MountTargetId: aws.String(d.Id()), + SecurityGroups: expandStringList(d.Get("security_groups").(*schema.Set).List()), + } + _, err := conn.ModifyMountTargetSecurityGroups(&input) + if err != nil { + return err + } + } + + return resourceAwsEfsMountTargetRead(d, meta) +} + +func 
resourceAwsEfsMountTargetRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).efsconn + resp, err := conn.DescribeMountTargets(&efs.DescribeMountTargetsInput{ + MountTargetId: aws.String(d.Id()), + }) + if err != nil { + return err + } + + if len(resp.MountTargets) < 1 { + return fmt.Errorf("EFS mount target %q not found", d.Id()) + } + + mt := resp.MountTargets[0] + + log.Printf("[DEBUG] Found EFS mount target: %#v", mt) + + d.SetId(*mt.MountTargetId) + d.Set("file_system_id", *mt.FileSystemId) + d.Set("ip_address", *mt.IpAddress) + d.Set("subnet_id", *mt.SubnetId) + d.Set("network_interface_id", *mt.NetworkInterfaceId) + + sgResp, err := conn.DescribeMountTargetSecurityGroups(&efs.DescribeMountTargetSecurityGroupsInput{ + MountTargetId: aws.String(d.Id()), + }) + if err != nil { + return err + } + + d.Set("security_groups", schema.NewSet(schema.HashString, flattenStringList(sgResp.SecurityGroups))) + + return nil +} + +func resourceAwsEfsMountTargetDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).efsconn + + log.Printf("[DEBUG] Deleting EFS mount target %q", d.Id()) + _, err := conn.DeleteMountTarget(&efs.DeleteMountTargetInput{ + MountTargetId: aws.String(d.Id()), + }) + if err != nil { + return err + } + + stateConf := &resource.StateChangeConf{ + Pending: []string{"available", "deleting", "deleted"}, + Target: "", + Refresh: func() (interface{}, string, error) { + resp, err := conn.DescribeMountTargets(&efs.DescribeMountTargetsInput{ + MountTargetId: aws.String(d.Id()), + }) + if err != nil { + awsErr, ok := err.(awserr.Error) + if !ok { + return nil, "error", err + } + + if awsErr.Code() == "MountTargetNotFound" { + return nil, "", nil + } + + return nil, "error", awsErr + } + + if len(resp.MountTargets) < 1 { + return nil, "", nil + } + + mt := resp.MountTargets[0] + + log.Printf("[DEBUG] Current status of %q: %q", *mt.MountTargetId, *mt.LifeCycleState) + return mt, *mt.LifeCycleState, nil + }, + Timeout: 10 * time.Minute, + Delay: 2 * time.Second, + MinTimeout: 3 * time.Second, + } + + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf("Error waiting for EFS mount target (%q) to delete: %q", + d.Id(), err.Error()) + } + + log.Printf("[DEBUG] EFS mount target %q deleted.", d.Id()) + + return nil +} diff --git a/builtin/providers/aws/resource_aws_efs_mount_target_test.go b/builtin/providers/aws/resource_aws_efs_mount_target_test.go new file mode 100644 index 000000000..e9d624e03 --- /dev/null +++ b/builtin/providers/aws/resource_aws_efs_mount_target_test.go @@ -0,0 +1,135 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/efs" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSEFSMountTarget(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckEfsMountTargetDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSEFSMountTargetConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckEfsMountTarget( + "aws_efs_mount_target.alpha", + ), + ), + }, + resource.TestStep{ + Config: testAccAWSEFSMountTargetConfigModified, + Check: resource.ComposeTestCheckFunc( + testAccCheckEfsMountTarget( + "aws_efs_mount_target.alpha", + ), + testAccCheckEfsMountTarget( + "aws_efs_mount_target.beta", + ), + ), + }, + }, + }) +} + +func 
testAccCheckEfsMountTargetDestroy(s *terraform.State) error { + if len(s.RootModule().Resources) > 0 { + return fmt.Errorf("Expected all resources to be gone, but found: %#v", s.RootModule().Resources) + } + + return nil +} + +func testAccCheckEfsMountTarget(resourceID string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceID] + if !ok { + return fmt.Errorf("Not found: %s", resourceID) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + fs, ok := s.RootModule().Resources[resourceID] + if !ok { + return fmt.Errorf("Not found: %s", resourceID) + } + + conn := testAccProvider.Meta().(*AWSClient).efsconn + mt, err := conn.DescribeMountTargets(&efs.DescribeMountTargetsInput{ + MountTargetId: aws.String(fs.Primary.ID), + }) + if err != nil { + return err + } + + if *mt.MountTargets[0].MountTargetId != fs.Primary.ID { + return fmt.Errorf("Mount target ID mismatch: %q != %q", + *mt.MountTargets[0].MountTargetId, fs.Primary.ID) + } + + return nil + } +} + +const testAccAWSEFSMountTargetConfig = ` +resource "aws_efs_file_system" "foo" { + reference_name = "radeksimko" +} + +resource "aws_efs_mount_target" "alpha" { + file_system_id = "${aws_efs_file_system.foo.id}" + subnet_id = "${aws_subnet.alpha.id}" +} + +resource "aws_vpc" "foo" { + cidr_block = "10.0.0.0/16" +} + +resource "aws_subnet" "alpha" { + vpc_id = "${aws_vpc.foo.id}" + availability_zone = "us-west-2a" + cidr_block = "10.0.1.0/24" +} +` + +const testAccAWSEFSMountTargetConfigModified = ` +resource "aws_efs_file_system" "foo" { + reference_name = "radeksimko" +} + +resource "aws_efs_mount_target" "alpha" { + file_system_id = "${aws_efs_file_system.foo.id}" + subnet_id = "${aws_subnet.alpha.id}" +} + +resource "aws_efs_mount_target" "beta" { + file_system_id = "${aws_efs_file_system.foo.id}" + subnet_id = "${aws_subnet.beta.id}" +} + +resource "aws_vpc" "foo" { + cidr_block = "10.0.0.0/16" +} + +resource "aws_subnet" "alpha" { + vpc_id = "${aws_vpc.foo.id}" + availability_zone = "us-west-2a" + cidr_block = "10.0.1.0/24" +} + +resource "aws_subnet" "beta" { + vpc_id = "${aws_vpc.foo.id}" + availability_zone = "us-west-2b" + cidr_block = "10.0.2.0/24" +} +` diff --git a/builtin/providers/aws/resource_aws_eip.go b/builtin/providers/aws/resource_aws_eip.go index 4729af537..4b369ee60 100644 --- a/builtin/providers/aws/resource_aws_eip.go +++ b/builtin/providers/aws/resource_aws_eip.go @@ -30,13 +30,13 @@ func resourceAwsEip() *schema.Resource { "instance": &schema.Schema{ Type: schema.TypeString, Optional: true, + Computed: true, }, "network_interface": &schema.Schema{ - Type: schema.TypeString, - Optional: true, - Computed: true, - ConflictsWith: []string{"instance"}, + Type: schema.TypeString, + Optional: true, + Computed: true, }, "allocation_id": &schema.Schema{ @@ -134,7 +134,7 @@ func resourceAwsEipRead(d *schema.ResourceData, meta interface{}) error { // Verify AWS returned our EIP if len(describeAddresses.Addresses) != 1 || - (domain == "vpc" && *describeAddresses.Addresses[0].AllocationId != id) || + domain == "vpc" && *describeAddresses.Addresses[0].AllocationId != id || *describeAddresses.Addresses[0].PublicIp != id { if err != nil { return fmt.Errorf("Unable to find EIP: %#v", describeAddresses.Addresses) diff --git a/builtin/providers/aws/resource_aws_elasticache_cluster.go b/builtin/providers/aws/resource_aws_elasticache_cluster.go index 968a5c9cf..3460fb292 100644 --- a/builtin/providers/aws/resource_aws_elasticache_cluster.go +++ 
b/builtin/providers/aws/resource_aws_elasticache_cluster.go @@ -28,6 +28,12 @@ func resourceAwsElasticacheCluster() *schema.Resource { Type: schema.TypeString, Required: true, ForceNew: true, + StateFunc: func(val interface{}) string { + // Elasticache normalizes cluster ids to lowercase, + // so we have to do this too or else we can end up + // with non-converging diffs. + return strings.ToLower(val.(string)) + }, }, "configuration_endpoint": &schema.Schema{ Type: schema.TypeString, @@ -201,7 +207,11 @@ func resourceAwsElasticacheClusterCreate(d *schema.ResourceData, meta interface{ return fmt.Errorf("Error creating Elasticache: %s", err) } - d.SetId(*resp.CacheCluster.CacheClusterId) + // Assign the cluster id as the resource ID + // Elasticache always retains the id in lower case, so we have to + // mimic that or else we won't be able to refresh a resource whose + // name contained uppercase characters. + d.SetId(strings.ToLower(*resp.CacheCluster.CacheClusterId)) pending := []string{"creating"} stateConf := &resource.StateChangeConf{ diff --git a/builtin/providers/aws/resource_aws_elasticache_cluster_test.go b/builtin/providers/aws/resource_aws_elasticache_cluster_test.go index caa14a8df..173ca21ea 100644 --- a/builtin/providers/aws/resource_aws_elasticache_cluster_test.go +++ b/builtin/providers/aws/resource_aws_elasticache_cluster_test.go @@ -163,7 +163,10 @@ resource "aws_security_group" "bar" { } resource "aws_elasticache_cluster" "bar" { - cluster_id = "tf-test-%03d" + // Including uppercase letters in this name to ensure + // that we correctly handle the fact that the API + // normalizes names to lowercase. + cluster_id = "tf-TEST-%03d" node_type = "cache.m1.small" num_cache_nodes = 1 engine = "redis" diff --git a/builtin/providers/aws/resource_aws_elasticsearch_domain.go b/builtin/providers/aws/resource_aws_elasticsearch_domain.go new file mode 100644 index 000000000..8f2d6c9c9 --- /dev/null +++ b/builtin/providers/aws/resource_aws_elasticsearch_domain.go @@ -0,0 +1,399 @@ +package aws + +import ( + "fmt" + "log" + "regexp" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + elasticsearch "github.com/aws/aws-sdk-go/service/elasticsearchservice" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsElasticSearchDomain() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsElasticSearchDomainCreate, + Read: resourceAwsElasticSearchDomainRead, + Update: resourceAwsElasticSearchDomainUpdate, + Delete: resourceAwsElasticSearchDomainDelete, + + Schema: map[string]*schema.Schema{ + "access_policies": &schema.Schema{ + Type: schema.TypeString, + StateFunc: normalizeJson, + Optional: true, + }, + "advanced_options": &schema.Schema{ + Type: schema.TypeMap, + Optional: true, + Computed: true, + }, + "domain_name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[0-9A-Za-z]+`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q must start with a letter or number", k)) + } + if !regexp.MustCompile(`^[0-9A-Za-z][0-9a-z-]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q can only contain lowercase characters, numbers and hyphens", k)) + } + return + }, + }, + "arn": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + "domain_id": &schema.Schema{ + Type: schema.TypeString, + 
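+ // The two domain_name expressions above work together: the first
+ // requires the name to begin with an alphanumeric character, the
+ // second restricts everything after the first character to lowercase
+ // letters, digits and hyphens. Illustrative (hypothetical) names:
+ //
+ //	"es-domain-1" passes both checks
+ //	"-es-domain"  fails the first (may not start with a hyphen)
+ //	"es_Domain"   fails the second (no uppercase or underscores)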
Computed: true, + }, + "endpoint": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + "ebs_options": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "ebs_enabled": &schema.Schema{ + Type: schema.TypeBool, + Required: true, + }, + "iops": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + }, + "volume_size": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + }, + "volume_type": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + }, + }, + }, + "cluster_config": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "dedicated_master_count": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + }, + "dedicated_master_enabled": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "dedicated_master_type": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + "instance_count": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Default: 1, + }, + "instance_type": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "m3.medium.elasticsearch", + }, + "zone_awareness_enabled": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + }, + }, + }, + }, + "snapshot_options": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "automated_snapshot_start_hour": &schema.Schema{ + Type: schema.TypeInt, + Required: true, + }, + }, + }, + }, + }, + } +} + +func resourceAwsElasticSearchDomainCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).esconn + + input := elasticsearch.CreateElasticsearchDomainInput{ + DomainName: aws.String(d.Get("domain_name").(string)), + } + + if v, ok := d.GetOk("access_policies"); ok { + input.AccessPolicies = aws.String(v.(string)) + } + + if v, ok := d.GetOk("advanced_options"); ok { + input.AdvancedOptions = stringMapToPointers(v.(map[string]interface{})) + } + + if v, ok := d.GetOk("ebs_options"); ok { + options := v.([]interface{}) + + if len(options) > 1 { + return fmt.Errorf("Only a single ebs_options block is expected") + } else if len(options) == 1 { + if options[0] == nil { + return fmt.Errorf("At least one field is expected inside ebs_options") + } + + s := options[0].(map[string]interface{}) + input.EBSOptions = expandESEBSOptions(s) + } + } + + if v, ok := d.GetOk("cluster_config"); ok { + config := v.([]interface{}) + + if len(config) > 1 { + return fmt.Errorf("Only a single cluster_config block is expected") + } else if len(config) == 1 { + if config[0] == nil { + return fmt.Errorf("At least one field is expected inside cluster_config") + } + m := config[0].(map[string]interface{}) + input.ElasticsearchClusterConfig = expandESClusterConfig(m) + } + } + + if v, ok := d.GetOk("snapshot_options"); ok { + options := v.([]interface{}) + + if len(options) > 1 { + return fmt.Errorf("Only a single snapshot_options block is expected") + } else if len(options) == 1 { + if options[0] == nil { + return fmt.Errorf("At least one field is expected inside snapshot_options") + } + + o := options[0].(map[string]interface{}) + + snapshotOptions := elasticsearch.SnapshotOptions{ + AutomatedSnapshotStartHour: aws.Int64(int64(o["automated_snapshot_start_hour"].(int))), + } + + input.SnapshotOptions = &snapshotOptions + } + } + + log.Printf("[DEBUG] Creating ElasticSearch 
domain: %s", input) + out, err := conn.CreateElasticsearchDomain(&input) + if err != nil { + return err + } + + d.SetId(*out.DomainStatus.ARN) + + log.Printf("[DEBUG] Waiting for ElasticSearch domain %q to be created", d.Id()) + err = resource.Retry(15*time.Minute, func() error { + out, err := conn.DescribeElasticsearchDomain(&elasticsearch.DescribeElasticsearchDomainInput{ + DomainName: aws.String(d.Get("domain_name").(string)), + }) + if err != nil { + return resource.RetryError{Err: err} + } + + if !*out.DomainStatus.Processing && out.DomainStatus.Endpoint != nil { + return nil + } + + return fmt.Errorf("%q: Timeout while waiting for the domain to be created", d.Id()) + }) + if err != nil { + return err + } + + log.Printf("[DEBUG] ElasticSearch domain %q created", d.Id()) + + return resourceAwsElasticSearchDomainRead(d, meta) +} + +func resourceAwsElasticSearchDomainRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).esconn + + out, err := conn.DescribeElasticsearchDomain(&elasticsearch.DescribeElasticsearchDomainInput{ + DomainName: aws.String(d.Get("domain_name").(string)), + }) + if err != nil { + return err + } + + log.Printf("[DEBUG] Received ElasticSearch domain: %s", out) + + ds := out.DomainStatus + + d.Set("access_policies", *ds.AccessPolicies) + err = d.Set("advanced_options", pointersMapToStringList(ds.AdvancedOptions)) + if err != nil { + return err + } + d.Set("domain_id", *ds.DomainId) + d.Set("domain_name", *ds.DomainName) + if ds.Endpoint != nil { + d.Set("endpoint", *ds.Endpoint) + } + + err = d.Set("ebs_options", flattenESEBSOptions(ds.EBSOptions)) + if err != nil { + return err + } + err = d.Set("cluster_config", flattenESClusterConfig(ds.ElasticsearchClusterConfig)) + if err != nil { + return err + } + if ds.SnapshotOptions != nil { + d.Set("snapshot_options", map[string]interface{}{ + "automated_snapshot_start_hour": *ds.SnapshotOptions.AutomatedSnapshotStartHour, + }) + } + + d.Set("arn", *ds.ARN) + + return nil +} + +func resourceAwsElasticSearchDomainUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).esconn + + input := elasticsearch.UpdateElasticsearchDomainConfigInput{ + DomainName: aws.String(d.Get("domain_name").(string)), + } + + if d.HasChange("access_policies") { + input.AccessPolicies = aws.String(d.Get("access_policies").(string)) + } + + if d.HasChange("advanced_options") { + input.AdvancedOptions = stringMapToPointers(d.Get("advanced_options").(map[string]interface{})) + } + + if d.HasChange("ebs_options") { + options := d.Get("ebs_options").([]interface{}) + + if len(options) > 1 { + return fmt.Errorf("Only a single ebs_options block is expected") + } else if len(options) == 1 { + s := options[0].(map[string]interface{}) + input.EBSOptions = expandESEBSOptions(s) + } + } + + if d.HasChange("cluster_config") { + config := d.Get("cluster_config").([]interface{}) + + if len(config) > 1 { + return fmt.Errorf("Only a single cluster_config block is expected") + } else if len(config) == 1 { + m := config[0].(map[string]interface{}) + input.ElasticsearchClusterConfig = expandESClusterConfig(m) + } + } + + if d.HasChange("snapshot_options") { + options := d.Get("snapshot_options").([]interface{}) + + if len(options) > 1 { + return fmt.Errorf("Only a single snapshot_options block is expected") + } else if len(options) == 1 { + o := options[0].(map[string]interface{}) + + snapshotOptions := elasticsearch.SnapshotOptions{ + AutomatedSnapshotStartHour: 
aws.Int64(int64(o["automated_snapshot_start_hour"].(int))), + } + + input.SnapshotOptions = &snapshotOptions + } + } + + _, err := conn.UpdateElasticsearchDomainConfig(&input) + if err != nil { + return err + } + + err = resource.Retry(25*time.Minute, func() error { + out, err := conn.DescribeElasticsearchDomain(&elasticsearch.DescribeElasticsearchDomainInput{ + DomainName: aws.String(d.Get("domain_name").(string)), + }) + if err != nil { + return resource.RetryError{Err: err} + } + + if *out.DomainStatus.Processing == false { + return nil + } + + return fmt.Errorf("%q: Timeout while waiting for changes to be processed", d.Id()) + }) + if err != nil { + return err + } + + return resourceAwsElasticSearchDomainRead(d, meta) +} + +func resourceAwsElasticSearchDomainDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).esconn + + log.Printf("[DEBUG] Deleting ElasticSearch domain: %q", d.Get("domain_name").(string)) + _, err := conn.DeleteElasticsearchDomain(&elasticsearch.DeleteElasticsearchDomainInput{ + DomainName: aws.String(d.Get("domain_name").(string)), + }) + if err != nil { + return err + } + + log.Printf("[DEBUG] Waiting for ElasticSearch domain %q to be deleted", d.Get("domain_name").(string)) + err = resource.Retry(15*time.Minute, func() error { + out, err := conn.DescribeElasticsearchDomain(&elasticsearch.DescribeElasticsearchDomainInput{ + DomainName: aws.String(d.Get("domain_name").(string)), + }) + + if err != nil { + awsErr, ok := err.(awserr.Error) + if !ok { + return resource.RetryError{Err: err} + } + + if awsErr.Code() == "ResourceNotFoundException" { + return nil + } + + return resource.RetryError{Err: awsErr} + } + + if !*out.DomainStatus.Processing { + return nil + } + + return fmt.Errorf("%q: Timeout while waiting for the domain to be deleted", d.Id()) + }) + + d.SetId("") + + return err +} diff --git a/builtin/providers/aws/resource_aws_elasticsearch_domain_test.go b/builtin/providers/aws/resource_aws_elasticsearch_domain_test.go new file mode 100644 index 000000000..dee675d0d --- /dev/null +++ b/builtin/providers/aws/resource_aws_elasticsearch_domain_test.go @@ -0,0 +1,122 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + elasticsearch "github.com/aws/aws-sdk-go/service/elasticsearchservice" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSElasticSearchDomain_basic(t *testing.T) { + var domain elasticsearch.ElasticsearchDomainStatus + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckESDomainDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccESDomainConfig_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckESDomainExists("aws_elasticsearch_domain.example", &domain), + ), + }, + }, + }) +} + +func TestAccAWSElasticSearchDomain_complex(t *testing.T) { + var domain elasticsearch.ElasticsearchDomainStatus + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckESDomainDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccESDomainConfig_complex, + Check: resource.ComposeTestCheckFunc( + testAccCheckESDomainExists("aws_elasticsearch_domain.example", &domain), + ), + }, + }, + }) +} + +func testAccCheckESDomainExists(n string, domain *elasticsearch.ElasticsearchDomainStatus) resource.TestCheckFunc { + return 
func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ES Domain ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).esconn + opts := &elasticsearch.DescribeElasticsearchDomainInput{ + DomainName: aws.String(rs.Primary.Attributes["domain_name"]), + } + + resp, err := conn.DescribeElasticsearchDomain(opts) + if err != nil { + return fmt.Errorf("Error describing domain: %s", err.Error()) + } + + *domain = *resp.DomainStatus + + return nil + } +} + +func testAccCheckESDomainDestroy(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_elasticsearch_domain" { + continue + } + + conn := testAccProvider.Meta().(*AWSClient).esconn + opts := &elasticsearch.DescribeElasticsearchDomainInput{ + DomainName: aws.String(rs.Primary.Attributes["domain_name"]), + } + + _, err := conn.DescribeElasticsearchDomain(opts) + if err != nil { + return fmt.Errorf("Error describing ES domains: %q", err.Error()) + } + } + return nil +} + +const testAccESDomainConfig_basic = ` +resource "aws_elasticsearch_domain" "example" { + domain_name = "tf-test-1" +} +` + +const testAccESDomainConfig_complex = ` +resource "aws_elasticsearch_domain" "example" { + domain_name = "tf-test-2" + + advanced_options { + "indices.fielddata.cache.size" = 80 + } + + ebs_options { + ebs_enabled = false + } + + cluster_config { + instance_count = 2 + zone_awareness_enabled = true + } + + snapshot_options { + automated_snapshot_start_hour = 23 + } +} +` diff --git a/builtin/providers/aws/resource_aws_elb.go b/builtin/providers/aws/resource_aws_elb.go index a57fc840a..9955c7cf0 100644 --- a/builtin/providers/aws/resource_aws_elb.go +++ b/builtin/providers/aws/resource_aws_elb.go @@ -24,31 +24,11 @@ func resourceAwsElb() *schema.Resource { Schema: map[string]*schema.Schema{ "name": &schema.Schema{ - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, - ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { - value := v.(string) - if !regexp.MustCompile(`^[0-9A-Za-z-]+$`).MatchString(value) { - errors = append(errors, fmt.Errorf( - "only alphanumeric characters and hyphens allowed in %q: %q", - k, value)) - } - if len(value) > 32 { - errors = append(errors, fmt.Errorf( - "%q cannot be longer than 32 characters: %q", k, value)) - } - if regexp.MustCompile(`^-`).MatchString(value) { - errors = append(errors, fmt.Errorf( - "%q cannot begin with a hyphen: %q", k, value)) - } - if regexp.MustCompile(`-$`).MatchString(value) { - errors = append(errors, fmt.Errorf( - "%q cannot end with a hyphen: %q", k, value)) - } - return - }, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ValidateFunc: validateElbName, }, "internal": &schema.Schema{ @@ -591,3 +571,26 @@ func isLoadBalancerNotFound(err error) bool { elberr, ok := err.(awserr.Error) return ok && elberr.Code() == "LoadBalancerNotFound" } + +func validateElbName(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[0-9A-Za-z-]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only alphanumeric characters and hyphens allowed in %q: %q", + k, value)) + } + if len(value) > 32 { + errors = append(errors, fmt.Errorf( + "%q cannot be longer than 32 characters: %q", k, value)) + } + if regexp.MustCompile(`^-`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q cannot 
begin with a hyphen: %q", k, value)) + } + if regexp.MustCompile(`-$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q cannot end with a hyphen: %q", k, value)) + } + return + +} diff --git a/builtin/providers/aws/resource_aws_elb_test.go b/builtin/providers/aws/resource_aws_elb_test.go index 941b2fdef..dadf4aba3 100644 --- a/builtin/providers/aws/resource_aws_elb_test.go +++ b/builtin/providers/aws/resource_aws_elb_test.go @@ -431,12 +431,48 @@ func TestResourceAwsElbListenerHash(t *testing.T) { for tn, tc := range cases { leftHash := resourceAwsElbListenerHash(tc.Left) rightHash := resourceAwsElbListenerHash(tc.Right) - if (leftHash == rightHash) != tc.Match { + if leftHash == rightHash != tc.Match { t.Fatalf("%s: expected match: %t, but did not get it", tn, tc.Match) } } } +func TestResourceAWSELB_validateElbNameCannotBeginWithHyphen(t *testing.T) { + var elbName = "-Testing123" + _, errors := validateElbName(elbName, "SampleKey") + + if len(errors) != 1 { + t.Fatalf("Expected the ELB Name to trigger a validation error") + } +} + +func TestResourceAWSELB_validateElbNameCannotBeLongerThen32Characters(t *testing.T) { + var elbName = "Testing123dddddddddddddddddddvvvv" + _, errors := validateElbName(elbName, "SampleKey") + + if len(errors) != 1 { + t.Fatalf("Expected the ELB Name to trigger a validation error") + } +} + +func TestResourceAWSELB_validateElbNameCannotHaveSpecialCharacters(t *testing.T) { + var elbName = "Testing123%%" + _, errors := validateElbName(elbName, "SampleKey") + + if len(errors) != 1 { + t.Fatalf("Expected the ELB Name to trigger a validation error") + } +} + +func TestResourceAWSELB_validateElbNameCannotEndWithHyphen(t *testing.T) { + var elbName = "Testing123-" + _, errors := validateElbName(elbName, "SampleKey") + + if len(errors) != 1 { + t.Fatalf("Expected the ELB Name to trigger a validation error") + } +} + func testAccCheckAWSELBDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).elbconn diff --git a/builtin/providers/aws/resource_aws_glacier_vault.go b/builtin/providers/aws/resource_aws_glacier_vault.go new file mode 100644 index 000000000..21ac4d7cc --- /dev/null +++ b/builtin/providers/aws/resource_aws_glacier_vault.go @@ -0,0 +1,387 @@ +package aws + +import ( + "fmt" + "log" + "regexp" + + "github.com/hashicorp/terraform/helper/schema" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/glacier" +) + +func resourceAwsGlacierVault() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsGlacierVaultCreate, + Read: resourceAwsGlacierVaultRead, + Update: resourceAwsGlacierVaultUpdate, + Delete: resourceAwsGlacierVaultDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[.0-9A-Za-z-_]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only alphanumeric characters, hyphens, underscores, and periods allowed in %q", k)) + } + if len(value) > 255 { + errors = append(errors, fmt.Errorf( + "%q cannot be longer than 255 characters", k)) + } + return + }, + }, + + "location": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "arn": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "access_policy": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + StateFunc: normalizeJson, + 
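+ // normalizeJson is used as a StateFunc so that two policy documents
+ // differing only in whitespace or key order normalize to the same
+ // string in state and produce an empty diff. A minimal sketch of the
+ // idea (hypothetical standalone helper, not the provider's actual
+ // implementation):
+ //
+ //	func normalizeJsonSketch(s string) string {
+ //		var v interface{}
+ //		if err := json.Unmarshal([]byte(s), &v); err != nil {
+ //			return s // leave unparseable input untouched
+ //		}
+ //		b, _ := json.Marshal(v) // maps re-encode with sorted keys
+ //		return string(b)
+ //	}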
}, + + "notification": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "events": &schema.Schema{ + Type: schema.TypeSet, + Required: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + "sns_topic": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + + "tags": tagsSchema(), + }, + } +} + +func resourceAwsGlacierVaultCreate(d *schema.ResourceData, meta interface{}) error { + glacierconn := meta.(*AWSClient).glacierconn + + input := &glacier.CreateVaultInput{ + VaultName: aws.String(d.Get("name").(string)), + } + + out, err := glacierconn.CreateVault(input) + if err != nil { + return fmt.Errorf("Error creating Glacier Vault: %s", err) + } + + d.SetId(d.Get("name").(string)) + d.Set("location", *out.Location) + + return resourceAwsGlacierVaultUpdate(d, meta) +} + +func resourceAwsGlacierVaultUpdate(d *schema.ResourceData, meta interface{}) error { + glacierconn := meta.(*AWSClient).glacierconn + + if err := setGlacierVaultTags(glacierconn, d); err != nil { + return err + } + + if d.HasChange("access_policy") { + if err := resourceAwsGlacierVaultPolicyUpdate(glacierconn, d); err != nil { + return err + } + } + + if d.HasChange("notification") { + if err := resourceAwsGlacierVaultNotificationUpdate(glacierconn, d); err != nil { + return err + } + } + + return resourceAwsGlacierVaultRead(d, meta) +} + +func resourceAwsGlacierVaultRead(d *schema.ResourceData, meta interface{}) error { + glacierconn := meta.(*AWSClient).glacierconn + + input := &glacier.DescribeVaultInput{ + VaultName: aws.String(d.Id()), + } + + out, err := glacierconn.DescribeVault(input) + if err != nil { + return fmt.Errorf("Error reading Glacier Vault: %s", err.Error()) + } + + d.Set("arn", *out.VaultARN) + + tags, err := getGlacierVaultTags(glacierconn, d.Id()) + if err != nil { + return err + } + d.Set("tags", tags) + + log.Printf("[DEBUG] Getting the access_policy for Vault %s", d.Id()) + pol, err := glacierconn.GetVaultAccessPolicy(&glacier.GetVaultAccessPolicyInput{ + VaultName: aws.String(d.Id()), + }) + + if awserr, ok := err.(awserr.Error); ok && awserr.Code() == "ResourceNotFoundException" { + d.Set("access_policy", "") + } else if pol != nil { + d.Set("access_policy", normalizeJson(*pol.Policy.Policy)) + } else { + return err + } + + notifications, err := getGlacierVaultNotification(glacierconn, d.Id()) + if awserr, ok := err.(awserr.Error); ok && awserr.Code() == "ResourceNotFoundException" { + d.Set("notification", "") + } else if err == nil { + d.Set("notification", notifications) + } else { + return err + } + + return nil +} + +func resourceAwsGlacierVaultDelete(d *schema.ResourceData, meta interface{}) error { + glacierconn := meta.(*AWSClient).glacierconn + + log.Printf("[DEBUG] Glacier Delete Vault: %s", d.Id()) + _, err := glacierconn.DeleteVault(&glacier.DeleteVaultInput{ + VaultName: aws.String(d.Id()), + }) + if err != nil { + return fmt.Errorf("Error deleting Glacier Vault: %s", err.Error()) + } + return nil +} + +func resourceAwsGlacierVaultNotificationUpdate(glacierconn *glacier.Glacier, d *schema.ResourceData) error { + + if v, ok := d.GetOk("notification"); ok { + settings := v.([]interface{}) + + if len(settings) > 1 { + return fmt.Errorf("Only a single Notification Block is allowed for Glacier Vault") + } else if len(settings) == 1 { + s := settings[0].(map[string]interface{}) + var events []*string + for _, id := range s["events"].(*schema.Set).List() { + 
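+ // Each element of the "events" TypeSet arrives as an interface{}, so
+ // it is asserted to string and wrapped with aws.String to build the
+ // []*string slice that SetVaultNotifications expects.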
events = append(events, aws.String(id.(string))) + } + + _, err := glacierconn.SetVaultNotifications(&glacier.SetVaultNotificationsInput{ + VaultName: aws.String(d.Id()), + VaultNotificationConfig: &glacier.VaultNotificationConfig{ + SNSTopic: aws.String(s["sns_topic"].(string)), + Events: events, + }, + }) + + if err != nil { + return fmt.Errorf("Error Updating Glacier Vault Notifications: %s", err.Error()) + } + } + } else { + _, err := glacierconn.DeleteVaultNotifications(&glacier.DeleteVaultNotificationsInput{ + VaultName: aws.String(d.Id()), + }) + + if err != nil { + return fmt.Errorf("Error Removing Glacier Vault Notifications: %s", err.Error()) + } + + } + + return nil +} + +func resourceAwsGlacierVaultPolicyUpdate(glacierconn *glacier.Glacier, d *schema.ResourceData) error { + vaultName := d.Id() + policyContents := d.Get("access_policy").(string) + + policy := &glacier.VaultAccessPolicy{ + Policy: aws.String(policyContents), + } + + if policyContents != "" { + log.Printf("[DEBUG] Glacier Vault: %s, put policy", vaultName) + + _, err := glacierconn.SetVaultAccessPolicy(&glacier.SetVaultAccessPolicyInput{ + VaultName: aws.String(d.Id()), + Policy: policy, + }) + + if err != nil { + return fmt.Errorf("Error putting Glacier Vault policy: %s", err.Error()) + } + } else { + log.Printf("[DEBUG] Glacier Vault: %s, delete policy: %s", vaultName, policy) + _, err := glacierconn.DeleteVaultAccessPolicy(&glacier.DeleteVaultAccessPolicyInput{ + VaultName: aws.String(d.Id()), + }) + + if err != nil { + return fmt.Errorf("Error deleting Glacier Vault policy: %s", err.Error()) + } + } + + return nil +} + +func setGlacierVaultTags(conn *glacier.Glacier, d *schema.ResourceData) error { + if d.HasChange("tags") { + oraw, nraw := d.GetChange("tags") + o := oraw.(map[string]interface{}) + n := nraw.(map[string]interface{}) + create, remove := diffGlacierVaultTags(mapGlacierVaultTags(o), mapGlacierVaultTags(n)) + + // Set tags + if len(remove) > 0 { + tagsToRemove := &glacier.RemoveTagsFromVaultInput{ + VaultName: aws.String(d.Id()), + TagKeys: glacierStringsToPointyString(remove), + } + + log.Printf("[DEBUG] Removing tags: from %s", d.Id()) + _, err := conn.RemoveTagsFromVault(tagsToRemove) + if err != nil { + return err + } + } + if len(create) > 0 { + tagsToAdd := &glacier.AddTagsToVaultInput{ + VaultName: aws.String(d.Id()), + Tags: glacierVaultTagsFromMap(create), + } + + log.Printf("[DEBUG] Creating tags: for %s", d.Id()) + _, err := conn.AddTagsToVault(tagsToAdd) + if err != nil { + return err + } + } + } + + return nil +} + +func mapGlacierVaultTags(m map[string]interface{}) map[string]string { + results := make(map[string]string) + for k, v := range m { + results[k] = v.(string) + } + + return results +} + +func diffGlacierVaultTags(oldTags, newTags map[string]string) (map[string]string, []string) { + + create := make(map[string]string) + for k, v := range newTags { + create[k] = v + } + + // Build the list of what to remove + var remove []string + for k, v := range oldTags { + old, ok := create[k] + if !ok || old != v { + // Delete it! 
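+ // Example: old {"foo":"bar"} and new {"foo":"baz"} yield
+ // create = {"foo":"baz"} and remove = ["foo"]; a changed value is
+ // removed and re-added, since create starts as a copy of all new tags.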
+ remove = append(remove, k) + } + } + + return create, remove +} + +func getGlacierVaultTags(glacierconn *glacier.Glacier, vaultName string) (map[string]string, error) { + request := &glacier.ListTagsForVaultInput{ + VaultName: aws.String(vaultName), + } + + log.Printf("[DEBUG] Getting the tags: for %s", vaultName) + response, err := glacierconn.ListTagsForVault(request) + if awserr, ok := err.(awserr.Error); ok && awserr.Code() == "NoSuchTagSet" { + return map[string]string{}, nil + } else if err != nil { + return nil, err + } + + return glacierVaultTagsToMap(response.Tags), nil +} + +func glacierVaultTagsToMap(responseTags map[string]*string) map[string]string { + results := make(map[string]string, len(responseTags)) + for k, v := range responseTags { + results[k] = *v + } + + return results +} + +func glacierVaultTagsFromMap(responseTags map[string]string) map[string]*string { + results := make(map[string]*string, len(responseTags)) + for k, v := range responseTags { + results[k] = aws.String(v) + } + + return results +} + +func glacierStringsToPointyString(s []string) []*string { + results := make([]*string, len(s)) + for i, x := range s { + results[i] = aws.String(x) + } + + return results +} + +func glacierPointersToStringList(pointers []*string) []interface{} { + list := make([]interface{}, len(pointers)) + for i, v := range pointers { + list[i] = *v + } + return list +} + +func getGlacierVaultNotification(glacierconn *glacier.Glacier, vaultName string) ([]map[string]interface{}, error) { + request := &glacier.GetVaultNotificationsInput{ + VaultName: aws.String(vaultName), + } + + response, err := glacierconn.GetVaultNotifications(request) + if err != nil { + return nil, fmt.Errorf("Error reading Glacier Vault Notifications: %s", err.Error()) + } + + notifications := make(map[string]interface{}, 0) + + log.Print("[DEBUG] Flattening Glacier Vault Notifications") + + notifications["events"] = schema.NewSet(schema.HashString, glacierPointersToStringList(response.VaultNotificationConfig.Events)) + notifications["sns_topic"] = *response.VaultNotificationConfig.SNSTopic + + return []map[string]interface{}{notifications}, nil +} diff --git a/builtin/providers/aws/resource_aws_glacier_vault_test.go b/builtin/providers/aws/resource_aws_glacier_vault_test.go new file mode 100644 index 000000000..4f5c26bf2 --- /dev/null +++ b/builtin/providers/aws/resource_aws_glacier_vault_test.go @@ -0,0 +1,227 @@ +package aws + +import ( + "fmt" + "reflect" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/glacier" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSGlacierVault_basic(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckGlacierVaultDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccGlacierVault_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckGlacierVaultExists("aws_glacier_vault.test"), + ), + }, + }, + }) +} + +func TestAccAWSGlacierVault_full(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckGlacierVaultDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccGlacierVault_full, + Check: resource.ComposeTestCheckFunc( + testAccCheckGlacierVaultExists("aws_glacier_vault.full"), + ), + 
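+ // The RemoveNotifications test below applies the same vault again
+ // without a notification block, which should exercise the
+ // DeleteVaultNotifications branch of the notification update path.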
}, + }, + }) +} + +func TestAccAWSGlacierVault_RemoveNotifications(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckGlacierVaultDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccGlacierVault_full, + Check: resource.ComposeTestCheckFunc( + testAccCheckGlacierVaultExists("aws_glacier_vault.full"), + ), + }, + resource.TestStep{ + Config: testAccGlacierVault_withoutNotification, + Check: resource.ComposeTestCheckFunc( + testAccCheckGlacierVaultExists("aws_glacier_vault.full"), + testAccCheckVaultNotificationsMissing("aws_glacier_vault.full"), + ), + }, + }, + }) +} + +func TestDiffGlacierVaultTags(t *testing.T) { + cases := []struct { + Old, New map[string]interface{} + Create map[string]string + Remove []string + }{ + // Basic add/remove + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "bar": "baz", + }, + Create: map[string]string{ + "bar": "baz", + }, + Remove: []string{ + "foo", + }, + }, + + // Modify + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "foo": "baz", + }, + Create: map[string]string{ + "foo": "baz", + }, + Remove: []string{ + "foo", + }, + }, + } + + for i, tc := range cases { + c, r := diffGlacierVaultTags(mapGlacierVaultTags(tc.Old), mapGlacierVaultTags(tc.New)) + + if !reflect.DeepEqual(c, tc.Create) { + t.Fatalf("%d: bad create: %#v", i, c) + } + if !reflect.DeepEqual(r, tc.Remove) { + t.Fatalf("%d: bad remove: %#v", i, r) + } + } +} + +func testAccCheckGlacierVaultExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + glacierconn := testAccProvider.Meta().(*AWSClient).glacierconn + out, err := glacierconn.DescribeVault(&glacier.DescribeVaultInput{ + VaultName: aws.String(rs.Primary.ID), + }) + + if err != nil { + return err + } + + if out.VaultARN == nil { + return fmt.Errorf("No Glacier Vault Found") + } + + if *out.VaultName != rs.Primary.ID { + return fmt.Errorf("Glacier Vault Mismatch - existing: %q, state: %q", + *out.VaultName, rs.Primary.ID) + } + + return nil + } +} + +func testAccCheckVaultNotificationsMissing(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + glacierconn := testAccProvider.Meta().(*AWSClient).glacierconn + out, err := glacierconn.GetVaultNotifications(&glacier.GetVaultNotificationsInput{ + VaultName: aws.String(rs.Primary.ID), + }) + + if awserr, ok := err.(awserr.Error); ok && awserr.Code() != "ResourceNotFoundException" { + return fmt.Errorf("Expected ResourceNotFoundException for Vault %s Notification Block but got %s", rs.Primary.ID, awserr.Code()) + } + + if out.VaultNotificationConfig != nil { + return fmt.Errorf("Vault Notification Block has been found for %s", rs.Primary.ID) + } + + return nil + } + +} + +func testAccCheckGlacierVaultDestroy(s *terraform.State) error { + if len(s.RootModule().Resources) > 0 { + return fmt.Errorf("Expected all resources to be gone, but found: %#v", + s.RootModule().Resources) + } + + return nil +} + +const testAccGlacierVault_basic = ` +resource "aws_glacier_vault" "test" { + name = 
"my_test_vault" +} +` + +const testAccGlacierVault_full = ` +resource "aws_sns_topic" "aws_sns_topic" { + name = "glacier-sns-topic" +} + +resource "aws_glacier_vault" "full" { + name = "my_test_vault" + notification { + sns_topic = "${aws_sns_topic.aws_sns_topic.arn}" + events = ["ArchiveRetrievalCompleted","InventoryRetrievalCompleted"] + } + tags { + Test="Test1" + } +} +` + +const testAccGlacierVault_withoutNotification = ` +resource "aws_sns_topic" "aws_sns_topic" { + name = "glacier-sns-topic" +} + +resource "aws_glacier_vault" "full" { + name = "my_test_vault" + tags { + Test="Test1" + } +} +` diff --git a/builtin/providers/aws/resource_aws_iam_policy_attachment_test.go b/builtin/providers/aws/resource_aws_iam_policy_attachment_test.go index a68d956a5..11e50b0d9 100644 --- a/builtin/providers/aws/resource_aws_iam_policy_attachment_test.go +++ b/builtin/providers/aws/resource_aws_iam_policy_attachment_test.go @@ -102,7 +102,7 @@ func testAccCheckAWSPolicyAttachmentAttributes(users []string, roles []string, g } } if uc != 0 || rc != 0 || gc != 0 { - return fmt.Errorf("Error: Number of attached users, roles, or groups was incorrect:\n expected %d users and found %d\nexpected %d roles and found %d\nexpected %d groups and found %d", len(users), (len(users) - uc), len(roles), (len(roles) - rc), len(groups), (len(groups) - gc)) + return fmt.Errorf("Error: Number of attached users, roles, or groups was incorrect:\n expected %d users and found %d\nexpected %d roles and found %d\nexpected %d groups and found %d", len(users), len(users)-uc, len(roles), len(roles)-rc, len(groups), len(groups)-gc) } return nil } diff --git a/builtin/providers/aws/resource_aws_iam_saml_provider.go b/builtin/providers/aws/resource_aws_iam_saml_provider.go new file mode 100644 index 000000000..6a166d711 --- /dev/null +++ b/builtin/providers/aws/resource_aws_iam_saml_provider.go @@ -0,0 +1,101 @@ +package aws + +import ( + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/iam" + + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsIamSamlProvider() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsIamSamlProviderCreate, + Read: resourceAwsIamSamlProviderRead, + Update: resourceAwsIamSamlProviderUpdate, + Delete: resourceAwsIamSamlProviderDelete, + + Schema: map[string]*schema.Schema{ + "arn": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + "valid_until": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "saml_metadata_document": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + }, + } +} + +func resourceAwsIamSamlProviderCreate(d *schema.ResourceData, meta interface{}) error { + iamconn := meta.(*AWSClient).iamconn + + input := &iam.CreateSAMLProviderInput{ + Name: aws.String(d.Get("name").(string)), + SAMLMetadataDocument: aws.String(d.Get("saml_metadata_document").(string)), + } + + out, err := iamconn.CreateSAMLProvider(input) + if err != nil { + return err + } + + d.SetId(*out.SAMLProviderArn) + + return resourceAwsIamSamlProviderRead(d, meta) +} + +func resourceAwsIamSamlProviderRead(d *schema.ResourceData, meta interface{}) error { + iamconn := meta.(*AWSClient).iamconn + + input := &iam.GetSAMLProviderInput{ + SAMLProviderArn: aws.String(d.Id()), + } + out, err := iamconn.GetSAMLProvider(input) + if err != nil { + return err + } + + validUntil := out.ValidUntil.Format(time.RFC1123) + 
d.Set("valid_until", validUntil) + d.Set("saml_metadata_document", *out.SAMLMetadataDocument) + + return nil +} + +func resourceAwsIamSamlProviderUpdate(d *schema.ResourceData, meta interface{}) error { + iamconn := meta.(*AWSClient).iamconn + + input := &iam.UpdateSAMLProviderInput{ + SAMLProviderArn: aws.String(d.Id()), + SAMLMetadataDocument: aws.String(d.Get("saml_metadata_document").(string)), + } + _, err := iamconn.UpdateSAMLProvider(input) + if err != nil { + return err + } + + return resourceAwsIamSamlProviderRead(d, meta) +} + +func resourceAwsIamSamlProviderDelete(d *schema.ResourceData, meta interface{}) error { + iamconn := meta.(*AWSClient).iamconn + + input := &iam.DeleteSAMLProviderInput{ + SAMLProviderArn: aws.String(d.Id()), + } + _, err := iamconn.DeleteSAMLProvider(input) + + return err +} diff --git a/builtin/providers/aws/resource_aws_iam_saml_provider_test.go b/builtin/providers/aws/resource_aws_iam_saml_provider_test.go new file mode 100644 index 000000000..63ed39588 --- /dev/null +++ b/builtin/providers/aws/resource_aws_iam_saml_provider_test.go @@ -0,0 +1,79 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/iam" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSIAMSamlProvider_basic(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckIAMSamlProviderDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccIAMSamlProviderConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckIAMSamlProvider("aws_iam_saml_provider.salesforce"), + ), + }, + resource.TestStep{ + Config: testAccIAMSamlProviderConfigUpdate, + Check: resource.ComposeTestCheckFunc( + testAccCheckIAMSamlProvider("aws_iam_saml_provider.salesforce"), + ), + }, + }, + }) +} + +func testAccCheckIAMSamlProviderDestroy(s *terraform.State) error { + if len(s.RootModule().Resources) > 0 { + return fmt.Errorf("Expected all resources to be gone, but found: %#v", s.RootModule().Resources) + } + + return nil +} + +func testAccCheckIAMSamlProvider(id string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[id] + if !ok { + return fmt.Errorf("Not Found: %s", id) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + iamconn := testAccProvider.Meta().(*AWSClient).iamconn + _, err := iamconn.GetSAMLProvider(&iam.GetSAMLProviderInput{ + SAMLProviderArn: aws.String(rs.Primary.ID), + }) + + if err != nil { + return err + } + + return nil + } +} + +const testAccIAMSamlProviderConfig = ` +resource "aws_iam_saml_provider" "salesforce" { + name = "tf-salesforce-test" + saml_metadata_document = "${file("./test-fixtures/saml-metadata.xml")}" +} +` + +const testAccIAMSamlProviderConfigUpdate = ` +resource "aws_iam_saml_provider" "salesforce" { + name = "tf-salesforce-test" + saml_metadata_document = "${file("./test-fixtures/saml-metadata-modified.xml")}" +} +` diff --git a/builtin/providers/aws/resource_aws_instance.go b/builtin/providers/aws/resource_aws_instance.go index 093b6ae86..d096a45d6 100644 --- a/builtin/providers/aws/resource_aws_instance.go +++ b/builtin/providers/aws/resource_aws_instance.go @@ -414,11 +414,6 @@ func resourceAwsInstanceCreate(d *schema.ResourceData, meta interface{}) error { }) } - // Set our attributes - if err := resourceAwsInstanceRead(d, meta); err != nil { - 
return err - } - // Update if we need to return resourceAwsInstanceUpdate(d, meta) } @@ -548,16 +543,23 @@ func resourceAwsInstanceUpdate(d *schema.ResourceData, meta interface{}) error { } // SourceDestCheck can only be set on VPC instances - if d.Get("subnet_id").(string) != "" { - log.Printf("[INFO] Modifying instance %s", d.Id()) - _, err := conn.ModifyInstanceAttribute(&ec2.ModifyInstanceAttributeInput{ - InstanceId: aws.String(d.Id()), - SourceDestCheck: &ec2.AttributeBooleanValue{ - Value: aws.Bool(d.Get("source_dest_check").(bool)), - }, - }) - if err != nil { - return err + // AWS will return an error of InvalidParameterCombination if we attempt + // to modify the source_dest_check of an instance in EC2 Classic + log.Printf("[INFO] Modifying instance %s", d.Id()) + _, err := conn.ModifyInstanceAttribute(&ec2.ModifyInstanceAttributeInput{ + InstanceId: aws.String(d.Id()), + SourceDestCheck: &ec2.AttributeBooleanValue{ + Value: aws.Bool(d.Get("source_dest_check").(bool)), + }, + }) + if err != nil { + if ec2err, ok := err.(awserr.Error); ok { + // Tolerate the InvalidParameterCombination error in Classic, otherwise + // return the error + if "InvalidParameterCombination" != ec2err.Code() { + return err + } + log.Printf("[WARN] Attempted to modify SourceDestCheck on non-VPC instance: %s", ec2err.Message()) } } @@ -693,7 +695,7 @@ func readBlockDevicesFromInstance(instance *ec2.Instance, conn *ec2.EC2) (map[st instanceBlockDevices := make(map[string]*ec2.InstanceBlockDeviceMapping) for _, bd := range instance.BlockDeviceMappings { if bd.Ebs != nil { - instanceBlockDevices[*(bd.Ebs.VolumeId)] = bd + instanceBlockDevices[*bd.Ebs.VolumeId] = bd } } @@ -753,9 +755,9 @@ func readBlockDevicesFromInstance(instance *ec2.Instance, conn *ec2.EC2) (map[st } func blockDeviceIsRoot(bd *ec2.InstanceBlockDeviceMapping, instance *ec2.Instance) bool { - return (bd.DeviceName != nil && + return bd.DeviceName != nil && instance.RootDeviceName != nil && - *bd.DeviceName == *instance.RootDeviceName) + *bd.DeviceName == *instance.RootDeviceName } func fetchRootDeviceName(ami string, conn *ec2.EC2) (*string, error) { diff --git a/builtin/providers/aws/resource_aws_instance_test.go b/builtin/providers/aws/resource_aws_instance_test.go index 258320d54..3224f9b5e 100644 --- a/builtin/providers/aws/resource_aws_instance_test.go +++ b/builtin/providers/aws/resource_aws_instance_test.go @@ -190,6 +190,9 @@ func TestAccAWSInstance_sourceDestCheck(t *testing.T) { testCheck := func(enabled bool) resource.TestCheckFunc { return func(*terraform.State) error { + if v.SourceDestCheck == nil { + return fmt.Errorf("bad source_dest_check: got nil") + } if *v.SourceDestCheck != enabled { return fmt.Errorf("bad source_dest_check: %#v", *v.SourceDestCheck) } diff --git a/builtin/providers/aws/resource_aws_key_pair.go b/builtin/providers/aws/resource_aws_key_pair.go index e747fbfc5..0d6c51fcf 100644 --- a/builtin/providers/aws/resource_aws_key_pair.go +++ b/builtin/providers/aws/resource_aws_key_pair.go @@ -2,6 +2,7 @@ package aws import ( "fmt" + "strings" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" @@ -18,6 +19,9 @@ func resourceAwsKeyPair() *schema.Resource { Update: nil, Delete: resourceAwsKeyPairDelete, + SchemaVersion: 1, + MigrateState: resourceAwsKeyPairMigrateState, + Schema: map[string]*schema.Schema{ "key_name": &schema.Schema{ Type: schema.TypeString, @@ -29,6 +33,14 @@ func resourceAwsKeyPair() *schema.Resource { Type: schema.TypeString, Required: true, ForceNew: 
true, + StateFunc: func(v interface{}) string { + switch v.(type) { + case string: + return strings.TrimSpace(v.(string)) + default: + return "" + } + }, }, "fingerprint": &schema.Schema{ Type: schema.TypeString, @@ -45,6 +57,7 @@ func resourceAwsKeyPairCreate(d *schema.ResourceData, meta interface{}) error { if keyName == "" { keyName = resource.UniqueId() } + publicKey := d.Get("public_key").(string) req := &ec2.ImportKeyPairInput{ KeyName: aws.String(keyName), diff --git a/builtin/providers/aws/resource_aws_key_pair_migrate.go b/builtin/providers/aws/resource_aws_key_pair_migrate.go new file mode 100644 index 000000000..0d56123aa --- /dev/null +++ b/builtin/providers/aws/resource_aws_key_pair_migrate.go @@ -0,0 +1,38 @@ +package aws + +import ( + "fmt" + "log" + "strings" + + "github.com/hashicorp/terraform/terraform" +) + +func resourceAwsKeyPairMigrateState( + v int, is *terraform.InstanceState, meta interface{}) (*terraform.InstanceState, error) { + switch v { + case 0: + log.Println("[INFO] Found AWS Key Pair State v0; migrating to v1") + return migrateKeyPairStateV0toV1(is) + default: + return is, fmt.Errorf("Unexpected schema version: %d", v) + } + + return is, nil +} + +func migrateKeyPairStateV0toV1(is *terraform.InstanceState) (*terraform.InstanceState, error) { + if is.Empty() { + log.Println("[DEBUG] Empty InstanceState; nothing to migrate.") + return is, nil + } + + log.Printf("[DEBUG] Attributes before migration: %#v", is.Attributes) + + // replace public_key with a stripped version, removing `\n` from the end + // see https://github.com/hashicorp/terraform/issues/3455 + is.Attributes["public_key"] = strings.TrimSpace(is.Attributes["public_key"]) + + log.Printf("[DEBUG] Attributes after migration: %#v", is.Attributes) + return is, nil +} diff --git a/builtin/providers/aws/resource_aws_key_pair_migrate_test.go b/builtin/providers/aws/resource_aws_key_pair_migrate_test.go new file mode 100644 index 000000000..825d3c40f --- /dev/null +++ b/builtin/providers/aws/resource_aws_key_pair_migrate_test.go @@ -0,0 +1,55 @@ +package aws + +import ( + "testing" + + "github.com/hashicorp/terraform/terraform" +) + +func TestAWSKeyPairMigrateState(t *testing.T) { + cases := map[string]struct { + StateVersion int + ID string + Attributes map[string]string + Expected string + Meta interface{} + }{ + "v0_1": { + StateVersion: 0, + ID: "tf-testing-file", + Attributes: map[string]string{ + "fingerprint": "1d:cd:46:31:a9:4a:e0:06:8a:a1:22:cb:3b:bf:8e:42", + "key_name": "tf-testing-file", + "public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4LBtwcFsQAYWw1cnOwRTZCJCzPSzq0dl3== ctshryock", + }, + Expected: "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4LBtwcFsQAYWw1cnOwRTZCJCzPSzq0dl3== ctshryock", + }, + "v0_2": { + StateVersion: 0, + ID: "tf-testing-file", + Attributes: map[string]string{ + "fingerprint": "1d:cd:46:31:a9:4a:e0:06:8a:a1:22:cb:3b:bf:8e:42", + "key_name": "tf-testing-file", + "public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4LBtwcFsQAYWw1cnOwRTZCJCzPSzq0dl3== ctshryock\n", + }, + Expected: "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4LBtwcFsQAYWw1cnOwRTZCJCzPSzq0dl3== ctshryock", + }, + } + + for tn, tc := range cases { + is := &terraform.InstanceState{ + ID: tc.ID, + Attributes: tc.Attributes, + } + is, err := resourceAwsKeyPairMigrateState( + tc.StateVersion, is, tc.Meta) + + if err != nil { + t.Fatalf("bad: %s, err: %#v", tn, err) + } + + if is.Attributes["public_key"] != tc.Expected { + t.Fatalf("Bad public_key migration: %s\n\n expected: %s", is.Attributes["public_key"], tc.Expected) + } + } 
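+ // Note that the migration is idempotent: strings.TrimSpace leaves the
+ // already-clean "v0_1" fixture unchanged and only strips the trailing
+ // "\n" from "v0_2", so re-running it on migrated state is harmless.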
+} diff --git a/builtin/providers/aws/resource_aws_kinesis_stream.go b/builtin/providers/aws/resource_aws_kinesis_stream.go index 45d685c1d..1abb9dbc3 100644 --- a/builtin/providers/aws/resource_aws_kinesis_stream.go +++ b/builtin/providers/aws/resource_aws_kinesis_stream.go @@ -2,6 +2,7 @@ package aws import ( "fmt" + "log" "time" "github.com/aws/aws-sdk-go/aws" @@ -15,6 +16,7 @@ func resourceAwsKinesisStream() *schema.Resource { return &schema.Resource{ Create: resourceAwsKinesisStreamCreate, Read: resourceAwsKinesisStreamRead, + Update: resourceAwsKinesisStreamUpdate, Delete: resourceAwsKinesisStreamDelete, Schema: map[string]*schema.Schema{ @@ -35,6 +37,7 @@ func resourceAwsKinesisStream() *schema.Resource { Optional: true, Computed: true, }, + "tags": tagsSchema(), }, } } @@ -75,13 +78,28 @@ func resourceAwsKinesisStreamCreate(d *schema.ResourceData, meta interface{}) er d.SetId(*s.StreamARN) d.Set("arn", s.StreamARN) - return nil + return resourceAwsKinesisStreamUpdate(d, meta) +} + +func resourceAwsKinesisStreamUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).kinesisconn + + d.Partial(true) + if err := setTagsKinesis(conn, d); err != nil { + return err + } + + d.SetPartial("tags") + d.Partial(false) + + return resourceAwsKinesisStreamRead(d, meta) } func resourceAwsKinesisStreamRead(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).kinesisconn + sn := d.Get("name").(string) describeOpts := &kinesis.DescribeStreamInput{ - StreamName: aws.String(d.Get("name").(string)), + StreamName: aws.String(sn), } resp, err := conn.DescribeStream(describeOpts) if err != nil { @@ -99,6 +117,17 @@ func resourceAwsKinesisStreamRead(d *schema.ResourceData, meta interface{}) erro d.Set("arn", *s.StreamARN) d.Set("shard_count", len(s.Shards)) + // set tags + describeTagsOpts := &kinesis.ListTagsForStreamInput{ + StreamName: aws.String(sn), + } + tagsResp, err := conn.ListTagsForStream(describeTagsOpts) + if err != nil { + log.Printf("[DEBUG] Error retrieving tags for Stream: %s. %s", sn, err) + } else { + d.Set("tags", tagsToMapKinesis(tagsResp.Tags)) + } + return nil } diff --git a/builtin/providers/aws/resource_aws_kinesis_stream_test.go b/builtin/providers/aws/resource_aws_kinesis_stream_test.go index c9580ad22..82c0b64fa 100644 --- a/builtin/providers/aws/resource_aws_kinesis_stream_test.go +++ b/builtin/providers/aws/resource_aws_kinesis_stream_test.go @@ -107,5 +107,8 @@ var testAccKinesisStreamConfig = fmt.Sprintf(` resource "aws_kinesis_stream" "test_stream" { name = "terraform-kinesis-test-%d" shard_count = 2 + tags { + Name = "tf-test" + } } `, rand.New(rand.NewSource(time.Now().UnixNano())).Int()) diff --git a/builtin/providers/aws/resource_aws_lb_cookie_stickiness_policy.go b/builtin/providers/aws/resource_aws_lb_cookie_stickiness_policy.go index 50c6186de..bed01aadd 100644 --- a/builtin/providers/aws/resource_aws_lb_cookie_stickiness_policy.go +++ b/builtin/providers/aws/resource_aws_lb_cookie_stickiness_policy.go @@ -15,8 +15,6 @@ func resourceAwsLBCookieStickinessPolicy() *schema.Resource { // There is no concept of "updating" an LB Stickiness policy in // the AWS API. 
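+ // Without an Update function, helper/schema can only apply changes by
+ // destroying and re-creating the policy, which relies on the
+ // remaining arguments being ForceNew.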
Create: resourceAwsLBCookieStickinessPolicyCreate, - Update: resourceAwsLBCookieStickinessPolicyCreate, - Read: resourceAwsLBCookieStickinessPolicyRead, Delete: resourceAwsLBCookieStickinessPolicyDelete, diff --git a/builtin/providers/aws/resource_aws_opsworks_custom_layer.go b/builtin/providers/aws/resource_aws_opsworks_custom_layer.go new file mode 100644 index 000000000..59de60db6 --- /dev/null +++ b/builtin/providers/aws/resource_aws_opsworks_custom_layer.go @@ -0,0 +1,17 @@ +package aws + +import ( + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsOpsworksCustomLayer() *schema.Resource { + layerType := &opsworksLayerType{ + TypeName: "custom", + CustomShortName: true, + + // The "custom" layer type has no additional attributes + Attributes: map[string]*opsworksLayerTypeAttribute{}, + } + + return layerType.SchemaResource() +} diff --git a/builtin/providers/aws/resource_aws_opsworks_custom_layer_test.go b/builtin/providers/aws/resource_aws_opsworks_custom_layer_test.go new file mode 100644 index 000000000..14a65b106 --- /dev/null +++ b/builtin/providers/aws/resource_aws_opsworks_custom_layer_test.go @@ -0,0 +1,234 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +// These tests assume the existence of predefined Opsworks IAM roles named `aws-opsworks-ec2-role` +// and `aws-opsworks-service-role`. + +func TestAccAwsOpsworksCustomLayer(t *testing.T) { + opsiam := testAccAwsOpsworksStackIam{} + testAccAwsOpsworksStackPopulateIam(t, &opsiam) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsOpsworksCustomLayerDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: fmt.Sprintf(testAccAwsOpsworksCustomLayerConfigCreate, opsiam.ServiceRoleArn, opsiam.InstanceProfileArn), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "name", "tf-ops-acc-custom-layer", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "auto_assign_elastic_ips", "false", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "auto_healing", "true", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "drain_elb_on_shutdown", "true", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "instance_shutdown_timeout", "300", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "custom_security_group_ids.#", "2", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "system_packages.#", "2", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "system_packages.1368285564", "git", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "system_packages.2937857443", "golang", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.#", "1", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.3575749636.type", "gp2", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.3575749636.number_of_disks", "2", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.3575749636.mount_point", "/home", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.3575749636.size", "100", + ), + ), + }, + 
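// Second step: apply the update config in place against the same layer and
+			// re-check the attributes that are expected to change.
+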
resource.TestStep{ + Config: fmt.Sprintf(testAccAwsOpsworksCustomLayerConfigUpdate, opsiam.ServiceRoleArn, opsiam.InstanceProfileArn), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "name", "tf-ops-acc-custom-layer", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "drain_elb_on_shutdown", "false", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "instance_shutdown_timeout", "120", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "custom_security_group_ids.#", "3", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "system_packages.#", "3", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "system_packages.1368285564", "git", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "system_packages.2937857443", "golang", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "system_packages.4101929740", "subversion", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.#", "2", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.3575749636.type", "gp2", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.3575749636.number_of_disks", "2", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.3575749636.mount_point", "/home", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.3575749636.size", "100", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.1266957920.type", "io1", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.1266957920.number_of_disks", "4", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.1266957920.mount_point", "/var", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.1266957920.size", "100", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.1266957920.raid_level", "1", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_custom_layer.tf-acc", "ebs_volume.1266957920.iops", "3000", + ), + ), + }, + }, + }) +} + +func testAccCheckAwsOpsworksCustomLayerDestroy(s *terraform.State) error { + if len(s.RootModule().Resources) > 0 { + return fmt.Errorf("Expected all resources to be gone, but found: %#v", s.RootModule().Resources) + } + + return nil +} + +var testAccAwsOpsworksCustomLayerSecurityGroups = ` +resource "aws_security_group" "tf-ops-acc-layer1" { + name = "tf-ops-acc-layer1" + ingress { + from_port = 8 + to_port = -1 + protocol = "icmp" + cidr_blocks = ["0.0.0.0/0"] + } +} +resource "aws_security_group" "tf-ops-acc-layer2" { + name = "tf-ops-acc-layer2" + ingress { + from_port = 8 + to_port = -1 + protocol = "icmp" + cidr_blocks = ["0.0.0.0/0"] + } +} +` + +var testAccAwsOpsworksCustomLayerConfigCreate = testAccAwsOpsworksStackConfigNoVpcCreate + testAccAwsOpsworksCustomLayerSecurityGroups + ` +resource "aws_opsworks_custom_layer" "tf-acc" { + stack_id = "${aws_opsworks_stack.tf-acc.id}" + name = "tf-ops-acc-custom-layer" + short_name = "tf-ops-acc-custom-layer" + auto_assign_public_ips = true + custom_security_group_ids = [ + "${aws_security_group.tf-ops-acc-layer1.id}", + "${aws_security_group.tf-ops-acc-layer2.id}", + ] + drain_elb_on_shutdown = true + 
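# instance_shutdown_timeout is in seconds: how long OpsWorks waits after the
+  # Shutdown event before force-stopping instances
+  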
instance_shutdown_timeout = 300 + system_packages = [ + "git", + "golang", + ] + ebs_volume { + type = "gp2" + number_of_disks = 2 + mount_point = "/home" + size = 100 + raid_level = 0 + } +} +` + +var testAccAwsOpsworksCustomLayerConfigUpdate = testAccAwsOpsworksStackConfigNoVpcCreate + testAccAwsOpsworksCustomLayerSecurityGroups + ` +resource "aws_security_group" "tf-ops-acc-layer3" { + name = "tf-ops-acc-layer3" + ingress { + from_port = 8 + to_port = -1 + protocol = "icmp" + cidr_blocks = ["0.0.0.0/0"] + } +} +resource "aws_opsworks_custom_layer" "tf-acc" { + stack_id = "${aws_opsworks_stack.tf-acc.id}" + name = "tf-ops-acc-custom-layer" + short_name = "tf-ops-acc-custom-layer" + auto_assign_public_ips = true + custom_security_group_ids = [ + "${aws_security_group.tf-ops-acc-layer1.id}", + "${aws_security_group.tf-ops-acc-layer2.id}", + "${aws_security_group.tf-ops-acc-layer3.id}", + ] + drain_elb_on_shutdown = false + instance_shutdown_timeout = 120 + system_packages = [ + "git", + "golang", + "subversion", + ] + ebs_volume { + type = "gp2" + number_of_disks = 2 + mount_point = "/home" + size = 100 + raid_level = 0 + } + ebs_volume { + type = "io1" + number_of_disks = 4 + mount_point = "/var" + size = 100 + raid_level = 1 + iops = 3000 + } +} +` diff --git a/builtin/providers/aws/resource_aws_opsworks_ganglia_layer.go b/builtin/providers/aws/resource_aws_opsworks_ganglia_layer.go new file mode 100644 index 000000000..24778501c --- /dev/null +++ b/builtin/providers/aws/resource_aws_opsworks_ganglia_layer.go @@ -0,0 +1,33 @@ +package aws + +import ( + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsOpsworksGangliaLayer() *schema.Resource { + layerType := &opsworksLayerType{ + TypeName: "monitoring-master", + DefaultLayerName: "Ganglia", + + Attributes: map[string]*opsworksLayerTypeAttribute{ + "url": &opsworksLayerTypeAttribute{ + AttrName: "GangliaUrl", + Type: schema.TypeString, + Default: "/ganglia", + }, + "username": &opsworksLayerTypeAttribute{ + AttrName: "GangliaUser", + Type: schema.TypeString, + Default: "opsworks", + }, + "password": &opsworksLayerTypeAttribute{ + AttrName: "GangliaPassword", + Type: schema.TypeString, + Required: true, + WriteOnly: true, + }, + }, + } + + return layerType.SchemaResource() +} diff --git a/builtin/providers/aws/resource_aws_opsworks_haproxy_layer.go b/builtin/providers/aws/resource_aws_opsworks_haproxy_layer.go new file mode 100644 index 000000000..2b05dce05 --- /dev/null +++ b/builtin/providers/aws/resource_aws_opsworks_haproxy_layer.go @@ -0,0 +1,48 @@ +package aws + +import ( + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsOpsworksHaproxyLayer() *schema.Resource { + layerType := &opsworksLayerType{ + TypeName: "lb", + DefaultLayerName: "HAProxy", + + Attributes: map[string]*opsworksLayerTypeAttribute{ + "stats_enabled": &opsworksLayerTypeAttribute{ + AttrName: "EnableHaproxyStats", + Type: schema.TypeBool, + Default: true, + }, + "stats_url": &opsworksLayerTypeAttribute{ + AttrName: "HaproxyStatsUrl", + Type: schema.TypeString, + Default: "/haproxy?stats", + }, + "stats_user": &opsworksLayerTypeAttribute{ + AttrName: "HaproxyStatsUser", + Type: schema.TypeString, + Default: "opsworks", + }, + "stats_password": &opsworksLayerTypeAttribute{ + AttrName: "HaproxyStatsPassword", + Type: schema.TypeString, + WriteOnly: true, + Required: true, + }, + "healthcheck_url": &opsworksLayerTypeAttribute{ + AttrName: "HaproxyHealthCheckUrl", + Type: schema.TypeString, + Default: "/", + }, + "healthcheck_method": 
&opsworksLayerTypeAttribute{ + AttrName: "HaproxyHealthCheckMethod", + Type: schema.TypeString, + Default: "OPTIONS", + }, + }, + } + + return layerType.SchemaResource() +} diff --git a/builtin/providers/aws/resource_aws_opsworks_java_app_layer.go b/builtin/providers/aws/resource_aws_opsworks_java_app_layer.go new file mode 100644 index 000000000..2b79fcfad --- /dev/null +++ b/builtin/providers/aws/resource_aws_opsworks_java_app_layer.go @@ -0,0 +1,42 @@ +package aws + +import ( + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsOpsworksJavaAppLayer() *schema.Resource { + layerType := &opsworksLayerType{ + TypeName: "java-app", + DefaultLayerName: "Java App Server", + + Attributes: map[string]*opsworksLayerTypeAttribute{ + "jvm_type": &opsworksLayerTypeAttribute{ + AttrName: "Jvm", + Type: schema.TypeString, + Default: "openjdk", + }, + "jvm_version": &opsworksLayerTypeAttribute{ + AttrName: "JvmVersion", + Type: schema.TypeString, + Default: "7", + }, + "jvm_options": &opsworksLayerTypeAttribute{ + AttrName: "JvmOptions", + Type: schema.TypeString, + Default: "", + }, + "app_server": &opsworksLayerTypeAttribute{ + AttrName: "JavaAppServer", + Type: schema.TypeString, + Default: "tomcat", + }, + "app_server_version": &opsworksLayerTypeAttribute{ + AttrName: "JavaAppServerVersion", + Type: schema.TypeString, + Default: "7", + }, + }, + } + + return layerType.SchemaResource() +} diff --git a/builtin/providers/aws/resource_aws_opsworks_memcached_layer.go b/builtin/providers/aws/resource_aws_opsworks_memcached_layer.go new file mode 100644 index 000000000..626b428bb --- /dev/null +++ b/builtin/providers/aws/resource_aws_opsworks_memcached_layer.go @@ -0,0 +1,22 @@ +package aws + +import ( + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsOpsworksMemcachedLayer() *schema.Resource { + layerType := &opsworksLayerType{ + TypeName: "memcached", + DefaultLayerName: "Memcached", + + Attributes: map[string]*opsworksLayerTypeAttribute{ + "allocated_memory": &opsworksLayerTypeAttribute{ + AttrName: "MemcachedMemory", + Type: schema.TypeInt, + Default: 512, + }, + }, + } + + return layerType.SchemaResource() +} diff --git a/builtin/providers/aws/resource_aws_opsworks_mysql_layer.go b/builtin/providers/aws/resource_aws_opsworks_mysql_layer.go new file mode 100644 index 000000000..6ab4476a3 --- /dev/null +++ b/builtin/providers/aws/resource_aws_opsworks_mysql_layer.go @@ -0,0 +1,27 @@ +package aws + +import ( + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsOpsworksMysqlLayer() *schema.Resource { + layerType := &opsworksLayerType{ + TypeName: "db-master", + DefaultLayerName: "MySQL", + + Attributes: map[string]*opsworksLayerTypeAttribute{ + "root_password": &opsworksLayerTypeAttribute{ + AttrName: "MysqlRootPassword", + Type: schema.TypeString, + WriteOnly: true, + }, + "root_password_on_all_instances": &opsworksLayerTypeAttribute{ + AttrName: "MysqlRootPasswordUbiquitous", + Type: schema.TypeBool, + Default: true, + }, + }, + } + + return layerType.SchemaResource() +} diff --git a/builtin/providers/aws/resource_aws_opsworks_nodejs_app_layer.go b/builtin/providers/aws/resource_aws_opsworks_nodejs_app_layer.go new file mode 100644 index 000000000..24f3d0f3e --- /dev/null +++ b/builtin/providers/aws/resource_aws_opsworks_nodejs_app_layer.go @@ -0,0 +1,22 @@ +package aws + +import ( + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsOpsworksNodejsAppLayer() *schema.Resource { + layerType := &opsworksLayerType{ + TypeName: 
"nodejs-app", + DefaultLayerName: "Node.js App Server", + + Attributes: map[string]*opsworksLayerTypeAttribute{ + "nodejs_version": &opsworksLayerTypeAttribute{ + AttrName: "NodejsVersion", + Type: schema.TypeString, + Default: "0.10.38", + }, + }, + } + + return layerType.SchemaResource() +} diff --git a/builtin/providers/aws/resource_aws_opsworks_php_app_layer.go b/builtin/providers/aws/resource_aws_opsworks_php_app_layer.go new file mode 100644 index 000000000..c3176af5b --- /dev/null +++ b/builtin/providers/aws/resource_aws_opsworks_php_app_layer.go @@ -0,0 +1,16 @@ +package aws + +import ( + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsOpsworksPhpAppLayer() *schema.Resource { + layerType := &opsworksLayerType{ + TypeName: "php-app", + DefaultLayerName: "PHP App Server", + + Attributes: map[string]*opsworksLayerTypeAttribute{}, + } + + return layerType.SchemaResource() +} diff --git a/builtin/providers/aws/resource_aws_opsworks_rails_app_layer.go b/builtin/providers/aws/resource_aws_opsworks_rails_app_layer.go new file mode 100644 index 000000000..54a0084dd --- /dev/null +++ b/builtin/providers/aws/resource_aws_opsworks_rails_app_layer.go @@ -0,0 +1,47 @@ +package aws + +import ( + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsOpsworksRailsAppLayer() *schema.Resource { + layerType := &opsworksLayerType{ + TypeName: "rails-app", + DefaultLayerName: "Rails App Server", + + Attributes: map[string]*opsworksLayerTypeAttribute{ + "ruby_version": &opsworksLayerTypeAttribute{ + AttrName: "RubyVersion", + Type: schema.TypeString, + Default: "2.0.0", + }, + "app_server": &opsworksLayerTypeAttribute{ + AttrName: "RailsStack", + Type: schema.TypeString, + Default: "apache_passenger", + }, + "passenger_version": &opsworksLayerTypeAttribute{ + AttrName: "PassengerVersion", + Type: schema.TypeString, + Default: "4.0.46", + }, + "rubygems_version": &opsworksLayerTypeAttribute{ + AttrName: "RubygemsVersion", + Type: schema.TypeString, + Default: "2.2.2", + }, + "manage_bundler": &opsworksLayerTypeAttribute{ + AttrName: "ManageBundler", + Type: schema.TypeBool, + Default: true, + }, + "bundler_version": &opsworksLayerTypeAttribute{ + AttrName: "BundlerVersion", + Type: schema.TypeString, + Default: "1.5.3", + }, + }, + } + + return layerType.SchemaResource() +} diff --git a/builtin/providers/aws/resource_aws_opsworks_stack.go b/builtin/providers/aws/resource_aws_opsworks_stack.go new file mode 100644 index 000000000..8eeda3f05 --- /dev/null +++ b/builtin/providers/aws/resource_aws_opsworks_stack.go @@ -0,0 +1,456 @@ +package aws + +import ( + "fmt" + "log" + "strings" + "time" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/opsworks" +) + +func resourceAwsOpsworksStack() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsOpsworksStackCreate, + Read: resourceAwsOpsworksStackRead, + Update: resourceAwsOpsworksStackUpdate, + Delete: resourceAwsOpsworksStackDelete, + + Schema: map[string]*schema.Schema{ + "id": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "region": &schema.Schema{ + Type: schema.TypeString, + ForceNew: true, + Required: true, + }, + + "service_role_arn": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "default_instance_profile_arn": 
&schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "color": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + + "configuration_manager_name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "Chef", + }, + + "configuration_manager_version": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "11.4", + }, + + "manage_berkshelf": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + + "berkshelf_version": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "3.2.0", + }, + + "custom_cookbooks_source": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "type": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "url": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "username": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + + "password": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + + "revision": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + + "ssh_key": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + }, + }, + }, + + "custom_json": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + + "default_availability_zone": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + + "default_os": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "Ubuntu 12.04 LTS", + }, + + "default_root_device_type": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "instance-store", + }, + + "default_ssh_key_name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + + "default_subnet_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + + "hostname_theme": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "Layer_Dependent", + }, + + "use_custom_cookbooks": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + + "use_opsworks_security_groups": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + + "vpc_id": &schema.Schema{ + Type: schema.TypeString, + ForceNew: true, + Optional: true, + }, + }, + } +} + +func resourceAwsOpsworksStackValidate(d *schema.ResourceData) error { + cookbooksSourceCount := d.Get("custom_cookbooks_source.#").(int) + if cookbooksSourceCount > 1 { + return fmt.Errorf("Only one custom_cookbooks_source is permitted") + } + + vpcId := d.Get("vpc_id").(string) + if vpcId != "" { + if d.Get("default_subnet_id").(string) == "" { + return fmt.Errorf("default_subnet_id must be set if vpc_id is set") + } + } else { + if d.Get("default_availability_zone").(string) == "" { + return fmt.Errorf("either vpc_id or default_availability_zone must be set") + } + } + + return nil +} + +func resourceAwsOpsworksStackCustomCookbooksSource(d *schema.ResourceData) *opsworks.Source { + count := d.Get("custom_cookbooks_source.#").(int) + if count == 0 { + return nil + } + + return &opsworks.Source{ + Type: aws.String(d.Get("custom_cookbooks_source.0.type").(string)), + Url: aws.String(d.Get("custom_cookbooks_source.0.url").(string)), + Username: aws.String(d.Get("custom_cookbooks_source.0.username").(string)), + Password: aws.String(d.Get("custom_cookbooks_source.0.password").(string)), + Revision: aws.String(d.Get("custom_cookbooks_source.0.revision").(string)), + SshKey: 
aws.String(d.Get("custom_cookbooks_source.0.ssh_key").(string)), + } +} + +func resourceAwsOpsworksSetStackCustomCookbooksSource(d *schema.ResourceData, v *opsworks.Source) { + nv := make([]interface{}, 0, 1) + if v != nil { + m := make(map[string]interface{}) + if v.Type != nil { + m["type"] = *v.Type + } + if v.Url != nil { + m["url"] = *v.Url + } + if v.Username != nil { + m["username"] = *v.Username + } + if v.Password != nil { + m["password"] = *v.Password + } + if v.Revision != nil { + m["revision"] = *v.Revision + } + if v.SshKey != nil { + m["ssh_key"] = *v.SshKey + } + nv = append(nv, m) + } + + err := d.Set("custom_cookbooks_source", nv) + if err != nil { + // should never happen + panic(err) + } +} + +func resourceAwsOpsworksStackRead(d *schema.ResourceData, meta interface{}) error { + client := meta.(*AWSClient).opsworksconn + + req := &opsworks.DescribeStacksInput{ + StackIds: []*string{ + aws.String(d.Id()), + }, + } + + log.Printf("[DEBUG] Reading OpsWorks stack: %s", d.Id()) + + resp, err := client.DescribeStacks(req) + if err != nil { + if awserr, ok := err.(awserr.Error); ok { + if awserr.Code() == "ResourceNotFoundException" { + d.SetId("") + return nil + } + } + return err + } + + stack := resp.Stacks[0] + d.Set("name", stack.Name) + d.Set("region", stack.Region) + d.Set("default_instance_profile_arn", stack.DefaultInstanceProfileArn) + d.Set("service_role_arn", stack.ServiceRoleArn) + d.Set("default_availability_zone", stack.DefaultAvailabilityZone) + d.Set("default_os", stack.DefaultOs) + d.Set("default_root_device_type", stack.DefaultRootDeviceType) + d.Set("default_ssh_key_name", stack.DefaultSshKeyName) + d.Set("default_subnet_id", stack.DefaultSubnetId) + d.Set("hostname_theme", stack.HostnameTheme) + d.Set("use_custom_cookbooks", stack.UseCustomCookbooks) + d.Set("use_opsworks_security_groups", stack.UseOpsworksSecurityGroups) + d.Set("vpc_id", stack.VpcId) + if color, ok := stack.Attributes["Color"]; ok { + d.Set("color", color) + } + if stack.ConfigurationManager != nil { + d.Set("configuration_manager_name", stack.ConfigurationManager.Name) + d.Set("configuration_manager_version", stack.ConfigurationManager.Version) + } + if stack.ChefConfiguration != nil { + d.Set("berkshelf_version", stack.ChefConfiguration.BerkshelfVersion) + d.Set("manage_berkshelf", stack.ChefConfiguration.ManageBerkshelf) + } + resourceAwsOpsworksSetStackCustomCookbooksSource(d, stack.CustomCookbooksSource) + + return nil +} + +func resourceAwsOpsworksStackCreate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*AWSClient).opsworksconn + + err := resourceAwsOpsworksStackValidate(d) + if err != nil { + return err + } + + req := &opsworks.CreateStackInput{ + DefaultInstanceProfileArn: aws.String(d.Get("default_instance_profile_arn").(string)), + Name: aws.String(d.Get("name").(string)), + Region: aws.String(d.Get("region").(string)), + ServiceRoleArn: aws.String(d.Get("service_role_arn").(string)), + } + inVpc := false + if vpcId, ok := d.GetOk("vpc_id"); ok { + req.VpcId = aws.String(vpcId.(string)) + inVpc = true + } + if defaultSubnetId, ok := d.GetOk("default_subnet_id"); ok { + req.DefaultSubnetId = aws.String(defaultSubnetId.(string)) + } + if defaultAvailabilityZone, ok := d.GetOk("default_availability_zone"); ok { + req.DefaultAvailabilityZone = aws.String(defaultAvailabilityZone.(string)) + } + + log.Printf("[DEBUG] Creating OpsWorks stack: %s", *req.Name) + + var resp *opsworks.CreateStackOutput + err = resource.Retry(20*time.Minute, func() error { + var cerr 
error + resp, cerr = client.CreateStack(req) + if cerr != nil { + if opserr, ok := cerr.(awserr.Error); ok { + // If Terraform is also managing the service IAM role, + // it may have just been created and not yet be + // propagated. + // AWS doesn't provide a machine-readable code for this + // specific error, so we're forced to do fragile message + // matching. + // The full error we're looking for looks something like + // the following: + // Service Role Arn: [...] is not yet propagated, please try again in a couple of minutes + if opserr.Code() == "ValidationException" && strings.Contains(opserr.Message(), "not yet propagated") { + log.Printf("[INFO] Waiting for service IAM role to propagate") + return cerr + } + } + return resource.RetryError{Err: cerr} + } + return nil + }) + if err != nil { + return err + } + + stackId := *resp.StackId + d.SetId(stackId) + d.Set("id", stackId) + + if inVpc { + // For VPC-based stacks, OpsWorks asynchronously creates some default + // security groups which must exist before layers can be created. + // Unfortunately it doesn't tell us what the ids of these are, so + // we can't actually check for them. Instead, we just wait a nominal + // amount of time for their creation to complete. + log.Print("[INFO] Waiting for OpsWorks built-in security groups to be created") + time.Sleep(30 * time.Second) + } + + return resourceAwsOpsworksStackUpdate(d, meta) +} + +func resourceAwsOpsworksStackUpdate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*AWSClient).opsworksconn + + err := resourceAwsOpsworksStackValidate(d) + if err != nil { + return err + } + + req := &opsworks.UpdateStackInput{ + CustomJson: aws.String(d.Get("custom_json").(string)), + DefaultInstanceProfileArn: aws.String(d.Get("default_instance_profile_arn").(string)), + DefaultRootDeviceType: aws.String(d.Get("default_root_device_type").(string)), + DefaultSshKeyName: aws.String(d.Get("default_ssh_key_name").(string)), + Name: aws.String(d.Get("name").(string)), + ServiceRoleArn: aws.String(d.Get("service_role_arn").(string)), + StackId: aws.String(d.Id()), + UseCustomCookbooks: aws.Bool(d.Get("use_custom_cookbooks").(bool)), + UseOpsworksSecurityGroups: aws.Bool(d.Get("use_opsworks_security_groups").(bool)), + Attributes: make(map[string]*string), + CustomCookbooksSource: resourceAwsOpsworksStackCustomCookbooksSource(d), + } + if v, ok := d.GetOk("default_os"); ok { + req.DefaultOs = aws.String(v.(string)) + } + if v, ok := d.GetOk("default_subnet_id"); ok { + req.DefaultSubnetId = aws.String(v.(string)) + } + if v, ok := d.GetOk("default_availability_zone"); ok { + req.DefaultAvailabilityZone = aws.String(v.(string)) + } + if v, ok := d.GetOk("hostname_theme"); ok { + req.HostnameTheme = aws.String(v.(string)) + } + if v, ok := d.GetOk("color"); ok { + req.Attributes["Color"] = aws.String(v.(string)) + } + req.ChefConfiguration = &opsworks.ChefConfiguration{ + BerkshelfVersion: aws.String(d.Get("berkshelf_version").(string)), + ManageBerkshelf: aws.Bool(d.Get("manage_berkshelf").(bool)), + } + req.ConfigurationManager = &opsworks.StackConfigurationManager{ + Name: aws.String(d.Get("configuration_manager_name").(string)), + Version: aws.String(d.Get("configuration_manager_version").(string)), + } + + log.Printf("[DEBUG] Updating OpsWorks stack: %s", d.Id()) + + _, err = client.UpdateStack(req) + if err != nil { + return err + } + + return resourceAwsOpsworksStackRead(d, meta) +} + +func resourceAwsOpsworksStackDelete(d *schema.ResourceData, meta interface{}) error { + client 
:= meta.(*AWSClient).opsworksconn + + req := &opsworks.DeleteStackInput{ + StackId: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Deleting OpsWorks stack: %s", d.Id()) + + _, err := client.DeleteStack(req) + if err != nil { + return err + } + + // For a stack in a VPC, OpsWorks has created some default security groups + // in the VPC, which it will now delete. + // Unfortunately, the security groups are deleted asynchronously and there + // is no robust way for us to determine when it is done. The VPC itself + // isn't deletable until the security groups are cleaned up, so this could + // make 'terraform destroy' fail if the VPC is also managed and we don't + // wait for the security groups to be deleted. + // There is no robust way to check for this, so we'll just wait a + // nominal amount of time. + if _, ok := d.GetOk("vpc_id"); ok { + log.Print("[INFO] Waiting for Opsworks built-in security groups to be deleted") + time.Sleep(30 * time.Second) + } + + return nil +} diff --git a/builtin/providers/aws/resource_aws_opsworks_stack_test.go b/builtin/providers/aws/resource_aws_opsworks_stack_test.go new file mode 100644 index 000000000..b740b6a20 --- /dev/null +++ b/builtin/providers/aws/resource_aws_opsworks_stack_test.go @@ -0,0 +1,353 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/iam" + "github.com/aws/aws-sdk-go/service/opsworks" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +// These tests assume the existence of predefined Opsworks IAM roles named `aws-opsworks-ec2-role` +// and `aws-opsworks-service-role`. + +/////////////////////////////// +//// Tests for the No-VPC case +/////////////////////////////// + +var testAccAwsOpsworksStackConfigNoVpcCreate = ` +resource "aws_opsworks_stack" "tf-acc" { + name = "tf-opsworks-acc" + region = "us-west-2" + service_role_arn = "%s" + default_instance_profile_arn = "%s" + default_availability_zone = "us-west-2a" + default_os = "Amazon Linux 2014.09" + default_root_device_type = "ebs" + custom_json = "{\"key\": \"value\"}" + configuration_manager_version = "11.10" + use_opsworks_security_groups = false +} +` +var testAccAWSOpsworksStackConfigNoVpcUpdate = ` +resource "aws_opsworks_stack" "tf-acc" { + name = "tf-opsworks-acc" + region = "us-west-2" + service_role_arn = "%s" + default_instance_profile_arn = "%s" + default_availability_zone = "us-west-2a" + default_os = "Amazon Linux 2014.09" + default_root_device_type = "ebs" + custom_json = "{\"key\": \"value\"}" + configuration_manager_version = "11.10" + use_opsworks_security_groups = false + use_custom_cookbooks = true + manage_berkshelf = true + custom_cookbooks_source { + type = "git" + revision = "master" + url = "https://github.com/awslabs/opsworks-example-cookbooks.git" + } +} +` + +func TestAccAwsOpsworksStackNoVpc(t *testing.T) { + opsiam := testAccAwsOpsworksStackIam{} + testAccAwsOpsworksStackPopulateIam(t, &opsiam) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsOpsworksStackDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: fmt.Sprintf(testAccAwsOpsworksStackConfigNoVpcCreate, opsiam.ServiceRoleArn, opsiam.InstanceProfileArn), + Check: testAccAwsOpsworksStackCheckResourceAttrsCreate, + }, + resource.TestStep{ + Config: fmt.Sprintf(testAccAWSOpsworksStackConfigNoVpcUpdate, opsiam.ServiceRoleArn, opsiam.InstanceProfileArn), + Check: 
testAccAwsOpsworksStackCheckResourceAttrsUpdate, + }, + }, + }) +} + +//////////////////////////// +//// Tests for the VPC case +//////////////////////////// + +var testAccAwsOpsworksStackConfigVpcCreate = ` +resource "aws_vpc" "tf-acc" { + cidr_block = "10.3.5.0/24" +} +resource "aws_subnet" "tf-acc" { + vpc_id = "${aws_vpc.tf-acc.id}" + cidr_block = "${aws_vpc.tf-acc.cidr_block}" + availability_zone = "us-west-2a" +} +resource "aws_opsworks_stack" "tf-acc" { + name = "tf-opsworks-acc" + region = "us-west-2" + vpc_id = "${aws_vpc.tf-acc.id}" + default_subnet_id = "${aws_subnet.tf-acc.id}" + service_role_arn = "%s" + default_instance_profile_arn = "%s" + default_os = "Amazon Linux 2014.09" + default_root_device_type = "ebs" + custom_json = "{\"key\": \"value\"}" + configuration_manager_version = "11.10" + use_opsworks_security_groups = false +} +` + +var testAccAWSOpsworksStackConfigVpcUpdate = ` +resource "aws_vpc" "tf-acc" { + cidr_block = "10.3.5.0/24" +} +resource "aws_subnet" "tf-acc" { + vpc_id = "${aws_vpc.tf-acc.id}" + cidr_block = "${aws_vpc.tf-acc.cidr_block}" + availability_zone = "us-west-2a" +} +resource "aws_opsworks_stack" "tf-acc" { + name = "tf-opsworks-acc" + region = "us-west-2" + vpc_id = "${aws_vpc.tf-acc.id}" + default_subnet_id = "${aws_subnet.tf-acc.id}" + service_role_arn = "%s" + default_instance_profile_arn = "%s" + default_os = "Amazon Linux 2014.09" + default_root_device_type = "ebs" + custom_json = "{\"key\": \"value\"}" + configuration_manager_version = "11.10" + use_opsworks_security_groups = false + use_custom_cookbooks = true + manage_berkshelf = true + custom_cookbooks_source { + type = "git" + revision = "master" + url = "https://github.com/awslabs/opsworks-example-cookbooks.git" + } +} +` + +func TestAccAwsOpsworksStackVpc(t *testing.T) { + opsiam := testAccAwsOpsworksStackIam{} + testAccAwsOpsworksStackPopulateIam(t, &opsiam) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsOpsworksStackDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: fmt.Sprintf(testAccAwsOpsworksStackConfigVpcCreate, opsiam.ServiceRoleArn, opsiam.InstanceProfileArn), + Check: testAccAwsOpsworksStackCheckResourceAttrsCreate, + }, + resource.TestStep{ + Config: fmt.Sprintf(testAccAWSOpsworksStackConfigVpcUpdate, opsiam.ServiceRoleArn, opsiam.InstanceProfileArn), + Check: resource.ComposeTestCheckFunc( + testAccAwsOpsworksStackCheckResourceAttrsUpdate, + testAccAwsOpsworksCheckVpc, + ), + }, + }, + }) +} + +//////////////////////////// +//// Checkers and Utilities +//////////////////////////// + +var testAccAwsOpsworksStackCheckResourceAttrsCreate = resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "name", + "tf-opsworks-acc", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "default_availability_zone", + "us-west-2a", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "default_os", + "Amazon Linux 2014.09", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "default_root_device_type", + "ebs", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "custom_json", + `{"key": "value"}`, + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "configuration_manager_version", + "11.10", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "use_opsworks_security_groups", + "false", + ), +) + +var 
testAccAwsOpsworksStackCheckResourceAttrsUpdate = resource.ComposeTestCheckFunc(
+	resource.TestCheckResourceAttr(
+		"aws_opsworks_stack.tf-acc",
+		"name",
+		"tf-opsworks-acc",
+	),
+	resource.TestCheckResourceAttr(
+		"aws_opsworks_stack.tf-acc",
+		"default_availability_zone",
+		"us-west-2a",
+	),
+	resource.TestCheckResourceAttr(
+		"aws_opsworks_stack.tf-acc",
+		"default_os",
+		"Amazon Linux 2014.09",
+	),
+	resource.TestCheckResourceAttr(
+		"aws_opsworks_stack.tf-acc",
+		"default_root_device_type",
+		"ebs",
+	),
+	resource.TestCheckResourceAttr(
+		"aws_opsworks_stack.tf-acc",
+		"custom_json",
+		`{"key": "value"}`,
+	),
+	resource.TestCheckResourceAttr(
+		"aws_opsworks_stack.tf-acc",
+		"configuration_manager_version",
+		"11.10",
+	),
+	resource.TestCheckResourceAttr(
+		"aws_opsworks_stack.tf-acc",
+		"use_opsworks_security_groups",
+		"false",
+	),
+	resource.TestCheckResourceAttr(
+		"aws_opsworks_stack.tf-acc",
+		"use_custom_cookbooks",
+		"true",
+	),
+	resource.TestCheckResourceAttr(
+		"aws_opsworks_stack.tf-acc",
+		"manage_berkshelf",
+		"true",
+	),
+	resource.TestCheckResourceAttr(
+		"aws_opsworks_stack.tf-acc",
+		"custom_cookbooks_source.0.type",
+		"git",
+	),
+	resource.TestCheckResourceAttr(
+		"aws_opsworks_stack.tf-acc",
+		"custom_cookbooks_source.0.revision",
+		"master",
+	),
+	resource.TestCheckResourceAttr(
+		"aws_opsworks_stack.tf-acc",
+		"custom_cookbooks_source.0.url",
+		"https://github.com/awslabs/opsworks-example-cookbooks.git",
+	),
+)
+
+func testAccAwsOpsworksCheckVpc(s *terraform.State) error {
+	rs, ok := s.RootModule().Resources["aws_opsworks_stack.tf-acc"]
+	if !ok {
+		return fmt.Errorf("Not found: %s", "aws_opsworks_stack.tf-acc")
+	}
+	if rs.Primary.ID == "" {
+		return fmt.Errorf("No ID is set")
+	}
+
+	p := rs.Primary
+
+	opsworksconn := testAccProvider.Meta().(*AWSClient).opsworksconn
+	describeOpts := &opsworks.DescribeStacksInput{
+		StackIds: []*string{aws.String(p.ID)},
+	}
+	resp, err := opsworksconn.DescribeStacks(describeOpts)
+	if err != nil {
+		return err
+	}
+	if len(resp.Stacks) == 0 {
+		return fmt.Errorf("Stack %s not found", p.ID)
+	}
+	if p.Attributes["vpc_id"] != *resp.Stacks[0].VpcId {
+		return fmt.Errorf("VPC ID: got %s, expected %s", *resp.Stacks[0].VpcId, p.Attributes["vpc_id"])
+	}
+	if p.Attributes["default_subnet_id"] != *resp.Stacks[0].DefaultSubnetId {
+		return fmt.Errorf("Default subnet ID: got %s, expected %s", *resp.Stacks[0].DefaultSubnetId, p.Attributes["default_subnet_id"])
+	}
+	return nil
+}
+
+func testAccCheckAwsOpsworksStackDestroy(s *terraform.State) error {
+	if len(s.RootModule().Resources) > 0 {
+		return fmt.Errorf("Expected all resources to be gone, but found: %#v", s.RootModule().Resources)
+	}
+
+	return nil
+}
+
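The destroy checker above is a blanket assertion that the root module ended up empty. A stricter variant (a sketch, interchangeable in spirit) could confirm with the API that the stack itself is gone, reusing the same ResourceNotFoundException code the Read function above already relies on:

```go
// Sketch of an API-backed destroy check; awserr is an assumed extra import
// for this test file.
func testAccCheckAwsOpsworksStackDestroyStrict(s *terraform.State) error {
	conn := testAccProvider.Meta().(*AWSClient).opsworksconn
	for _, rs := range s.RootModule().Resources {
		if rs.Type != "aws_opsworks_stack" {
			continue
		}
		_, err := conn.DescribeStacks(&opsworks.DescribeStacksInput{
			StackIds: []*string{aws.String(rs.Primary.ID)},
		})
		if err == nil {
			return fmt.Errorf("OpsWorks stack %s still exists", rs.Primary.ID)
		}
		if awsErr, ok := err.(awserr.Error); !ok || awsErr.Code() != "ResourceNotFoundException" {
			return err
		}
	}
	return nil
}
```

+// Holds the two IAM object ARNs used in stack objects we'll create.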
+type testAccAwsOpsworksStackIam struct { + ServiceRoleArn string + InstanceProfileArn string +} + +func testAccAwsOpsworksStackPopulateIam(t *testing.T, opsiam *testAccAwsOpsworksStackIam) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccInstanceConfig_pre, // noop + Check: testAccCheckAwsOpsworksEnsureIam(t, opsiam), + }, + }, + }) +} + +func testAccCheckAwsOpsworksEnsureIam(t *testing.T, opsiam *testAccAwsOpsworksStackIam) func(*terraform.State) error { + return func(_ *terraform.State) error { + iamconn := testAccProvider.Meta().(*AWSClient).iamconn + + serviceRoleOpts := &iam.GetRoleInput{ + RoleName: aws.String("aws-opsworks-service-role"), + } + respServiceRole, err := iamconn.GetRole(serviceRoleOpts) + if err != nil { + return err + } + + instanceProfileOpts := &iam.GetInstanceProfileInput{ + InstanceProfileName: aws.String("aws-opsworks-ec2-role"), + } + respInstanceProfile, err := iamconn.GetInstanceProfile(instanceProfileOpts) + if err != nil { + return err + } + + opsiam.ServiceRoleArn = *respServiceRole.Role.Arn + opsiam.InstanceProfileArn = *respInstanceProfile.InstanceProfile.Arn + + t.Logf("[DEBUG] ServiceRoleARN for OpsWorks: %s", opsiam.ServiceRoleArn) + t.Logf("[DEBUG] Instance Profile ARN for OpsWorks: %s", opsiam.InstanceProfileArn) + + return nil + + } +} diff --git a/builtin/providers/aws/resource_aws_opsworks_static_web_layer.go b/builtin/providers/aws/resource_aws_opsworks_static_web_layer.go new file mode 100644 index 000000000..df91b1b1b --- /dev/null +++ b/builtin/providers/aws/resource_aws_opsworks_static_web_layer.go @@ -0,0 +1,16 @@ +package aws + +import ( + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsOpsworksStaticWebLayer() *schema.Resource { + layerType := &opsworksLayerType{ + TypeName: "web", + DefaultLayerName: "Static Web Server", + + Attributes: map[string]*opsworksLayerTypeAttribute{}, + } + + return layerType.SchemaResource() +} diff --git a/builtin/providers/aws/resource_aws_placement_group.go b/builtin/providers/aws/resource_aws_placement_group.go new file mode 100644 index 000000000..9f0452f75 --- /dev/null +++ b/builtin/providers/aws/resource_aws_placement_group.go @@ -0,0 +1,150 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsPlacementGroup() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsPlacementGroupCreate, + Read: resourceAwsPlacementGroupRead, + Delete: resourceAwsPlacementGroupDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "strategy": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + }, + } +} + +func resourceAwsPlacementGroupCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + + name := d.Get("name").(string) + input := ec2.CreatePlacementGroupInput{ + GroupName: aws.String(name), + Strategy: aws.String(d.Get("strategy").(string)), + } + log.Printf("[DEBUG] Creating EC2 Placement group: %s", input) + _, err := conn.CreatePlacementGroup(&input) + if err != nil { + return err + } + + wait := resource.StateChangeConf{ + Pending: []string{"pending"}, + 
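// Pending lists the transitional states; WaitForState polls Refresh until
+		// it reports the Target state or the Timeout elapses.
+		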
Target:     "available",
+		Timeout:    5 * time.Minute,
+		MinTimeout: 1 * time.Second,
+		Refresh: func() (interface{}, string, error) {
+			out, err := conn.DescribePlacementGroups(&ec2.DescribePlacementGroupsInput{
+				GroupNames: []*string{aws.String(name)},
+			})
+
+			if err != nil {
+				return out, "", err
+			}
+
+			if len(out.PlacementGroups) == 0 {
+				return out, "", fmt.Errorf("Placement group not found (%q)", name)
+			}
+			pg := out.PlacementGroups[0]
+
+			return out, *pg.State, nil
+		},
+	}
+
+	_, err = wait.WaitForState()
+	if err != nil {
+		return err
+	}
+
+	log.Printf("[DEBUG] EC2 Placement group created: %q", name)
+
+	d.SetId(name)
+
+	return resourceAwsPlacementGroupRead(d, meta)
+}
+
+func resourceAwsPlacementGroupRead(d *schema.ResourceData, meta interface{}) error {
+	conn := meta.(*AWSClient).ec2conn
+	input := ec2.DescribePlacementGroupsInput{
+		GroupNames: []*string{aws.String(d.Get("name").(string))},
+	}
+	out, err := conn.DescribePlacementGroups(&input)
+	if err != nil {
+		return err
+	}
+	pg := out.PlacementGroups[0]
+
+	log.Printf("[DEBUG] Received EC2 Placement Group: %s", pg)
+
+	d.Set("name", pg.GroupName)
+	d.Set("strategy", pg.Strategy)
+
+	return nil
+}
+
+func resourceAwsPlacementGroupDelete(d *schema.ResourceData, meta interface{}) error {
+	conn := meta.(*AWSClient).ec2conn
+
+	log.Printf("[DEBUG] Deleting EC2 Placement Group %q", d.Id())
+	_, err := conn.DeletePlacementGroup(&ec2.DeletePlacementGroupInput{
+		GroupName: aws.String(d.Id()),
+	})
+	if err != nil {
+		return err
+	}
+
+	wait := resource.StateChangeConf{
+		Pending:    []string{"deleting"},
+		Target:     "deleted",
+		Timeout:    5 * time.Minute,
+		MinTimeout: 1 * time.Second,
+		Refresh: func() (interface{}, string, error) {
+			out, err := conn.DescribePlacementGroups(&ec2.DescribePlacementGroupsInput{
+				GroupNames: []*string{aws.String(d.Id())},
+			})
+
+			if err != nil {
+				// Guard the type assertion: a transport-level error is not an
+				// awserr.Error and would otherwise panic here.
+				if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "InvalidPlacementGroup.Unknown" {
+					return out, "deleted", nil
+				}
+				return out, "", err
+			}
+
+			if len(out.PlacementGroups) == 0 {
+				return out, "deleted", nil
+			}
+
+			pg := out.PlacementGroups[0]
+
+			return out, *pg.State, nil
+		},
+	}
+
+	_, err = wait.WaitForState()
+	if err != nil {
+		return err
+	}
+
+	d.SetId("")
+	return nil
+}
diff --git a/builtin/providers/aws/resource_aws_placement_group_test.go b/builtin/providers/aws/resource_aws_placement_group_test.go
new file mode 100644
index 000000000..a68e43e92
--- /dev/null
+++ b/builtin/providers/aws/resource_aws_placement_group_test.go
@@ -0,0 +1,98 @@
+package aws
+
+import (
+	"fmt"
+	"testing"
+
+	"github.com/hashicorp/terraform/helper/resource"
+	"github.com/hashicorp/terraform/terraform"
+
+	"github.com/aws/aws-sdk-go/aws"
+	"github.com/aws/aws-sdk-go/service/ec2"
+)
+
+func TestAccAWSPlacementGroup_basic(t *testing.T) {
+	resource.Test(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testAccCheckAWSPlacementGroupDestroy,
+		Steps: []resource.TestStep{
+			resource.TestStep{
+				Config: testAccAWSPlacementGroupConfig,
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckAWSPlacementGroupExists("aws_placement_group.pg"),
+				),
+			},
+		},
+	})
+}
+
+func testAccCheckAWSPlacementGroupDestroy(s *terraform.State) error {
+	conn := testAccProvider.Meta().(*AWSClient).ec2conn
+
+	for _, rs := range s.RootModule().Resources {
+		if rs.Type != "aws_placement_group" {
+			continue
+		}
+		_, err := conn.DeletePlacementGroup(&ec2.DeletePlacementGroupInput{
+			GroupName: aws.String(rs.Primary.ID),
+		})
+		
if err != nil { + return err + } + } + return nil +} + +func testAccCheckAWSPlacementGroupExists(n string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Placement Group ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).ec2conn + _, err := conn.DescribePlacementGroups(&ec2.DescribePlacementGroupsInput{ + GroupNames: []*string{aws.String(rs.Primary.ID)}, + }) + + if err != nil { + return fmt.Errorf("Placement Group error: %v", err) + } + return nil + } +} + +func testAccCheckAWSDestroyPlacementGroup(n string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Placement Group ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).ec2conn + _, err := conn.DeletePlacementGroup(&ec2.DeletePlacementGroupInput{ + GroupName: aws.String(rs.Primary.ID), + }) + + if err != nil { + return fmt.Errorf("Error destroying Placement Group (%s): %s", rs.Primary.ID, err) + } + return nil + } +} + +var testAccAWSPlacementGroupConfig = ` +resource "aws_placement_group" "pg" { + name = "tf-test-pg" + strategy = "cluster" +} +` diff --git a/builtin/providers/aws/resource_aws_rds_cluster.go b/builtin/providers/aws/resource_aws_rds_cluster.go new file mode 100644 index 000000000..57f3a27b3 --- /dev/null +++ b/builtin/providers/aws/resource_aws_rds_cluster.go @@ -0,0 +1,347 @@ +package aws + +import ( + "fmt" + "log" + "regexp" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/rds" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsRDSCluster() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsRDSClusterCreate, + Read: resourceAwsRDSClusterRead, + Update: resourceAwsRDSClusterUpdate, + Delete: resourceAwsRDSClusterDelete, + + Schema: map[string]*schema.Schema{ + + "availability_zones": &schema.Schema{ + Type: schema.TypeSet, + Elem: &schema.Schema{Type: schema.TypeString}, + Optional: true, + ForceNew: true, + Computed: true, + Set: schema.HashString, + }, + + "cluster_identifier": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validateRdsId, + }, + + "cluster_members": &schema.Schema{ + Type: schema.TypeSet, + Elem: &schema.Schema{Type: schema.TypeString}, + Optional: true, + Computed: true, + Set: schema.HashString, + }, + + "database_name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + + "db_subnet_group_name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + }, + + "endpoint": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "engine": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "final_snapshot_identifier": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ValidateFunc: func(v interface{}, k string) (ws []string, es []error) { + value := v.(string) + if !regexp.MustCompile(`^[0-9A-Za-z-]+$`).MatchString(value) { + es = append(es, fmt.Errorf( + "only alphanumeric characters and hyphens allowed in %q", k)) + } + if regexp.MustCompile(`--`).MatchString(value) { + es = append(es, fmt.Errorf("%q 
cannot contain two consecutive hyphens", k)) + } + if regexp.MustCompile(`-$`).MatchString(value) { + es = append(es, fmt.Errorf("%q cannot end in a hyphen", k)) + } + return + }, + }, + + "master_username": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "master_password": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "port": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Computed: true, + }, + + // apply_immediately is used to determine when the update modifications + // take place. + // See http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html + "apply_immediately": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Computed: true, + }, + + "vpc_security_group_ids": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + }, + } +} + +func resourceAwsRDSClusterCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).rdsconn + + createOpts := &rds.CreateDBClusterInput{ + DBClusterIdentifier: aws.String(d.Get("cluster_identifier").(string)), + Engine: aws.String("aurora"), + MasterUserPassword: aws.String(d.Get("master_password").(string)), + MasterUsername: aws.String(d.Get("master_username").(string)), + } + + if v := d.Get("database_name"); v.(string) != "" { + createOpts.DatabaseName = aws.String(v.(string)) + } + + if attr, ok := d.GetOk("port"); ok { + createOpts.Port = aws.Int64(int64(attr.(int))) + } + + if attr, ok := d.GetOk("db_subnet_group_name"); ok { + createOpts.DBSubnetGroupName = aws.String(attr.(string)) + } + + if attr := d.Get("vpc_security_group_ids").(*schema.Set); attr.Len() > 0 { + createOpts.VpcSecurityGroupIds = expandStringList(attr.List()) + } + + if attr := d.Get("availability_zones").(*schema.Set); attr.Len() > 0 { + createOpts.AvailabilityZones = expandStringList(attr.List()) + } + + log.Printf("[DEBUG] RDS Cluster create options: %s", createOpts) + resp, err := conn.CreateDBCluster(createOpts) + if err != nil { + log.Printf("[ERROR] Error creating RDS Cluster: %s", err) + return err + } + + log.Printf("[DEBUG]: Cluster create response: %s", resp) + d.SetId(*resp.DBCluster.DBClusterIdentifier) + stateConf := &resource.StateChangeConf{ + Pending: []string{"creating", "backing-up", "modifying"}, + Target: "available", + Refresh: resourceAwsRDSClusterStateRefreshFunc(d, meta), + Timeout: 5 * time.Minute, + MinTimeout: 3 * time.Second, + } + + // Wait, catching any errors + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf("[WARN] Error waiting for RDS Cluster state to be \"available\": %s", err) + } + + return resourceAwsRDSClusterRead(d, meta) +} + +func resourceAwsRDSClusterRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).rdsconn + + resp, err := conn.DescribeDBClusters(&rds.DescribeDBClustersInput{ + DBClusterIdentifier: aws.String(d.Id()), + }) + + if err != nil { + if awsErr, ok := err.(awserr.Error); ok { + if "DBClusterNotFoundFault" == awsErr.Code() { + d.SetId("") + log.Printf("[DEBUG] RDS Cluster (%s) not found", d.Id()) + return nil + } + } + log.Printf("[DEBUG] Error describing RDS Cluster (%s)", d.Id()) + return err + } + + var dbc *rds.DBCluster + for _, c := range resp.DBClusters { + if *c.DBClusterIdentifier == d.Id() { + dbc = c + } + } + + if dbc == nil { + log.Printf("[WARN] RDS Cluster (%s) not found", d.Id()) + d.SetId("") + return nil + } 
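+	// From here the API response is flattened back into state. Note that
+	// d.Set on a TypeSet attribute (availability_zones, cluster_members,
+	// vpc_security_group_ids) re-hashes the elements, so the order in which
+	// AWS returns them is irrelevant.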
+
+	if err := d.Set("availability_zones", aws.StringValueSlice(dbc.AvailabilityZones)); err != nil {
+		return fmt.Errorf("[DEBUG] Error saving AvailabilityZones to state for RDS Cluster (%s): %s", d.Id(), err)
+	}
+	d.Set("database_name", dbc.DatabaseName)
+	d.Set("db_subnet_group_name", dbc.DBSubnetGroup)
+	d.Set("endpoint", dbc.Endpoint)
+	d.Set("engine", dbc.Engine)
+	d.Set("master_username", dbc.MasterUsername)
+	d.Set("port", dbc.Port)
+
+	var vpcg []string
+	for _, g := range dbc.VpcSecurityGroups {
+		vpcg = append(vpcg, *g.VpcSecurityGroupId)
+	}
+	if err := d.Set("vpc_security_group_ids", vpcg); err != nil {
+		return fmt.Errorf("[DEBUG] Error saving VPC Security Group IDs to state for RDS Cluster (%s): %s", d.Id(), err)
+	}
+
+	var cm []string
+	for _, m := range dbc.DBClusterMembers {
+		cm = append(cm, *m.DBInstanceIdentifier)
+	}
+	if err := d.Set("cluster_members", cm); err != nil {
+		return fmt.Errorf("[DEBUG] Error saving RDS Cluster Members to state for RDS Cluster (%s): %s", d.Id(), err)
+	}
+
+	return nil
+}
+
+func resourceAwsRDSClusterUpdate(d *schema.ResourceData, meta interface{}) error {
+	conn := meta.(*AWSClient).rdsconn
+
+	req := &rds.ModifyDBClusterInput{
+		ApplyImmediately:    aws.Bool(d.Get("apply_immediately").(bool)),
+		DBClusterIdentifier: aws.String(d.Id()),
+	}
+
+	if d.HasChange("master_password") {
+		req.MasterUserPassword = aws.String(d.Get("master_password").(string))
+	}
+
+	if d.HasChange("vpc_security_group_ids") {
+		if attr := d.Get("vpc_security_group_ids").(*schema.Set); attr.Len() > 0 {
+			req.VpcSecurityGroupIds = expandStringList(attr.List())
+		} else {
+			req.VpcSecurityGroupIds = []*string{}
+		}
+	}
+
+	_, err := conn.ModifyDBCluster(req)
+	if err != nil {
+		return fmt.Errorf("[WARN] Error modifying RDS Cluster (%s): %s", d.Id(), err)
+	}
+
+	return resourceAwsRDSClusterRead(d, meta)
+}
+
+func resourceAwsRDSClusterDelete(d *schema.ResourceData, meta interface{}) error {
+	conn := meta.(*AWSClient).rdsconn
+	log.Printf("[DEBUG] Destroying RDS Cluster (%s)", d.Id())
+
+	deleteOpts := rds.DeleteDBClusterInput{
+		DBClusterIdentifier: aws.String(d.Id()),
+	}
+
+	finalSnapshot := d.Get("final_snapshot_identifier").(string)
+	if finalSnapshot == "" {
+		deleteOpts.SkipFinalSnapshot = aws.Bool(true)
+	} else {
+		deleteOpts.FinalDBSnapshotIdentifier = aws.String(finalSnapshot)
+		deleteOpts.SkipFinalSnapshot = aws.Bool(false)
+	}
+
+	log.Printf("[DEBUG] RDS Cluster delete options: %s", deleteOpts)
+	_, err := conn.DeleteDBCluster(&deleteOpts)
+	if err != nil {
+		// Don't silently drop the delete error; the wait below would only
+		// surface a timeout otherwise.
+		return fmt.Errorf("[WARN] Error requesting RDS Cluster deletion (%s): %s", d.Id(), err)
+	}
+
+	stateConf := &resource.StateChangeConf{
+		Pending:    []string{"deleting", "backing-up", "modifying"},
+		Target:     "destroyed",
+		Refresh:    resourceAwsRDSClusterStateRefreshFunc(d, meta),
+		Timeout:    5 * time.Minute,
+		MinTimeout: 3 * time.Second,
+	}
+
+	// Wait, catching any errors
+	_, err = stateConf.WaitForState()
+	if err != nil {
+		return fmt.Errorf("[WARN] Error deleting RDS Cluster (%s): %s", d.Id(), err)
+	}
+
+	return nil
+}
+
+func resourceAwsRDSClusterStateRefreshFunc(
+	d *schema.ResourceData, meta interface{}) resource.StateRefreshFunc {
+	return func() (interface{}, string, error) {
+		conn := meta.(*AWSClient).rdsconn
+
+		resp, err := conn.DescribeDBClusters(&rds.DescribeDBClustersInput{
+			DBClusterIdentifier: aws.String(d.Id()),
+		})
+
+		if err != nil {
+			if awsErr, ok := err.(awserr.Error); ok {
+				if "DBClusterNotFoundFault" == awsErr.Code() {
+					return 42, "destroyed", nil
+				}
+			}
+			log.Printf("[WARN] Error on retrieving DB Cluster (%s) when waiting: %s", d.Id(), err)
+			return nil, "", err
+		}
+
+		var
dbc *rds.DBCluster + + for _, c := range resp.DBClusters { + if *c.DBClusterIdentifier == d.Id() { + dbc = c + } + } + + if dbc == nil { + return 42, "destroyed", nil + } + + if dbc.Status != nil { + log.Printf("[DEBUG] DB Cluster status (%s): %s", d.Id(), *dbc.Status) + } + + return dbc, *dbc.Status, nil + } +} diff --git a/builtin/providers/aws/resource_aws_rds_cluster_instance.go b/builtin/providers/aws/resource_aws_rds_cluster_instance.go new file mode 100644 index 000000000..bdffd59d4 --- /dev/null +++ b/builtin/providers/aws/resource_aws_rds_cluster_instance.go @@ -0,0 +1,220 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/rds" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsRDSClusterInstance() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsRDSClusterInstanceCreate, + Read: resourceAwsRDSClusterInstanceRead, + Update: resourceAwsRDSClusterInstanceUpdate, + Delete: resourceAwsRDSClusterInstanceDelete, + + Schema: map[string]*schema.Schema{ + "identifier": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validateRdsId, + }, + + "db_subnet_group_name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + }, + + "writer": &schema.Schema{ + Type: schema.TypeBool, + Computed: true, + }, + + "cluster_identifier": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "endpoint": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "port": &schema.Schema{ + Type: schema.TypeInt, + Computed: true, + }, + + "publicly_accessible": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: false, + ForceNew: true, + }, + + "instance_class": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "tags": tagsSchema(), + }, + } +} + +func resourceAwsRDSClusterInstanceCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).rdsconn + tags := tagsFromMapRDS(d.Get("tags").(map[string]interface{})) + + createOpts := &rds.CreateDBInstanceInput{ + DBInstanceClass: aws.String(d.Get("instance_class").(string)), + DBClusterIdentifier: aws.String(d.Get("cluster_identifier").(string)), + Engine: aws.String("aurora"), + PubliclyAccessible: aws.Bool(d.Get("publicly_accessible").(bool)), + Tags: tags, + } + + if v := d.Get("identifier").(string); v != "" { + createOpts.DBInstanceIdentifier = aws.String(v) + } else { + createOpts.DBInstanceIdentifier = aws.String(resource.UniqueId()) + } + + if attr, ok := d.GetOk("db_subnet_group_name"); ok { + createOpts.DBSubnetGroupName = aws.String(attr.(string)) + } + + log.Printf("[DEBUG] Creating RDS DB Instance opts: %s", createOpts) + resp, err := conn.CreateDBInstance(createOpts) + if err != nil { + return err + } + + d.SetId(*resp.DBInstance.DBInstanceIdentifier) + + // reuse db_instance refresh func + stateConf := &resource.StateChangeConf{ + Pending: []string{"creating", "backing-up", "modifying"}, + Target: "available", + Refresh: resourceAwsDbInstanceStateRefreshFunc(d, meta), + Timeout: 40 * time.Minute, + MinTimeout: 10 * time.Second, + Delay: 10 * time.Second, + } + + // Wait, catching any errors + _, err = stateConf.WaitForState() + if err != nil { + return err + } + + return resourceAwsRDSClusterInstanceRead(d, meta) +} + +func resourceAwsRDSClusterInstanceRead(d 
*schema.ResourceData, meta interface{}) error {
+	db, err := resourceAwsDbInstanceRetrieve(d, meta)
+	if err != nil {
+		log.Printf("[WARN] Error on retrieving RDS Cluster Instance (%s): %s", d.Id(), err)
+		d.SetId("")
+		return nil
+	}
+
+	// Retrieve DB Cluster information, to determine if this Instance is a writer
+	conn := meta.(*AWSClient).rdsconn
+	resp, err := conn.DescribeDBClusters(&rds.DescribeDBClustersInput{
+		DBClusterIdentifier: db.DBClusterIdentifier,
+	})
+	if err != nil {
+		return fmt.Errorf("[WARN] Error describing RDS Cluster (%s): %s", *db.DBClusterIdentifier, err)
+	}
+
+	var dbc *rds.DBCluster
+	for _, c := range resp.DBClusters {
+		if *c.DBClusterIdentifier == *db.DBClusterIdentifier {
+			dbc = c
+		}
+	}
+
+	if dbc == nil {
+		return fmt.Errorf("[WARN] Error finding RDS Cluster (%s) for Cluster Instance (%s)",
+			*db.DBClusterIdentifier, *db.DBInstanceIdentifier)
+	}
+
+	for _, m := range dbc.DBClusterMembers {
+		if *db.DBInstanceIdentifier == *m.DBInstanceIdentifier {
+			d.Set("writer", *m.IsClusterWriter)
+		}
+	}
+
+	if db.Endpoint != nil {
+		d.Set("endpoint", db.Endpoint.Address)
+		d.Set("port", db.Endpoint.Port)
+	}
+
+	d.Set("publicly_accessible", db.PubliclyAccessible)
+
+	// Fetch and save tags
+	arn, err := buildRDSARN(d, meta)
+	if err != nil {
+		log.Printf("[DEBUG] Error building ARN for RDS Cluster Instance (%s), not setting Tags", *db.DBInstanceIdentifier)
+	} else {
+		if err := saveTagsRDS(conn, d, arn); err != nil {
+			log.Printf("[WARN] Failed to save tags for RDS Cluster Instance (%s): %s", *db.DBInstanceIdentifier, err)
+		}
+	}
+
+	return nil
+}
+
+func resourceAwsRDSClusterInstanceUpdate(d *schema.ResourceData, meta interface{}) error {
+	conn := meta.(*AWSClient).rdsconn
+
+	if arn, err := buildRDSARN(d, meta); err == nil {
+		if err := setTagsRDS(conn, d, arn); err != nil {
+			return err
+		}
+	}
+
+	return resourceAwsRDSClusterInstanceRead(d, meta)
+}
+
+func resourceAwsRDSClusterInstanceDelete(d *schema.ResourceData, meta interface{}) error {
+	conn := meta.(*AWSClient).rdsconn
+
+	log.Printf("[DEBUG] RDS Cluster Instance destroy: %v", d.Id())
+
+	opts := rds.DeleteDBInstanceInput{DBInstanceIdentifier: aws.String(d.Id())}
+
+	log.Printf("[DEBUG] RDS Cluster Instance destroy configuration: %s", opts)
+	if _, err := conn.DeleteDBInstance(&opts); err != nil {
+		return err
+	}
+
+	// re-uses db_instance refresh func
+	log.Println("[INFO] Waiting for RDS Cluster Instance to be destroyed")
+	stateConf := &resource.StateChangeConf{
+		Pending:    []string{"modifying", "deleting"},
+		Target:     "",
+		Refresh:    resourceAwsDbInstanceStateRefreshFunc(d, meta),
+		Timeout:    40 * time.Minute,
+		MinTimeout: 10 * time.Second,
+	}
+
+	if _, err := stateConf.WaitForState(); err != nil {
+		return err
+	}
+
+	return nil
+}
diff --git a/builtin/providers/aws/resource_aws_rds_cluster_instance_test.go b/builtin/providers/aws/resource_aws_rds_cluster_instance_test.go
new file mode 100644
index 000000000..046132fad
--- /dev/null
+++ b/builtin/providers/aws/resource_aws_rds_cluster_instance_test.go
@@ -0,0 +1,134 @@
+package aws
+
+import (
+	"fmt"
+	"math/rand"
+	"strings"
+	"testing"
+	"time"
+
+	"github.com/hashicorp/terraform/helper/resource"
+	"github.com/hashicorp/terraform/terraform"
+
+	"github.com/aws/aws-sdk-go/aws"
+	"github.com/aws/aws-sdk-go/aws/awserr"
+	"github.com/aws/aws-sdk-go/service/rds"
+)
+
+func TestAccAWSRDSClusterInstance_basic(t *testing.T) {
+	var v rds.DBInstance
+
+	resource.Test(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy:
testAccCheckAWSClusterInstanceDestroy,
+		Steps: []resource.TestStep{
+			resource.TestStep{
+				Config: testAccAWSClusterInstanceConfig,
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckAWSClusterInstanceExists("aws_rds_cluster_instance.cluster_instances", &v),
+					testAccCheckAWSDBClusterInstanceAttributes(&v),
+				),
+			},
+		},
+	})
+}
+
+func testAccCheckAWSClusterInstanceDestroy(s *terraform.State) error {
+	for _, rs := range s.RootModule().Resources {
+		if rs.Type != "aws_rds_cluster_instance" {
+			continue
+		}
+
+		// Try to find the DB Instance
+		conn := testAccProvider.Meta().(*AWSClient).rdsconn
+		var err error
+		resp, err := conn.DescribeDBInstances(
+			&rds.DescribeDBInstancesInput{
+				DBInstanceIdentifier: aws.String(rs.Primary.ID),
+			})
+
+		if err == nil {
+			if len(resp.DBInstances) != 0 &&
+				*resp.DBInstances[0].DBInstanceIdentifier == rs.Primary.ID {
+				return fmt.Errorf("DB Cluster Instance %s still exists", rs.Primary.ID)
+			}
+		}
+
+		// Return nil if the Cluster Instance is already destroyed
+		if awsErr, ok := err.(awserr.Error); ok {
+			if awsErr.Code() == "DBInstanceNotFound" {
+				return nil
+			}
+		}
+
+		return err
+	}
+
+	return nil
+}
+
+func testAccCheckAWSDBClusterInstanceAttributes(v *rds.DBInstance) resource.TestCheckFunc {
+	return func(s *terraform.State) error {
+
+		if *v.Engine != "aurora" {
+			return fmt.Errorf("bad engine, expected \"aurora\": %#v", *v.Engine)
+		}
+
+		if !strings.HasPrefix(*v.DBClusterIdentifier, "tf-aurora-cluster") {
+			return fmt.Errorf("Bad Cluster Identifier prefix:\nexpected: %s\ngot: %s", "tf-aurora-cluster", *v.DBClusterIdentifier)
+		}
+
+		return nil
+	}
+}
+
+func testAccCheckAWSClusterInstanceExists(n string, v *rds.DBInstance) resource.TestCheckFunc {
+	return func(s *terraform.State) error {
+		rs, ok := s.RootModule().Resources[n]
+		if !ok {
+			return fmt.Errorf("Not found: %s", n)
+		}
+
+		if rs.Primary.ID == "" {
+			return fmt.Errorf("No DB Instance ID is set")
+		}
+
+		conn := testAccProvider.Meta().(*AWSClient).rdsconn
+		resp, err := conn.DescribeDBInstances(&rds.DescribeDBInstancesInput{
+			DBInstanceIdentifier: aws.String(rs.Primary.ID),
+		})
+
+		if err != nil {
+			return err
+		}
+
+		for _, d := range resp.DBInstances {
+			if *d.DBInstanceIdentifier == rs.Primary.ID {
+				*v = *d
+				return nil
+			}
+		}
+
+		return fmt.Errorf("DB Cluster Instance (%s) not found", rs.Primary.ID)
+	}
+}
+
+// Add some randomness to the names, to avoid collisions
+var testAccAWSClusterInstanceConfig = fmt.Sprintf(`
+resource "aws_rds_cluster" "default" {
+  cluster_identifier = "tf-aurora-cluster-test-%d"
+  availability_zones = ["us-west-2a","us-west-2b","us-west-2c"]
+  database_name = "mydb"
+  master_username = "foo"
+  master_password = "mustbeeightcharacters"
+}
+
+resource "aws_rds_cluster_instance" "cluster_instances" {
+  identifier = "aurora-cluster-test-instance"
+  cluster_identifier = "${aws_rds_cluster.default.id}"
+  instance_class = "db.r3.large"
+}
+`, rand.New(rand.NewSource(time.Now().UnixNano())).Int())
diff --git a/builtin/providers/aws/resource_aws_rds_cluster_test.go b/builtin/providers/aws/resource_aws_rds_cluster_test.go
new file mode 100644
index 000000000..ffa2fa8e9
--- /dev/null
+++ b/builtin/providers/aws/resource_aws_rds_cluster_test.go
@@ -0,0 +1,108 @@
+package aws
+
+import (
+	"fmt"
+	"math/rand"
+	"testing"
+	"time"
+
+	"github.com/hashicorp/terraform/helper/resource"
+	"github.com/hashicorp/terraform/terraform"
+
+	"github.com/aws/aws-sdk-go/aws"
+	"github.com/aws/aws-sdk-go/aws/awserr"
+	"github.com/aws/aws-sdk-go/service/rds"
+)
+
+func TestAccAWSRDSCluster_basic(t
*testing.T) {
+	var v rds.DBCluster
+
+	resource.Test(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testAccCheckAWSClusterDestroy,
+		Steps: []resource.TestStep{
+			resource.TestStep{
+				Config: testAccAWSClusterConfig,
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckAWSClusterExists("aws_rds_cluster.default", &v),
+				),
+			},
+		},
+	})
+}
+
+func testAccCheckAWSClusterDestroy(s *terraform.State) error {
+	for _, rs := range s.RootModule().Resources {
+		if rs.Type != "aws_rds_cluster" {
+			continue
+		}
+
+		// Try to find the DB Cluster
+		conn := testAccProvider.Meta().(*AWSClient).rdsconn
+		var err error
+		resp, err := conn.DescribeDBClusters(
+			&rds.DescribeDBClustersInput{
+				DBClusterIdentifier: aws.String(rs.Primary.ID),
+			})
+
+		if err == nil {
+			if len(resp.DBClusters) != 0 &&
+				*resp.DBClusters[0].DBClusterIdentifier == rs.Primary.ID {
+				return fmt.Errorf("DB Cluster %s still exists", rs.Primary.ID)
+			}
+		}
+
+		// Return nil if the cluster is already destroyed
+		if awsErr, ok := err.(awserr.Error); ok {
+			if awsErr.Code() == "DBClusterNotFound" {
+				return nil
+			}
+		}
+
+		return err
+	}
+
+	return nil
+}
+
+func testAccCheckAWSClusterExists(n string, v *rds.DBCluster) resource.TestCheckFunc {
+	return func(s *terraform.State) error {
+		rs, ok := s.RootModule().Resources[n]
+		if !ok {
+			return fmt.Errorf("Not found: %s", n)
+		}
+
+		if rs.Primary.ID == "" {
+			return fmt.Errorf("No DB Cluster ID is set")
+		}
+
+		conn := testAccProvider.Meta().(*AWSClient).rdsconn
+		resp, err := conn.DescribeDBClusters(&rds.DescribeDBClustersInput{
+			DBClusterIdentifier: aws.String(rs.Primary.ID),
+		})
+
+		if err != nil {
+			return err
+		}
+
+		for _, c := range resp.DBClusters {
+			if *c.DBClusterIdentifier == rs.Primary.ID {
+				*v = *c
+				return nil
+			}
+		}
+
+		return fmt.Errorf("DB Cluster (%s) not found", rs.Primary.ID)
+	}
+}
+
+// Add some randomness to the name, to avoid collisions
+var testAccAWSClusterConfig = fmt.Sprintf(`
+resource "aws_rds_cluster" "default" {
+  cluster_identifier = "tf-aurora-cluster-%d"
+  availability_zones = ["us-west-2a","us-west-2b","us-west-2c"]
+  database_name = "mydb"
+  master_username = "foo"
+  master_password = "mustbeeightcharacters"
+}`, rand.New(rand.NewSource(time.Now().UnixNano())).Int())
diff --git a/builtin/providers/aws/resource_aws_route_table_test.go b/builtin/providers/aws/resource_aws_route_table_test.go
index 6eb8951fd..17fd4087e 100644
--- a/builtin/providers/aws/resource_aws_route_table_test.go
+++ b/builtin/providers/aws/resource_aws_route_table_test.go
@@ -2,6 +2,7 @@ package aws
 
 import (
 	"fmt"
+	"os"
 	"testing"
 
 	"github.com/aws/aws-sdk-go/aws"
@@ -212,12 +213,16 @@ func testAccCheckRouteTableExists(n string, v *ec2.RouteTable) resource.TestChec
 	}
 }
 
-// TODO: re-enable this test.
// VPC Peering connections are prefixed with pcx // Right now there is no VPC Peering resource -func _TestAccAWSRouteTable_vpcPeering(t *testing.T) { +func TestAccAWSRouteTable_vpcPeering(t *testing.T) { var v ec2.RouteTable + acctId := os.Getenv("TF_ACC_ID") + if acctId == "" && os.Getenv(resource.TestEnvVar) != "" { + t.Fatal("Error: Test TestAccAWSRouteTable_vpcPeering requires an Account ID in TF_ACC_ID ") + } + testCheck := func(*terraform.State) error { if len(v.Routes) != 2 { return fmt.Errorf("bad routes: %#v", v.Routes) @@ -243,7 +248,7 @@ func _TestAccAWSRouteTable_vpcPeering(t *testing.T) { CheckDestroy: testAccCheckRouteTableDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccRouteTableVpcPeeringConfig, + Config: testAccRouteTableVpcPeeringConfig(acctId), Check: resource.ComposeTestCheckFunc( testAccCheckRouteTableExists( "aws_route_table.foo", &v), @@ -395,11 +400,10 @@ resource "aws_route_table" "foo" { } ` -// TODO: re-enable this test. // VPC Peering connections are prefixed with pcx -// Right now there is no VPC Peering resource -const testAccRouteTableVpcPeeringConfig = ` -resource "aws_vpc" "foo" { +// This test requires an ENV var, TF_ACC_ID, with a valid AWS Account ID +func testAccRouteTableVpcPeeringConfig(acc string) string { + cfg := `resource "aws_vpc" "foo" { cidr_block = "10.1.0.0/16" } @@ -407,15 +411,34 @@ resource "aws_internet_gateway" "foo" { vpc_id = "${aws_vpc.foo.id}" } +resource "aws_vpc" "bar" { + cidr_block = "10.3.0.0/16" +} + +resource "aws_internet_gateway" "bar" { + vpc_id = "${aws_vpc.bar.id}" +} + +resource "aws_vpc_peering_connection" "foo" { + vpc_id = "${aws_vpc.foo.id}" + peer_vpc_id = "${aws_vpc.bar.id}" + peer_owner_id = "%s" + tags { + foo = "bar" + } +} + resource "aws_route_table" "foo" { vpc_id = "${aws_vpc.foo.id}" route { cidr_block = "10.2.0.0/16" - vpc_peering_connection_id = "pcx-12345" + vpc_peering_connection_id = "${aws_vpc_peering_connection.foo.id}" } } ` + return fmt.Sprintf(cfg, acc) +} const testAccRouteTableVgwRoutePropagationConfig = ` resource "aws_vpc" "foo" { diff --git a/builtin/providers/aws/resource_aws_s3_bucket.go b/builtin/providers/aws/resource_aws_s3_bucket.go index a329d4ff6..682999656 100644 --- a/builtin/providers/aws/resource_aws_s3_bucket.go +++ b/builtin/providers/aws/resource_aws_s3_bucket.go @@ -41,6 +41,39 @@ func resourceAwsS3Bucket() *schema.Resource { StateFunc: normalizeJson, }, + "cors_rule": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "allowed_headers": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "allowed_methods": &schema.Schema{ + Type: schema.TypeList, + Required: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "allowed_origins": &schema.Schema{ + Type: schema.TypeList, + Required: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "expose_headers": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "max_age_seconds": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + }, + }, + }, + }, + "website": &schema.Schema{ Type: schema.TypeList, Optional: true, @@ -168,6 +201,12 @@ func resourceAwsS3BucketUpdate(d *schema.ResourceData, meta interface{}) error { } } + if d.HasChange("cors_rule") { + if err := resourceAwsS3BucketCorsUpdate(s3conn, d); err != nil { + return err + } + } + if d.HasChange("website") { if err := 
resourceAwsS3BucketWebsiteUpdate(s3conn, d); err != nil {
 			return err
@@ -221,6 +260,27 @@ func resourceAwsS3BucketRead(d *schema.ResourceData, meta interface{}) error {
 		}
 	}
 
+	// Read the CORS
+	cors, err := s3conn.GetBucketCors(&s3.GetBucketCorsInput{
+		Bucket: aws.String(d.Id()),
+	})
+	log.Printf("[DEBUG] S3 bucket: %s, read CORS: %v", d.Id(), cors)
+	// GetBucketCors returns an error when the bucket has no CORS configuration,
+	// so the rules are only flattened and saved on success
+	if err == nil {
+		rules := make([]map[string]interface{}, 0, len(cors.CORSRules))
+		for _, ruleObject := range cors.CORSRules {
+			rule := make(map[string]interface{})
+			rule["allowed_headers"] = flattenStringList(ruleObject.AllowedHeaders)
+			rule["allowed_methods"] = flattenStringList(ruleObject.AllowedMethods)
+			rule["allowed_origins"] = flattenStringList(ruleObject.AllowedOrigins)
+			rule["expose_headers"] = flattenStringList(ruleObject.ExposeHeaders)
+			if ruleObject.MaxAgeSeconds != nil {
+				rule["max_age_seconds"] = int(*ruleObject.MaxAgeSeconds)
+			}
+			rules = append(rules, rule)
+		}
+		if err := d.Set("cors_rule", rules); err != nil {
+			return fmt.Errorf("error reading S3 bucket \"%s\" CORS rules: %s", d.Id(), err)
+		}
+	}
+
 	// Read the website configuration
 	ws, err := s3conn.GetBucketWebsite(&s3.GetBucketWebsiteInput{
 		Bucket: aws.String(d.Id()),
@@ -400,6 +460,65 @@ func resourceAwsS3BucketPolicyUpdate(s3conn *s3.S3, d *schema.ResourceData) erro
 	return nil
 }
 
+func resourceAwsS3BucketCorsUpdate(s3conn *s3.S3, d *schema.ResourceData) error {
+	bucket := d.Get("bucket").(string)
+	rawCors := d.Get("cors_rule").([]interface{})
+
+	if len(rawCors) == 0 {
+		// Delete CORS
+		log.Printf("[DEBUG] S3 bucket: %s, delete CORS", bucket)
+		_, err := s3conn.DeleteBucketCors(&s3.DeleteBucketCorsInput{
+			Bucket: aws.String(bucket),
+		})
+		if err != nil {
+			return fmt.Errorf("Error deleting S3 CORS: %s", err)
+		}
+	} else {
+		// Put CORS
+		rules := make([]*s3.CORSRule, 0, len(rawCors))
+		for _, cors := range rawCors {
+			corsMap := cors.(map[string]interface{})
+			r := &s3.CORSRule{}
+			for k, v := range corsMap {
+				log.Printf("[DEBUG] S3 bucket: %s, put CORS: %#v, %#v", bucket, k, v)
+				if k == "max_age_seconds" {
+					r.MaxAgeSeconds = aws.Int64(int64(v.(int)))
+				} else {
+					vMap := make([]*string, len(v.([]interface{})))
+					for i, vv := range v.([]interface{}) {
+						str := vv.(string)
+						vMap[i] = aws.String(str)
+					}
+					switch k {
+					case "allowed_headers":
+						r.AllowedHeaders = vMap
+					case "allowed_methods":
+						r.AllowedMethods = vMap
+					case "allowed_origins":
+						r.AllowedOrigins = vMap
+					case "expose_headers":
+						r.ExposeHeaders = vMap
+					}
+				}
+			}
+			rules = append(rules, r)
+		}
+		corsInput := &s3.PutBucketCorsInput{
+			Bucket: aws.String(bucket),
+			CORSConfiguration: &s3.CORSConfiguration{
+				CORSRules: rules,
+			},
+		}
+		log.Printf("[DEBUG] S3 bucket: %s, put CORS: %#v", bucket, corsInput)
+		_, err := s3conn.PutBucketCors(corsInput)
+		if err != nil {
+			return fmt.Errorf("Error putting S3 CORS: %s", err)
+		}
+	}
+
+	return nil
+}
+
 func resourceAwsS3BucketWebsiteUpdate(s3conn *s3.S3, d *schema.ResourceData) error {
 	ws := d.Get("website").([]interface{})
 
@@ -464,6 +583,9 @@ func resourceAwsS3BucketWebsiteDelete(s3conn *s3.S3, d *schema.ResourceData) err
 		return fmt.Errorf("Error deleting S3 website: %s", err)
 	}
 
+	d.Set("website_endpoint", "")
+	d.Set("website_domain", "")
+
 	return nil
 }
 
diff --git a/builtin/providers/aws/resource_aws_s3_bucket_object.go b/builtin/providers/aws/resource_aws_s3_bucket_object.go
index 9d46952d0..b1c399dd1 100644
--- a/builtin/providers/aws/resource_aws_s3_bucket_object.go
+++ b/builtin/providers/aws/resource_aws_s3_bucket_object.go
@@ -1,7 +1,9 @@
 package aws
 
 import (
+	"bytes"
 	"fmt"
+	"io"
 	"log"
 	"os"
 
@@ -16,7 +18,6 @@ func resourceAwsS3BucketObject()
*schema.Resource { return &schema.Resource{ Create: resourceAwsS3BucketObjectPut, Read: resourceAwsS3BucketObjectRead, - Update: resourceAwsS3BucketObjectPut, Delete: resourceAwsS3BucketObjectDelete, Schema: map[string]*schema.Schema{ @@ -26,6 +27,37 @@ func resourceAwsS3BucketObject() *schema.Resource { ForceNew: true, }, + "cache_control": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "content_disposition": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "content_encoding": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "content_language": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "content_type": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + }, + "key": &schema.Schema{ Type: schema.TypeString, Required: true, @@ -33,9 +65,17 @@ func resourceAwsS3BucketObject() *schema.Resource { }, "source": &schema.Schema{ - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ConflictsWith: []string{"content"}, + }, + + "content": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ConflictsWith: []string{"source"}, }, "etag": &schema.Schema{ @@ -51,21 +91,50 @@ func resourceAwsS3BucketObjectPut(d *schema.ResourceData, meta interface{}) erro bucket := d.Get("bucket").(string) key := d.Get("key").(string) - source := d.Get("source").(string) + var body io.ReadSeeker - file, err := os.Open(source) + if v, ok := d.GetOk("source"); ok { + source := v.(string) + file, err := os.Open(source) + if err != nil { + return fmt.Errorf("Error opening S3 bucket object source (%s): %s", source, err) + } - if err != nil { - return fmt.Errorf("Error opening S3 bucket object source (%s): %s", source, err) + body = file + } else if v, ok := d.GetOk("content"); ok { + content := v.(string) + body = bytes.NewReader([]byte(content)) + } else { + + return fmt.Errorf("Must specify \"source\" or \"content\" field") + } + putInput := &s3.PutObjectInput{ + Bucket: aws.String(bucket), + Key: aws.String(key), + Body: body, } - resp, err := s3conn.PutObject( - &s3.PutObjectInput{ - Bucket: aws.String(bucket), - Key: aws.String(key), - Body: file, - }) + if v, ok := d.GetOk("cache_control"); ok { + putInput.CacheControl = aws.String(v.(string)) + } + if v, ok := d.GetOk("content_type"); ok { + putInput.ContentType = aws.String(v.(string)) + } + + if v, ok := d.GetOk("content_encoding"); ok { + putInput.ContentEncoding = aws.String(v.(string)) + } + + if v, ok := d.GetOk("content_language"); ok { + putInput.ContentLanguage = aws.String(v.(string)) + } + + if v, ok := d.GetOk("content_disposition"); ok { + putInput.ContentDisposition = aws.String(v.(string)) + } + + resp, err := s3conn.PutObject(putInput) if err != nil { return fmt.Errorf("Error putting object in S3 bucket (%s): %s", bucket, err) } @@ -99,6 +168,12 @@ func resourceAwsS3BucketObjectRead(d *schema.ResourceData, meta interface{}) err return err } + d.Set("cache_control", resp.CacheControl) + d.Set("content_disposition", resp.ContentDisposition) + d.Set("content_encoding", resp.ContentEncoding) + d.Set("content_language", resp.ContentLanguage) + d.Set("content_type", resp.ContentType) + log.Printf("[DEBUG] Reading S3 Bucket Object meta: %s", resp) return nil } diff --git a/builtin/providers/aws/resource_aws_s3_bucket_object_test.go 
b/builtin/providers/aws/resource_aws_s3_bucket_object_test.go index 4f947736a..ea28f9d37 100644 --- a/builtin/providers/aws/resource_aws_s3_bucket_object_test.go +++ b/builtin/providers/aws/resource_aws_s3_bucket_object_test.go @@ -15,7 +15,7 @@ import ( var tf, err = ioutil.TempFile("", "tf") -func TestAccAWSS3BucketObject_basic(t *testing.T) { +func TestAccAWSS3BucketObject_source(t *testing.T) { // first write some data to the tempfile just so it's not 0 bytes. ioutil.WriteFile(tf.Name(), []byte("{anything will do }"), 0644) resource.Test(t, resource.TestCase{ @@ -29,13 +29,57 @@ func TestAccAWSS3BucketObject_basic(t *testing.T) { CheckDestroy: testAccCheckAWSS3BucketObjectDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccAWSS3BucketObjectConfig, + Config: testAccAWSS3BucketObjectConfigSource, Check: testAccCheckAWSS3BucketObjectExists("aws_s3_bucket_object.object"), }, }, }) } +func TestAccAWSS3BucketObject_content(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { + if err != nil { + panic(err) + } + testAccPreCheck(t) + }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSS3BucketObjectDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSS3BucketObjectConfigContent, + Check: testAccCheckAWSS3BucketObjectExists("aws_s3_bucket_object.object"), + }, + }, + }) +} + +func TestAccAWSS3BucketObject_withContentCharacteristics(t *testing.T) { + // first write some data to the tempfile just so it's not 0 bytes. + ioutil.WriteFile(tf.Name(), []byte("{anything will do }"), 0644) + resource.Test(t, resource.TestCase{ + PreCheck: func() { + if err != nil { + panic(err) + } + testAccPreCheck(t) + }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSS3BucketObjectDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSS3BucketObjectConfig_withContentCharacteristics, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSS3BucketObjectExists("aws_s3_bucket_object.object"), + resource.TestCheckResourceAttr( + "aws_s3_bucket_object.object", "content_type", "binary/octet-stream"), + ), + }, + }, + }) +} + func testAccCheckAWSS3BucketObjectDestroy(s *terraform.State) error { s3conn := testAccProvider.Meta().(*AWSClient).s3conn @@ -86,14 +130,39 @@ func testAccCheckAWSS3BucketObjectExists(n string) resource.TestCheckFunc { } var randomBucket = randInt -var testAccAWSS3BucketObjectConfig = fmt.Sprintf(` +var testAccAWSS3BucketObjectConfigSource = fmt.Sprintf(` resource "aws_s3_bucket" "object_bucket" { - bucket = "tf-object-test-bucket-%d" + bucket = "tf-object-test-bucket-%d" } - resource "aws_s3_bucket_object" "object" { bucket = "${aws_s3_bucket.object_bucket.bucket}" key = "test-key" source = "%s" + content_type = "binary/octet-stream" } `, randomBucket, tf.Name()) + +var testAccAWSS3BucketObjectConfig_withContentCharacteristics = fmt.Sprintf(` +resource "aws_s3_bucket" "object_bucket_2" { + bucket = "tf-object-test-bucket-%d" +} + +resource "aws_s3_bucket_object" "object" { + bucket = "${aws_s3_bucket.object_bucket_2.bucket}" + key = "test-key" + source = "%s" + content_language = "en" + content_type = "binary/octet-stream" +} +`, randomBucket, tf.Name()) + +var testAccAWSS3BucketObjectConfigContent = fmt.Sprintf(` +resource "aws_s3_bucket" "object_bucket" { + bucket = "tf-object-test-bucket-%d" +} +resource "aws_s3_bucket_object" "object" { + bucket = "${aws_s3_bucket.object_bucket.bucket}" + key = "test-key" + content = "some_bucket_content" +} +`, randomBucket) diff --git 
a/builtin/providers/aws/resource_aws_s3_bucket_test.go b/builtin/providers/aws/resource_aws_s3_bucket_test.go index e494816b3..862a0620f 100644 --- a/builtin/providers/aws/resource_aws_s3_bucket_test.go +++ b/builtin/providers/aws/resource_aws_s3_bucket_test.go @@ -64,7 +64,7 @@ func TestAccAWSS3Bucket_Policy(t *testing.T) { }) } -func TestAccAWSS3Bucket_Website(t *testing.T) { +func TestAccAWSS3Bucket_Website_Simple(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, @@ -188,6 +188,34 @@ func TestAccAWSS3Bucket_Versioning(t *testing.T) { }) } +func TestAccAWSS3Bucket_Cors(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSS3BucketDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSS3BucketConfigWithCORS, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSS3BucketExists("aws_s3_bucket.bucket"), + testAccCheckAWSS3BucketCors( + "aws_s3_bucket.bucket", + []*s3.CORSRule{ + &s3.CORSRule{ + AllowedHeaders: []*string{aws.String("*")}, + AllowedMethods: []*string{aws.String("PUT"), aws.String("POST")}, + AllowedOrigins: []*string{aws.String("https://www.example.com")}, + ExposeHeaders: []*string{aws.String("x-amz-server-side-encryption"), aws.String("ETag")}, + MaxAgeSeconds: aws.Int64(3000), + }, + }, + ), + ), + }, + }, + }) +} + func testAccCheckAWSS3BucketDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).s3conn @@ -370,6 +398,26 @@ func testAccCheckAWSS3BucketVersioning(n string, versioningStatus string) resour return nil } } +func testAccCheckAWSS3BucketCors(n string, corsRules []*s3.CORSRule) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, _ := s.RootModule().Resources[n] + conn := testAccProvider.Meta().(*AWSClient).s3conn + + out, err := conn.GetBucketCors(&s3.GetBucketCorsInput{ + Bucket: aws.String(rs.Primary.ID), + }) + + if err != nil { + return fmt.Errorf("GetBucketCors error: %v", err) + } + + if !reflect.DeepEqual(out.CORSRules, corsRules) { + return fmt.Errorf("bad error cors rule, expected: %v, got %v", corsRules, out.CORSRules) + } + + return nil + } +} // These need a bit of randomness as the name can only be used once globally // within AWS @@ -452,3 +500,17 @@ resource "aws_s3_bucket" "bucket" { } } `, randInt) + +var testAccAWSS3BucketConfigWithCORS = fmt.Sprintf(` +resource "aws_s3_bucket" "bucket" { + bucket = "tf-test-bucket-%d" + acl = "public-read" + cors_rule { + allowed_headers = ["*"] + allowed_methods = ["PUT","POST"] + allowed_origins = ["https://www.example.com"] + expose_headers = ["x-amz-server-side-encryption","ETag"] + max_age_seconds = 3000 + } +} +`, randInt) diff --git a/builtin/providers/aws/resource_aws_security_group_rule.go b/builtin/providers/aws/resource_aws_security_group_rule.go index 97b6d4025..55499cfd5 100644 --- a/builtin/providers/aws/resource_aws_security_group_rule.go +++ b/builtin/providers/aws/resource_aws_security_group_rule.go @@ -20,7 +20,7 @@ func resourceAwsSecurityGroupRule() *schema.Resource { Read: resourceAwsSecurityGroupRuleRead, Delete: resourceAwsSecurityGroupRuleDelete, - SchemaVersion: 1, + SchemaVersion: 2, MigrateState: resourceAwsSecurityGroupRuleMigrateState, Schema: map[string]*schema.Schema{ @@ -67,14 +67,15 @@ func resourceAwsSecurityGroupRule() *schema.Resource { Optional: true, ForceNew: true, Computed: true, - ConflictsWith: []string{"cidr_blocks"}, + 
ConflictsWith: []string{"cidr_blocks", "self"}, }, "self": &schema.Schema{ - Type: schema.TypeBool, - Optional: true, - Default: false, - ForceNew: true, + Type: schema.TypeBool, + Optional: true, + Default: false, + ForceNew: true, + ConflictsWith: []string{"cidr_blocks"}, }, }, } @@ -142,7 +143,7 @@ information and instructions for recovery. Error message: %s`, awsErr.Message()) ruleType, autherr) } - d.SetId(ipPermissionIDHash(ruleType, perm)) + d.SetId(ipPermissionIDHash(sg_id, ruleType, perm)) return resourceAwsSecurityGroupRuleRead(d, meta) } @@ -158,24 +159,69 @@ func resourceAwsSecurityGroupRuleRead(d *schema.ResourceData, meta interface{}) } var rule *ec2.IpPermission + var rules []*ec2.IpPermission ruleType := d.Get("type").(string) - var rl []*ec2.IpPermission switch ruleType { case "ingress": - rl = sg.IpPermissions + rules = sg.IpPermissions default: - rl = sg.IpPermissionsEgress + rules = sg.IpPermissionsEgress } - for _, r := range rl { - if d.Id() == ipPermissionIDHash(ruleType, r) { - rule = r + p := expandIPPerm(d, sg) + + if len(rules) == 0 { + return fmt.Errorf( + "[WARN] No %s rules were found for Security Group (%s) looking for Security Group Rule (%s)", + ruleType, *sg.GroupName, d.Id()) + } + + for _, r := range rules { + if r.ToPort != nil && *p.ToPort != *r.ToPort { + continue } + + if r.FromPort != nil && *p.FromPort != *r.FromPort { + continue + } + + if r.IpProtocol != nil && *p.IpProtocol != *r.IpProtocol { + continue + } + + remaining := len(p.IpRanges) + for _, ip := range p.IpRanges { + for _, rip := range r.IpRanges { + if *ip.CidrIp == *rip.CidrIp { + remaining-- + } + } + } + + if remaining > 0 { + continue + } + + remaining = len(p.UserIdGroupPairs) + for _, ip := range p.UserIdGroupPairs { + for _, rip := range r.UserIdGroupPairs { + if *ip.GroupId == *rip.GroupId { + remaining-- + } + } + } + + if remaining > 0 { + continue + } + + log.Printf("[DEBUG] Found rule for Security Group Rule (%s): %s", d.Id(), r) + rule = r } if rule == nil { - log.Printf("[DEBUG] Unable to find matching %s Security Group Rule for Group %s", - ruleType, sg_id) + log.Printf("[DEBUG] Unable to find matching %s Security Group Rule (%s) for Group %s", + ruleType, d.Id(), sg_id) d.SetId("") return nil } @@ -186,14 +232,14 @@ func resourceAwsSecurityGroupRuleRead(d *schema.ResourceData, meta interface{}) d.Set("type", ruleType) var cb []string - for _, c := range rule.IpRanges { + for _, c := range p.IpRanges { cb = append(cb, *c.CidrIp) } d.Set("cidr_blocks", cb) - if len(rule.UserIdGroupPairs) > 0 { - s := rule.UserIdGroupPairs[0] + if len(p.UserIdGroupPairs) > 0 { + s := p.UserIdGroupPairs[0] d.Set("source_security_group_id", *s.GroupId) } @@ -285,8 +331,9 @@ func (b ByGroupPair) Less(i, j int) bool { panic("mismatched security group rules, may be a terraform bug") } -func ipPermissionIDHash(ruleType string, ip *ec2.IpPermission) string { +func ipPermissionIDHash(sg_id, ruleType string, ip *ec2.IpPermission) string { var buf bytes.Buffer + buf.WriteString(fmt.Sprintf("%s-", sg_id)) if ip.FromPort != nil && *ip.FromPort > 0 { buf.WriteString(fmt.Sprintf("%d-", *ip.FromPort)) } @@ -326,7 +373,7 @@ func ipPermissionIDHash(ruleType string, ip *ec2.IpPermission) string { } } - return fmt.Sprintf("sg-%d", hashcode.String(buf.String())) + return fmt.Sprintf("sgrule-%d", hashcode.String(buf.String())) } func expandIPPerm(d *schema.ResourceData, sg *ec2.SecurityGroup) *ec2.IpPermission { diff --git a/builtin/providers/aws/resource_aws_security_group_rule_migrate.go 
b/builtin/providers/aws/resource_aws_security_group_rule_migrate.go
index 98ecced70..0b57f3f17 100644
--- a/builtin/providers/aws/resource_aws_security_group_rule_migrate.go
+++ b/builtin/providers/aws/resource_aws_security_group_rule_migrate.go
@@ -17,6 +17,12 @@ func resourceAwsSecurityGroupRuleMigrateState(
 	case 0:
 		log.Println("[INFO] Found AWS Security Group State v0; migrating to v1")
 		return migrateSGRuleStateV0toV1(is)
+	case 1:
+		log.Println("[INFO] Found AWS Security Group State v1; migrating to v2")
+		// migrating from v1 to v2 is the same operation as 0->1: recompute the
+		// rule ID with ipPermissionIDHash, which now includes the security
+		// group id in the hash
+		return migrateSGRuleStateV0toV1(is)
 	default:
 		return is, fmt.Errorf("Unexpected schema version: %d", v)
 	}
@@ -37,7 +43,7 @@ func migrateSGRuleStateV0toV1(is *terraform.InstanceState) (*terraform.InstanceS
 	}
 
 	log.Printf("[DEBUG] Attributes before migration: %#v", is.Attributes)
-	newID := ipPermissionIDHash(is.Attributes["type"], perm)
+	newID := ipPermissionIDHash(is.Attributes["security_group_id"], is.Attributes["type"], perm)
 	is.Attributes["id"] = newID
 	is.ID = newID
 	log.Printf("[DEBUG] Attributes after migration: %#v, new id: %s", is.Attributes, newID)
diff --git a/builtin/providers/aws/resource_aws_security_group_rule_migrate_test.go b/builtin/providers/aws/resource_aws_security_group_rule_migrate_test.go
index 664f05039..496834b8c 100644
--- a/builtin/providers/aws/resource_aws_security_group_rule_migrate_test.go
+++ b/builtin/providers/aws/resource_aws_security_group_rule_migrate_test.go
@@ -27,7 +27,7 @@ func TestAWSSecurityGroupRuleMigrateState(t *testing.T) {
 				"from_port":                "0",
 				"source_security_group_id": "sg-11877275",
 			},
-			Expected: "sg-3766347571",
+			Expected: "sgrule-2889201120",
 		},
 		"v0_2": {
 			StateVersion: 0,
@@ -44,7 +44,7 @@ func TestAWSSecurityGroupRuleMigrateState(t *testing.T) {
 				"cidr_blocks.2": "172.16.3.0/24",
 				"cidr_blocks.3": "172.16.4.0/24",
 				"cidr_blocks.#": "4"},
-			Expected: "sg-4100229787",
+			Expected: "sgrule-1826358977",
 		},
 	}
diff --git a/builtin/providers/aws/resource_aws_security_group_rule_test.go b/builtin/providers/aws/resource_aws_security_group_rule_test.go
index c160703f3..f06dd3e13 100644
--- a/builtin/providers/aws/resource_aws_security_group_rule_test.go
+++ b/builtin/providers/aws/resource_aws_security_group_rule_test.go
@@ -2,7 +2,7 @@ package aws
 
 import (
 	"fmt"
-	"reflect"
+	"log"
 	"testing"
 
 	"github.com/aws/aws-sdk-go/aws"
@@ -90,15 +90,15 @@ func TestIpPermissionIDHash(t *testing.T) {
 		Type   string
 		Output string
 	}{
-		{simple, "ingress", "sg-82613597"},
-		{egress, "egress", "sg-363054720"},
-		{egress_all, "egress", "sg-2766285362"},
-		{vpc_security_group_source, "egress", "sg-2661404947"},
-		{security_group_source, "egress", "sg-1841245863"},
+		{simple, "ingress", "sgrule-3403497314"},
+		{egress, "egress", "sgrule-1173186295"},
+		{egress_all, "egress", "sgrule-766323498"},
+		{vpc_security_group_source, "egress", "sgrule-351225364"},
+		{security_group_source, "egress", "sgrule-2198807188"},
 	}
 
 	for _, tc := range cases {
-		actual := ipPermissionIDHash(tc.Type, tc.Input)
+		actual := ipPermissionIDHash("sg-12345", tc.Type, tc.Input)
 		if actual != tc.Output {
 			t.Errorf("input: %s - %s\noutput: %s", tc.Type, tc.Input, actual)
 		}
@@ -132,7 +132,7 @@ func TestAccAWSSecurityGroupRule_Ingress_VPC(t *testing.T) {
 				Config: testAccAWSSecurityGroupRuleIngressConfig,
 				Check: resource.ComposeTestCheckFunc(
 					testAccCheckAWSSecurityGroupRuleExists("aws_security_group.web", &group),
-
testAccCheckAWSSecurityGroupRuleAttributes(&group, "ingress"), + testAccCheckAWSSecurityGroupRuleAttributes("aws_security_group_rule.ingress_1", &group, nil, "ingress"), resource.TestCheckResourceAttr( "aws_security_group_rule.ingress_1", "from_port", "80"), testRuleCount, @@ -169,7 +169,7 @@ func TestAccAWSSecurityGroupRule_Ingress_Classic(t *testing.T) { Config: testAccAWSSecurityGroupRuleIngressClassicConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSSecurityGroupRuleExists("aws_security_group.web", &group), - testAccCheckAWSSecurityGroupRuleAttributes(&group, "ingress"), + testAccCheckAWSSecurityGroupRuleAttributes("aws_security_group_rule.ingress_1", &group, nil, "ingress"), resource.TestCheckResourceAttr( "aws_security_group_rule.ingress_1", "from_port", "80"), testRuleCount, @@ -231,7 +231,7 @@ func TestAccAWSSecurityGroupRule_Egress(t *testing.T) { Config: testAccAWSSecurityGroupRuleEgressConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSSecurityGroupRuleExists("aws_security_group.web", &group), - testAccCheckAWSSecurityGroupRuleAttributes(&group, "egress"), + testAccCheckAWSSecurityGroupRuleAttributes("aws_security_group_rule.egress_1", &group, nil, "egress"), ), }, }, @@ -256,6 +256,92 @@ func TestAccAWSSecurityGroupRule_SelfReference(t *testing.T) { }) } +// testing partial match implementation +func TestAccAWSSecurityGroupRule_PartialMatching_basic(t *testing.T) { + var group ec2.SecurityGroup + + p := ec2.IpPermission{ + FromPort: aws.Int64(80), + ToPort: aws.Int64(80), + IpProtocol: aws.String("tcp"), + IpRanges: []*ec2.IpRange{ + &ec2.IpRange{CidrIp: aws.String("10.0.2.0/24")}, + &ec2.IpRange{CidrIp: aws.String("10.0.3.0/24")}, + &ec2.IpRange{CidrIp: aws.String("10.0.4.0/24")}, + }, + } + + o := ec2.IpPermission{ + FromPort: aws.Int64(80), + ToPort: aws.Int64(80), + IpProtocol: aws.String("tcp"), + IpRanges: []*ec2.IpRange{ + &ec2.IpRange{CidrIp: aws.String("10.0.5.0/24")}, + }, + } + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSecurityGroupRuleDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSSecurityGroupRulePartialMatching, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSecurityGroupRuleExists("aws_security_group.web", &group), + testAccCheckAWSSecurityGroupRuleAttributes("aws_security_group_rule.ingress", &group, &p, "ingress"), + testAccCheckAWSSecurityGroupRuleAttributes("aws_security_group_rule.other", &group, &o, "ingress"), + testAccCheckAWSSecurityGroupRuleAttributes("aws_security_group_rule.nat_ingress", &group, &o, "ingress"), + ), + }, + }, + }) +} + +func TestAccAWSSecurityGroupRule_PartialMatching_Source(t *testing.T) { + var group ec2.SecurityGroup + var nat ec2.SecurityGroup + var p ec2.IpPermission + + // This function creates the expected IPPermission with the group id from an + // external security group, needed because Security Group IDs are generated on + // AWS side and can't be known ahead of time. 
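+	// Note: resource.ComposeTestCheckFunc runs its check functions in order, so
+	// setupSG below must be listed after testAccCheckAWSSecurityGroupRuleExists
+	// has populated `nat` from the API response.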
+ setupSG := func(*terraform.State) error { + if nat.GroupId == nil { + return fmt.Errorf("Error: nat group has nil GroupID") + } + + p = ec2.IpPermission{ + FromPort: aws.Int64(80), + ToPort: aws.Int64(80), + IpProtocol: aws.String("tcp"), + UserIdGroupPairs: []*ec2.UserIdGroupPair{ + &ec2.UserIdGroupPair{GroupId: nat.GroupId}, + }, + } + + return nil + } + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSecurityGroupRuleDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSSecurityGroupRulePartialMatching_Source, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSecurityGroupRuleExists("aws_security_group.web", &group), + testAccCheckAWSSecurityGroupRuleExists("aws_security_group.nat", &nat), + setupSG, + testAccCheckAWSSecurityGroupRuleAttributes("aws_security_group_rule.source_ingress", &group, &p, "ingress"), + ), + }, + }, + }) + +} + func testAccCheckAWSSecurityGroupRuleDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).ec2conn @@ -319,14 +405,27 @@ func testAccCheckAWSSecurityGroupRuleExists(n string, group *ec2.SecurityGroup) } } -func testAccCheckAWSSecurityGroupRuleAttributes(group *ec2.SecurityGroup, ruleType string) resource.TestCheckFunc { +func testAccCheckAWSSecurityGroupRuleAttributes(n string, group *ec2.SecurityGroup, p *ec2.IpPermission, ruleType string) resource.TestCheckFunc { return func(s *terraform.State) error { - p := &ec2.IpPermission{ - FromPort: aws.Int64(80), - ToPort: aws.Int64(8000), - IpProtocol: aws.String("tcp"), - IpRanges: []*ec2.IpRange{&ec2.IpRange{CidrIp: aws.String("10.0.0.0/8")}}, + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Security Group Rule Not found: %s", n) } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Security Group Rule is set") + } + + if p == nil { + p = &ec2.IpPermission{ + FromPort: aws.Int64(80), + ToPort: aws.Int64(8000), + IpProtocol: aws.String("tcp"), + IpRanges: []*ec2.IpRange{&ec2.IpRange{CidrIp: aws.String("10.0.0.0/8")}}, + } + } + + var matchingRule *ec2.IpPermission var rules []*ec2.IpPermission if ruleType == "ingress" { rules = group.IpPermissions @@ -338,15 +437,53 @@ func testAccCheckAWSSecurityGroupRuleAttributes(group *ec2.SecurityGroup, ruleTy return fmt.Errorf("No IPPerms") } - // Compare our ingress - if !reflect.DeepEqual(rules[0], p) { - return fmt.Errorf( - "Got:\n\n%#v\n\nExpected:\n\n%#v\n", - rules[0], - p) + for _, r := range rules { + if r.ToPort != nil && *p.ToPort != *r.ToPort { + continue + } + + if r.FromPort != nil && *p.FromPort != *r.FromPort { + continue + } + + if r.IpProtocol != nil && *p.IpProtocol != *r.IpProtocol { + continue + } + + remaining := len(p.IpRanges) + for _, ip := range p.IpRanges { + for _, rip := range r.IpRanges { + if *ip.CidrIp == *rip.CidrIp { + remaining-- + } + } + } + + if remaining > 0 { + continue + } + + remaining = len(p.UserIdGroupPairs) + for _, ip := range p.UserIdGroupPairs { + for _, rip := range r.UserIdGroupPairs { + if *ip.GroupId == *rip.GroupId { + remaining-- + } + } + } + + if remaining > 0 { + continue + } + matchingRule = r } - return nil + if matchingRule != nil { + log.Printf("[DEBUG] Matching rule found : %s", matchingRule) + return nil + } + + return fmt.Errorf("Error here\n\tlooking for %s, wasn't found in %s", p, rules) } } @@ -480,3 +617,104 @@ resource "aws_security_group_rule" "self" { security_group_id = "${aws_security_group.web.id}" } ` + +const 
testAccAWSSecurityGroupRulePartialMatching = `
+resource "aws_vpc" "default" {
+  cidr_block = "10.0.0.0/16"
+  tags {
+    Name = "tf-sg-rule-bug"
+  }
+}
+
+resource "aws_security_group" "web" {
+  name = "tf-other"
+  vpc_id = "${aws_vpc.default.id}"
+  tags {
+    Name = "tf-other-sg"
+  }
+}
+
+resource "aws_security_group" "nat" {
+  name = "tf-nat"
+  vpc_id = "${aws_vpc.default.id}"
+  tags {
+    Name = "tf-nat-sg"
+  }
+}
+
+resource "aws_security_group_rule" "ingress" {
+  type = "ingress"
+  from_port = 80
+  to_port = 80
+  protocol = "tcp"
+  cidr_blocks = ["10.0.2.0/24", "10.0.3.0/24", "10.0.4.0/24"]
+
+  security_group_id = "${aws_security_group.web.id}"
+}
+
+resource "aws_security_group_rule" "other" {
+  type = "ingress"
+  from_port = 80
+  to_port = 80
+  protocol = "tcp"
+  cidr_blocks = ["10.0.5.0/24"]
+
+  security_group_id = "${aws_security_group.web.id}"
+}
+
+// same as above, but in a different group, to guard against bad hashing
+resource "aws_security_group_rule" "nat_ingress" {
+  type = "ingress"
+  from_port = 80
+  to_port = 80
+  protocol = "tcp"
+  cidr_blocks = ["10.0.2.0/24", "10.0.3.0/24", "10.0.4.0/24"]
+
+  security_group_id = "${aws_security_group.nat.id}"
+}
+`
+
+const testAccAWSSecurityGroupRulePartialMatching_Source = `
+resource "aws_vpc" "default" {
+  cidr_block = "10.0.0.0/16"
+  tags {
+    Name = "tf-sg-rule-bug"
+  }
+}
+
+resource "aws_security_group" "web" {
+  name = "tf-other"
+  vpc_id = "${aws_vpc.default.id}"
+  tags {
+    Name = "tf-other-sg"
+  }
+}
+
+resource "aws_security_group" "nat" {
+  name = "tf-nat"
+  vpc_id = "${aws_vpc.default.id}"
+  tags {
+    Name = "tf-nat-sg"
+  }
+}
+
+resource "aws_security_group_rule" "source_ingress" {
+  type = "ingress"
+  from_port = 80
+  to_port = 80
+  protocol = "tcp"
+
+  source_security_group_id = "${aws_security_group.nat.id}"
+  security_group_id = "${aws_security_group.web.id}"
+}
+
+resource "aws_security_group_rule" "other_ingress" {
+  type = "ingress"
+  from_port = 80
+  to_port = 80
+  protocol = "tcp"
+  cidr_blocks = ["10.0.2.0/24", "10.0.3.0/24", "10.0.4.0/24"]
+
+  security_group_id = "${aws_security_group.web.id}"
+}
+`
diff --git a/builtin/providers/aws/resource_aws_spot_instance_request.go b/builtin/providers/aws/resource_aws_spot_instance_request.go
index 56de8992c..89384246c 100644
--- a/builtin/providers/aws/resource_aws_spot_instance_request.go
+++ b/builtin/providers/aws/resource_aws_spot_instance_request.go
@@ -36,6 +36,11 @@ func resourceAwsSpotInstanceRequest() *schema.Resource {
 			Required: true,
 			ForceNew: true,
 		}
+		s["spot_type"] = &schema.Schema{
+			Type:     schema.TypeString,
+			Optional: true,
+			Default:  "persistent",
+		}
 		s["wait_for_fulfillment"] = &schema.Schema{
 			Type:     schema.TypeBool,
 			Optional: true,
@@ -69,10 +74,7 @@ func resourceAwsSpotInstanceRequestCreate(d *schema.ResourceData, meta interface
 
 	spotOpts := &ec2.RequestSpotInstancesInput{
 		SpotPrice: aws.String(d.Get("spot_price").(string)),
-
-		// We always set the type to "persistent", since the imperative-like
-		// behavior of "one-time" does not map well to TF's declarative domain.
-		Type: aws.String("persistent"),
+		Type:      aws.String(d.Get("spot_type").(string)),
 
 		// Though the AWS API supports creating spot instance requests for multiple
 		// instances, for TF purposes we fix this to one instance per request.
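+		//
+		// Note: the EC2 API accepts only "one-time" or "persistent" as the
+		// request type, so the "persistent" default on spot_type above keeps
+		// the previous always-persistent behavior for existing configs while
+		// allowing one-time requests to be opted into.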
diff --git a/builtin/providers/aws/resource_aws_vpc_peering_connection.go b/builtin/providers/aws/resource_aws_vpc_peering_connection.go index b279797f6..6b7c4dc52 100644 --- a/builtin/providers/aws/resource_aws_vpc_peering_connection.go +++ b/builtin/providers/aws/resource_aws_vpc_peering_connection.go @@ -127,6 +127,9 @@ func resourceVPCPeeringConnectionAccept(conn *ec2.EC2, id string) (string, error } resp, err := conn.AcceptVpcPeeringConnection(req) + if err != nil { + return "", err + } pc := resp.VpcPeeringConnection return *pc.Status.Code, err } @@ -153,16 +156,15 @@ func resourceAwsVPCPeeringUpdate(d *schema.ResourceData, meta interface{}) error } pc := pcRaw.(*ec2.VpcPeeringConnection) - if *pc.Status.Code == "pending-acceptance" { + if pc.Status != nil && *pc.Status.Code == "pending-acceptance" { status, err := resourceVPCPeeringConnectionAccept(conn, d.Id()) - - log.Printf( - "[DEBUG] VPC Peering connection accept status %s", - status) if err != nil { return err } + log.Printf( + "[DEBUG] VPC Peering connection accept status: %s", + status) } } diff --git a/builtin/providers/aws/resource_aws_vpc_peering_connection_test.go b/builtin/providers/aws/resource_aws_vpc_peering_connection_test.go index dc78a7082..ca92ce66a 100644 --- a/builtin/providers/aws/resource_aws_vpc_peering_connection_test.go +++ b/builtin/providers/aws/resource_aws_vpc_peering_connection_test.go @@ -36,6 +36,7 @@ func TestAccAWSVPCPeeringConnection_basic(t *testing.T) { func TestAccAWSVPCPeeringConnection_tags(t *testing.T) { var connection ec2.VpcPeeringConnection + peerId := os.Getenv("TF_PEER_ID") resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -43,7 +44,7 @@ func TestAccAWSVPCPeeringConnection_tags(t *testing.T) { CheckDestroy: testAccCheckVpcDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccVpcPeeringConfigTags, + Config: fmt.Sprintf(testAccVpcPeeringConfigTags, peerId), Check: resource.ComposeTestCheckFunc( testAccCheckAWSVpcPeeringConnectionExists("aws_vpc_peering_connection.foo", &connection), testAccCheckTags(&connection.Tags, "foo", "bar"), @@ -117,6 +118,7 @@ resource "aws_vpc" "bar" { resource "aws_vpc_peering_connection" "foo" { vpc_id = "${aws_vpc.foo.id}" peer_vpc_id = "${aws_vpc.bar.id}" + auto_accept = true } ` @@ -132,6 +134,7 @@ resource "aws_vpc" "bar" { resource "aws_vpc_peering_connection" "foo" { vpc_id = "${aws_vpc.foo.id}" peer_vpc_id = "${aws_vpc.bar.id}" + peer_owner_id = "%s" tags { foo = "bar" } diff --git a/builtin/providers/aws/resource_vpn_connection_route.go b/builtin/providers/aws/resource_vpn_connection_route.go index 580ecff19..e6863f721 100644 --- a/builtin/providers/aws/resource_vpn_connection_route.go +++ b/builtin/providers/aws/resource_vpn_connection_route.go @@ -17,8 +17,6 @@ func resourceAwsVpnConnectionRoute() *schema.Resource { // You can't update a route. You can just delete one and make // a new one. 
Create: resourceAwsVpnConnectionRouteCreate,
-		Update: resourceAwsVpnConnectionRouteCreate,
-
 		Read:   resourceAwsVpnConnectionRouteRead,
 		Delete: resourceAwsVpnConnectionRouteDelete,
diff --git a/builtin/providers/aws/structure.go b/builtin/providers/aws/structure.go
index 9b1c0ab79..fd581c84a 100644
--- a/builtin/providers/aws/structure.go
+++ b/builtin/providers/aws/structure.go
@@ -4,13 +4,17 @@ import (
 	"bytes"
 	"encoding/json"
 	"fmt"
+	"regexp"
 	"sort"
 	"strings"
 
 	"github.com/aws/aws-sdk-go/aws"
+	"github.com/aws/aws-sdk-go/service/cloudformation"
+	"github.com/aws/aws-sdk-go/service/directoryservice"
 	"github.com/aws/aws-sdk-go/service/ec2"
 	"github.com/aws/aws-sdk-go/service/ecs"
 	"github.com/aws/aws-sdk-go/service/elasticache"
+	elasticsearch "github.com/aws/aws-sdk-go/service/elasticsearchservice"
 	"github.com/aws/aws-sdk-go/service/elb"
 	"github.com/aws/aws-sdk-go/service/rds"
 	"github.com/aws/aws-sdk-go/service/route53"
@@ -368,7 +372,7 @@ func flattenElastiCacheParameters(list []*elasticache.Parameter) []map[string]in
 }
 
 // Takes the result of flatmap.Expand for an array of strings
-// and returns a []string
+// and returns a []*string
 func expandStringList(configured []interface{}) []*string {
 	vs := make([]*string, 0, len(configured))
 	for _, v := range configured {
@@ -377,6 +381,17 @@ func expandStringList(configured []interface{}) []*string {
 	return vs
 }
 
+// Takes a list of pointers to strings, expands it to an array
+// of raw strings and returns a []interface{},
+// to keep compatibility with schema.NewSet
+func flattenStringList(list []*string) []interface{} {
+	vs := make([]interface{}, 0, len(list))
+	for _, v := range list {
+		vs = append(vs, *v)
+	}
+	return vs
+}
+
 //Flattens an array of private ip addresses into a []string, where the elements returned are the IP strings e.g.
"192.168.0.0" func flattenNetworkInterfacesPrivateIPAddresses(dtos []*ec2.NetworkInterfacePrivateIpAddress) []string { ips := make([]string, 0, len(dtos)) @@ -446,3 +461,198 @@ func expandResourceRecords(recs []interface{}, typeStr string) []*route53.Resour } return records } + +func validateRdsId(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[0-9a-z-]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only lowercase alphanumeric characters and hyphens allowed in %q", k)) + } + if !regexp.MustCompile(`^[a-z]`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "first character of %q must be a letter", k)) + } + if regexp.MustCompile(`--`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q cannot contain two consecutive hyphens", k)) + } + if regexp.MustCompile(`-$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q cannot end with a hyphen", k)) + } + return +} + +func expandESClusterConfig(m map[string]interface{}) *elasticsearch.ElasticsearchClusterConfig { + config := elasticsearch.ElasticsearchClusterConfig{} + + if v, ok := m["dedicated_master_enabled"]; ok { + isEnabled := v.(bool) + config.DedicatedMasterEnabled = aws.Bool(isEnabled) + + if isEnabled { + if v, ok := m["dedicated_master_count"]; ok && v.(int) > 0 { + config.DedicatedMasterCount = aws.Int64(int64(v.(int))) + } + if v, ok := m["dedicated_master_type"]; ok && v.(string) != "" { + config.DedicatedMasterType = aws.String(v.(string)) + } + } + } + + if v, ok := m["instance_count"]; ok { + config.InstanceCount = aws.Int64(int64(v.(int))) + } + if v, ok := m["instance_type"]; ok { + config.InstanceType = aws.String(v.(string)) + } + + if v, ok := m["zone_awareness_enabled"]; ok { + config.ZoneAwarenessEnabled = aws.Bool(v.(bool)) + } + + return &config +} + +func flattenESClusterConfig(c *elasticsearch.ElasticsearchClusterConfig) []map[string]interface{} { + m := map[string]interface{}{} + + if c.DedicatedMasterCount != nil { + m["dedicated_master_count"] = *c.DedicatedMasterCount + } + if c.DedicatedMasterEnabled != nil { + m["dedicated_master_enabled"] = *c.DedicatedMasterEnabled + } + if c.DedicatedMasterType != nil { + m["dedicated_master_type"] = *c.DedicatedMasterType + } + if c.InstanceCount != nil { + m["instance_count"] = *c.InstanceCount + } + if c.InstanceType != nil { + m["instance_type"] = *c.InstanceType + } + if c.ZoneAwarenessEnabled != nil { + m["zone_awareness_enabled"] = *c.ZoneAwarenessEnabled + } + + return []map[string]interface{}{m} +} + +func flattenESEBSOptions(o *elasticsearch.EBSOptions) []map[string]interface{} { + m := map[string]interface{}{} + + if o.EBSEnabled != nil { + m["ebs_enabled"] = *o.EBSEnabled + } + if o.Iops != nil { + m["iops"] = *o.Iops + } + if o.VolumeSize != nil { + m["volume_size"] = *o.VolumeSize + } + if o.VolumeType != nil { + m["volume_type"] = *o.VolumeType + } + + return []map[string]interface{}{m} +} + +func expandESEBSOptions(m map[string]interface{}) *elasticsearch.EBSOptions { + options := elasticsearch.EBSOptions{} + + if v, ok := m["ebs_enabled"]; ok { + options.EBSEnabled = aws.Bool(v.(bool)) + } + if v, ok := m["iops"]; ok && v.(int) > 0 { + options.Iops = aws.Int64(int64(v.(int))) + } + if v, ok := m["volume_size"]; ok && v.(int) > 0 { + options.VolumeSize = aws.Int64(int64(v.(int))) + } + if v, ok := m["volume_type"]; ok && v.(string) != "" { + options.VolumeType = aws.String(v.(string)) + } + + return &options +} + +func 
pointersMapToStringList(pointers map[string]*string) map[string]interface{} {
+	list := make(map[string]interface{}, len(pointers))
+	for i, v := range pointers {
+		list[i] = *v
+	}
+	return list
+}
+
+func stringMapToPointers(m map[string]interface{}) map[string]*string {
+	list := make(map[string]*string, len(m))
+	for i, v := range m {
+		list[i] = aws.String(v.(string))
+	}
+	return list
+}
+
+func flattenDSVpcSettings(
+	s *directoryservice.DirectoryVpcSettingsDescription) []map[string]interface{} {
+	settings := make(map[string]interface{})
+
+	settings["subnet_ids"] = schema.NewSet(schema.HashString, flattenStringList(s.SubnetIds))
+	settings["vpc_id"] = *s.VpcId
+
+	return []map[string]interface{}{settings}
+}
+
+func expandCloudFormationParameters(params map[string]interface{}) []*cloudformation.Parameter {
+	var cfParams []*cloudformation.Parameter
+	for k, v := range params {
+		cfParams = append(cfParams, &cloudformation.Parameter{
+			ParameterKey:   aws.String(k),
+			ParameterValue: aws.String(v.(string)),
+		})
+	}
+
+	return cfParams
+}
+
+// flattenCloudFormationParameters flattens a list of
+// *cloudformation.Parameter, returning only the parameters present in the
+// original configuration so that server-side defaults don't clash with it
+func flattenCloudFormationParameters(cfParams []*cloudformation.Parameter,
+	originalParams map[string]interface{}) map[string]interface{} {
+	params := make(map[string]interface{}, len(cfParams))
+	for _, p := range cfParams {
+		_, isConfigured := originalParams[*p.ParameterKey]
+		if isConfigured {
+			params[*p.ParameterKey] = *p.ParameterValue
+		}
+	}
+	return params
+}
+
+func expandCloudFormationTags(tags map[string]interface{}) []*cloudformation.Tag {
+	var cfTags []*cloudformation.Tag
+	for k, v := range tags {
+		cfTags = append(cfTags, &cloudformation.Tag{
+			Key:   aws.String(k),
+			Value: aws.String(v.(string)),
+		})
+	}
+	return cfTags
+}
+
+func flattenCloudFormationTags(cfTags []*cloudformation.Tag) map[string]string {
+	tags := make(map[string]string, len(cfTags))
+	for _, t := range cfTags {
+		tags[*t.Key] = *t.Value
+	}
+	return tags
+}
+
+func flattenCloudFormationOutputs(cfOutputs []*cloudformation.Output) map[string]string {
+	outputs := make(map[string]string, len(cfOutputs))
+	for _, o := range cfOutputs {
+		outputs[*o.OutputKey] = *o.OutputValue
+	}
+	return outputs
+}
diff --git a/builtin/providers/aws/tagsEFS.go b/builtin/providers/aws/tagsEFS.go
new file mode 100644
index 000000000..8303d6888
--- /dev/null
+++ b/builtin/providers/aws/tagsEFS.go
@@ -0,0 +1,94 @@
+package aws
+
+import (
+	"log"
+
+	"github.com/aws/aws-sdk-go/aws"
+	"github.com/aws/aws-sdk-go/service/efs"
+	"github.com/hashicorp/terraform/helper/schema"
+)
+
+// setTags is a helper to set the tags for a resource.
It expects the +// tags field to be named "tags" +func setTagsEFS(conn *efs.EFS, d *schema.ResourceData) error { + if d.HasChange("tags") { + oraw, nraw := d.GetChange("tags") + o := oraw.(map[string]interface{}) + n := nraw.(map[string]interface{}) + create, remove := diffTagsEFS(tagsFromMapEFS(o), tagsFromMapEFS(n)) + + // Set tags + if len(remove) > 0 { + log.Printf("[DEBUG] Removing tags: %#v", remove) + k := make([]*string, 0, len(remove)) + for _, t := range remove { + k = append(k, t.Key) + } + _, err := conn.DeleteTags(&efs.DeleteTagsInput{ + FileSystemId: aws.String(d.Id()), + TagKeys: k, + }) + if err != nil { + return err + } + } + if len(create) > 0 { + log.Printf("[DEBUG] Creating tags: %#v", create) + _, err := conn.CreateTags(&efs.CreateTagsInput{ + FileSystemId: aws.String(d.Id()), + Tags: create, + }) + if err != nil { + return err + } + } + } + + return nil +} + +// diffTags takes our tags locally and the ones remotely and returns +// the set of tags that must be created, and the set of tags that must +// be destroyed. +func diffTagsEFS(oldTags, newTags []*efs.Tag) ([]*efs.Tag, []*efs.Tag) { + // First, we're creating everything we have + create := make(map[string]interface{}) + for _, t := range newTags { + create[*t.Key] = *t.Value + } + + // Build the list of what to remove + var remove []*efs.Tag + for _, t := range oldTags { + old, ok := create[*t.Key] + if !ok || old != *t.Value { + // Delete it! + remove = append(remove, t) + } + } + + return tagsFromMapEFS(create), remove +} + +// tagsFromMap returns the tags for the given map of data. +func tagsFromMapEFS(m map[string]interface{}) []*efs.Tag { + var result []*efs.Tag + for k, v := range m { + result = append(result, &efs.Tag{ + Key: aws.String(k), + Value: aws.String(v.(string)), + }) + } + + return result +} + +// tagsToMap turns the list of tags into a map. +func tagsToMapEFS(ts []*efs.Tag) map[string]string { + result := make(map[string]string) + for _, t := range ts { + result[*t.Key] = *t.Value + } + + return result +} diff --git a/builtin/providers/aws/tagsEFS_test.go b/builtin/providers/aws/tagsEFS_test.go new file mode 100644 index 000000000..ca2ae8843 --- /dev/null +++ b/builtin/providers/aws/tagsEFS_test.go @@ -0,0 +1,85 @@ +package aws + +import ( + "fmt" + "reflect" + "testing" + + "github.com/aws/aws-sdk-go/service/efs" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestDiffEFSTags(t *testing.T) { + cases := []struct { + Old, New map[string]interface{} + Create, Remove map[string]string + }{ + // Basic add/remove + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "bar": "baz", + }, + Create: map[string]string{ + "bar": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + + // Modify + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "foo": "baz", + }, + Create: map[string]string{ + "foo": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + } + + for i, tc := range cases { + c, r := diffTagsEFS(tagsFromMapEFS(tc.Old), tagsFromMapEFS(tc.New)) + cm := tagsToMapEFS(c) + rm := tagsToMapEFS(r) + if !reflect.DeepEqual(cm, tc.Create) { + t.Fatalf("%d: bad create: %#v", i, cm) + } + if !reflect.DeepEqual(rm, tc.Remove) { + t.Fatalf("%d: bad remove: %#v", i, rm) + } + } +} + +// testAccCheckTags can be used to check the tags on a resource. 
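+// Passing value == "" asserts that the key is absent entirely; for example,
+// testAccCheckEFSTags(&fsTags, "Name", "tf-efs") fails unless a "Name" tag
+// with exactly that value is present (fsTags being a []*efs.Tag captured
+// elsewhere in the test).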
+func testAccCheckEFSTags( + ts *[]*efs.Tag, key string, value string) resource.TestCheckFunc { + return func(s *terraform.State) error { + m := tagsToMapEFS(*ts) + v, ok := m[key] + if value != "" && !ok { + return fmt.Errorf("Missing tag: %s", key) + } else if value == "" && ok { + return fmt.Errorf("Extra tag: %s", key) + } + if value == "" { + return nil + } + + if v != value { + return fmt.Errorf("%s: bad value: %s", key, v) + } + + return nil + } +} diff --git a/builtin/providers/aws/tagsRDS.go b/builtin/providers/aws/tagsRDS.go index 3e4e0c700..bcc3eb9ea 100644 --- a/builtin/providers/aws/tagsRDS.go +++ b/builtin/providers/aws/tagsRDS.go @@ -1,6 +1,7 @@ package aws import ( + "fmt" "log" "github.com/aws/aws-sdk-go/aws" @@ -19,7 +20,7 @@ func setTagsRDS(conn *rds.RDS, d *schema.ResourceData, arn string) error { // Set tags if len(remove) > 0 { - log.Printf("[DEBUG] Removing tags: %#v", remove) + log.Printf("[DEBUG] Removing tags: %s", remove) k := make([]*string, len(remove), len(remove)) for i, t := range remove { k[i] = t.Key @@ -34,7 +35,7 @@ func setTagsRDS(conn *rds.RDS, d *schema.ResourceData, arn string) error { } } if len(create) > 0 { - log.Printf("[DEBUG] Creating tags: %#v", create) + log.Printf("[DEBUG] Creating tags: %s", create) _, err := conn.AddTagsToResource(&rds.AddTagsToResourceInput{ ResourceName: aws.String(arn), Tags: create, @@ -93,3 +94,20 @@ func tagsToMapRDS(ts []*rds.Tag) map[string]string { return result } + +func saveTagsRDS(conn *rds.RDS, d *schema.ResourceData, arn string) error { + resp, err := conn.ListTagsForResource(&rds.ListTagsForResourceInput{ + ResourceName: aws.String(arn), + }) + + if err != nil { + return fmt.Errorf("[DEBUG] Error retreiving tags for ARN: %s", arn) + } + + var dt []*rds.Tag + if len(resp.TagList) > 0 { + dt = resp.TagList + } + + return d.Set("tags", tagsToMapRDS(dt)) +} diff --git a/builtin/providers/aws/tags_kinesis.go b/builtin/providers/aws/tags_kinesis.go new file mode 100644 index 000000000..c9562644d --- /dev/null +++ b/builtin/providers/aws/tags_kinesis.go @@ -0,0 +1,105 @@ +package aws + +import ( + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/kinesis" + "github.com/hashicorp/terraform/helper/schema" +) + +// setTags is a helper to set the tags for a resource. It expects the +// tags field to be named "tags" +func setTagsKinesis(conn *kinesis.Kinesis, d *schema.ResourceData) error { + + sn := d.Get("name").(string) + + if d.HasChange("tags") { + oraw, nraw := d.GetChange("tags") + o := oraw.(map[string]interface{}) + n := nraw.(map[string]interface{}) + create, remove := diffTagsKinesis(tagsFromMapKinesis(o), tagsFromMapKinesis(n)) + + // Set tags + if len(remove) > 0 { + log.Printf("[DEBUG] Removing tags: %#v", remove) + k := make([]*string, len(remove), len(remove)) + for i, t := range remove { + k[i] = t.Key + } + + _, err := conn.RemoveTagsFromStream(&kinesis.RemoveTagsFromStreamInput{ + StreamName: aws.String(sn), + TagKeys: k, + }) + if err != nil { + return err + } + } + + if len(create) > 0 { + + log.Printf("[DEBUG] Creating tags: %#v", create) + t := make(map[string]*string) + for _, tag := range create { + t[*tag.Key] = tag.Value + } + + _, err := conn.AddTagsToStream(&kinesis.AddTagsToStreamInput{ + StreamName: aws.String(sn), + Tags: t, + }) + if err != nil { + return err + } + } + } + + return nil +} + +// diffTags takes our tags locally and the ones remotely and returns +// the set of tags that must be created, and the set of tags that must +// be destroyed. 
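One wrinkle the Kinesis variant (next) has to bridge: the diff helpers traffic in []*kinesis.Tag, but AddTagsToStream takes a map[string]*string. A sketch of just that conversion step, with a placeholder stream name and tag:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/kinesis"
)

func main() {
	// Tags in the shape the diff helpers produce.
	create := []*kinesis.Tag{
		{Key: aws.String("env"), Value: aws.String("test")},
	}

	// AddTagsToStream wants a map, so re-key the slice.
	t := make(map[string]*string, len(create))
	for _, tag := range create {
		t[*tag.Key] = tag.Value
	}

	input := &kinesis.AddTagsToStreamInput{
		StreamName: aws.String("example-stream"), // placeholder
		Tags:       t,
	}
	fmt.Println(input)
}
```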
+func diffTagsKinesis(oldTags, newTags []*kinesis.Tag) ([]*kinesis.Tag, []*kinesis.Tag) { + // First, we're creating everything we have + create := make(map[string]interface{}) + for _, t := range newTags { + create[*t.Key] = *t.Value + } + + // Build the list of what to remove + var remove []*kinesis.Tag + for _, t := range oldTags { + old, ok := create[*t.Key] + if !ok || old != *t.Value { + // Delete it! + remove = append(remove, t) + } + } + + return tagsFromMapKinesis(create), remove +} + +// tagsFromMap returns the tags for the given map of data. +func tagsFromMapKinesis(m map[string]interface{}) []*kinesis.Tag { + var result []*kinesis.Tag + for k, v := range m { + result = append(result, &kinesis.Tag{ + Key: aws.String(k), + Value: aws.String(v.(string)), + }) + } + + return result +} + +// tagsToMap turns the list of tags into a map. +func tagsToMapKinesis(ts []*kinesis.Tag) map[string]string { + result := make(map[string]string) + for _, t := range ts { + result[*t.Key] = *t.Value + } + + return result +} diff --git a/builtin/providers/aws/tags_kinesis_test.go b/builtin/providers/aws/tags_kinesis_test.go new file mode 100644 index 000000000..d97365ad8 --- /dev/null +++ b/builtin/providers/aws/tags_kinesis_test.go @@ -0,0 +1,84 @@ +package aws + +import ( + "fmt" + "reflect" + "testing" + + "github.com/aws/aws-sdk-go/service/kinesis" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestDiffTagsKinesis(t *testing.T) { + cases := []struct { + Old, New map[string]interface{} + Create, Remove map[string]string + }{ + // Basic add/remove + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "bar": "baz", + }, + Create: map[string]string{ + "bar": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + + // Modify + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "foo": "baz", + }, + Create: map[string]string{ + "foo": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + } + + for i, tc := range cases { + c, r := diffTagsKinesis(tagsFromMapKinesis(tc.Old), tagsFromMapKinesis(tc.New)) + cm := tagsToMapKinesis(c) + rm := tagsToMapKinesis(r) + if !reflect.DeepEqual(cm, tc.Create) { + t.Fatalf("%d: bad create: %#v", i, cm) + } + if !reflect.DeepEqual(rm, tc.Remove) { + t.Fatalf("%d: bad remove: %#v", i, rm) + } + } +} + +// testAccCheckTags can be used to check the tags on a resource. 
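testAccCheckKinesisTags (below) compresses three assertions into one closure: a non-empty expected value must match exactly, while an empty expected value asserts the tag is absent. Here is the same tri-state contract on a plain map, runnable with only the standard library:

```go
package main

import "fmt"

// checkTag mirrors the closure's contract: value == "" means "this key
// must be absent"; otherwise the key must exist with exactly that value.
func checkTag(tags map[string]string, key, value string) error {
	v, ok := tags[key]
	switch {
	case value != "" && !ok:
		return fmt.Errorf("Missing tag: %s", key)
	case value == "" && ok:
		return fmt.Errorf("Extra tag: %s", key)
	case value == "":
		return nil
	case v != value:
		return fmt.Errorf("%s: bad value: %s", key, v)
	}
	return nil
}

func main() {
	tags := map[string]string{"env": "test"}
	fmt.Println(checkTag(tags, "env", "test")) // <nil>
	fmt.Println(checkTag(tags, "env", "prod")) // env: bad value: test
	fmt.Println(checkTag(tags, "tmp", ""))     // <nil>: absence asserted
}
```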
+func testAccCheckKinesisTags(ts []*kinesis.Tag, key string, value string) resource.TestCheckFunc { + return func(s *terraform.State) error { + m := tagsToMapKinesis(ts) + v, ok := m[key] + if value != "" && !ok { + return fmt.Errorf("Missing tag: %s", key) + } else if value == "" && ok { + return fmt.Errorf("Extra tag: %s", key) + } + if value == "" { + return nil + } + + if v != value { + return fmt.Errorf("%s: bad value: %s", key, v) + } + + return nil + } +} diff --git a/builtin/providers/aws/test-fixtures/saml-metadata-modified.xml b/builtin/providers/aws/test-fixtures/saml-metadata-modified.xml new file mode 100644 index 000000000..aaca7afc0 --- /dev/null +++ b/builtin/providers/aws/test-fixtures/saml-metadata-modified.xml @@ -0,0 +1,14 @@ + + + + + + MIIErDCCA5SgAwIBAgIOAU+PT8RBAAAAAHxJXEcwDQYJKoZIhvcNAQELBQAwgZAxKDAmBgNVBAMMH1NlbGZTaWduZWRDZXJ0XzAyU2VwMjAxNV8xODI2NTMxGDAWBgNVBAsMDzAwRDI0MDAwMDAwcEFvQTEXMBUGA1UECgwOU2FsZXNmb3JjZS5jb20xFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28xCzAJBgNVBAgMAkNBMQwwCgYDVQQGEwNVU0EwHhcNMTUwOTAyMTgyNjUzWhcNMTcwOTAyMTIwMDAwWjCBkDEoMCYGA1UEAwwfU2VsZlNpZ25lZENlcnRfMDJTZXAyMDE1XzE4MjY1MzEYMBYGA1UECwwPMDBEMjQwMDAwMDBwQW9BMRcwFQYDVQQKDA5TYWxlc2ZvcmNlLmNvbTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzELMAkGA1UECAwCQ0ExDDAKBgNVBAYTA1VTQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAJp/wTRr9n1IWJpkRTjNpep47OKJrD2E6rGbJ18TG2RxtIz+zCn2JwH2aP3TULh0r0hhcg/pecv51RRcG7O19DBBaTQ5+KuoICQyKZy07/yDXSiZontTwkEYs06ssTwTHUcRXbcwTKv16L7omt0MjIhTTGfvtLOYiPwyvKvzAHg4eNuAcli0duVM78UIBORtdmy9C9ZcMh8yRJo5aPBq85wsE3JXU58ytyZzCHTBLH+2xFQrjYnUSEW+FOEEpI7o33MVdFBvWWg1R17HkWzcve4C30lqOHqvxBzyESZ/N1mMlmSt8gPFyB+mUXY99StJDJpnytbY8DwSzMQUo/sOVB0CAwEAAaOCAQAwgf0wHQYDVR0OBBYEFByu1EQqRQS0bYQBKS9K5qwKi+6IMA8GA1UdEwEB/wQFMAMBAf8wgcoGA1UdIwSBwjCBv4AUHK7URCpFBLRthAEpL0rmrAqL7oihgZakgZMwgZAxKDAmBgNVBAMMH1NlbGZTaWduZWRDZXJ0XzAyU2VwMjAxNV8xODI2NTMxGDAWBgNVBAsMDzAwRDI0MDAwMDAwcEFvQTEXMBUGA1UECgwOU2FsZXNmb3JjZS5jb20xFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28xCzAJBgNVBAgMAkNBMQwwCgYDVQQGEwNVU0GCDgFPj0/EQQAAAAB8SVxHMA0GCSqGSIb3DQEBCwUAA4IBAQA9O5o1tC71qJnkq+ABPo4A1aFKZVT/07GcBX4/wetcbYySL4Q2nR9pMgfPYYS1j+P2E3viPsQwPIWDUBwFkNsjjX5DSGEkLAioVGKRwJshRSCSynMcsVZbQkfBUiZXqhM0wzvoa/ALvGD+aSSb1m+x7lEpDYNwQKWaUW2VYcHWv9wjujMyy7dlj8E/jqM71mw7ThNl6k4+3RQ802dMa14txm8pkF0vZgfpV3tkqhBqtjBAicVCaveqr3r3iGqjvyilBgdY+0NR8szqzm7CD/Bkb22+/IgM/mXQuL9KHD/WADlSGmYKmG3SSahmcZxznYCnzcRNN9LVuXlz5cbljmBj + + + + urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified + + + + diff --git a/builtin/providers/aws/test-fixtures/saml-metadata.xml b/builtin/providers/aws/test-fixtures/saml-metadata.xml new file mode 100644 index 000000000..69e353b77 --- /dev/null +++ b/builtin/providers/aws/test-fixtures/saml-metadata.xml @@ -0,0 +1,14 @@ + + + + + + 
MIIErDCCA5SgAwIBAgIOAU+PT8RBAAAAAHxJXEcwDQYJKoZIhvcNAQELBQAwgZAxKDAmBgNVBAMMH1NlbGZTaWduZWRDZXJ0XzAyU2VwMjAxNV8xODI2NTMxGDAWBgNVBAsMDzAwRDI0MDAwMDAwcEFvQTEXMBUGA1UECgwOU2FsZXNmb3JjZS5jb20xFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28xCzAJBgNVBAgMAkNBMQwwCgYDVQQGEwNVU0EwHhcNMTUwOTAyMTgyNjUzWhcNMTcwOTAyMTIwMDAwWjCBkDEoMCYGA1UEAwwfU2VsZlNpZ25lZENlcnRfMDJTZXAyMDE1XzE4MjY1MzEYMBYGA1UECwwPMDBEMjQwMDAwMDBwQW9BMRcwFQYDVQQKDA5TYWxlc2ZvcmNlLmNvbTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzELMAkGA1UECAwCQ0ExDDAKBgNVBAYTA1VTQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAJp/wTRr9n1IWJpkRTjNpep47OKJrD2E6rGbJ18TG2RxtIz+zCn2JwH2aP3TULh0r0hhcg/pecv51RRcG7O19DBBaTQ5+KuoICQyKZy07/yDXSiZontTwkEYs06ssTwTHUcRXbcwTKv16L7omt0MjIhTTGfvtLOYiPwyvKvzAHg4eNuAcli0duVM78UIBORtdmy9C9ZcMh8yRJo5aPBq85wsE3JXU58ytyZzCHTBLH+2xFQrjYnUSEW+FOEEpI7o33MVdFBvWWg1R17HkWzcve4C30lqOHqvxBzyESZ/N1mMlmSt8gPFyB+mUXY99StJDJpnytbY8DwSzMQUo/sOVB0CAwEAAaOCAQAwgf0wHQYDVR0OBBYEFByu1EQqRQS0bYQBKS9K5qwKi+6IMA8GA1UdEwEB/wQFMAMBAf8wgcoGA1UdIwSBwjCBv4AUHK7URCpFBLRthAEpL0rmrAqL7oihgZakgZMwgZAxKDAmBgNVBAMMH1NlbGZTaWduZWRDZXJ0XzAyU2VwMjAxNV8xODI2NTMxGDAWBgNVBAsMDzAwRDI0MDAwMDAwcEFvQTEXMBUGA1UECgwOU2FsZXNmb3JjZS5jb20xFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28xCzAJBgNVBAgMAkNBMQwwCgYDVQQGEwNVU0GCDgFPj0/EQQAAAAB8SVxHMA0GCSqGSIb3DQEBCwUAA4IBAQA9O5o1tC71qJnkq+ABPo4A1aFKZVT/07GcBX4/wetcbYySL4Q2nR9pMgfPYYS1j+P2E3viPsQwPIWDUBwFkNsjjX5DSGEkLAioVGKRwJshRSCSynMcsVZbQkfBUiZXqhM0wzvoa/ALvGD+aSSb1m+x7lEpDYNwQKWaUW2VYcHWv9wjujMyy7dlj8E/jqM71mw7ThNl6k4+3RQ802dMa14txm8pkF0vZgfpV3tkqhBqtjBAicVCaveqr3r3iGqjvyilBgdY+0NR8szqzm7CD/Bkb22+/IgM/mXQuL9KHD/WADlSGmYKmG3SSahmcZxznYCnzcRNN9LVuXlz5cbljmBj + + + + urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified + + + + diff --git a/builtin/providers/azure/config.go b/builtin/providers/azure/config.go index cbb23d58b..b096a10c4 100644 --- a/builtin/providers/azure/config.go +++ b/builtin/providers/azure/config.go @@ -98,7 +98,7 @@ func (c Client) getStorageServiceQueueClient(serviceName string) (storage.QueueS func (c *Config) NewClientFromSettingsData() (*Client, error) { mc, err := management.ClientFromPublishSettingsData(c.Settings, c.SubscriptionID) if err != nil { - return nil, nil + return nil, err } return &Client{ diff --git a/builtin/providers/azure/provider.go b/builtin/providers/azure/provider.go index fe100be35..975a93b00 100644 --- a/builtin/providers/azure/provider.go +++ b/builtin/providers/azure/provider.go @@ -64,22 +64,12 @@ func providerConfigure(d *schema.ResourceData) (interface{}, error) { Certificate: []byte(d.Get("certificate").(string)), } - settings := d.Get("settings_file").(string) - - if settings != "" { - if ok, _ := isFile(settings); ok { - settingsFile, err := homedir.Expand(settings) - if err != nil { - return nil, fmt.Errorf("Error expanding the settings file path: %s", err) - } - publishSettingsContent, err := ioutil.ReadFile(settingsFile) - if err != nil { - return nil, fmt.Errorf("Error reading settings file: %s", err) - } - config.Settings = publishSettingsContent - } else { - config.Settings = []byte(settings) - } + settingsFile := d.Get("settings_file").(string) + if settingsFile != "" { + // any errors from readSettings would have been caught at the validate + // step, so we can avoid handling them now + settings, _, _ := readSettings(settingsFile) + config.Settings = settings return config.NewClientFromSettingsData() } @@ -92,31 +82,39 @@ func providerConfigure(d *schema.ResourceData) (interface{}, error) { "or both a 'subscription_id' and 'certificate'.") } -func validateSettingsFile(v interface{}, k string) (warnings 
[]string, errors []error) { +func validateSettingsFile(v interface{}, k string) ([]string, []error) { value := v.(string) - if value == "" { - return + return nil, nil } - var settings settingsData - if err := xml.Unmarshal([]byte(value), &settings); err != nil { - warnings = append(warnings, ` + _, warnings, errors := readSettings(value) + return warnings, errors +} + +const settingsPathWarnMsg = ` settings_file is not valid XML, so we are assuming it is a file path. This support will be removed in the future. Please update your configuration to use -${file("filename.publishsettings")} instead.`) - } else { +${file("filename.publishsettings")} instead.` + +func readSettings(pathOrContents string) (s []byte, ws []string, es []error) { + var settings settingsData + if err := xml.Unmarshal([]byte(pathOrContents), &settings); err == nil { + s = []byte(pathOrContents) return } - if ok, err := isFile(value); !ok { - errors = append(errors, - fmt.Errorf( - "account_file path could not be read from '%s': %s", - value, - err)) + ws = append(ws, settingsPathWarnMsg) + path, err := homedir.Expand(pathOrContents) + if err != nil { + es = append(es, fmt.Errorf("Error expanding path: %s", err)) + return } + s, err = ioutil.ReadFile(path) + if err != nil { + es = append(es, fmt.Errorf("Could not read file '%s': %s", path, err)) + } return } diff --git a/builtin/providers/azure/provider_test.go b/builtin/providers/azure/provider_test.go index 5c720640f..ca4017aae 100644 --- a/builtin/providers/azure/provider_test.go +++ b/builtin/providers/azure/provider_test.go @@ -3,12 +3,16 @@ package azure import ( "io" "io/ioutil" - "log" + "math/rand" "os" + "strings" "testing" + "time" + "github.com/hashicorp/terraform/config" "github.com/hashicorp/terraform/helper/schema" "github.com/hashicorp/terraform/terraform" + "github.com/mitchellh/go-homedir" ) var testAccProviders map[string]terraform.ResourceProvider @@ -67,20 +71,33 @@ func TestAzure_validateSettingsFile(t *testing.T) { if err != nil { t.Fatalf("Error creating temporary file in TestAzure_validateSettingsFile: %s", err) } + defer os.Remove(f.Name()) fx, err := ioutil.TempFile("", "tf-test-xml") if err != nil { t.Fatalf("Error creating temporary file with XML in TestAzure_validateSettingsFile: %s", err) } + defer os.Remove(fx.Name()) + + home, err := homedir.Dir() + if err != nil { + t.Fatalf("Error fetching homedir: %s", err) + } + fh, err := ioutil.TempFile(home, "tf-test-home") + if err != nil { + t.Fatalf("Error creating homedir-based temporary file: %s", err) + } + defer os.Remove(fh.Name()) _, err = io.WriteString(fx, "") if err != nil { t.Fatalf("Error writing XML File: %s", err) } - - log.Printf("fx name: %s", fx.Name()) fx.Close() + r := strings.NewReplacer(home, "~") + homePath := r.Replace(fh.Name()) + cases := []struct { Input string // String of XML or a path to an XML file W int // expected count of warnings @@ -89,6 +106,7 @@ func TestAzure_validateSettingsFile(t *testing.T) { {"test", 1, 1}, {f.Name(), 1, 0}, {fx.Name(), 1, 0}, + {homePath, 1, 0}, {"", 0, 0}, } @@ -104,6 +122,53 @@ func TestAzure_validateSettingsFile(t *testing.T) { } } +func TestAzure_providerConfigure(t *testing.T) { + home, err := homedir.Dir() + if err != nil { + t.Fatalf("Error fetching homedir: %s", err) + } + fh, err := ioutil.TempFile(home, "tf-test-home") + if err != nil { + t.Fatalf("Error creating homedir-based temporary file: %s", err) + } + defer os.Remove(fh.Name()) + + _, err = io.WriteString(fh, testAzurePublishSettingsStr) + if err != nil { + 
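// readSettings above implements a contents-or-path contract: input that
// unmarshals as XML is used verbatim as the publish-settings payload;
// anything else is treated as a file path, expanded via homedir.Expand,
// and read from disk (emitting the deprecation warning in
// settingsPathWarnMsg). For illustration only, with hypothetical inputs:
//
//	s, ws, es := readSettings(`<PublishData>...</PublishData>`) // contents
//	s, ws, es = readSettings("~/azure.publishsettings")         // path
//
// providerConfigure can safely discard the warnings and errors because
// validateSettingsFile has already reported them during validation.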
t.Fatalf("err: %s", err) + } + fh.Close() + + r := strings.NewReplacer(home, "~") + homePath := r.Replace(fh.Name()) + + cases := []struct { + SettingsFile string // String of XML or a path to an XML file + NilMeta bool // whether meta is expected to be nil + }{ + {testAzurePublishSettingsStr, false}, + {homePath, false}, + } + + for _, tc := range cases { + rp := Provider() + raw := map[string]interface{}{ + "settings_file": tc.SettingsFile, + } + + rawConfig, err := config.NewRawConfig(raw) + if err != nil { + t.Fatalf("err: %s", err) + } + + err = rp.Configure(terraform.NewResourceConfig(rawConfig)) + meta := rp.(*schema.Provider).Meta() + if (meta == nil) != tc.NilMeta { + t.Fatalf("expected NilMeta: %t, got meta: %#v", tc.NilMeta, meta) + } + } +} + func TestAzure_isFile(t *testing.T) { f, err := ioutil.TempFile("", "tf-test-file") if err != nil { @@ -129,3 +194,23 @@ func TestAzure_isFile(t *testing.T) { } } } + +func genRandInt() int { + return rand.New(rand.NewSource(time.Now().UnixNano())).Int() % 100000 +} + +// testAzurePublishSettingsStr is a revoked publishsettings file +const testAzurePublishSettingsStr = ` + + + + + + +` diff --git a/builtin/providers/azure/resource_azure_data_disk_test.go b/builtin/providers/azure/resource_azure_data_disk_test.go index dfad26b5e..2c6660f66 100644 --- a/builtin/providers/azure/resource_azure_data_disk_test.go +++ b/builtin/providers/azure/resource_azure_data_disk_test.go @@ -13,6 +13,7 @@ import ( func TestAccAzureDataDisk_basic(t *testing.T) { var disk virtualmachinedisk.DataDiskResponse + name := fmt.Sprintf("terraform-test%d", genRandInt()) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -20,13 +21,13 @@ func TestAccAzureDataDisk_basic(t *testing.T) { CheckDestroy: testAccCheckAzureDataDiskDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccAzureDataDisk_basic, + Config: testAccAzureDataDisk_basic(name), Check: resource.ComposeTestCheckFunc( testAccCheckAzureDataDiskExists( "azure_data_disk.foo", &disk), testAccCheckAzureDataDiskAttributes(&disk), resource.TestCheckResourceAttr( - "azure_data_disk.foo", "label", "terraform-test-0"), + "azure_data_disk.foo", "label", fmt.Sprintf("%s-0", name)), resource.TestCheckResourceAttr( "azure_data_disk.foo", "size", "10"), ), @@ -37,6 +38,7 @@ func TestAccAzureDataDisk_basic(t *testing.T) { func TestAccAzureDataDisk_update(t *testing.T) { var disk virtualmachinedisk.DataDiskResponse + name := fmt.Sprintf("terraform-test%d", genRandInt()) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -44,12 +46,12 @@ func TestAccAzureDataDisk_update(t *testing.T) { CheckDestroy: testAccCheckAzureDataDiskDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccAzureDataDisk_advanced, + Config: testAccAzureDataDisk_advanced(name), Check: resource.ComposeTestCheckFunc( testAccCheckAzureDataDiskExists( "azure_data_disk.foo", &disk), resource.TestCheckResourceAttr( - "azure_data_disk.foo", "label", "terraform-test1-1"), + "azure_data_disk.foo", "label", fmt.Sprintf("%s-1", name)), resource.TestCheckResourceAttr( "azure_data_disk.foo", "lun", "1"), resource.TestCheckResourceAttr( @@ -57,17 +59,17 @@ func TestAccAzureDataDisk_update(t *testing.T) { resource.TestCheckResourceAttr( "azure_data_disk.foo", "caching", "ReadOnly"), resource.TestCheckResourceAttr( - "azure_data_disk.foo", "virtual_machine", "terraform-test1"), + "azure_data_disk.foo", "virtual_machine", name), ), }, resource.TestStep{ - Config: 
testAccAzureDataDisk_update, + Config: testAccAzureDataDisk_update(name), Check: resource.ComposeTestCheckFunc( testAccCheckAzureDataDiskExists( "azure_data_disk.foo", &disk), resource.TestCheckResourceAttr( - "azure_data_disk.foo", "label", "terraform-test1-1"), + "azure_data_disk.foo", "label", fmt.Sprintf("%s-1", name)), resource.TestCheckResourceAttr( "azure_data_disk.foo", "lun", "2"), resource.TestCheckResourceAttr( @@ -168,68 +170,74 @@ func testAccCheckAzureDataDiskDestroy(s *terraform.State) error { return nil } -var testAccAzureDataDisk_basic = fmt.Sprintf(` -resource "azure_instance" "foo" { - name = "terraform-test" - image = "Ubuntu Server 14.04 LTS" - size = "Basic_A1" - storage_service_name = "%s" - location = "West US" - username = "terraform" - password = "Pass!admin123" +func testAccAzureDataDisk_basic(name string) string { + return fmt.Sprintf(` + resource "azure_instance" "foo" { + name = "%s" + image = "Ubuntu Server 14.04 LTS" + size = "Basic_A1" + storage_service_name = "%s" + location = "West US" + username = "terraform" + password = "Pass!admin123" + } + + resource "azure_data_disk" "foo" { + lun = 0 + size = 10 + storage_service_name = "${azure_instance.foo.storage_service_name}" + virtual_machine = "${azure_instance.foo.id}" + }`, name, testAccStorageServiceName) } -resource "azure_data_disk" "foo" { - lun = 0 - size = 10 - storage_service_name = "${azure_instance.foo.storage_service_name}" - virtual_machine = "${azure_instance.foo.id}" -}`, testAccStorageServiceName) +func testAccAzureDataDisk_advanced(name string) string { + return fmt.Sprintf(` + resource "azure_instance" "foo" { + name = "%s" + image = "Ubuntu Server 14.04 LTS" + size = "Basic_A1" + storage_service_name = "%s" + location = "West US" + username = "terraform" + password = "Pass!admin123" + } -var testAccAzureDataDisk_advanced = fmt.Sprintf(` -resource "azure_instance" "foo" { - name = "terraform-test1" - image = "Ubuntu Server 14.04 LTS" - size = "Basic_A1" - storage_service_name = "%s" - location = "West US" - username = "terraform" - password = "Pass!admin123" + resource "azure_data_disk" "foo" { + lun = 1 + size = 10 + caching = "ReadOnly" + storage_service_name = "${azure_instance.foo.storage_service_name}" + virtual_machine = "${azure_instance.foo.id}" + }`, name, testAccStorageServiceName) } -resource "azure_data_disk" "foo" { - lun = 1 - size = 10 - caching = "ReadOnly" - storage_service_name = "${azure_instance.foo.storage_service_name}" - virtual_machine = "${azure_instance.foo.id}" -}`, testAccStorageServiceName) +func testAccAzureDataDisk_update(name string) string { + return fmt.Sprintf(` + resource "azure_instance" "foo" { + name = "%s" + image = "Ubuntu Server 14.04 LTS" + size = "Basic_A1" + storage_service_name = "%s" + location = "West US" + username = "terraform" + password = "Pass!admin123" + } -var testAccAzureDataDisk_update = fmt.Sprintf(` -resource "azure_instance" "foo" { - name = "terraform-test1" - image = "Ubuntu Server 14.04 LTS" - size = "Basic_A1" - storage_service_name = "%s" - location = "West US" - username = "terraform" - password = "Pass!admin123" + resource "azure_instance" "bar" { + name = "terraform-test2" + image = "Ubuntu Server 14.04 LTS" + size = "Basic_A1" + storage_service_name = "${azure_instance.foo.storage_service_name}" + location = "West US" + username = "terraform" + password = "Pass!admin123" + } + + resource "azure_data_disk" "foo" { + lun = 2 + size = 20 + caching = "ReadWrite" + storage_service_name = 
"${azure_instance.bar.storage_service_name}" + virtual_machine = "${azure_instance.bar.id}" + }`, name, testAccStorageServiceName) } - -resource "azure_instance" "bar" { - name = "terraform-test2" - image = "Ubuntu Server 14.04 LTS" - size = "Basic_A1" - storage_service_name = "${azure_instance.foo.storage_service_name}" - location = "West US" - username = "terraform" - password = "Pass!admin123" -} - -resource "azure_data_disk" "foo" { - lun = 2 - size = 20 - caching = "ReadWrite" - storage_service_name = "${azure_instance.bar.storage_service_name}" - virtual_machine = "${azure_instance.bar.id}" -}`, testAccStorageServiceName) diff --git a/builtin/providers/azure/resource_azure_instance.go b/builtin/providers/azure/resource_azure_instance.go index fb264f28e..c95285ec2 100644 --- a/builtin/providers/azure/resource_azure_instance.go +++ b/builtin/providers/azure/resource_azure_instance.go @@ -297,15 +297,15 @@ func resourceAzureInstanceCreate(d *schema.ResourceData, meta interface{}) (err if err != nil { return fmt.Errorf("Error configuring %s for Windows: %s", name, err) } - + if domain_name, ok := d.GetOk("domain_name"); ok { err = vmutils.ConfigureWindowsToJoinDomain( - &role, - d.Get("domain_username").(string), - d.Get("domain_password").(string), - domain_name.(string), + &role, + d.Get("domain_username").(string), + d.Get("domain_password").(string), + domain_name.(string), d.Get("domain_ou").(string), - ) + ) if err != nil { return fmt.Errorf("Error configuring %s for WindowsToJoinDomain: %s", name, err) } diff --git a/builtin/providers/azure/resource_azure_instance_test.go b/builtin/providers/azure/resource_azure_instance_test.go index 79e712154..7e63486c3 100644 --- a/builtin/providers/azure/resource_azure_instance_test.go +++ b/builtin/providers/azure/resource_azure_instance_test.go @@ -446,7 +446,7 @@ resource "azure_security_group_rule" "foo" { resource "azure_instance" "foo" { name = "terraform-test1" - image = "Windows Server 2012 R2 Datacenter, April 2015" + image = "Windows Server 2012 R2 Datacenter, September 2015" size = "Basic_A1" storage_service_name = "%s" location = "West US" @@ -520,7 +520,7 @@ resource "azure_security_group_rule" "bar" { resource "azure_instance" "foo" { name = "terraform-test1" - image = "Windows Server 2012 R2 Datacenter, April 2015" + image = "Windows Server 2012 R2 Datacenter, September 2015" size = "Basic_A2" storage_service_name = "%s" location = "West US" diff --git a/builtin/providers/azure/resource_azure_storage_blob.go b/builtin/providers/azure/resource_azure_storage_blob.go index 4e870e0ad..9a3dca1a9 100644 --- a/builtin/providers/azure/resource_azure_storage_blob.go +++ b/builtin/providers/azure/resource_azure_storage_blob.go @@ -13,7 +13,6 @@ func resourceAzureStorageBlob() *schema.Resource { return &schema.Resource{ Create: resourceAzureStorageBlobCreate, Read: resourceAzureStorageBlobRead, - Update: resourceAzureStorageBlobUpdate, Exists: resourceAzureStorageBlobExists, Delete: resourceAzureStorageBlobDelete, @@ -122,17 +121,6 @@ func resourceAzureStorageBlobRead(d *schema.ResourceData, meta interface{}) erro return nil } -// resourceAzureStorageBlobUpdate does all the necessary API calls to -// update a blob on Azure. 
-func resourceAzureStorageBlobUpdate(d *schema.ResourceData, meta interface{}) error { - // NOTE: although empty as most parameters have ForceNew set; this is - // still required in case of changes to the storage_service_key - - // run the ExistsFunc beforehand to ensure the resource's existence nonetheless: - _, err := resourceAzureStorageBlobExists(d, meta) - return err -} - // resourceAzureStorageBlobExists does all the necessary API calls to // check for the existence of the blob on Azure. func resourceAzureStorageBlobExists(d *schema.ResourceData, meta interface{}) (bool, error) { diff --git a/builtin/providers/cloudstack/provider_test.go b/builtin/providers/cloudstack/provider_test.go index 1207fd085..b1b8442a5 100644 --- a/builtin/providers/cloudstack/provider_test.go +++ b/builtin/providers/cloudstack/provider_test.go @@ -32,18 +32,18 @@ func testSetValueOnResourceData(t *testing.T) { d := schema.ResourceData{} d.Set("id", "name") - setValueOrUUID(&d, "id", "name", "54711781-274e-41b2-83c0-17194d0108f7") + setValueOrID(&d, "id", "name", "54711781-274e-41b2-83c0-17194d0108f7") if d.Get("id").(string) != "name" { t.Fatal("err: 'id' does not match 'name'") } } -func testSetUUIDOnResourceData(t *testing.T) { +func testSetIDOnResourceData(t *testing.T) { d := schema.ResourceData{} d.Set("id", "54711781-274e-41b2-83c0-17194d0108f7") - setValueOrUUID(&d, "id", "name", "54711781-274e-41b2-83c0-17194d0108f7") + setValueOrID(&d, "id", "name", "54711781-274e-41b2-83c0-17194d0108f7") if d.Get("id").(string) != "54711781-274e-41b2-83c0-17194d0108f7" { t.Fatal("err: 'id' doest not match '54711781-274e-41b2-83c0-17194d0108f7'") diff --git a/builtin/providers/cloudstack/resource_cloudstack_disk.go b/builtin/providers/cloudstack/resource_cloudstack_disk.go index 30e7950da..63a788f66 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_disk.go +++ b/builtin/providers/cloudstack/resource_cloudstack_disk.go @@ -80,12 +80,12 @@ func resourceCloudStackDiskCreate(d *schema.ResourceData, meta interface{}) erro // Create a new parameter struct p := cs.Volume.NewCreateVolumeParams(name) - // Retrieve the disk_offering UUID - diskofferingid, e := retrieveUUID(cs, "disk_offering", d.Get("disk_offering").(string)) + // Retrieve the disk_offering ID + diskofferingid, e := retrieveID(cs, "disk_offering", d.Get("disk_offering").(string)) if e != nil { return e.Error() } - // Set the disk_offering UUID + // Set the disk_offering ID p.SetDiskofferingid(diskofferingid) if d.Get("size").(int) != 0 { @@ -95,8 +95,8 @@ func resourceCloudStackDiskCreate(d *schema.ResourceData, meta interface{}) erro // If there is a project supplied, we retrieve and set the project id if project, ok := d.GetOk("project"); ok { - // Retrieve the project UUID - projectid, e := retrieveUUID(cs, "project", project.(string)) + // Retrieve the project ID + projectid, e := retrieveID(cs, "project", project.(string)) if e != nil { return e.Error() } @@ -104,8 +104,8 @@ func resourceCloudStackDiskCreate(d *schema.ResourceData, meta interface{}) erro p.SetProjectid(projectid) } - // Retrieve the zone UUID - zoneid, e := retrieveUUID(cs, "zone", d.Get("zone").(string)) + // Retrieve the zone ID + zoneid, e := retrieveID(cs, "zone", d.Get("zone").(string)) if e != nil { return e.Error() } @@ -118,7 +118,7 @@ func resourceCloudStackDiskCreate(d *schema.ResourceData, meta interface{}) erro return fmt.Errorf("Error creating the new disk %s: %s", name, err) } - // Set the volume UUID and partials + // Set the volume ID and partials 
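// The renamed setValueOrID helper (exercised in the tests above) keeps
// state aligned with however the attribute was written in config: if the
// current value already looks like an ID it stores the ID, otherwise the
// human-readable name, e.g.
//
//	setValueOrID(d, "zone", v.Zonename, v.Zoneid)
//
// so a config that referenced the zone by name never shows a spurious
// diff against the resolved ID.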
d.SetId(r.Id) d.SetPartial("name") d.SetPartial("device") @@ -160,9 +160,9 @@ func resourceCloudStackDiskRead(d *schema.ResourceData, meta interface{}) error d.Set("attach", v.Attached != "") // If attached this will contain a timestamp when attached d.Set("size", int(v.Size/(1024*1024*1024))) // Needed to get GB's again - setValueOrUUID(d, "disk_offering", v.Diskofferingname, v.Diskofferingid) - setValueOrUUID(d, "project", v.Project, v.Projectid) - setValueOrUUID(d, "zone", v.Zonename, v.Zoneid) + setValueOrID(d, "disk_offering", v.Diskofferingname, v.Diskofferingid) + setValueOrID(d, "project", v.Project, v.Projectid) + setValueOrID(d, "zone", v.Zonename, v.Zoneid) if v.Attached != "" { // Get the virtual machine details @@ -184,7 +184,7 @@ func resourceCloudStackDiskRead(d *schema.ResourceData, meta interface{}) error } d.Set("device", retrieveDeviceName(v.Deviceid, c.Name)) - setValueOrUUID(d, "virtual_machine", v.Vmname, v.Virtualmachineid) + setValueOrID(d, "virtual_machine", v.Vmname, v.Virtualmachineid) } return nil @@ -205,13 +205,13 @@ func resourceCloudStackDiskUpdate(d *schema.ResourceData, meta interface{}) erro // Create a new parameter struct p := cs.Volume.NewResizeVolumeParams(d.Id()) - // Retrieve the disk_offering UUID - diskofferingid, e := retrieveUUID(cs, "disk_offering", d.Get("disk_offering").(string)) + // Retrieve the disk_offering ID + diskofferingid, e := retrieveID(cs, "disk_offering", d.Get("disk_offering").(string)) if e != nil { return e.Error() } - // Set the disk_offering UUID + // Set the disk_offering ID p.SetDiskofferingid(diskofferingid) if d.Get("size").(int) != 0 { @@ -228,7 +228,7 @@ func resourceCloudStackDiskUpdate(d *schema.ResourceData, meta interface{}) erro return fmt.Errorf("Error changing disk offering/size for disk %s: %s", name, err) } - // Update the volume UUID and set partials + // Update the volume ID and set partials d.SetId(r.Id) d.SetPartial("disk_offering") d.SetPartial("size") @@ -278,7 +278,7 @@ func resourceCloudStackDiskDelete(d *schema.ResourceData, meta interface{}) erro // Delete the voluem if _, err := cs.Volume.DeleteVolume(p); err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", d.Id())) { @@ -299,8 +299,8 @@ func resourceCloudStackDiskAttach(d *schema.ResourceData, meta interface{}) erro return err } - // Retrieve the virtual_machine UUID - virtualmachineid, e := retrieveUUID(cs, "virtual_machine", d.Get("virtual_machine").(string)) + // Retrieve the virtual_machine ID + virtualmachineid, e := retrieveID(cs, "virtual_machine", d.Get("virtual_machine").(string)) if e != nil { return e.Error() } @@ -341,13 +341,13 @@ func resourceCloudStackDiskDetach(d *schema.ResourceData, meta interface{}) erro // Create a new parameter struct p := cs.Volume.NewDetachVolumeParams() - // Set the volume UUID + // Set the volume ID p.SetId(d.Id()) // Detach the currently attached volume if _, err := cs.Volume.DetachVolume(p); err != nil { - // Retrieve the virtual_machine UUID - virtualmachineid, e := retrieveUUID(cs, "virtual_machine", d.Get("virtual_machine").(string)) + // Retrieve the virtual_machine ID + virtualmachineid, e := retrieveID(cs, "virtual_machine", d.Get("virtual_machine").(string)) if e != nil { return e.Error() } diff --git 
a/builtin/providers/cloudstack/resource_cloudstack_egress_firewall.go b/builtin/providers/cloudstack/resource_cloudstack_egress_firewall.go index e61ac0173..55209eadf 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_egress_firewall.go +++ b/builtin/providers/cloudstack/resource_cloudstack_egress_firewall.go @@ -89,8 +89,8 @@ func resourceCloudStackEgressFirewallCreate(d *schema.ResourceData, meta interfa return err } - // Retrieve the network UUID - networkid, e := retrieveUUID(cs, "network", d.Get("network").(string)) + // Retrieve the network ID + networkid, e := retrieveID(cs, "network", d.Get("network").(string)) if e != nil { return e.Error() } @@ -222,7 +222,7 @@ func resourceCloudStackEgressFirewallRead(d *schema.ResourceData, meta interface // Get the rule r, count, err := cs.Firewall.GetEgressFirewallRuleByID(id.(string)) - // If the count == 0, there is no object found for this UUID + // If the count == 0, there is no object found for this ID if err != nil { if count == 0 { delete(uuids, "icmp") @@ -415,7 +415,7 @@ func resourceCloudStackEgressFirewallDeleteRule( // Delete the rule if _, err := cs.Firewall.DeleteEgressFirewallRule(p); err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", id.(string))) { diff --git a/builtin/providers/cloudstack/resource_cloudstack_egress_firewall_test.go b/builtin/providers/cloudstack/resource_cloudstack_egress_firewall_test.go index 05c3b985e..dbca8c32b 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_egress_firewall_test.go +++ b/builtin/providers/cloudstack/resource_cloudstack_egress_firewall_test.go @@ -123,13 +123,13 @@ func testAccCheckCloudStackEgressFirewallRulesExist(n string) resource.TestCheck return fmt.Errorf("No firewall ID is set") } - for k, uuid := range rs.Primary.Attributes { + for k, id := range rs.Primary.Attributes { if !strings.Contains(k, ".uuids.") || strings.HasSuffix(k, ".uuids.#") { continue } cs := testAccProvider.Meta().(*cloudstack.CloudStackClient) - _, count, err := cs.Firewall.GetEgressFirewallRuleByID(uuid) + _, count, err := cs.Firewall.GetEgressFirewallRuleByID(id) if err != nil { return err @@ -156,12 +156,12 @@ func testAccCheckCloudStackEgressFirewallDestroy(s *terraform.State) error { return fmt.Errorf("No instance ID is set") } - for k, uuid := range rs.Primary.Attributes { + for k, id := range rs.Primary.Attributes { if !strings.Contains(k, ".uuids.") || strings.HasSuffix(k, ".uuids.#") { continue } - _, _, err := cs.Firewall.GetEgressFirewallRuleByID(uuid) + _, _, err := cs.Firewall.GetEgressFirewallRuleByID(id) if err == nil { return fmt.Errorf("Egress rule %s still exists", rs.Primary.ID) } diff --git a/builtin/providers/cloudstack/resource_cloudstack_firewall.go b/builtin/providers/cloudstack/resource_cloudstack_firewall.go index 48a780545..1e7ff8e70 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_firewall.go +++ b/builtin/providers/cloudstack/resource_cloudstack_firewall.go @@ -89,8 +89,8 @@ func resourceCloudStackFirewallCreate(d *schema.ResourceData, meta interface{}) return err } - // Retrieve the ipaddress UUID - ipaddressid, e := retrieveUUID(cs, "ipaddress", d.Get("ipaddress").(string)) + // Retrieve the ipaddress ID + ipaddressid, e := retrieveID(cs, "ipaddress", d.Get("ipaddress").(string)) if e != nil { 
return e.Error() } @@ -222,7 +222,7 @@ func resourceCloudStackFirewallRead(d *schema.ResourceData, meta interface{}) er // Get the rule r, count, err := cs.Firewall.GetFirewallRuleByID(id.(string)) - // If the count == 0, there is no object found for this UUID + // If the count == 0, there is no object found for this ID if err != nil { if count == 0 { delete(uuids, "icmp") @@ -415,7 +415,7 @@ func resourceCloudStackFirewallDeleteRule( // Delete the rule if _, err := cs.Firewall.DeleteFirewallRule(p); err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", id.(string))) { diff --git a/builtin/providers/cloudstack/resource_cloudstack_firewall_test.go b/builtin/providers/cloudstack/resource_cloudstack_firewall_test.go index 2be2cebea..a86cdc3b2 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_firewall_test.go +++ b/builtin/providers/cloudstack/resource_cloudstack_firewall_test.go @@ -110,13 +110,13 @@ func testAccCheckCloudStackFirewallRulesExist(n string) resource.TestCheckFunc { return fmt.Errorf("No firewall ID is set") } - for k, uuid := range rs.Primary.Attributes { + for k, id := range rs.Primary.Attributes { if !strings.Contains(k, ".uuids.") || strings.HasSuffix(k, ".uuids.#") { continue } cs := testAccProvider.Meta().(*cloudstack.CloudStackClient) - _, count, err := cs.Firewall.GetFirewallRuleByID(uuid) + _, count, err := cs.Firewall.GetFirewallRuleByID(id) if err != nil { return err @@ -143,12 +143,12 @@ func testAccCheckCloudStackFirewallDestroy(s *terraform.State) error { return fmt.Errorf("No instance ID is set") } - for k, uuid := range rs.Primary.Attributes { + for k, id := range rs.Primary.Attributes { if !strings.Contains(k, ".uuids.") || strings.HasSuffix(k, ".uuids.#") { continue } - _, _, err := cs.Firewall.GetFirewallRuleByID(uuid) + _, _, err := cs.Firewall.GetFirewallRuleByID(id) if err == nil { return fmt.Errorf("Firewall rule %s still exists", rs.Primary.ID) } diff --git a/builtin/providers/cloudstack/resource_cloudstack_instance.go b/builtin/providers/cloudstack/resource_cloudstack_instance.go index ea5d85caf..504a2dbbf 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_instance.go +++ b/builtin/providers/cloudstack/resource_cloudstack_instance.go @@ -100,14 +100,14 @@ func resourceCloudStackInstance() *schema.Resource { func resourceCloudStackInstanceCreate(d *schema.ResourceData, meta interface{}) error { cs := meta.(*cloudstack.CloudStackClient) - // Retrieve the service_offering UUID - serviceofferingid, e := retrieveUUID(cs, "service_offering", d.Get("service_offering").(string)) + // Retrieve the service_offering ID + serviceofferingid, e := retrieveID(cs, "service_offering", d.Get("service_offering").(string)) if e != nil { return e.Error() } - // Retrieve the zone UUID - zoneid, e := retrieveUUID(cs, "zone", d.Get("zone").(string)) + // Retrieve the zone ID + zoneid, e := retrieveID(cs, "zone", d.Get("zone").(string)) if e != nil { return e.Error() } @@ -118,8 +118,8 @@ func resourceCloudStackInstanceCreate(d *schema.ResourceData, meta interface{}) return err } - // Retrieve the template UUID - templateid, e := retrieveTemplateUUID(cs, zone.Id, d.Get("template").(string)) + // Retrieve the template ID + templateid, e := retrieveTemplateID(cs, zone.Id, d.Get("template").(string)) if e != nil { 
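// retrieveID (formerly retrieveUUID) is the lookup half of the same
// pattern: it accepts either a human-readable name or an ID and always
// hands back the ID, e.g.
//
//	zoneid, e := retrieveID(cs, "zone", d.Get("zone").(string))
//
// Its second return value is a small wrapper whose Error() method yields
// the underlying error, which is why call sites return e.Error() as here.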
return e.Error() } @@ -139,8 +139,8 @@ func resourceCloudStackInstanceCreate(d *schema.ResourceData, meta interface{}) } if zone.Networktype == "Advanced" { - // Retrieve the network UUID - networkid, e := retrieveUUID(cs, "network", d.Get("network").(string)) + // Retrieve the network ID + networkid, e := retrieveID(cs, "network", d.Get("network").(string)) if e != nil { return e.Error() } @@ -155,8 +155,8 @@ func resourceCloudStackInstanceCreate(d *schema.ResourceData, meta interface{}) // If there is a project supplied, we retrieve and set the project id if project, ok := d.GetOk("project"); ok { - // Retrieve the project UUID - projectid, e := retrieveUUID(cs, "project", project.(string)) + // Retrieve the project ID + projectid, e := retrieveID(cs, "project", project.(string)) if e != nil { return e.Error() } @@ -229,11 +229,11 @@ func resourceCloudStackInstanceRead(d *schema.ResourceData, meta interface{}) er d.Set("ipaddress", vm.Nic[0].Ipaddress) //NB cloudstack sometimes sends back the wrong keypair name, so dont update it - setValueOrUUID(d, "network", vm.Nic[0].Networkname, vm.Nic[0].Networkid) - setValueOrUUID(d, "service_offering", vm.Serviceofferingname, vm.Serviceofferingid) - setValueOrUUID(d, "template", vm.Templatename, vm.Templateid) - setValueOrUUID(d, "project", vm.Project, vm.Projectid) - setValueOrUUID(d, "zone", vm.Zonename, vm.Zoneid) + setValueOrID(d, "network", vm.Nic[0].Networkname, vm.Nic[0].Networkid) + setValueOrID(d, "service_offering", vm.Serviceofferingname, vm.Serviceofferingid) + setValueOrID(d, "template", vm.Templatename, vm.Templateid) + setValueOrID(d, "project", vm.Project, vm.Projectid) + setValueOrID(d, "zone", vm.Zonename, vm.Zoneid) return nil } @@ -278,8 +278,8 @@ func resourceCloudStackInstanceUpdate(d *schema.ResourceData, meta interface{}) if d.HasChange("service_offering") { log.Printf("[DEBUG] Service offering changed for %s, starting update", name) - // Retrieve the service_offering UUID - serviceofferingid, e := retrieveUUID(cs, "service_offering", d.Get("service_offering").(string)) + // Retrieve the service_offering ID + serviceofferingid, e := retrieveID(cs, "service_offering", d.Get("service_offering").(string)) if e != nil { return e.Error() } @@ -335,7 +335,7 @@ func resourceCloudStackInstanceDelete(d *schema.ResourceData, meta interface{}) log.Printf("[INFO] Destroying instance: %s", d.Get("name").(string)) if _, err := cs.VirtualMachine.DestroyVirtualMachine(p); err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", d.Id())) { diff --git a/builtin/providers/cloudstack/resource_cloudstack_ipaddress.go b/builtin/providers/cloudstack/resource_cloudstack_ipaddress.go index 7d958d104..e2e590f6b 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_ipaddress.go +++ b/builtin/providers/cloudstack/resource_cloudstack_ipaddress.go @@ -53,8 +53,8 @@ func resourceCloudStackIPAddressCreate(d *schema.ResourceData, meta interface{}) p := cs.Address.NewAssociateIpAddressParams() if network, ok := d.GetOk("network"); ok { - // Retrieve the network UUID - networkid, e := retrieveUUID(cs, "network", network.(string)) + // Retrieve the network ID + networkid, e := retrieveID(cs, "network", network.(string)) if e != nil { return e.Error() } @@ -64,8 +64,8 @@ func resourceCloudStackIPAddressCreate(d 
*schema.ResourceData, meta interface{}) } if vpc, ok := d.GetOk("vpc"); ok { - // Retrieve the vpc UUID - vpcid, e := retrieveUUID(cs, "vpc", vpc.(string)) + // Retrieve the vpc ID + vpcid, e := retrieveID(cs, "vpc", vpc.(string)) if e != nil { return e.Error() } @@ -76,8 +76,8 @@ func resourceCloudStackIPAddressCreate(d *schema.ResourceData, meta interface{}) // If there is a project supplied, we retrieve and set the project id if project, ok := d.GetOk("project"); ok { - // Retrieve the project UUID - projectid, e := retrieveUUID(cs, "project", project.(string)) + // Retrieve the project ID + projectid, e := retrieveID(cs, "project", project.(string)) if e != nil { return e.Error() } @@ -122,7 +122,7 @@ func resourceCloudStackIPAddressRead(d *schema.ResourceData, meta interface{}) e return err } - setValueOrUUID(d, "network", n.Name, f.Associatednetworkid) + setValueOrID(d, "network", n.Name, f.Associatednetworkid) } if _, ok := d.GetOk("vpc"); ok { @@ -132,10 +132,10 @@ func resourceCloudStackIPAddressRead(d *schema.ResourceData, meta interface{}) e return err } - setValueOrUUID(d, "vpc", v.Name, f.Vpcid) + setValueOrID(d, "vpc", v.Name, f.Vpcid) } - setValueOrUUID(d, "project", f.Project, f.Projectid) + setValueOrID(d, "project", f.Project, f.Projectid) return nil } @@ -148,7 +148,7 @@ func resourceCloudStackIPAddressDelete(d *schema.ResourceData, meta interface{}) // Disassociate the IP address if _, err := cs.Address.DisassociateIpAddress(p); err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", d.Id())) { @@ -165,7 +165,7 @@ func verifyIPAddressParams(d *schema.ResourceData) error { _, network := d.GetOk("network") _, vpc := d.GetOk("vpc") - if (network && vpc) || (!network && !vpc) { + if network && vpc || !network && !vpc { return fmt.Errorf( "You must supply a value for either (so not both) the 'network' or 'vpc' parameter") } diff --git a/builtin/providers/cloudstack/resource_cloudstack_loadbalancer.go b/builtin/providers/cloudstack/resource_cloudstack_loadbalancer.go index 73bf1d493..6f8d5473f 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_loadbalancer.go +++ b/builtin/providers/cloudstack/resource_cloudstack_loadbalancer.go @@ -89,9 +89,9 @@ func resourceCloudStackLoadBalancerRuleCreate(d *schema.ResourceData, meta inter p.SetDescription(d.Get("name").(string)) } - // Retrieve the network and the UUID + // Retrieve the network and the ID if network, ok := d.GetOk("network"); ok { - networkid, e := retrieveUUID(cs, "network", network.(string)) + networkid, e := retrieveID(cs, "network", network.(string)) if e != nil { return e.Error() } @@ -100,8 +100,8 @@ func resourceCloudStackLoadBalancerRuleCreate(d *schema.ResourceData, meta inter p.SetNetworkid(networkid) } - // Retrieve the ipaddress UUID - ipaddressid, e := retrieveUUID(cs, "ipaddress", d.Get("ipaddress").(string)) + // Retrieve the ipaddress ID + ipaddressid, e := retrieveID(cs, "ipaddress", d.Get("ipaddress").(string)) if e != nil { return e.Error() } @@ -113,7 +113,7 @@ func resourceCloudStackLoadBalancerRuleCreate(d *schema.ResourceData, meta inter return err } - // Set the load balancer rule UUID and set partials + // Set the load balancer rule ID and set partials d.SetId(r.Id) d.SetPartial("name") d.SetPartial("description") @@ -163,7 +163,7 @@ func 
resourceCloudStackLoadBalancerRuleRead(d *schema.ResourceData, meta interfa d.Set("public_port", lb.Publicport) d.Set("private_port", lb.Privateport) - setValueOrUUID(d, "ipaddress", lb.Publicip, lb.Publicipid) + setValueOrID(d, "ipaddress", lb.Publicip, lb.Publicipid) // Only set network if user specified it to avoid spurious diffs if _, ok := d.GetOk("network"); ok { @@ -171,7 +171,7 @@ func resourceCloudStackLoadBalancerRuleRead(d *schema.ResourceData, meta interfa if err != nil { return err } - setValueOrUUID(d, "network", network.Name, lb.Networkid) + setValueOrID(d, "network", network.Name, lb.Networkid) } return nil @@ -229,7 +229,7 @@ func resourceCloudStackLoadBalancerRuleDelete(d *schema.ResourceData, meta inter log.Printf("[INFO] Deleting load balancer rule: %s", d.Get("name").(string)) if _, err := cs.LoadBalancer.DeleteLoadBalancerRule(p); err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if !strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", d.Id())) { diff --git a/builtin/providers/cloudstack/resource_cloudstack_loadbalancer_test.go b/builtin/providers/cloudstack/resource_cloudstack_loadbalancer_test.go index 59e119b16..a316d5988 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_loadbalancer_test.go +++ b/builtin/providers/cloudstack/resource_cloudstack_loadbalancer_test.go @@ -223,12 +223,12 @@ func testAccCheckCloudStackLoadBalancerRuleDestroy(s *terraform.State) error { return fmt.Errorf("No Loadbalancer rule ID is set") } - for k, uuid := range rs.Primary.Attributes { + for k, id := range rs.Primary.Attributes { if !strings.Contains(k, "uuid") { continue } - _, _, err := cs.LoadBalancer.GetLoadBalancerRuleByID(uuid) + _, _, err := cs.LoadBalancer.GetLoadBalancerRuleByID(id) if err == nil { return fmt.Errorf("Loadbalancer rule %s still exists", rs.Primary.ID) } diff --git a/builtin/providers/cloudstack/resource_cloudstack_network.go b/builtin/providers/cloudstack/resource_cloudstack_network.go index 9be63b927..a76beae32 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_network.go +++ b/builtin/providers/cloudstack/resource_cloudstack_network.go @@ -72,14 +72,14 @@ func resourceCloudStackNetworkCreate(d *schema.ResourceData, meta interface{}) e name := d.Get("name").(string) - // Retrieve the network_offering UUID - networkofferingid, e := retrieveUUID(cs, "network_offering", d.Get("network_offering").(string)) + // Retrieve the network_offering ID + networkofferingid, e := retrieveID(cs, "network_offering", d.Get("network_offering").(string)) if e != nil { return e.Error() } - // Retrieve the zone UUID - zoneid, e := retrieveUUID(cs, "zone", d.Get("zone").(string)) + // Retrieve the zone ID + zoneid, e := retrieveID(cs, "zone", d.Get("zone").(string)) if e != nil { return e.Error() } @@ -108,27 +108,27 @@ func resourceCloudStackNetworkCreate(d *schema.ResourceData, meta interface{}) e // Check is this network needs to be created in a VPC vpc := d.Get("vpc").(string) if vpc != "" { - // Retrieve the vpc UUID - vpcid, e := retrieveUUID(cs, "vpc", vpc) + // Retrieve the vpc ID + vpcid, e := retrieveID(cs, "vpc", vpc) if e != nil { return e.Error() } - // Set the vpc UUID + // Set the vpc ID p.SetVpcid(vpcid) // Since we're in a VPC, check if we want to assiciate an ACL list aclid := d.Get("aclid").(string) if aclid != "" { - // Set the acl UUID + // Set the 
acl ID p.SetAclid(aclid) } } // If there is a project supplied, we retrieve and set the project id if project, ok := d.GetOk("project"); ok { - // Retrieve the project UUID - projectid, e := retrieveUUID(cs, "project", project.(string)) + // Retrieve the project ID + projectid, e := retrieveID(cs, "project", project.(string)) if e != nil { return e.Error() } @@ -167,9 +167,9 @@ func resourceCloudStackNetworkRead(d *schema.ResourceData, meta interface{}) err d.Set("display_text", n.Displaytext) d.Set("cidr", n.Cidr) - setValueOrUUID(d, "network_offering", n.Networkofferingname, n.Networkofferingid) - setValueOrUUID(d, "project", n.Project, n.Projectid) - setValueOrUUID(d, "zone", n.Zonename, n.Zoneid) + setValueOrID(d, "network_offering", n.Networkofferingname, n.Networkofferingid) + setValueOrID(d, "project", n.Project, n.Projectid) + setValueOrID(d, "zone", n.Zonename, n.Zoneid) return nil } @@ -200,8 +200,8 @@ func resourceCloudStackNetworkUpdate(d *schema.ResourceData, meta interface{}) e // Check if the network offering is changed if d.HasChange("network_offering") { - // Retrieve the network_offering UUID - networkofferingid, e := retrieveUUID(cs, "network_offering", d.Get("network_offering").(string)) + // Retrieve the network_offering ID + networkofferingid, e := retrieveID(cs, "network_offering", d.Get("network_offering").(string)) if e != nil { return e.Error() } @@ -228,7 +228,7 @@ func resourceCloudStackNetworkDelete(d *schema.ResourceData, meta interface{}) e // Delete the network _, err := cs.Network.DeleteNetwork(p) if err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", d.Id())) { diff --git a/builtin/providers/cloudstack/resource_cloudstack_network_acl.go b/builtin/providers/cloudstack/resource_cloudstack_network_acl.go index 7f073bbf8..2504b762b 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_network_acl.go +++ b/builtin/providers/cloudstack/resource_cloudstack_network_acl.go @@ -43,8 +43,8 @@ func resourceCloudStackNetworkACLCreate(d *schema.ResourceData, meta interface{} name := d.Get("name").(string) - // Retrieve the vpc UUID - vpcid, e := retrieveUUID(cs, "vpc", d.Get("vpc").(string)) + // Retrieve the vpc ID + vpcid, e := retrieveID(cs, "vpc", d.Get("vpc").(string)) if e != nil { return e.Error() } @@ -95,7 +95,7 @@ func resourceCloudStackNetworkACLRead(d *schema.ResourceData, meta interface{}) return err } - setValueOrUUID(d, "vpc", v.Name, v.Id) + setValueOrID(d, "vpc", v.Name, v.Id) return nil } @@ -109,7 +109,7 @@ func resourceCloudStackNetworkACLDelete(d *schema.ResourceData, meta interface{} // Delete the network ACL list _, err := cs.NetworkACL.DeleteNetworkACLList(p) if err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", d.Id())) { diff --git a/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule.go b/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule.go index fceeb7d45..ba2650484 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule.go +++ 
b/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule.go @@ -247,7 +247,7 @@ func resourceCloudStackNetworkACLRuleRead(d *schema.ResourceData, meta interface // Get the rule r, count, err := cs.NetworkACL.GetNetworkACLByID(id.(string)) - // If the count == 0, there is no object found for this UUID + // If the count == 0, there is no object found for this ID if err != nil { if count == 0 { delete(uuids, "icmp") @@ -275,7 +275,7 @@ func resourceCloudStackNetworkACLRuleRead(d *schema.ResourceData, meta interface // Get the rule r, count, err := cs.NetworkACL.GetNetworkACLByID(id.(string)) - // If the count == 0, there is no object found for this UUID + // If the count == 0, there is no object found for this ID if err != nil { if count == 0 { delete(uuids, "all") @@ -469,7 +469,7 @@ func resourceCloudStackNetworkACLRuleDeleteRule( // Delete the rule if _, err := cs.NetworkACL.DeleteNetworkACL(p); err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", id.(string))) { diff --git a/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule_test.go b/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule_test.go index 5f450f931..6f2370f5b 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule_test.go +++ b/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule_test.go @@ -122,13 +122,13 @@ func testAccCheckCloudStackNetworkACLRulesExist(n string) resource.TestCheckFunc return fmt.Errorf("No network ACL rule ID is set") } - for k, uuid := range rs.Primary.Attributes { + for k, id := range rs.Primary.Attributes { if !strings.Contains(k, ".uuids.") || strings.HasSuffix(k, ".uuids.#") { continue } cs := testAccProvider.Meta().(*cloudstack.CloudStackClient) - _, count, err := cs.NetworkACL.GetNetworkACLByID(uuid) + _, count, err := cs.NetworkACL.GetNetworkACLByID(id) if err != nil { return err @@ -155,12 +155,12 @@ func testAccCheckCloudStackNetworkACLRuleDestroy(s *terraform.State) error { return fmt.Errorf("No network ACL rule ID is set") } - for k, uuid := range rs.Primary.Attributes { + for k, id := range rs.Primary.Attributes { if !strings.Contains(k, ".uuids.") || strings.HasSuffix(k, ".uuids.#") { continue } - _, _, err := cs.NetworkACL.GetNetworkACLByID(uuid) + _, _, err := cs.NetworkACL.GetNetworkACLByID(id) if err == nil { return fmt.Errorf("Network ACL rule %s still exists", rs.Primary.ID) } diff --git a/builtin/providers/cloudstack/resource_cloudstack_nic.go b/builtin/providers/cloudstack/resource_cloudstack_nic.go index 2eb89c80b..e118a5fe9 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_nic.go +++ b/builtin/providers/cloudstack/resource_cloudstack_nic.go @@ -41,14 +41,14 @@ func resourceCloudStackNIC() *schema.Resource { func resourceCloudStackNICCreate(d *schema.ResourceData, meta interface{}) error { cs := meta.(*cloudstack.CloudStackClient) - // Retrieve the network UUID - networkid, e := retrieveUUID(cs, "network", d.Get("network").(string)) + // Retrieve the network ID + networkid, e := retrieveID(cs, "network", d.Get("network").(string)) if e != nil { return e.Error() } - // Retrieve the virtual_machine UUID - virtualmachineid, e := retrieveUUID(cs, "virtual_machine", d.Get("virtual_machine").(string)) + // Retrieve the virtual_machine ID + virtualmachineid, e 
:= retrieveID(cs, "virtual_machine", d.Get("virtual_machine").(string)) if e != nil { return e.Error() } @@ -103,8 +103,8 @@ func resourceCloudStackNICRead(d *schema.ResourceData, meta interface{}) error { for _, n := range vm.Nic { if n.Id == d.Id() { d.Set("ipaddress", n.Ipaddress) - setValueOrUUID(d, "network", n.Networkname, n.Networkid) - setValueOrUUID(d, "virtual_machine", vm.Name, vm.Id) + setValueOrID(d, "network", n.Networkname, n.Networkid) + setValueOrID(d, "virtual_machine", vm.Name, vm.Id) found = true break } @@ -121,8 +121,8 @@ func resourceCloudStackNICRead(d *schema.ResourceData, meta interface{}) error { func resourceCloudStackNICDelete(d *schema.ResourceData, meta interface{}) error { cs := meta.(*cloudstack.CloudStackClient) - // Retrieve the virtual_machine UUID - virtualmachineid, e := retrieveUUID(cs, "virtual_machine", d.Get("virtual_machine").(string)) + // Retrieve the virtual_machine ID + virtualmachineid, e := retrieveID(cs, "virtual_machine", d.Get("virtual_machine").(string)) if e != nil { return e.Error() } @@ -133,7 +133,7 @@ func resourceCloudStackNICDelete(d *schema.ResourceData, meta interface{}) error // Remove the NIC _, err := cs.VirtualMachine.RemoveNicFromVirtualMachine(p) if err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", d.Id())) { diff --git a/builtin/providers/cloudstack/resource_cloudstack_port_forward.go b/builtin/providers/cloudstack/resource_cloudstack_port_forward.go index 3781fc1ae..0bec41af5 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_port_forward.go +++ b/builtin/providers/cloudstack/resource_cloudstack_port_forward.go @@ -72,8 +72,8 @@ func resourceCloudStackPortForward() *schema.Resource { func resourceCloudStackPortForwardCreate(d *schema.ResourceData, meta interface{}) error { cs := meta.(*cloudstack.CloudStackClient) - // Retrieve the ipaddress UUID - ipaddressid, e := retrieveUUID(cs, "ipaddress", d.Get("ipaddress").(string)) + // Retrieve the ipaddress ID + ipaddressid, e := retrieveID(cs, "ipaddress", d.Get("ipaddress").(string)) if e != nil { return e.Error() } @@ -115,8 +115,8 @@ func resourceCloudStackPortForwardCreateForward( return err } - // Retrieve the virtual_machine UUID - virtualmachineid, e := retrieveUUID(cs, "virtual_machine", forward["virtual_machine"].(string)) + // Retrieve the virtual_machine ID + virtualmachineid, e := retrieveID(cs, "virtual_machine", forward["virtual_machine"].(string)) if e != nil { return e.Error() } @@ -167,7 +167,7 @@ func resourceCloudStackPortForwardRead(d *schema.ResourceData, meta interface{}) // Get the forward r, count, err := cs.Firewall.GetPortForwardingRuleByID(id.(string)) - // If the count == 0, there is no object found for this UUID + // If the count == 0, there is no object found for this ID if err != nil { if count == 0 { forward["uuid"] = "" @@ -192,7 +192,7 @@ func resourceCloudStackPortForwardRead(d *schema.ResourceData, meta interface{}) forward["private_port"] = privPort forward["public_port"] = pubPort - if isUUID(forward["virtual_machine"].(string)) { + if isID(forward["virtual_machine"].(string)) { forward["virtual_machine"] = r.Virtualmachineid } else { forward["virtual_machine"] = r.Virtualmachinename @@ -317,7 +317,7 @@ func resourceCloudStackPortForwardDeleteForward( // Delete the forward if _, 
err := cs.Firewall.DeletePortForwardingRule(p); err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if !strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", forward["uuid"].(string))) { @@ -325,6 +325,7 @@ func resourceCloudStackPortForwardDeleteForward( } } + // Empty the UUID of this rule forward["uuid"] = "" return nil diff --git a/builtin/providers/cloudstack/resource_cloudstack_port_forward_test.go b/builtin/providers/cloudstack/resource_cloudstack_port_forward_test.go index 39ebfe8f6..b0851753f 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_port_forward_test.go +++ b/builtin/providers/cloudstack/resource_cloudstack_port_forward_test.go @@ -102,13 +102,13 @@ func testAccCheckCloudStackPortForwardsExist(n string) resource.TestCheckFunc { return fmt.Errorf("No port forward ID is set") } - for k, uuid := range rs.Primary.Attributes { + for k, id := range rs.Primary.Attributes { if !strings.Contains(k, "uuid") { continue } cs := testAccProvider.Meta().(*cloudstack.CloudStackClient) - _, count, err := cs.Firewall.GetPortForwardingRuleByID(uuid) + _, count, err := cs.Firewall.GetPortForwardingRuleByID(id) if err != nil { return err @@ -135,12 +135,12 @@ func testAccCheckCloudStackPortForwardDestroy(s *terraform.State) error { return fmt.Errorf("No port forward ID is set") } - for k, uuid := range rs.Primary.Attributes { + for k, id := range rs.Primary.Attributes { if !strings.Contains(k, "uuid") { continue } - _, _, err := cs.Firewall.GetPortForwardingRuleByID(uuid) + _, _, err := cs.Firewall.GetPortForwardingRuleByID(id) if err == nil { return fmt.Errorf("Port forward %s still exists", rs.Primary.ID) } diff --git a/builtin/providers/cloudstack/resource_cloudstack_secondary_ipaddress.go b/builtin/providers/cloudstack/resource_cloudstack_secondary_ipaddress.go index 1c491be44..697e55eb4 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_secondary_ipaddress.go +++ b/builtin/providers/cloudstack/resource_cloudstack_secondary_ipaddress.go @@ -44,8 +44,8 @@ func resourceCloudStackSecondaryIPAddressCreate(d *schema.ResourceData, meta int nicid := d.Get("nicid").(string) if nicid == "" { - // Retrieve the virtual_machine UUID - virtualmachineid, e := retrieveUUID(cs, "virtual_machine", d.Get("virtual_machine").(string)) + // Retrieve the virtual_machine ID + virtualmachineid, e := retrieveID(cs, "virtual_machine", d.Get("virtual_machine").(string)) if e != nil { return e.Error() } @@ -84,8 +84,8 @@ func resourceCloudStackSecondaryIPAddressCreate(d *schema.ResourceData, meta int func resourceCloudStackSecondaryIPAddressRead(d *schema.ResourceData, meta interface{}) error { cs := meta.(*cloudstack.CloudStackClient) - // Retrieve the virtual_machine UUID - virtualmachineid, e := retrieveUUID(cs, "virtual_machine", d.Get("virtual_machine").(string)) + // Retrieve the virtual_machine ID + virtualmachineid, e := retrieveID(cs, "virtual_machine", d.Get("virtual_machine").(string)) if e != nil { return e.Error() } @@ -146,7 +146,7 @@ func resourceCloudStackSecondaryIPAddressDelete(d *schema.ResourceData, meta int log.Printf("[INFO] Removing secondary IP address: %s", d.Get("ipaddress").(string)) if _, err := cs.Nic.RemoveIpFromNic(p); err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( 
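The error check recurring in each of these Delete functions is what makes deletes idempotent: CloudStack reports a missing entity through message text rather than a typed error, so the provider matches the string and treats an already-deleted resource as success. A sketch of the same test factored into a helper; isEntityGone is hypothetical and not part of the provider:

package main

import (
	"fmt"
	"strings"
)

// isEntityGone reports whether a CloudStack delete error only means the
// entity was already removed out-of-band.
func isEntityGone(err error, id string) bool {
	return err != nil && strings.Contains(err.Error(), fmt.Sprintf(
		"Invalid parameter id value=%s due to incorrect long value format, "+
			"or entity does not exist", id))
}

func main() {
	id := "f2a15e12-2b55-4d9c-a3a4-3d3e1c2a9a01"
	err := fmt.Errorf(
		"Invalid parameter id value=%s due to incorrect long value format, "+
			"or entity does not exist", id)

	// A Delete implementation would return nil here instead of the error.
	fmt.Println(isEntityGone(err, id)) // true
}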
if strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", d.Id())) { diff --git a/builtin/providers/cloudstack/resource_cloudstack_secondary_ipaddress_test.go b/builtin/providers/cloudstack/resource_cloudstack_secondary_ipaddress_test.go index e0c353e20..beedcd2cb 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_secondary_ipaddress_test.go +++ b/builtin/providers/cloudstack/resource_cloudstack_secondary_ipaddress_test.go @@ -64,8 +64,8 @@ func testAccCheckCloudStackSecondaryIPAddressExists( cs := testAccProvider.Meta().(*cloudstack.CloudStackClient) - // Retrieve the virtual_machine UUID - virtualmachineid, e := retrieveUUID( + // Retrieve the virtual_machine ID + virtualmachineid, e := retrieveID( cs, "virtual_machine", rs.Primary.Attributes["virtual_machine"]) if e != nil { return e.Error() @@ -136,8 +136,8 @@ func testAccCheckCloudStackSecondaryIPAddressDestroy(s *terraform.State) error { return fmt.Errorf("No IP address ID is set") } - // Retrieve the virtual_machine UUID - virtualmachineid, e := retrieveUUID( + // Retrieve the virtual_machine ID + virtualmachineid, e := retrieveID( cs, "virtual_machine", rs.Primary.Attributes["virtual_machine"]) if e != nil { return e.Error() diff --git a/builtin/providers/cloudstack/resource_cloudstack_ssh_keypair.go b/builtin/providers/cloudstack/resource_cloudstack_ssh_keypair.go index 9fb859a22..8f6f0f9c5 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_ssh_keypair.go +++ b/builtin/providers/cloudstack/resource_cloudstack_ssh_keypair.go @@ -116,7 +116,7 @@ func resourceCloudStackSSHKeyPairDelete(d *schema.ResourceData, meta interface{} // Remove the SSH Keypair _, err := cs.SSH.DeleteSSHKeyPair(p) if err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "A key pair with name '%s' does not exist for account", d.Id())) { return nil diff --git a/builtin/providers/cloudstack/resource_cloudstack_template.go b/builtin/providers/cloudstack/resource_cloudstack_template.go index 15c6ebec4..04aaca22e 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_template.go +++ b/builtin/providers/cloudstack/resource_cloudstack_template.go @@ -51,6 +51,12 @@ func resourceCloudStackTemplate() *schema.Resource { ForceNew: true, }, + "project": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "zone": &schema.Schema{ Type: schema.TypeString, Required: true, @@ -118,14 +124,14 @@ func resourceCloudStackTemplateCreate(d *schema.ResourceData, meta interface{}) displaytext = name } - // Retrieve the os_type UUID - ostypeid, e := retrieveUUID(cs, "os_type", d.Get("os_type").(string)) + // Retrieve the os_type ID + ostypeid, e := retrieveID(cs, "os_type", d.Get("os_type").(string)) if e != nil { return e.Error() } - // Retrieve the zone UUID - zoneid, e := retrieveUUID(cs, "zone", d.Get("zone").(string)) + // Retrieve the zone ID + zoneid, e := retrieveID(cs, "zone", d.Get("zone").(string)) if e != nil { return e.Error() } @@ -161,6 +167,17 @@ func resourceCloudStackTemplateCreate(d *schema.ResourceData, meta interface{}) p.SetPasswordenabled(v.(bool)) } + // If there is a project supplied, we retrieve and set the project id + if project, ok := d.GetOk("project"); ok { + // Retrieve the project ID + projectid, e := retrieveID(cs, "project", project.(string)) + if e != nil { + 
return e.Error() + } + // Set the default project ID + p.SetProjectid(projectid) + } + // Create the new template r, err := cs.Template.RegisterTemplate(p) if err != nil { @@ -219,8 +236,9 @@ func resourceCloudStackTemplateRead(d *schema.ResourceData, meta interface{}) er d.Set("password_enabled", t.Passwordenabled) d.Set("is_ready", t.Isready) - setValueOrUUID(d, "os_type", t.Ostypename, t.Ostypeid) - setValueOrUUID(d, "zone", t.Zonename, t.Zoneid) + setValueOrID(d, "os_type", t.Ostypename, t.Ostypeid) + setValueOrID(d, "project", t.Project, t.Projectid) + setValueOrID(d, "zone", t.Zonename, t.Zoneid) return nil } @@ -249,7 +267,7 @@ func resourceCloudStackTemplateUpdate(d *schema.ResourceData, meta interface{}) } if d.HasChange("os_type") { - ostypeid, e := retrieveUUID(cs, "os_type", d.Get("os_type").(string)) + ostypeid, e := retrieveID(cs, "os_type", d.Get("os_type").(string)) if e != nil { return e.Error() } @@ -278,7 +296,7 @@ func resourceCloudStackTemplateDelete(d *schema.ResourceData, meta interface{}) log.Printf("[INFO] Deleting template: %s", d.Get("name").(string)) _, err := cs.Template.DeleteTemplate(p) if err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", d.Id())) { diff --git a/builtin/providers/cloudstack/resource_cloudstack_vpc.go b/builtin/providers/cloudstack/resource_cloudstack_vpc.go index 8fe132d94..d99a4042a 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_vpc.go +++ b/builtin/providers/cloudstack/resource_cloudstack_vpc.go @@ -40,12 +40,24 @@ func resourceCloudStackVPC() *schema.Resource { ForceNew: true, }, + "network_domain": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + "project": &schema.Schema{ Type: schema.TypeString, Optional: true, ForceNew: true, }, + "source_nat_ip": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + "zone": &schema.Schema{ Type: schema.TypeString, Required: true, @@ -60,14 +72,14 @@ func resourceCloudStackVPCCreate(d *schema.ResourceData, meta interface{}) error name := d.Get("name").(string) - // Retrieve the vpc_offering UUID - vpcofferingid, e := retrieveUUID(cs, "vpc_offering", d.Get("vpc_offering").(string)) + // Retrieve the vpc_offering ID + vpcofferingid, e := retrieveID(cs, "vpc_offering", d.Get("vpc_offering").(string)) if e != nil { return e.Error() } - // Retrieve the zone UUID - zoneid, e := retrieveUUID(cs, "zone", d.Get("zone").(string)) + // Retrieve the zone ID + zoneid, e := retrieveID(cs, "zone", d.Get("zone").(string)) if e != nil { return e.Error() } @@ -79,12 +91,24 @@ func resourceCloudStackVPCCreate(d *schema.ResourceData, meta interface{}) error } // Create a new parameter struct - p := cs.VPC.NewCreateVPCParams(d.Get("cidr").(string), displaytext.(string), name, vpcofferingid, zoneid) + p := cs.VPC.NewCreateVPCParams( + d.Get("cidr").(string), + displaytext.(string), + name, + vpcofferingid, + zoneid, + ) + + // If there is a network domain supplied, make sure to add it to the request + if networkDomain, ok := d.GetOk("network_domain"); ok { + // Set the network domain + p.SetNetworkdomain(networkDomain.(string)) + } // If there is a project supplied, we retrieve and set the project id if project, ok := d.GetOk("project"); ok { - // Retrieve the project UUID - projectid, e := 
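The new project support in the template resource above follows the provider idiom for optional attributes: d.GetOk reports whether a value was configured, and only then is the corresponding request parameter set. A reduced illustration with stand-ins for *schema.ResourceData and the CloudStack parameter struct (both stand-ins are invented for this sketch):

package main

import "fmt"

// resourceData is a stand-in for *schema.ResourceData; GetOk mimics its
// "zero value means unset" behaviour for strings.
type resourceData map[string]interface{}

func (d resourceData) GetOk(key string) (interface{}, bool) {
	v, ok := d[key]
	return v, ok && v != ""
}

// params is a stand-in for a CloudStack request struct such as the one
// returned by cs.Template.NewRegisterTemplateParams.
type params struct{ projectid string }

func (p *params) SetProjectid(id string) { p.projectid = id }

func main() {
	d := resourceData{"project": "terraform"}
	p := &params{}

	// Only attach the project when one was configured; the real code first
	// resolves a project name to its ID via retrieveID.
	if project, ok := d.GetOk("project"); ok {
		p.SetProjectid(project.(string))
	}
	fmt.Printf("%+v\n", *p)
}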
retrieveUUID(cs, "project", project.(string)) + // Retrieve the project ID + projectid, e := retrieveID(cs, "project", project.(string)) if e != nil { return e.Error() } @@ -122,6 +146,7 @@ func resourceCloudStackVPCRead(d *schema.ResourceData, meta interface{}) error { d.Set("name", v.Name) d.Set("display_text", v.Displaytext) d.Set("cidr", v.Cidr) + d.Set("network_domain", v.Networkdomain) // Get the VPC offering details o, _, err := cs.VPC.GetVPCOfferingByID(v.Vpcofferingid) @@ -129,9 +154,30 @@ func resourceCloudStackVPCRead(d *schema.ResourceData, meta interface{}) error { return err } - setValueOrUUID(d, "vpc_offering", o.Name, v.Vpcofferingid) - setValueOrUUID(d, "project", v.Project, v.Projectid) - setValueOrUUID(d, "zone", v.Zonename, v.Zoneid) + setValueOrID(d, "vpc_offering", o.Name, v.Vpcofferingid) + setValueOrID(d, "project", v.Project, v.Projectid) + setValueOrID(d, "zone", v.Zonename, v.Zoneid) + + // Create a new parameter struct + p := cs.Address.NewListPublicIpAddressesParams() + p.SetVpcid(d.Id()) + p.SetIssourcenat(true) + + if _, ok := d.GetOk("project"); ok { + p.SetProjectid(v.Projectid) + } + + // Get the source NAT IP assigned to the VPC + l, err := cs.Address.ListPublicIpAddresses(p) + if err != nil { + return err + } + + if l.Count != 1 { + return fmt.Errorf("Unexpected number (%d) of source NAT IPs returned", l.Count) + } + + d.Set("source_nat_ip", l.PublicIpAddresses[0].Ipaddress) return nil } @@ -172,7 +218,7 @@ func resourceCloudStackVPCDelete(d *schema.ResourceData, meta interface{}) error // Delete the VPC _, err := cs.VPC.DeleteVPC(p) if err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", d.Id())) { diff --git a/builtin/providers/cloudstack/resource_cloudstack_vpc_test.go b/builtin/providers/cloudstack/resource_cloudstack_vpc_test.go index 011358a95..7c1d1492e 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_vpc_test.go +++ b/builtin/providers/cloudstack/resource_cloudstack_vpc_test.go @@ -76,6 +76,10 @@ func testAccCheckCloudStackVPCAttributes( return fmt.Errorf("Bad VPC CIDR: %s", vpc.Cidr) } + if vpc.Networkdomain != "terraform-domain" { + return fmt.Errorf("Bad network domain: %s", vpc.Networkdomain) + } + return nil } } @@ -107,6 +111,7 @@ resource "cloudstack_vpc" "foo" { display_text = "terraform-vpc-text" cidr = "%s" vpc_offering = "%s" + network_domain = "terraform-domain" zone = "%s" }`, CLOUDSTACK_VPC_CIDR_1, diff --git a/builtin/providers/cloudstack/resource_cloudstack_vpn_connection.go b/builtin/providers/cloudstack/resource_cloudstack_vpn_connection.go index b036890a5..322f07a2c 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_vpn_connection.go +++ b/builtin/providers/cloudstack/resource_cloudstack_vpn_connection.go @@ -81,7 +81,7 @@ func resourceCloudStackVPNConnectionDelete(d *schema.ResourceData, meta interfac // Delete the VPN Connection _, err := cs.VPN.DeleteVpnConnection(p) if err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", d.Id())) { diff --git 
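The source_nat_ip attribute added to the VPC resource above is Computed-only: users never set it, Read fills it from ListPublicIpAddresses filtered on Issourcenat, and the l.Count != 1 guard refuses to guess when the API does not return exactly one address. A minimal sketch of that guard in isolation, with a plain string slice standing in for the API response:

package main

import "fmt"

// sourceNATIP mirrors the exactly-one check in resourceCloudStackVPCRead: a
// VPC has a single source NAT IP, so any other count is an error rather than
// an arbitrary pick.
func sourceNATIP(addrs []string) (string, error) {
	if len(addrs) != 1 {
		return "", fmt.Errorf("Unexpected number (%d) of source NAT IPs returned", len(addrs))
	}
	return addrs[0], nil
}

func main() {
	fmt.Println(sourceNATIP([]string{"203.0.113.10"}))
	fmt.Println(sourceNATIP(nil))
}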
a/builtin/providers/cloudstack/resource_cloudstack_vpn_customer_gateway.go b/builtin/providers/cloudstack/resource_cloudstack_vpn_customer_gateway.go index f27e28d38..b049c0319 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_vpn_customer_gateway.go +++ b/builtin/providers/cloudstack/resource_cloudstack_vpn_customer_gateway.go @@ -179,7 +179,7 @@ func resourceCloudStackVPNCustomerGatewayDelete(d *schema.ResourceData, meta int // Delete the VPN Customer Gateway _, err := cs.VPN.DeleteVpnCustomerGateway(p) if err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", d.Id())) { diff --git a/builtin/providers/cloudstack/resource_cloudstack_vpn_gateway.go b/builtin/providers/cloudstack/resource_cloudstack_vpn_gateway.go index 704511ca8..17533a3a6 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_vpn_gateway.go +++ b/builtin/providers/cloudstack/resource_cloudstack_vpn_gateway.go @@ -33,8 +33,8 @@ func resourceCloudStackVPNGateway() *schema.Resource { func resourceCloudStackVPNGatewayCreate(d *schema.ResourceData, meta interface{}) error { cs := meta.(*cloudstack.CloudStackClient) - // Retrieve the VPC UUID - vpcid, e := retrieveUUID(cs, "vpc", d.Get("vpc").(string)) + // Retrieve the VPC ID + vpcid, e := retrieveID(cs, "vpc", d.Get("vpc").(string)) if e != nil { return e.Error() } @@ -69,7 +69,7 @@ func resourceCloudStackVPNGatewayRead(d *schema.ResourceData, meta interface{}) return err } - setValueOrUUID(d, "vpc", d.Get("vpc").(string), v.Vpcid) + setValueOrID(d, "vpc", d.Get("vpc").(string), v.Vpcid) d.Set("public_ip", v.Publicip) @@ -85,7 +85,7 @@ func resourceCloudStackVPNGatewayDelete(d *schema.ResourceData, meta interface{} // Delete the VPN Gateway _, err := cs.VPN.DeleteVpnGateway(p) if err != nil { - // This is a very poor way to be told the UUID does no longer exist :( + // This is a very poor way to be told the ID does no longer exist :( if strings.Contains(err.Error(), fmt.Sprintf( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", d.Id())) { diff --git a/builtin/providers/cloudstack/resources.go b/builtin/providers/cloudstack/resources.go index cc826492f..f7115e793 100644 --- a/builtin/providers/cloudstack/resources.go +++ b/builtin/providers/cloudstack/resources.go @@ -10,6 +10,9 @@ import ( "github.com/xanzy/go-cloudstack/cloudstack" ) +// CloudStack uses a "special" ID of -1 to define an unlimited resource +const UnlimitedResourceID = "-1" + type retrieveError struct { name string value string @@ -17,43 +20,51 @@ type retrieveError struct { } func (e *retrieveError) Error() error { - return fmt.Errorf("Error retrieving UUID of %s %s: %s", e.name, e.value, e.err) + return fmt.Errorf("Error retrieving ID of %s %s: %s", e.name, e.value, e.err) } -func setValueOrUUID(d *schema.ResourceData, key string, value string, uuid string) { - if isUUID(d.Get(key).(string)) { - d.Set(key, uuid) +func setValueOrID(d *schema.ResourceData, key string, value string, id string) { + if isID(d.Get(key).(string)) { + // If the given id is an empty string, check if the configured value matches + // the UnlimitedResourceID in which case we set id to UnlimitedResourceID + if id == "" && d.Get(key).(string) == UnlimitedResourceID { + id = UnlimitedResourceID + } + + d.Set(key, id) } else { 
 		d.Set(key, value)
 	}
 }
 
-func retrieveUUID(cs *cloudstack.CloudStackClient, name, value string) (uuid string, e *retrieveError) {
-	// If the supplied value isn't a UUID, try to retrieve the UUID ourselves
-	if isUUID(value) {
+func retrieveID(cs *cloudstack.CloudStackClient, name, value string) (id string, e *retrieveError) {
+	// If the supplied value isn't an ID, try to retrieve the ID ourselves
+	if isID(value) {
 		return value, nil
 	}
 
-	log.Printf("[DEBUG] Retrieving UUID of %s: %s", name, value)
+	log.Printf("[DEBUG] Retrieving ID of %s: %s", name, value)
 
 	var err error
 	switch name {
 	case "disk_offering":
-		uuid, err = cs.DiskOffering.GetDiskOfferingID(value)
+		id, err = cs.DiskOffering.GetDiskOfferingID(value)
 	case "virtual_machine":
-		uuid, err = cs.VirtualMachine.GetVirtualMachineID(value)
+		id, err = cs.VirtualMachine.GetVirtualMachineID(value)
 	case "service_offering":
-		uuid, err = cs.ServiceOffering.GetServiceOfferingID(value)
+		id, err = cs.ServiceOffering.GetServiceOfferingID(value)
 	case "network_offering":
-		uuid, err = cs.NetworkOffering.GetNetworkOfferingID(value)
+		id, err = cs.NetworkOffering.GetNetworkOfferingID(value)
+	case "project":
+		id, err = cs.Project.GetProjectID(value)
 	case "vpc_offering":
-		uuid, err = cs.VPC.GetVPCOfferingID(value)
+		id, err = cs.VPC.GetVPCOfferingID(value)
 	case "vpc":
-		uuid, err = cs.VPC.GetVPCID(value)
+		id, err = cs.VPC.GetVPCID(value)
 	case "network":
-		uuid, err = cs.Network.GetNetworkID(value)
+		id, err = cs.Network.GetNetworkID(value)
 	case "zone":
-		uuid, err = cs.Zone.GetZoneID(value)
+		id, err = cs.Zone.GetZoneID(value)
 	case "ipaddress":
 		p := cs.Address.NewListPublicIpAddressesParams()
 		p.SetIpaddress(value)
@@ -63,10 +74,10 @@ func retrieveUUID(cs *cloudstack.CloudStackClient, name, value string) (uuid str
 			break
 		}
 		if l.Count == 1 {
-			uuid = l.PublicIpAddresses[0].Id
+			id = l.PublicIpAddresses[0].Id
 			break
 		}
-		err = fmt.Errorf("Could not find UUID of IP address: %s", value)
+		err = fmt.Errorf("Could not find ID of IP address: %s", value)
 	case "os_type":
 		p := cs.GuestOS.NewListOsTypesParams()
 		p.SetDescription(value)
@@ -76,43 +87,42 @@ func retrieveUUID(cs *cloudstack.CloudStackClient, name, value string) (uuid str
 			break
 		}
 		if l.Count == 1 {
-			uuid = l.OsTypes[0].Id
+			id = l.OsTypes[0].Id
 			break
 		}
-		err = fmt.Errorf("Could not find UUID of OS Type: %s", value)
-	case "project":
-		uuid, err = cs.Project.GetProjectID(value)
+		err = fmt.Errorf("Could not find ID of OS Type: %s", value)
 	default:
-		return uuid, &retrieveError{name: name, value: value,
+		return id, &retrieveError{name: name, value: value,
 			err: fmt.Errorf("Unknown request: %s", name)}
 	}
 
 	if err != nil {
-		return uuid, &retrieveError{name: name, value: value, err: err}
+		return id, &retrieveError{name: name, value: value, err: err}
 	}
 
-	return uuid, nil
+	return id, nil
 }
 
-func retrieveTemplateUUID(cs *cloudstack.CloudStackClient, zoneid, value string) (uuid string, e *retrieveError) {
-	// If the supplied value isn't a UUID, try to retrieve the UUID ourselves
-	if isUUID(value) {
+func retrieveTemplateID(cs *cloudstack.CloudStackClient, zoneid, value string) (id string, e *retrieveError) {
+	// If the supplied value isn't an ID, try to retrieve the ID ourselves
+	if isID(value) {
 		return value, nil
 	}
 
-	log.Printf("[DEBUG] Retrieving UUID of template: %s", value)
+	log.Printf("[DEBUG] Retrieving ID of template: %s", value)
 
-	uuid, err := cs.Template.GetTemplateID(value, "executable", zoneid)
+	id, err := cs.Template.GetTemplateID(value, "executable", zoneid)
 	if err != nil {
-		return uuid,
&retrieveError{name: "template", value: value, err: err} + return id, &retrieveError{name: "template", value: value, err: err} } - return uuid, nil + return id, nil } -func isUUID(s string) bool { - re := regexp.MustCompile(`^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$`) - return re.MatchString(s) +// ID can be either a UUID or a UnlimitedResourceID +func isID(id string) bool { + re := regexp.MustCompile(`^([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}|-1)$`) + return re.MatchString(id) } // RetryFunc is the function retried n times diff --git a/builtin/providers/digitalocean/config.go b/builtin/providers/digitalocean/config.go index c9a43bc09..498bf790b 100644 --- a/builtin/providers/digitalocean/config.go +++ b/builtin/providers/digitalocean/config.go @@ -3,7 +3,8 @@ package digitalocean import ( "log" - "github.com/pearkes/digitalocean" + "github.com/digitalocean/godo" + "golang.org/x/oauth2" ) type Config struct { @@ -11,14 +12,14 @@ type Config struct { } // Client() returns a new client for accessing digital ocean. -func (c *Config) Client() (*digitalocean.Client, error) { - client, err := digitalocean.NewClient(c.Token) +func (c *Config) Client() (*godo.Client, error) { + tokenSrc := oauth2.StaticTokenSource(&oauth2.Token{ + AccessToken: c.Token, + }) - log.Printf("[INFO] DigitalOcean Client configured for URL: %s", client.URL) + client := godo.NewClient(oauth2.NewClient(oauth2.NoContext, tokenSrc)) - if err != nil { - return nil, err - } + log.Printf("[INFO] DigitalOcean Client configured for URL: %s", client.BaseURL.String()) return client, nil } diff --git a/builtin/providers/digitalocean/resource_digitalocean_domain.go b/builtin/providers/digitalocean/resource_digitalocean_domain.go index 8ab5f1884..d7c4edca1 100644 --- a/builtin/providers/digitalocean/resource_digitalocean_domain.go +++ b/builtin/providers/digitalocean/resource_digitalocean_domain.go @@ -5,8 +5,8 @@ import ( "log" "strings" + "github.com/digitalocean/godo" "github.com/hashicorp/terraform/helper/schema" - "github.com/pearkes/digitalocean" ) func resourceDigitalOceanDomain() *schema.Resource { @@ -32,30 +32,31 @@ func resourceDigitalOceanDomain() *schema.Resource { } func resourceDigitalOceanDomainCreate(d *schema.ResourceData, meta interface{}) error { - client := meta.(*digitalocean.Client) + client := meta.(*godo.Client) // Build up our creation options - opts := &digitalocean.CreateDomain{ + + opts := &godo.DomainCreateRequest{ Name: d.Get("name").(string), IPAddress: d.Get("ip_address").(string), } log.Printf("[DEBUG] Domain create configuration: %#v", opts) - name, err := client.CreateDomain(opts) + domain, _, err := client.Domains.Create(opts) if err != nil { return fmt.Errorf("Error creating Domain: %s", err) } - d.SetId(name) - log.Printf("[INFO] Domain Name: %s", name) + d.SetId(domain.Name) + log.Printf("[INFO] Domain Name: %s", domain.Name) return resourceDigitalOceanDomainRead(d, meta) } func resourceDigitalOceanDomainRead(d *schema.ResourceData, meta interface{}) error { - client := meta.(*digitalocean.Client) + client := meta.(*godo.Client) - domain, err := client.RetrieveDomain(d.Id()) + domain, _, err := client.Domains.Get(d.Id()) if err != nil { // If the domain is somehow already destroyed, mark as // successfully gone @@ -73,10 +74,10 @@ func resourceDigitalOceanDomainRead(d *schema.ResourceData, meta interface{}) er } func resourceDigitalOceanDomainDelete(d *schema.ResourceData, meta interface{}) error { - client := meta.(*digitalocean.Client) + client := 
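The widened isID in resources.go earlier in this diff now accepts the sentinel "-1" (UnlimitedResourceID) alongside plain UUIDs. A quick table check of what the new regular expression matches, with invented example values; the pattern itself is copied from the diff:

package main

import (
	"fmt"
	"regexp"
)

var idRe = regexp.MustCompile(`^([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}|-1)$`)

func main() {
	for _, s := range []string{
		"30358b40-2cca-11e5-a4b5-00221964e84a", // UUID: matches
		"-1",               // UnlimitedResourceID: matches
		"my-disk-offering", // a name: no match, gets looked up instead
		"Unlimited",        // not the sentinel: no match
	} {
		fmt.Printf("%-40q %v\n", s, idRe.MatchString(s))
	}
}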
meta.(*godo.Client) log.Printf("[INFO] Deleting Domain: %s", d.Id()) - err := client.DestroyDomain(d.Id()) + _, err := client.Domains.Delete(d.Id()) if err != nil { return fmt.Errorf("Error deleting Domain: %s", err) } diff --git a/builtin/providers/digitalocean/resource_digitalocean_domain_test.go b/builtin/providers/digitalocean/resource_digitalocean_domain_test.go index 918eea155..2801414ee 100644 --- a/builtin/providers/digitalocean/resource_digitalocean_domain_test.go +++ b/builtin/providers/digitalocean/resource_digitalocean_domain_test.go @@ -4,13 +4,13 @@ import ( "fmt" "testing" + "github.com/digitalocean/godo" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" - "github.com/pearkes/digitalocean" ) func TestAccDigitalOceanDomain_Basic(t *testing.T) { - var domain digitalocean.Domain + var domain godo.Domain resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -33,7 +33,7 @@ func TestAccDigitalOceanDomain_Basic(t *testing.T) { } func testAccCheckDigitalOceanDomainDestroy(s *terraform.State) error { - client := testAccProvider.Meta().(*digitalocean.Client) + client := testAccProvider.Meta().(*godo.Client) for _, rs := range s.RootModule().Resources { if rs.Type != "digitalocean_domain" { @@ -41,17 +41,17 @@ func testAccCheckDigitalOceanDomainDestroy(s *terraform.State) error { } // Try to find the domain - _, err := client.RetrieveDomain(rs.Primary.ID) + _, _, err := client.Domains.Get(rs.Primary.ID) if err == nil { - fmt.Errorf("Domain still exists") + return fmt.Errorf("Domain still exists") } } return nil } -func testAccCheckDigitalOceanDomainAttributes(domain *digitalocean.Domain) resource.TestCheckFunc { +func testAccCheckDigitalOceanDomainAttributes(domain *godo.Domain) resource.TestCheckFunc { return func(s *terraform.State) error { if domain.Name != "foobar-test-terraform.com" { @@ -62,7 +62,7 @@ func testAccCheckDigitalOceanDomainAttributes(domain *digitalocean.Domain) resou } } -func testAccCheckDigitalOceanDomainExists(n string, domain *digitalocean.Domain) resource.TestCheckFunc { +func testAccCheckDigitalOceanDomainExists(n string, domain *godo.Domain) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] @@ -74,9 +74,9 @@ func testAccCheckDigitalOceanDomainExists(n string, domain *digitalocean.Domain) return fmt.Errorf("No Record ID is set") } - client := testAccProvider.Meta().(*digitalocean.Client) + client := testAccProvider.Meta().(*godo.Client) - foundDomain, err := client.RetrieveDomain(rs.Primary.ID) + foundDomain, _, err := client.Domains.Get(rs.Primary.ID) if err != nil { return err @@ -86,7 +86,7 @@ func testAccCheckDigitalOceanDomainExists(n string, domain *digitalocean.Domain) return fmt.Errorf("Record not found") } - *domain = foundDomain + *domain = *foundDomain return nil } diff --git a/builtin/providers/digitalocean/resource_digitalocean_droplet.go b/builtin/providers/digitalocean/resource_digitalocean_droplet.go index 88c0c6d07..49feba1c9 100644 --- a/builtin/providers/digitalocean/resource_digitalocean_droplet.go +++ b/builtin/providers/digitalocean/resource_digitalocean_droplet.go @@ -3,12 +3,13 @@ package digitalocean import ( "fmt" "log" + "strconv" "strings" "time" + "github.com/digitalocean/godo" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" - "github.com/pearkes/digitalocean" ) func resourceDigitalOceanDroplet() *schema.Resource { @@ -39,6 +40,10 @@ func 
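The rewritten DigitalOcean config.go above delegates authentication to x/oauth2: a static token source is wrapped into an *http.Client that injects the bearer token on every request, and godo is built on top of that client. Reduced to a standalone sketch; the token value is a placeholder:

package main

import (
	"fmt"

	"github.com/digitalocean/godo"
	"golang.org/x/oauth2"
)

func main() {
	// A static source always returns the same token; oauth2.NewClient wraps
	// it in an *http.Client that adds the Authorization header per request.
	tokenSrc := oauth2.StaticTokenSource(&oauth2.Token{AccessToken: "YOUR-DO-TOKEN"})
	client := godo.NewClient(oauth2.NewClient(oauth2.NoContext, tokenSrc))

	fmt.Println("DigitalOcean client configured for", client.BaseURL.String())
}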
resourceDigitalOceanDroplet() *schema.Resource { "size": &schema.Schema{ Type: schema.TypeString, Required: true, + StateFunc: func(val interface{}) string { + // DO API V2 size slug is always lowercase + return strings.ToLower(val.(string)) + }, }, "status": &schema.Schema{ @@ -101,11 +106,13 @@ func resourceDigitalOceanDroplet() *schema.Resource { } func resourceDigitalOceanDropletCreate(d *schema.ResourceData, meta interface{}) error { - client := meta.(*digitalocean.Client) + client := meta.(*godo.Client) // Build up our creation options - opts := &digitalocean.CreateDroplet{ - Image: d.Get("image").(string), + opts := &godo.DropletCreateRequest{ + Image: godo.DropletCreateImage{ + Slug: d.Get("image").(string), + }, Name: d.Get("name").(string), Region: d.Get("region").(string), Size: d.Get("size").(string), @@ -116,7 +123,7 @@ func resourceDigitalOceanDropletCreate(d *schema.ResourceData, meta interface{}) } if attr, ok := d.GetOk("ipv6"); ok { - opts.IPV6 = attr.(bool) + opts.IPv6 = attr.(bool) } if attr, ok := d.GetOk("private_networking"); ok { @@ -128,25 +135,32 @@ func resourceDigitalOceanDropletCreate(d *schema.ResourceData, meta interface{}) } // Get configured ssh_keys - ssh_keys := d.Get("ssh_keys.#").(int) - if ssh_keys > 0 { - opts.SSHKeys = make([]string, 0, ssh_keys) - for i := 0; i < ssh_keys; i++ { + sshKeys := d.Get("ssh_keys.#").(int) + if sshKeys > 0 { + opts.SSHKeys = make([]godo.DropletCreateSSHKey, 0, sshKeys) + for i := 0; i < sshKeys; i++ { key := fmt.Sprintf("ssh_keys.%d", i) - opts.SSHKeys = append(opts.SSHKeys, d.Get(key).(string)) + id, err := strconv.Atoi(d.Get(key).(string)) + if err != nil { + return err + } + + opts.SSHKeys = append(opts.SSHKeys, godo.DropletCreateSSHKey{ + ID: id, + }) } } log.Printf("[DEBUG] Droplet create configuration: %#v", opts) - id, err := client.CreateDroplet(opts) + droplet, _, err := client.Droplets.Create(opts) if err != nil { return fmt.Errorf("Error creating droplet: %s", err) } // Assign the droplets id - d.SetId(id) + d.SetId(strconv.Itoa(droplet.ID)) log.Printf("[INFO] Droplet ID: %s", d.Id()) @@ -160,10 +174,15 @@ func resourceDigitalOceanDropletCreate(d *schema.ResourceData, meta interface{}) } func resourceDigitalOceanDropletRead(d *schema.ResourceData, meta interface{}) error { - client := meta.(*digitalocean.Client) + client := meta.(*godo.Client) + + id, err := strconv.Atoi(d.Id()) + if err != nil { + return fmt.Errorf("invalid droplet id: %v", err) + } // Retrieve the droplet properties for updating the state - droplet, err := client.RetrieveDroplet(d.Id()) + droplet, _, err := client.Droplets.Get(id) if err != nil { // check if the droplet no longer exists. 
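godo identifies droplets by integer ID while Terraform state IDs are strings, hence the strconv round-trip now bracketing every CRUD function above: strconv.Itoa on create, strconv.Atoi plus an explicit error check everywhere else. The pattern in isolation, with an invented droplet ID:

package main

import (
	"fmt"
	"strconv"
)

func main() {
	// Create path: the API's integer ID becomes the string state ID.
	stateID := strconv.Itoa(3164494)
	fmt.Println("state ID:", stateID)

	// Read/update/delete path: the state ID is parsed back. The error check
	// surfaces state written in another format instead of silently acting
	// on droplet 0.
	if _, err := strconv.Atoi("not-a-number"); err != nil {
		fmt.Printf("invalid droplet id: %v\n", err)
	}
}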
if err.Error() == "Error retrieving droplet: API Error: 404 Not Found" { @@ -174,48 +193,70 @@ func resourceDigitalOceanDropletRead(d *schema.ResourceData, meta interface{}) e return fmt.Errorf("Error retrieving droplet: %s", err) } - if droplet.ImageSlug() != "" { - d.Set("image", droplet.ImageSlug()) + if droplet.Image.Slug != "" { + d.Set("image", droplet.Image.Slug) } else { - d.Set("image", droplet.ImageId()) + d.Set("image", droplet.Image.ID) } d.Set("name", droplet.Name) - d.Set("region", droplet.RegionSlug()) - d.Set("size", droplet.SizeSlug) + d.Set("region", droplet.Region.Slug) + d.Set("size", droplet.Size.Slug) d.Set("status", droplet.Status) - d.Set("locked", droplet.IsLocked()) + d.Set("locked", strconv.FormatBool(droplet.Locked)) - if droplet.IPV6Address("public") != "" { + if publicIPv6 := findIPv6AddrByType(droplet, "public"); publicIPv6 != "" { d.Set("ipv6", true) - d.Set("ipv6_address", droplet.IPV6Address("public")) - d.Set("ipv6_address_private", droplet.IPV6Address("private")) + d.Set("ipv6_address", publicIPv6) + d.Set("ipv6_address_private", findIPv6AddrByType(droplet, "private")) } - d.Set("ipv4_address", droplet.IPV4Address("public")) + d.Set("ipv4_address", findIPv4AddrByType(droplet, "public")) - if droplet.NetworkingType() == "private" { + if privateIPv4 := findIPv4AddrByType(droplet, "private"); privateIPv4 != "" { d.Set("private_networking", true) - d.Set("ipv4_address_private", droplet.IPV4Address("private")) + d.Set("ipv4_address_private", privateIPv4) } // Initialize the connection info d.SetConnInfo(map[string]string{ "type": "ssh", - "host": droplet.IPV4Address("public"), + "host": findIPv4AddrByType(droplet, "public"), }) return nil } +func findIPv6AddrByType(d *godo.Droplet, addrType string) string { + for _, addr := range d.Networks.V6 { + if addr.Type == addrType { + return addr.IPAddress + } + } + return "" +} + +func findIPv4AddrByType(d *godo.Droplet, addrType string) string { + for _, addr := range d.Networks.V4 { + if addr.Type == addrType { + return addr.IPAddress + } + } + return "" +} + func resourceDigitalOceanDropletUpdate(d *schema.ResourceData, meta interface{}) error { - client := meta.(*digitalocean.Client) + client := meta.(*godo.Client) + + id, err := strconv.Atoi(d.Id()) + if err != nil { + return fmt.Errorf("invalid droplet id: %v", err) + } if d.HasChange("size") { oldSize, newSize := d.GetChange("size") - err := client.PowerOff(d.Id()) - + _, _, err = client.DropletActions.PowerOff(id) if err != nil && !strings.Contains(err.Error(), "Droplet is already powered off") { return fmt.Errorf( "Error powering off droplet (%s): %s", d.Id(), err) @@ -229,7 +270,7 @@ func resourceDigitalOceanDropletUpdate(d *schema.ResourceData, meta interface{}) } // Resize the droplet - err = client.Resize(d.Id(), newSize.(string)) + _, _, err = client.DropletActions.Resize(id, newSize.(string), true) if err != nil { newErr := powerOnAndWait(d, meta) if newErr != nil { @@ -254,7 +295,7 @@ func resourceDigitalOceanDropletUpdate(d *schema.ResourceData, meta interface{}) "Error waiting for resize droplet (%s) to finish: %s", d.Id(), err) } - err = client.PowerOn(d.Id()) + _, _, err = client.DropletActions.PowerOn(id) if err != nil { return fmt.Errorf( @@ -272,7 +313,7 @@ func resourceDigitalOceanDropletUpdate(d *schema.ResourceData, meta interface{}) oldName, newName := d.GetChange("name") // Rename the droplet - err := client.Rename(d.Id(), newName.(string)) + _, _, err = client.DropletActions.Rename(id, newName.(string)) if err != nil { return fmt.Errorf( 
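The findIPv4AddrByType and findIPv6AddrByType helpers introduced above replace convenience methods the old client offered; godo only exposes the raw network lists, so the provider scans for the first address of the requested type ("public" or "private"). Exercised against a hand-built droplet with invented addresses:

package main

import (
	"fmt"

	"github.com/digitalocean/godo"
)

// findIPv4AddrByType is copied from the diff above.
func findIPv4AddrByType(d *godo.Droplet, addrType string) string {
	for _, addr := range d.Networks.V4 {
		if addr.Type == addrType {
			return addr.IPAddress
		}
	}
	return ""
}

func main() {
	d := &godo.Droplet{
		Networks: &godo.Networks{
			V4: []godo.NetworkV4{
				{IPAddress: "10.128.0.2", Type: "private"},
				{IPAddress: "203.0.113.7", Type: "public"},
			},
		},
	}
	fmt.Println(findIPv4AddrByType(d, "public"))  // 203.0.113.7
	fmt.Println(findIPv4AddrByType(d, "private")) // 10.128.0.2
}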
@@ -292,7 +333,7 @@ func resourceDigitalOceanDropletUpdate(d *schema.ResourceData, meta interface{}) // As there is no way to disable private networking, // we only check if it needs to be enabled if d.HasChange("private_networking") && d.Get("private_networking").(bool) { - err := client.EnablePrivateNetworking(d.Id()) + _, _, err = client.DropletActions.EnablePrivateNetworking(id) if err != nil { return fmt.Errorf( @@ -309,7 +350,7 @@ func resourceDigitalOceanDropletUpdate(d *schema.ResourceData, meta interface{}) // As there is no way to disable IPv6, we only check if it needs to be enabled if d.HasChange("ipv6") && d.Get("ipv6").(bool) { - err := client.EnableIPV6s(d.Id()) + _, _, err = client.DropletActions.EnableIPv6(id) if err != nil { return fmt.Errorf( @@ -330,9 +371,14 @@ func resourceDigitalOceanDropletUpdate(d *schema.ResourceData, meta interface{}) } func resourceDigitalOceanDropletDelete(d *schema.ResourceData, meta interface{}) error { - client := meta.(*digitalocean.Client) + client := meta.(*godo.Client) - _, err := WaitForDropletAttribute( + id, err := strconv.Atoi(d.Id()) + if err != nil { + return fmt.Errorf("invalid droplet id: %v", err) + } + + _, err = WaitForDropletAttribute( d, "false", []string{"", "true"}, "locked", meta) if err != nil { @@ -343,7 +389,7 @@ func resourceDigitalOceanDropletDelete(d *schema.ResourceData, meta interface{}) log.Printf("[INFO] Deleting droplet: %s", d.Id()) // Destroy the droplet - err = client.DestroyDroplet(d.Id()) + _, err = client.Droplets.Delete(id) // Handle remotely destroyed droplets if err != nil && strings.Contains(err.Error(), "404 Not Found") { @@ -386,9 +432,14 @@ func WaitForDropletAttribute( // cleaner and more efficient func newDropletStateRefreshFunc( d *schema.ResourceData, attribute string, meta interface{}) resource.StateRefreshFunc { - client := meta.(*digitalocean.Client) + client := meta.(*godo.Client) return func() (interface{}, string, error) { - err := resourceDigitalOceanDropletRead(d, meta) + id, err := strconv.Atoi(d.Id()) + if err != nil { + return nil, "", err + } + + err = resourceDigitalOceanDropletRead(d, meta) if err != nil { return nil, "", err } @@ -404,7 +455,7 @@ func newDropletStateRefreshFunc( // See if we can access our attribute if attr, ok := d.GetOk(attribute); ok { // Retrieve the droplet properties - droplet, err := client.RetrieveDroplet(d.Id()) + droplet, _, err := client.Droplets.Get(id) if err != nil { return nil, "", fmt.Errorf("Error retrieving droplet: %s", err) } @@ -418,8 +469,13 @@ func newDropletStateRefreshFunc( // Powers on the droplet and waits for it to be active func powerOnAndWait(d *schema.ResourceData, meta interface{}) error { - client := meta.(*digitalocean.Client) - err := client.PowerOn(d.Id()) + id, err := strconv.Atoi(d.Id()) + if err != nil { + return fmt.Errorf("invalid droplet id: %v", err) + } + + client := meta.(*godo.Client) + _, _, err = client.DropletActions.PowerOn(id) if err != nil { return err } diff --git a/builtin/providers/digitalocean/resource_digitalocean_droplet_test.go b/builtin/providers/digitalocean/resource_digitalocean_droplet_test.go index 587612e01..730718c3f 100644 --- a/builtin/providers/digitalocean/resource_digitalocean_droplet_test.go +++ b/builtin/providers/digitalocean/resource_digitalocean_droplet_test.go @@ -2,16 +2,17 @@ package digitalocean import ( "fmt" + "strconv" "strings" "testing" + "github.com/digitalocean/godo" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" - 
"github.com/pearkes/digitalocean" ) func TestAccDigitalOceanDroplet_Basic(t *testing.T) { - var droplet digitalocean.Droplet + var droplet godo.Droplet resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -40,7 +41,7 @@ func TestAccDigitalOceanDroplet_Basic(t *testing.T) { } func TestAccDigitalOceanDroplet_Update(t *testing.T) { - var droplet digitalocean.Droplet + var droplet godo.Droplet resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -71,7 +72,7 @@ func TestAccDigitalOceanDroplet_Update(t *testing.T) { } func TestAccDigitalOceanDroplet_PrivateNetworkingIpv6(t *testing.T) { - var droplet digitalocean.Droplet + var droplet godo.Droplet resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -94,15 +95,20 @@ func TestAccDigitalOceanDroplet_PrivateNetworkingIpv6(t *testing.T) { } func testAccCheckDigitalOceanDropletDestroy(s *terraform.State) error { - client := testAccProvider.Meta().(*digitalocean.Client) + client := testAccProvider.Meta().(*godo.Client) for _, rs := range s.RootModule().Resources { if rs.Type != "digitalocean_droplet" { continue } + id, err := strconv.Atoi(rs.Primary.ID) + if err != nil { + return err + } + // Try to find the Droplet - _, err := client.RetrieveDroplet(rs.Primary.ID) + _, _, err = client.Droplets.Get(id) // Wait @@ -116,19 +122,19 @@ func testAccCheckDigitalOceanDropletDestroy(s *terraform.State) error { return nil } -func testAccCheckDigitalOceanDropletAttributes(droplet *digitalocean.Droplet) resource.TestCheckFunc { +func testAccCheckDigitalOceanDropletAttributes(droplet *godo.Droplet) resource.TestCheckFunc { return func(s *terraform.State) error { - if droplet.ImageSlug() != "centos-5-8-x32" { - return fmt.Errorf("Bad image_slug: %s", droplet.ImageSlug()) + if droplet.Image.Slug != "centos-5-8-x32" { + return fmt.Errorf("Bad image_slug: %s", droplet.Image.Slug) } - if droplet.SizeSlug != "512mb" { - return fmt.Errorf("Bad size_slug: %s", droplet.SizeSlug) + if droplet.Size.Slug != "512mb" { + return fmt.Errorf("Bad size_slug: %s", droplet.Size.Slug) } - if droplet.RegionSlug() != "nyc3" { - return fmt.Errorf("Bad region_slug: %s", droplet.RegionSlug()) + if droplet.Region.Slug != "nyc3" { + return fmt.Errorf("Bad region_slug: %s", droplet.Region.Slug) } if droplet.Name != "foo" { @@ -138,10 +144,10 @@ func testAccCheckDigitalOceanDropletAttributes(droplet *digitalocean.Droplet) re } } -func testAccCheckDigitalOceanDropletRenamedAndResized(droplet *digitalocean.Droplet) resource.TestCheckFunc { +func testAccCheckDigitalOceanDropletRenamedAndResized(droplet *godo.Droplet) resource.TestCheckFunc { return func(s *terraform.State) error { - if droplet.SizeSlug != "1gb" { + if droplet.Size.Slug != "1gb" { return fmt.Errorf("Bad size_slug: %s", droplet.SizeSlug) } @@ -153,50 +159,46 @@ func testAccCheckDigitalOceanDropletRenamedAndResized(droplet *digitalocean.Drop } } -func testAccCheckDigitalOceanDropletAttributes_PrivateNetworkingIpv6(droplet *digitalocean.Droplet) resource.TestCheckFunc { +func testAccCheckDigitalOceanDropletAttributes_PrivateNetworkingIpv6(droplet *godo.Droplet) resource.TestCheckFunc { return func(s *terraform.State) error { - if droplet.ImageSlug() != "centos-5-8-x32" { - return fmt.Errorf("Bad image_slug: %s", droplet.ImageSlug()) + if droplet.Image.Slug != "centos-5-8-x32" { + return fmt.Errorf("Bad image_slug: %s", droplet.Image.Slug) } - if droplet.SizeSlug != "1gb" { - return fmt.Errorf("Bad size_slug: %s", droplet.SizeSlug) + if droplet.Size.Slug 
!= "1gb" { + return fmt.Errorf("Bad size_slug: %s", droplet.Size.Slug) } - if droplet.RegionSlug() != "sgp1" { - return fmt.Errorf("Bad region_slug: %s", droplet.RegionSlug()) + if droplet.Region.Slug != "sgp1" { + return fmt.Errorf("Bad region_slug: %s", droplet.Region.Slug) } if droplet.Name != "baz" { return fmt.Errorf("Bad name: %s", droplet.Name) } - if droplet.IPV4Address("private") == "" { - return fmt.Errorf("No ipv4 private: %s", droplet.IPV4Address("private")) + if findIPv4AddrByType(droplet, "private") == "" { + return fmt.Errorf("No ipv4 private: %s", findIPv4AddrByType(droplet, "private")) } // if droplet.IPV6Address("private") == "" { // return fmt.Errorf("No ipv6 private: %s", droplet.IPV6Address("private")) // } - if droplet.NetworkingType() != "private" { - return fmt.Errorf("Bad networking type: %s", droplet.NetworkingType()) + if findIPv4AddrByType(droplet, "public") == "" { + return fmt.Errorf("No ipv4 public: %s", findIPv4AddrByType(droplet, "public")) } - if droplet.IPV4Address("public") == "" { - return fmt.Errorf("No ipv4 public: %s", droplet.IPV4Address("public")) - } - - if droplet.IPV6Address("public") == "" { - return fmt.Errorf("No ipv6 public: %s", droplet.IPV6Address("public")) + if findIPv6AddrByType(droplet, "public") == "" { + return fmt.Errorf("No ipv6 public: %s", findIPv6AddrByType(droplet, "public")) } return nil } } -func testAccCheckDigitalOceanDropletExists(n string, droplet *digitalocean.Droplet) resource.TestCheckFunc { +func testAccCheckDigitalOceanDropletExists(n string, droplet *godo.Droplet) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -207,19 +209,25 @@ func testAccCheckDigitalOceanDropletExists(n string, droplet *digitalocean.Dropl return fmt.Errorf("No Droplet ID is set") } - client := testAccProvider.Meta().(*digitalocean.Client) + client := testAccProvider.Meta().(*godo.Client) - retrieveDroplet, err := client.RetrieveDroplet(rs.Primary.ID) + id, err := strconv.Atoi(rs.Primary.ID) + if err != nil { + return err + } + + // Try to find the Droplet + retrieveDroplet, _, err := client.Droplets.Get(id) if err != nil { return err } - if retrieveDroplet.StringId() != rs.Primary.ID { + if strconv.Itoa(retrieveDroplet.ID) != rs.Primary.ID { return fmt.Errorf("Droplet not found") } - *droplet = retrieveDroplet + *droplet = *retrieveDroplet return nil } @@ -230,7 +238,7 @@ func testAccCheckDigitalOceanDropletExists(n string, droplet *digitalocean.Dropl // other test already // //func Test_new_droplet_state_refresh_func(t *testing.T) { -// droplet := digitalocean.Droplet{ +// droplet := godo.Droplet{ // Name: "foobar", // } // resourceMap, _ := resource_digitalocean_droplet_update_state( diff --git a/builtin/providers/digitalocean/resource_digitalocean_record.go b/builtin/providers/digitalocean/resource_digitalocean_record.go index 2ff095aae..ebcb2e0f8 100644 --- a/builtin/providers/digitalocean/resource_digitalocean_record.go +++ b/builtin/providers/digitalocean/resource_digitalocean_record.go @@ -3,10 +3,11 @@ package digitalocean import ( "fmt" "log" + "strconv" "strings" + "github.com/digitalocean/godo" "github.com/hashicorp/terraform/helper/schema" - "github.com/pearkes/digitalocean" ) func resourceDigitalOceanRecord() *schema.Resource { @@ -66,34 +67,55 @@ func resourceDigitalOceanRecord() *schema.Resource { } func resourceDigitalOceanRecordCreate(d *schema.ResourceData, meta interface{}) error { - client := meta.(*digitalocean.Client) + client := meta.(*godo.Client) - 
newRecord := digitalocean.CreateRecord{ - Type: d.Get("type").(string), - Name: d.Get("name").(string), - Data: d.Get("value").(string), - Priority: d.Get("priority").(string), - Port: d.Get("port").(string), - Weight: d.Get("weight").(string), + newRecord := godo.DomainRecordEditRequest{ + Type: d.Get("type").(string), + Name: d.Get("name").(string), + Data: d.Get("value").(string), + } + + var err error + if priority := d.Get("priority").(string); priority != "" { + newRecord.Priority, err = strconv.Atoi(priority) + if err != nil { + return fmt.Errorf("Failed to parse priority as an integer: %v", err) + } + } + if port := d.Get("port").(string); port != "" { + newRecord.Port, err = strconv.Atoi(port) + if err != nil { + return fmt.Errorf("Failed to parse port as an integer: %v", err) + } + } + if weight := d.Get("weight").(string); weight != "" { + newRecord.Weight, err = strconv.Atoi(weight) + if err != nil { + return fmt.Errorf("Failed to parse weight as an integer: %v", err) + } } log.Printf("[DEBUG] record create configuration: %#v", newRecord) - recId, err := client.CreateRecord(d.Get("domain").(string), &newRecord) + rec, _, err := client.Domains.CreateRecord(d.Get("domain").(string), &newRecord) if err != nil { return fmt.Errorf("Failed to create record: %s", err) } - d.SetId(recId) + d.SetId(strconv.Itoa(rec.ID)) log.Printf("[INFO] Record ID: %s", d.Id()) return resourceDigitalOceanRecordRead(d, meta) } func resourceDigitalOceanRecordRead(d *schema.ResourceData, meta interface{}) error { - client := meta.(*digitalocean.Client) + client := meta.(*godo.Client) domain := d.Get("domain").(string) + id, err := strconv.Atoi(d.Id()) + if err != nil { + return fmt.Errorf("invalid record ID: %v", err) + } - rec, err := client.RetrieveRecord(domain, d.Id()) + rec, _, err := client.Domains.Record(domain, id) if err != nil { // If the record is somehow already destroyed, mark as // successfully gone @@ -120,23 +142,29 @@ func resourceDigitalOceanRecordRead(d *schema.ResourceData, meta interface{}) er d.Set("name", rec.Name) d.Set("type", rec.Type) d.Set("value", rec.Data) - d.Set("weight", rec.StringWeight()) - d.Set("priority", rec.StringPriority()) - d.Set("port", rec.StringPort()) + d.Set("weight", strconv.Itoa(rec.Weight)) + d.Set("priority", strconv.Itoa(rec.Priority)) + d.Set("port", strconv.Itoa(rec.Port)) return nil } func resourceDigitalOceanRecordUpdate(d *schema.ResourceData, meta interface{}) error { - client := meta.(*digitalocean.Client) + client := meta.(*godo.Client) - var updateRecord digitalocean.UpdateRecord - if v, ok := d.GetOk("name"); ok { - updateRecord.Name = v.(string) + domain := d.Get("domain").(string) + id, err := strconv.Atoi(d.Id()) + if err != nil { + return fmt.Errorf("invalid record ID: %v", err) } - log.Printf("[DEBUG] record update configuration: %#v", updateRecord) - err := client.UpdateRecord(d.Get("domain").(string), d.Id(), &updateRecord) + var editRecord godo.DomainRecordEditRequest + if v, ok := d.GetOk("name"); ok { + editRecord.Name = v.(string) + } + + log.Printf("[DEBUG] record update configuration: %#v", editRecord) + _, _, err = client.Domains.EditRecord(domain, id, &editRecord) if err != nil { return fmt.Errorf("Failed to update record: %s", err) } @@ -145,11 +173,17 @@ func resourceDigitalOceanRecordUpdate(d *schema.ResourceData, meta interface{}) } func resourceDigitalOceanRecordDelete(d *schema.ResourceData, meta interface{}) error { - client := meta.(*digitalocean.Client) + client := meta.(*godo.Client) - log.Printf( - "[INFO] Deleting 
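The record resource keeps priority, port, and weight as string attributes so they can remain unset, while godo's DomainRecordEditRequest wants integers, hence the per-field strconv.Atoi with a descriptive error above. The same idiom factored into a helper; atoiIfSet is illustrative and not part of the provider:

package main

import (
	"fmt"
	"strconv"
)

// atoiIfSet parses s into *dst only when s is non-empty, mirroring how the
// record resource treats priority, port, and weight.
func atoiIfSet(name, s string, dst *int) error {
	if s == "" {
		return nil // attribute left unset in the configuration
	}
	n, err := strconv.Atoi(s)
	if err != nil {
		return fmt.Errorf("Failed to parse %s as an integer: %v", name, err)
	}
	*dst = n
	return nil
}

func main() {
	var priority int
	fmt.Println(atoiIfSet("priority", "10", &priority), priority) // <nil> 10
	fmt.Println(atoiIfSet("priority", "", &priority), priority)   // <nil> 10, unchanged
	fmt.Println(atoiIfSet("priority", "ten", &priority))          // parse error
}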
record: %s, %s", d.Get("domain").(string), d.Id()) - err := client.DestroyRecord(d.Get("domain").(string), d.Id()) + domain := d.Get("domain").(string) + id, err := strconv.Atoi(d.Id()) + if err != nil { + return fmt.Errorf("invalid record ID: %v", err) + } + + log.Printf("[INFO] Deleting record: %s, %d", domain, id) + + _, err = client.Domains.DeleteRecord(domain, id) if err != nil { // If the record is somehow already destroyed, mark as // successfully gone diff --git a/builtin/providers/digitalocean/resource_digitalocean_record_test.go b/builtin/providers/digitalocean/resource_digitalocean_record_test.go index 139fd30b7..7811ee9c8 100644 --- a/builtin/providers/digitalocean/resource_digitalocean_record_test.go +++ b/builtin/providers/digitalocean/resource_digitalocean_record_test.go @@ -2,15 +2,16 @@ package digitalocean import ( "fmt" + "strconv" "testing" + "github.com/digitalocean/godo" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" - "github.com/pearkes/digitalocean" ) func TestAccDigitalOceanRecord_Basic(t *testing.T) { - var record digitalocean.Record + var record godo.DomainRecord resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -35,7 +36,7 @@ func TestAccDigitalOceanRecord_Basic(t *testing.T) { } func TestAccDigitalOceanRecord_Updated(t *testing.T) { - var record digitalocean.Record + var record godo.DomainRecord resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -77,7 +78,7 @@ func TestAccDigitalOceanRecord_Updated(t *testing.T) { } func TestAccDigitalOceanRecord_HostnameValue(t *testing.T) { - var record digitalocean.Record + var record godo.DomainRecord resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -104,7 +105,7 @@ func TestAccDigitalOceanRecord_HostnameValue(t *testing.T) { } func TestAccDigitalOceanRecord_RelativeHostnameValue(t *testing.T) { - var record digitalocean.Record + var record godo.DomainRecord resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -131,7 +132,7 @@ func TestAccDigitalOceanRecord_RelativeHostnameValue(t *testing.T) { } func TestAccDigitalOceanRecord_ExternalHostnameValue(t *testing.T) { - var record digitalocean.Record + var record godo.DomainRecord resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -158,14 +159,19 @@ func TestAccDigitalOceanRecord_ExternalHostnameValue(t *testing.T) { } func testAccCheckDigitalOceanRecordDestroy(s *terraform.State) error { - client := testAccProvider.Meta().(*digitalocean.Client) + client := testAccProvider.Meta().(*godo.Client) for _, rs := range s.RootModule().Resources { if rs.Type != "digitalocean_record" { continue } + domain := rs.Primary.Attributes["domain"] + id, err := strconv.Atoi(rs.Primary.ID) + if err != nil { + return err + } - _, err := client.RetrieveRecord(rs.Primary.Attributes["domain"], rs.Primary.ID) + _, _, err = client.Domains.Record(domain, id) if err == nil { return fmt.Errorf("Record still exists") @@ -175,7 +181,7 @@ func testAccCheckDigitalOceanRecordDestroy(s *terraform.State) error { return nil } -func testAccCheckDigitalOceanRecordAttributes(record *digitalocean.Record) resource.TestCheckFunc { +func testAccCheckDigitalOceanRecordAttributes(record *godo.DomainRecord) resource.TestCheckFunc { return func(s *terraform.State) error { if record.Data != "192.168.0.10" { @@ -186,7 +192,7 @@ func testAccCheckDigitalOceanRecordAttributes(record *digitalocean.Record) resou } } -func 
testAccCheckDigitalOceanRecordAttributesUpdated(record *digitalocean.Record) resource.TestCheckFunc { +func testAccCheckDigitalOceanRecordAttributesUpdated(record *godo.DomainRecord) resource.TestCheckFunc { return func(s *terraform.State) error { if record.Data != "192.168.0.11" { @@ -197,7 +203,7 @@ func testAccCheckDigitalOceanRecordAttributesUpdated(record *digitalocean.Record } } -func testAccCheckDigitalOceanRecordExists(n string, record *digitalocean.Record) resource.TestCheckFunc { +func testAccCheckDigitalOceanRecordExists(n string, record *godo.DomainRecord) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] @@ -209,25 +215,31 @@ func testAccCheckDigitalOceanRecordExists(n string, record *digitalocean.Record) return fmt.Errorf("No Record ID is set") } - client := testAccProvider.Meta().(*digitalocean.Client) + client := testAccProvider.Meta().(*godo.Client) - foundRecord, err := client.RetrieveRecord(rs.Primary.Attributes["domain"], rs.Primary.ID) + domain := rs.Primary.Attributes["domain"] + id, err := strconv.Atoi(rs.Primary.ID) + if err != nil { + return err + } + + foundRecord, _, err := client.Domains.Record(domain, id) if err != nil { return err } - if foundRecord.StringId() != rs.Primary.ID { + if strconv.Itoa(foundRecord.ID) != rs.Primary.ID { return fmt.Errorf("Record not found") } - *record = foundRecord + *record = *foundRecord return nil } } -func testAccCheckDigitalOceanRecordAttributesHostname(data string, record *digitalocean.Record) resource.TestCheckFunc { +func testAccCheckDigitalOceanRecordAttributesHostname(data string, record *godo.DomainRecord) resource.TestCheckFunc { return func(s *terraform.State) error { if record.Data != data { diff --git a/builtin/providers/digitalocean/resource_digitalocean_ssh_key.go b/builtin/providers/digitalocean/resource_digitalocean_ssh_key.go index 96a4ad80d..d6eb96f09 100644 --- a/builtin/providers/digitalocean/resource_digitalocean_ssh_key.go +++ b/builtin/providers/digitalocean/resource_digitalocean_ssh_key.go @@ -3,10 +3,11 @@ package digitalocean import ( "fmt" "log" + "strconv" "strings" + "github.com/digitalocean/godo" "github.com/hashicorp/terraform/helper/schema" - "github.com/pearkes/digitalocean" ) func resourceDigitalOceanSSHKey() *schema.Resource { @@ -42,30 +43,35 @@ func resourceDigitalOceanSSHKey() *schema.Resource { } func resourceDigitalOceanSSHKeyCreate(d *schema.ResourceData, meta interface{}) error { - client := meta.(*digitalocean.Client) + client := meta.(*godo.Client) // Build up our creation options - opts := &digitalocean.CreateSSHKey{ + opts := &godo.KeyCreateRequest{ Name: d.Get("name").(string), PublicKey: d.Get("public_key").(string), } log.Printf("[DEBUG] SSH Key create configuration: %#v", opts) - id, err := client.CreateSSHKey(opts) + key, _, err := client.Keys.Create(opts) if err != nil { return fmt.Errorf("Error creating SSH Key: %s", err) } - d.SetId(id) - log.Printf("[INFO] SSH Key: %s", id) + d.SetId(strconv.Itoa(key.ID)) + log.Printf("[INFO] SSH Key: %d", key.ID) return resourceDigitalOceanSSHKeyRead(d, meta) } func resourceDigitalOceanSSHKeyRead(d *schema.ResourceData, meta interface{}) error { - client := meta.(*digitalocean.Client) + client := meta.(*godo.Client) - key, err := client.RetrieveSSHKey(d.Id()) + id, err := strconv.Atoi(d.Id()) + if err != nil { + return fmt.Errorf("invalid SSH key id: %v", err) + } + + key, _, err := client.Keys.GetByID(id) if err != nil { // If the key is somehow already destroyed, mark as // successfully 
gone @@ -84,7 +90,12 @@ func resourceDigitalOceanSSHKeyRead(d *schema.ResourceData, meta interface{}) er } func resourceDigitalOceanSSHKeyUpdate(d *schema.ResourceData, meta interface{}) error { - client := meta.(*digitalocean.Client) + client := meta.(*godo.Client) + + id, err := strconv.Atoi(d.Id()) + if err != nil { + return fmt.Errorf("invalid SSH key id: %v", err) + } var newName string if v, ok := d.GetOk("name"); ok { @@ -92,7 +103,10 @@ func resourceDigitalOceanSSHKeyUpdate(d *schema.ResourceData, meta interface{}) } log.Printf("[DEBUG] SSH key update name: %#v", newName) - err := client.RenameSSHKey(d.Id(), newName) + opts := &godo.KeyUpdateRequest{ + Name: newName, + } + _, _, err = client.Keys.UpdateByID(id, opts) if err != nil { return fmt.Errorf("Failed to update SSH key: %s", err) } @@ -101,10 +115,15 @@ func resourceDigitalOceanSSHKeyUpdate(d *schema.ResourceData, meta interface{}) } func resourceDigitalOceanSSHKeyDelete(d *schema.ResourceData, meta interface{}) error { - client := meta.(*digitalocean.Client) + client := meta.(*godo.Client) - log.Printf("[INFO] Deleting SSH key: %s", d.Id()) - err := client.DestroySSHKey(d.Id()) + id, err := strconv.Atoi(d.Id()) + if err != nil { + return fmt.Errorf("invalid SSH key id: %v", err) + } + + log.Printf("[INFO] Deleting SSH key: %d", id) + _, err = client.Keys.DeleteByID(id) if err != nil { return fmt.Errorf("Error deleting SSH key: %s", err) } diff --git a/builtin/providers/digitalocean/resource_digitalocean_ssh_key_test.go b/builtin/providers/digitalocean/resource_digitalocean_ssh_key_test.go index 009366e18..3aebe1821 100644 --- a/builtin/providers/digitalocean/resource_digitalocean_ssh_key_test.go +++ b/builtin/providers/digitalocean/resource_digitalocean_ssh_key_test.go @@ -6,13 +6,13 @@ import ( "strings" "testing" + "github.com/digitalocean/godo" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" - "github.com/pearkes/digitalocean" ) func TestAccDigitalOceanSSHKey_Basic(t *testing.T) { - var key digitalocean.SSHKey + var key godo.Key resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -35,15 +35,20 @@ func TestAccDigitalOceanSSHKey_Basic(t *testing.T) { } func testAccCheckDigitalOceanSSHKeyDestroy(s *terraform.State) error { - client := testAccProvider.Meta().(*digitalocean.Client) + client := testAccProvider.Meta().(*godo.Client) for _, rs := range s.RootModule().Resources { if rs.Type != "digitalocean_ssh_key" { continue } + id, err := strconv.Atoi(rs.Primary.ID) + if err != nil { + return err + } + // Try to find the key - _, err := client.RetrieveSSHKey(rs.Primary.ID) + _, _, err = client.Keys.GetByID(id) if err == nil { fmt.Errorf("SSH key still exists") @@ -53,7 +58,7 @@ func testAccCheckDigitalOceanSSHKeyDestroy(s *terraform.State) error { return nil } -func testAccCheckDigitalOceanSSHKeyAttributes(key *digitalocean.SSHKey) resource.TestCheckFunc { +func testAccCheckDigitalOceanSSHKeyAttributes(key *godo.Key) resource.TestCheckFunc { return func(s *terraform.State) error { if key.Name != "foobar" { @@ -64,7 +69,7 @@ func testAccCheckDigitalOceanSSHKeyAttributes(key *digitalocean.SSHKey) resource } } -func testAccCheckDigitalOceanSSHKeyExists(n string, key *digitalocean.SSHKey) resource.TestCheckFunc { +func testAccCheckDigitalOceanSSHKeyExists(n string, key *godo.Key) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] @@ -76,19 +81,25 @@ func testAccCheckDigitalOceanSSHKeyExists(n string, key 
*digitalocean.SSHKey) re return fmt.Errorf("No Record ID is set") } - client := testAccProvider.Meta().(*digitalocean.Client) + client := testAccProvider.Meta().(*godo.Client) - foundKey, err := client.RetrieveSSHKey(rs.Primary.ID) + id, err := strconv.Atoi(rs.Primary.ID) + if err != nil { + return err + } + + // Try to find the key + foundKey, _, err := client.Keys.GetByID(id) if err != nil { return err } - if strconv.Itoa(int(foundKey.Id)) != rs.Primary.ID { + if strconv.Itoa(foundKey.ID) != rs.Primary.ID { return fmt.Errorf("Record not found") } - *key = foundKey + *key = *foundKey return nil } diff --git a/builtin/providers/dme/config.go b/builtin/providers/dme/config.go index 514df0d10..cc75f120c 100644 --- a/builtin/providers/dme/config.go +++ b/builtin/providers/dme/config.go @@ -2,8 +2,10 @@ package dme import ( "fmt" - "github.com/soniah/dnsmadeeasy" "log" + + "github.com/hashicorp/go-cleanhttp" + "github.com/soniah/dnsmadeeasy" ) // Config contains DNSMadeEasy provider settings @@ -20,6 +22,8 @@ func (c *Config) Client() (*dnsmadeeasy.Client, error) { return nil, fmt.Errorf("Error setting up client: %s", err) } + client.HTTP = cleanhttp.DefaultClient() + if c.UseSandbox { client.URL = dnsmadeeasy.SandboxURL } diff --git a/builtin/providers/docker/resource_docker_container_funcs.go b/builtin/providers/docker/resource_docker_container_funcs.go index 058a4411b..aa74a4e1d 100644 --- a/builtin/providers/docker/resource_docker_container_funcs.go +++ b/builtin/providers/docker/resource_docker_container_funcs.go @@ -148,7 +148,7 @@ func resourceDockerContainerRead(d *schema.ResourceData, meta interface{}) error } if container.State.Running || - (!container.State.Running && !d.Get("must_run").(bool)) { + !container.State.Running && !d.Get("must_run").(bool) { break } diff --git a/builtin/providers/docker/resource_docker_image_funcs.go b/builtin/providers/docker/resource_docker_image_funcs.go index f45dd2226..454113c5f 100644 --- a/builtin/providers/docker/resource_docker_image_funcs.go +++ b/builtin/providers/docker/resource_docker_image_funcs.go @@ -83,7 +83,7 @@ func pullImage(data *Data, client *dc.Client, image string) error { splitPortRepo := strings.Split(splitImageName[1], "/") pullOpts.Registry = splitImageName[0] + ":" + splitPortRepo[0] pullOpts.Tag = splitImageName[2] - pullOpts.Repository = strings.Join(splitPortRepo[1:], "/") + pullOpts.Repository = pullOpts.Registry + "/" + strings.Join(splitPortRepo[1:], "/") // It's either registry:port/username/repo, registry:port/repo, // or repo:tag with default registry @@ -98,7 +98,7 @@ func pullImage(data *Data, client *dc.Client, image string) error { // registry:port/username/repo or registry:port/repo default: pullOpts.Registry = splitImageName[0] + ":" + splitPortRepo[0] - pullOpts.Repository = strings.Join(splitPortRepo[1:], "/") + pullOpts.Repository = pullOpts.Registry + "/" + strings.Join(splitPortRepo[1:], "/") pullOpts.Tag = "latest" } diff --git a/builtin/providers/docker/resource_docker_image_test.go b/builtin/providers/docker/resource_docker_image_test.go index 14dfb29b7..0f0f0707a 100644 --- a/builtin/providers/docker/resource_docker_image_test.go +++ b/builtin/providers/docker/resource_docker_image_test.go @@ -24,9 +24,34 @@ func TestAccDockerImage_basic(t *testing.T) { }) } +func TestAccDockerImage_private(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: 
testAccDockerPrivateImageConfig, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr( + "docker_image.foobar", + "latest", + "2c40b0526b6358710fd09e7b8c022429268cc61703b4777e528ac9d469a07ca1"), + ), + }, + }, + }) +} + const testAccDockerImageConfig = ` resource "docker_image" "foo" { name = "ubuntu:trusty-20150320" keep_updated = true } ` + +const testAccDockerPrivateImageConfig = ` +resource "docker_image" "foobar" { + name = "gcr.io:443/google_containers/pause:0.8.0" + keep_updated = true +} +` diff --git a/builtin/providers/google/compute_operation.go b/builtin/providers/google/compute_operation.go new file mode 100644 index 000000000..987e983b4 --- /dev/null +++ b/builtin/providers/google/compute_operation.go @@ -0,0 +1,158 @@ +package google + +import ( + "bytes" + "fmt" + "log" + "time" + + "github.com/hashicorp/terraform/helper/resource" + "google.golang.org/api/compute/v1" +) + +// ComputeOperationWaitType is an enum specifying what type of operation +// we're waiting on. +type ComputeOperationWaitType byte + +const ( + ComputeOperationWaitInvalid ComputeOperationWaitType = iota + ComputeOperationWaitGlobal + ComputeOperationWaitRegion + ComputeOperationWaitZone +) + +type ComputeOperationWaiter struct { + Service *compute.Service + Op *compute.Operation + Project string + Region string + Type ComputeOperationWaitType + Zone string +} + +func (w *ComputeOperationWaiter) RefreshFunc() resource.StateRefreshFunc { + return func() (interface{}, string, error) { + var op *compute.Operation + var err error + + switch w.Type { + case ComputeOperationWaitGlobal: + op, err = w.Service.GlobalOperations.Get( + w.Project, w.Op.Name).Do() + case ComputeOperationWaitRegion: + op, err = w.Service.RegionOperations.Get( + w.Project, w.Region, w.Op.Name).Do() + case ComputeOperationWaitZone: + op, err = w.Service.ZoneOperations.Get( + w.Project, w.Zone, w.Op.Name).Do() + default: + return nil, "bad-type", fmt.Errorf( + "Invalid wait type: %#v", w.Type) + } + + if err != nil { + return nil, "", err + } + + log.Printf("[DEBUG] Got %q when asking for operation %q", op.Status, w.Op.Name) + + return op, op.Status, nil + } +} + +func (w *ComputeOperationWaiter) Conf() *resource.StateChangeConf { + return &resource.StateChangeConf{ + Pending: []string{"PENDING", "RUNNING"}, + Target: "DONE", + Refresh: w.RefreshFunc(), + } +} + 
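This new file is the consolidation point for the whole diff: every mutating call in the Google provider returns a compute.Operation, and the waiter polls the matching GlobalOperations/RegionOperations/ZoneOperations Get endpoint through resource.StateChangeConf until the status leaves PENDING/RUNNING for DONE. A minimal sketch of a caller, assuming the provider's *Config from config.go; the resource and name are illustrative, not part of this change:

	// Kick off a global operation, then block on it. The helpers defined
	// below give up after their 4-minute timeout and surface op.Error as
	// a ComputeOperationError when the operation itself failed.
	func exampleCreateNetwork(config *Config) error {
		op, err := config.clientCompute.Networks.Insert(
			config.Project, &compute.Network{Name: "example"}).Do()
		if err != nil {
			return fmt.Errorf("Error creating network: %s", err)
		}
		return computeOperationWaitGlobal(config, op, "Creating Network")
	}

+// ComputeOperationError wraps compute.OperationError and implements the
+// error interface so it can be returned.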
+type ComputeOperationError compute.OperationError + +func (e ComputeOperationError) Error() string { + var buf bytes.Buffer + + for _, err := range e.Errors { + buf.WriteString(err.Message + "\n") + } + + return buf.String() +} + +func computeOperationWaitGlobal(config *Config, op *compute.Operation, activity string) error { + w := &ComputeOperationWaiter{ + Service: config.clientCompute, + Op: op, + Project: config.Project, + Type: ComputeOperationWaitGlobal, + } + + state := w.Conf() + state.Delay = 10 * time.Second + state.Timeout = 4 * time.Minute + state.MinTimeout = 2 * time.Second + opRaw, err := state.WaitForState() + if err != nil { + return fmt.Errorf("Error waiting for %s: %s", activity, err) + } + + op = opRaw.(*compute.Operation) + if op.Error != nil { + return ComputeOperationError(*op.Error) + } + + return nil +} + +func computeOperationWaitRegion(config *Config, op *compute.Operation, region, activity string) error { + w := &ComputeOperationWaiter{ + Service: config.clientCompute, + Op: op, + Project: config.Project, + Type: ComputeOperationWaitRegion, + Region: region, + } + + state := w.Conf() + state.Delay = 10 * time.Second + state.Timeout = 4 * time.Minute + state.MinTimeout = 2 * time.Second + opRaw, err := state.WaitForState() + if err != nil { + return fmt.Errorf("Error waiting for %s: %s", activity, err) + } + + op = opRaw.(*compute.Operation) + if op.Error != nil { + return ComputeOperationError(*op.Error) + } + + return nil +} + +func computeOperationWaitZone(config *Config, op *compute.Operation, zone, activity string) error { + w := &ComputeOperationWaiter{ + Service: config.clientCompute, + Op: op, + Project: config.Project, + Zone: zone, + Type: ComputeOperationWaitZone, + } + state := w.Conf() + state.Delay = 10 * time.Second + state.Timeout = 4 * time.Minute + state.MinTimeout = 2 * time.Second + opRaw, err := state.WaitForState() + if err != nil { + return fmt.Errorf("Error waiting for %s: %s", activity, err) + } + op = opRaw.(*compute.Operation) + if op.Error != nil { + // Return the error + return ComputeOperationError(*op.Error) + } + return nil +} diff --git a/builtin/providers/google/config.go b/builtin/providers/google/config.go index 6bfa3553d..567ab1322 100644 --- a/builtin/providers/google/config.go +++ b/builtin/providers/google/config.go @@ -10,8 +10,7 @@ import ( "runtime" "strings" - // TODO(dcunnin): Use version code from version.go - // "github.com/hashicorp/terraform" + "github.com/hashicorp/terraform/terraform" "golang.org/x/oauth2" "golang.org/x/oauth2/google" "golang.org/x/oauth2/jwt" @@ -36,6 +35,13 @@ type Config struct { func (c *Config) loadAndValidate() error { var account accountFile + clientScopes := []string{ + "https://www.googleapis.com/auth/compute", + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/ndev.clouddns.readwrite", + "https://www.googleapis.com/auth/devstorage.full_control", + } + if c.AccountFile == "" { c.AccountFile = os.Getenv("GOOGLE_ACCOUNT_FILE") @@ -79,13 +85,6 @@ func (c *Config) loadAndValidate() error { } } - clientScopes := []string{ - "https://www.googleapis.com/auth/compute", - "https://www.googleapis.com/auth/cloud-platform", - "https://www.googleapis.com/auth/ndev.clouddns.readwrite", - "https://www.googleapis.com/auth/devstorage.full_control", - } - // Get the token for use in our requests log.Printf("[INFO] Requesting Google token...") log.Printf("[INFO] -- Email: %s", account.ClientEmail) @@ -105,25 +104,19 @@ func (c *Config) loadAndValidate() error { client = 
conf.Client(oauth2.NoContext) } else { - log.Printf("[INFO] Requesting Google token via GCE Service Role...") - client = &http.Client{ - Transport: &oauth2.Transport{ - // Fetch from Google Compute Engine's metadata server to retrieve - // an access token for the provided account. - // If no account is specified, "default" is used. - Source: google.ComputeTokenSource(""), - }, + log.Printf("[INFO] Authenticating using DefaultClient") + var err error + client, err = google.DefaultClient(oauth2.NoContext, clientScopes...) + if err != nil { + return err } - } - // Build UserAgent - versionString := "0.0.0" - // TODO(dcunnin): Use Terraform's version code from version.go - // versionString := main.Version - // if main.VersionPrerelease != "" { - // versionString = fmt.Sprintf("%s-%s", versionString, main.VersionPrerelease) - // } + versionString := terraform.Version + prerelease := terraform.VersionPrerelease + if len(prerelease) > 0 { + versionString = fmt.Sprintf("%s-%s", versionString, prerelease) + } userAgent := fmt.Sprintf( "(%s %s) Terraform/%s", runtime.GOOS, runtime.GOARCH, versionString) diff --git a/builtin/providers/google/metadata.go b/builtin/providers/google/metadata.go index bc609ac88..e75c45022 100644 --- a/builtin/providers/google/metadata.go +++ b/builtin/providers/google/metadata.go @@ -23,7 +23,7 @@ func MetadataRetryWrapper(update func() error) error { } } - return fmt.Errorf("Failed to update metadata after %d retries", attempt); + return fmt.Errorf("Failed to update metadata after %d retries", attempt) } // Update the metadata (serverMD) according to the provided diff (oldMDMap v @@ -51,7 +51,7 @@ func MetadataUpdate(oldMDMap map[string]interface{}, newMDMap map[string]interfa // Reformat old metadata into a list serverMD.Items = nil for key, val := range curMDMap { - v := val; + v := val serverMD.Items = append(serverMD.Items, &compute.MetadataItems{ Key: key, Value: &v, @@ -60,7 +60,7 @@ func MetadataUpdate(oldMDMap map[string]interface{}, newMDMap map[string]interfa } // Format metadata from the server data format -> schema data format -func MetadataFormatSchema(md *compute.Metadata) (map[string]interface{}) { +func MetadataFormatSchema(md *compute.Metadata) map[string]interface{} { newMD := make(map[string]interface{}) for _, kv := range md.Items { diff --git a/builtin/providers/google/operation.go b/builtin/providers/google/operation.go deleted file mode 100644 index 0971e3f5b..000000000 --- a/builtin/providers/google/operation.go +++ /dev/null @@ -1,82 +0,0 @@ -package google - -import ( - "bytes" - "fmt" - "log" - - "github.com/hashicorp/terraform/helper/resource" - "google.golang.org/api/compute/v1" -) - 
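The else branch above is the "simplified auth (DefaultClient support)" item from the changelog: instead of hand-building an oauth2.Transport over the GCE metadata token source, the provider now asks for Application Default Credentials, with the scopes hoisted to the top of loadAndValidate. A standalone sketch of the same call, assuming credentials are discoverable (GOOGLE_APPLICATION_CREDENTIALS, the gcloud SDK, or GCE metadata; one scope shown for brevity):

	package main

	import (
		"log"

		"golang.org/x/oauth2"
		"golang.org/x/oauth2/google"
	)

	func main() {
		// DefaultClient walks the Application Default Credentials chain and
		// returns an *http.Client that attaches tokens to every request.
		client, err := google.DefaultClient(oauth2.NoContext,
			"https://www.googleapis.com/auth/compute")
		if err != nil {
			log.Fatal(err)
		}
		_ = client // e.g. hand this to compute.New(client)
	}

With that in place, the hand-rolled waiter in operation.go is deleted below in favor of the Compute-prefixed copy added earlier:

-// OperationWaitType is an enum specifying what type of operation
-// we're waiting on.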
-type OperationWaitType byte - -const ( - OperationWaitInvalid OperationWaitType = iota - OperationWaitGlobal - OperationWaitRegion - OperationWaitZone -) - -type OperationWaiter struct { - Service *compute.Service - Op *compute.Operation - Project string - Region string - Type OperationWaitType - Zone string -} - -func (w *OperationWaiter) RefreshFunc() resource.StateRefreshFunc { - return func() (interface{}, string, error) { - var op *compute.Operation - var err error - - switch w.Type { - case OperationWaitGlobal: - op, err = w.Service.GlobalOperations.Get( - w.Project, w.Op.Name).Do() - case OperationWaitRegion: - op, err = w.Service.RegionOperations.Get( - w.Project, w.Region, w.Op.Name).Do() - case OperationWaitZone: - op, err = w.Service.ZoneOperations.Get( - w.Project, w.Zone, w.Op.Name).Do() - default: - return nil, "bad-type", fmt.Errorf( - "Invalid wait type: %#v", w.Type) - } - - if err != nil { - return nil, "", err - } - - log.Printf("[DEBUG] Got %q when asking for operation %q", op.Status, w.Op.Name) - - return op, op.Status, nil - } -} - -func (w *OperationWaiter) Conf() *resource.StateChangeConf { - return &resource.StateChangeConf{ - Pending: []string{"PENDING", "RUNNING"}, - Target: "DONE", - Refresh: w.RefreshFunc(), - } -} - -// OperationError wraps compute.OperationError and implements the -// error interface so it can be returned. -type OperationError compute.OperationError - -func (e OperationError) Error() string { - var buf bytes.Buffer - - for _, err := range e.Errors { - buf.WriteString(err.Message + "\n") - } - - return buf.String() -} diff --git a/builtin/providers/google/provider.go b/builtin/providers/google/provider.go index a023b81c9..acafd851c 100644 --- a/builtin/providers/google/provider.go +++ b/builtin/providers/google/provider.go @@ -15,7 +15,7 @@ func Provider() terraform.ResourceProvider { Schema: map[string]*schema.Schema{ "account_file": &schema.Schema{ Type: schema.TypeString, - Required: true, + Optional: true, DefaultFunc: schema.EnvDefaultFunc("GOOGLE_ACCOUNT_FILE", nil), ValidateFunc: validateAccountFile, }, @@ -54,7 +54,9 @@ func Provider() terraform.ResourceProvider { "google_dns_record_set": resourceDnsRecordSet(), "google_compute_instance_group_manager": resourceComputeInstanceGroupManager(), "google_storage_bucket": resourceStorageBucket(), + "google_storage_bucket_acl": resourceStorageBucketAcl(), "google_storage_bucket_object": resourceStorageBucketObject(), + "google_storage_object_acl": resourceStorageObjectAcl(), }, ConfigureFunc: providerConfigure, @@ -76,6 +78,10 @@ func providerConfigure(d *schema.ResourceData) (interface{}, error) { } func validateAccountFile(v interface{}, k string) (warnings []string, errors []error) { + if v == nil { + return + } + value := v.(string) if value == "" { diff --git a/builtin/providers/google/resource_compute_address.go b/builtin/providers/google/resource_compute_address.go index 721d67d14..0027df230 100644 --- a/builtin/providers/google/resource_compute_address.go +++ b/builtin/providers/google/resource_compute_address.go @@ -3,7 +3,6 @@ package google import ( "fmt" "log" - "time" "github.com/hashicorp/terraform/helper/schema" "google.golang.org/api/compute/v1" @@ -65,28 +64,9 @@ func resourceComputeAddressCreate(d *schema.ResourceData, meta interface{}) erro // It probably maybe worked, so store the ID now d.SetId(addr.Name) - // Wait for the operation to complete - w := &OperationWaiter{ - Service: config.clientCompute, - Op: op, - Project: config.Project, - Region: region, - Type: 
OperationWaitRegion, - } - state := w.Conf() - state.Timeout = 2 * time.Minute - state.MinTimeout = 1 * time.Second - opRaw, err := state.WaitForState() + err = computeOperationWaitRegion(config, op, region, "Creating Address") if err != nil { - return fmt.Errorf("Error waiting for address to create: %s", err) - } - op = opRaw.(*compute.Operation) - if op.Error != nil { - // The resource didn't actually create - d.SetId("") - - // Return the error - return OperationError(*op.Error) + return err } return resourceComputeAddressRead(d, meta) @@ -128,25 +108,9 @@ func resourceComputeAddressDelete(d *schema.ResourceData, meta interface{}) erro return fmt.Errorf("Error deleting address: %s", err) } - // Wait for the operation to complete - w := &OperationWaiter{ - Service: config.clientCompute, - Op: op, - Project: config.Project, - Region: region, - Type: OperationWaitRegion, - } - state := w.Conf() - state.Timeout = 2 * time.Minute - state.MinTimeout = 1 * time.Second - opRaw, err := state.WaitForState() + err = computeOperationWaitRegion(config, op, region, "Deleting Address") if err != nil { - return fmt.Errorf("Error waiting for address to delete: %s", err) - } - op = opRaw.(*compute.Operation) - if op.Error != nil { - // Return the error - return OperationError(*op.Error) + return err } d.SetId("") diff --git a/builtin/providers/google/resource_compute_autoscaler.go b/builtin/providers/google/resource_compute_autoscaler.go index 10b7c84ef..8539c62b3 100644 --- a/builtin/providers/google/resource_compute_autoscaler.go +++ b/builtin/providers/google/resource_compute_autoscaler.go @@ -3,7 +3,6 @@ package google import ( "fmt" "log" - "time" "github.com/hashicorp/terraform/helper/schema" "google.golang.org/api/compute/v1" @@ -224,28 +223,9 @@ func resourceComputeAutoscalerCreate(d *schema.ResourceData, meta interface{}) e // It probably maybe worked, so store the ID now d.SetId(scaler.Name) - // Wait for the operation to complete - w := &OperationWaiter{ - Service: config.clientCompute, - Op: op, - Project: config.Project, - Type: OperationWaitZone, - Zone: zone.Name, - } - state := w.Conf() - state.Timeout = 2 * time.Minute - state.MinTimeout = 1 * time.Second - opRaw, err := state.WaitForState() + err = computeOperationWaitZone(config, op, zone.Name, "Creating Autoscaler") if err != nil { - return fmt.Errorf("Error waiting for Autoscaler to create: %s", err) - } - op = opRaw.(*compute.Operation) - if op.Error != nil { - // The resource didn't actually create - d.SetId("") - - // Return the error - return OperationError(*op.Error) + return err } return resourceComputeAutoscalerRead(d, meta) @@ -292,25 +272,9 @@ func resourceComputeAutoscalerUpdate(d *schema.ResourceData, meta interface{}) e // It probably maybe worked, so store the ID now d.SetId(scaler.Name) - // Wait for the operation to complete - w := &OperationWaiter{ - Service: config.clientCompute, - Op: op, - Project: config.Project, - Type: OperationWaitZone, - Zone: zone, - } - state := w.Conf() - state.Timeout = 2 * time.Minute - state.MinTimeout = 1 * time.Second - opRaw, err := state.WaitForState() + err = computeOperationWaitZone(config, op, zone, "Updating Autoscaler") if err != nil { - return fmt.Errorf("Error waiting for Autoscaler to update: %s", err) - } - op = opRaw.(*compute.Operation) - if op.Error != nil { - // Return the error - return OperationError(*op.Error) + return err } return resourceComputeAutoscalerRead(d, meta) @@ -326,25 +290,9 @@ func resourceComputeAutoscalerDelete(d *schema.ResourceData, meta interface{}) 
e return fmt.Errorf("Error deleting autoscaler: %s", err) } - // Wait for the operation to complete - w := &OperationWaiter{ - Service: config.clientCompute, - Op: op, - Project: config.Project, - Type: OperationWaitZone, - Zone: zone, - } - state := w.Conf() - state.Timeout = 2 * time.Minute - state.MinTimeout = 1 * time.Second - opRaw, err := state.WaitForState() + err = computeOperationWaitZone(config, op, zone, "Deleting Autoscaler") if err != nil { - return fmt.Errorf("Error waiting for Autoscaler to delete: %s", err) - } - op = opRaw.(*compute.Operation) - if op.Error != nil { - // Return the error - return OperationError(*op.Error) + return err } d.SetId("") diff --git a/builtin/providers/google/resource_compute_backend_service.go b/builtin/providers/google/resource_compute_backend_service.go index a8826f8e4..ead6e2402 100644 --- a/builtin/providers/google/resource_compute_backend_service.go +++ b/builtin/providers/google/resource_compute_backend_service.go @@ -5,7 +5,6 @@ import ( "fmt" "log" "regexp" - "time" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/schema" @@ -67,6 +66,12 @@ func resourceComputeBackendService() *schema.Resource { Optional: true, }, + "region": &schema.Schema{ + Type: schema.TypeString, + ForceNew: true, + Optional: true, + }, + "health_checks": &schema.Schema{ Type: schema.TypeSet, Elem: &schema.Schema{Type: schema.TypeString}, @@ -165,28 +170,9 @@ func resourceComputeBackendServiceCreate(d *schema.ResourceData, meta interface{ d.SetId(service.Name) - // Wait for the operation to complete - w := &OperationWaiter{ - Service: config.clientCompute, - Op: op, - Project: config.Project, - Region: config.Region, - Type: OperationWaitGlobal, - } - state := w.Conf() - state.Timeout = 2 * time.Minute - state.MinTimeout = 1 * time.Second - opRaw, err := state.WaitForState() + err = computeOperationWaitGlobal(config, op, "Creating Backend Service") if err != nil { - return fmt.Errorf("Error waiting for backend service to create: %s", err) - } - op = opRaw.(*compute.Operation) - if op.Error != nil { - // The resource didn't actually create - d.SetId("") - - // Return the error - return OperationError(*op.Error) + return err } return resourceComputeBackendServiceRead(d, meta) @@ -261,25 +247,9 @@ func resourceComputeBackendServiceUpdate(d *schema.ResourceData, meta interface{ d.SetId(service.Name) - // Wait for the operation to complete - w := &OperationWaiter{ - Service: config.clientCompute, - Op: op, - Project: config.Project, - Region: config.Region, - Type: OperationWaitGlobal, - } - state := w.Conf() - state.Timeout = 2 * time.Minute - state.MinTimeout = 1 * time.Second - opRaw, err := state.WaitForState() + err = computeOperationWaitGlobal(config, op, "Updating Backend Service") if err != nil { - return fmt.Errorf("Error waiting for backend service to update: %s", err) - } - op = opRaw.(*compute.Operation) - if op.Error != nil { - // Return the error - return OperationError(*op.Error) + return err } return resourceComputeBackendServiceRead(d, meta) @@ -295,25 +265,9 @@ func resourceComputeBackendServiceDelete(d *schema.ResourceData, meta interface{ return fmt.Errorf("Error deleting backend service: %s", err) } - // Wait for the operation to complete - w := &OperationWaiter{ - Service: config.clientCompute, - Op: op, - Project: config.Project, - Region: config.Region, - Type: OperationWaitGlobal, - } - state := w.Conf() - state.Timeout = 2 * time.Minute - state.MinTimeout = 1 * time.Second - opRaw, err := 
state.WaitForState() + err = computeOperationWaitGlobal(config, op, "Deleting Backend Service") if err != nil { - return fmt.Errorf("Error waiting for backend service to delete: %s", err) - } - op = opRaw.(*compute.Operation) - if op.Error != nil { - // Return the error - return OperationError(*op.Error) + return err } d.SetId("") diff --git a/builtin/providers/google/resource_compute_disk.go b/builtin/providers/google/resource_compute_disk.go index 7202e45d9..1118702d6 100644 --- a/builtin/providers/google/resource_compute_disk.go +++ b/builtin/providers/google/resource_compute_disk.go @@ -3,7 +3,6 @@ package google import ( "fmt" "log" - "time" "github.com/hashicorp/terraform/helper/schema" "google.golang.org/api/compute/v1" @@ -128,37 +127,10 @@ func resourceComputeDiskCreate(d *schema.ResourceData, meta interface{}) error { // It probably maybe worked, so store the ID now d.SetId(disk.Name) - // Wait for the operation to complete - w := &OperationWaiter{ - Service: config.clientCompute, - Op: op, - Project: config.Project, - Zone: d.Get("zone").(string), - Type: OperationWaitZone, - } - state := w.Conf() - - if disk.SourceSnapshot != "" { - //creating disk from snapshot takes some time - state.Timeout = 10 * time.Minute - } else { - state.Timeout = 2 * time.Minute - } - - state.MinTimeout = 1 * time.Second - opRaw, err := state.WaitForState() + err = computeOperationWaitZone(config, op, d.Get("zone").(string), "Creating Disk") if err != nil { - return fmt.Errorf("Error waiting for disk to create: %s", err) + return err } - op = opRaw.(*compute.Operation) - if op.Error != nil { - // The resource didn't actually create - d.SetId("") - - // Return the error - return OperationError(*op.Error) - } - return resourceComputeDiskRead(d, meta) } @@ -193,25 +165,10 @@ func resourceComputeDiskDelete(d *schema.ResourceData, meta interface{}) error { return fmt.Errorf("Error deleting disk: %s", err) } - // Wait for the operation to complete - w := &OperationWaiter{ - Service: config.clientCompute, - Op: op, - Project: config.Project, - Zone: d.Get("zone").(string), - Type: OperationWaitZone, - } - state := w.Conf() - state.Timeout = 2 * time.Minute - state.MinTimeout = 1 * time.Second - opRaw, err := state.WaitForState() + zone := d.Get("zone").(string) + err = computeOperationWaitZone(config, op, zone, "Deleting Disk") if err != nil { - return fmt.Errorf("Error waiting for disk to delete: %s", err) - } - op = opRaw.(*compute.Operation) - if op.Error != nil { - // Return the error - return OperationError(*op.Error) + return err } d.SetId("") diff --git a/builtin/providers/google/resource_compute_firewall.go b/builtin/providers/google/resource_compute_firewall.go index 2a2433a87..1cec2c826 100644 --- a/builtin/providers/google/resource_compute_firewall.go +++ b/builtin/providers/google/resource_compute_firewall.go @@ -4,7 +4,6 @@ import ( "bytes" "fmt" "sort" - "time" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/schema" @@ -135,27 +134,9 @@ func resourceComputeFirewallCreate(d *schema.ResourceData, meta interface{}) err // It probably maybe worked, so store the ID now d.SetId(firewall.Name) - // Wait for the operation to complete - w := &OperationWaiter{ - Service: config.clientCompute, - Op: op, - Project: config.Project, - Type: OperationWaitGlobal, - } - state := w.Conf() - state.Timeout = 2 * time.Minute - state.MinTimeout = 1 * time.Second - opRaw, err := state.WaitForState() + err = computeOperationWaitGlobal(config, op, "Creating Firewall") if err != 
nil { - return fmt.Errorf("Error waiting for firewall to create: %s", err) - } - op = opRaw.(*compute.Operation) - if op.Error != nil { - // The resource didn't actually create - d.SetId("") - - // Return the error - return OperationError(*op.Error) + return err } return resourceComputeFirewallRead(d, meta) @@ -198,24 +179,9 @@ func resourceComputeFirewallUpdate(d *schema.ResourceData, meta interface{}) err return fmt.Errorf("Error updating firewall: %s", err) } - // Wait for the operation to complete - w := &OperationWaiter{ - Service: config.clientCompute, - Op: op, - Project: config.Project, - Type: OperationWaitGlobal, - } - state := w.Conf() - state.Timeout = 2 * time.Minute - state.MinTimeout = 1 * time.Second - opRaw, err := state.WaitForState() + err = computeOperationWaitGlobal(config, op, "Updating Firewall") if err != nil { - return fmt.Errorf("Error waiting for firewall to update: %s", err) - } - op = opRaw.(*compute.Operation) - if op.Error != nil { - // Return the error - return OperationError(*op.Error) + return err } d.Partial(false) @@ -233,24 +199,9 @@ func resourceComputeFirewallDelete(d *schema.ResourceData, meta interface{}) err return fmt.Errorf("Error deleting firewall: %s", err) } - // Wait for the operation to complete - w := &OperationWaiter{ - Service: config.clientCompute, - Op: op, - Project: config.Project, - Type: OperationWaitGlobal, - } - state := w.Conf() - state.Timeout = 2 * time.Minute - state.MinTimeout = 1 * time.Second - opRaw, err := state.WaitForState() + err = computeOperationWaitGlobal(config, op, "Deleting Firewall") if err != nil { - return fmt.Errorf("Error waiting for firewall to delete: %s", err) - } - op = opRaw.(*compute.Operation) - if op.Error != nil { - // Return the error - return OperationError(*op.Error) + return err } d.SetId("") diff --git a/builtin/providers/google/resource_compute_forwarding_rule.go b/builtin/providers/google/resource_compute_forwarding_rule.go index 0c905ead7..ac4851e51 100644 --- a/builtin/providers/google/resource_compute_forwarding_rule.go +++ b/builtin/providers/google/resource_compute_forwarding_rule.go @@ -3,7 +3,6 @@ package google import ( "fmt" "log" - "time" "github.com/hashicorp/terraform/helper/schema" "google.golang.org/api/compute/v1" @@ -94,28 +93,9 @@ func resourceComputeForwardingRuleCreate(d *schema.ResourceData, meta interface{ // It probably maybe worked, so store the ID now d.SetId(frule.Name) - // Wait for the operation to complete - w := &OperationWaiter{ - Service: config.clientCompute, - Op: op, - Region: region, - Project: config.Project, - Type: OperationWaitRegion, - } - state := w.Conf() - state.Timeout = 2 * time.Minute - state.MinTimeout = 1 * time.Second - opRaw, err := state.WaitForState() + err = computeOperationWaitRegion(config, op, region, "Creating Forwarding Rule") if err != nil { - return fmt.Errorf("Error waiting for ForwardingRule to create: %s", err) - } - op = opRaw.(*compute.Operation) - if op.Error != nil { - // The resource didn't actually create - d.SetId("") - - // Return the error - return OperationError(*op.Error) + return err } return resourceComputeForwardingRuleRead(d, meta) @@ -137,29 +117,11 @@ func resourceComputeForwardingRuleUpdate(d *schema.ResourceData, meta interface{ return fmt.Errorf("Error updating target: %s", err) } - // Wait for the operation to complete - w := &OperationWaiter{ - Service: config.clientCompute, - Op: op, - Region: region, - Project: config.Project, - Type: OperationWaitRegion, - } - state := w.Conf() - state.Timeout = 2 * 
time.Minute - state.MinTimeout = 1 * time.Second - opRaw, err := state.WaitForState() + err = computeOperationWaitRegion(config, op, region, "Updating Forwarding Rule") if err != nil { - return fmt.Errorf("Error waiting for ForwardingRule to update target: %s", err) + return err } - op = opRaw.(*compute.Operation) - if op.Error != nil { - // The resource didn't actually create - d.SetId("") - // Return the error - return OperationError(*op.Error) - } d.SetPartial("target") } @@ -206,25 +168,9 @@ func resourceComputeForwardingRuleDelete(d *schema.ResourceData, meta interface{ return fmt.Errorf("Error deleting ForwardingRule: %s", err) } - // Wait for the operation to complete - w := &OperationWaiter{ - Service: config.clientCompute, - Op: op, - Region: region, - Project: config.Project, - Type: OperationWaitRegion, - } - state := w.Conf() - state.Timeout = 2 * time.Minute - state.MinTimeout = 1 * time.Second - opRaw, err := state.WaitForState() + err = computeOperationWaitRegion(config, op, region, "Deleting Forwarding Rule") if err != nil { - return fmt.Errorf("Error waiting for ForwardingRule to delete: %s", err) - } - op = opRaw.(*compute.Operation) - if op.Error != nil { - // Return the error - return OperationError(*op.Error) + return err } d.SetId("") diff --git a/builtin/providers/google/resource_compute_http_health_check.go b/builtin/providers/google/resource_compute_http_health_check.go index 4dfe3a03d..c53267afd 100644 --- a/builtin/providers/google/resource_compute_http_health_check.go +++ b/builtin/providers/google/resource_compute_http_health_check.go @@ -3,7 +3,6 @@ package google import ( "fmt" "log" - "time" "github.com/hashicorp/terraform/helper/schema" "google.golang.org/api/compute/v1" @@ -121,27 +120,9 @@ func resourceComputeHttpHealthCheckCreate(d *schema.ResourceData, meta interface // It probably maybe worked, so store the ID now d.SetId(hchk.Name) - // Wait for the operation to complete - w := &OperationWaiter{ - Service: config.clientCompute, - Op: op, - Project: config.Project, - Type: OperationWaitGlobal, - } - state := w.Conf() - state.Timeout = 2 * time.Minute - state.MinTimeout = 1 * time.Second - opRaw, err := state.WaitForState() + err = computeOperationWaitGlobal(config, op, "Creating Http Health Check") if err != nil { - return fmt.Errorf("Error waiting for HttpHealthCheck to create: %s", err) - } - op = opRaw.(*compute.Operation) - if op.Error != nil { - // The resource didn't actually create - d.SetId("") - - // Return the error - return OperationError(*op.Error) + return err } return resourceComputeHttpHealthCheckRead(d, meta) @@ -190,27 +171,9 @@ func resourceComputeHttpHealthCheckUpdate(d *schema.ResourceData, meta interface // It probably maybe worked, so store the ID now d.SetId(hchk.Name) - // Wait for the operation to complete - w := &OperationWaiter{ - Service: config.clientCompute, - Op: op, - Project: config.Project, - Type: OperationWaitGlobal, - } - state := w.Conf() - state.Timeout = 2 * time.Minute - state.MinTimeout = 1 * time.Second - opRaw, err := state.WaitForState() + err = computeOperationWaitGlobal(config, op, "Updating Http Health Check") if err != nil { - return fmt.Errorf("Error waiting for HttpHealthCheck to patch: %s", err) - } - op = opRaw.(*compute.Operation) - if op.Error != nil { - // The resource didn't actually create - d.SetId("") - - // Return the error - return OperationError(*op.Error) + return err } return resourceComputeHttpHealthCheckRead(d, meta) @@ -254,24 +217,9 @@ func resourceComputeHttpHealthCheckDelete(d 
*schema.ResourceData, meta interface return fmt.Errorf("Error deleting HttpHealthCheck: %s", err) } - // Wait for the operation to complete - w := &OperationWaiter{ - Service: config.clientCompute, - Op: op, - Project: config.Project, - Type: OperationWaitGlobal, - } - state := w.Conf() - state.Timeout = 2 * time.Minute - state.MinTimeout = 1 * time.Second - opRaw, err := state.WaitForState() + err = computeOperationWaitGlobal(config, op, "Deleting Http Health Check") if err != nil { - return fmt.Errorf("Error waiting for HttpHealthCheck to delete: %s", err) - } - op = opRaw.(*compute.Operation) - if op.Error != nil { - // Return the error - return OperationError(*op.Error) + return err } d.SetId("") diff --git a/builtin/providers/google/resource_compute_instance.go b/builtin/providers/google/resource_compute_instance.go index 2a03a7f94..e3f002404 100644 --- a/builtin/providers/google/resource_compute_instance.go +++ b/builtin/providers/google/resource_compute_instance.go @@ -3,7 +3,6 @@ package google import ( "fmt" "log" - "time" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/schema" @@ -198,9 +197,10 @@ func resourceComputeInstance() *schema.Resource { }, "metadata": &schema.Schema{ - Type: schema.TypeMap, - Optional: true, - Elem: schema.TypeString, + Type: schema.TypeMap, + Optional: true, + Elem: schema.TypeString, + ValidateFunc: validateInstanceMetadata, }, "service_account": &schema.Schema{ @@ -231,6 +231,29 @@ func resourceComputeInstance() *schema.Resource { }, }, + "scheduling": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "on_host_maintenance": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + + "automatic_restart": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + }, + + "preemptible": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + }, + }, + }, + }, + "tags": &schema.Schema{ Type: schema.TypeSet, Optional: true, @@ -273,32 +296,6 @@ func getInstance(config *Config, d *schema.ResourceData) (*compute.Instance, err return instance, nil } -func resourceOperationWaitZone( - config *Config, op *compute.Operation, zone string, activity string) error { - - w := &OperationWaiter{ - Service: config.clientCompute, - Op: op, - Project: config.Project, - Zone: zone, - Type: OperationWaitZone, - } - state := w.Conf() - state.Delay = 10 * time.Second - state.Timeout = 10 * time.Minute - state.MinTimeout = 2 * time.Second - opRaw, err := state.WaitForState() - if err != nil { - return fmt.Errorf("Error waiting for %s: %s", activity, err) - } - op = opRaw.(*compute.Operation) - if op.Error != nil { - // Return the error - return OperationError(*op.Error) - } - return nil -} - func resourceComputeInstanceCreate(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) @@ -492,6 +489,21 @@ func resourceComputeInstanceCreate(d *schema.ResourceData, meta interface{}) err serviceAccounts = append(serviceAccounts, serviceAccount) } + prefix := "scheduling.0" + scheduling := &compute.Scheduling{} + + if val, ok := d.GetOk(prefix + ".automatic_restart"); ok { + scheduling.AutomaticRestart = val.(bool) + } + + if val, ok := d.GetOk(prefix + ".preemptible"); ok { + scheduling.Preemptible = val.(bool) + } + + if val, ok := d.GetOk(prefix + ".on_host_maintenance"); ok { + scheduling.OnHostMaintenance = val.(string) + } + metadata, err := resourceInstanceMetadata(d) if err != nil { return fmt.Errorf("Error creating metadata: 
%s", err) @@ -508,6 +520,7 @@ func resourceComputeInstanceCreate(d *schema.ResourceData, meta interface{}) err NetworkInterfaces: networkInterfaces, Tags: resourceInstanceTags(d), ServiceAccounts: serviceAccounts, + Scheduling: scheduling, } log.Printf("[INFO] Requesting instance creation") @@ -521,7 +534,7 @@ func resourceComputeInstanceCreate(d *schema.ResourceData, meta interface{}) err d.SetId(instance.Name) // Wait for the operation to complete - waitErr := resourceOperationWaitZone(config, op, zone.Name, "instance to create") + waitErr := computeOperationWaitZone(config, op, zone.Name, "instance to create") if waitErr != nil { // The resource didn't actually create d.SetId("") @@ -534,15 +547,22 @@ func resourceComputeInstanceCreate(d *schema.ResourceData, meta interface{}) err func resourceComputeInstanceRead(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) - instance, err := getInstance(config, d); + instance, err := getInstance(config, d) if err != nil { return err } - // Synch metadata + // Synch metadata md := instance.Metadata - if err = d.Set("metadata", MetadataFormatSchema(md)); err != nil { + _md := MetadataFormatSchema(md) + delete(_md, "startup-script") + + if script, scriptExists := d.GetOk("metadata_startup_script"); scriptExists { + d.Set("metadata_startup_script", script) + } + + if err = d.Set("metadata", _md); err != nil { return fmt.Errorf("Error setting metadata: %s", err) } @@ -662,6 +682,7 @@ func resourceComputeInstanceRead(d *schema.ResourceData, meta interface{}) error } d.Set("self_link", instance.SelfLink) + d.SetId(instance.Name) return nil } @@ -671,7 +692,7 @@ func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err zone := d.Get("zone").(string) - instance, err := getInstance(config, d); + instance, err := getInstance(config, d) if err != nil { return err } @@ -682,10 +703,17 @@ func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err // If the Metadata has changed, then update that. 
if d.HasChange("metadata") { o, n := d.GetChange("metadata") + if script, scriptExists := d.GetOk("metadata_startup_script"); scriptExists { + if _, ok := n.(map[string]interface{})["startup-script"]; ok { + return fmt.Errorf("Only one of metadata.startup-script and metadata_startup_script may be defined") + } + + n.(map[string]interface{})["startup-script"] = script + } updateMD := func() error { // Reload the instance in the case of a fingerprint mismatch - instance, err = getInstance(config, d); + instance, err = getInstance(config, d) if err != nil { return err } @@ -703,7 +731,7 @@ func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err return fmt.Errorf("Error updating metadata: %s", err) } - opErr := resourceOperationWaitZone(config, op, zone, "metadata to update") + opErr := computeOperationWaitZone(config, op, zone, "metadata to update") if opErr != nil { return opErr } @@ -723,7 +751,7 @@ func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err return fmt.Errorf("Error updating tags: %s", err) } - opErr := resourceOperationWaitZone(config, op, zone, "tags to update") + opErr := computeOperationWaitZone(config, op, zone, "tags to update") if opErr != nil { return opErr } @@ -731,6 +759,38 @@ func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err d.SetPartial("tags") } + if d.HasChange("scheduling") { + prefix := "scheduling.0" + scheduling := &compute.Scheduling{} + + if val, ok := d.GetOk(prefix + ".automatic_restart"); ok { + scheduling.AutomaticRestart = val.(bool) + } + + if val, ok := d.GetOk(prefix + ".preemptible"); ok { + scheduling.Preemptible = val.(bool) + } + + if val, ok := d.GetOk(prefix + ".on_host_maintenance"); ok { + scheduling.OnHostMaintenance = val.(string) + } + + op, err := config.clientCompute.Instances.SetScheduling(config.Project, + zone, d.Id(), scheduling).Do() + + if err != nil { + return fmt.Errorf("Error updating scheduling policy: %s", err) + } + + opErr := computeOperationWaitZone(config, op, zone, + "scheduling policy update") + if opErr != nil { + return opErr + } + + d.SetPartial("scheduling") + } + networkInterfacesCount := d.Get("network_interface.#").(int) if networkInterfacesCount > 0 { // Sanity check @@ -764,7 +824,7 @@ func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err if err != nil { return fmt.Errorf("Error deleting old access_config: %s", err) } - opErr := resourceOperationWaitZone(config, op, zone, "old access_config to delete") + opErr := computeOperationWaitZone(config, op, zone, "old access_config to delete") if opErr != nil { return opErr } @@ -783,7 +843,7 @@ func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err if err != nil { return fmt.Errorf("Error adding new access_config: %s", err) } - opErr := resourceOperationWaitZone(config, op, zone, "new access_config to add") + opErr := computeOperationWaitZone(config, op, zone, "new access_config to add") if opErr != nil { return opErr } @@ -809,7 +869,7 @@ func resourceComputeInstanceDelete(d *schema.ResourceData, meta interface{}) err } // Wait for the operation to complete - opErr := resourceOperationWaitZone(config, op, zone, "instance to delete") + opErr := computeOperationWaitZone(config, op, zone, "instance to delete") if opErr != nil { return opErr } @@ -821,13 +881,8 @@ func resourceComputeInstanceDelete(d *schema.ResourceData, meta interface{}) err func resourceInstanceMetadata(d *schema.ResourceData) (*compute.Metadata, error) { m := 
&compute.Metadata{} mdMap := d.Get("metadata").(map[string]interface{}) - _, mapScriptExists := mdMap["startup-script"] - dScript, dScriptExists := d.GetOk("metadata_startup_script") - if mapScriptExists && dScriptExists { - return nil, fmt.Errorf("Not allowed to have both metadata_startup_script and metadata.startup-script") - } - if dScriptExists { - mdMap["startup-script"] = dScript + if v, ok := d.GetOk("metadata_startup_script"); ok && v.(string) != "" { + mdMap["startup-script"] = v } if len(mdMap) > 0 { m.Items = make([]*compute.MetadataItems, 0, len(mdMap)) @@ -863,3 +918,12 @@ func resourceInstanceTags(d *schema.ResourceData) *compute.Tags { return tags } + +func validateInstanceMetadata(v interface{}, k string) (ws []string, es []error) { + mdMap := v.(map[string]interface{}) + if _, ok := mdMap["startup-script"]; ok { + es = append(es, fmt.Errorf( + "Use metadata_startup_script instead of a startup-script key in %q.", k)) + } + return +} diff --git a/builtin/providers/google/resource_compute_instance_group_manager.go b/builtin/providers/google/resource_compute_instance_group_manager.go index 9651c935f..938738146 100644 --- a/builtin/providers/google/resource_compute_instance_group_manager.go +++ b/builtin/providers/google/resource_compute_instance_group_manager.go @@ -3,7 +3,7 @@ package google import ( "fmt" "log" - "time" + "strings" "google.golang.org/api/compute/v1" "google.golang.org/api/googleapi" @@ -82,26 +82,6 @@ func resourceComputeInstanceGroupManager() *schema.Resource { } } -func waitOpZone(config *Config, op *compute.Operation, zone string, - resource string, action string) (*compute.Operation, error) { - - w := &OperationWaiter{ - Service: config.clientCompute, - Op: op, - Project: config.Project, - Zone: zone, - Type: OperationWaitZone, - } - state := w.Conf() - state.Timeout = 8 * time.Minute - state.MinTimeout = 1 * time.Second - opRaw, err := state.WaitForState() - if err != nil { - return nil, fmt.Errorf("Error waiting for %s to %s: %s", resource, action, err) - } - return opRaw.(*compute.Operation), nil -} - func resourceComputeInstanceGroupManagerCreate(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) @@ -143,16 +123,10 @@ func resourceComputeInstanceGroupManagerCreate(d *schema.ResourceData, meta inte d.SetId(manager.Name) // Wait for the operation to complete - op, err = waitOpZone(config, op, d.Get("zone").(string), "InstanceGroupManager", "create") + err = computeOperationWaitZone(config, op, d.Get("zone").(string), "Creating InstanceGroupManager") if err != nil { return err } - if op.Error != nil { - // The resource didn't actually create - d.SetId("") - // Return the error - return OperationError(*op.Error) - } return resourceComputeInstanceGroupManagerRead(d, meta) } @@ -208,13 +182,10 @@ func resourceComputeInstanceGroupManagerUpdate(d *schema.ResourceData, meta inte } // Wait for the operation to complete - op, err = waitOpZone(config, op, d.Get("zone").(string), "InstanceGroupManager", "update TargetPools") + err = computeOperationWaitZone(config, op, d.Get("zone").(string), "Updating InstanceGroupManager") if err != nil { return err } - if op.Error != nil { - return OperationError(*op.Error) - } d.SetPartial("target_pools") } @@ -233,13 +204,10 @@ func resourceComputeInstanceGroupManagerUpdate(d *schema.ResourceData, meta inte } // Wait for the operation to complete - op, err = waitOpZone(config, op, d.Get("zone").(string), "InstanceGroupManager", "update instance template") + err = computeOperationWaitZone(config, op, 
d.Get("zone").(string), "Updating InstanceGroupManager") if err != nil { return err } - if op.Error != nil { - return OperationError(*op.Error) - } d.SetPartial("instance_template") } @@ -257,13 +225,10 @@ func resourceComputeInstanceGroupManagerUpdate(d *schema.ResourceData, meta inte } // Wait for the operation to complete - op, err = waitOpZone(config, op, d.Get("zone").(string), "InstanceGroupManager", "update target_size") + err = computeOperationWaitZone(config, op, d.Get("zone").(string), "Updating InstanceGroupManager") if err != nil { return err } - if op.Error != nil { - return OperationError(*op.Error) - } } d.SetPartial("target_size") @@ -283,17 +248,32 @@ func resourceComputeInstanceGroupManagerDelete(d *schema.ResourceData, meta inte return fmt.Errorf("Error deleting instance group manager: %s", err) } - // Wait for the operation to complete - op, err = waitOpZone(config, op, d.Get("zone").(string), "InstanceGroupManager", "delete") - if err != nil { - return err - } - if op.Error != nil { - // The resource didn't actually create - d.SetId("") + currentSize := int64(d.Get("target_size").(int)) - // Return the error - return OperationError(*op.Error) + // Wait for the operation to complete + err = computeOperationWaitZone(config, op, d.Get("zone").(string), "Deleting InstanceGroupManager") + + for err != nil && currentSize > 0 { + if !strings.Contains(err.Error(), "timeout") { + return err; + } + + instanceGroup, err := config.clientCompute.InstanceGroups.Get( + config.Project, d.Get("zone").(string), d.Id()).Do() + + if err != nil { + return fmt.Errorf("Error getting instance group size: %s", err); + } + + if instanceGroup.Size >= currentSize { + return fmt.Errorf("Error, instance group isn't shrinking during delete") + } + + log.Printf("[INFO] timeout occured, but instance group is shrinking (%d < %d)", instanceGroup.Size, currentSize) + + currentSize = instanceGroup.Size + + err = computeOperationWaitZone(config, op, d.Get("zone").(string), "Deleting InstanceGroupManager") } d.SetId("") diff --git a/builtin/providers/google/resource_compute_instance_template.go b/builtin/providers/google/resource_compute_instance_template.go index 060f4bb39..ec85f1ba6 100644 --- a/builtin/providers/google/resource_compute_instance_template.go +++ b/builtin/providers/google/resource_compute_instance_template.go @@ -2,7 +2,6 @@ package google import ( "fmt" - "time" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/schema" @@ -305,11 +304,9 @@ func buildNetworks(d *schema.ResourceData, meta interface{}) (error, []*compute. 
for i := 0; i < networksCount; i++ { prefix := fmt.Sprintf("network_interface.%d", i) - source := "global/networks/default" + source := "global/networks/" if v, ok := d.GetOk(prefix + ".network"); ok { - if v.(string) != "default" { - source = v.(string) - } + source += v.(string) } // Build the networkInterface @@ -401,28 +398,9 @@ func resourceComputeInstanceTemplateCreate(d *schema.ResourceData, meta interfac // Store the ID now d.SetId(instanceTemplate.Name) - // Wait for the operation to complete - w := &OperationWaiter{ - Service: config.clientCompute, - Op: op, - Project: config.Project, - Type: OperationWaitGlobal, - } - state := w.Conf() - state.Delay = 10 * time.Second - state.Timeout = 10 * time.Minute - state.MinTimeout = 2 * time.Second - opRaw, err := state.WaitForState() + err = computeOperationWaitGlobal(config, op, "Creating Instance Template") if err != nil { - return fmt.Errorf("Error waiting for instance template to create: %s", err) - } - op = opRaw.(*compute.Operation) - if op.Error != nil { - // The resource didn't actually create - d.SetId("") - - // Return the error - return OperationError(*op.Error) + return err } return resourceComputeInstanceTemplateRead(d, meta) @@ -467,25 +445,9 @@ func resourceComputeInstanceTemplateDelete(d *schema.ResourceData, meta interfac return fmt.Errorf("Error deleting instance template: %s", err) } - // Wait for the operation to complete - w := &OperationWaiter{ - Service: config.clientCompute, - Op: op, - Project: config.Project, - Type: OperationWaitGlobal, - } - state := w.Conf() - state.Delay = 5 * time.Second - state.Timeout = 5 * time.Minute - state.MinTimeout = 2 * time.Second - opRaw, err := state.WaitForState() + err = computeOperationWaitGlobal(config, op, "Deleting Instance Template") if err != nil { - return fmt.Errorf("Error waiting for instance template to delete: %s", err) - } - op = opRaw.(*compute.Operation) - if op.Error != nil { - // Return the error - return OperationError(*op.Error) + return err } d.SetId("") diff --git a/builtin/providers/google/resource_compute_instance_test.go b/builtin/providers/google/resource_compute_instance_test.go index 394e66dbf..4cee16a51 100644 --- a/builtin/providers/google/resource_compute_instance_test.go +++ b/builtin/providers/google/resource_compute_instance_test.go @@ -32,7 +32,7 @@ func TestAccComputeInstance_basic_deprecated_network(t *testing.T) { }) } -func TestAccComputeInstance_basic(t *testing.T) { +func TestAccComputeInstance_basic1(t *testing.T) { var instance compute.Instance resource.Test(t, resource.TestCase{ @@ -272,6 +272,25 @@ func TestAccComputeInstance_service_account(t *testing.T) { }) } +func TestAccComputeInstance_scheduling(t *testing.T) { + var instance compute.Instance + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeInstanceDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccComputeInstance_scheduling, + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeInstanceExists( + "google_compute_instance.foobar", &instance), + ), + }, + }, + }) +} + func testAccCheckComputeInstanceDestroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) @@ -376,7 +395,7 @@ func testAccCheckComputeInstanceDisk(instance *compute.Instance, source string, } for _, disk := range instance.Disks { - if strings.LastIndex(disk.Source, "/"+source) == (len(disk.Source)-len(source)-1) && disk.AutoDelete == delete && disk.Boot == boot { + 
if strings.LastIndex(disk.Source, "/"+source) == len(disk.Source)-len(source)-1 && disk.AutoDelete == delete && disk.Boot == boot { return nil } } @@ -672,3 +691,21 @@ resource "google_compute_instance" "foobar" { ] } }` + +const testAccComputeInstance_scheduling = ` +resource "google_compute_instance" "foobar" { + name = "terraform-test" + machine_type = "n1-standard-1" + zone = "us-central1-a" + + disk { + image = "debian-7-wheezy-v20140814" + } + + network_interface { + network = "default" + } + + scheduling { + } +}` diff --git a/builtin/providers/google/resource_compute_network.go b/builtin/providers/google/resource_compute_network.go index 5e581eff2..5a61f2ad6 100644 --- a/builtin/providers/google/resource_compute_network.go +++ b/builtin/providers/google/resource_compute_network.go @@ -3,7 +3,6 @@ package google import ( "fmt" "log" - "time" "github.com/hashicorp/terraform/helper/schema" "google.golang.org/api/compute/v1" @@ -60,27 +59,9 @@ func resourceComputeNetworkCreate(d *schema.ResourceData, meta interface{}) erro // It probably maybe worked, so store the ID now d.SetId(network.Name) - // Wait for the operation to complete - w := &OperationWaiter{ - Service: config.clientCompute, - Op: op, - Project: config.Project, - Type: OperationWaitGlobal, - } - state := w.Conf() - state.Timeout = 2 * time.Minute - state.MinTimeout = 1 * time.Second - opRaw, err := state.WaitForState() + err = computeOperationWaitGlobal(config, op, "Creating Network") if err != nil { - return fmt.Errorf("Error waiting for network to create: %s", err) - } - op = opRaw.(*compute.Operation) - if op.Error != nil { - // The resource didn't actually create - d.SetId("") - - // Return the error - return OperationError(*op.Error) + return err } return resourceComputeNetworkRead(d, meta) @@ -118,24 +99,9 @@ func resourceComputeNetworkDelete(d *schema.ResourceData, meta interface{}) erro return fmt.Errorf("Error deleting network: %s", err) } - // Wait for the operation to complete - w := &OperationWaiter{ - Service: config.clientCompute, - Op: op, - Project: config.Project, - Type: OperationWaitGlobal, - } - state := w.Conf() - state.Timeout = 2 * time.Minute - state.MinTimeout = 1 * time.Second - opRaw, err := state.WaitForState() + err = computeOperationWaitGlobal(config, op, "Deleting Network") if err != nil { - return fmt.Errorf("Error waiting for network to delete: %s", err) - } - op = opRaw.(*compute.Operation) - if op.Error != nil { - // Return the error - return OperationError(*op.Error) + return err } d.SetId("") diff --git a/builtin/providers/google/resource_compute_project_metadata.go b/builtin/providers/google/resource_compute_project_metadata.go index 3471d9110..c2f8a4a5f 100644 --- a/builtin/providers/google/resource_compute_project_metadata.go +++ b/builtin/providers/google/resource_compute_project_metadata.go @@ -3,7 +3,6 @@ package google import ( "fmt" "log" - "time" // "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/schema" @@ -30,30 +29,6 @@ func resourceComputeProjectMetadata() *schema.Resource { } } -func resourceOperationWaitGlobal(config *Config, op *compute.Operation, activity string) error { - w := &OperationWaiter{ - Service: config.clientCompute, - Op: op, - Project: config.Project, - Type: OperationWaitGlobal, - } - - state := w.Conf() - state.Timeout = 2 * time.Minute - state.MinTimeout = 1 * time.Second - opRaw, err := state.WaitForState() - if err != nil { - return fmt.Errorf("Error waiting for %s: %s", activity, err) - } - - op = 
opRaw.(*compute.Operation) - if op.Error != nil { - return OperationError(*op.Error) - } - - return nil -} - func resourceComputeProjectMetadataCreate(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) @@ -92,15 +67,15 @@ func resourceComputeProjectMetadataCreate(d *schema.ResourceData, meta interface log.Printf("[DEBUG] SetCommonMetadata: %d (%s)", op.Id, op.SelfLink) - return resourceOperationWaitGlobal(config, op, "SetCommonMetadata") + return computeOperationWaitGlobal(config, op, "SetCommonMetadata") } err := MetadataRetryWrapper(createMD) if err != nil { - return err; + return err } - return resourceComputeProjectMetadataRead(d, meta); + return resourceComputeProjectMetadataRead(d, meta) } func resourceComputeProjectMetadataRead(d *schema.ResourceData, meta interface{}) error { @@ -140,7 +115,7 @@ func resourceComputeProjectMetadataUpdate(d *schema.ResourceData, meta interface md := project.CommonInstanceMetadata - MetadataUpdate(o.(map[string]interface{}), n.(map[string]interface{}), md) + MetadataUpdate(o.(map[string]interface{}), n.(map[string]interface{}), md) op, err := config.clientCompute.Projects.SetCommonInstanceMetadata(config.Project, md).Do() @@ -153,15 +128,15 @@ func resourceComputeProjectMetadataUpdate(d *schema.ResourceData, meta interface // Optimistic locking requires the fingerprint received to match // the fingerprint we send the server, if there is a mismatch then we // are working on old data, and must retry - return resourceOperationWaitGlobal(config, op, "SetCommonMetadata") + return computeOperationWaitGlobal(config, op, "SetCommonMetadata") } err := MetadataRetryWrapper(updateMD) if err != nil { - return err; + return err } - return resourceComputeProjectMetadataRead(d, meta); + return resourceComputeProjectMetadataRead(d, meta) } return nil @@ -186,7 +161,7 @@ func resourceComputeProjectMetadataDelete(d *schema.ResourceData, meta interface log.Printf("[DEBUG] SetCommonMetadata: %d (%s)", op.Id, op.SelfLink) - err = resourceOperationWaitGlobal(config, op, "SetCommonMetadata") + err = computeOperationWaitGlobal(config, op, "SetCommonMetadata") if err != nil { return err } diff --git a/builtin/providers/google/resource_compute_route.go b/builtin/providers/google/resource_compute_route.go index 53176c871..82b43d358 100644 --- a/builtin/providers/google/resource_compute_route.go +++ b/builtin/providers/google/resource_compute_route.go @@ -3,7 +3,6 @@ package google import ( "fmt" "log" - "time" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/schema" @@ -171,27 +170,9 @@ func resourceComputeRouteCreate(d *schema.ResourceData, meta interface{}) error // It probably maybe worked, so store the ID now d.SetId(route.Name) - // Wait for the operation to complete - w := &OperationWaiter{ - Service: config.clientCompute, - Op: op, - Project: config.Project, - Type: OperationWaitGlobal, - } - state := w.Conf() - state.Timeout = 2 * time.Minute - state.MinTimeout = 1 * time.Second - opRaw, err := state.WaitForState() + err = computeOperationWaitGlobal(config, op, "Creating Route") if err != nil { - return fmt.Errorf("Error waiting for route to create: %s", err) - } - op = opRaw.(*compute.Operation) - if op.Error != nil { - // The resource didn't actually create - d.SetId("") - - // Return the error - return OperationError(*op.Error) + return err } return resourceComputeRouteRead(d, meta) @@ -228,24 +209,9 @@ func resourceComputeRouteDelete(d *schema.ResourceData, meta interface{}) error return 
fmt.Errorf("Error deleting route: %s", err) } - // Wait for the operation to complete - w := &OperationWaiter{ - Service: config.clientCompute, - Op: op, - Project: config.Project, - Type: OperationWaitGlobal, - } - state := w.Conf() - state.Timeout = 2 * time.Minute - state.MinTimeout = 1 * time.Second - opRaw, err := state.WaitForState() + err = computeOperationWaitGlobal(config, op, "Deleting Route") if err != nil { - return fmt.Errorf("Error waiting for route to delete: %s", err) - } - op = opRaw.(*compute.Operation) - if op.Error != nil { - // Return the error - return OperationError(*op.Error) + return err } d.SetId("") diff --git a/builtin/providers/google/resource_compute_target_pool.go b/builtin/providers/google/resource_compute_target_pool.go index 83611e2bd..91e83a46a 100644 --- a/builtin/providers/google/resource_compute_target_pool.go +++ b/builtin/providers/google/resource_compute_target_pool.go @@ -4,7 +4,6 @@ import ( "fmt" "log" "strings" - "time" "github.com/hashicorp/terraform/helper/schema" "google.golang.org/api/compute/v1" @@ -67,6 +66,12 @@ func resourceComputeTargetPool() *schema.Resource { Optional: true, ForceNew: true, }, + + "region": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, }, } } @@ -79,26 +84,6 @@ func convertStringArr(ifaceArr []interface{}) []string { return arr } -func waitOp(config *Config, op *compute.Operation, - resource string, action string) (*compute.Operation, error) { - - w := &OperationWaiter{ - Service: config.clientCompute, - Op: op, - Region: config.Region, - Project: config.Project, - Type: OperationWaitRegion, - } - state := w.Conf() - state.Timeout = 2 * time.Minute - state.MinTimeout = 1 * time.Second - opRaw, err := state.WaitForState() - if err != nil { - return nil, fmt.Errorf("Error waiting for %s to %s: %s", resource, action, err) - } - return opRaw.(*compute.Operation), nil -} - // Healthchecks need to exist before being referred to from the target pool. 
func convertHealthChecks(config *Config, names []string) ([]string, error) { urls := make([]string, len(names)) @@ -136,6 +121,7 @@ func convertInstances(config *Config, names []string) ([]string, error) { func resourceComputeTargetPoolCreate(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) + region := getOptionalRegion(d, config) hchkUrls, err := convertHealthChecks( config, convertStringArr(d.Get("health_checks").([]interface{}))) @@ -163,7 +149,7 @@ func resourceComputeTargetPoolCreate(d *schema.ResourceData, meta interface{}) e } log.Printf("[DEBUG] TargetPool insert request: %#v", tpool) op, err := config.clientCompute.TargetPools.Insert( - config.Project, config.Region, tpool).Do() + config.Project, region, tpool).Do() if err != nil { return fmt.Errorf("Error creating TargetPool: %s", err) } @@ -171,17 +157,10 @@ func resourceComputeTargetPoolCreate(d *schema.ResourceData, meta interface{}) e // It probably maybe worked, so store the ID now d.SetId(tpool.Name) - op, err = waitOp(config, op, "TargetPool", "create") + err = computeOperationWaitRegion(config, op, region, "Creating Target Pool") if err != nil { return err } - if op.Error != nil { - // The resource didn't actually create - d.SetId("") - // Return the error - return OperationError(*op.Error) - } - return resourceComputeTargetPoolRead(d, meta) } @@ -217,6 +196,7 @@ func calcAddRemove(from []string, to []string) ([]string, []string) { func resourceComputeTargetPoolUpdate(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) + region := getOptionalRegion(d, config) d.Partial(true) @@ -242,18 +222,15 @@ func resourceComputeTargetPoolUpdate(d *schema.ResourceData, meta interface{}) e removeReq.HealthChecks[i] = &compute.HealthCheckReference{HealthCheck: v} } op, err := config.clientCompute.TargetPools.RemoveHealthCheck( - config.Project, config.Region, d.Id(), removeReq).Do() + config.Project, region, d.Id(), removeReq).Do() if err != nil { return fmt.Errorf("Error updating health_check: %s", err) } - op, err = waitOp(config, op, "TargetPool", "removing HealthChecks") + + err = computeOperationWaitRegion(config, op, region, "Updating Target Pool") if err != nil { return err } - if op.Error != nil { - return OperationError(*op.Error) - } - addReq := &compute.TargetPoolsAddHealthCheckRequest{ HealthChecks: make([]*compute.HealthCheckReference, len(add)), } @@ -261,18 +238,15 @@ func resourceComputeTargetPoolUpdate(d *schema.ResourceData, meta interface{}) e addReq.HealthChecks[i] = &compute.HealthCheckReference{HealthCheck: v} } op, err = config.clientCompute.TargetPools.AddHealthCheck( - config.Project, config.Region, d.Id(), addReq).Do() + config.Project, region, d.Id(), addReq).Do() if err != nil { return fmt.Errorf("Error updating health_check: %s", err) } - op, err = waitOp(config, op, "TargetPool", "adding HealthChecks") + + err = computeOperationWaitRegion(config, op, region, "Updating Target Pool") if err != nil { return err } - if op.Error != nil { - return OperationError(*op.Error) - } - d.SetPartial("health_checks") } @@ -298,18 +272,15 @@ func resourceComputeTargetPoolUpdate(d *schema.ResourceData, meta interface{}) e addReq.Instances[i] = &compute.InstanceReference{Instance: v} } op, err := config.clientCompute.TargetPools.AddInstance( - config.Project, config.Region, d.Id(), addReq).Do() + config.Project, region, d.Id(), addReq).Do() if err != nil { return fmt.Errorf("Error updating instances: %s", err) } - op, err = waitOp(config, op, "TargetPool", "adding instances") 
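// Note the call-site pattern repeated through the rest of this file: the
// operation is handed straight to the shared waiter, which returns
// OperationError(*op.Error) itself, so the manual op.Error checks that
// previously followed every waitOp call are gone.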
+ + err = computeOperationWaitRegion(config, op, region, "Updating Target Pool") if err != nil { return err } - if op.Error != nil { - return OperationError(*op.Error) - } - removeReq := &compute.TargetPoolsRemoveInstanceRequest{ Instances: make([]*compute.InstanceReference, len(remove)), } @@ -317,18 +288,14 @@ func resourceComputeTargetPoolUpdate(d *schema.ResourceData, meta interface{}) e removeReq.Instances[i] = &compute.InstanceReference{Instance: v} } op, err = config.clientCompute.TargetPools.RemoveInstance( - config.Project, config.Region, d.Id(), removeReq).Do() + config.Project, region, d.Id(), removeReq).Do() if err != nil { return fmt.Errorf("Error updating instances: %s", err) } - op, err = waitOp(config, op, "TargetPool", "removing instances") + err = computeOperationWaitRegion(config, op, region, "Updating Target Pool") if err != nil { return err } - if op.Error != nil { - return OperationError(*op.Error) - } - d.SetPartial("instances") } @@ -338,19 +305,15 @@ func resourceComputeTargetPoolUpdate(d *schema.ResourceData, meta interface{}) e Target: bpool_name, } op, err := config.clientCompute.TargetPools.SetBackup( - config.Project, config.Region, d.Id(), tref).Do() + config.Project, region, d.Id(), tref).Do() if err != nil { return fmt.Errorf("Error updating backup_pool: %s", err) } - op, err = waitOp(config, op, "TargetPool", "updating backup_pool") + err = computeOperationWaitRegion(config, op, region, "Updating Target Pool") if err != nil { return err } - if op.Error != nil { - return OperationError(*op.Error) - } - d.SetPartial("backup_pool") } @@ -361,9 +324,10 @@ func resourceComputeTargetPoolUpdate(d *schema.ResourceData, meta interface{}) e func resourceComputeTargetPoolRead(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) + region := getOptionalRegion(d, config) tpool, err := config.clientCompute.TargetPools.Get( - config.Project, config.Region, d.Id()).Do() + config.Project, region, d.Id()).Do() if err != nil { if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { // The resource doesn't exist anymore @@ -382,22 +346,19 @@ func resourceComputeTargetPoolRead(d *schema.ResourceData, meta interface{}) err func resourceComputeTargetPoolDelete(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) + region := getOptionalRegion(d, config) // Delete the TargetPool op, err := config.clientCompute.TargetPools.Delete( - config.Project, config.Region, d.Id()).Do() + config.Project, region, d.Id()).Do() if err != nil { return fmt.Errorf("Error deleting TargetPool: %s", err) } - op, err = waitOp(config, op, "TargetPool", "delete") + err = computeOperationWaitRegion(config, op, region, "Deleting Target Pool") if err != nil { return err } - if op.Error != nil { - return OperationError(*op.Error) - } - d.SetId("") return nil } diff --git a/builtin/providers/google/resource_compute_vpn_gateway.go b/builtin/providers/google/resource_compute_vpn_gateway.go index 01a6c4b94..bd5350b9c 100644 --- a/builtin/providers/google/resource_compute_vpn_gateway.go +++ b/builtin/providers/google/resource_compute_vpn_gateway.go @@ -56,8 +56,8 @@ func resourceComputeVpnGatewayCreate(d *schema.ResourceData, meta interface{}) e vpnGatewaysService := compute.NewTargetVpnGatewaysService(config.clientCompute) vpnGateway := &compute.TargetVpnGateway{ - Name: name, - Network: network, + Name: name, + Network: network, } if v, ok := d.GetOk("description"); ok { @@ -69,7 +69,7 @@ func resourceComputeVpnGatewayCreate(d *schema.ResourceData, meta 
interface{}) e return fmt.Errorf("Error Inserting VPN Gateway %s into network %s: %s", name, network, err) } - err = resourceOperationWaitRegion(config, op, region, "Inserting VPN Gateway") + err = computeOperationWaitRegion(config, op, region, "Inserting VPN Gateway") if err != nil { return fmt.Errorf("Error Waiting to Insert VPN Gateway %s into network %s: %s", name, network, err) } @@ -111,7 +111,7 @@ func resourceComputeVpnGatewayDelete(d *schema.ResourceData, meta interface{}) e return fmt.Errorf("Error Reading VPN Gateway %s: %s", name, err) } - err = resourceOperationWaitRegion(config, op, region, "Deleting VPN Gateway") + err = computeOperationWaitRegion(config, op, region, "Deleting VPN Gateway") if err != nil { return fmt.Errorf("Error Waiting to Delete VPN Gateway %s: %s", name, err) } diff --git a/builtin/providers/google/resource_compute_vpn_tunnel.go b/builtin/providers/google/resource_compute_vpn_tunnel.go index 55848d546..172f96a90 100644 --- a/builtin/providers/google/resource_compute_vpn_tunnel.go +++ b/builtin/providers/google/resource_compute_vpn_tunnel.go @@ -2,7 +2,6 @@ package google import ( "fmt" - "time" "github.com/hashicorp/terraform/helper/schema" @@ -66,31 +65,6 @@ func resourceComputeVpnTunnel() *schema.Resource { } } -func resourceOperationWaitRegion(config *Config, op *compute.Operation, region, activity string) error { - w := &OperationWaiter{ - Service: config.clientCompute, - Op: op, - Project: config.Project, - Type: OperationWaitRegion, - Region: region, - } - - state := w.Conf() - state.Timeout = 2 * time.Minute - state.MinTimeout = 1 * time.Second - opRaw, err := state.WaitForState() - if err != nil { - return fmt.Errorf("Error waiting for %s: %s", activity, err) - } - - op = opRaw.(*compute.Operation) - if op.Error != nil { - return OperationError(*op.Error) - } - - return nil -} - func resourceComputeVpnTunnelCreate(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) @@ -125,7 +99,7 @@ func resourceComputeVpnTunnelCreate(d *schema.ResourceData, meta interface{}) er return fmt.Errorf("Error Inserting VPN Tunnel %s : %s", name, err) } - err = resourceOperationWaitRegion(config, op, region, "Inserting VPN Tunnel") + err = computeOperationWaitRegion(config, op, region, "Inserting VPN Tunnel") if err != nil { return fmt.Errorf("Error Waiting to Insert VPN Tunnel %s: %s", name, err) } @@ -169,7 +143,7 @@ func resourceComputeVpnTunnelDelete(d *schema.ResourceData, meta interface{}) er return fmt.Errorf("Error Reading VPN Tunnel %s: %s", name, err) } - err = resourceOperationWaitRegion(config, op, region, "Deleting VPN Tunnel") + err = computeOperationWaitRegion(config, op, region, "Deleting VPN Tunnel") if err != nil { return fmt.Errorf("Error Waiting to Delete VPN Tunnel %s: %s", name, err) } diff --git a/builtin/providers/google/resource_container_cluster_test.go b/builtin/providers/google/resource_container_cluster_test.go index 72f398a07..ea4a5a597 100644 --- a/builtin/providers/google/resource_container_cluster_test.go +++ b/builtin/providers/google/resource_container_cluster_test.go @@ -113,7 +113,7 @@ resource "google_container_cluster" "with_node_config" { } node_config { - machine_type = "f1-micro" + machine_type = "g1-small" disk_size_gb = 15 oauth_scopes = [ "https://www.googleapis.com/auth/compute", diff --git a/builtin/providers/google/resource_storage_bucket.go b/builtin/providers/google/resource_storage_bucket.go index de03d5f6d..9118119a8 100644 --- a/builtin/providers/google/resource_storage_bucket.go +++ 
b/builtin/providers/google/resource_storage_bucket.go @@ -24,10 +24,10 @@ func resourceStorageBucket() *schema.Resource { ForceNew: true, }, "predefined_acl": &schema.Schema{ - Type: schema.TypeString, - Default: "projectPrivate", - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Deprecated: "Please use resource \"storage_bucket_acl.predefined_acl\" instead.", + Optional: true, + ForceNew: true, }, "location": &schema.Schema{ Type: schema.TypeString, @@ -69,7 +69,6 @@ func resourceStorageBucketCreate(d *schema.ResourceData, meta interface{}) error // Get the bucket and acl bucket := d.Get("name").(string) - acl := d.Get("predefined_acl").(string) location := d.Get("location").(string) // Create a bucket, setting the acl, location and name. @@ -95,7 +94,12 @@ func resourceStorageBucketCreate(d *schema.ResourceData, meta interface{}) error } } - res, err := config.clientStorage.Buckets.Insert(config.Project, sb).PredefinedAcl(acl).Do() + call := config.clientStorage.Buckets.Insert(config.Project, sb) + if v, ok := d.GetOk("predefined_acl"); ok { + call = call.PredefinedAcl(v.(string)) + } + + res, err := call.Do() if err != nil { fmt.Printf("Error creating bucket %s: %v", bucket, err) @@ -124,8 +128,8 @@ func resourceStorageBucketUpdate(d *schema.ResourceData, meta interface{}) error return fmt.Errorf("At most one website block is allowed") } - // Setting fields to "" to be explicit that the PATCH call will - // delete this field. + // Setting fields to "" to be explicit that the PATCH call will + // delete this field. if len(websites) == 0 { sb.Website.NotFoundPage = "" sb.Website.MainPageSuffix = "" diff --git a/builtin/providers/google/resource_storage_bucket_acl.go b/builtin/providers/google/resource_storage_bucket_acl.go new file mode 100644 index 000000000..3b866e0ad --- /dev/null +++ b/builtin/providers/google/resource_storage_bucket_acl.go @@ -0,0 +1,291 @@ +package google + +import ( + "fmt" + "log" + "strings" + + "github.com/hashicorp/terraform/helper/schema" + + "google.golang.org/api/storage/v1" +) + +func resourceStorageBucketAcl() *schema.Resource { + return &schema.Resource{ + Create: resourceStorageBucketAclCreate, + Read: resourceStorageBucketAclRead, + Update: resourceStorageBucketAclUpdate, + Delete: resourceStorageBucketAclDelete, + + Schema: map[string]*schema.Schema{ + "bucket": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "predefined_acl": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "role_entity": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "default_acl": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + }, + } +} + +type RoleEntity struct { + Role string + Entity string +} + +func getBucketAclId(bucket string) string { + return bucket + "-acl" +} + +func getRoleEntityPair(role_entity string) (*RoleEntity, error) { + split := strings.Split(role_entity, ":") + if len(split) != 2 { + return nil, fmt.Errorf("Error, each role entity pair must be " + + "formatted as ROLE:entity") + } + + return &RoleEntity{Role: split[0], Entity: split[1]}, nil +} + +func resourceStorageBucketAclCreate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + bucket := d.Get("bucket").(string) + predefined_acl := "" + default_acl := "" + role_entity := make([]interface{}, 0) + + if v, ok := d.GetOk("predefined_acl"); ok { + predefined_acl = v.(string) + } + + if v, ok := 
d.GetOk("role_entity"); ok { + role_entity = v.([]interface{}) + } + + if v, ok := d.GetOk("default_acl"); ok { + default_acl = v.(string) + } + + if len(predefined_acl) > 0 { + if len(role_entity) > 0 { + return fmt.Errorf("Error, you cannot specify both " + + "\"predefined_acl\" and \"role_entity\"") + } + + res, err := config.clientStorage.Buckets.Get(bucket).Do() + + if err != nil { + return fmt.Errorf("Error reading bucket %s: %v", bucket, err) + } + + res, err = config.clientStorage.Buckets.Update(bucket, + res).PredefinedAcl(predefined_acl).Do() + + if err != nil { + return fmt.Errorf("Error updating bucket %s: %v", bucket, err) + } + + return resourceStorageBucketAclRead(d, meta) + } else if len(role_entity) > 0 { + for _, v := range role_entity { + pair, err := getRoleEntityPair(v.(string)) + + bucketAccessControl := &storage.BucketAccessControl{ + Role: pair.Role, + Entity: pair.Entity, + } + + log.Printf("[DEBUG]: storing re %s-%s", pair.Role, pair.Entity) + + _, err = config.clientStorage.BucketAccessControls.Insert(bucket, bucketAccessControl).Do() + + if err != nil { + return fmt.Errorf("Error updating ACL for bucket %s: %v", bucket, err) + } + } + + return resourceStorageBucketAclRead(d, meta) + } + + if len(default_acl) > 0 { + res, err := config.clientStorage.Buckets.Get(bucket).Do() + + if err != nil { + return fmt.Errorf("Error reading bucket %s: %v", bucket, err) + } + + res, err = config.clientStorage.Buckets.Update(bucket, + res).PredefinedDefaultObjectAcl(default_acl).Do() + + if err != nil { + return fmt.Errorf("Error updating bucket %s: %v", bucket, err) + } + + return resourceStorageBucketAclRead(d, meta) + } + + return nil +} + +func resourceStorageBucketAclRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + bucket := d.Get("bucket").(string) + + // Predefined ACLs cannot easily be parsed once they have been processed + // by the GCP server + if _, ok := d.GetOk("predefined_acl"); !ok { + role_entity := make([]interface{}, 0) + re_local := d.Get("role_entity").([]interface{}) + re_local_map := make(map[string]string) + for _, v := range re_local { + res, err := getRoleEntityPair(v.(string)) + + if err != nil { + return fmt.Errorf( + "Old state has malformed Role/Entity pair: %v", err) + } + + re_local_map[res.Entity] = res.Role + } + + res, err := config.clientStorage.BucketAccessControls.List(bucket).Do() + + if err != nil { + return err + } + + for _, v := range res.Items { + log.Printf("[DEBUG]: examining re %s-%s", v.Role, v.Entity) + // We only store updates to the locally defined access controls + if _, in := re_local_map[v.Entity]; in { + role_entity = append(role_entity, fmt.Sprintf("%s:%s", v.Role, v.Entity)) + log.Printf("[DEBUG]: saving re %s-%s", v.Role, v.Entity) + } + } + + d.Set("role_entity", role_entity) + } + + d.SetId(getBucketAclId(bucket)) + return nil +} + +func resourceStorageBucketAclUpdate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + bucket := d.Get("bucket").(string) + + if d.HasChange("role_entity") { + o, n := d.GetChange("role_entity") + old_re, new_re := o.([]interface{}), n.([]interface{}) + + old_re_map := make(map[string]string) + for _, v := range old_re { + res, err := getRoleEntityPair(v.(string)) + + if err != nil { + return fmt.Errorf( + "Old state has malformed Role/Entity pair: %v", err) + } + + old_re_map[res.Entity] = res.Role + } + + for _, v := range new_re { + pair, err := getRoleEntityPair(v.(string)) + + bucketAccessControl := 
&storage.BucketAccessControl{ + Role: pair.Role, + Entity: pair.Entity, + } + + // If the old state is missing this entity, it needs to + // be created. Otherwise it is updated + if _, ok := old_re_map[pair.Entity]; ok { + _, err = config.clientStorage.BucketAccessControls.Update( + bucket, pair.Entity, bucketAccessControl).Do() + } else { + _, err = config.clientStorage.BucketAccessControls.Insert( + bucket, bucketAccessControl).Do() + } + + // Now we only store the keys that have to be removed + delete(old_re_map, pair.Entity) + + if err != nil { + return fmt.Errorf("Error updating ACL for bucket %s: %v", bucket, err) + } + } + + for entity, _ := range old_re_map { + log.Printf("[DEBUG]: removing entity %s", entity) + err := config.clientStorage.BucketAccessControls.Delete(bucket, entity).Do() + + if err != nil { + return fmt.Errorf("Error updating ACL for bucket %s: %v", bucket, err) + } + } + + return resourceStorageBucketAclRead(d, meta) + } + + if d.HasChange("default_acl") { + default_acl := d.Get("default_acl").(string) + + res, err := config.clientStorage.Buckets.Get(bucket).Do() + + if err != nil { + return fmt.Errorf("Error reading bucket %s: %v", bucket, err) + } + + res, err = config.clientStorage.Buckets.Update(bucket, + res).PredefinedDefaultObjectAcl(default_acl).Do() + + if err != nil { + return fmt.Errorf("Error updating bucket %s: %v", bucket, err) + } + + return resourceStorageBucketAclRead(d, meta) + } + + return nil +} + +func resourceStorageBucketAclDelete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + bucket := d.Get("bucket").(string) + + re_local := d.Get("role_entity").([]interface{}) + for _, v := range re_local { + res, err := getRoleEntityPair(v.(string)) + if err != nil { + return err + } + + log.Printf("[DEBUG]: removing entity %s", res.Entity) + + err = config.clientStorage.BucketAccessControls.Delete(bucket, res.Entity).Do() + + if err != nil { + return fmt.Errorf("Error deleting entity %s ACL: %s", res.Entity, err) + } + } + + return nil +} diff --git a/builtin/providers/google/resource_storage_bucket_acl_test.go b/builtin/providers/google/resource_storage_bucket_acl_test.go new file mode 100644 index 000000000..9cdc2b173 --- /dev/null +++ b/builtin/providers/google/resource_storage_bucket_acl_test.go @@ -0,0 +1,231 @@ +package google + +import ( + "fmt" + "math/rand" + "testing" + "time" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + + //"google.golang.org/api/storage/v1" +) + +var roleEntityBasic1 = "OWNER:user-omeemail@gmail.com" + +var roleEntityBasic2 = "READER:user-anotheremail@gmail.com" + +var roleEntityBasic3_owner = "OWNER:user-yetanotheremail@gmail.com" + +var roleEntityBasic3_reader = "READER:user-yetanotheremail@gmail.com" + +var testAclBucketName = fmt.Sprintf("%s-%d", "tf-test-acl-bucket", rand.New(rand.NewSource(time.Now().UnixNano())).Int()) + +func TestAccGoogleStorageBucketAcl_basic(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccGoogleStorageBucketAclDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testGoogleStorageBucketsAclBasic1, + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleStorageBucketAcl(testAclBucketName, roleEntityBasic1), + testAccCheckGoogleStorageBucketAcl(testAclBucketName, roleEntityBasic2), + ), + }, + }, + }) +} + +func TestAccGoogleStorageBucketAcl_upgrade(t *testing.T) { + resource.Test(t, 
resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccGoogleStorageBucketAclDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testGoogleStorageBucketsAclBasic1, + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleStorageBucketAcl(testAclBucketName, roleEntityBasic1), + testAccCheckGoogleStorageBucketAcl(testAclBucketName, roleEntityBasic2), + ), + }, + + resource.TestStep{ + Config: testGoogleStorageBucketsAclBasic2, + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleStorageBucketAcl(testAclBucketName, roleEntityBasic2), + testAccCheckGoogleStorageBucketAcl(testAclBucketName, roleEntityBasic3_owner), + ), + }, + + resource.TestStep{ + Config: testGoogleStorageBucketsAclBasicDelete, + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleStorageBucketAclDelete(testAclBucketName, roleEntityBasic1), + testAccCheckGoogleStorageBucketAclDelete(testAclBucketName, roleEntityBasic2), + testAccCheckGoogleStorageBucketAclDelete(testAclBucketName, roleEntityBasic3_owner), + ), + }, + }, + }) +} + +func TestAccGoogleStorageBucketAcl_downgrade(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccGoogleStorageBucketAclDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testGoogleStorageBucketsAclBasic2, + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleStorageBucketAcl(testAclBucketName, roleEntityBasic2), + testAccCheckGoogleStorageBucketAcl(testAclBucketName, roleEntityBasic3_owner), + ), + }, + + resource.TestStep{ + Config: testGoogleStorageBucketsAclBasic3, + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleStorageBucketAcl(testAclBucketName, roleEntityBasic2), + testAccCheckGoogleStorageBucketAcl(testAclBucketName, roleEntityBasic3_reader), + ), + }, + + resource.TestStep{ + Config: testGoogleStorageBucketsAclBasicDelete, + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleStorageBucketAclDelete(testAclBucketName, roleEntityBasic1), + testAccCheckGoogleStorageBucketAclDelete(testAclBucketName, roleEntityBasic2), + testAccCheckGoogleStorageBucketAclDelete(testAclBucketName, roleEntityBasic3_owner), + ), + }, + }, + }) +} + +func TestAccGoogleStorageBucketAcl_predefined(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccGoogleStorageBucketAclDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testGoogleStorageBucketsAclPredefined, + }, + }, + }) +} + +func testAccCheckGoogleStorageBucketAclDelete(bucket, roleEntityS string) resource.TestCheckFunc { + return func(s *terraform.State) error { + roleEntity, _ := getRoleEntityPair(roleEntityS) + config := testAccProvider.Meta().(*Config) + + _, err := config.clientStorage.BucketAccessControls.Get(bucket, roleEntity.Entity).Do() + + if err != nil { + return nil + } + + return fmt.Errorf("Error, entity %s still exists", roleEntity.Entity) + } +} + +func testAccCheckGoogleStorageBucketAcl(bucket, roleEntityS string) resource.TestCheckFunc { + return func(s *terraform.State) error { + roleEntity, _ := getRoleEntityPair(roleEntityS) + config := testAccProvider.Meta().(*Config) + + res, err := config.clientStorage.BucketAccessControls.Get(bucket, roleEntity.Entity).Do() + + if err != nil { + return fmt.Errorf("Error retrieving contents of acl for bucket %s: %s", bucket, err) + } + + if res.Role != 
roleEntity.Role { + return fmt.Errorf("Error, Role mismatch %s != %s", res.Role, roleEntity.Role) + } + + return nil + } +} + +func testAccGoogleStorageBucketAclDestroy(s *terraform.State) error { + config := testAccProvider.Meta().(*Config) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_storage_bucket_acl" { + continue + } + + bucket := rs.Primary.Attributes["bucket"] + + _, err := config.clientStorage.BucketAccessControls.List(bucket).Do() + + if err == nil { + return fmt.Errorf("Acl for bucket %s still exists", bucket) + } + } + + return nil +} + +var testGoogleStorageBucketsAclBasic1 = fmt.Sprintf(` +resource "google_storage_bucket" "bucket" { + name = "%s" +} + +resource "google_storage_bucket_acl" "acl" { + bucket = "${google_storage_bucket.bucket.name}" + role_entity = ["%s", "%s"] +} +`, testAclBucketName, roleEntityBasic1, roleEntityBasic2) + +var testGoogleStorageBucketsAclBasic2 = fmt.Sprintf(` +resource "google_storage_bucket" "bucket" { + name = "%s" +} + +resource "google_storage_bucket_acl" "acl" { + bucket = "${google_storage_bucket.bucket.name}" + role_entity = ["%s", "%s"] +} +`, testAclBucketName, roleEntityBasic2, roleEntityBasic3_owner) + +var testGoogleStorageBucketsAclBasicDelete = fmt.Sprintf(` +resource "google_storage_bucket" "bucket" { + name = "%s" +} + +resource "google_storage_bucket_acl" "acl" { + bucket = "${google_storage_bucket.bucket.name}" + role_entity = [] +} +`, testAclBucketName) + +var testGoogleStorageBucketsAclBasic3 = fmt.Sprintf(` +resource "google_storage_bucket" "bucket" { + name = "%s" +} + +resource "google_storage_bucket_acl" "acl" { + bucket = "${google_storage_bucket.bucket.name}" + role_entity = ["%s", "%s"] +} +`, testAclBucketName, roleEntityBasic2, roleEntityBasic3_reader) + +var testGoogleStorageBucketsAclPredefined = fmt.Sprintf(` +resource "google_storage_bucket" "bucket" { + name = "%s" +} + +resource "google_storage_bucket_acl" "acl" { + bucket = "${google_storage_bucket.bucket.name}" + predefined_acl = "projectPrivate" + default_acl = "projectPrivate" +} +`, testAclBucketName) diff --git a/builtin/providers/google/resource_storage_bucket_object.go b/builtin/providers/google/resource_storage_bucket_object.go index cd5fe7d9c..231153a85 100644 --- a/builtin/providers/google/resource_storage_bucket_object.go +++ b/builtin/providers/google/resource_storage_bucket_object.go @@ -1,8 +1,8 @@ package google import ( - "os" "fmt" + "os" "github.com/hashicorp/terraform/helper/schema" @@ -13,7 +13,6 @@ func resourceStorageBucketObject() *schema.Resource { return &schema.Resource{ Create: resourceStorageBucketObjectCreate, Read: resourceStorageBucketObjectRead, - Update: resourceStorageBucketObjectUpdate, Delete: resourceStorageBucketObjectDelete, Schema: map[string]*schema.Schema{ @@ -33,10 +32,10 @@ func resourceStorageBucketObject() *schema.Resource { ForceNew: true, }, "predefined_acl": &schema.Schema{ - Type: schema.TypeString, - Default: "projectPrivate", - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Deprecated: "Please use resource \"storage_object_acl.predefined_acl\" instead.", + Optional: true, + ForceNew: true, }, "md5hash": &schema.Schema{ Type: schema.TypeString, @@ -60,7 +59,6 @@ func resourceStorageBucketObjectCreate(d *schema.ResourceData, meta interface{}) bucket := d.Get("bucket").(string) name := d.Get("name").(string) source := d.Get("source").(string) - acl := d.Get("predefined_acl").(string) file, err := os.Open(source) if err != nil { @@ -73,7 +71,9 @@ func 
resourceStorageBucketObjectCreate(d *schema.ResourceData, meta interface{}) insertCall := objectsService.Insert(bucket, object) insertCall.Name(name) insertCall.Media(file) - insertCall.PredefinedAcl(acl) + if v, ok := d.GetOk("predefined_acl"); ok { + insertCall.PredefinedAcl(v.(string)) + } _, err = insertCall.Do() @@ -107,12 +107,6 @@ func resourceStorageBucketObjectRead(d *schema.ResourceData, meta interface{}) e return nil } -func resourceStorageBucketObjectUpdate(d *schema.ResourceData, meta interface{}) error { - // The Cloud storage API doesn't support updating object data contents, - // only metadata. So once we implement metadata we'll have work to do here - return nil -} - func resourceStorageBucketObjectDelete(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) diff --git a/builtin/providers/google/resource_storage_bucket_object_test.go b/builtin/providers/google/resource_storage_bucket_object_test.go index d7be902a1..e84822fdd 100644 --- a/builtin/providers/google/resource_storage_bucket_object_test.go +++ b/builtin/providers/google/resource_storage_bucket_object_test.go @@ -1,11 +1,11 @@ package google import ( - "fmt" - "testing" - "io/ioutil" "crypto/md5" "encoding/base64" + "fmt" + "io/ioutil" + "testing" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" @@ -48,7 +48,6 @@ func testAccCheckGoogleStorageObject(bucket, object, md5 string) resource.TestCh objectsService := storage.NewObjectsService(config.clientStorage) - getCall := objectsService.Get(bucket, object) res, err := getCall.Do() @@ -56,7 +55,7 @@ func testAccCheckGoogleStorageObject(bucket, object, md5 string) resource.TestCh return fmt.Errorf("Error retrieving contents of object %s: %s", object, err) } - if (md5 != res.Md5Hash) { + if md5 != res.Md5Hash { return fmt.Errorf("Error contents of %s garbled, md5 hashes don't match (%s, %s)", object, md5, res.Md5Hash) } diff --git a/builtin/providers/google/resource_storage_bucket_test.go b/builtin/providers/google/resource_storage_bucket_test.go index a7b59c61a..8e8330050 100644 --- a/builtin/providers/google/resource_storage_bucket_test.go +++ b/builtin/providers/google/resource_storage_bucket_test.go @@ -27,8 +27,6 @@ func TestAccStorage_basic(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckCloudStorageBucketExists( "google_storage_bucket.bucket", &bucketName), - resource.TestCheckResourceAttr( - "google_storage_bucket.bucket", "predefined_acl", "projectPrivate"), resource.TestCheckResourceAttr( "google_storage_bucket.bucket", "location", "US"), resource.TestCheckResourceAttr( @@ -52,8 +50,6 @@ func TestAccStorageCustomAttributes(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckCloudStorageBucketExists( "google_storage_bucket.bucket", &bucketName), - resource.TestCheckResourceAttr( - "google_storage_bucket.bucket", "predefined_acl", "publicReadWrite"), resource.TestCheckResourceAttr( "google_storage_bucket.bucket", "location", "EU"), resource.TestCheckResourceAttr( @@ -77,8 +73,6 @@ func TestAccStorageBucketUpdate(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckCloudStorageBucketExists( "google_storage_bucket.bucket", &bucketName), - resource.TestCheckResourceAttr( - "google_storage_bucket.bucket", "predefined_acl", "projectPrivate"), resource.TestCheckResourceAttr( "google_storage_bucket.bucket", "location", "US"), resource.TestCheckResourceAttr( diff --git a/builtin/providers/google/resource_storage_object_acl.go 
b/builtin/providers/google/resource_storage_object_acl.go new file mode 100644 index 000000000..5212f81db --- /dev/null +++ b/builtin/providers/google/resource_storage_object_acl.go @@ -0,0 +1,253 @@ +package google + +import ( + "fmt" + "log" + + "github.com/hashicorp/terraform/helper/schema" + + "google.golang.org/api/storage/v1" +) + +func resourceStorageObjectAcl() *schema.Resource { + return &schema.Resource{ + Create: resourceStorageObjectAclCreate, + Read: resourceStorageObjectAclRead, + Update: resourceStorageObjectAclUpdate, + Delete: resourceStorageObjectAclDelete, + + Schema: map[string]*schema.Schema{ + "bucket": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "object": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "role_entity": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "predefined_acl": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + }, + } +} + +func getObjectAclId(object string) string { + return object + "-acl" +} + +func resourceStorageObjectAclCreate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + bucket := d.Get("bucket").(string) + object := d.Get("object").(string) + + predefined_acl := "" + role_entity := make([]interface{}, 0) + + if v, ok := d.GetOk("predefined_acl"); ok { + predefined_acl = v.(string) + } + + if v, ok := d.GetOk("role_entity"); ok { + role_entity = v.([]interface{}) + } + + if len(predefined_acl) > 0 { + if len(role_entity) > 0 { + return fmt.Errorf("Error, you cannot specify both " + + "\"predefined_acl\" and \"role_entity\"") + } + + res, err := config.clientStorage.Objects.Get(bucket, object).Do() + + if err != nil { + return fmt.Errorf("Error reading object %s: %v", object, err) + } + + res, err = config.clientStorage.Objects.Update(bucket, object, + res).PredefinedAcl(predefined_acl).Do() + + if err != nil { + return fmt.Errorf("Error updating object %s: %v", object, err) + } + + return resourceStorageObjectAclRead(d, meta) + } else if len(role_entity) > 0 { + for _, v := range role_entity { + pair, err := getRoleEntityPair(v.(string)) + + if err != nil { + return err + } + + objectAccessControl := &storage.ObjectAccessControl{ + Role: pair.Role, + Entity: pair.Entity, + } + + log.Printf("[DEBUG]: setting role = %s, entity = %s", pair.Role, pair.Entity) + + _, err = config.clientStorage.ObjectAccessControls.Insert(bucket, + object, objectAccessControl).Do() + + if err != nil { + return fmt.Errorf("Error setting ACL for %s on object %s: %v", pair.Entity, object, err) + } + } + + return resourceStorageObjectAclRead(d, meta) + } + + return fmt.Errorf("Error, you must specify either " + + "\"predefined_acl\" or \"role_entity\"") +} + +func resourceStorageObjectAclRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + bucket := d.Get("bucket").(string) + object := d.Get("object").(string) + + // Predefined ACLs cannot easily be parsed once they have been processed + // by the GCP server + if _, ok := d.GetOk("predefined_acl"); !ok { + role_entity := make([]interface{}, 0) + re_local := d.Get("role_entity").([]interface{}) + re_local_map := make(map[string]string) + for _, v := range re_local { + res, err := getRoleEntityPair(v.(string)) + + if err != nil { + return fmt.Errorf( + "Old state has malformed Role/Entity pair: %v", err) + } + + re_local_map[res.Entity] = res.Role + } + + res, err := config.clientStorage.ObjectAccessControls.List(bucket, object).Do() + + if err != nil { + return err + } + + for _, v := range res.Items { + role := "" + entity := "" + for key, val := range v.(map[string]interface{}) { + if key == "role" { + role = val.(string) + } else if key == "entity" { + entity = val.(string) + } + } + if _, in := re_local_map[entity]; in { + role_entity = append(role_entity, fmt.Sprintf("%s:%s", role, entity)) + log.Printf("[DEBUG]: saving re %s-%s", role, entity) + } + } + + d.Set("role_entity", role_entity) + } + + d.SetId(getObjectAclId(object)) + return nil +} + +func resourceStorageObjectAclUpdate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + bucket := d.Get("bucket").(string) + object := d.Get("object").(string) + + if d.HasChange("role_entity") { + o, n := d.GetChange("role_entity") + old_re, new_re := o.([]interface{}), n.([]interface{}) + + old_re_map := make(map[string]string) + for _, v := range old_re { + res, err := getRoleEntityPair(v.(string)) + + if err != nil { + return fmt.Errorf( + "Old state has malformed Role/Entity pair: %v", err) + } + + old_re_map[res.Entity] = res.Role + } + + for _, v := range new_re { + pair, err := getRoleEntityPair(v.(string)) + + if err != nil { + return err + } + + objectAccessControl := &storage.ObjectAccessControl{ + Role: pair.Role, + Entity: pair.Entity, + } + + // If the old state is missing this entity, it needs to + // be created. Otherwise it is updated + if _, ok := old_re_map[pair.Entity]; ok { + _, err = config.clientStorage.ObjectAccessControls.Update( + bucket, object, pair.Entity, objectAccessControl).Do() + } else { + _, err = config.clientStorage.ObjectAccessControls.Insert( + bucket, object, objectAccessControl).Do() + } + + // Now we only store the keys that have to be removed + delete(old_re_map, pair.Entity) + + if err != nil { + return fmt.Errorf("Error updating ACL for object %s: %v", object, err) + } + } + + for entity, _ := range old_re_map { + log.Printf("[DEBUG]: removing entity %s", entity) + err := config.clientStorage.ObjectAccessControls.Delete(bucket, object, entity).Do() + + if err != nil { + return fmt.Errorf("Error updating ACL for object %s: %v", object, err) + } + } + + return resourceStorageObjectAclRead(d, meta) + } + + return nil +} + +func resourceStorageObjectAclDelete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + bucket := d.Get("bucket").(string) + object := d.Get("object").(string) + + re_local := d.Get("role_entity").([]interface{}) + for _, v := range re_local { + res, err := getRoleEntityPair(v.(string)) + if err != nil { + return err + } + + entity := res.Entity + + log.Printf("[DEBUG]: removing entity %s", entity) + + err = config.clientStorage.ObjectAccessControls.Delete(bucket, object, + entity).Do() + + if err != nil { + return fmt.Errorf("Error deleting entity %s ACL: %s", + entity, err) + } + } + + return nil +} diff --git a/builtin/providers/google/resource_storage_object_acl_test.go b/builtin/providers/google/resource_storage_object_acl_test.go new file mode 100644 index 000000000..ff14f683c --- /dev/null +++ b/builtin/providers/google/resource_storage_object_acl_test.go @@ -0,0 +1,310 @@ +package google + +import ( + "fmt" + "io/ioutil" + "math/rand" + "testing" + "time" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + + //"google.golang.org/api/storage/v1" +) + +var tfObjectAcl, errObjectAcl = ioutil.TempFile("", "tf-gce-test") +var testAclObjectName = fmt.Sprintf("%s-%d", 
"tf-test-acl-object", + rand.New(rand.NewSource(time.Now().UnixNano())).Int()) + +func TestAccGoogleStorageObjectAcl_basic(t *testing.T) { + objectData := []byte("data data data") + ioutil.WriteFile(tfObjectAcl.Name(), objectData, 0644) + resource.Test(t, resource.TestCase{ + PreCheck: func() { + if errObjectAcl != nil { + panic(errObjectAcl) + } + testAccPreCheck(t) + }, + Providers: testAccProviders, + CheckDestroy: testAccGoogleStorageObjectAclDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testGoogleStorageObjectsAclBasic1, + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleStorageObjectAcl(testAclBucketName, + testAclObjectName, roleEntityBasic1), + testAccCheckGoogleStorageObjectAcl(testAclBucketName, + testAclObjectName, roleEntityBasic2), + ), + }, + }, + }) +} + +func TestAccGoogleStorageObjectAcl_upgrade(t *testing.T) { + objectData := []byte("data data data") + ioutil.WriteFile(tfObjectAcl.Name(), objectData, 0644) + resource.Test(t, resource.TestCase{ + PreCheck: func() { + if errObjectAcl != nil { + panic(errObjectAcl) + } + testAccPreCheck(t) + }, + Providers: testAccProviders, + CheckDestroy: testAccGoogleStorageObjectAclDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testGoogleStorageObjectsAclBasic1, + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleStorageObjectAcl(testAclBucketName, + testAclObjectName, roleEntityBasic1), + testAccCheckGoogleStorageObjectAcl(testAclBucketName, + testAclObjectName, roleEntityBasic2), + ), + }, + + resource.TestStep{ + Config: testGoogleStorageObjectsAclBasic2, + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleStorageObjectAcl(testAclBucketName, + testAclObjectName, roleEntityBasic2), + testAccCheckGoogleStorageObjectAcl(testAclBucketName, + testAclObjectName, roleEntityBasic3_owner), + ), + }, + + resource.TestStep{ + Config: testGoogleStorageObjectsAclBasicDelete, + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleStorageObjectAclDelete(testAclBucketName, + testAclObjectName, roleEntityBasic1), + testAccCheckGoogleStorageObjectAclDelete(testAclBucketName, + testAclObjectName, roleEntityBasic2), + testAccCheckGoogleStorageObjectAclDelete(testAclBucketName, + testAclObjectName, roleEntityBasic3_reader), + ), + }, + }, + }) +} + +func TestAccGoogleStorageObjectAcl_downgrade(t *testing.T) { + objectData := []byte("data data data") + ioutil.WriteFile(tfObjectAcl.Name(), objectData, 0644) + resource.Test(t, resource.TestCase{ + PreCheck: func() { + if errObjectAcl != nil { + panic(errObjectAcl) + } + testAccPreCheck(t) + }, + Providers: testAccProviders, + CheckDestroy: testAccGoogleStorageObjectAclDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testGoogleStorageObjectsAclBasic2, + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleStorageObjectAcl(testAclBucketName, + testAclObjectName, roleEntityBasic2), + testAccCheckGoogleStorageObjectAcl(testAclBucketName, + testAclObjectName, roleEntityBasic3_owner), + ), + }, + + resource.TestStep{ + Config: testGoogleStorageObjectsAclBasic3, + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleStorageObjectAcl(testAclBucketName, + testAclObjectName, roleEntityBasic2), + testAccCheckGoogleStorageObjectAcl(testAclBucketName, + testAclObjectName, roleEntityBasic3_reader), + ), + }, + + resource.TestStep{ + Config: testGoogleStorageObjectsAclBasicDelete, + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleStorageObjectAclDelete(testAclBucketName, + testAclObjectName, 
roleEntityBasic1), + testAccCheckGoogleStorageObjectAclDelete(testAclBucketName, + testAclObjectName, roleEntityBasic2), + testAccCheckGoogleStorageObjectAclDelete(testAclBucketName, + testAclObjectName, roleEntityBasic3_reader), + ), + }, + }, + }) +} + +func TestAccGoogleStorageObjectAcl_predefined(t *testing.T) { + objectData := []byte("data data data") + ioutil.WriteFile(tfObjectAcl.Name(), objectData, 0644) + resource.Test(t, resource.TestCase{ + PreCheck: func() { + if errObjectAcl != nil { + panic(errObjectAcl) + } + testAccPreCheck(t) + }, + Providers: testAccProviders, + CheckDestroy: testAccGoogleStorageObjectAclDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testGoogleStorageObjectsAclPredefined, + }, + }, + }) +} + +func testAccCheckGoogleStorageObjectAcl(bucket, object, roleEntityS string) resource.TestCheckFunc { + return func(s *terraform.State) error { + roleEntity, _ := getRoleEntityPair(roleEntityS) + config := testAccProvider.Meta().(*Config) + + res, err := config.clientStorage.ObjectAccessControls.Get(bucket, + object, roleEntity.Entity).Do() + + if err != nil { + return fmt.Errorf("Error retrieving contents of acl for bucket %s: %s", bucket, err) + } + + if res.Role != roleEntity.Role { + return fmt.Errorf("Error, Role mismatch %s != %s", res.Role, roleEntity.Role) + } + + return nil + } +} + +func testAccCheckGoogleStorageObjectAclDelete(bucket, object, roleEntityS string) resource.TestCheckFunc { + return func(s *terraform.State) error { + roleEntity, _ := getRoleEntityPair(roleEntityS) + config := testAccProvider.Meta().(*Config) + + _, err := config.clientStorage.ObjectAccessControls.Get(bucket, + object, roleEntity.Entity).Do() + + if err != nil { + return nil + } + + return fmt.Errorf("Error, Entity still exists %s", roleEntity.Entity) + } +} + +func testAccGoogleStorageObjectAclDestroy(s *terraform.State) error { + config := testAccProvider.Meta().(*Config) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_storage_bucket_acl" { + continue + } + + bucket := rs.Primary.Attributes["bucket"] + object := rs.Primary.Attributes["object"] + + _, err := config.clientStorage.ObjectAccessControls.List(bucket, object).Do() + + if err == nil { + return fmt.Errorf("Acl for bucket %s still exists", bucket) + } + } + + return nil +} + +var testGoogleStorageObjectsAclBasicDelete = fmt.Sprintf(` +resource "google_storage_bucket" "bucket" { + name = "%s" +} + +resource "google_storage_bucket_object" "object" { + name = "%s" + bucket = "${google_storage_bucket.bucket.name}" + source = "%s" +} + +resource "google_storage_object_acl" "acl" { + object = "${google_storage_bucket_object.object.name}" + bucket = "${google_storage_bucket.bucket.name}" + role_entity = [] +} +`, testAclBucketName, testAclObjectName, tfObjectAcl.Name()) + +var testGoogleStorageObjectsAclBasic1 = fmt.Sprintf(` +resource "google_storage_bucket" "bucket" { + name = "%s" +} + +resource "google_storage_bucket_object" "object" { + name = "%s" + bucket = "${google_storage_bucket.bucket.name}" + source = "%s" +} + +resource "google_storage_object_acl" "acl" { + object = "${google_storage_bucket_object.object.name}" + bucket = "${google_storage_bucket.bucket.name}" + role_entity = ["%s", "%s"] +} +`, testAclBucketName, testAclObjectName, tfObjectAcl.Name(), + roleEntityBasic1, roleEntityBasic2) + +var testGoogleStorageObjectsAclBasic2 = fmt.Sprintf(` +resource "google_storage_bucket" "bucket" { + name = "%s" +} + +resource "google_storage_bucket_object" "object" { + name 
= "%s" + bucket = "${google_storage_bucket.bucket.name}" + source = "%s" +} + +resource "google_storage_object_acl" "acl" { + object = "${google_storage_bucket_object.object.name}" + bucket = "${google_storage_bucket.bucket.name}" + role_entity = ["%s", "%s"] +} +`, testAclBucketName, testAclObjectName, tfObjectAcl.Name(), + roleEntityBasic2, roleEntityBasic3_owner) + +var testGoogleStorageObjectsAclBasic3 = fmt.Sprintf(` +resource "google_storage_bucket" "bucket" { + name = "%s" +} + +resource "google_storage_bucket_object" "object" { + name = "%s" + bucket = "${google_storage_bucket.bucket.name}" + source = "%s" +} + +resource "google_storage_object_acl" "acl" { + object = "${google_storage_bucket_object.object.name}" + bucket = "${google_storage_bucket.bucket.name}" + role_entity = ["%s", "%s"] +} +`, testAclBucketName, testAclObjectName, tfObjectAcl.Name(), + roleEntityBasic2, roleEntityBasic3_reader) + +var testGoogleStorageObjectsAclPredefined = fmt.Sprintf(` +resource "google_storage_bucket" "bucket" { + name = "%s" +} + +resource "google_storage_bucket_object" "object" { + name = "%s" + bucket = "${google_storage_bucket.bucket.name}" + source = "%s" +} + +resource "google_storage_object_acl" "acl" { + object = "${google_storage_bucket_object.object.name}" + bucket = "${google_storage_bucket.bucket.name}" + predefined_acl = "projectPrivate" +} +`, testAclBucketName, testAclObjectName, tfObjectAcl.Name()) diff --git a/builtin/providers/google/service_scope.go b/builtin/providers/google/service_scope.go index 3985a9cc9..d4c518125 100644 --- a/builtin/providers/google/service_scope.go +++ b/builtin/providers/google/service_scope.go @@ -4,18 +4,22 @@ func canonicalizeServiceScope(scope string) string { // This is a convenience map of short names used by the gcloud tool // to the GCE auth endpoints they alias to. 
scopeMap := map[string]string{ - "bigquery": "https://www.googleapis.com/auth/bigquery", - "compute-ro": "https://www.googleapis.com/auth/compute.readonly", - "compute-rw": "https://www.googleapis.com/auth/compute", - "datastore": "https://www.googleapis.com/auth/datastore", - "logging-write": "https://www.googleapis.com/auth/logging.write", - "sql": "https://www.googleapis.com/auth/sqlservice", - "sql-admin": "https://www.googleapis.com/auth/sqlservice.admin", - "storage-full": "https://www.googleapis.com/auth/devstorage.full_control", - "storage-ro": "https://www.googleapis.com/auth/devstorage.read_only", - "storage-rw": "https://www.googleapis.com/auth/devstorage.read_write", - "taskqueue": "https://www.googleapis.com/auth/taskqueue", - "userinfo-email": "https://www.googleapis.com/auth/userinfo.email", + "bigquery": "https://www.googleapis.com/auth/bigquery", + "cloud-platform": "https://www.googleapis.com/auth/cloud-platform", + "compute-ro": "https://www.googleapis.com/auth/compute.readonly", + "compute-rw": "https://www.googleapis.com/auth/compute", + "datastore": "https://www.googleapis.com/auth/datastore", + "logging-write": "https://www.googleapis.com/auth/logging.write", + "monitoring": "https://www.googleapis.com/auth/monitoring", + "sql": "https://www.googleapis.com/auth/sqlservice", + "sql-admin": "https://www.googleapis.com/auth/sqlservice.admin", + "storage-full": "https://www.googleapis.com/auth/devstorage.full_control", + "storage-ro": "https://www.googleapis.com/auth/devstorage.read_only", + "storage-rw": "https://www.googleapis.com/auth/devstorage.read_write", + "taskqueue": "https://www.googleapis.com/auth/taskqueue", + "useraccounts-ro": "https://www.googleapis.com/auth/cloud.useraccounts.readonly", + "useraccounts-rw": "https://www.googleapis.com/auth/cloud.useraccounts", + "userinfo-email": "https://www.googleapis.com/auth/userinfo.email", } if matchedUrl, ok := scopeMap[scope]; ok { diff --git a/builtin/providers/heroku/resource_heroku_app.go b/builtin/providers/heroku/resource_heroku_app.go index 52954aa5d..4c2f3bf97 100644 --- a/builtin/providers/heroku/resource_heroku_app.go +++ b/builtin/providers/heroku/resource_heroku_app.go @@ -5,7 +5,7 @@ import ( "log" "github.com/cyberdelia/heroku-go/v3" - "github.com/hashicorp/terraform/helper/multierror" + "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform/helper/schema" ) diff --git a/builtin/providers/null/resource.go b/builtin/providers/null/resource.go index 0badf346c..a8467ad3d 100644 --- a/builtin/providers/null/resource.go +++ b/builtin/providers/null/resource.go @@ -16,10 +16,15 @@ func resource() *schema.Resource { return &schema.Resource{ Create: resourceCreate, Read: resourceRead, - Update: resourceUpdate, Delete: resourceDelete, - Schema: map[string]*schema.Schema{}, + Schema: map[string]*schema.Schema{ + "triggers": &schema.Schema{ + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + }, + }, } } @@ -32,10 +37,6 @@ func resourceRead(d *schema.ResourceData, meta interface{}) error { return nil } -func resourceUpdate(d *schema.ResourceData, meta interface{}) error { - return nil -} - func resourceDelete(d *schema.ResourceData, meta interface{}) error { d.SetId("") return nil diff --git a/builtin/providers/openstack/resource_openstack_blockstorage_volume_v1.go b/builtin/providers/openstack/resource_openstack_blockstorage_volume_v1.go index cd5a5d567..e049269a9 100644 --- a/builtin/providers/openstack/resource_openstack_blockstorage_volume_v1.go +++ 
b/builtin/providers/openstack/resource_openstack_blockstorage_volume_v1.go @@ -136,7 +136,7 @@ func resourceBlockStorageVolumeV1Create(d *schema.ResourceData, meta interface{} v.ID) stateConf := &resource.StateChangeConf{ - Pending: []string{"downloading"}, + Pending: []string{"downloading"}, Target: "available", Refresh: VolumeV1StateRefreshFunc(blockStorageClient, v.ID), Timeout: 10 * time.Minute, diff --git a/builtin/providers/openstack/resource_openstack_compute_instance_v2.go b/builtin/providers/openstack/resource_openstack_compute_instance_v2.go index f08f94031..3101f41bc 100644 --- a/builtin/providers/openstack/resource_openstack_compute_instance_v2.go +++ b/builtin/providers/openstack/resource_openstack_compute_instance_v2.go @@ -610,14 +610,6 @@ func resourceComputeInstanceV2Update(d *schema.ResourceData, meta interface{}) e log.Printf("[DEBUG] Security groups to remove: %v", secgroupsToRemove) - for _, g := range secgroupsToAdd.List() { - err := secgroups.AddServerToGroup(computeClient, d.Id(), g.(string)).ExtractErr() - if err != nil { - return fmt.Errorf("Error adding security group to OpenStack server (%s): %s", d.Id(), err) - } - log.Printf("[DEBUG] Added security group (%s) to instance (%s)", g.(string), d.Id()) - } - for _, g := range secgroupsToRemove.List() { err := secgroups.RemoveServerFromGroup(computeClient, d.Id(), g.(string)).ExtractErr() if err != nil { @@ -634,6 +626,14 @@ func resourceComputeInstanceV2Update(d *schema.ResourceData, meta interface{}) e log.Printf("[DEBUG] Removed security group (%s) from instance (%s)", g.(string), d.Id()) } } + for _, g := range secgroupsToAdd.List() { + err := secgroups.AddServerToGroup(computeClient, d.Id(), g.(string)).ExtractErr() + if err != nil { + return fmt.Errorf("Error adding security group to OpenStack server (%s): %s", d.Id(), err) + } + log.Printf("[DEBUG] Added security group (%s) to instance (%s)", g.(string), d.Id()) + } + } if d.HasChange("admin_pass") { diff --git a/builtin/providers/openstack/resource_openstack_compute_secgroup_v2.go b/builtin/providers/openstack/resource_openstack_compute_secgroup_v2.go index e6d8be8ea..18812cb59 100644 --- a/builtin/providers/openstack/resource_openstack_compute_secgroup_v2.go +++ b/builtin/providers/openstack/resource_openstack_compute_secgroup_v2.go @@ -219,7 +219,7 @@ func resourceComputeSecGroupV2Delete(d *schema.ResourceData, meta interface{}) e } func resourceSecGroupRulesV2(d *schema.ResourceData) []secgroups.CreateRuleOpts { - rawRules := (d.Get("rule")).([]interface{}) + rawRules := d.Get("rule").([]interface{}) createRuleOptsList := make([]secgroups.CreateRuleOpts, len(rawRules)) for i, raw := range rawRules { rawMap := raw.(map[string]interface{}) diff --git a/builtin/providers/openstack/resource_openstack_lb_pool_v1.go b/builtin/providers/openstack/resource_openstack_lb_pool_v1.go index 1384796d5..64e0436db 100644 --- a/builtin/providers/openstack/resource_openstack_lb_pool_v1.go +++ b/builtin/providers/openstack/resource_openstack_lb_pool_v1.go @@ -292,7 +292,7 @@ func resourcePoolMonitorIDsV1(d *schema.ResourceData) []string { } func resourcePoolMembersV1(d *schema.ResourceData) []members.CreateOpts { - memberOptsRaw := (d.Get("member")).(*schema.Set) + memberOptsRaw := d.Get("member").(*schema.Set) memberOpts := make([]members.CreateOpts, memberOptsRaw.Len()) for i, raw := range memberOptsRaw.List() { rawMap := raw.(map[string]interface{}) diff --git a/builtin/providers/openstack/resource_openstack_networking_floatingip_v2.go 
b/builtin/providers/openstack/resource_openstack_networking_floatingip_v2.go index 1b81c6a96..37f1ca7cf 100644 --- a/builtin/providers/openstack/resource_openstack_networking_floatingip_v2.go +++ b/builtin/providers/openstack/resource_openstack_networking_floatingip_v2.go @@ -14,6 +14,7 @@ func resourceNetworkingFloatingIPV2() *schema.Resource { return &schema.Resource{ Create: resourceNetworkFloatingIPV2Create, Read: resourceNetworkFloatingIPV2Read, + Update: resourceNetworkFloatingIPV2Update, Delete: resourceNetworkFloatingIPV2Delete, Schema: map[string]*schema.Schema{ @@ -33,6 +34,11 @@ func resourceNetworkingFloatingIPV2() *schema.Resource { ForceNew: true, DefaultFunc: envDefaultFunc("OS_POOL_NAME"), }, + "port_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "", + }, }, } } @@ -53,6 +59,7 @@ func resourceNetworkFloatingIPV2Create(d *schema.ResourceData, meta interface{}) } createOpts := floatingips.CreateOpts{ FloatingNetworkID: poolID, + PortID: d.Get("port_id").(string), } log.Printf("[DEBUG] Create Options: %#v", createOpts) floatingIP, err := floatingips.Create(networkClient, createOpts).Extract() @@ -78,6 +85,7 @@ func resourceNetworkFloatingIPV2Read(d *schema.ResourceData, meta interface{}) e } d.Set("address", floatingIP.FloatingIP) + d.Set("port_id", floatingIP.PortID) poolName, err := getNetworkName(d, meta, floatingIP.FloatingNetworkID) if err != nil { return fmt.Errorf("Error retrieving floating IP pool name: %s", err) @@ -87,6 +95,29 @@ func resourceNetworkFloatingIPV2Read(d *schema.ResourceData, meta interface{}) e return nil } +func resourceNetworkFloatingIPV2Update(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack network client: %s", err) + } + + var updateOpts floatingips.UpdateOpts + + if d.HasChange("port_id") { + updateOpts.PortID = d.Get("port_id").(string) + } + + log.Printf("[DEBUG] Update Options: %#v", updateOpts) + + _, err = floatingips.Update(networkClient, d.Id(), updateOpts).Extract() + if err != nil { + return fmt.Errorf("Error updating floating IP: %s", err) + } + + return resourceNetworkFloatingIPV2Read(d, meta) +} + func resourceNetworkFloatingIPV2Delete(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) networkClient, err := config.networkingV2Client(d.Get("region").(string)) diff --git a/builtin/providers/packet/config.go b/builtin/providers/packet/config.go new file mode 100644 index 000000000..bce54bf48 --- /dev/null +++ b/builtin/providers/packet/config.go @@ -0,0 +1,19 @@ +package packet + +import ( + "github.com/hashicorp/go-cleanhttp" + "github.com/packethost/packngo" +) + +const ( + consumerToken = "aZ9GmqHTPtxevvFq9SK3Pi2yr9YCbRzduCSXF2SNem5sjB91mDq7Th3ZwTtRqMWZ" +) + +type Config struct { + AuthToken string +} + +// Client() returns a new client for accessing packet. +func (c *Config) Client() *packngo.Client { + return packngo.NewClient(consumerToken, c.AuthToken, cleanhttp.DefaultClient()) +} diff --git a/builtin/providers/packet/provider.go b/builtin/providers/packet/provider.go new file mode 100644 index 000000000..c1efd6e83 --- /dev/null +++ b/builtin/providers/packet/provider.go @@ -0,0 +1,36 @@ +package packet + +import ( + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +// Provider returns a schema.Provider for Packet. 
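+// A minimal configuration sketch (the token value is a placeholder, not part
+// of this changeset):
+//
+//	provider "packet" {
+//	    auth_token = "..."    # falls back to the PACKET_AUTH_TOKEN env var
+//	}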
+func Provider() terraform.ResourceProvider { + return &schema.Provider{ + Schema: map[string]*schema.Schema{ + "auth_token": &schema.Schema{ + Type: schema.TypeString, + Required: true, + DefaultFunc: schema.EnvDefaultFunc("PACKET_AUTH_TOKEN", nil), + Description: "The API auth key for API operations.", + }, + }, + + ResourcesMap: map[string]*schema.Resource{ + "packet_device": resourcePacketDevice(), + "packet_ssh_key": resourcePacketSSHKey(), + "packet_project": resourcePacketProject(), + }, + + ConfigureFunc: providerConfigure, + } +} + +func providerConfigure(d *schema.ResourceData) (interface{}, error) { + config := Config{ + AuthToken: d.Get("auth_token").(string), + } + + return config.Client(), nil +} diff --git a/builtin/providers/packet/provider_test.go b/builtin/providers/packet/provider_test.go new file mode 100644 index 000000000..5483c4fb0 --- /dev/null +++ b/builtin/providers/packet/provider_test.go @@ -0,0 +1,35 @@ +package packet + +import ( + "os" + "testing" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +var testAccProviders map[string]terraform.ResourceProvider +var testAccProvider *schema.Provider + +func init() { + testAccProvider = Provider().(*schema.Provider) + testAccProviders = map[string]terraform.ResourceProvider{ + "packet": testAccProvider, + } +} + +func TestProvider(t *testing.T) { + if err := Provider().(*schema.Provider).InternalValidate(); err != nil { + t.Fatalf("err: %s", err) + } +} + +func TestProvider_impl(t *testing.T) { + var _ terraform.ResourceProvider = Provider() +} + +func testAccPreCheck(t *testing.T) { + if v := os.Getenv("PACKET_AUTH_TOKEN"); v == "" { + t.Fatal("PACKET_AUTH_TOKEN must be set for acceptance tests") + } +} diff --git a/builtin/providers/packet/resource_packet_device.go b/builtin/providers/packet/resource_packet_device.go new file mode 100644 index 000000000..56fc7afe5 --- /dev/null +++ b/builtin/providers/packet/resource_packet_device.go @@ -0,0 +1,302 @@ +package packet + +import ( + "fmt" + "log" + "time" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" + "github.com/packethost/packngo" +) + +func resourcePacketDevice() *schema.Resource { + return &schema.Resource{ + Create: resourcePacketDeviceCreate, + Read: resourcePacketDeviceRead, + Update: resourcePacketDeviceUpdate, + Delete: resourcePacketDeviceDelete, + + Schema: map[string]*schema.Schema{ + "project_id": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "hostname": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "operating_system": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "facility": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "plan": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "billing_cycle": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "state": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "locked": &schema.Schema{ + Type: schema.TypeBool, + Computed: true, + }, + + "network": &schema.Schema{ + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "address": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "gateway": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "family": 
&schema.Schema{
+							Type:     schema.TypeInt,
+							Computed: true,
+						},
+
+						"cidr": &schema.Schema{
+							Type:     schema.TypeInt,
+							Computed: true,
+						},
+
+						"public": &schema.Schema{
+							Type:     schema.TypeBool,
+							Computed: true,
+						},
+					},
+				},
+			},
+
+			"created": &schema.Schema{
+				Type:     schema.TypeString,
+				Computed: true,
+			},
+
+			"updated": &schema.Schema{
+				Type:     schema.TypeString,
+				Computed: true,
+			},
+
+			"user_data": &schema.Schema{
+				Type:     schema.TypeString,
+				Optional: true,
+			},
+
+			"tags": &schema.Schema{
+				Type:     schema.TypeList,
+				Optional: true,
+				Elem:     &schema.Schema{Type: schema.TypeString},
+			},
+		},
+	}
+}
+
+func resourcePacketDeviceCreate(d *schema.ResourceData, meta interface{}) error {
+	client := meta.(*packngo.Client)
+
+	createRequest := &packngo.DeviceCreateRequest{
+		HostName:     d.Get("hostname").(string),
+		Plan:         d.Get("plan").(string),
+		Facility:     d.Get("facility").(string),
+		OS:           d.Get("operating_system").(string),
+		BillingCycle: d.Get("billing_cycle").(string),
+		ProjectID:    d.Get("project_id").(string),
+	}
+
+	if attr, ok := d.GetOk("user_data"); ok {
+		createRequest.UserData = attr.(string)
+	}
+
+	tags := d.Get("tags.#").(int)
+	if tags > 0 {
+		createRequest.Tags = make([]string, 0, tags)
+		for i := 0; i < tags; i++ {
+			key := fmt.Sprintf("tags.%d", i)
+			createRequest.Tags = append(createRequest.Tags, d.Get(key).(string))
+		}
+	}
+
+	log.Printf("[DEBUG] Device create configuration: %#v", createRequest)
+
+	newDevice, _, err := client.Devices.Create(createRequest)
+	if err != nil {
+		return fmt.Errorf("Error creating device: %s", err)
+	}
+
+	// Assign the device id
+	d.SetId(newDevice.ID)
+
+	log.Printf("[INFO] Device ID: %s", d.Id())
+
+	_, err = WaitForDeviceAttribute(d, "active", []string{"provisioning"}, "state", meta)
+	if err != nil {
+		return fmt.Errorf(
+			"Error waiting for device (%s) to become ready: %s", d.Id(), err)
+	}
+
+	return resourcePacketDeviceRead(d, meta)
+}
+
+func resourcePacketDeviceRead(d *schema.ResourceData, meta interface{}) error {
+	client := meta.(*packngo.Client)
+
+	// Retrieve the device properties for updating the state
+	device, _, err := client.Devices.Get(d.Id())
+	if err != nil {
+		return fmt.Errorf("Error retrieving device: %s", err)
+	}
+
+	d.Set("hostname", device.Hostname)
+	d.Set("plan", device.Plan.Slug)
+	d.Set("facility", device.Facility.Code)
+	d.Set("operating_system", device.OS.Slug)
+	d.Set("state", device.State)
+	d.Set("billing_cycle", device.BillingCycle)
+	d.Set("locked", device.Locked)
+	d.Set("created", device.Created)
+	d.Set("updated", device.Updated)
+
+	tags := make([]string, 0)
+	for _, tag := range device.Tags {
+		tags = append(tags, tag)
+	}
+	d.Set("tags", tags)
+
+	networks := make([]map[string]interface{}, 0, 1)
+	for _, ip := range device.Network {
+		network := make(map[string]interface{})
+		network["address"] = ip.Address
+		network["gateway"] = ip.Gateway
+		network["family"] = ip.Family
+		network["cidr"] = ip.Cidr
+		network["public"] = ip.Public
+		networks = append(networks, network)
+	}
+	d.Set("network", networks)
+
+	return nil
+}
+
+func resourcePacketDeviceUpdate(d *schema.ResourceData, meta interface{}) error {
+	client := meta.(*packngo.Client)
+
+	if d.HasChange("locked") && d.Get("locked").(bool) {
+		_, err := client.Devices.Lock(d.Id())
+
+		if err != nil {
+			return fmt.Errorf(
+				"Error locking device (%s): %s", d.Id(), err)
+		}
+	} else if d.HasChange("locked") {
+		_, err := client.Devices.Unlock(d.Id())
+
+		if err != nil {
+			return fmt.Errorf(
+				"Error unlocking device (%s): %s", d.Id(), err)
+		}
+	}
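+
+	// NOTE: only the lock flag is reconciled here; changes to user_data or
+	// tags are not yet pushed back to the API by this update function.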
+ + return resourcePacketDeviceRead(d, meta) +} + +func resourcePacketDeviceDelete(d *schema.ResourceData, meta interface{}) error { + client := meta.(*packngo.Client) + + log.Printf("[INFO] Deleting device: %s", d.Id()) + if _, err := client.Devices.Delete(d.Id()); err != nil { + return fmt.Errorf("Error deleting device: %s", err) + } + + return nil +} + +func WaitForDeviceAttribute( + d *schema.ResourceData, target string, pending []string, attribute string, meta interface{}) (interface{}, error) { + // Wait for the device so we can get the networking attributes + // that show up after a while + log.Printf( + "[INFO] Waiting for device (%s) to have %s of %s", + d.Id(), attribute, target) + + stateConf := &resource.StateChangeConf{ + Pending: pending, + Target: target, + Refresh: newDeviceStateRefreshFunc(d, attribute, meta), + Timeout: 60 * time.Minute, + Delay: 10 * time.Second, + MinTimeout: 3 * time.Second, + } + + return stateConf.WaitForState() +} + +func newDeviceStateRefreshFunc( + d *schema.ResourceData, attribute string, meta interface{}) resource.StateRefreshFunc { + client := meta.(*packngo.Client) + return func() (interface{}, string, error) { + err := resourcePacketDeviceRead(d, meta) + if err != nil { + return nil, "", err + } + + // See if we can access our attribute + if attr, ok := d.GetOk(attribute); ok { + // Retrieve the device properties + device, _, err := client.Devices.Get(d.Id()) + if err != nil { + return nil, "", fmt.Errorf("Error retrieving device: %s", err) + } + + return &device, attr.(string), nil + } + + return nil, "", nil + } +} + +// Powers on the device and waits for it to be active +func powerOnAndWait(d *schema.ResourceData, meta interface{}) error { + client := meta.(*packngo.Client) + _, err := client.Devices.PowerOn(d.Id()) + if err != nil { + return err + } + + // Wait for power on + _, err = WaitForDeviceAttribute(d, "active", []string{"off"}, "state", client) + if err != nil { + return err + } + + return nil +} diff --git a/builtin/providers/packet/resource_packet_project.go b/builtin/providers/packet/resource_packet_project.go new file mode 100644 index 000000000..e41ef1381 --- /dev/null +++ b/builtin/providers/packet/resource_packet_project.go @@ -0,0 +1,123 @@ +package packet + +import ( + "fmt" + "log" + "strings" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/packethost/packngo" +) + +func resourcePacketProject() *schema.Resource { + return &schema.Resource{ + Create: resourcePacketProjectCreate, + Read: resourcePacketProjectRead, + Update: resourcePacketProjectUpdate, + Delete: resourcePacketProjectDelete, + + Schema: map[string]*schema.Schema{ + "id": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "payment_method": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + + "created": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "updated": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func resourcePacketProjectCreate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*packngo.Client) + + createRequest := &packngo.ProjectCreateRequest{ + Name: d.Get("name").(string), + PaymentMethod: d.Get("payment_method").(string), + } + + log.Printf("[DEBUG] Project create configuration: %#v", createRequest) + project, _, err := client.Projects.Create(createRequest) + if err != nil { + return fmt.Errorf("Error creating Project: %s", err) + } + + 
d.SetId(project.ID)
+	log.Printf("[INFO] Project created: %s", project.ID)
+
+	return resourcePacketProjectRead(d, meta)
+}
+
+func resourcePacketProjectRead(d *schema.ResourceData, meta interface{}) error {
+	client := meta.(*packngo.Client)
+
+	key, _, err := client.Projects.Get(d.Id())
+	if err != nil {
+		// If the project was somehow already destroyed, mark it as
+		// successfully gone
+		if strings.Contains(err.Error(), "404") {
+			d.SetId("")
+			return nil
+		}
+
+		return fmt.Errorf("Error retrieving Project: %s", err)
+	}
+
+	d.Set("id", key.ID)
+	d.Set("name", key.Name)
+	d.Set("created", key.Created)
+	d.Set("updated", key.Updated)
+
+	return nil
+}
+
+func resourcePacketProjectUpdate(d *schema.ResourceData, meta interface{}) error {
+	client := meta.(*packngo.Client)
+
+	updateRequest := &packngo.ProjectUpdateRequest{
+		ID:   d.Get("id").(string),
+		Name: d.Get("name").(string),
+	}
+
+	if attr, ok := d.GetOk("payment_method"); ok {
+		updateRequest.PaymentMethod = attr.(string)
+	}
+
+	log.Printf("[DEBUG] Project update: %#v", d.Get("id"))
+	_, _, err := client.Projects.Update(updateRequest)
+	if err != nil {
+		return fmt.Errorf("Failed to update Project: %s", err)
+	}
+
+	return resourcePacketProjectRead(d, meta)
+}
+
+func resourcePacketProjectDelete(d *schema.ResourceData, meta interface{}) error {
+	client := meta.(*packngo.Client)
+
+	log.Printf("[INFO] Deleting Project: %s", d.Id())
+	_, err := client.Projects.Delete(d.Id())
+	if err != nil {
+		return fmt.Errorf("Error deleting project: %s", err)
+	}
+
+	d.SetId("")
+	return nil
+}
diff --git a/builtin/providers/packet/resource_packet_project_test.go b/builtin/providers/packet/resource_packet_project_test.go
new file mode 100644
index 000000000..b0179cfbe
--- /dev/null
+++ b/builtin/providers/packet/resource_packet_project_test.go
@@ -0,0 +1,95 @@
+package packet
+
+import (
+	"fmt"
+	"testing"
+
+	"github.com/hashicorp/terraform/helper/resource"
+	"github.com/hashicorp/terraform/terraform"
+	"github.com/packethost/packngo"
+)
+
+func TestAccPacketProject_Basic(t *testing.T) {
+	var project packngo.Project
+
+	resource.Test(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testAccCheckPacketProjectDestroy,
+		Steps: []resource.TestStep{
+			resource.TestStep{
+				Config: testAccCheckPacketProjectConfig_basic,
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckPacketProjectExists("packet_project.foobar", &project),
+					testAccCheckPacketProjectAttributes(&project),
+					resource.TestCheckResourceAttr(
+						"packet_project.foobar", "name", "foobar"),
+				),
+			},
+		},
+	})
+}
+
+func testAccCheckPacketProjectDestroy(s *terraform.State) error {
+	client := testAccProvider.Meta().(*packngo.Client)
+
+	for _, rs := range s.RootModule().Resources {
+		if rs.Type != "packet_project" {
+			continue
+		}
+
+		_, _, err := client.Projects.Get(rs.Primary.ID)
+
+		if err == nil {
+			return fmt.Errorf("Project still exists")
+		}
+	}
+
+	return nil
+}
+
+func testAccCheckPacketProjectAttributes(project *packngo.Project) resource.TestCheckFunc {
+	return func(s *terraform.State) error {
+
+		if project.Name != "foobar" {
+			return fmt.Errorf("Bad name: %s", project.Name)
+		}
+
+		return nil
+	}
+}
+
+func testAccCheckPacketProjectExists(n string, project *packngo.Project) resource.TestCheckFunc {
+	return func(s *terraform.State) error {
+		rs, ok := s.RootModule().Resources[n]
+
+		if !ok {
+			return fmt.Errorf("Not found: %s", n)
+		}
+
+		if rs.Primary.ID == "" {
+			return fmt.Errorf("No Record ID is set")
+		}
+
+		client := testAccProvider.Meta().(*packngo.Client)
+
+		foundProject, _, err := client.Projects.Get(rs.Primary.ID)
+
+		if err != nil {
+			return err
+		}
+
+		if foundProject.ID != rs.Primary.ID {
+			return fmt.Errorf("Record not found: %v - %v", rs.Primary.ID, foundProject)
+		}
+
+		*project = *foundProject
+
+		return nil
+	}
+}
+
+var testAccCheckPacketProjectConfig_basic = fmt.Sprintf(`
+resource "packet_project" "foobar" {
+    name = "foobar"
+}`)
diff --git a/builtin/providers/packet/resource_packet_ssh_key.go b/builtin/providers/packet/resource_packet_ssh_key.go
new file mode 100644
index 000000000..95e04bd8c
--- /dev/null
+++ b/builtin/providers/packet/resource_packet_ssh_key.go
@@ -0,0 +1,128 @@
+package packet
+
+import (
+	"fmt"
+	"log"
+	"strings"
+
+	"github.com/hashicorp/terraform/helper/schema"
+	"github.com/packethost/packngo"
+)
+
+func resourcePacketSSHKey() *schema.Resource {
+	return &schema.Resource{
+		Create: resourcePacketSSHKeyCreate,
+		Read:   resourcePacketSSHKeyRead,
+		Update: resourcePacketSSHKeyUpdate,
+		Delete: resourcePacketSSHKeyDelete,
+
+		Schema: map[string]*schema.Schema{
+			"id": &schema.Schema{
+				Type:     schema.TypeString,
+				Computed: true,
+			},
+
+			"name": &schema.Schema{
+				Type:     schema.TypeString,
+				Required: true,
+			},
+
+			"public_key": &schema.Schema{
+				Type:     schema.TypeString,
+				Required: true,
+				ForceNew: true,
+			},
+
+			"fingerprint": &schema.Schema{
+				Type:     schema.TypeString,
+				Computed: true,
+			},
+
+			"created": &schema.Schema{
+				Type:     schema.TypeString,
+				Computed: true,
+			},
+
+			"updated": &schema.Schema{
+				Type:     schema.TypeString,
+				Computed: true,
+			},
+		},
+	}
+}
+
+func resourcePacketSSHKeyCreate(d *schema.ResourceData, meta interface{}) error {
+	client := meta.(*packngo.Client)
+
+	createRequest := &packngo.SSHKeyCreateRequest{
+		Label: d.Get("name").(string),
+		Key:   d.Get("public_key").(string),
+	}
+
+	log.Printf("[DEBUG] SSH Key create configuration: %#v", createRequest)
+	key, _, err := client.SSHKeys.Create(createRequest)
+	if err != nil {
+		return fmt.Errorf("Error creating SSH Key: %s", err)
+	}
+
+	d.SetId(key.ID)
+	log.Printf("[INFO] SSH Key: %s", key.ID)
+
+	return resourcePacketSSHKeyRead(d, meta)
+}
+
+func resourcePacketSSHKeyRead(d *schema.ResourceData, meta interface{}) error {
+	client := meta.(*packngo.Client)
+
+	key, _, err := client.SSHKeys.Get(d.Id())
+	if err != nil {
+		// If the key was somehow already destroyed, mark it as
+		// successfully gone
+		if strings.Contains(err.Error(), "404") {
+			d.SetId("")
+			return nil
+		}
+
+		return fmt.Errorf("Error retrieving SSH key: %s", err)
+	}
+
+	d.Set("id", key.ID)
+	d.Set("name", key.Label)
+	d.Set("public_key", key.Key)
+	d.Set("fingerprint", key.FingerPrint)
+	d.Set("created", key.Created)
+	d.Set("updated", key.Updated)
+
+	return nil
+}
+
+func resourcePacketSSHKeyUpdate(d *schema.ResourceData, meta interface{}) error {
+	client := meta.(*packngo.Client)
+
+	updateRequest := &packngo.SSHKeyUpdateRequest{
+		ID:    d.Get("id").(string),
+		Label: d.Get("name").(string),
+		Key:   d.Get("public_key").(string),
+	}
+
+	log.Printf("[DEBUG] SSH key update: %#v", d.Get("id"))
+	_, _, err := client.SSHKeys.Update(updateRequest)
+	if err != nil {
+		return fmt.Errorf("Failed to update SSH key: %s", err)
+	}
+
+	return resourcePacketSSHKeyRead(d, meta)
+}
+
+func resourcePacketSSHKeyDelete(d *schema.ResourceData, meta interface{}) error {
+	client := meta.(*packngo.Client)
+
+	log.Printf("[INFO] Deleting SSH key: %s", d.Id())
+	_, err := client.SSHKeys.Delete(d.Id())
+	if err != nil {
+		return fmt.Errorf("Error deleting SSH key: %s", err)
+	}
+
+	d.SetId("")
+	return nil
+}
diff --git a/builtin/providers/packet/resource_packet_ssh_key_test.go b/builtin/providers/packet/resource_packet_ssh_key_test.go
new file mode 100644
index 000000000..765086d4f
--- /dev/null
+++ b/builtin/providers/packet/resource_packet_ssh_key_test.go
@@ -0,0 +1,104 @@
+package packet
+
+import (
+	"fmt"
+	"strings"
+	"testing"
+
+	"github.com/hashicorp/terraform/helper/resource"
+	"github.com/hashicorp/terraform/terraform"
+	"github.com/packethost/packngo"
+)
+
+func TestAccPacketSSHKey_Basic(t *testing.T) {
+	var key packngo.SSHKey
+
+	resource.Test(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testAccCheckPacketSSHKeyDestroy,
+		Steps: []resource.TestStep{
+			resource.TestStep{
+				Config: testAccCheckPacketSSHKeyConfig_basic,
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckPacketSSHKeyExists("packet_ssh_key.foobar", &key),
+					testAccCheckPacketSSHKeyAttributes(&key),
+					resource.TestCheckResourceAttr(
+						"packet_ssh_key.foobar", "name", "foobar"),
+					resource.TestCheckResourceAttr(
+						"packet_ssh_key.foobar", "public_key", testAccValidPublicKey),
+				),
+			},
+		},
+	})
+}
+
+func testAccCheckPacketSSHKeyDestroy(s *terraform.State) error {
+	client := testAccProvider.Meta().(*packngo.Client)
+
+	for _, rs := range s.RootModule().Resources {
+		if rs.Type != "packet_ssh_key" {
+			continue
+		}
+
+		_, _, err := client.SSHKeys.Get(rs.Primary.ID)
+
+		if err == nil {
+			return fmt.Errorf("SSH key still exists")
+		}
+	}
+
+	return nil
+}
+
+func testAccCheckPacketSSHKeyAttributes(key *packngo.SSHKey) resource.TestCheckFunc {
+	return func(s *terraform.State) error {
+
+		if key.Label != "foobar" {
+			return fmt.Errorf("Bad name: %s", key.Label)
+		}
+
+		return nil
+	}
+}
+
+func testAccCheckPacketSSHKeyExists(n string, key *packngo.SSHKey) resource.TestCheckFunc {
+	return func(s *terraform.State) error {
+		rs, ok := s.RootModule().Resources[n]
+
+		if !ok {
+			return fmt.Errorf("Not found: %s", n)
+		}
+
+		if rs.Primary.ID == "" {
+			return fmt.Errorf("No Record ID is set")
+		}
+
+		client := testAccProvider.Meta().(*packngo.Client)
+
+		foundKey, _, err := client.SSHKeys.Get(rs.Primary.ID)
+
+		if err != nil {
+			return err
+		}
+
+		if foundKey.ID != rs.Primary.ID {
+			return fmt.Errorf("SSH Key not found: %v - %v", rs.Primary.ID, foundKey)
+		}
+
+		*key = *foundKey
+
+		return nil
+	}
+}
+
+var testAccCheckPacketSSHKeyConfig_basic = fmt.Sprintf(`
+resource "packet_ssh_key" "foobar" {
+    name = "foobar"
+    public_key = "%s"
+}`, testAccValidPublicKey)
+
+var testAccValidPublicKey = strings.TrimSpace(`
+ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCKVmnMOlHKcZK8tpt3MP1lqOLAcqcJzhsvJcjscgVERRN7/9484SOBJ3HSKxxNG5JN8owAjy5f9yYwcUg+JaUVuytn5Pv3aeYROHGGg+5G346xaq3DAwX6Y5ykr2fvjObgncQBnuU5KHWCECO/4h8uWuwh/kfniXPVjFToc+gnkqA+3RKpAecZhFXwfalQ9mMuYGFxn+fwn8cYEApsJbsEmb0iJwPiZ5hjFC8wREuiTlhPHDgkBLOiycd20op2nXzDbHfCHInquEe/gYxEitALONxm0swBOwJZwlTDOB7C6y2dzlrtxr1L59m7pCkWI4EtTRLvleehBoj3u7jB4usR
+`)
diff --git a/builtin/providers/rundeck/resource_job.go b/builtin/providers/rundeck/resource_job.go
index 7411d5746..c9af25b0b 100644
--- a/builtin/providers/rundeck/resource_job.go
+++ b/builtin/providers/rundeck/resource_job.go
@@ -340,10 +340,10 @@ func jobFromResourceData(d *schema.ResourceData) (*rundeck.JobDetail, error) {
 		LogLevel:                  d.Get("log_level").(string),
 		AllowConcurrentExecutions: d.Get("allow_concurrent_executions").(bool),
 		Dispatch: &rundeck.JobDispatch{
-			MaxThreadCount: d.Get("max_thread_count").(int),
d.Get("max_thread_count").(int), - ContinueOnError: d.Get("continue_on_error").(bool), - RankAttribute: d.Get("rank_attribute").(string), - RankOrder: d.Get("rank_order").(string), + MaxThreadCount: d.Get("max_thread_count").(int), + ContinueOnError: d.Get("continue_on_error").(bool), + RankAttribute: d.Get("rank_attribute").(string), + RankOrder: d.Get("rank_order").(string), }, } diff --git a/builtin/providers/vsphere/config.go b/builtin/providers/vsphere/config.go new file mode 100644 index 000000000..1f6af7ffd --- /dev/null +++ b/builtin/providers/vsphere/config.go @@ -0,0 +1,39 @@ +package vsphere + +import ( + "fmt" + "log" + "net/url" + + "github.com/vmware/govmomi" + "golang.org/x/net/context" +) + +const ( + defaultInsecureFlag = true +) + +type Config struct { + User string + Password string + VCenterServer string +} + +// Client() returns a new client for accessing VMWare vSphere. +func (c *Config) Client() (*govmomi.Client, error) { + u, err := url.Parse("https://" + c.VCenterServer + "/sdk") + if err != nil { + return nil, fmt.Errorf("Error parse url: %s", err) + } + + u.User = url.UserPassword(c.User, c.Password) + + client, err := govmomi.NewClient(context.TODO(), u, defaultInsecureFlag) + if err != nil { + return nil, fmt.Errorf("Error setting up client: %s", err) + } + + log.Printf("[INFO] VMWare vSphere Client configured for URL: %s", u) + + return client, nil +} diff --git a/builtin/providers/vsphere/provider.go b/builtin/providers/vsphere/provider.go new file mode 100644 index 000000000..4dce81a9d --- /dev/null +++ b/builtin/providers/vsphere/provider.go @@ -0,0 +1,50 @@ +package vsphere + +import ( + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +// Provider returns a terraform.ResourceProvider. 
+func Provider() terraform.ResourceProvider { + return &schema.Provider{ + Schema: map[string]*schema.Schema{ + "user": &schema.Schema{ + Type: schema.TypeString, + Required: true, + DefaultFunc: schema.EnvDefaultFunc("VSPHERE_USER", nil), + Description: "The user name for vSphere API operations.", + }, + + "password": &schema.Schema{ + Type: schema.TypeString, + Required: true, + DefaultFunc: schema.EnvDefaultFunc("VSPHERE_PASSWORD", nil), + Description: "The user password for vSphere API operations.", + }, + + "vcenter_server": &schema.Schema{ + Type: schema.TypeString, + Required: true, + DefaultFunc: schema.EnvDefaultFunc("VSPHERE_VCENTER", nil), + Description: "The vCenter Server name for vSphere API operations.", + }, + }, + + ResourcesMap: map[string]*schema.Resource{ + "vsphere_virtual_machine": resourceVSphereVirtualMachine(), + }, + + ConfigureFunc: providerConfigure, + } +} + +func providerConfigure(d *schema.ResourceData) (interface{}, error) { + config := Config{ + User: d.Get("user").(string), + Password: d.Get("password").(string), + VCenterServer: d.Get("vcenter_server").(string), + } + + return config.Client() +} diff --git a/builtin/providers/vsphere/provider_test.go b/builtin/providers/vsphere/provider_test.go new file mode 100644 index 000000000..bb8e4dc55 --- /dev/null +++ b/builtin/providers/vsphere/provider_test.go @@ -0,0 +1,43 @@ +package vsphere + +import ( + "os" + "testing" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +var testAccProviders map[string]terraform.ResourceProvider +var testAccProvider *schema.Provider + +func init() { + testAccProvider = Provider().(*schema.Provider) + testAccProviders = map[string]terraform.ResourceProvider{ + "vsphere": testAccProvider, + } +} + +func TestProvider(t *testing.T) { + if err := Provider().(*schema.Provider).InternalValidate(); err != nil { + t.Fatalf("err: %s", err) + } +} + +func TestProvider_impl(t *testing.T) { + var _ terraform.ResourceProvider = Provider() +} + +func testAccPreCheck(t *testing.T) { + if v := os.Getenv("VSPHERE_USER"); v == "" { + t.Fatal("VSPHERE_USER must be set for acceptance tests") + } + + if v := os.Getenv("VSPHERE_PASSWORD"); v == "" { + t.Fatal("VSPHERE_PASSWORD must be set for acceptance tests") + } + + if v := os.Getenv("VSPHERE_VCENTER"); v == "" { + t.Fatal("VSPHERE_VCENTER must be set for acceptance tests") + } +} diff --git a/builtin/providers/vsphere/resource_vsphere_virtual_machine.go b/builtin/providers/vsphere/resource_vsphere_virtual_machine.go new file mode 100644 index 000000000..c6b1292ac --- /dev/null +++ b/builtin/providers/vsphere/resource_vsphere_virtual_machine.go @@ -0,0 +1,1061 @@ +package vsphere + +import ( + "fmt" + "log" + "net" + "strings" + "time" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" + "github.com/vmware/govmomi" + "github.com/vmware/govmomi/find" + "github.com/vmware/govmomi/object" + "github.com/vmware/govmomi/property" + "github.com/vmware/govmomi/vim25/mo" + "github.com/vmware/govmomi/vim25/types" + "golang.org/x/net/context" +) + +var DefaultDNSSuffixes = []string{ + "vsphere.local", +} + +var DefaultDNSServers = []string{ + "8.8.8.8", + "8.8.4.4", +} + +type networkInterface struct { + deviceName string + label string + ipAddress string + subnetMask string + adapterType string // TODO: Make "adapter_type" argument +} + +type hardDisk struct { + size int64 + iops int64 +} + +type virtualMachine struct { + name string + datacenter string + 
cluster string + resourcePool string + datastore string + vcpu int + memoryMb int64 + template string + networkInterfaces []networkInterface + hardDisks []hardDisk + gateway string + domain string + timeZone string + dnsSuffixes []string + dnsServers []string +} + +func resourceVSphereVirtualMachine() *schema.Resource { + return &schema.Resource{ + Create: resourceVSphereVirtualMachineCreate, + Read: resourceVSphereVirtualMachineRead, + Delete: resourceVSphereVirtualMachineDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "vcpu": &schema.Schema{ + Type: schema.TypeInt, + Required: true, + ForceNew: true, + }, + + "memory": &schema.Schema{ + Type: schema.TypeInt, + Required: true, + ForceNew: true, + }, + + "datacenter": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "cluster": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "resource_pool": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "gateway": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "domain": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Default: "vsphere.local", + }, + + "time_zone": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Default: "Etc/UTC", + }, + + "dns_suffixes": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + ForceNew: true, + }, + + "dns_servers": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + ForceNew: true, + }, + + "network_interface": &schema.Schema{ + Type: schema.TypeList, + Required: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "label": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "ip_address": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + + "subnet_mask": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + + "adapter_type": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + }, + }, + }, + + "disk": &schema.Schema{ + Type: schema.TypeList, + Required: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "template": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "datastore": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "size": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + }, + + "iops": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + }, + }, + }, + }, + + "boot_delay": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + }, + }, + } +} + +func resourceVSphereVirtualMachineCreate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*govmomi.Client) + + vm := virtualMachine{ + name: d.Get("name").(string), + vcpu: d.Get("vcpu").(int), + memoryMb: int64(d.Get("memory").(int)), + } + + if v, ok := d.GetOk("datacenter"); ok { + vm.datacenter = v.(string) + } + + if v, ok := d.GetOk("cluster"); ok { + vm.cluster = v.(string) + } + + if v, ok := d.GetOk("resource_pool"); ok { + vm.resourcePool = v.(string) + } + + if v, ok := 
d.GetOk("gateway"); ok { + vm.gateway = v.(string) + } + + if v, ok := d.GetOk("domain"); ok { + vm.domain = v.(string) + } + + if v, ok := d.GetOk("time_zone"); ok { + vm.timeZone = v.(string) + } + + if raw, ok := d.GetOk("dns_suffixes"); ok { + for _, v := range raw.([]interface{}) { + vm.dnsSuffixes = append(vm.dnsSuffixes, v.(string)) + } + } else { + vm.dnsSuffixes = DefaultDNSSuffixes + } + + if raw, ok := d.GetOk("dns_servers"); ok { + for _, v := range raw.([]interface{}) { + vm.dnsServers = append(vm.dnsServers, v.(string)) + } + } else { + vm.dnsServers = DefaultDNSServers + } + + if vL, ok := d.GetOk("network_interface"); ok { + networks := make([]networkInterface, len(vL.([]interface{}))) + for i, v := range vL.([]interface{}) { + network := v.(map[string]interface{}) + networks[i].label = network["label"].(string) + if v, ok := network["ip_address"].(string); ok && v != "" { + networks[i].ipAddress = v + } + if v, ok := network["subnet_mask"].(string); ok && v != "" { + networks[i].subnetMask = v + } + } + vm.networkInterfaces = networks + log.Printf("[DEBUG] network_interface init: %v", networks) + } + + if vL, ok := d.GetOk("disk"); ok { + disks := make([]hardDisk, len(vL.([]interface{}))) + for i, v := range vL.([]interface{}) { + disk := v.(map[string]interface{}) + if i == 0 { + if v, ok := disk["template"].(string); ok && v != "" { + vm.template = v + } else { + if v, ok := disk["size"].(int); ok && v != 0 { + disks[i].size = int64(v) + } else { + return fmt.Errorf("If template argument is not specified, size argument is required.") + } + } + if v, ok := disk["datastore"].(string); ok && v != "" { + vm.datastore = v + } + } else { + if v, ok := disk["size"].(int); ok && v != 0 { + disks[i].size = int64(v) + } else { + return fmt.Errorf("Size argument is required.") + } + } + if v, ok := disk["iops"].(int); ok && v != 0 { + disks[i].iops = int64(v) + } + } + vm.hardDisks = disks + log.Printf("[DEBUG] disk init: %v", disks) + } + + if vm.template != "" { + err := vm.deployVirtualMachine(client) + if err != nil { + return err + } + } else { + err := vm.createVirtualMachine(client) + if err != nil { + return err + } + } + + if _, ok := d.GetOk("network_interface.0.ip_address"); !ok { + if v, ok := d.GetOk("boot_delay"); ok { + stateConf := &resource.StateChangeConf{ + Pending: []string{"pending"}, + Target: "active", + Refresh: waitForNetworkingActive(client, vm.datacenter, vm.name), + Timeout: 600 * time.Second, + Delay: time.Duration(v.(int)) * time.Second, + MinTimeout: 2 * time.Second, + } + + _, err := stateConf.WaitForState() + if err != nil { + return err + } + } + } + d.SetId(vm.name) + log.Printf("[INFO] Created virtual machine: %s", d.Id()) + + return resourceVSphereVirtualMachineRead(d, meta) +} + +func resourceVSphereVirtualMachineRead(d *schema.ResourceData, meta interface{}) error { + client := meta.(*govmomi.Client) + dc, err := getDatacenter(client, d.Get("datacenter").(string)) + if err != nil { + return err + } + finder := find.NewFinder(client.Client, true) + finder = finder.SetDatacenter(dc) + + vm, err := finder.VirtualMachine(context.TODO(), d.Get("name").(string)) + if err != nil { + log.Printf("[ERROR] Virtual machine not found: %s", d.Get("name").(string)) + d.SetId("") + return nil + } + + var mvm mo.VirtualMachine + + collector := property.DefaultCollector(client.Client) + if err := collector.RetrieveOne(context.TODO(), vm.Reference(), []string{"guest", "summary", "datastore"}, &mvm); err != nil { + return err + } + + log.Printf("[DEBUG] %#v", dc) 
+ log.Printf("[DEBUG] %#v", mvm.Summary.Config) + log.Printf("[DEBUG] %#v", mvm.Guest.Net) + + networkInterfaces := make([]map[string]interface{}, 0) + for _, v := range mvm.Guest.Net { + if v.DeviceConfigId >= 0 { + log.Printf("[DEBUG] %#v", v.Network) + networkInterface := make(map[string]interface{}) + networkInterface["label"] = v.Network + if len(v.IpAddress) > 0 { + log.Printf("[DEBUG] %#v", v.IpAddress[0]) + networkInterface["ip_address"] = v.IpAddress[0] + + m := net.CIDRMask(v.IpConfig.IpAddress[0].PrefixLength, 32) + subnetMask := net.IPv4(m[0], m[1], m[2], m[3]) + networkInterface["subnet_mask"] = subnetMask.String() + log.Printf("[DEBUG] %#v", subnetMask.String()) + } + networkInterfaces = append(networkInterfaces, networkInterface) + } + } + err = d.Set("network_interface", networkInterfaces) + if err != nil { + return fmt.Errorf("Invalid network interfaces to set: %#v", networkInterfaces) + } + + var rootDatastore string + for _, v := range mvm.Datastore { + var md mo.Datastore + if err := collector.RetrieveOne(context.TODO(), v, []string{"name", "parent"}, &md); err != nil { + return err + } + if md.Parent.Type == "StoragePod" { + var msp mo.StoragePod + if err := collector.RetrieveOne(context.TODO(), *md.Parent, []string{"name"}, &msp); err != nil { + return err + } + rootDatastore = msp.Name + log.Printf("[DEBUG] %#v", msp.Name) + } else { + rootDatastore = md.Name + log.Printf("[DEBUG] %#v", md.Name) + } + break + } + + d.Set("datacenter", dc) + d.Set("memory", mvm.Summary.Config.MemorySizeMB) + d.Set("cpu", mvm.Summary.Config.NumCpu) + d.Set("datastore", rootDatastore) + + // Initialize the connection info + d.SetConnInfo(map[string]string{ + "type": "ssh", + "host": networkInterfaces[0]["ip_address"].(string), + }) + + return nil +} + +func resourceVSphereVirtualMachineDelete(d *schema.ResourceData, meta interface{}) error { + client := meta.(*govmomi.Client) + dc, err := getDatacenter(client, d.Get("datacenter").(string)) + if err != nil { + return err + } + finder := find.NewFinder(client.Client, true) + finder = finder.SetDatacenter(dc) + + vm, err := finder.VirtualMachine(context.TODO(), d.Get("name").(string)) + if err != nil { + return err + } + + log.Printf("[INFO] Deleting virtual machine: %s", d.Id()) + + task, err := vm.PowerOff(context.TODO()) + if err != nil { + return err + } + + err = task.Wait(context.TODO()) + if err != nil { + return err + } + + task, err = vm.Destroy(context.TODO()) + if err != nil { + return err + } + + err = task.Wait(context.TODO()) + if err != nil { + return err + } + + d.SetId("") + return nil +} + +func waitForNetworkingActive(client *govmomi.Client, datacenter, name string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + dc, err := getDatacenter(client, datacenter) + if err != nil { + log.Printf("[ERROR] %#v", err) + return nil, "", err + } + finder := find.NewFinder(client.Client, true) + finder = finder.SetDatacenter(dc) + + vm, err := finder.VirtualMachine(context.TODO(), name) + if err != nil { + log.Printf("[ERROR] %#v", err) + return nil, "", err + } + + var mvm mo.VirtualMachine + collector := property.DefaultCollector(client.Client) + if err := collector.RetrieveOne(context.TODO(), vm.Reference(), []string{"summary"}, &mvm); err != nil { + log.Printf("[ERROR] %#v", err) + return nil, "", err + } + + if mvm.Summary.Guest.IpAddress != "" { + log.Printf("[DEBUG] IP address with DHCP: %v", mvm.Summary.Guest.IpAddress) + return mvm.Summary, "active", err + } else { + log.Printf("[DEBUG] Waiting 
for IP address") + return nil, "pending", err + } + } +} + +// getDatacenter gets datacenter object +func getDatacenter(c *govmomi.Client, dc string) (*object.Datacenter, error) { + finder := find.NewFinder(c.Client, true) + if dc != "" { + d, err := finder.Datacenter(context.TODO(), dc) + return d, err + } else { + d, err := finder.DefaultDatacenter(context.TODO()) + return d, err + } +} + +// addHardDisk adds a new Hard Disk to the VirtualMachine. +func addHardDisk(vm *object.VirtualMachine, size, iops int64, diskType string) error { + devices, err := vm.Device(context.TODO()) + if err != nil { + return err + } + log.Printf("[DEBUG] vm devices: %#v\n", devices) + + controller, err := devices.FindDiskController("scsi") + if err != nil { + return err + } + log.Printf("[DEBUG] disk controller: %#v\n", controller) + + disk := devices.CreateDisk(controller, "") + existing := devices.SelectByBackingInfo(disk.Backing) + log.Printf("[DEBUG] disk: %#v\n", disk) + + if len(existing) == 0 { + disk.CapacityInKB = int64(size * 1024 * 1024) + if iops != 0 { + disk.StorageIOAllocation = &types.StorageIOAllocationInfo{ + Limit: iops, + } + } + backing := disk.Backing.(*types.VirtualDiskFlatVer2BackingInfo) + + if diskType == "eager_zeroed" { + // eager zeroed thick virtual disk + backing.ThinProvisioned = types.NewBool(false) + backing.EagerlyScrub = types.NewBool(true) + } else if diskType == "thin" { + // thin provisioned virtual disk + backing.ThinProvisioned = types.NewBool(true) + } + + log.Printf("[DEBUG] addHardDisk: %#v\n", disk) + log.Printf("[DEBUG] addHardDisk: %#v\n", disk.CapacityInKB) + + return vm.AddDevice(context.TODO(), disk) + } else { + log.Printf("[DEBUG] addHardDisk: Disk already present.\n") + + return nil + } +} + +// createNetworkDevice creates VirtualDeviceConfigSpec for Network Device. +func createNetworkDevice(f *find.Finder, label, adapterType string) (*types.VirtualDeviceConfigSpec, error) { + network, err := f.Network(context.TODO(), "*"+label) + if err != nil { + return nil, err + } + + backing, err := network.EthernetCardBackingInfo(context.TODO()) + if err != nil { + return nil, err + } + + if adapterType == "vmxnet3" { + return &types.VirtualDeviceConfigSpec{ + Operation: types.VirtualDeviceConfigSpecOperationAdd, + Device: &types.VirtualVmxnet3{ + types.VirtualVmxnet{ + types.VirtualEthernetCard{ + VirtualDevice: types.VirtualDevice{ + Key: -1, + Backing: backing, + }, + AddressType: string(types.VirtualEthernetCardMacTypeGenerated), + }, + }, + }, + }, nil + } else if adapterType == "e1000" { + return &types.VirtualDeviceConfigSpec{ + Operation: types.VirtualDeviceConfigSpecOperationAdd, + Device: &types.VirtualE1000{ + types.VirtualEthernetCard{ + VirtualDevice: types.VirtualDevice{ + Key: -1, + Backing: backing, + }, + AddressType: string(types.VirtualEthernetCardMacTypeGenerated), + }, + }, + }, nil + } else { + return nil, fmt.Errorf("Invalid network adapter type.") + } +} + +// createVMRelocateSpec creates VirtualMachineRelocateSpec to set a place for a new VirtualMachine. 
+func createVMRelocateSpec(rp *object.ResourcePool, ds *object.Datastore, vm *object.VirtualMachine) (types.VirtualMachineRelocateSpec, error) { + var key int + + devices, err := vm.Device(context.TODO()) + if err != nil { + return types.VirtualMachineRelocateSpec{}, err + } + for _, d := range devices { + if devices.Type(d) == "disk" { + key = d.GetVirtualDevice().Key + } + } + + rpr := rp.Reference() + dsr := ds.Reference() + return types.VirtualMachineRelocateSpec{ + Datastore: &dsr, + Pool: &rpr, + Disk: []types.VirtualMachineRelocateSpecDiskLocator{ + types.VirtualMachineRelocateSpecDiskLocator{ + Datastore: dsr, + DiskBackingInfo: &types.VirtualDiskFlatVer2BackingInfo{ + DiskMode: "persistent", + ThinProvisioned: types.NewBool(false), + EagerlyScrub: types.NewBool(true), + }, + DiskId: key, + }, + }, + }, nil +} + +// getDatastoreObject gets datastore object. +func getDatastoreObject(client *govmomi.Client, f *object.DatacenterFolders, name string) (types.ManagedObjectReference, error) { + s := object.NewSearchIndex(client.Client) + ref, err := s.FindChild(context.TODO(), f.DatastoreFolder, name) + if err != nil { + return types.ManagedObjectReference{}, err + } + if ref == nil { + return types.ManagedObjectReference{}, fmt.Errorf("Datastore '%s' not found.", name) + } + log.Printf("[DEBUG] getDatastoreObject: reference: %#v", ref) + return ref.Reference(), nil +} + +// createStoragePlacementSpecCreate creates StoragePlacementSpec for create action. +func createStoragePlacementSpecCreate(f *object.DatacenterFolders, rp *object.ResourcePool, storagePod object.StoragePod, configSpec types.VirtualMachineConfigSpec) types.StoragePlacementSpec { + vmfr := f.VmFolder.Reference() + rpr := rp.Reference() + spr := storagePod.Reference() + + sps := types.StoragePlacementSpec{ + Type: "create", + ConfigSpec: &configSpec, + PodSelectionSpec: types.StorageDrsPodSelectionSpec{ + StoragePod: &spr, + }, + Folder: &vmfr, + ResourcePool: &rpr, + } + log.Printf("[DEBUG] findDatastore: StoragePlacementSpec: %#v\n", sps) + return sps +} + +// createStoragePlacementSpecClone creates StoragePlacementSpec for clone action. 
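+// Used when the requested datastore is actually a Storage DRS pod: the
+// resulting spec is handed to findDatastore, which asks the
+// StorageResourceManager for a placement recommendation rather than naming a
+// member datastore directly.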
+func createStoragePlacementSpecClone(c *govmomi.Client, f *object.DatacenterFolders, vm *object.VirtualMachine, rp *object.ResourcePool, storagePod object.StoragePod) types.StoragePlacementSpec {
+	vmr := vm.Reference()
+	vmfr := f.VmFolder.Reference()
+	rpr := rp.Reference()
+	spr := storagePod.Reference()
+
+	var o mo.VirtualMachine
+	err := vm.Properties(context.TODO(), vmr, []string{"datastore"}, &o)
+	if err != nil {
+		return types.StoragePlacementSpec{}
+	}
+	ds := object.NewDatastore(c.Client, o.Datastore[0])
+	log.Printf("[DEBUG] findDatastore: datastore: %#v\n", ds)
+
+	devices, err := vm.Device(context.TODO())
+	if err != nil {
+		return types.StoragePlacementSpec{}
+	}
+
+	var key int
+	for _, d := range devices.SelectByType((*types.VirtualDisk)(nil)) {
+		key = d.GetVirtualDevice().Key
+		log.Printf("[DEBUG] findDatastore: virtual devices: %#v\n", d.GetVirtualDevice())
+	}
+
+	sps := types.StoragePlacementSpec{
+		Type: "clone",
+		Vm:   &vmr,
+		PodSelectionSpec: types.StorageDrsPodSelectionSpec{
+			StoragePod: &spr,
+		},
+		CloneSpec: &types.VirtualMachineCloneSpec{
+			Location: types.VirtualMachineRelocateSpec{
+				Disk: []types.VirtualMachineRelocateSpecDiskLocator{
+					types.VirtualMachineRelocateSpecDiskLocator{
+						Datastore:       ds.Reference(),
+						DiskBackingInfo: &types.VirtualDiskFlatVer2BackingInfo{},
+						DiskId:          key,
+					},
+				},
+				Pool: &rpr,
+			},
+			PowerOn:  false,
+			Template: false,
+		},
+		CloneName: "dummy",
+		Folder:    &vmfr,
+	}
+	return sps
+}
+
+// findDatastore finds a Datastore object.
+func findDatastore(c *govmomi.Client, sps types.StoragePlacementSpec) (*object.Datastore, error) {
+	var datastore *object.Datastore
+	log.Printf("[DEBUG] findDatastore: StoragePlacementSpec: %#v\n", sps)
+
+	srm := object.NewStorageResourceManager(c.Client)
+	rds, err := srm.RecommendDatastores(context.TODO(), sps)
+	if err != nil {
+		return nil, err
+	}
+	log.Printf("[DEBUG] findDatastore: recommendDatastores: %#v\n", rds)
+
+	spa := rds.Recommendations[0].Action[0].(*types.StoragePlacementAction)
+	datastore = object.NewDatastore(c.Client, spa.Destination)
+	log.Printf("[DEBUG] findDatastore: datastore: %#v", datastore)
+
+	return datastore, nil
+}
+
+// createVirtualMachine creates a new VirtualMachine.
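+// The from-scratch path: resolve a resource pool (named pool, cluster pool,
+// or finder default), wire up e1000 NICs, pick a datastore (including
+// StoragePod placement), add a SCSI controller, create the VM, and attach the
+// requested disks thin-provisioned.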
+func (vm *virtualMachine) createVirtualMachine(c *govmomi.Client) error {
+	dc, err := getDatacenter(c, vm.datacenter)
+	if err != nil {
+		return err
+	}
+	finder := find.NewFinder(c.Client, true)
+	finder = finder.SetDatacenter(dc)
+
+	var resourcePool *object.ResourcePool
+	if vm.resourcePool == "" {
+		if vm.cluster == "" {
+			resourcePool, err = finder.DefaultResourcePool(context.TODO())
+			if err != nil {
+				return err
+			}
+		} else {
+			resourcePool, err = finder.ResourcePool(context.TODO(), "*"+vm.cluster+"/Resources")
+			if err != nil {
+				return err
+			}
+		}
+	} else {
+		resourcePool, err = finder.ResourcePool(context.TODO(), vm.resourcePool)
+		if err != nil {
+			return err
+		}
+	}
+	log.Printf("[DEBUG] resource pool: %#v", resourcePool)
+
+	dcFolders, err := dc.Folders(context.TODO())
+	if err != nil {
+		return err
+	}
+
+	// network
+	networkDevices := []types.BaseVirtualDeviceConfigSpec{}
+	for _, network := range vm.networkInterfaces {
+		// network device
+		nd, err := createNetworkDevice(finder, network.label, "e1000")
+		if err != nil {
+			return err
+		}
+		networkDevices = append(networkDevices, nd)
+	}
+
+	// make config spec
+	configSpec := types.VirtualMachineConfigSpec{
+		GuestId:           "otherLinux64Guest",
+		Name:              vm.name,
+		NumCPUs:           vm.vcpu,
+		NumCoresPerSocket: 1,
+		MemoryMB:          vm.memoryMb,
+		DeviceChange:      networkDevices,
+	}
+	log.Printf("[DEBUG] virtual machine config spec: %v", configSpec)
+
+	var datastore *object.Datastore
+	if vm.datastore == "" {
+		datastore, err = finder.DefaultDatastore(context.TODO())
+		if err != nil {
+			return err
+		}
+	} else {
+		datastore, err = finder.Datastore(context.TODO(), vm.datastore)
+		if err != nil {
+			// TODO: datastore cluster support in govmomi finder function
+			d, err := getDatastoreObject(c, dcFolders, vm.datastore)
+			if err != nil {
+				return err
+			}
+
+			if d.Type == "StoragePod" {
+				sp := object.StoragePod{
+					object.NewFolder(c.Client, d),
+				}
+				sps := createStoragePlacementSpecCreate(dcFolders, resourcePool, sp, configSpec)
+				datastore, err = findDatastore(c, sps)
+				if err != nil {
+					return err
+				}
+			} else {
+				datastore = object.NewDatastore(c.Client, d)
+			}
+		}
+	}
+
+	log.Printf("[DEBUG] datastore: %#v", datastore)
+
+	var mds mo.Datastore
+	if err = datastore.Properties(context.TODO(), datastore.Reference(), []string{"name"}, &mds); err != nil {
+		return err
+	}
+	log.Printf("[DEBUG] datastore: %#v", mds.Name)
+	scsi, err := object.SCSIControllerTypes().CreateSCSIController("scsi")
+	if err != nil {
+		log.Printf("[ERROR] %s", err)
+		return err
+	}
+
+	configSpec.DeviceChange = append(configSpec.DeviceChange, &types.VirtualDeviceConfigSpec{
+		Operation: types.VirtualDeviceConfigSpecOperationAdd,
+		Device:    scsi,
+	})
+	configSpec.Files = &types.VirtualMachineFileInfo{VmPathName: fmt.Sprintf("[%s]", mds.Name)}
+
+	task, err := dcFolders.VmFolder.CreateVM(context.TODO(), configSpec, resourcePool, nil)
+	if err != nil {
+		log.Printf("[ERROR] %s", err)
+		return err
+	}
+
+	err = task.Wait(context.TODO())
+	if err != nil {
+		log.Printf("[ERROR] %s", err)
+		return err
+	}
+
+	newVM, err := finder.VirtualMachine(context.TODO(), vm.name)
+	if err != nil {
+		return err
+	}
+	log.Printf("[DEBUG] new vm: %v", newVM)
+
+	log.Printf("[DEBUG] add hard disk: %v", vm.hardDisks)
+	for _, hd := range vm.hardDisks {
+		log.Printf("[DEBUG] add hard disk: %v", hd.size)
+		log.Printf("[DEBUG] add hard disk: %v", hd.iops)
+		err = addHardDisk(newVM, hd.size, hd.iops, "thin")
+		if err != nil {
+			return err
+		}
+	}
+	return nil
+}
+
+// deployVirtualMachine deploys a new VirtualMachine.
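+// The clone path: copy an existing template with vmxnet3 NICs and a Linux
+// guest customization spec (fixed IP or DHCP per interface), power the clone
+// on, wait for an IP address, then attach any additional disks eager-zeroed.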
+func (vm *virtualMachine) deployVirtualMachine(c *govmomi.Client) error { + dc, err := getDatacenter(c, vm.datacenter) + if err != nil { + return err + } + finder := find.NewFinder(c.Client, true) + finder = finder.SetDatacenter(dc) + + template, err := finder.VirtualMachine(context.TODO(), vm.template) + if err != nil { + return err + } + log.Printf("[DEBUG] template: %#v", template) + + var resourcePool *object.ResourcePool + if vm.resourcePool == "" { + if vm.cluster == "" { + resourcePool, err = finder.DefaultResourcePool(context.TODO()) + if err != nil { + return err + } + } else { + resourcePool, err = finder.ResourcePool(context.TODO(), "*"+vm.cluster+"/Resources") + if err != nil { + return err + } + } + } else { + resourcePool, err = finder.ResourcePool(context.TODO(), vm.resourcePool) + if err != nil { + return err + } + } + log.Printf("[DEBUG] resource pool: %#v", resourcePool) + + dcFolders, err := dc.Folders(context.TODO()) + if err != nil { + return err + } + + var datastore *object.Datastore + if vm.datastore == "" { + datastore, err = finder.DefaultDatastore(context.TODO()) + if err != nil { + return err + } + } else { + datastore, err = finder.Datastore(context.TODO(), vm.datastore) + if err != nil { + // TODO: datastore cluster support in govmomi finder function + d, err := getDatastoreObject(c, dcFolders, vm.datastore) + if err != nil { + return err + } + + if d.Type == "StoragePod" { + sp := object.StoragePod{ + object.NewFolder(c.Client, d), + } + sps := createStoragePlacementSpecClone(c, dcFolders, template, resourcePool, sp) + datastore, err = findDatastore(c, sps) + if err != nil { + return err + } + } else { + datastore = object.NewDatastore(c.Client, d) + } + } + } + log.Printf("[DEBUG] datastore: %#v", datastore) + + relocateSpec, err := createVMRelocateSpec(resourcePool, datastore, template) + if err != nil { + return err + } + log.Printf("[DEBUG] relocate spec: %v", relocateSpec) + + // network + networkDevices := []types.BaseVirtualDeviceConfigSpec{} + networkConfigs := []types.CustomizationAdapterMapping{} + for _, network := range vm.networkInterfaces { + // network device + nd, err := createNetworkDevice(finder, network.label, "vmxnet3") + if err != nil { + return err + } + networkDevices = append(networkDevices, nd) + + var ipSetting types.CustomizationIPSettings + if network.ipAddress == "" { + ipSetting = types.CustomizationIPSettings{ + Ip: &types.CustomizationDhcpIpGenerator{}, + } + } else { + log.Printf("[DEBUG] gateway: %v", vm.gateway) + log.Printf("[DEBUG] ip address: %v", network.ipAddress) + log.Printf("[DEBUG] subnet mask: %v", network.subnetMask) + ipSetting = types.CustomizationIPSettings{ + Gateway: []string{ + vm.gateway, + }, + Ip: &types.CustomizationFixedIp{ + IpAddress: network.ipAddress, + }, + SubnetMask: network.subnetMask, + } + } + + // network config + config := types.CustomizationAdapterMapping{ + Adapter: ipSetting, + } + networkConfigs = append(networkConfigs, config) + } + log.Printf("[DEBUG] network configs: %v", networkConfigs[0].Adapter) + + // make config spec + configSpec := types.VirtualMachineConfigSpec{ + NumCPUs: vm.vcpu, + NumCoresPerSocket: 1, + MemoryMB: vm.memoryMb, + DeviceChange: networkDevices, + } + log.Printf("[DEBUG] virtual machine config spec: %v", configSpec) + + // create CustomizationSpec + customSpec := types.CustomizationSpec{ + Identity: &types.CustomizationLinuxPrep{ + HostName: &types.CustomizationFixedName{ + Name: strings.Split(vm.name, ".")[0], + }, + Domain: vm.domain, + TimeZone: vm.timeZone, 
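+			// HwClockUTC: the guest's hardware clock is assumed to be
+			// kept in UTC.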
+ HwClockUTC: types.NewBool(true), + }, + GlobalIPSettings: types.CustomizationGlobalIPSettings{ + DnsSuffixList: vm.dnsSuffixes, + DnsServerList: vm.dnsServers, + }, + NicSettingMap: networkConfigs, + } + log.Printf("[DEBUG] custom spec: %v", customSpec) + + // make vm clone spec + cloneSpec := types.VirtualMachineCloneSpec{ + Location: relocateSpec, + Template: false, + Config: &configSpec, + Customization: &customSpec, + PowerOn: true, + } + log.Printf("[DEBUG] clone spec: %v", cloneSpec) + + task, err := template.Clone(context.TODO(), dcFolders.VmFolder, vm.name, cloneSpec) + if err != nil { + return err + } + + _, err = task.WaitForResult(context.TODO(), nil) + if err != nil { + return err + } + + newVM, err := finder.VirtualMachine(context.TODO(), vm.name) + if err != nil { + return err + } + log.Printf("[DEBUG] new vm: %v", newVM) + + ip, err := newVM.WaitForIP(context.TODO()) + if err != nil { + return err + } + log.Printf("[DEBUG] ip address: %v", ip) + + for i := 1; i < len(vm.hardDisks); i++ { + err = addHardDisk(newVM, vm.hardDisks[i].size, vm.hardDisks[i].iops, "eager_zeroed") + if err != nil { + return err + } + } + return nil +} diff --git a/builtin/providers/vsphere/resource_vsphere_virtual_machine_test.go b/builtin/providers/vsphere/resource_vsphere_virtual_machine_test.go new file mode 100644 index 000000000..75bc339e8 --- /dev/null +++ b/builtin/providers/vsphere/resource_vsphere_virtual_machine_test.go @@ -0,0 +1,240 @@ +package vsphere + +import ( + "fmt" + "os" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + "github.com/vmware/govmomi" + "github.com/vmware/govmomi/find" + "github.com/vmware/govmomi/object" + "golang.org/x/net/context" +) + +func TestAccVSphereVirtualMachine_basic(t *testing.T) { + var vm virtualMachine + datacenter := os.Getenv("VSPHERE_DATACENTER") + cluster := os.Getenv("VSPHERE_CLUSTER") + datastore := os.Getenv("VSPHERE_DATASTORE") + template := os.Getenv("VSPHERE_TEMPLATE") + gateway := os.Getenv("VSPHERE_NETWORK_GATEWAY") + label := os.Getenv("VSPHERE_NETWORK_LABEL") + ip_address := os.Getenv("VSPHERE_NETWORK_IP_ADDRESS") + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckVSphereVirtualMachineDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: fmt.Sprintf( + testAccCheckVSphereVirtualMachineConfig_basic, + datacenter, + cluster, + gateway, + label, + ip_address, + datastore, + template, + ), + Check: resource.ComposeTestCheckFunc( + testAccCheckVSphereVirtualMachineExists("vsphere_virtual_machine.foo", &vm), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.foo", "name", "terraform-test"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.foo", "datacenter", datacenter), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.foo", "vcpu", "2"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.foo", "memory", "4096"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.foo", "disk.#", "2"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.foo", "disk.0.datastore", datastore), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.foo", "disk.0.template", template), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.foo", "network_interface.#", "1"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.foo", "network_interface.0.label", label), + ), + }, + }, + }) +} + +func 
TestAccVSphereVirtualMachine_dhcp(t *testing.T) { + var vm virtualMachine + datacenter := os.Getenv("VSPHERE_DATACENTER") + cluster := os.Getenv("VSPHERE_CLUSTER") + datastore := os.Getenv("VSPHERE_DATASTORE") + template := os.Getenv("VSPHERE_TEMPLATE") + label := os.Getenv("VSPHERE_NETWORK_LABEL_DHCP") + password := os.Getenv("VSPHERE_VM_PASSWORD") + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckVSphereVirtualMachineDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: fmt.Sprintf( + testAccCheckVSphereVirtualMachineConfig_dhcp, + datacenter, + cluster, + label, + datastore, + template, + password, + ), + Check: resource.ComposeTestCheckFunc( + testAccCheckVSphereVirtualMachineExists("vsphere_virtual_machine.bar", &vm), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.bar", "name", "terraform-test"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.bar", "datacenter", datacenter), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.bar", "vcpu", "2"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.bar", "memory", "4096"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.bar", "disk.#", "1"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.bar", "disk.0.datastore", datastore), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.bar", "disk.0.template", template), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.bar", "network_interface.#", "1"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.bar", "network_interface.0.label", label), + ), + }, + }, + }) +} + +func testAccCheckVSphereVirtualMachineDestroy(s *terraform.State) error { + client := testAccProvider.Meta().(*govmomi.Client) + finder := find.NewFinder(client.Client, true) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "vsphere_virtual_machine" { + continue + } + + dc, err := finder.Datacenter(context.TODO(), rs.Primary.Attributes["datacenter"]) + if err != nil { + return fmt.Errorf("error %s", err) + } + + dcFolders, err := dc.Folders(context.TODO()) + if err != nil { + return fmt.Errorf("error %s", err) + } + + _, err = object.NewSearchIndex(client.Client).FindChild(context.TODO(), dcFolders.VmFolder, rs.Primary.Attributes["name"]) + if err == nil { + return fmt.Errorf("Record still exists") + } + } + + return nil +} + +func testAccCheckVSphereVirtualMachineExists(n string, vm *virtualMachine) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + client := testAccProvider.Meta().(*govmomi.Client) + finder := find.NewFinder(client.Client, true) + + dc, err := finder.Datacenter(context.TODO(), rs.Primary.Attributes["datacenter"]) + if err != nil { + return fmt.Errorf("error %s", err) + } + + dcFolders, err := dc.Folders(context.TODO()) + if err != nil { + return fmt.Errorf("error %s", err) + } + + _, err = object.NewSearchIndex(client.Client).FindChild(context.TODO(), dcFolders.VmFolder, rs.Primary.Attributes["name"]) + /* + vmRef, err := client.SearchIndex().FindChild(dcFolders.VmFolder, rs.Primary.Attributes["name"]) + if err != nil { + return fmt.Errorf("error %s", err) + } + + found := govmomi.NewVirtualMachine(client, vmRef.Reference()) + fmt.Printf("%v", found) + + if found.Name != rs.Primary.ID { + return 
fmt.Errorf("Instance not found") + } + *instance = *found + */ + + *vm = virtualMachine{ + name: rs.Primary.ID, + } + + return nil + } +} + +const testAccCheckVSphereVirtualMachineConfig_basic = ` +resource "vsphere_virtual_machine" "foo" { + name = "terraform-test" + datacenter = "%s" + cluster = "%s" + vcpu = 2 + memory = 4096 + gateway = "%s" + network_interface { + label = "%s" + ip_address = "%s" + subnet_mask = "255.255.255.0" + } + disk { + datastore = "%s" + template = "%s" + iops = 500 + } + disk { + size = 1 + iops = 500 + } +} +` + +const testAccCheckVSphereVirtualMachineConfig_dhcp = ` +resource "vsphere_virtual_machine" "bar" { + name = "terraform-test" + datacenter = "%s" + cluster = "%s" + vcpu = 2 + memory = 4096 + network_interface { + label = "%s" + } + disk { + datastore = "%s" + template = "%s" + } + + connection { + host = "${self.network_interface.0.ip_address}" + user = "root" + password = "%s" + } +} +` diff --git a/builtin/provisioners/chef/linux_provisioner_test.go b/builtin/provisioners/chef/linux_provisioner_test.go index 27a1dbba0..6c57bfef0 100644 --- a/builtin/provisioners/chef/linux_provisioner_test.go +++ b/builtin/provisioners/chef/linux_provisioner_test.go @@ -310,6 +310,8 @@ validation_client_name "validator" node_name "nodename1" + + http_proxy "http://proxy.local" ENV['http_proxy'] = "http://proxy.local" ENV['HTTP_PROXY'] = "http://proxy.local" diff --git a/builtin/provisioners/chef/resource_provisioner.go b/builtin/provisioners/chef/resource_provisioner.go index f50b1b831..50b5666ee 100644 --- a/builtin/provisioners/chef/resource_provisioner.go +++ b/builtin/provisioners/chef/resource_provisioner.go @@ -41,6 +41,12 @@ chef_server_url "{{ .ServerURL }}" validation_client_name "{{ .ValidationClientName }}" node_name "{{ .NodeName }}" +{{ if .UsePolicyfile }} +use_policyfile true +policy_group "{{ .PolicyGroup }}" +policy_name "{{ .PolicyName }}" +{{ end }} + {{ if .HTTPProxy }} http_proxy "{{ .HTTPProxy }}" ENV['http_proxy'] = "{{ .HTTPProxy }}" @@ -62,6 +68,9 @@ type Provisioner struct { Attributes interface{} `mapstructure:"attributes"` Environment string `mapstructure:"environment"` LogToFile bool `mapstructure:"log_to_file"` + UsePolicyfile bool `mapstructure:"use_policyfile"` + PolicyGroup string `mapstructure:"policy_group"` + PolicyName string `mapstructure:"policy_name"` HTTPProxy string `mapstructure:"http_proxy"` HTTPSProxy string `mapstructure:"https_proxy"` NOProxy []string `mapstructure:"no_proxy"` @@ -171,7 +180,7 @@ func (r *ResourceProvisioner) Validate(c *terraform.ResourceConfig) (ws []string if p.NodeName == "" { es = append(es, fmt.Errorf("Key not found: node_name")) } - if p.RunList == nil { + if !p.UsePolicyfile && p.RunList == nil { es = append(es, fmt.Errorf("Key not found: run_list")) } if p.ServerURL == "" { @@ -183,6 +192,12 @@ func (r *ResourceProvisioner) Validate(c *terraform.ResourceConfig) (ws []string if p.ValidationKeyPath == "" { es = append(es, fmt.Errorf("Key not found: validation_key_path")) } + if p.UsePolicyfile && p.PolicyName == "" { + es = append(es, fmt.Errorf("Policyfile enabled but key not found: policy_name")) + } + if p.UsePolicyfile && p.PolicyGroup == "" { + es = append(es, fmt.Errorf("Policyfile enabled but key not found: policy_group")) + } return ws, es } @@ -302,7 +317,14 @@ func (p *Provisioner) runChefClientFunc( confDir string) func(terraform.UIOutput, communicator.Communicator) error { return func(o terraform.UIOutput, comm communicator.Communicator) error { fb := path.Join(confDir, firstBoot) - 
cmd := fmt.Sprintf("%s -j %q -E %q", chefCmd, fb, p.Environment) + var cmd string + + // Policyfiles do not support chef environments, so don't pass the `-E` flag. + if p.UsePolicyfile { + cmd = fmt.Sprintf("%s -j %q", chefCmd, fb) + } else { + cmd = fmt.Sprintf("%s -j %q -E %q", chefCmd, fb, p.Environment) + } if p.LogToFile { if err := os.MkdirAll(logfileDir, 0755); err != nil { @@ -413,7 +435,9 @@ func (p *Provisioner) deployConfigFiles( } // Add the initial runlist to the first boot settings - fb["run_list"] = p.RunList + if !p.UsePolicyfile { + fb["run_list"] = p.RunList + } // Marshal the first boot settings to JSON d, err := json.Marshal(fb) diff --git a/builtin/provisioners/chef/windows_provisioner_test.go b/builtin/provisioners/chef/windows_provisioner_test.go index a01599a30..11e61d888 100644 --- a/builtin/provisioners/chef/windows_provisioner_test.go +++ b/builtin/provisioners/chef/windows_provisioner_test.go @@ -342,6 +342,8 @@ validation_client_name "validator" node_name "nodename1" + + http_proxy "http://proxy.local" ENV['http_proxy'] = "http://proxy.local" ENV['HTTP_PROXY'] = "http://proxy.local" diff --git a/command/apply.go b/command/apply.go index f924c65ca..0687116a8 100644 --- a/command/apply.go +++ b/command/apply.go @@ -7,8 +7,8 @@ import ( "sort" "strings" + "github.com/hashicorp/go-getter" "github.com/hashicorp/go-multierror" - "github.com/hashicorp/terraform/config/module" "github.com/hashicorp/terraform/terraform" ) @@ -39,6 +39,8 @@ func (c *ApplyCommand) Run(args []string) int { cmdFlags.BoolVar(&destroyForce, "force", false, "force") } cmdFlags.BoolVar(&refresh, "refresh", true, "refresh") + cmdFlags.IntVar( + &c.Meta.parallelism, "parallelism", DefaultParallelism, "parallelism") cmdFlags.StringVar(&c.Meta.statePath, "state", DefaultStateFilename, "path") cmdFlags.StringVar(&c.Meta.stateOutPath, "state-out", "", "path") cmdFlags.StringVar(&c.Meta.backupPath, "backup", "", "path") @@ -74,7 +76,7 @@ func (c *ApplyCommand) Run(args []string) int { if !c.Destroy && maybeInit { // Do a detect to determine if we need to do an init + apply. - if detected, err := module.Detect(configPath, pwd); err != nil { + if detected, err := getter.Detect(configPath, pwd, getter.Detectors); err != nil { c.Ui.Error(fmt.Sprintf( "Invalid path: %s", err)) return 1 @@ -94,9 +96,10 @@ func (c *ApplyCommand) Run(args []string) int { // Build the context based on the arguments given ctx, planned, err := c.Context(contextOpts{ - Destroy: c.Destroy, - Path: configPath, - StatePath: c.Meta.statePath, + Destroy: c.Destroy, + Path: configPath, + StatePath: c.Meta.statePath, + Parallelism: c.Meta.parallelism, }) if err != nil { c.Ui.Error(err.Error()) @@ -278,6 +281,9 @@ Options: -no-color If specified, output won't contain any color. + -parallelism=n Limit the number of concurrent operations. + Defaults to 10. + -refresh=true Update state prior to checking for differences. This has no effect if a plan file is given to apply. @@ -320,6 +326,9 @@ Options: -no-color If specified, output won't contain any color. + -parallelism=n Limit the number of concurrent operations. + Defaults to 10. + -refresh=true Update state prior to checking for differences. This has no effect if a plan file is given to apply. 
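
Note on the Chef change above: Policyfile support adds three provisioner keys (use_policyfile, policy_group, policy_name), renders them into client.rb through a template conditional, validates that both policy_* keys are present when the feature is enabled, and skips both the run_list first-boot key and the -E environment flag, since a Policyfile pins its own run list and does not support environments. A minimal, self-contained sketch of how the new template block renders; the template here is trimmed down and the policy values are illustrative, not provisioner defaults:

package main

import (
	"os"
	"text/template"
)

// Trimmed stand-in for the provisioner's client.rb template, keeping
// only the Policyfile block added in this change.
const clientConf = `node_name "{{ .NodeName }}"
{{ if .UsePolicyfile }}
use_policyfile true
policy_group "{{ .PolicyGroup }}"
policy_name "{{ .PolicyName }}"
{{ end }}`

func main() {
	tpl := template.Must(template.New("client.rb").Parse(clientConf))

	// With UsePolicyfile set, the three policy directives are emitted;
	// with it unset, the whole block disappears from the output.
	err := tpl.Execute(os.Stdout, map[string]interface{}{
		"NodeName":      "nodename1",
		"UsePolicyfile": true,
		"PolicyGroup":   "production", // illustrative value
		"PolicyName":    "webserver",  // illustrative value
	})
	if err != nil {
		panic(err)
	}
}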
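
The -parallelism flag added to apply here (and to plan and refresh later in this diff) caps how many graph operations Terraform runs concurrently, with DefaultParallelism = 10. The graph walker itself is not part of this diff; conceptually the cap behaves like a counting semaphore, which is what the two tests that follow measure by giving each resource a one-second provisioner. A minimal sketch under that semaphore assumption, not Terraform's actual walker:

package main

import (
	"fmt"
	"sync"
	"time"
)

// applyAll runs fn for every node, letting at most parallelism
// invocations execute at the same time.
func applyAll(nodes []string, parallelism int, fn func(string)) {
	sem := make(chan struct{}, parallelism) // counting semaphore
	var wg sync.WaitGroup
	for _, n := range nodes {
		wg.Add(1)
		go func(n string) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot
			defer func() { <-sem }() // release it
			fn(n)
		}(n)
	}
	wg.Wait()
}

func main() {
	start := time.Now()
	applyAll([]string{"foo1", "foo2"}, 1, func(string) {
		time.Sleep(time.Second) // mirrors the test provisioner's sleep
	})
	// With parallelism=1 the two one-second operations run back to back
	// (~2s); with parallelism=2 they overlap (~1s), matching the timing
	// bounds asserted in TestApply_parallelism1 and TestApply_parallelism2.
	fmt.Println(time.Since(start).Round(100 * time.Millisecond))
}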
diff --git a/command/apply_test.go b/command/apply_test.go index 052bd592c..c5379c4f3 100644 --- a/command/apply_test.go +++ b/command/apply_test.go @@ -58,6 +58,82 @@ func TestApply(t *testing.T) { } } +func TestApply_parallelism1(t *testing.T) { + statePath := testTempFile(t) + + ui := new(cli.MockUi) + p := testProvider() + pr := new(terraform.MockResourceProvisioner) + + pr.ApplyFn = func(*terraform.InstanceState, *terraform.ResourceConfig) error { + time.Sleep(time.Second) + return nil + } + + args := []string{ + "-state", statePath, + "-parallelism=1", + testFixturePath("parallelism"), + } + + c := &ApplyCommand{ + Meta: Meta{ + ContextOpts: testCtxConfigWithShell(p, pr), + Ui: ui, + }, + } + + start := time.Now() + if code := c.Run(args); code != 0 { + t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String()) + } + elapsed := time.Since(start).Seconds() + + // This test should take exactly two seconds, plus some minor amount of execution time. + if elapsed < 2 || elapsed > 2.2 { + t.Fatalf("bad: %f\n\n%s", elapsed, ui.ErrorWriter.String()) + } + +} + +func TestApply_parallelism2(t *testing.T) { + statePath := testTempFile(t) + + ui := new(cli.MockUi) + p := testProvider() + pr := new(terraform.MockResourceProvisioner) + + pr.ApplyFn = func(*terraform.InstanceState, *terraform.ResourceConfig) error { + time.Sleep(time.Second) + return nil + } + + args := []string{ + "-state", statePath, + "-parallelism=2", + testFixturePath("parallelism"), + } + + c := &ApplyCommand{ + Meta: Meta{ + ContextOpts: testCtxConfigWithShell(p, pr), + Ui: ui, + }, + } + + start := time.Now() + if code := c.Run(args); code != 0 { + t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String()) + } + elapsed := time.Since(start).Seconds() + + // This test should take exactly one second, plus some minor amount of execution time. + if elapsed < 1 || elapsed > 1.2 { + t.Fatalf("bad: %f\n\n%s", elapsed, ui.ErrorWriter.String()) + } + +} + func TestApply_configInvalid(t *testing.T) { p := testProvider() ui := new(cli.MockUi) diff --git a/command/command.go b/command/command.go index c9a87230b..80e82e78c 100644 --- a/command/command.go +++ b/command/command.go @@ -26,6 +26,10 @@ const DefaultBackupExtension = ".backup" // by default. const DefaultDataDirectory = ".terraform" +// DefaultParallelism is the limit Terraform places on total parallel +// operations as it walks the dependency graph. 
+const DefaultParallelism = 10 + func validateContext(ctx *terraform.Context, ui cli.Ui) bool { if ws, es := ctx.Validate(); len(ws) > 0 || len(es) > 0 { ui.Output( diff --git a/command/command_test.go b/command/command_test.go index 2544cf531..954579c3d 100644 --- a/command/command_test.go +++ b/command/command_test.go @@ -7,6 +7,7 @@ import ( "strings" "testing" + "github.com/hashicorp/go-getter" "github.com/hashicorp/terraform/config/module" "github.com/hashicorp/terraform/terraform" ) @@ -52,13 +53,28 @@ func testCtxConfig(p terraform.ResourceProvider) *terraform.ContextOpts { } } +func testCtxConfigWithShell(p terraform.ResourceProvider, pr terraform.ResourceProvisioner) *terraform.ContextOpts { + return &terraform.ContextOpts{ + Providers: map[string]terraform.ResourceProviderFactory{ + "test": func() (terraform.ResourceProvider, error) { + return p, nil + }, + }, + Provisioners: map[string]terraform.ResourceProvisionerFactory{ + "shell": func() (terraform.ResourceProvisioner, error) { + return pr, nil + }, + }, + } +} + func testModule(t *testing.T, name string) *module.Tree { mod, err := module.NewTreeModule("", filepath.Join(fixtureDir, name)) if err != nil { t.Fatalf("err: %s", err) } - s := &module.FolderStorage{StorageDir: tempDir(t)} + s := &getter.FolderStorage{StorageDir: tempDir(t)} if err := mod.Load(s, module.GetModeGet); err != nil { t.Fatalf("err: %s", err) } diff --git a/command/flag_kv_test.go b/command/flag_kv_test.go index b17266cc6..a134a1692 100644 --- a/command/flag_kv_test.go +++ b/command/flag_kv_test.go @@ -45,7 +45,7 @@ func TestFlagKV(t *testing.T) { for _, tc := range cases { f := new(FlagKV) err := f.Set(tc.Input) - if (err != nil) != tc.Error { + if err != nil != tc.Error { t.Fatalf("bad error. Input: %#v", tc.Input) } @@ -95,7 +95,7 @@ foo = "bar" f := new(FlagKVFile) err := f.Set(path) - if (err != nil) != tc.Error { + if err != nil != tc.Error { t.Fatalf("bad error. Input: %#v", tc.Input) } diff --git a/command/format_plan.go b/command/format_plan.go index 66df5f8c2..daf3f60aa 100644 --- a/command/format_plan.go +++ b/command/format_plan.go @@ -131,7 +131,7 @@ func formatPlanModuleExpand( newResource := "" if attrDiff.RequiresNew && rdiff.Destroy { - newResource = " (forces new resource)" + newResource = opts.Color.Color(" [red](forces new resource)") } buf.WriteString(fmt.Sprintf( diff --git a/command/init.go b/command/init.go index fb842d08d..1b92c0806 100644 --- a/command/init.go +++ b/command/init.go @@ -6,6 +6,7 @@ import ( "os" "strings" + "github.com/hashicorp/go-getter" "github.com/hashicorp/terraform/config" "github.com/hashicorp/terraform/config/module" "github.com/hashicorp/terraform/terraform" @@ -75,7 +76,7 @@ func (c *InitCommand) Run(args []string) int { } // Detect - source, err = module.Detect(source, pwd) + source, err = getter.Detect(source, pwd, getter.Detectors) if err != nil { c.Ui.Error(fmt.Sprintf( "Error with module source: %s", err)) diff --git a/command/meta.go b/command/meta.go index 4c1c09afe..3a12de02f 100644 --- a/command/meta.go +++ b/command/meta.go @@ -9,6 +9,7 @@ import ( "path/filepath" "strconv" + "github.com/hashicorp/go-getter" "github.com/hashicorp/terraform/config/module" "github.com/hashicorp/terraform/state" "github.com/hashicorp/terraform/terraform" @@ -59,9 +60,13 @@ type Meta struct { // // backupPath is used to backup the state file before writing a modified // version. 
It defaults to stateOutPath + DefaultBackupExtension
+	//
+	// parallelism is used to control the number of concurrent operations
+	// allowed when walking the graph
 	statePath    string
 	stateOutPath string
 	backupPath   string
+	parallelism  int
 }
 
 // initStatePaths is used to initialize the default values for
@@ -151,6 +156,7 @@ func (m *Meta) Context(copts contextOpts) (*terraform.Context, bool, error) {
 	}
 
 	opts.Module = mod
+	opts.Parallelism = copts.Parallelism
 	opts.State = state.State()
 	ctx := terraform.NewContext(opts)
 	return ctx, false, nil
@@ -325,9 +331,9 @@ func (m *Meta) flagSet(n string) *flag.FlagSet {
 
-// moduleStorage returns the module.Storage implementation used to store
+// moduleStorage returns the getter.Storage implementation used to store
 // modules for commands.
-func (m *Meta) moduleStorage(root string) module.Storage {
+func (m *Meta) moduleStorage(root string) getter.Storage {
 	return &uiModuleStorage{
-		Storage: &module.FolderStorage{
+		Storage: &getter.FolderStorage{
 			StorageDir: filepath.Join(root, "modules"),
 		},
 		Ui: m.Ui,
@@ -430,4 +436,7 @@ type contextOpts struct {
 
 	// Set to true when running a destroy plan/apply.
 	Destroy bool
+
+	// Number of concurrent operations allowed
+	Parallelism int
 }
diff --git a/command/module_storage.go b/command/module_storage.go
index e17786a80..5bb832897 100644
--- a/command/module_storage.go
+++ b/command/module_storage.go
@@ -3,14 +3,14 @@ package command
 import (
 	"fmt"
 
-	"github.com/hashicorp/terraform/config/module"
+	"github.com/hashicorp/go-getter"
 	"github.com/mitchellh/cli"
 )
 
-// uiModuleStorage implements module.Storage and is just a proxy to output
+// uiModuleStorage implements getter.Storage and is just a proxy to output
 // to the UI any Get operations.
 type uiModuleStorage struct {
-	Storage module.Storage
+	Storage getter.Storage
 	Ui      cli.Ui
 }
diff --git a/command/module_storage_test.go b/command/module_storage_test.go
index b77c2b5f7..97a5ed7ae 100644
--- a/command/module_storage_test.go
+++ b/command/module_storage_test.go
@@ -3,9 +3,9 @@ package command
 import (
 	"testing"
 
-	"github.com/hashicorp/terraform/config/module"
+	"github.com/hashicorp/go-getter"
 )
 
 func TestUiModuleStorage_impl(t *testing.T) {
-	var _ module.Storage = new(uiModuleStorage)
+	var _ getter.Storage = new(uiModuleStorage)
 }
diff --git a/command/plan.go b/command/plan.go
index 15c2b505f..cd1aeaec6 100644
--- a/command/plan.go
+++ b/command/plan.go
@@ -27,6 +27,8 @@ func (c *PlanCommand) Run(args []string) int {
 	cmdFlags.BoolVar(&refresh, "refresh", true, "refresh")
 	c.addModuleDepthFlag(cmdFlags, &moduleDepth)
 	cmdFlags.StringVar(&outPath, "out", "", "path")
+	cmdFlags.IntVar(
+		&c.Meta.parallelism, "parallelism", DefaultParallelism, "parallelism")
 	cmdFlags.StringVar(&c.Meta.statePath, "state", DefaultStateFilename, "path")
 	cmdFlags.StringVar(&c.Meta.backupPath, "backup", "", "path")
 	cmdFlags.BoolVar(&detailed, "detailed-exitcode", false, "detailed-exitcode")
@@ -57,9 +59,10 @@ func (c *PlanCommand) Run(args []string) int {
 	c.Meta.extraHooks = []terraform.Hook{countHook}
 
 	ctx, _, err := c.Context(contextOpts{
-		Destroy:   destroy,
-		Path:      path,
-		StatePath: c.Meta.statePath,
+		Destroy:     destroy,
+		Path:        path,
+		StatePath:   c.Meta.statePath,
+		Parallelism: c.Meta.parallelism,
 	})
 	if err != nil {
 		c.Ui.Error(err.Error())
@@ -183,6 +186,8 @@ Options:
 
 	-out=path           Write a plan file to the given path.  This can be
 	                    used as input to the "apply" command.
 
+	-parallelism=n      Limit the number of concurrent operations. Defaults to 10.
+
 	-refresh=true       Update state prior to checking for differences.
-state=statefile Path to a Terraform state file to use to look diff --git a/command/refresh.go b/command/refresh.go index ee3cd7007..99190bf87 100644 --- a/command/refresh.go +++ b/command/refresh.go @@ -18,6 +18,7 @@ func (c *RefreshCommand) Run(args []string) int { cmdFlags := c.Meta.flagSet("refresh") cmdFlags.StringVar(&c.Meta.statePath, "state", DefaultStateFilename, "path") + cmdFlags.IntVar(&c.Meta.parallelism, "parallelism", 0, "parallelism") cmdFlags.StringVar(&c.Meta.stateOutPath, "state-out", "", "path") cmdFlags.StringVar(&c.Meta.backupPath, "backup", "", "path") cmdFlags.Usage = func() { c.Ui.Error(c.Help()) } @@ -78,8 +79,9 @@ func (c *RefreshCommand) Run(args []string) int { // Build the context based on the arguments given ctx, _, err := c.Context(contextOpts{ - Path: configPath, - StatePath: c.Meta.statePath, + Path: configPath, + StatePath: c.Meta.statePath, + Parallelism: c.Meta.parallelism, }) if err != nil { c.Ui.Error(err.Error()) diff --git a/command/remote_config.go b/command/remote_config.go index 95973dfdb..d7e73a5d8 100644 --- a/command/remote_config.go +++ b/command/remote_config.go @@ -338,7 +338,7 @@ func (c *RemoteConfigCommand) enableRemoteState() int { func (c *RemoteConfigCommand) Help() string { helpText := ` -Usage: terraform remote [options] +Usage: terraform remote config [options] Configures Terraform to use a remote state server. This allows state to be pulled down when necessary and then pushed to the server when @@ -348,7 +348,8 @@ Usage: terraform remote [options] Options: -backend=Atlas Specifies the type of remote backend. Must be one - of Atlas, Consul, or HTTP. Defaults to Atlas. + of Atlas, Consul, Etcd, HTTP, S3, or Swift. Defaults + to Atlas. -backend-config="k=v" Specifies configuration for the remote storage backend. This can be specified multiple times. diff --git a/command/test-fixtures/parallelism/main.tf b/command/test-fixtures/parallelism/main.tf new file mode 100644 index 000000000..7708209c1 --- /dev/null +++ b/command/test-fixtures/parallelism/main.tf @@ -0,0 +1,13 @@ +resource "test_instance" "foo1" { + ami = "bar" + + // shell has been configured to sleep for one second + provisioner "shell" {} +} + +resource "test_instance" "foo2" { + ami = "bar" + + // shell has been configured to sleep for one second + provisioner "shell" {} +} diff --git a/communicator/winrm/provisioner.go b/communicator/winrm/provisioner.go index 59c0ba7dd..d1562998c 100644 --- a/communicator/winrm/provisioner.go +++ b/communicator/winrm/provisioner.go @@ -99,7 +99,7 @@ func safeDuration(dur string, defaultDur time.Duration) time.Duration { func formatDuration(duration time.Duration) string { h := int(duration.Hours()) - m := int(duration.Minutes()) - (h * 60) + m := int(duration.Minutes()) - h*60 s := int(duration.Seconds()) - (h*3600 + m*60) res := "PT" diff --git a/config.go b/config.go index 648223888..c9b2a7f75 100644 --- a/config.go +++ b/config.go @@ -12,7 +12,7 @@ import ( "github.com/hashicorp/hcl" "github.com/hashicorp/terraform/plugin" "github.com/hashicorp/terraform/terraform" - "github.com/mitchellh/osext" + "github.com/kardianos/osext" ) // Config is the structure of the configuration for the Terraform CLI. 
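
Two of the smaller changes above are purely mechanical: the CLI's osext import moves from the mitchellh fork to its upstream home at kardianos/osext, and the winrm formatDuration tweak only drops a redundant pair of parentheses, since * binds tighter than - in Go. A standalone check that the minutes arithmetic is unchanged (the real function goes on to assemble an ISO 8601 style PT-prefixed duration string from these components):

package main

import (
	"fmt"
	"time"
)

func main() {
	d := 90*time.Minute + 30*time.Second // 1h30m30s

	h := int(d.Hours())
	withParens := int(d.Minutes()) - (h * 60) // old spelling
	without := int(d.Minutes()) - h*60        // new spelling, same precedence
	s := int(d.Seconds()) - (h*3600 + without*60)

	fmt.Println(withParens == without)         // true
	fmt.Printf("PT%dH%dM%dS\n", h, without, s) // PT1H30M30S
}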
diff --git a/config/append_test.go b/config/append_test.go index adeb7835b..8d6258ecd 100644 --- a/config/append_test.go +++ b/config/append_test.go @@ -91,7 +91,7 @@ func TestAppend(t *testing.T) { for i, tc := range cases { actual, err := Append(tc.c1, tc.c2) - if (err != nil) != tc.err { + if err != nil != tc.err { t.Fatalf("%d: error fail", i) } diff --git a/config/config.go b/config/config.go index 811b77ec7..d31777f6e 100644 --- a/config/config.go +++ b/config/config.go @@ -8,10 +8,10 @@ import ( "strconv" "strings" + "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform/config/lang" "github.com/hashicorp/terraform/config/lang/ast" "github.com/hashicorp/terraform/flatmap" - "github.com/hashicorp/terraform/helper/multierror" "github.com/mitchellh/mapstructure" "github.com/mitchellh/reflectwalk" ) @@ -84,8 +84,9 @@ type Resource struct { // ResourceLifecycle is used to store the lifecycle tuning parameters // to allow customized behavior type ResourceLifecycle struct { - CreateBeforeDestroy bool `mapstructure:"create_before_destroy"` - PreventDestroy bool `mapstructure:"prevent_destroy"` + CreateBeforeDestroy bool `mapstructure:"create_before_destroy"` + PreventDestroy bool `mapstructure:"prevent_destroy"` + IgnoreChanges []string `mapstructure:"ignore_changes"` } // Provisioner is a configured provisioner step on a resource. diff --git a/config/interpolate_funcs.go b/config/interpolate_funcs.go index bbe2b8434..e98ade2f0 100644 --- a/config/interpolate_funcs.go +++ b/config/interpolate_funcs.go @@ -2,14 +2,17 @@ package config import ( "bytes" + "encoding/base64" "errors" "fmt" "io/ioutil" + "net" "regexp" "sort" "strconv" "strings" + "github.com/apparentlymart/go-cidr/cidr" "github.com/hashicorp/terraform/config/lang/ast" "github.com/mitchellh/go-homedir" ) @@ -19,16 +22,126 @@ var Funcs map[string]ast.Function func init() { Funcs = map[string]ast.Function{ - "concat": interpolationFuncConcat(), - "element": interpolationFuncElement(), - "file": interpolationFuncFile(), - "format": interpolationFuncFormat(), - "formatlist": interpolationFuncFormatList(), - "index": interpolationFuncIndex(), - "join": interpolationFuncJoin(), - "length": interpolationFuncLength(), - "replace": interpolationFuncReplace(), - "split": interpolationFuncSplit(), + "cidrhost": interpolationFuncCidrHost(), + "cidrnetmask": interpolationFuncCidrNetmask(), + "cidrsubnet": interpolationFuncCidrSubnet(), + "compact": interpolationFuncCompact(), + "concat": interpolationFuncConcat(), + "element": interpolationFuncElement(), + "file": interpolationFuncFile(), + "format": interpolationFuncFormat(), + "formatlist": interpolationFuncFormatList(), + "index": interpolationFuncIndex(), + "join": interpolationFuncJoin(), + "length": interpolationFuncLength(), + "lower": interpolationFuncLower(), + "replace": interpolationFuncReplace(), + "split": interpolationFuncSplit(), + "base64encode": interpolationFuncBase64Encode(), + "base64decode": interpolationFuncBase64Decode(), + "upper": interpolationFuncUpper(), + } +} + +// interpolationFuncCompact strips a list of multi-variable values +// (e.g. as returned by "split") of any empty strings. 
+func interpolationFuncCompact() ast.Function { + return ast.Function{ + ArgTypes: []ast.Type{ast.TypeString}, + ReturnType: ast.TypeString, + Variadic: false, + Callback: func(args []interface{}) (interface{}, error) { + if !IsStringList(args[0].(string)) { + return args[0].(string), nil + } + return StringList(args[0].(string)).Compact().String(), nil + }, + } +} + +// interpolationFuncCidrHost implements the "cidrhost" function that +// fills in the host part of a CIDR range address to create a single +// host address +func interpolationFuncCidrHost() ast.Function { + return ast.Function{ + ArgTypes: []ast.Type{ + ast.TypeString, // starting CIDR mask + ast.TypeInt, // host number to insert + }, + ReturnType: ast.TypeString, + Variadic: false, + Callback: func(args []interface{}) (interface{}, error) { + hostNum := args[1].(int) + _, network, err := net.ParseCIDR(args[0].(string)) + if err != nil { + return nil, fmt.Errorf("invalid CIDR expression: %s", err) + } + + ip, err := cidr.Host(network, hostNum) + if err != nil { + return nil, err + } + + return ip.String(), nil + }, + } +} + +// interpolationFuncCidrNetmask implements the "cidrnetmask" function +// that returns the subnet mask in IP address notation. +func interpolationFuncCidrNetmask() ast.Function { + return ast.Function{ + ArgTypes: []ast.Type{ + ast.TypeString, // CIDR mask + }, + ReturnType: ast.TypeString, + Variadic: false, + Callback: func(args []interface{}) (interface{}, error) { + _, network, err := net.ParseCIDR(args[0].(string)) + if err != nil { + return nil, fmt.Errorf("invalid CIDR expression: %s", err) + } + + return net.IP(network.Mask).String(), nil + }, + } +} + +// interpolationFuncCidrSubnet implements the "cidrsubnet" function that +// adds an additional subnet of the given length onto an existing +// IP block expressed in CIDR notation. +func interpolationFuncCidrSubnet() ast.Function { + return ast.Function{ + ArgTypes: []ast.Type{ + ast.TypeString, // starting CIDR mask + ast.TypeInt, // number of bits to extend the prefix + ast.TypeInt, // network number to append to the prefix + }, + ReturnType: ast.TypeString, + Variadic: false, + Callback: func(args []interface{}) (interface{}, error) { + extraBits := args[1].(int) + subnetNum := args[2].(int) + _, network, err := net.ParseCIDR(args[0].(string)) + if err != nil { + return nil, fmt.Errorf("invalid CIDR expression: %s", err) + } + + // For portability with 32-bit systems where the subnet number + // will be a 32-bit int, we only allow extension of 32 bits in + // one call even if we're running on a 64-bit machine. + // (Of course, this is significant only for IPv6.) + if extraBits > 32 { + return nil, fmt.Errorf("may not extend prefix by more than 32 bits") + } + + newNetwork, err := cidr.Subnet(network, extraBits, subnetNum) + if err != nil { + return nil, err + } + + return newNetwork.String(), nil + }, } } @@ -392,3 +505,59 @@ func interpolationFuncValues(vs map[string]ast.Variable) ast.Function { }, } } + +// interpolationFuncBase64Encode implements the "base64encode" function that +// allows Base64 encoding. +func interpolationFuncBase64Encode() ast.Function { + return ast.Function{ + ArgTypes: []ast.Type{ast.TypeString}, + ReturnType: ast.TypeString, + Callback: func(args []interface{}) (interface{}, error) { + s := args[0].(string) + return base64.StdEncoding.EncodeToString([]byte(s)), nil + }, + } +} + +// interpolationFuncBase64Decode implements the "base64decode" function that +// allows Base64 decoding. 
+func interpolationFuncBase64Decode() ast.Function { + return ast.Function{ + ArgTypes: []ast.Type{ast.TypeString}, + ReturnType: ast.TypeString, + Callback: func(args []interface{}) (interface{}, error) { + s := args[0].(string) + sDec, err := base64.StdEncoding.DecodeString(s) + if err != nil { + return "", fmt.Errorf("failed to decode base64 data '%s'", s) + } + return string(sDec), nil + }, + } +} + +// interpolationFuncLower implements the "lower" function that does +// string lower casing. +func interpolationFuncLower() ast.Function { + return ast.Function{ + ArgTypes: []ast.Type{ast.TypeString}, + ReturnType: ast.TypeString, + Callback: func(args []interface{}) (interface{}, error) { + toLower := args[0].(string) + return strings.ToLower(toLower), nil + }, + } +} + +// interpolationFuncUpper implements the "upper" function that does +// string upper casing. +func interpolationFuncUpper() ast.Function { + return ast.Function{ + ArgTypes: []ast.Type{ast.TypeString}, + ReturnType: ast.TypeString, + Callback: func(args []interface{}) (interface{}, error) { + toUpper := args[0].(string) + return strings.ToUpper(toUpper), nil + }, + } +} diff --git a/config/interpolate_funcs_test.go b/config/interpolate_funcs_test.go index 05f84c201..7b7311fd4 100644 --- a/config/interpolate_funcs_test.go +++ b/config/interpolate_funcs_test.go @@ -11,6 +11,142 @@ import ( "github.com/hashicorp/terraform/config/lang/ast" ) +func TestInterpolateFuncCompact(t *testing.T) { + testFunction(t, testFunctionConfig{ + Cases: []testFunctionCase{ + // empty string within array + { + `${compact(split(",", "a,,b"))}`, + NewStringList([]string{"a", "b"}).String(), + false, + }, + + // empty string at the end of array + { + `${compact(split(",", "a,b,"))}`, + NewStringList([]string{"a", "b"}).String(), + false, + }, + + // single empty string + { + `${compact(split(",", ""))}`, + NewStringList([]string{}).String(), + false, + }, + }, + }) +} + +func TestInterpolateFuncCidrHost(t *testing.T) { + testFunction(t, testFunctionConfig{ + Cases: []testFunctionCase{ + { + `${cidrhost("192.168.1.0/24", 5)}`, + "192.168.1.5", + false, + }, + { + `${cidrhost("192.168.1.0/30", 255)}`, + nil, + true, // 255 doesn't fit in two bits + }, + { + `${cidrhost("not-a-cidr", 6)}`, + nil, + true, // not a valid CIDR mask + }, + { + `${cidrhost("10.256.0.0/8", 6)}`, + nil, + true, // can't have an octet >255 + }, + }, + }) +} + +func TestInterpolateFuncCidrNetmask(t *testing.T) { + testFunction(t, testFunctionConfig{ + Cases: []testFunctionCase{ + { + `${cidrnetmask("192.168.1.0/24")}`, + "255.255.255.0", + false, + }, + { + `${cidrnetmask("192.168.1.0/32")}`, + "255.255.255.255", + false, + }, + { + `${cidrnetmask("0.0.0.0/0")}`, + "0.0.0.0", + false, + }, + { + // This doesn't really make sense for IPv6 networks + // but it ought to do something sensible anyway. 
+ `${cidrnetmask("1::/64")}`, + "ffff:ffff:ffff:ffff::", + false, + }, + { + `${cidrnetmask("not-a-cidr")}`, + nil, + true, // not a valid CIDR mask + }, + { + `${cidrnetmask("10.256.0.0/8")}`, + nil, + true, // can't have an octet >255 + }, + }, + }) +} + +func TestInterpolateFuncCidrSubnet(t *testing.T) { + testFunction(t, testFunctionConfig{ + Cases: []testFunctionCase{ + { + `${cidrsubnet("192.168.2.0/20", 4, 6)}`, + "192.168.6.0/24", + false, + }, + { + `${cidrsubnet("fe80::/48", 16, 6)}`, + "fe80:0:0:6::/64", + false, + }, + { + // IPv4 address encoded in IPv6 syntax gets normalized + `${cidrsubnet("::ffff:192.168.0.0/112", 8, 6)}`, + "192.168.6.0/24", + false, + }, + { + `${cidrsubnet("192.168.0.0/30", 4, 6)}`, + nil, + true, // not enough bits left + }, + { + `${cidrsubnet("192.168.0.0/16", 2, 16)}`, + nil, + true, // can't encode 16 in 2 bits + }, + { + `${cidrsubnet("not-a-cidr", 4, 6)}`, + nil, + true, // not a valid CIDR mask + }, + { + `${cidrsubnet("10.256.0.0/8", 4, 6)}`, + nil, + true, // can't have an octet >255 + }, + }, + }) +} + func TestInterpolateFuncDeprecatedConcat(t *testing.T) { testFunction(t, testFunctionConfig{ Cases: []testFunctionCase{ @@ -584,6 +720,87 @@ func TestInterpolateFuncElement(t *testing.T) { }) } +func TestInterpolateFuncBase64Encode(t *testing.T) { + testFunction(t, testFunctionConfig{ + Cases: []testFunctionCase{ + // Regular base64 encoding + { + `${base64encode("abc123!?$*&()'-=@~")}`, + "YWJjMTIzIT8kKiYoKSctPUB+", + false, + }, + }, + }) +} + +func TestInterpolateFuncBase64Decode(t *testing.T) { + testFunction(t, testFunctionConfig{ + Cases: []testFunctionCase{ + // Regular base64 decoding + { + `${base64decode("YWJjMTIzIT8kKiYoKSctPUB+")}`, + "abc123!?$*&()'-=@~", + false, + }, + + // Invalid base64 data decoding + { + `${base64decode("this-is-an-invalid-base64-data")}`, + nil, + true, + }, + }, + }) +} + +func TestInterpolateFuncLower(t *testing.T) { + testFunction(t, testFunctionConfig{ + Cases: []testFunctionCase{ + { + `${lower("HELLO")}`, + "hello", + false, + }, + + { + `${lower("")}`, + "", + false, + }, + + { + `${lower()}`, + nil, + true, + }, + }, + }) +} + +func TestInterpolateFuncUpper(t *testing.T) { + testFunction(t, testFunctionConfig{ + Cases: []testFunctionCase{ + { + `${upper("hello")}`, + "HELLO", + false, + }, + + { + `${upper("")}`, + "", + false, + }, + + { + `${upper()}`, + nil, + true, + }, + }, + }) +} + type testFunctionConfig struct { Cases []testFunctionCase Vars map[string]ast.Variable @@ -603,7 +820,7 @@ func testFunction(t *testing.T, config testFunctionConfig) { } out, _, err := lang.Eval(ast, langEvalConfig(config.Vars)) - if (err != nil) != tc.Error { + if err != nil != tc.Error { t.Fatalf("Case #%d:\ninput: %#v\nerr: %s", i, tc.Input, err) } diff --git a/config/interpolate_test.go b/config/interpolate_test.go index 69a6ca229..3328571cc 100644 --- a/config/interpolate_test.go +++ b/config/interpolate_test.go @@ -66,7 +66,7 @@ func TestNewInterpolatedVariable(t *testing.T) { for i, tc := range cases { actual, err := NewInterpolatedVariable(tc.Input) - if (err != nil) != tc.Error { + if err != nil != tc.Error { t.Fatalf("%d. 
Error: %s", i, err) } if !reflect.DeepEqual(actual, tc.Result) { diff --git a/config/lang/check_identifier_test.go b/config/lang/check_identifier_test.go index 1ed52580e..fe76be1d4 100644 --- a/config/lang/check_identifier_test.go +++ b/config/lang/check_identifier_test.go @@ -134,7 +134,7 @@ func TestIdentifierCheck(t *testing.T) { visitor := &IdentifierCheck{Scope: tc.Scope} err = visitor.Visit(node) - if (err != nil) != tc.Error { + if err != nil != tc.Error { t.Fatalf("Error: %s\n\nInput: %s", err, tc.Input) } } diff --git a/config/lang/check_types_test.go b/config/lang/check_types_test.go index eb108044e..6087f98d5 100644 --- a/config/lang/check_types_test.go +++ b/config/lang/check_types_test.go @@ -169,7 +169,7 @@ func TestTypeCheck(t *testing.T) { visitor := &TypeCheck{Scope: tc.Scope} err = visitor.Visit(node) - if (err != nil) != tc.Error { + if err != nil != tc.Error { t.Fatalf("Error: %s\n\nInput: %s", err, tc.Input) } } @@ -247,7 +247,7 @@ func TestTypeCheck_implicit(t *testing.T) { // Do the first pass... visitor := &TypeCheck{Scope: tc.Scope, Implicit: implicitMap} err = visitor.Visit(node) - if (err != nil) != tc.Error { + if err != nil != tc.Error { t.Fatalf("Error: %s\n\nInput: %s", err, tc.Input) } if err != nil { diff --git a/config/lang/eval_test.go b/config/lang/eval_test.go index 44f25d6fd..122f44d1f 100644 --- a/config/lang/eval_test.go +++ b/config/lang/eval_test.go @@ -260,7 +260,7 @@ func TestEval(t *testing.T) { } out, outType, err := Eval(node, &EvalConfig{GlobalScope: tc.Scope}) - if (err != nil) != tc.Error { + if err != nil != tc.Error { t.Fatalf("Error: %s\n\nInput: %s", err, tc.Input) } if outType != tc.ResultType { diff --git a/config/lang/parse_test.go b/config/lang/parse_test.go index 8d705dccb..dc75424bc 100644 --- a/config/lang/parse_test.go +++ b/config/lang/parse_test.go @@ -353,7 +353,7 @@ func TestParse(t *testing.T) { for _, tc := range cases { actual, err := Parse(tc.Input) - if (err != nil) != tc.Error { + if err != nil != tc.Error { t.Fatalf("Error: %s\n\nInput: %s", err, tc.Input) } if !reflect.DeepEqual(actual, tc.Result) { diff --git a/config/lang/y.go b/config/lang/y.go index e7dd185ae..fd0693f15 100644 --- a/config/lang/y.go +++ b/config/lang/y.go @@ -30,7 +30,10 @@ const INTEGER = 57355 const FLOAT = 57356 const STRING = 57357 -var parserToknames = []string{ +var parserToknames = [...]string{ + "$end", + "error", + "$unk", "PROGRAM_BRACKET_LEFT", "PROGRAM_BRACKET_RIGHT", "PROGRAM_STRING_START", @@ -44,7 +47,7 @@ var parserToknames = []string{ "FLOAT", "STRING", } -var parserStatenames = []string{} +var parserStatenames = [...]string{} const parserEofCode = 1 const parserErrCode = 2 @@ -53,7 +56,7 @@ const parserMaxDepth = 200 //line lang.y:165 //line yacctab:1 -var parserExca = []int{ +var parserExca = [...]int{ -1, 1, 1, -1, -2, 0, @@ -67,75 +70,103 @@ var parserStates []string const parserLast = 30 -var parserAct = []int{ +var parserAct = [...]int{ 9, 20, 16, 16, 7, 7, 3, 18, 10, 8, 1, 17, 14, 12, 13, 6, 6, 19, 8, 22, 15, 23, 24, 11, 2, 25, 16, 21, 4, 5, } -var parserPact = []int{ +var parserPact = [...]int{ 1, -1000, 1, -1000, -1000, -1000, -1000, 0, -1000, 15, 0, 1, -1000, -1000, -1, -1000, 0, -8, 0, -1000, -1000, 12, -9, -1000, 0, -9, } -var parserPgo = []int{ +var parserPgo = [...]int{ 0, 0, 29, 28, 23, 6, 27, 10, } -var parserR1 = []int{ +var parserR1 = [...]int{ 0, 7, 7, 4, 4, 5, 5, 2, 1, 1, 1, 1, 1, 1, 1, 6, 6, 6, 3, } -var parserR2 = []int{ +var parserR2 = [...]int{ 0, 0, 1, 1, 2, 1, 1, 3, 3, 1, 1, 1, 3, 1, 4, 0, 3, 1, 1, } 
-var parserChk = []int{ +var parserChk = [...]int{ -1000, -7, -4, -5, -3, -2, 15, 4, -5, -1, 8, -4, 13, 14, 12, 5, 11, -1, 8, -1, 9, -6, -1, 9, 10, -1, } -var parserDef = []int{ +var parserDef = [...]int{ 1, -2, 2, 3, 5, 6, 18, 0, 4, 0, 0, 9, 10, 11, 13, 7, 0, 0, 15, 12, 8, 0, 17, 14, 0, 16, } -var parserTok1 = []int{ +var parserTok1 = [...]int{ 1, } -var parserTok2 = []int{ +var parserTok2 = [...]int{ 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, } -var parserTok3 = []int{ +var parserTok3 = [...]int{ 0, } +var parserErrorMessages = [...]struct { + state int + token int + msg string +}{} + //line yaccpar:1 /* parser for yacc output */ -var parserDebug = 0 +var ( + parserDebug = 0 + parserErrorVerbose = false +) type parserLexer interface { Lex(lval *parserSymType) int Error(s string) } +type parserParser interface { + Parse(parserLexer) int + Lookahead() int +} + +type parserParserImpl struct { + lookahead func() int +} + +func (p *parserParserImpl) Lookahead() int { + return p.lookahead() +} + +func parserNewParser() parserParser { + p := &parserParserImpl{ + lookahead: func() int { return -1 }, + } + return p +} + const parserFlag = -1000 func parserTokname(c int) string { - // 4 is TOKSTART above - if c >= 4 && c-4 < len(parserToknames) { - if parserToknames[c-4] != "" { - return parserToknames[c-4] + if c >= 1 && c-1 < len(parserToknames) { + if parserToknames[c-1] != "" { + return parserToknames[c-1] } } return __yyfmt__.Sprintf("tok-%v", c) @@ -150,51 +181,129 @@ func parserStatname(s int) string { return __yyfmt__.Sprintf("state-%v", s) } -func parserlex1(lex parserLexer, lval *parserSymType) int { - c := 0 - char := lex.Lex(lval) +func parserErrorMessage(state, lookAhead int) string { + const TOKSTART = 4 + + if !parserErrorVerbose { + return "syntax error" + } + + for _, e := range parserErrorMessages { + if e.state == state && e.token == lookAhead { + return "syntax error: " + e.msg + } + } + + res := "syntax error: unexpected " + parserTokname(lookAhead) + + // To match Bison, suggest at most four expected tokens. + expected := make([]int, 0, 4) + + // Look for shiftable tokens. + base := parserPact[state] + for tok := TOKSTART; tok-1 < len(parserToknames); tok++ { + if n := base + tok; n >= 0 && n < parserLast && parserChk[parserAct[n]] == tok { + if len(expected) == cap(expected) { + return res + } + expected = append(expected, tok) + } + } + + if parserDef[state] == -2 { + i := 0 + for parserExca[i] != -1 || parserExca[i+1] != state { + i += 2 + } + + // Look for tokens that we accept or reduce. + for i += 2; parserExca[i] >= 0; i += 2 { + tok := parserExca[i] + if tok < TOKSTART || parserExca[i+1] == 0 { + continue + } + if len(expected) == cap(expected) { + return res + } + expected = append(expected, tok) + } + + // If the default action is to accept or reduce, give up. 
+ if parserExca[i+1] != 0 { + return res + } + } + + for i, tok := range expected { + if i == 0 { + res += ", expecting " + } else { + res += " or " + } + res += parserTokname(tok) + } + return res +} + +func parserlex1(lex parserLexer, lval *parserSymType) (char, token int) { + token = 0 + char = lex.Lex(lval) if char <= 0 { - c = parserTok1[0] + token = parserTok1[0] goto out } if char < len(parserTok1) { - c = parserTok1[char] + token = parserTok1[char] goto out } if char >= parserPrivate { if char < parserPrivate+len(parserTok2) { - c = parserTok2[char-parserPrivate] + token = parserTok2[char-parserPrivate] goto out } } for i := 0; i < len(parserTok3); i += 2 { - c = parserTok3[i+0] - if c == char { - c = parserTok3[i+1] + token = parserTok3[i+0] + if token == char { + token = parserTok3[i+1] goto out } } out: - if c == 0 { - c = parserTok2[1] /* unknown char */ + if token == 0 { + token = parserTok2[1] /* unknown char */ } if parserDebug >= 3 { - __yyfmt__.Printf("lex %s(%d)\n", parserTokname(c), uint(char)) + __yyfmt__.Printf("lex %s(%d)\n", parserTokname(token), uint(char)) } - return c + return char, token } func parserParse(parserlex parserLexer) int { + return parserNewParser().Parse(parserlex) +} + +func (parserrcvr *parserParserImpl) Parse(parserlex parserLexer) int { var parsern int var parserlval parserSymType var parserVAL parserSymType + var parserDollar []parserSymType + _ = parserDollar // silence set and not used parserS := make([]parserSymType, parserMaxDepth) Nerrs := 0 /* number of errors */ Errflag := 0 /* error recovery flag */ parserstate := 0 parserchar := -1 + parsertoken := -1 // parserchar translated into internal numbering + parserrcvr.lookahead = func() int { return parserchar } + defer func() { + // Make sure we report no lookahead when not parsing. + parserstate = -1 + parserchar = -1 + parsertoken = -1 + }() parserp := -1 goto parserstack @@ -207,7 +316,7 @@ ret1: parserstack: /* put a state and value onto the stack */ if parserDebug >= 4 { - __yyfmt__.Printf("char %v in %v\n", parserTokname(parserchar), parserStatname(parserstate)) + __yyfmt__.Printf("char %v in %v\n", parserTokname(parsertoken), parserStatname(parserstate)) } parserp++ @@ -225,15 +334,16 @@ parsernewstate: goto parserdefault /* simple state */ } if parserchar < 0 { - parserchar = parserlex1(parserlex, &parserlval) + parserchar, parsertoken = parserlex1(parserlex, &parserlval) } - parsern += parserchar + parsern += parsertoken if parsern < 0 || parsern >= parserLast { goto parserdefault } parsern = parserAct[parsern] - if parserChk[parsern] == parserchar { /* valid shift */ + if parserChk[parsern] == parsertoken { /* valid shift */ parserchar = -1 + parsertoken = -1 parserVAL = parserlval parserstate = parsern if Errflag > 0 { @@ -247,7 +357,7 @@ parserdefault: parsern = parserDef[parserstate] if parsern == -2 { if parserchar < 0 { - parserchar = parserlex1(parserlex, &parserlval) + parserchar, parsertoken = parserlex1(parserlex, &parserlval) } /* look through exception table */ @@ -260,7 +370,7 @@ parserdefault: } for xi += 2; ; xi += 2 { parsern = parserExca[xi+0] - if parsern < 0 || parsern == parserchar { + if parsern < 0 || parsern == parsertoken { break } } @@ -273,11 +383,11 @@ parserdefault: /* error ... 
attempt to resume parsing */ switch Errflag { case 0: /* brand new error */ - parserlex.Error("syntax error") + parserlex.Error(parserErrorMessage(parserstate, parsertoken)) Nerrs++ if parserDebug >= 1 { __yyfmt__.Printf("%s", parserStatname(parserstate)) - __yyfmt__.Printf(" saw %s\n", parserTokname(parserchar)) + __yyfmt__.Printf(" saw %s\n", parserTokname(parsertoken)) } fallthrough @@ -305,12 +415,13 @@ parserdefault: case 3: /* no shift yet; clobber input char */ if parserDebug >= 2 { - __yyfmt__.Printf("error recovery discards %s\n", parserTokname(parserchar)) + __yyfmt__.Printf("error recovery discards %s\n", parserTokname(parsertoken)) } - if parserchar == parserEofCode { + if parsertoken == parserEofCode { goto ret1 } parserchar = -1 + parsertoken = -1 goto parsernewstate /* try again in the same state */ } } @@ -325,6 +436,13 @@ parserdefault: _ = parserpt // guard against "declared and not used" parserp -= parserR2[parsern] + // parserp is now the index of $0. Perform the default action. Iff the + // reduced production is ε, $1 is possibly out of range. + if parserp+1 >= len(parserS) { + nyys := make([]parserSymType, len(parserS)*2) + copy(nyys, parserS) + parserS = nyys + } parserVAL = parserS[parserp+1] /* consult goto table to find next state */ @@ -344,6 +462,7 @@ parserdefault: switch parsernt { case 1: + parserDollar = parserS[parserpt-0 : parserpt+1] //line lang.y:35 { parserResult = &ast.LiteralNode{ @@ -353,9 +472,10 @@ parserdefault: } } case 2: + parserDollar = parserS[parserpt-1 : parserpt+1] //line lang.y:43 { - parserResult = parserS[parserpt-0].node + parserResult = parserDollar[1].node // We want to make sure that the top value is always a Concat // so that the return value is always a string type from an @@ -365,28 +485,30 @@ parserdefault: // because functionally the AST is the same, but we do that because // it makes for an easy literal check later (to check if a string // has any interpolations). 
- if _, ok := parserS[parserpt-0].node.(*ast.Concat); !ok { - if n, ok := parserS[parserpt-0].node.(*ast.LiteralNode); !ok || n.Typex != ast.TypeString { + if _, ok := parserDollar[1].node.(*ast.Concat); !ok { + if n, ok := parserDollar[1].node.(*ast.LiteralNode); !ok || n.Typex != ast.TypeString { parserResult = &ast.Concat{ - Exprs: []ast.Node{parserS[parserpt-0].node}, - Posx: parserS[parserpt-0].node.Pos(), + Exprs: []ast.Node{parserDollar[1].node}, + Posx: parserDollar[1].node.Pos(), } } } } case 3: + parserDollar = parserS[parserpt-1 : parserpt+1] //line lang.y:66 { - parserVAL.node = parserS[parserpt-0].node + parserVAL.node = parserDollar[1].node } case 4: + parserDollar = parserS[parserpt-2 : parserpt+1] //line lang.y:70 { var result []ast.Node - if c, ok := parserS[parserpt-1].node.(*ast.Concat); ok { - result = append(c.Exprs, parserS[parserpt-0].node) + if c, ok := parserDollar[1].node.(*ast.Concat); ok { + result = append(c.Exprs, parserDollar[2].node) } else { - result = []ast.Node{parserS[parserpt-1].node, parserS[parserpt-0].node} + result = []ast.Node{parserDollar[1].node, parserDollar[2].node} } parserVAL.node = &ast.Concat{ @@ -395,89 +517,103 @@ parserdefault: } } case 5: + parserDollar = parserS[parserpt-1 : parserpt+1] //line lang.y:86 { - parserVAL.node = parserS[parserpt-0].node + parserVAL.node = parserDollar[1].node } case 6: + parserDollar = parserS[parserpt-1 : parserpt+1] //line lang.y:90 { - parserVAL.node = parserS[parserpt-0].node + parserVAL.node = parserDollar[1].node } case 7: + parserDollar = parserS[parserpt-3 : parserpt+1] //line lang.y:96 { - parserVAL.node = parserS[parserpt-1].node + parserVAL.node = parserDollar[2].node } case 8: + parserDollar = parserS[parserpt-3 : parserpt+1] //line lang.y:102 { - parserVAL.node = parserS[parserpt-1].node + parserVAL.node = parserDollar[2].node } case 9: + parserDollar = parserS[parserpt-1 : parserpt+1] //line lang.y:106 { - parserVAL.node = parserS[parserpt-0].node + parserVAL.node = parserDollar[1].node } case 10: + parserDollar = parserS[parserpt-1 : parserpt+1] //line lang.y:110 { parserVAL.node = &ast.LiteralNode{ - Value: parserS[parserpt-0].token.Value.(int), + Value: parserDollar[1].token.Value.(int), Typex: ast.TypeInt, - Posx: parserS[parserpt-0].token.Pos, + Posx: parserDollar[1].token.Pos, } } case 11: + parserDollar = parserS[parserpt-1 : parserpt+1] //line lang.y:118 { parserVAL.node = &ast.LiteralNode{ - Value: parserS[parserpt-0].token.Value.(float64), + Value: parserDollar[1].token.Value.(float64), Typex: ast.TypeFloat, - Posx: parserS[parserpt-0].token.Pos, + Posx: parserDollar[1].token.Pos, } } case 12: + parserDollar = parserS[parserpt-3 : parserpt+1] //line lang.y:126 { parserVAL.node = &ast.Arithmetic{ - Op: parserS[parserpt-1].token.Value.(ast.ArithmeticOp), - Exprs: []ast.Node{parserS[parserpt-2].node, parserS[parserpt-0].node}, - Posx: parserS[parserpt-2].node.Pos(), + Op: parserDollar[2].token.Value.(ast.ArithmeticOp), + Exprs: []ast.Node{parserDollar[1].node, parserDollar[3].node}, + Posx: parserDollar[1].node.Pos(), } } case 13: + parserDollar = parserS[parserpt-1 : parserpt+1] //line lang.y:134 { - parserVAL.node = &ast.VariableAccess{Name: parserS[parserpt-0].token.Value.(string), Posx: parserS[parserpt-0].token.Pos} + parserVAL.node = &ast.VariableAccess{Name: parserDollar[1].token.Value.(string), Posx: parserDollar[1].token.Pos} } case 14: + parserDollar = parserS[parserpt-4 : parserpt+1] //line lang.y:138 { - parserVAL.node = &ast.Call{Func: 
parserS[parserpt-3].token.Value.(string), Args: parserS[parserpt-1].nodeList, Posx: parserS[parserpt-3].token.Pos} + parserVAL.node = &ast.Call{Func: parserDollar[1].token.Value.(string), Args: parserDollar[3].nodeList, Posx: parserDollar[1].token.Pos} } case 15: + parserDollar = parserS[parserpt-0 : parserpt+1] //line lang.y:143 { parserVAL.nodeList = nil } case 16: + parserDollar = parserS[parserpt-3 : parserpt+1] //line lang.y:147 { - parserVAL.nodeList = append(parserS[parserpt-2].nodeList, parserS[parserpt-0].node) + parserVAL.nodeList = append(parserDollar[1].nodeList, parserDollar[3].node) } case 17: + parserDollar = parserS[parserpt-1 : parserpt+1] //line lang.y:151 { - parserVAL.nodeList = append(parserVAL.nodeList, parserS[parserpt-0].node) + parserVAL.nodeList = append(parserVAL.nodeList, parserDollar[1].node) } case 18: + parserDollar = parserS[parserpt-1 : parserpt+1] //line lang.y:157 { parserVAL.node = &ast.LiteralNode{ - Value: parserS[parserpt-0].token.Value.(string), + Value: parserDollar[1].token.Value.(string), Typex: ast.TypeString, - Posx: parserS[parserpt-0].token.Pos, + Posx: parserDollar[1].token.Pos, } } } diff --git a/config/loader.go b/config/loader.go index 4f5d8e765..5711ce8ef 100644 --- a/config/loader.go +++ b/config/loader.go @@ -210,5 +210,5 @@ func dirFiles(dir string) ([]string, []string, error) { func isIgnoredFile(name string) bool { return strings.HasPrefix(name, ".") || // Unix-like hidden files strings.HasSuffix(name, "~") || // vim - (strings.HasPrefix(name, "#") && strings.HasSuffix(name, "#")) // emacs + strings.HasPrefix(name, "#") && strings.HasSuffix(name, "#") // emacs } diff --git a/config/loader_test.go b/config/loader_test.go index d239bd0b9..eaf4f10aa 100644 --- a/config/loader_test.go +++ b/config/loader_test.go @@ -440,6 +440,54 @@ func TestLoadFile_createBeforeDestroy(t *testing.T) { } } +func TestLoadFile_ignoreChanges(t *testing.T) { + c, err := LoadFile(filepath.Join(fixtureDir, "ignore-changes.tf")) + if err != nil { + t.Fatalf("err: %s", err) + } + + if c == nil { + t.Fatal("config should not be nil") + } + + actual := resourcesStr(c.Resources) + print(actual) + if actual != strings.TrimSpace(ignoreChangesResourcesStr) { + t.Fatalf("bad:\n%s", actual) + } + + // Check for the flag value + r := c.Resources[0] + if r.Name != "web" && r.Type != "aws_instance" { + t.Fatalf("Bad: %#v", r) + } + + // Should populate ignore changes + if len(r.Lifecycle.IgnoreChanges) == 0 { + t.Fatalf("Bad: %#v", r) + } + + r = c.Resources[1] + if r.Name != "bar" && r.Type != "aws_instance" { + t.Fatalf("Bad: %#v", r) + } + + // Should not populate ignore changes + if len(r.Lifecycle.IgnoreChanges) > 0 { + t.Fatalf("Bad: %#v", r) + } + + r = c.Resources[2] + if r.Name != "baz" && r.Type != "aws_instance" { + t.Fatalf("Bad: %#v", r) + } + + // Should not populate ignore changes + if len(r.Lifecycle.IgnoreChanges) > 0 { + t.Fatalf("Bad: %#v", r) + } +} + func TestLoad_preventDestroyString(t *testing.T) { c, err := LoadFile(filepath.Join(fixtureDir, "prevent-destroy-string.tf")) if err != nil { @@ -676,3 +724,12 @@ aws_instance[bar] (x1) aws_instance[web] (x1) ami ` + +const ignoreChangesResourcesStr = ` +aws_instance[bar] (x1) + ami +aws_instance[baz] (x1) + ami +aws_instance[web] (x1) + ami +` diff --git a/config/merge_test.go b/config/merge_test.go index 40144f0c7..6fe55a2d5 100644 --- a/config/merge_test.go +++ b/config/merge_test.go @@ -157,7 +157,7 @@ func TestMerge(t *testing.T) { for i, tc := range cases { actual, err := Merge(tc.c1, tc.c2) - if 
(err != nil) != tc.err { + if err != nil != tc.err { t.Fatalf("%d: error fail", i) } diff --git a/config/module/detect.go b/config/module/detect.go deleted file mode 100644 index 51e07f725..000000000 --- a/config/module/detect.go +++ /dev/null @@ -1,92 +0,0 @@ -package module - -import ( - "fmt" - "path/filepath" - - "github.com/hashicorp/terraform/helper/url" -) - -// Detector defines the interface that an invalid URL or a URL with a blank -// scheme is passed through in order to determine if its shorthand for -// something else well-known. -type Detector interface { - // Detect will detect whether the string matches a known pattern to - // turn it into a proper URL. - Detect(string, string) (string, bool, error) -} - -// Detectors is the list of detectors that are tried on an invalid URL. -// This is also the order they're tried (index 0 is first). -var Detectors []Detector - -func init() { - Detectors = []Detector{ - new(GitHubDetector), - new(BitBucketDetector), - new(FileDetector), - } -} - -// Detect turns a source string into another source string if it is -// detected to be of a known pattern. -// -// This is safe to be called with an already valid source string: Detect -// will just return it. -func Detect(src string, pwd string) (string, error) { - getForce, getSrc := getForcedGetter(src) - - // Separate out the subdir if there is one, we don't pass that to detect - getSrc, subDir := getDirSubdir(getSrc) - - u, err := url.Parse(getSrc) - if err == nil && u.Scheme != "" { - // Valid URL - return src, nil - } - - for _, d := range Detectors { - result, ok, err := d.Detect(getSrc, pwd) - if err != nil { - return "", err - } - if !ok { - continue - } - - var detectForce string - detectForce, result = getForcedGetter(result) - result, detectSubdir := getDirSubdir(result) - - // If we have a subdir from the detection, then prepend it to our - // requested subdir. - if detectSubdir != "" { - if subDir != "" { - subDir = filepath.Join(detectSubdir, subDir) - } else { - subDir = detectSubdir - } - } - if subDir != "" { - u, err := url.Parse(result) - if err != nil { - return "", fmt.Errorf("Error parsing URL: %s", err) - } - u.Path += "//" + subDir - result = u.String() - } - - // Preserve the forced getter if it exists. We try to use the - // original set force first, followed by any force set by the - // detector. - if getForce != "" { - result = fmt.Sprintf("%s::%s", getForce, result) - } else if detectForce != "" { - result = fmt.Sprintf("%s::%s", detectForce, result) - } - - return result, nil - } - - return "", fmt.Errorf("invalid source string: %s", src) -} diff --git a/config/module/detect_bitbucket.go b/config/module/detect_bitbucket.go deleted file mode 100644 index 657637c09..000000000 --- a/config/module/detect_bitbucket.go +++ /dev/null @@ -1,66 +0,0 @@ -package module - -import ( - "encoding/json" - "fmt" - "net/http" - "net/url" - "strings" -) - -// BitBucketDetector implements Detector to detect BitBucket URLs and turn -// them into URLs that the Git or Hg Getter can understand. 
-type BitBucketDetector struct{} - -func (d *BitBucketDetector) Detect(src, _ string) (string, bool, error) { - if len(src) == 0 { - return "", false, nil - } - - if strings.HasPrefix(src, "bitbucket.org/") { - return d.detectHTTP(src) - } - - return "", false, nil -} - -func (d *BitBucketDetector) detectHTTP(src string) (string, bool, error) { - u, err := url.Parse("https://" + src) - if err != nil { - return "", true, fmt.Errorf("error parsing BitBucket URL: %s", err) - } - - // We need to get info on this BitBucket repository to determine whether - // it is Git or Hg. - var info struct { - SCM string `json:"scm"` - } - infoUrl := "https://api.bitbucket.org/1.0/repositories" + u.Path - resp, err := http.Get(infoUrl) - if err != nil { - return "", true, fmt.Errorf("error looking up BitBucket URL: %s", err) - } - if resp.StatusCode == 403 { - // A private repo - return "", true, fmt.Errorf( - "shorthand BitBucket URL can't be used for private repos, " + - "please use a full URL") - } - dec := json.NewDecoder(resp.Body) - if err := dec.Decode(&info); err != nil { - return "", true, fmt.Errorf("error looking up BitBucket URL: %s", err) - } - - switch info.SCM { - case "git": - if !strings.HasSuffix(u.Path, ".git") { - u.Path += ".git" - } - - return "git::" + u.String(), true, nil - case "hg": - return "hg::" + u.String(), true, nil - default: - return "", true, fmt.Errorf("unknown BitBucket SCM type: %s", info.SCM) - } -} diff --git a/config/module/detect_bitbucket_test.go b/config/module/detect_bitbucket_test.go deleted file mode 100644 index b05fd5999..000000000 --- a/config/module/detect_bitbucket_test.go +++ /dev/null @@ -1,67 +0,0 @@ -package module - -import ( - "net/http" - "strings" - "testing" -) - -const testBBUrl = "https://bitbucket.org/hashicorp/tf-test-git" - -func TestBitBucketDetector(t *testing.T) { - t.Parallel() - - if _, err := http.Get(testBBUrl); err != nil { - t.Log("internet may not be working, skipping BB tests") - t.Skip() - } - - cases := []struct { - Input string - Output string - }{ - // HTTP - { - "bitbucket.org/hashicorp/tf-test-git", - "git::https://bitbucket.org/hashicorp/tf-test-git.git", - }, - { - "bitbucket.org/hashicorp/tf-test-git.git", - "git::https://bitbucket.org/hashicorp/tf-test-git.git", - }, - { - "bitbucket.org/hashicorp/tf-test-hg", - "hg::https://bitbucket.org/hashicorp/tf-test-hg", - }, - } - - pwd := "/pwd" - f := new(BitBucketDetector) - for i, tc := range cases { - var err error - for i := 0; i < 3; i++ { - var output string - var ok bool - output, ok, err = f.Detect(tc.Input, pwd) - if err != nil { - if strings.Contains(err.Error(), "invalid character") { - continue - } - - t.Fatalf("err: %s", err) - } - if !ok { - t.Fatal("not ok") - } - - if output != tc.Output { - t.Fatalf("%d: bad: %#v", i, output) - } - - break - } - if i >= 3 { - t.Fatalf("failure from bitbucket: %s", err) - } - } -} diff --git a/config/module/detect_file.go b/config/module/detect_file.go deleted file mode 100644 index 859739f95..000000000 --- a/config/module/detect_file.go +++ /dev/null @@ -1,60 +0,0 @@ -package module - -import ( - "fmt" - "os" - "path/filepath" - "runtime" -) - -// FileDetector implements Detector to detect file paths. 
-type FileDetector struct{} - -func (d *FileDetector) Detect(src, pwd string) (string, bool, error) { - if len(src) == 0 { - return "", false, nil - } - - if !filepath.IsAbs(src) { - if pwd == "" { - return "", true, fmt.Errorf( - "relative paths require a module with a pwd") - } - - // Stat the pwd to determine if its a symbolic link. If it is, - // then the pwd becomes the original directory. Otherwise, - // `filepath.Join` below does some weird stuff. - // - // We just ignore if the pwd doesn't exist. That error will be - // caught later when we try to use the URL. - if fi, err := os.Lstat(pwd); !os.IsNotExist(err) { - if err != nil { - return "", true, err - } - if fi.Mode()&os.ModeSymlink != 0 { - pwd, err = os.Readlink(pwd) - if err != nil { - return "", true, err - } - } - } - - src = filepath.Join(pwd, src) - } - - return fmtFileURL(src), true, nil -} - -func fmtFileURL(path string) string { - if runtime.GOOS == "windows" { - // Make sure we're using "/" on Windows. URLs are "/"-based. - path = filepath.ToSlash(path) - return fmt.Sprintf("file://%s", path) - } - - // Make sure that we don't start with "/" since we add that below. - if path[0] == '/' { - path = path[1:] - } - return fmt.Sprintf("file:///%s", path) -} diff --git a/config/module/detect_file_test.go b/config/module/detect_file_test.go deleted file mode 100644 index 4c75ce83d..000000000 --- a/config/module/detect_file_test.go +++ /dev/null @@ -1,88 +0,0 @@ -package module - -import ( - "runtime" - "testing" -) - -type fileTest struct { - in, pwd, out string - err bool -} - -var fileTests = []fileTest{ - {"./foo", "/pwd", "file:///pwd/foo", false}, - {"./foo?foo=bar", "/pwd", "file:///pwd/foo?foo=bar", false}, - {"foo", "/pwd", "file:///pwd/foo", false}, -} - -var unixFileTests = []fileTest{ - {"/foo", "/pwd", "file:///foo", false}, - {"/foo?bar=baz", "/pwd", "file:///foo?bar=baz", false}, -} - -var winFileTests = []fileTest{ - {"/foo", "/pwd", "file:///pwd/foo", false}, - {`C:\`, `/pwd`, `file://C:/`, false}, - {`C:\?bar=baz`, `/pwd`, `file://C:/?bar=baz`, false}, -} - -func TestFileDetector(t *testing.T) { - if runtime.GOOS == "windows" { - fileTests = append(fileTests, winFileTests...) - } else { - fileTests = append(fileTests, unixFileTests...) - } - - f := new(FileDetector) - for i, tc := range fileTests { - out, ok, err := f.Detect(tc.in, tc.pwd) - if err != nil { - t.Fatalf("err: %s", err) - } - if !ok { - t.Fatal("not ok") - } - - if out != tc.out { - t.Fatalf("%d: bad: %#v", i, out) - } - } -} - -var noPwdFileTests = []fileTest{ - {in: "./foo", pwd: "", out: "", err: true}, - {in: "foo", pwd: "", out: "", err: true}, -} - -var noPwdUnixFileTests = []fileTest{ - {in: "/foo", pwd: "", out: "file:///foo", err: false}, -} - -var noPwdWinFileTests = []fileTest{ - {in: "/foo", pwd: "", out: "", err: true}, - {in: `C:\`, pwd: ``, out: `file://C:/`, err: false}, -} - -func TestFileDetector_noPwd(t *testing.T) { - if runtime.GOOS == "windows" { - noPwdFileTests = append(noPwdFileTests, noPwdWinFileTests...) - } else { - noPwdFileTests = append(noPwdFileTests, noPwdUnixFileTests...) 
- } - - f := new(FileDetector) - for i, tc := range noPwdFileTests { - out, ok, err := f.Detect(tc.in, tc.pwd) - if (err != nil) != tc.err { - t.Fatalf("%d: err: %s", i, err) - } - if !ok { - t.Fatal("not ok") - } - - if out != tc.out { - t.Fatalf("%d: bad: %#v", i, out) - } - } -} diff --git a/config/module/detect_github.go b/config/module/detect_github.go deleted file mode 100644 index c4a4e89f0..000000000 --- a/config/module/detect_github.go +++ /dev/null @@ -1,73 +0,0 @@ -package module - -import ( - "fmt" - "net/url" - "strings" -) - -// GitHubDetector implements Detector to detect GitHub URLs and turn -// them into URLs that the Git Getter can understand. -type GitHubDetector struct{} - -func (d *GitHubDetector) Detect(src, _ string) (string, bool, error) { - if len(src) == 0 { - return "", false, nil - } - - if strings.HasPrefix(src, "github.com/") { - return d.detectHTTP(src) - } else if strings.HasPrefix(src, "git@github.com:") { - return d.detectSSH(src) - } - - return "", false, nil -} - -func (d *GitHubDetector) detectHTTP(src string) (string, bool, error) { - parts := strings.Split(src, "/") - if len(parts) < 3 { - return "", false, fmt.Errorf( - "GitHub URLs should be github.com/username/repo") - } - - urlStr := fmt.Sprintf("https://%s", strings.Join(parts[:3], "/")) - url, err := url.Parse(urlStr) - if err != nil { - return "", true, fmt.Errorf("error parsing GitHub URL: %s", err) - } - - if !strings.HasSuffix(url.Path, ".git") { - url.Path += ".git" - } - - if len(parts) > 3 { - url.Path += "//" + strings.Join(parts[3:], "/") - } - - return "git::" + url.String(), true, nil -} - -func (d *GitHubDetector) detectSSH(src string) (string, bool, error) { - idx := strings.Index(src, ":") - qidx := strings.Index(src, "?") - if qidx == -1 { - qidx = len(src) - } - - var u url.URL - u.Scheme = "ssh" - u.User = url.User("git") - u.Host = "github.com" - u.Path = src[idx+1 : qidx] - if qidx < len(src) { - q, err := url.ParseQuery(src[qidx+1:]) - if err != nil { - return "", true, fmt.Errorf("error parsing GitHub SSH URL: %s", err) - } - - u.RawQuery = q.Encode() - } - - return "git::" + u.String(), true, nil -} diff --git a/config/module/detect_github_test.go b/config/module/detect_github_test.go deleted file mode 100644 index 822e1806d..000000000 --- a/config/module/detect_github_test.go +++ /dev/null @@ -1,55 +0,0 @@ -package module - -import ( - "testing" -) - -func TestGitHubDetector(t *testing.T) { - cases := []struct { - Input string - Output string - }{ - // HTTP - {"github.com/hashicorp/foo", "git::https://github.com/hashicorp/foo.git"}, - {"github.com/hashicorp/foo.git", "git::https://github.com/hashicorp/foo.git"}, - { - "github.com/hashicorp/foo/bar", - "git::https://github.com/hashicorp/foo.git//bar", - }, - { - "github.com/hashicorp/foo?foo=bar", - "git::https://github.com/hashicorp/foo.git?foo=bar", - }, - { - "github.com/hashicorp/foo.git?foo=bar", - "git::https://github.com/hashicorp/foo.git?foo=bar", - }, - - // SSH - {"git@github.com:hashicorp/foo.git", "git::ssh://git@github.com/hashicorp/foo.git"}, - { - "git@github.com:hashicorp/foo.git//bar", - "git::ssh://git@github.com/hashicorp/foo.git//bar", - }, - { - "git@github.com:hashicorp/foo.git?foo=bar", - "git::ssh://git@github.com/hashicorp/foo.git?foo=bar", - }, - } - - pwd := "/pwd" - f := new(GitHubDetector) - for i, tc := range cases { - output, ok, err := f.Detect(tc.Input, pwd) - if err != nil { - t.Fatalf("err: %s", err) - } - if !ok { - t.Fatal("not ok") - } - - if output != tc.Output { - t.Fatalf("%d: bad: 
%#v", i, output) - } - } -} diff --git a/config/module/detect_test.go b/config/module/detect_test.go deleted file mode 100644 index e1e3b4372..000000000 --- a/config/module/detect_test.go +++ /dev/null @@ -1,51 +0,0 @@ -package module - -import ( - "testing" -) - -func TestDetect(t *testing.T) { - cases := []struct { - Input string - Pwd string - Output string - Err bool - }{ - {"./foo", "/foo", "file:///foo/foo", false}, - {"git::./foo", "/foo", "git::file:///foo/foo", false}, - { - "git::github.com/hashicorp/foo", - "", - "git::https://github.com/hashicorp/foo.git", - false, - }, - { - "./foo//bar", - "/foo", - "file:///foo/foo//bar", - false, - }, - { - "git::github.com/hashicorp/foo//bar", - "", - "git::https://github.com/hashicorp/foo.git//bar", - false, - }, - { - "git::https://github.com/hashicorp/consul.git", - "", - "git::https://github.com/hashicorp/consul.git", - false, - }, - } - - for i, tc := range cases { - output, err := Detect(tc.Input, tc.Pwd) - if (err != nil) != tc.Err { - t.Fatalf("%d: bad err: %s", i, err) - } - if output != tc.Output { - t.Fatalf("%d: bad output: %s\nexpected: %s", i, output, tc.Output) - } - } -} diff --git a/config/module/folder_storage.go b/config/module/folder_storage.go deleted file mode 100644 index 81c9a2ac1..000000000 --- a/config/module/folder_storage.go +++ /dev/null @@ -1,65 +0,0 @@ -package module - -import ( - "crypto/md5" - "encoding/hex" - "fmt" - "os" - "path/filepath" -) - -// FolderStorage is an implementation of the Storage interface that manages -// modules on the disk. -type FolderStorage struct { - // StorageDir is the directory where the modules will be stored. - StorageDir string -} - -// Dir implements Storage.Dir -func (s *FolderStorage) Dir(key string) (d string, e bool, err error) { - d = s.dir(key) - _, err = os.Stat(d) - if err == nil { - // Directory exists - e = true - return - } - if os.IsNotExist(err) { - // Directory doesn't exist - d = "" - e = false - err = nil - return - } - - // An error - d = "" - e = false - return -} - -// Get implements Storage.Get -func (s *FolderStorage) Get(key string, source string, update bool) error { - dir := s.dir(key) - if !update { - if _, err := os.Stat(dir); err == nil { - // If the directory already exists, then we're done since - // we're not updating. - return nil - } else if !os.IsNotExist(err) { - // If the error we got wasn't a file-not-exist error, then - // something went wrong and we should report it. - return fmt.Errorf("Error reading module directory: %s", err) - } - } - - // Get the source. This always forces an update. - return Get(dir, source) -} - -// dir returns the directory name internally that we'll use to map to -// internally. -func (s *FolderStorage) dir(key string) string { - sum := md5.Sum([]byte(key)) - return filepath.Join(s.StorageDir, hex.EncodeToString(sum[:])) -} diff --git a/config/module/folder_storage_test.go b/config/module/folder_storage_test.go deleted file mode 100644 index 7fda6b21a..000000000 --- a/config/module/folder_storage_test.go +++ /dev/null @@ -1,48 +0,0 @@ -package module - -import ( - "os" - "path/filepath" - "testing" -) - -func TestFolderStorage_impl(t *testing.T) { - var _ Storage = new(FolderStorage) -} - -func TestFolderStorage(t *testing.T) { - s := &FolderStorage{StorageDir: tempDir(t)} - - module := testModule("basic") - - // A module shouldn't exist at first... 
- _, ok, err := s.Dir(module) - if err != nil { - t.Fatalf("err: %s", err) - } - if ok { - t.Fatal("should not exist") - } - - key := "foo" - - // We can get it - err = s.Get(key, module, false) - if err != nil { - t.Fatalf("err: %s", err) - } - - // Now the module exists - dir, ok, err := s.Dir(key) - if err != nil { - t.Fatalf("err: %s", err) - } - if !ok { - t.Fatal("should exist") - } - - mainPath := filepath.Join(dir, "main.tf") - if _, err := os.Stat(mainPath); err != nil { - t.Fatalf("err: %s", err) - } -} diff --git a/config/module/get.go b/config/module/get.go index 627d395a9..cba15277f 100644 --- a/config/module/get.go +++ b/config/module/get.go @@ -1,113 +1,30 @@ package module import ( - "bytes" - "fmt" "io/ioutil" - "net/url" "os" - "os/exec" - "path/filepath" - "regexp" - "strings" - "syscall" - urlhelper "github.com/hashicorp/terraform/helper/url" + "github.com/hashicorp/go-getter" ) -// Getter defines the interface that schemes must implement to download -// and update modules. -type Getter interface { - // Get downloads the given URL into the given directory. This always - // assumes that we're updating and gets the latest version that it can. - // - // The directory may already exist (if we're updating). If it is in a - // format that isn't understood, an error should be returned. Get shouldn't - // simply nuke the directory. - Get(string, *url.URL) error } - -// Getters is the mapping of scheme to the Getter implementation that will -// be used to get a dependency. -var Getters map[string]Getter - -// forcedRegexp is the regular expression that finds forced getters. This -// syntax is schema::url, example: git::https://foo.com -var forcedRegexp = regexp.MustCompile(`^([A-Za-z]+)::(.+)$`) - -func init() { - httpGetter := new(HttpGetter) - - Getters = map[string]Getter{ - "file": new(FileGetter), - "git": new(GitGetter), - "hg": new(HgGetter), - "http": httpGetter, - "https": httpGetter, - } -} - -// Get downloads the module specified by src into the folder specified by -// dst. If dst already exists, Get will attempt to update it. +// GetMode is an enum that describes how modules are loaded. // -// src is a URL, whereas dst is always just a file path to a folder. This -// folder doesn't need to exist. It will be created if it doesn't exist. -func Get(dst, src string) error { - var force string - force, src = getForcedGetter(src) +// GetModeNone says that modules will not be downloaded or updated, they will +// only be loaded from the storage. +// +// GetModeGet says that modules can be initially downloaded if they don't +// exist, but are otherwise just loaded from the current version in storage. +// +// GetModeUpdate says that modules should be checked for updates and +// downloaded prior to loading. If there are no updates, we load the version +// from disk, otherwise we download first and then load. +type GetMode byte - // If there is a subdir component, then we download the root separately - // and then copy over the proper subdir. 
- var realDst string - src, subDir := getDirSubdir(src) - if subDir != "" { - tmpDir, err := ioutil.TempDir("", "tf") - if err != nil { - return err - } - if err := os.RemoveAll(tmpDir); err != nil { - return err - } - defer os.RemoveAll(tmpDir) - - realDst = dst - dst = tmpDir - } - - u, err := urlhelper.Parse(src) - if err != nil { - return err - } - if force == "" { - force = u.Scheme - } - - g, ok := Getters[force] - if !ok { - return fmt.Errorf( - "module download not supported for scheme '%s'", force) - } - - err = g.Get(dst, u) - if err != nil { - err = fmt.Errorf("error downloading module '%s': %s", src, err) - return err - } - - // If we have a subdir, copy that over - if subDir != "" { - if err := os.RemoveAll(realDst); err != nil { - return err - } - if err := os.MkdirAll(realDst, 0755); err != nil { - return err - } - - return copyDir(realDst, filepath.Join(dst, subDir)) - } - - return nil -} +const ( + GetModeNone GetMode = iota + GetModeGet + GetModeUpdate +) // GetCopy is the same as Get except that it downloads a copy of the // module represented by source. @@ -126,7 +43,7 @@ func GetCopy(dst, src string) error { defer os.RemoveAll(tmpDir) // Get to that temporary dir - if err := Get(tmpDir, src); err != nil { + if err := getter.Get(tmpDir, src); err != nil { return err } @@ -139,69 +56,14 @@ func GetCopy(dst, src string) error { return copyDir(dst, tmpDir) } -// getRunCommand is a helper that will run a command and capture the output -// in the case an error happens. -func getRunCommand(cmd *exec.Cmd) error { - var buf bytes.Buffer - cmd.Stdout = &buf - cmd.Stderr = &buf - err := cmd.Run() - if err == nil { - return nil - } - if exiterr, ok := err.(*exec.ExitError); ok { - // The program has exited with an exit code != 0 - if status, ok := exiterr.Sys().(syscall.WaitStatus); ok { - return fmt.Errorf( - "%s exited with %d: %s", - cmd.Path, - status.ExitStatus(), - buf.String()) +func getStorage(s getter.Storage, key string, src string, mode GetMode) (string, bool, error) { + // Get the module with the level specified if we were told to. + if mode > GetModeNone { + if err := s.Get(key, src, mode == GetModeUpdate); err != nil { + return "", false, err } } - return fmt.Errorf("error running %s: %s", cmd.Path, buf.String()) -} - -// getDirSubdir takes a source and returns a tuple of the URL without -// the subdir and the URL with the subdir. -func getDirSubdir(src string) (string, string) { - // Calcaulate an offset to avoid accidentally marking the scheme - // as the dir. - var offset int - if idx := strings.Index(src, "://"); idx > -1 { - offset = idx + 3 - } - - // First see if we even have an explicit subdir - idx := strings.Index(src[offset:], "//") - if idx == -1 { - return src, "" - } - - idx += offset - subdir := src[idx+2:] - src = src[:idx] - - // Next, check if we have query parameters and push them onto the - // URL. - if idx = strings.Index(subdir, "?"); idx > -1 { - query := subdir[idx:] - subdir = subdir[:idx] - src += query - } - - return src, subdir -} - -// getForcedGetter takes a source and returns the tuple of the forced -// getter and the raw URL (without the force syntax). -func getForcedGetter(src string) (string, string) { - var forced string - if ms := forcedRegexp.FindStringSubmatch(src); ms != nil { - forced = ms[1] - src = ms[2] - } - - return forced, src + // Get the directory where the module is. 
+ return s.Dir(key) } diff --git a/config/module/get_file.go b/config/module/get_file.go deleted file mode 100644 index 73cb85834..000000000 --- a/config/module/get_file.go +++ /dev/null @@ -1,46 +0,0 @@ -package module - -import ( - "fmt" - "net/url" - "os" - "path/filepath" -) - -// FileGetter is a Getter implementation that will download a module from -// a file scheme. -type FileGetter struct{} - -func (g *FileGetter) Get(dst string, u *url.URL) error { - // The source path must exist and be a directory to be usable. - if fi, err := os.Stat(u.Path); err != nil { - return fmt.Errorf("source path error: %s", err) - } else if !fi.IsDir() { - return fmt.Errorf("source path must be a directory") - } - - fi, err := os.Lstat(dst) - if err != nil && !os.IsNotExist(err) { - return err - } - - // If the destination already exists, it must be a symlink - if err == nil { - mode := fi.Mode() - if mode&os.ModeSymlink == 0 { - return fmt.Errorf("destination exists and is not a symlink") - } - - // Remove the destination - if err := os.Remove(dst); err != nil { - return err - } - } - - // Create all the parent directories - if err := os.MkdirAll(filepath.Dir(dst), 0755); err != nil { - return err - } - - return os.Symlink(u.Path, dst) -} diff --git a/config/module/get_file_test.go b/config/module/get_file_test.go deleted file mode 100644 index 4c9f6126a..000000000 --- a/config/module/get_file_test.go +++ /dev/null @@ -1,104 +0,0 @@ -package module - -import ( - "os" - "path/filepath" - "testing" -) - -func TestFileGetter_impl(t *testing.T) { - var _ Getter = new(FileGetter) -} - -func TestFileGetter(t *testing.T) { - g := new(FileGetter) - dst := tempDir(t) - - // With a dir that doesn't exist - if err := g.Get(dst, testModuleURL("basic")); err != nil { - t.Fatalf("err: %s", err) - } - - // Verify the destination folder is a symlink - fi, err := os.Lstat(dst) - if err != nil { - t.Fatalf("err: %s", err) - } - if fi.Mode()&os.ModeSymlink == 0 { - t.Fatal("destination is not a symlink") - } - - // Verify the main file exists - mainPath := filepath.Join(dst, "main.tf") - if _, err := os.Stat(mainPath); err != nil { - t.Fatalf("err: %s", err) - } -} - -func TestFileGetter_sourceFile(t *testing.T) { - g := new(FileGetter) - dst := tempDir(t) - - // With a source URL that is a path to a file - u := testModuleURL("basic") - u.Path += "/main.tf" - if err := g.Get(dst, u); err == nil { - t.Fatal("should error") - } -} - -func TestFileGetter_sourceNoExist(t *testing.T) { - g := new(FileGetter) - dst := tempDir(t) - - // With a source URL that doesn't exist - u := testModuleURL("basic") - u.Path += "/main" - if err := g.Get(dst, u); err == nil { - t.Fatal("should error") - } -} - -func TestFileGetter_dir(t *testing.T) { - g := new(FileGetter) - dst := tempDir(t) - - if err := os.MkdirAll(dst, 0755); err != nil { - t.Fatalf("err: %s", err) - } - - // With a dir that exists that isn't a symlink - if err := g.Get(dst, testModuleURL("basic")); err == nil { - t.Fatal("should error") - } -} - -func TestFileGetter_dirSymlink(t *testing.T) { - g := new(FileGetter) - dst := tempDir(t) - dst2 := tempDir(t) - - // Make parents - if err := os.MkdirAll(filepath.Dir(dst), 0755); err != nil { - t.Fatalf("err: %s", err) - } - if err := os.MkdirAll(dst2, 0755); err != nil { - t.Fatalf("err: %s", err) - } - - // Make a symlink - if err := os.Symlink(dst2, dst); err != nil { - t.Fatalf("err: %s", err) - } - - // With a dir that exists that isn't a symlink - if err := g.Get(dst, testModuleURL("basic")); err != nil { - 
t.Fatalf("err: %s", err) - } - - // Verify the main file exists - mainPath := filepath.Join(dst, "main.tf") - if _, err := os.Stat(mainPath); err != nil { - t.Fatalf("err: %s", err) - } -} diff --git a/config/module/get_git.go b/config/module/get_git.go deleted file mode 100644 index 5ab27ba0b..000000000 --- a/config/module/get_git.go +++ /dev/null @@ -1,74 +0,0 @@ -package module - -import ( - "fmt" - "net/url" - "os" - "os/exec" -) - -// GitGetter is a Getter implementation that will download a module from -// a git repository. -type GitGetter struct{} - -func (g *GitGetter) Get(dst string, u *url.URL) error { - if _, err := exec.LookPath("git"); err != nil { - return fmt.Errorf("git must be available and on the PATH") - } - - // Extract some query parameters we use - var ref string - q := u.Query() - if len(q) > 0 { - ref = q.Get("ref") - q.Del("ref") - - // Copy the URL - var newU url.URL = *u - u = &newU - u.RawQuery = q.Encode() - } - - // First: clone or update the repository - _, err := os.Stat(dst) - if err != nil && !os.IsNotExist(err) { - return err - } - if err == nil { - err = g.update(dst, u) - } else { - err = g.clone(dst, u) - } - if err != nil { - return err - } - - // Next: check out the proper tag/branch if it is specified, and checkout - if ref == "" { - return nil - } - - return g.checkout(dst, ref) -} - -func (g *GitGetter) checkout(dst string, ref string) error { - cmd := exec.Command("git", "checkout", ref) - cmd.Dir = dst - return getRunCommand(cmd) -} - -func (g *GitGetter) clone(dst string, u *url.URL) error { - cmd := exec.Command("git", "clone", u.String(), dst) - return getRunCommand(cmd) -} - -func (g *GitGetter) update(dst string, u *url.URL) error { - // We have to be on a branch to pull - if err := g.checkout(dst, "master"); err != nil { - return err - } - - cmd := exec.Command("git", "pull", "--ff-only") - cmd.Dir = dst - return getRunCommand(cmd) -} diff --git a/config/module/get_git_test.go b/config/module/get_git_test.go deleted file mode 100644 index 3885ff8e7..000000000 --- a/config/module/get_git_test.go +++ /dev/null @@ -1,143 +0,0 @@ -package module - -import ( - "os" - "os/exec" - "path/filepath" - "testing" -) - -var testHasGit bool - -func init() { - if _, err := exec.LookPath("git"); err == nil { - testHasGit = true - } -} - -func TestGitGetter_impl(t *testing.T) { - var _ Getter = new(GitGetter) -} - -func TestGitGetter(t *testing.T) { - if !testHasGit { - t.Log("git not found, skipping") - t.Skip() - } - - g := new(GitGetter) - dst := tempDir(t) - - // Git doesn't allow nested ".git" directories so we do some hackiness - // here to get around that... - moduleDir := filepath.Join(fixtureDir, "basic-git") - oldName := filepath.Join(moduleDir, "DOTgit") - newName := filepath.Join(moduleDir, ".git") - if err := os.Rename(oldName, newName); err != nil { - t.Fatalf("err: %s", err) - } - defer os.Rename(newName, oldName) - - // With a dir that doesn't exist - if err := g.Get(dst, testModuleURL("basic-git")); err != nil { - t.Fatalf("err: %s", err) - } - - // Verify the main file exists - mainPath := filepath.Join(dst, "main.tf") - if _, err := os.Stat(mainPath); err != nil { - t.Fatalf("err: %s", err) - } -} - -func TestGitGetter_branch(t *testing.T) { - if !testHasGit { - t.Log("git not found, skipping") - t.Skip() - } - - g := new(GitGetter) - dst := tempDir(t) - - // Git doesn't allow nested ".git" directories so we do some hackiness - // here to get around that... 
- moduleDir := filepath.Join(fixtureDir, "basic-git") - oldName := filepath.Join(moduleDir, "DOTgit") - newName := filepath.Join(moduleDir, ".git") - if err := os.Rename(oldName, newName); err != nil { - t.Fatalf("err: %s", err) - } - defer os.Rename(newName, oldName) - - url := testModuleURL("basic-git") - q := url.Query() - q.Add("ref", "test-branch") - url.RawQuery = q.Encode() - - if err := g.Get(dst, url); err != nil { - t.Fatalf("err: %s", err) - } - - // Verify the main file exists - mainPath := filepath.Join(dst, "main_branch.tf") - if _, err := os.Stat(mainPath); err != nil { - t.Fatalf("err: %s", err) - } - - // Get again should work - if err := g.Get(dst, url); err != nil { - t.Fatalf("err: %s", err) - } - - // Verify the main file exists - mainPath = filepath.Join(dst, "main_branch.tf") - if _, err := os.Stat(mainPath); err != nil { - t.Fatalf("err: %s", err) - } -} - -func TestGitGetter_tag(t *testing.T) { - if !testHasGit { - t.Log("git not found, skipping") - t.Skip() - } - - g := new(GitGetter) - dst := tempDir(t) - - // Git doesn't allow nested ".git" directories so we do some hackiness - // here to get around that... - moduleDir := filepath.Join(fixtureDir, "basic-git") - oldName := filepath.Join(moduleDir, "DOTgit") - newName := filepath.Join(moduleDir, ".git") - if err := os.Rename(oldName, newName); err != nil { - t.Fatalf("err: %s", err) - } - defer os.Rename(newName, oldName) - - url := testModuleURL("basic-git") - q := url.Query() - q.Add("ref", "v1.0") - url.RawQuery = q.Encode() - - if err := g.Get(dst, url); err != nil { - t.Fatalf("err: %s", err) - } - - // Verify the main file exists - mainPath := filepath.Join(dst, "main_tag1.tf") - if _, err := os.Stat(mainPath); err != nil { - t.Fatalf("err: %s", err) - } - - // Get again should work - if err := g.Get(dst, url); err != nil { - t.Fatalf("err: %s", err) - } - - // Verify the main file exists - mainPath = filepath.Join(dst, "main_tag1.tf") - if _, err := os.Stat(mainPath); err != nil { - t.Fatalf("err: %s", err) - } -} diff --git a/config/module/get_hg.go b/config/module/get_hg.go deleted file mode 100644 index f74c14093..000000000 --- a/config/module/get_hg.go +++ /dev/null @@ -1,89 +0,0 @@ -package module - -import ( - "fmt" - "net/url" - "os" - "os/exec" - "runtime" - - urlhelper "github.com/hashicorp/terraform/helper/url" -) - -// HgGetter is a Getter implementation that will download a module from -// a Mercurial repository. 
-type HgGetter struct{} - -func (g *HgGetter) Get(dst string, u *url.URL) error { - if _, err := exec.LookPath("hg"); err != nil { - return fmt.Errorf("hg must be available and on the PATH") - } - - newURL, err := urlhelper.Parse(u.String()) - if err != nil { - return err - } - if fixWindowsDrivePath(newURL) { - // See valid file path form on http://www.selenic.com/hg/help/urls - newURL.Path = fmt.Sprintf("/%s", newURL.Path) - } - - // Extract some query parameters we use - var rev string - q := newURL.Query() - if len(q) > 0 { - rev = q.Get("rev") - q.Del("rev") - - newURL.RawQuery = q.Encode() - } - - _, err = os.Stat(dst) - if err != nil && !os.IsNotExist(err) { - return err - } - if err != nil { - if err := g.clone(dst, newURL); err != nil { - return err - } - } - - if err := g.pull(dst, newURL); err != nil { - return err - } - - return g.update(dst, newURL, rev) -} - -func (g *HgGetter) clone(dst string, u *url.URL) error { - cmd := exec.Command("hg", "clone", "-U", u.String(), dst) - return getRunCommand(cmd) -} - -func (g *HgGetter) pull(dst string, u *url.URL) error { - cmd := exec.Command("hg", "pull") - cmd.Dir = dst - return getRunCommand(cmd) -} - -func (g *HgGetter) update(dst string, u *url.URL, rev string) error { - args := []string{"update"} - if rev != "" { - args = append(args, rev) - } - - cmd := exec.Command("hg", args...) - cmd.Dir = dst - return getRunCommand(cmd) -} - -func fixWindowsDrivePath(u *url.URL) bool { - // hg assumes a file:/// prefix for Windows drive letter file paths. - // (e.g. file:///c:/foo/bar) - // If the URL Path does not begin with a '/' character, the resulting URL - // path will have a file:// prefix. (e.g. file://c:/foo/bar) - // See http://www.selenic.com/hg/help/urls and the examples listed in - // http://selenic.com/repo/hg-stable/file/1265a3a71d75/mercurial/util.py#l1936 - return runtime.GOOS == "windows" && u.Scheme == "file" && - len(u.Path) > 1 && u.Path[0] != '/' && u.Path[1] == ':' -} diff --git a/config/module/get_hg_test.go b/config/module/get_hg_test.go deleted file mode 100644 index d7125bde2..000000000 --- a/config/module/get_hg_test.go +++ /dev/null @@ -1,81 +0,0 @@ -package module - -import ( - "os" - "os/exec" - "path/filepath" - "testing" -) - -var testHasHg bool - -func init() { - if _, err := exec.LookPath("hg"); err == nil { - testHasHg = true - } -} - -func TestHgGetter_impl(t *testing.T) { - var _ Getter = new(HgGetter) -} - -func TestHgGetter(t *testing.T) { - t.Parallel() - - if !testHasHg { - t.Log("hg not found, skipping") - t.Skip() - } - - g := new(HgGetter) - dst := tempDir(t) - - // With a dir that doesn't exist - if err := g.Get(dst, testModuleURL("basic-hg")); err != nil { - t.Fatalf("err: %s", err) - } - - // Verify the main file exists - mainPath := filepath.Join(dst, "main.tf") - if _, err := os.Stat(mainPath); err != nil { - t.Fatalf("err: %s", err) - } -} - -func TestHgGetter_branch(t *testing.T) { - t.Parallel() - - if !testHasHg { - t.Log("hg not found, skipping") - t.Skip() - } - - g := new(HgGetter) - dst := tempDir(t) - - url := testModuleURL("basic-hg") - q := url.Query() - q.Add("rev", "test-branch") - url.RawQuery = q.Encode() - - if err := g.Get(dst, url); err != nil { - t.Fatalf("err: %s", err) - } - - // Verify the main file exists - mainPath := filepath.Join(dst, "main_branch.tf") - if _, err := os.Stat(mainPath); err != nil { - t.Fatalf("err: %s", err) - } - - // Get again should work - if err := g.Get(dst, url); err != nil { - t.Fatalf("err: %s", err) - } - - // Verify the main file exists - 
mainPath = filepath.Join(dst, "main_branch.tf") - if _, err := os.Stat(mainPath); err != nil { - t.Fatalf("err: %s", err) - } -} diff --git a/config/module/get_http.go b/config/module/get_http.go deleted file mode 100644 index be65d921a..000000000 --- a/config/module/get_http.go +++ /dev/null @@ -1,173 +0,0 @@ -package module - -import ( - "encoding/xml" - "fmt" - "io" - "io/ioutil" - "net/http" - "net/url" - "os" - "path/filepath" - "strings" -) - -// HttpGetter is a Getter implementation that will download a module from -// an HTTP endpoint. The protocol for downloading a module from an HTTP -// endpoing is as follows: -// -// An HTTP GET request is made to the URL with the additional GET parameter -// "terraform-get=1". This lets you handle that scenario specially if you -// wish. The response must be a 2xx. -// -// First, a header is looked for "X-Terraform-Get" which should contain -// a source URL to download. -// -// If the header is not present, then a meta tag is searched for named -// "terraform-get" and the content should be a source URL. -// -// The source URL, whether from the header or meta tag, must be a fully -// formed URL. The shorthand syntax of "github.com/foo/bar" or relative -// paths are not allowed. -type HttpGetter struct{} - -func (g *HttpGetter) Get(dst string, u *url.URL) error { - // Copy the URL so we can modify it - var newU url.URL = *u - u = &newU - - // Add terraform-get to the parameter. - q := u.Query() - q.Add("terraform-get", "1") - u.RawQuery = q.Encode() - - // Get the URL - resp, err := http.Get(u.String()) - if err != nil { - return err - } - defer resp.Body.Close() - if resp.StatusCode < 200 || resp.StatusCode >= 300 { - return fmt.Errorf("bad response code: %d", resp.StatusCode) - } - - // Extract the source URL - var source string - if v := resp.Header.Get("X-Terraform-Get"); v != "" { - source = v - } else { - source, err = g.parseMeta(resp.Body) - if err != nil { - return err - } - } - if source == "" { - return fmt.Errorf("no source URL was returned") - } - - // If there is a subdir component, then we download the root separately - // into a temporary directory, then copy over the proper subdir. - source, subDir := getDirSubdir(source) - if subDir == "" { - return Get(dst, source) - } - - // We have a subdir, time to jump some hoops - return g.getSubdir(dst, source, subDir) -} - -// getSubdir downloads the source into the destination, but with -// the proper subdir. -func (g *HttpGetter) getSubdir(dst, source, subDir string) error { - // Create a temporary directory to store the full source - td, err := ioutil.TempDir("", "tf") - if err != nil { - return err - } - defer os.RemoveAll(td) - - // Download that into the given directory - if err := Get(td, source); err != nil { - return err - } - - // Make sure the subdir path actually exists - sourcePath := filepath.Join(td, subDir) - if _, err := os.Stat(sourcePath); err != nil { - return fmt.Errorf( - "Error downloading %s: %s", source, err) - } - - // Copy the subdirectory into our actual destination. - if err := os.RemoveAll(dst); err != nil { - return err - } - - // Make the final destination - if err := os.MkdirAll(dst, 0755); err != nil { - return err - } - - return copyDir(dst, sourcePath) -} - -// parseMeta looks for the first meta tag in the given reader that -// will give us the source URL. 
-func (g *HttpGetter) parseMeta(r io.Reader) (string, error) { - d := xml.NewDecoder(r) - d.CharsetReader = charsetReader - d.Strict = false - var err error - var t xml.Token - for { - t, err = d.Token() - if err != nil { - if err == io.EOF { - err = nil - } - return "", err - } - if e, ok := t.(xml.StartElement); ok && strings.EqualFold(e.Name.Local, "body") { - return "", nil - } - if e, ok := t.(xml.EndElement); ok && strings.EqualFold(e.Name.Local, "head") { - return "", nil - } - e, ok := t.(xml.StartElement) - if !ok || !strings.EqualFold(e.Name.Local, "meta") { - continue - } - if attrValue(e.Attr, "name") != "terraform-get" { - continue - } - if f := attrValue(e.Attr, "content"); f != "" { - return f, nil - } - } -} - -// attrValue returns the attribute value for the case-insensitive key -// `name', or the empty string if nothing is found. -func attrValue(attrs []xml.Attr, name string) string { - for _, a := range attrs { - if strings.EqualFold(a.Name.Local, name) { - return a.Value - } - } - return "" -} - -// charsetReader returns a reader for the given charset. Currently -// it only supports UTF-8 and ASCII. Otherwise, it returns a meaningful -// error which is printed by go get, so the user can find why the package -// wasn't downloaded if the encoding is not supported. Note that, in -// order to reduce potential errors, ASCII is treated as UTF-8 (i.e. characters -// greater than 0x7f are not rejected). -func charsetReader(charset string, input io.Reader) (io.Reader, error) { - switch strings.ToLower(charset) { - case "ascii": - return input, nil - default: - return nil, fmt.Errorf("can't decode XML document using charset %q", charset) - } -} diff --git a/config/module/get_http_test.go b/config/module/get_http_test.go deleted file mode 100644 index 5f2590f48..000000000 --- a/config/module/get_http_test.go +++ /dev/null @@ -1,155 +0,0 @@ -package module - -import ( - "fmt" - "net" - "net/http" - "net/url" - "os" - "path/filepath" - "testing" -) - -func TestHttpGetter_impl(t *testing.T) { - var _ Getter = new(HttpGetter) -} - -func TestHttpGetter_header(t *testing.T) { - ln := testHttpServer(t) - defer ln.Close() - - g := new(HttpGetter) - dst := tempDir(t) - - var u url.URL - u.Scheme = "http" - u.Host = ln.Addr().String() - u.Path = "/header" - - // Get it! - if err := g.Get(dst, &u); err != nil { - t.Fatalf("err: %s", err) - } - - // Verify the main file exists - mainPath := filepath.Join(dst, "main.tf") - if _, err := os.Stat(mainPath); err != nil { - t.Fatalf("err: %s", err) - } -} - -func TestHttpGetter_meta(t *testing.T) { - ln := testHttpServer(t) - defer ln.Close() - - g := new(HttpGetter) - dst := tempDir(t) - - var u url.URL - u.Scheme = "http" - u.Host = ln.Addr().String() - u.Path = "/meta" - - // Get it! - if err := g.Get(dst, &u); err != nil { - t.Fatalf("err: %s", err) - } - - // Verify the main file exists - mainPath := filepath.Join(dst, "main.tf") - if _, err := os.Stat(mainPath); err != nil { - t.Fatalf("err: %s", err) - } -} - -func TestHttpGetter_metaSubdir(t *testing.T) { - ln := testHttpServer(t) - defer ln.Close() - - g := new(HttpGetter) - dst := tempDir(t) - - var u url.URL - u.Scheme = "http" - u.Host = ln.Addr().String() - u.Path = "/meta-subdir" - - // Get it! 
- if err := g.Get(dst, &u); err != nil { - t.Fatalf("err: %s", err) - } - - // Verify the main file exists - mainPath := filepath.Join(dst, "sub.tf") - if _, err := os.Stat(mainPath); err != nil { - t.Fatalf("err: %s", err) - } -} - -func TestHttpGetter_none(t *testing.T) { - ln := testHttpServer(t) - defer ln.Close() - - g := new(HttpGetter) - dst := tempDir(t) - - var u url.URL - u.Scheme = "http" - u.Host = ln.Addr().String() - u.Path = "/none" - - // Get it! - if err := g.Get(dst, &u); err == nil { - t.Fatal("should error") - } -} - -func testHttpServer(t *testing.T) net.Listener { - ln, err := net.Listen("tcp", "127.0.0.1:0") - if err != nil { - t.Fatalf("err: %s", err) - } - - mux := http.NewServeMux() - mux.HandleFunc("/header", testHttpHandlerHeader) - mux.HandleFunc("/meta", testHttpHandlerMeta) - mux.HandleFunc("/meta-subdir", testHttpHandlerMetaSubdir) - - var server http.Server - server.Handler = mux - go server.Serve(ln) - - return ln -} - -func testHttpHandlerHeader(w http.ResponseWriter, r *http.Request) { - w.Header().Add("X-Terraform-Get", testModuleURL("basic").String()) - w.WriteHeader(200) -} - -func testHttpHandlerMeta(w http.ResponseWriter, r *http.Request) { - w.Write([]byte(fmt.Sprintf(testHttpMetaStr, testModuleURL("basic").String()))) -} - -func testHttpHandlerMetaSubdir(w http.ResponseWriter, r *http.Request) { - w.Write([]byte(fmt.Sprintf(testHttpMetaStr, testModuleURL("basic//subdir").String()))) -} - -func testHttpHandlerNone(w http.ResponseWriter, r *http.Request) { - w.Write([]byte(testHttpNoneStr)) -} - -const testHttpMetaStr = ` -<html> -<head> -<meta name="terraform-get" content="%s"> -</head> -</html> -` - -const testHttpNoneStr = ` -<html> -<head> -</head> -</html> -` diff --git a/config/module/get_test.go b/config/module/get_test.go deleted file mode 100644 index b403c835c..000000000 --- a/config/module/get_test.go +++ /dev/null @@ -1,128 +0,0 @@ -package module - -import ( - "os" - "path/filepath" - "strings" - "testing" -) - -func TestGet_badSchema(t *testing.T) { - dst := tempDir(t) - u := testModule("basic") - u = strings.Replace(u, "file", "nope", -1) - - if err := Get(dst, u); err == nil { - t.Fatal("should error") - } -} - -func TestGet_file(t *testing.T) { - dst := tempDir(t) - u := testModule("basic") - - if err := Get(dst, u); err != nil { - t.Fatalf("err: %s", err) - } - - mainPath := filepath.Join(dst, "main.tf") - if _, err := os.Stat(mainPath); err != nil { - t.Fatalf("err: %s", err) - } -} - -func TestGet_fileForced(t *testing.T) { - dst := tempDir(t) - u := testModule("basic") - u = "file::" + u - - if err := Get(dst, u); err != nil { - t.Fatalf("err: %s", err) - } - - mainPath := filepath.Join(dst, "main.tf") - if _, err := os.Stat(mainPath); err != nil { - t.Fatalf("err: %s", err) - } -} - -func TestGet_fileSubdir(t *testing.T) { - dst := tempDir(t) - u := testModule("basic//subdir") - - if err := Get(dst, u); err != nil { - t.Fatalf("err: %s", err) - } - - mainPath := filepath.Join(dst, "sub.tf") - if _, err := os.Stat(mainPath); err != nil { - t.Fatalf("err: %s", err) - } -} - -func TestGetCopy_dot(t *testing.T) { - dst := tempDir(t) - u := testModule("basic-dot") - - if err := GetCopy(dst, u); err != nil { - t.Fatalf("err: %s", err) - } - - mainPath := filepath.Join(dst, "main.tf") - if _, err := os.Stat(mainPath); err != nil { - t.Fatalf("err: %s", err) - } - - mainPath = filepath.Join(dst, "foo.tf") - if _, err := os.Stat(mainPath); err == nil { - t.Fatal("should not have foo.tf") - } -} - -func TestGetCopy_file(t *testing.T) { - dst := tempDir(t) - u := testModule("basic") - - if err := GetCopy(dst, u); err != nil { - 
t.Fatalf("err: %s", err) - } - - mainPath := filepath.Join(dst, "main.tf") - if _, err := os.Stat(mainPath); err != nil { - t.Fatalf("err: %s", err) - } -} - -func TestGetDirSubdir(t *testing.T) { - cases := []struct { - Input string - Dir, Sub string - }{ - { - "hashicorp.com", - "hashicorp.com", "", - }, - { - "hashicorp.com//foo", - "hashicorp.com", "foo", - }, - { - "hashicorp.com//foo?bar=baz", - "hashicorp.com?bar=baz", "foo", - }, - { - "file://foo//bar", - "file://foo", "bar", - }, - } - - for i, tc := range cases { - adir, asub := getDirSubdir(tc.Input) - if adir != tc.Dir { - t.Fatalf("%d: bad dir: %#v", i, adir) - } - if asub != tc.Sub { - t.Fatalf("%d: bad sub: %#v", i, asub) - } - } -} diff --git a/config/module/module_test.go b/config/module/module_test.go index f1517e480..89fee6ec5 100644 --- a/config/module/module_test.go +++ b/config/module/module_test.go @@ -2,13 +2,12 @@ package module import ( "io/ioutil" - "net/url" "os" "path/filepath" "testing" + "github.com/hashicorp/go-getter" "github.com/hashicorp/terraform/config" - urlhelper "github.com/hashicorp/terraform/helper/url" ) const fixtureDir = "./test-fixtures" @@ -34,24 +33,6 @@ func testConfig(t *testing.T, n string) *config.Config { return c } -func testModule(n string) string { - p := filepath.Join(fixtureDir, n) - p, err := filepath.Abs(p) - if err != nil { - panic(err) - } - return fmtFileURL(p) -} - -func testModuleURL(n string) *url.URL { - u, err := urlhelper.Parse(testModule(n)) - if err != nil { - panic(err) - } - - return u -} - -func testStorage(t *testing.T) Storage { - return &FolderStorage{StorageDir: tempDir(t)} +func testStorage(t *testing.T) getter.Storage { + return &getter.FolderStorage{StorageDir: tempDir(t)} } diff --git a/config/module/storage.go b/config/module/storage.go deleted file mode 100644 index 9c752f630..000000000 --- a/config/module/storage.go +++ /dev/null @@ -1,25 +0,0 @@ -package module - -// Storage is an interface that knows how to lookup downloaded modules -// as well as download and update modules from their sources into the -// proper location. -type Storage interface { - // Dir returns the directory on local disk where the modulue source - // can be loaded from. - Dir(string) (string, bool, error) - - // Get will download and optionally update the given module. - Get(string, string, bool) error -} - -func getStorage(s Storage, key string, src string, mode GetMode) (string, bool, error) { - // Get the module with the level specified if we were told to. - if mode > GetModeNone { - if err := s.Get(key, src, mode == GetModeUpdate); err != nil { - return "", false, err - } - } - - // Get the directory where the module is. - return s.Dir(key) -} diff --git a/config/module/tree.go b/config/module/tree.go index d7b3ac966..6a75c19c2 100644 --- a/config/module/tree.go +++ b/config/module/tree.go @@ -8,6 +8,7 @@ import ( "strings" "sync" + "github.com/hashicorp/go-getter" "github.com/hashicorp/terraform/config" ) @@ -27,25 +28,6 @@ type Tree struct { lock sync.RWMutex } -// GetMode is an enum that describes how modules are loaded. -// -// GetModeLoad says that modules will not be downloaded or updated, they will -// only be loaded from the storage. -// -// GetModeGet says that modules can be initially downloaded if they don't -// exist, but otherwise to just load from the current version in storage. -// -// GetModeUpdate says that modules should be checked for updates and -// downloaded prior to loading. 
If there are no updates, we load the version -// from disk, otherwise we download first and then load. -type GetMode byte - -const ( - GetModeNone GetMode = iota - GetModeGet - GetModeUpdate -) - // NewTree returns a new Tree for the given config structure. func NewTree(name string, c *config.Config) *Tree { return &Tree{config: c, name: name} @@ -136,7 +118,7 @@ func (t *Tree) Name() string { // module trees inherently require the configuration to be in a reasonably // sane state: no circular dependencies, proper module sources, etc. A full // suite of validations can be done by running Validate (after loading). -func (t *Tree) Load(s Storage, mode GetMode) error { +func (t *Tree) Load(s getter.Storage, mode GetMode) error { t.lock.Lock() defer t.lock.Unlock() @@ -159,15 +141,15 @@ func (t *Tree) Load(s Storage, mode GetMode) error { path = append(path, m.Name) // Split out the subdir if we have one - source, subDir := getDirSubdir(m.Source) + source, subDir := getter.SourceDirSubdir(m.Source) - source, err := Detect(source, t.config.Dir) + source, err := getter.Detect(source, t.config.Dir, getter.Detectors) if err != nil { return fmt.Errorf("module %s: %s", m.Name, err) } // Check if the detector introduced something new. - source, subDir2 := getDirSubdir(source) + source, subDir2 := getter.SourceDirSubdir(source) if subDir2 != "" { subDir = filepath.Join(subDir2, subDir) } diff --git a/config/string_list.go b/config/string_list.go index 70d43d1e4..e3caea70b 100644 --- a/config/string_list.go +++ b/config/string_list.go @@ -24,6 +24,20 @@ type StringList string // ["", ""] => SLDSLDSLD const stringListDelim = `B780FFEC-B661-4EB8-9236-A01737AD98B6` +// Compact returns a copy of the StringList with all empty strings removed +func (sl StringList) Compact() StringList { + parts := sl.Slice() + + newlist := []string{} + // drop the empty strings + for i := range parts { + if parts[i] != "" { + newlist = append(newlist, parts[i]) + } + } + return NewStringList(newlist) +} + // Build a StringList from a slice func NewStringList(parts []string) StringList { // We have to special case the empty list representation @@ -55,11 +69,10 @@ func (sl StringList) Length() int { func (sl StringList) Slice() []string { parts := strings.Split(string(sl), stringListDelim) - switch len(parts) { - case 0, 1: + // splitting an empty StringList yields two parts, since there is + // always at least one delimiter + if len(parts) <= 2 { return []string{} - case 2: - return []string{""} } // strip empty elements generated by leading and trailing delimiters diff --git a/config/string_list_test.go b/config/string_list_test.go index 64049eb50..3fe57dfe2 100644 --- a/config/string_list_test.go +++ b/config/string_list_test.go @@ -27,3 +27,26 @@ func TestStringList_element(t *testing.T) { list, expected, actual) } } + +func TestStringList_empty_slice(t *testing.T) { + expected := []string{} + l := NewStringList(expected) + actual := l.Slice() + + if !reflect.DeepEqual(expected, actual) { + t.Fatalf("Expected %q, got %q", expected, actual) + } +} + +func TestStringList_empty_slice_length(t *testing.T) { + list := []string{} + l := NewStringList([]string{}) + actual := l.Length() + + expected := 0 + + if actual != expected { + t.Fatalf("Expected length of %q to be %d, got %d", + list, expected, actual) + } +} diff --git a/config/test-fixtures/ignore-changes.tf b/config/test-fixtures/ignore-changes.tf new file mode 100644 index 000000000..765a05798 --- /dev/null +++ b/config/test-fixtures/ignore-changes.tf @@ -0,0 +1,17 @@ 
+resource "aws_instance" "web" { + ami = "foo" + lifecycle { + ignore_changes = ["ami"] + } +} + +resource "aws_instance" "bar" { + ami = "foo" + lifecycle { + ignore_changes = [] + } +} + +resource "aws_instance" "baz" { + ami = "foo" +} diff --git a/config_unix.go b/config_unix.go index c51ea5ec4..69d76278a 100644 --- a/config_unix.go +++ b/config_unix.go @@ -33,7 +33,7 @@ func configDir() (string, error) { func homeDir() (string, error) { // First prefer the HOME environmental variable if home := os.Getenv("HOME"); home != "" { - log.Printf("Detected home directory from env var: %s", home) + log.Printf("[DEBUG] Detected home directory from env var: %s", home) return home, nil } diff --git a/dag/graph.go b/dag/graph.go index 41d766f40..2572096ed 100644 --- a/dag/graph.go +++ b/dag/graph.go @@ -11,8 +11,8 @@ import ( type Graph struct { vertices *Set edges *Set - downEdges map[Vertex]*Set - upEdges map[Vertex]*Set + downEdges map[interface{}]*Set + upEdges map[interface{}]*Set once sync.Once } @@ -110,10 +110,10 @@ func (g *Graph) RemoveEdge(edge Edge) { g.edges.Delete(edge) // Delete the up/down edges - if s, ok := g.downEdges[edge.Source()]; ok { + if s, ok := g.downEdges[hashcode(edge.Source())]; ok { s.Delete(edge.Target()) } - if s, ok := g.upEdges[edge.Target()]; ok { + if s, ok := g.upEdges[hashcode(edge.Target())]; ok { s.Delete(edge.Source()) } } @@ -121,13 +121,13 @@ func (g *Graph) RemoveEdge(edge Edge) { // DownEdges returns the outward edges from the source Vertex v. func (g *Graph) DownEdges(v Vertex) *Set { g.once.Do(g.init) - return g.downEdges[v] + return g.downEdges[hashcode(v)] } // UpEdges returns the inward edges to the destination Vertex v. func (g *Graph) UpEdges(v Vertex) *Set { g.once.Do(g.init) - return g.upEdges[v] + return g.upEdges[hashcode(v)] } // Connect adds an edge with the given source and target. This is safe to @@ -139,9 +139,11 @@ func (g *Graph) Connect(edge Edge) { source := edge.Source() target := edge.Target() + sourceCode := hashcode(source) + targetCode := hashcode(target) // Do we have this already? If so, don't add it again. - if s, ok := g.downEdges[source]; ok && s.Include(target) { + if s, ok := g.downEdges[sourceCode]; ok && s.Include(target) { return } @@ -149,18 +151,18 @@ func (g *Graph) Connect(edge Edge) { g.edges.Add(edge) // Add the down edge - s, ok := g.downEdges[source] + s, ok := g.downEdges[sourceCode] if !ok { s = new(Set) - g.downEdges[source] = s + g.downEdges[sourceCode] = s } s.Add(target) // Add the up edge - s, ok = g.upEdges[target] + s, ok = g.upEdges[targetCode] if !ok { s = new(Set) - g.upEdges[target] = s + g.upEdges[targetCode] = s } s.Add(source) } @@ -184,7 +186,7 @@ func (g *Graph) String() string { // Write each node in order... for _, name := range names { v := mapping[name] - targets := g.downEdges[v] + targets := g.downEdges[hashcode(v)] buf.WriteString(fmt.Sprintf("%s\n", name)) @@ -207,8 +209,8 @@ func (g *Graph) String() string { func (g *Graph) init() { g.vertices = new(Set) g.edges = new(Set) - g.downEdges = make(map[Vertex]*Set) - g.upEdges = make(map[Vertex]*Set) + g.downEdges = make(map[interface{}]*Set) + g.upEdges = make(map[interface{}]*Set) } // VertexName returns the name of a vertex. 
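The dag/graph.go hunk above keys downEdges and upEdges by hashcode(v) instead of by the Vertex value itself, so a vertex that implements the package's Hashable interface is identified by its declared hash rather than by Go value identity. A minimal standalone sketch of that contract follows; it is illustrative only and not part of the patch, with Hashable and hashcode mirroring the names in dag/set.go:

package main

import "fmt"

// Hashable mirrors dag.Hashable: a vertex may declare its own identity.
type Hashable interface {
	Hashcode() interface{}
}

// hashcode mirrors the helper added in dag/set.go: prefer the declared
// hash, fall back to the value itself for plain comparable vertices.
func hashcode(v interface{}) interface{} {
	if h, ok := v.(Hashable); ok {
		return h.Hashcode()
	}
	return v
}

type vertex struct{ code int }

func (v *vertex) Hashcode() interface{} { return v.code }

func main() {
	a, b := &vertex{code: 1}, &vertex{code: 1} // distinct pointers, same identity
	edges := map[interface{}]string{}
	edges[hashcode(a)] = "down-edge set for vertex 1"

	// Keyed by the raw pointer (the old map[Vertex]*Set behavior), this
	// lookup would miss; keyed by hashcode, it finds a's entry.
	fmt.Println(edges[hashcode(b)])
}

TestGraph_hashcode in the next hunk exercises this same behavior against the real Graph type.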
diff --git a/dag/graph_test.go b/dag/graph_test.go index 8dd05e95e..eb3e40b3a 100644 --- a/dag/graph_test.go +++ b/dag/graph_test.go @@ -1,6 +1,7 @@ package dag import ( + "fmt" "strings" "testing" ) @@ -79,6 +80,36 @@ func TestGraph_replaceSelf(t *testing.T) { } } +// This tests that connecting edges works based on custom Hashcode +// implementations for uniqueness. +func TestGraph_hashcode(t *testing.T) { + var g Graph + g.Add(&hashVertex{code: 1}) + g.Add(&hashVertex{code: 2}) + g.Add(&hashVertex{code: 3}) + g.Connect(BasicEdge( + &hashVertex{code: 1}, + &hashVertex{code: 3})) + + actual := strings.TrimSpace(g.String()) + expected := strings.TrimSpace(testGraphBasicStr) + if actual != expected { + t.Fatalf("bad: %s", actual) + } +} + +type hashVertex struct { + code interface{} +} + +func (v *hashVertex) Hashcode() interface{} { + return v.code +} + +func (v *hashVertex) Name() string { + return fmt.Sprintf("%#v", v.code) +} + const testGraphBasicStr = ` 1 3 diff --git a/dag/set.go b/dag/set.go index 9cc0d98e2..d4b29226b 100644 --- a/dag/set.go +++ b/dag/set.go @@ -17,22 +17,31 @@ type Hashable interface { Hashcode() interface{} } +// hashcode returns the hashcode used for set elements. +func hashcode(v interface{}) interface{} { + if h, ok := v.(Hashable); ok { + return h.Hashcode() + } + + return v +} + // Add adds an item to the set func (s *Set) Add(v interface{}) { s.once.Do(s.init) - s.m[s.code(v)] = v + s.m[hashcode(v)] = v } // Delete removes an item from the set. func (s *Set) Delete(v interface{}) { s.once.Do(s.init) - delete(s.m, s.code(v)) + delete(s.m, hashcode(v)) } // Include returns true/false of whether a value is in the set. func (s *Set) Include(v interface{}) bool { s.once.Do(s.init) - _, ok := s.m[s.code(v)] + _, ok := s.m[hashcode(v)] return ok } @@ -73,14 +82,6 @@ func (s *Set) List() []interface{} { return r } -func (s *Set) code(v interface{}) interface{} { - if h, ok := v.(Hashable); ok { - return h.Hashcode() - } - - return v -} - func (s *Set) init() { s.m = make(map[interface{}]interface{}) } diff --git a/deps/v0-6-4.json b/deps/v0-6-4.json new file mode 100644 index 000000000..e0d17b58f --- /dev/null +++ b/deps/v0-6-4.json @@ -0,0 +1,440 @@ +{ + "ImportPath": "github.com/hashicorp/terraform", + "GoVersion": "go1.4.2", + "Packages": [ + "./..." 
+ ], + "Deps": [ + { + "ImportPath": "github.com/Azure/azure-sdk-for-go/core/http", + "Comment": "v1.2-261-g3dcabb6", + "Rev": "3dcabb61c225af4013db7af20d4fe430fd09e311" + }, + { + "ImportPath": "github.com/Azure/azure-sdk-for-go/core/tls", + "Comment": "v1.2-261-g3dcabb6", + "Rev": "3dcabb61c225af4013db7af20d4fe430fd09e311" + }, + { + "ImportPath": "github.com/Azure/azure-sdk-for-go/management", + "Comment": "v1.2-261-g3dcabb6", + "Rev": "3dcabb61c225af4013db7af20d4fe430fd09e311" + }, + { + "ImportPath": "github.com/Azure/azure-sdk-for-go/storage", + "Comment": "v1.2-261-g3dcabb6", + "Rev": "3dcabb61c225af4013db7af20d4fe430fd09e311" + }, + { + "ImportPath": "github.com/apparentlymart/go-rundeck-api/rundeck", + "Comment": "v0.0.1", + "Rev": "cddcfbabbe903e9c8df35ff9569dbb8d67789200" + }, + { + "ImportPath": "github.com/armon/circbuf", + "Rev": "bbbad097214e2918d8543d5201d12bfd7bca254d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/aws", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/internal/endpoints", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/internal/protocol/ec2query", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/internal/protocol/json/jsonutil", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/internal/protocol/jsonrpc", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/internal/protocol/query", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/internal/protocol/rest", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/internal/protocol/restjson", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/internal/protocol/restxml", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/internal/protocol/xml/xmlutil", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/internal/signer/v4", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/autoscaling", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/cloudwatch", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/cloudwatchlogs", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/directoryservice", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/dynamodb", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": 
"github.com/aws/aws-sdk-go/service/ec2", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/ecs", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/efs", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/elasticache", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/elasticsearchservice", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/elb", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/glacier", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/iam", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/kinesis", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/lambda", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/opsworks", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/rds", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/route53", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/s3", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/sns", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/sqs", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/awslabs/aws-sdk-go/aws", + "Comment": "v0.9.14-3-g308eaa6", + "Rev": "308eaa65c0ddf03c701d511b7d73b3f3620452a1" + }, + { + "ImportPath": "github.com/cyberdelia/heroku-go/v3", + "Rev": "8344c6a3e281a99a693f5b71186249a8620eeb6b" + }, + { + "ImportPath": "github.com/dylanmei/iso8601", + "Rev": "2075bf119b58e5576c6ed9f867b8f3d17f2e54d4" + }, + { + "ImportPath": "github.com/dylanmei/winrmtest", + "Rev": "3e9661c52c45dab9a8528966a23d421922fca9b9" + }, + { + "ImportPath": "github.com/fsouza/go-dockerclient", + "Rev": "09604abc82243886001c3f56fd709d4ba603cead" + }, + { + "ImportPath": "github.com/hashicorp/atlas-go/archive", + "Comment": "20141209094003-77-g85a782d", + "Rev": "85a782d724b87fcd19db1c4aef9d5337a9bb7a0f" + }, + { + "ImportPath": "github.com/hashicorp/atlas-go/v1", + "Comment": "20141209094003-77-g85a782d", + "Rev": "85a782d724b87fcd19db1c4aef9d5337a9bb7a0f" + }, + { + "ImportPath": "github.com/hashicorp/consul/api", + "Comment": "v0.5.2-325-g5d9530d", + "Rev": "5d9530d7def3be989ba141382f1b9d82583418f4" + }, + { + "ImportPath": 
"github.com/hashicorp/errwrap", + "Rev": "7554cd9344cec97297fa6649b055a8c98c2a1e55" + }, + { + "ImportPath": "github.com/hashicorp/go-checkpoint", + "Rev": "528ab62f37fa83d4360e8ab2b2c425d6692ef533" + }, + { + "ImportPath": "github.com/hashicorp/go-multierror", + "Rev": "d30f09973e19c1dfcd120b2d9c4f168e68d6b5d5" + }, + { + "ImportPath": "github.com/hashicorp/go-version", + "Rev": "2b9865f60ce11e527bd1255ba82036d465570aa3" + }, + { + "ImportPath": "github.com/hashicorp/hcl", + "Rev": "4de51957ef8d4aba6e285ddfc587633bbfc7c0e8" + }, + { + "ImportPath": "github.com/hashicorp/logutils", + "Rev": "0dc08b1671f34c4250ce212759ebd880f743d883" + }, + { + "ImportPath": "github.com/hashicorp/yamux", + "Rev": "ddcd0a6ec7c55e29f235e27935bf98d302281bd3" + }, + { + "ImportPath": "github.com/imdario/mergo", + "Comment": "0.2.0-5-g61a5285", + "Rev": "61a52852277811e93e06d28e0d0c396284a7730b" + }, + { + "ImportPath": "github.com/masterzen/simplexml/dom", + "Rev": "95ba30457eb1121fa27753627c774c7cd4e90083" + }, + { + "ImportPath": "github.com/masterzen/winrm/soap", + "Rev": "b280be362a0c6af26fbaaa055924fb9c4830b006" + }, + { + "ImportPath": "github.com/masterzen/winrm/winrm", + "Rev": "b280be362a0c6af26fbaaa055924fb9c4830b006" + }, + { + "ImportPath": "github.com/masterzen/xmlpath", + "Rev": "13f4951698adc0fa9c1dda3e275d489a24201161" + }, + { + "ImportPath": "github.com/mitchellh/cli", + "Rev": "8102d0ed5ea2709ade1243798785888175f6e415" + }, + { + "ImportPath": "github.com/mitchellh/colorstring", + "Rev": "8631ce90f28644f54aeedcb3e389a85174e067d1" + }, + { + "ImportPath": "github.com/mitchellh/copystructure", + "Rev": "6fc66267e9da7d155a9d3bd489e00dad02666dc6" + }, + { + "ImportPath": "github.com/mitchellh/go-homedir", + "Rev": "df55a15e5ce646808815381b3db47a8c66ea62f4" + }, + { + "ImportPath": "github.com/mitchellh/go-linereader", + "Rev": "07bab5fdd9580500aea6ada0e09df4aa28e68abd" + }, + { + "ImportPath": "github.com/mitchellh/mapstructure", + "Rev": "281073eb9eb092240d33ef253c404f1cca550309" + }, + { + "ImportPath": "github.com/mitchellh/osext", + "Rev": "0dd3f918b21bec95ace9dc86c7e70266cfc5c702" + }, + { + "ImportPath": "github.com/mitchellh/packer/common/uuid", + "Comment": "v0.8.6-76-g88386bc", + "Rev": "88386bc9db1c850306e5c3737f14bef3a2c4050d" + }, + { + "ImportPath": "github.com/mitchellh/panicwrap", + "Rev": "1655d88c8ff7495ae9d2c19fd8f445f4657e22b0" + }, + { + "ImportPath": "github.com/mitchellh/prefixedio", + "Rev": "89d9b535996bf0a185f85b59578f2e245f9e1724" + }, + { + "ImportPath": "github.com/mitchellh/reflectwalk", + "Rev": "eecf4c70c626c7cfbb95c90195bc34d386c74ac6" + }, + { + "ImportPath": "github.com/nu7hatch/gouuid", + "Rev": "179d4d0c4d8d407a32af483c2354df1d2c91e6c3" + }, + { + "ImportPath": "github.com/packer-community/winrmcp/winrmcp", + "Rev": "743b1afe5ee3f6d5ba71a0d50673fa0ba2123d6b" + }, + { + "ImportPath": "github.com/packethost/packngo", + "Rev": "496f5c8895c06505fae527830a9e554dc65325f4" + }, + { + "ImportPath": "github.com/pborman/uuid", + "Rev": "cccd189d45f7ac3368a0d127efb7f4d08ae0b655" + }, + { + "ImportPath": "github.com/pearkes/cloudflare", + "Rev": "19e280b056f3742e535ea12ae92a37ea7767ea82" + }, + { + "ImportPath": "github.com/pearkes/digitalocean", + "Rev": "e966f00c2d9de5743e87697ab77c7278f5998914" + }, + { + "ImportPath": "github.com/pearkes/dnsimple", + "Rev": "2a807d118c9e52e94819f414a6ec0293b45cad01" + }, + { + "ImportPath": "github.com/pearkes/mailgun", + "Rev": "5b02e7e9ffee9869f81393e80db138f6ff726260" + }, + { + "ImportPath": "github.com/rackspace/gophercloud", + 
"Comment": "v1.0.0-681-g8d032cb", + "Rev": "8d032cb1e835a0018269de3d6b53bb24fc77a8c0" + }, + { + "ImportPath": "github.com/satori/go.uuid", + "Rev": "08f0718b61e95ddba0ade3346725fe0e4bf28ca6" + }, + { + "ImportPath": "github.com/soniah/dnsmadeeasy", + "Comment": "v1.1-2-g5578a8c", + "Rev": "5578a8c15e33958c61cf7db720b6181af65f4a9e" + }, + { + "ImportPath": "github.com/vaughan0/go-ini", + "Rev": "a98ad7ee00ec53921f08832bc06ecf7fd600e6a1" + }, + { + "ImportPath": "github.com/vmware/govmomi", + "Comment": "v0.2.0-28-g6037863", + "Rev": "603786323c18c13dd8b3da3d4f86b1dce4adf126" + }, + { + "ImportPath": "github.com/xanzy/go-cloudstack/cloudstack", + "Comment": "v1.2.0-48-g0e6e56f", + "Rev": "0e6e56fc0db3f48f060273f2e2ffe5d8d41b0112" + }, + { + "ImportPath": "golang.org/x/crypto/curve25519", + "Rev": "c8b9e6388ef638d5a8a9d865c634befdc46a6784" + }, + { + "ImportPath": "golang.org/x/crypto/pkcs12", + "Rev": "c8b9e6388ef638d5a8a9d865c634befdc46a6784" + }, + { + "ImportPath": "golang.org/x/crypto/ssh", + "Rev": "c8b9e6388ef638d5a8a9d865c634befdc46a6784" + }, + { + "ImportPath": "golang.org/x/net/context", + "Rev": "21c3935a8fc0f954d03e6b8a560c9600ffee38d2" + }, + { + "ImportPath": "golang.org/x/oauth2", + "Rev": "ef4eca6b097fad7cec79afcc278d213a6de1c960" + }, + { + "ImportPath": "google.golang.org/api/compute/v1", + "Rev": "e2903ca9e33d6cbaedda541d96996219056e8214" + }, + { + "ImportPath": "google.golang.org/api/container/v1", + "Rev": "e2903ca9e33d6cbaedda541d96996219056e8214" + }, + { + "ImportPath": "google.golang.org/api/dns/v1", + "Rev": "e2903ca9e33d6cbaedda541d96996219056e8214" + }, + { + "ImportPath": "google.golang.org/api/googleapi", + "Rev": "e2903ca9e33d6cbaedda541d96996219056e8214" + }, + { + "ImportPath": "google.golang.org/api/internal", + "Rev": "e2903ca9e33d6cbaedda541d96996219056e8214" + }, + { + "ImportPath": "google.golang.org/api/storage/v1", + "Rev": "e2903ca9e33d6cbaedda541d96996219056e8214" + }, + { + "ImportPath": "google.golang.org/cloud/compute/metadata", + "Rev": "4bea1598a0936d6d116506b59a8e1aa962b585c3" + }, + { + "ImportPath": "google.golang.org/cloud/internal", + "Rev": "4bea1598a0936d6d116506b59a8e1aa962b585c3" + } + ] +} diff --git a/deps/v0-6-5.json b/deps/v0-6-5.json new file mode 100644 index 000000000..71ace6175 --- /dev/null +++ b/deps/v0-6-5.json @@ -0,0 +1,476 @@ +{ + "ImportPath": "github.com/hashicorp/terraform", + "GoVersion": "go1.4.2", + "Packages": [ + "./..." 
+ ], + "Deps": [ + { + "ImportPath": "github.com/Azure/azure-sdk-for-go/core/http", + "Comment": "v1.2-261-g3dcabb6", + "Rev": "3dcabb61c225af4013db7af20d4fe430fd09e311" + }, + { + "ImportPath": "github.com/Azure/azure-sdk-for-go/core/tls", + "Comment": "v1.2-261-g3dcabb6", + "Rev": "3dcabb61c225af4013db7af20d4fe430fd09e311" + }, + { + "ImportPath": "github.com/Azure/azure-sdk-for-go/management", + "Comment": "v1.2-261-g3dcabb6", + "Rev": "3dcabb61c225af4013db7af20d4fe430fd09e311" + }, + { + "ImportPath": "github.com/Azure/azure-sdk-for-go/storage", + "Comment": "v1.2-261-g3dcabb6", + "Rev": "3dcabb61c225af4013db7af20d4fe430fd09e311" + }, + { + "ImportPath": "github.com/apparentlymart/go-rundeck-api/rundeck", + "Comment": "v0.0.1", + "Rev": "cddcfbabbe903e9c8df35ff9569dbb8d67789200" + }, + { + "ImportPath": "github.com/armon/circbuf", + "Rev": "bbbad097214e2918d8543d5201d12bfd7bca254d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/aws", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/internal/endpoints", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/internal/protocol/ec2query", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/internal/protocol/json/jsonutil", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/internal/protocol/jsonrpc", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/internal/protocol/query", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/internal/protocol/rest", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/internal/protocol/restjson", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/internal/protocol/restxml", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/internal/protocol/xml/xmlutil", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/internal/signer/v4", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/autoscaling", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/cloudwatch", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/cloudwatchlogs", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/directoryservice", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/dynamodb", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/ec2", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/ecs", + 
"Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/efs", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/elasticache", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/elasticsearchservice", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/elb", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/glacier", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/iam", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/kinesis", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/lambda", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/opsworks", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/rds", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/route53", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/s3", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/sns", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/sqs", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/awslabs/aws-sdk-go/aws", + "Comment": "v0.9.15", + "Rev": "7ab6754ddaaa7972ac1c896ddd7f796cc726e79d" + }, + { + "ImportPath": "github.com/coreos/etcd/client", + "Comment": "v2.2.0-246-g8d3ed01", + "Rev": "8d3ed0176c41a5585e040368455fe803fa95511b" + }, + { + "ImportPath": "github.com/coreos/etcd/pkg/pathutil", + "Comment": "v2.2.0-246-g8d3ed01", + "Rev": "8d3ed0176c41a5585e040368455fe803fa95511b" + }, + { + "ImportPath": "github.com/coreos/etcd/pkg/types", + "Comment": "v2.2.0-246-g8d3ed01", + "Rev": "8d3ed0176c41a5585e040368455fe803fa95511b" + }, + { + "ImportPath": "github.com/cyberdelia/heroku-go/v3", + "Rev": "8344c6a3e281a99a693f5b71186249a8620eeb6b" + }, + { + "ImportPath": "github.com/digitalocean/godo", + "Comment": "v0.9.0-2-gc03bb09", + "Rev": "c03bb099b8dc38e87581902a56885013a0865703" + }, + { + "ImportPath": "github.com/dylanmei/iso8601", + "Rev": "2075bf119b58e5576c6ed9f867b8f3d17f2e54d4" + }, + { + "ImportPath": "github.com/dylanmei/winrmtest", + "Rev": "3e9661c52c45dab9a8528966a23d421922fca9b9" + }, + { + "ImportPath": "github.com/fsouza/go-dockerclient", + "Rev": "412c004d923b7b89701e7a1632de83f843657a03" + }, + { + "ImportPath": "github.com/google/go-querystring/query", + "Rev": "547ef5ac979778feb2f760cdb5f4eae1a2207b86" + }, + { + "ImportPath": "github.com/hashicorp/atlas-go/archive", + "Comment": "20141209094003-79-gabffe75", + "Rev": 
"abffe75c7dff7f6c3344727348a95fe70c519696" + }, + { + "ImportPath": "github.com/hashicorp/atlas-go/v1", + "Comment": "20141209094003-79-gabffe75", + "Rev": "abffe75c7dff7f6c3344727348a95fe70c519696" + }, + { + "ImportPath": "github.com/hashicorp/consul/api", + "Comment": "v0.5.2-461-g158eabd", + "Rev": "158eabdd6f2408067c1d7656fa10e49434f96480" + }, + { + "ImportPath": "github.com/hashicorp/errwrap", + "Rev": "7554cd9344cec97297fa6649b055a8c98c2a1e55" + }, + { + "ImportPath": "github.com/hashicorp/go-checkpoint", + "Rev": "ee53b27929ebf0a6d217c96d2107c6c09b8bebb3" + }, + { + "ImportPath": "github.com/hashicorp/go-getter", + "Rev": "2463fe5ef95a59a4096482fb9390b5683a5c380a" + }, + { + "ImportPath": "github.com/hashicorp/go-multierror", + "Rev": "d30f09973e19c1dfcd120b2d9c4f168e68d6b5d5" + }, + { + "ImportPath": "github.com/hashicorp/go-version", + "Rev": "2b9865f60ce11e527bd1255ba82036d465570aa3" + }, + { + "ImportPath": "github.com/hashicorp/hcl", + "Rev": "4de51957ef8d4aba6e285ddfc587633bbfc7c0e8" + }, + { + "ImportPath": "github.com/hashicorp/logutils", + "Rev": "0dc08b1671f34c4250ce212759ebd880f743d883" + }, + { + "ImportPath": "github.com/hashicorp/yamux", + "Rev": "ddcd0a6ec7c55e29f235e27935bf98d302281bd3" + }, + { + "ImportPath": "github.com/imdario/mergo", + "Comment": "0.2.0-5-g61a5285", + "Rev": "61a52852277811e93e06d28e0d0c396284a7730b" + }, + { + "ImportPath": "github.com/kardianos/osext", + "Rev": "6e7f843663477789fac7c02def0d0909e969b4e5" + }, + { + "ImportPath": "github.com/masterzen/simplexml/dom", + "Rev": "95ba30457eb1121fa27753627c774c7cd4e90083" + }, + { + "ImportPath": "github.com/masterzen/winrm/soap", + "Rev": "e3e57d617b7d9573db6c98567a261916ff53cfb3" + }, + { + "ImportPath": "github.com/masterzen/winrm/winrm", + "Rev": "e3e57d617b7d9573db6c98567a261916ff53cfb3" + }, + { + "ImportPath": "github.com/masterzen/xmlpath", + "Rev": "13f4951698adc0fa9c1dda3e275d489a24201161" + }, + { + "ImportPath": "github.com/mitchellh/cli", + "Rev": "8102d0ed5ea2709ade1243798785888175f6e415" + }, + { + "ImportPath": "github.com/mitchellh/colorstring", + "Rev": "8631ce90f28644f54aeedcb3e389a85174e067d1" + }, + { + "ImportPath": "github.com/mitchellh/copystructure", + "Rev": "6fc66267e9da7d155a9d3bd489e00dad02666dc6" + }, + { + "ImportPath": "github.com/mitchellh/go-homedir", + "Rev": "df55a15e5ce646808815381b3db47a8c66ea62f4" + }, + { + "ImportPath": "github.com/mitchellh/go-linereader", + "Rev": "07bab5fdd9580500aea6ada0e09df4aa28e68abd" + }, + { + "ImportPath": "github.com/mitchellh/mapstructure", + "Rev": "281073eb9eb092240d33ef253c404f1cca550309" + }, + { + "ImportPath": "github.com/mitchellh/osext", + "Rev": "5e2d6d41470f99c881826dedd8c526728b783c9c" + }, + { + "ImportPath": "github.com/mitchellh/packer/common/uuid", + "Comment": "v0.8.6-114-gd66268f", + "Rev": "d66268f5f92dc3f785616f9d10f233ece8636e9c" + }, + { + "ImportPath": "github.com/mitchellh/panicwrap", + "Rev": "1655d88c8ff7495ae9d2c19fd8f445f4657e22b0" + }, + { + "ImportPath": "github.com/mitchellh/prefixedio", + "Rev": "89d9b535996bf0a185f85b59578f2e245f9e1724" + }, + { + "ImportPath": "github.com/mitchellh/reflectwalk", + "Rev": "eecf4c70c626c7cfbb95c90195bc34d386c74ac6" + }, + { + "ImportPath": "github.com/nu7hatch/gouuid", + "Rev": "179d4d0c4d8d407a32af483c2354df1d2c91e6c3" + }, + { + "ImportPath": "github.com/packer-community/winrmcp/winrmcp", + "Rev": "3d184cea22ee1c41ec1697e0d830ff0c78f7ea97" + }, + { + "ImportPath": "github.com/packethost/packngo", + "Rev": "f03d7dc788a8b57b62d301ccb98c950c325756f8" + }, + { + 
"ImportPath": "github.com/pborman/uuid", + "Rev": "cccd189d45f7ac3368a0d127efb7f4d08ae0b655" + }, + { + "ImportPath": "github.com/pearkes/cloudflare", + "Rev": "922f1c75017c54430fb706364d29eff10f64c56d" + }, + { + "ImportPath": "github.com/pearkes/dnsimple", + "Rev": "59fa6243d3d5ac56ab0df76be4c6da30821154b0" + }, + { + "ImportPath": "github.com/pearkes/mailgun", + "Rev": "5b02e7e9ffee9869f81393e80db138f6ff726260" + }, + { + "ImportPath": "github.com/rackspace/gophercloud", + "Comment": "v1.0.0-683-gdc139e8", + "Rev": "dc139e8a4612310304c1c71aa2b07d94ab7bdeaf" + }, + { + "ImportPath": "github.com/satori/go.uuid", + "Rev": "08f0718b61e95ddba0ade3346725fe0e4bf28ca6" + }, + { + "ImportPath": "github.com/soniah/dnsmadeeasy", + "Comment": "v1.1-2-g5578a8c", + "Rev": "5578a8c15e33958c61cf7db720b6181af65f4a9e" + }, + { + "ImportPath": "github.com/tent/http-link-go", + "Rev": "ac974c61c2f990f4115b119354b5e0b47550e888" + }, + { + "ImportPath": "github.com/ugorji/go/codec", + "Rev": "8a2a3a8c488c3ebd98f422a965260278267a0551" + }, + { + "ImportPath": "github.com/vaughan0/go-ini", + "Rev": "a98ad7ee00ec53921f08832bc06ecf7fd600e6a1" + }, + { + "ImportPath": "github.com/vmware/govmomi", + "Comment": "v0.2.0-32-gc33a28e", + "Rev": "c33a28ed780856865047dda04412c67f2d55de8e" + }, + { + "ImportPath": "github.com/xanzy/go-cloudstack/cloudstack", + "Comment": "v1.2.0-48-g0e6e56f", + "Rev": "0e6e56fc0db3f48f060273f2e2ffe5d8d41b0112" + }, + { + "ImportPath": "golang.org/x/crypto/curve25519", + "Rev": "c8b9e6388ef638d5a8a9d865c634befdc46a6784" + }, + { + "ImportPath": "golang.org/x/crypto/pkcs12", + "Rev": "c8b9e6388ef638d5a8a9d865c634befdc46a6784" + }, + { + "ImportPath": "golang.org/x/crypto/ssh", + "Rev": "c8b9e6388ef638d5a8a9d865c634befdc46a6784" + }, + { + "ImportPath": "golang.org/x/net/context", + "Rev": "9946ad7d5eae91d8edca4f54d1a1e130a052e823" + }, + { + "ImportPath": "golang.org/x/oauth2", + "Rev": "ef4eca6b097fad7cec79afcc278d213a6de1c960" + }, + { + "ImportPath": "google.golang.org/api/compute/v1", + "Rev": "c83ee8e9b7e6c40a486c0992a963ea8b6911de67" + }, + { + "ImportPath": "google.golang.org/api/container/v1", + "Rev": "c83ee8e9b7e6c40a486c0992a963ea8b6911de67" + }, + { + "ImportPath": "google.golang.org/api/dns/v1", + "Rev": "c83ee8e9b7e6c40a486c0992a963ea8b6911de67" + }, + { + "ImportPath": "google.golang.org/api/googleapi", + "Rev": "c83ee8e9b7e6c40a486c0992a963ea8b6911de67" + }, + { + "ImportPath": "google.golang.org/api/internal", + "Rev": "c83ee8e9b7e6c40a486c0992a963ea8b6911de67" + }, + { + "ImportPath": "google.golang.org/api/storage/v1", + "Rev": "c83ee8e9b7e6c40a486c0992a963ea8b6911de67" + }, + { + "ImportPath": "google.golang.org/cloud/compute/metadata", + "Rev": "2400193c85c3561d13880d34e0e10c4315bb02af" + }, + { + "ImportPath": "google.golang.org/cloud/internal", + "Rev": "2400193c85c3561d13880d34e0e10c4315bb02af" + } + ] +} diff --git a/deps/v0-6-6.json b/deps/v0-6-6.json new file mode 100644 index 000000000..fcc90ead9 --- /dev/null +++ b/deps/v0-6-6.json @@ -0,0 +1,489 @@ +{ + "ImportPath": "github.com/hashicorp/terraform", + "GoVersion": "go1.4.2", + "Packages": [ + "./..." 
+ ], + "Deps": [ + { + "ImportPath": "github.com/Azure/azure-sdk-for-go/core/http", + "Comment": "v1.2-261-g3dcabb6", + "Rev": "3dcabb61c225af4013db7af20d4fe430fd09e311" + }, + { + "ImportPath": "github.com/Azure/azure-sdk-for-go/core/tls", + "Comment": "v1.2-261-g3dcabb6", + "Rev": "3dcabb61c225af4013db7af20d4fe430fd09e311" + }, + { + "ImportPath": "github.com/Azure/azure-sdk-for-go/management", + "Comment": "v1.2-261-g3dcabb6", + "Rev": "3dcabb61c225af4013db7af20d4fe430fd09e311" + }, + { + "ImportPath": "github.com/Azure/azure-sdk-for-go/storage", + "Comment": "v1.2-261-g3dcabb6", + "Rev": "3dcabb61c225af4013db7af20d4fe430fd09e311" + }, + { + "ImportPath": "github.com/apparentlymart/go-cidr/cidr", + "Rev": "a3ebdb999b831ecb6ab8a226e31b07b2b9061c47" + }, + { + "ImportPath": "github.com/apparentlymart/go-rundeck-api/rundeck", + "Comment": "v0.0.1", + "Rev": "cddcfbabbe903e9c8df35ff9569dbb8d67789200" + }, + { + "ImportPath": "github.com/armon/circbuf", + "Rev": "bbbad097214e2918d8543d5201d12bfd7bca254d" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/aws", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/internal/endpoints", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/internal/protocol/ec2query", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/internal/protocol/json/jsonutil", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/internal/protocol/jsonrpc", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/internal/protocol/query", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/internal/protocol/rest", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/internal/protocol/restjson", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/internal/protocol/restxml", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/internal/protocol/xml/xmlutil", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/internal/signer/v4", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/autoscaling", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/cloudwatch", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/cloudwatchlogs", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/codedeploy", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/directoryservice", + 
"Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/dynamodb", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/ec2", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/ecs", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/efs", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/elasticache", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/elasticsearchservice", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/elb", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/glacier", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/iam", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/kinesis", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/lambda", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/opsworks", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/rds", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/route53", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/s3", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/sns", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/aws/aws-sdk-go/service/sqs", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/awslabs/aws-sdk-go/aws", + "Comment": "v0.9.16-1-g66c840e", + "Rev": "66c840e9981dd121a4239fc25e33b6c1c1caa781" + }, + { + "ImportPath": "github.com/coreos/etcd/client", + "Comment": "v2.2.0-261-gae62a77", + "Rev": "ae62a77de61d70f434ed848ba48b44247cb54c94" + }, + { + "ImportPath": "github.com/coreos/etcd/pkg/pathutil", + "Comment": "v2.2.0-261-gae62a77", + "Rev": "ae62a77de61d70f434ed848ba48b44247cb54c94" + }, + { + "ImportPath": "github.com/coreos/etcd/pkg/types", + "Comment": "v2.2.0-261-gae62a77", + "Rev": "ae62a77de61d70f434ed848ba48b44247cb54c94" + }, + { + "ImportPath": "github.com/cyberdelia/heroku-go/v3", + "Rev": "8344c6a3e281a99a693f5b71186249a8620eeb6b" + }, + { + "ImportPath": "github.com/digitalocean/godo", + "Comment": "v0.9.0-2-gc03bb09", + "Rev": 
"c03bb099b8dc38e87581902a56885013a0865703" + }, + { + "ImportPath": "github.com/dylanmei/iso8601", + "Rev": "2075bf119b58e5576c6ed9f867b8f3d17f2e54d4" + }, + { + "ImportPath": "github.com/dylanmei/winrmtest", + "Rev": "3e9661c52c45dab9a8528966a23d421922fca9b9" + }, + { + "ImportPath": "github.com/fsouza/go-dockerclient", + "Rev": "44f75219dec4d25d3ac5483d38d3ada7eaf047ab" + }, + { + "ImportPath": "github.com/google/go-querystring/query", + "Rev": "547ef5ac979778feb2f760cdb5f4eae1a2207b86" + }, + { + "ImportPath": "github.com/hashicorp/atlas-go/archive", + "Comment": "20141209094003-81-g6c9afe8", + "Rev": "6c9afe8bb88099b424db07dea18f434371de8199" + }, + { + "ImportPath": "github.com/hashicorp/atlas-go/v1", + "Comment": "20141209094003-81-g6c9afe8", + "Rev": "6c9afe8bb88099b424db07dea18f434371de8199" + }, + { + "ImportPath": "github.com/hashicorp/consul/api", + "Comment": "v0.5.2-469-g6a350d5", + "Rev": "6a350d5d19a41f94e0c99a933410e8545c4e7a51" + }, + { + "ImportPath": "github.com/hashicorp/errwrap", + "Rev": "7554cd9344cec97297fa6649b055a8c98c2a1e55" + }, + { + "ImportPath": "github.com/hashicorp/go-checkpoint", + "Rev": "e4b2dc34c0f698ee04750bf2035d8b9384233e1b" + }, + { + "ImportPath": "github.com/hashicorp/go-cleanhttp", + "Rev": "5df5ddc69534f1a4697289f1dca2193fbb40213f" + }, + { + "ImportPath": "github.com/hashicorp/go-getter", + "Rev": "2463fe5ef95a59a4096482fb9390b5683a5c380a" + }, + { + "ImportPath": "github.com/hashicorp/go-multierror", + "Rev": "d30f09973e19c1dfcd120b2d9c4f168e68d6b5d5" + }, + { + "ImportPath": "github.com/hashicorp/go-version", + "Rev": "2b9865f60ce11e527bd1255ba82036d465570aa3" + }, + { + "ImportPath": "github.com/hashicorp/hcl", + "Rev": "4de51957ef8d4aba6e285ddfc587633bbfc7c0e8" + }, + { + "ImportPath": "github.com/hashicorp/logutils", + "Rev": "0dc08b1671f34c4250ce212759ebd880f743d883" + }, + { + "ImportPath": "github.com/hashicorp/yamux", + "Rev": "ddcd0a6ec7c55e29f235e27935bf98d302281bd3" + }, + { + "ImportPath": "github.com/imdario/mergo", + "Comment": "0.2.0-5-g61a5285", + "Rev": "61a52852277811e93e06d28e0d0c396284a7730b" + }, + { + "ImportPath": "github.com/kardianos/osext", + "Rev": "6e7f843663477789fac7c02def0d0909e969b4e5" + }, + { + "ImportPath": "github.com/masterzen/simplexml/dom", + "Rev": "95ba30457eb1121fa27753627c774c7cd4e90083" + }, + { + "ImportPath": "github.com/masterzen/winrm/soap", + "Rev": "e3e57d617b7d9573db6c98567a261916ff53cfb3" + }, + { + "ImportPath": "github.com/masterzen/winrm/winrm", + "Rev": "e3e57d617b7d9573db6c98567a261916ff53cfb3" + }, + { + "ImportPath": "github.com/masterzen/xmlpath", + "Rev": "13f4951698adc0fa9c1dda3e275d489a24201161" + }, + { + "ImportPath": "github.com/mitchellh/cli", + "Rev": "8102d0ed5ea2709ade1243798785888175f6e415" + }, + { + "ImportPath": "github.com/mitchellh/colorstring", + "Rev": "8631ce90f28644f54aeedcb3e389a85174e067d1" + }, + { + "ImportPath": "github.com/mitchellh/copystructure", + "Rev": "6fc66267e9da7d155a9d3bd489e00dad02666dc6" + }, + { + "ImportPath": "github.com/mitchellh/go-homedir", + "Rev": "df55a15e5ce646808815381b3db47a8c66ea62f4" + }, + { + "ImportPath": "github.com/mitchellh/go-linereader", + "Rev": "07bab5fdd9580500aea6ada0e09df4aa28e68abd" + }, + { + "ImportPath": "github.com/mitchellh/mapstructure", + "Rev": "281073eb9eb092240d33ef253c404f1cca550309" + }, + { + "ImportPath": "github.com/mitchellh/osext", + "Rev": "5e2d6d41470f99c881826dedd8c526728b783c9c" + }, + { + "ImportPath": "github.com/mitchellh/packer/common/uuid", + "Comment": "v0.8.6-128-g8e63ce1", + "Rev": 
"8e63ce13028ed6a3204d7ed210c4790ea11d7db9" + }, + { + "ImportPath": "github.com/mitchellh/panicwrap", + "Rev": "1655d88c8ff7495ae9d2c19fd8f445f4657e22b0" + }, + { + "ImportPath": "github.com/mitchellh/prefixedio", + "Rev": "89d9b535996bf0a185f85b59578f2e245f9e1724" + }, + { + "ImportPath": "github.com/mitchellh/reflectwalk", + "Rev": "eecf4c70c626c7cfbb95c90195bc34d386c74ac6" + }, + { + "ImportPath": "github.com/nu7hatch/gouuid", + "Rev": "179d4d0c4d8d407a32af483c2354df1d2c91e6c3" + }, + { + "ImportPath": "github.com/packer-community/winrmcp/winrmcp", + "Rev": "3d184cea22ee1c41ec1697e0d830ff0c78f7ea97" + }, + { + "ImportPath": "github.com/packethost/packngo", + "Rev": "f03d7dc788a8b57b62d301ccb98c950c325756f8" + }, + { + "ImportPath": "github.com/pborman/uuid", + "Rev": "cccd189d45f7ac3368a0d127efb7f4d08ae0b655" + }, + { + "ImportPath": "github.com/pearkes/cloudflare", + "Rev": "3d4cd12a4c3a7fc29b338b774f7f8b7e3d5afc2e" + }, + { + "ImportPath": "github.com/pearkes/dnsimple", + "Rev": "78996265f576c7580ff75d0cb2c606a61883ceb8" + }, + { + "ImportPath": "github.com/pearkes/mailgun", + "Rev": "b88605989c4141d22a6d874f78800399e5bb7ac2" + }, + { + "ImportPath": "github.com/rackspace/gophercloud", + "Comment": "v1.0.0-685-g63ee53d", + "Rev": "63ee53d682169b50b8dfaca88722ba19bd5b17a6" + }, + { + "ImportPath": "github.com/satori/go.uuid", + "Rev": "08f0718b61e95ddba0ade3346725fe0e4bf28ca6" + }, + { + "ImportPath": "github.com/soniah/dnsmadeeasy", + "Comment": "v1.1-2-g5578a8c", + "Rev": "5578a8c15e33958c61cf7db720b6181af65f4a9e" + }, + { + "ImportPath": "github.com/tent/http-link-go", + "Rev": "ac974c61c2f990f4115b119354b5e0b47550e888" + }, + { + "ImportPath": "github.com/ugorji/go/codec", + "Rev": "8a2a3a8c488c3ebd98f422a965260278267a0551" + }, + { + "ImportPath": "github.com/vaughan0/go-ini", + "Rev": "a98ad7ee00ec53921f08832bc06ecf7fd600e6a1" + }, + { + "ImportPath": "github.com/vmware/govmomi", + "Comment": "v0.2.0-36-g6be2410", + "Rev": "6be2410334b7be4f6f8962206e49042207f99673" + }, + { + "ImportPath": "github.com/xanzy/go-cloudstack/cloudstack", + "Comment": "v1.2.0-48-g0e6e56f", + "Rev": "0e6e56fc0db3f48f060273f2e2ffe5d8d41b0112" + }, + { + "ImportPath": "golang.org/x/crypto/curve25519", + "Rev": "c8b9e6388ef638d5a8a9d865c634befdc46a6784" + }, + { + "ImportPath": "golang.org/x/crypto/pkcs12", + "Rev": "c8b9e6388ef638d5a8a9d865c634befdc46a6784" + }, + { + "ImportPath": "golang.org/x/crypto/ssh", + "Rev": "c8b9e6388ef638d5a8a9d865c634befdc46a6784" + }, + { + "ImportPath": "golang.org/x/net/context", + "Rev": "2cba614e8ff920c60240d2677bc019af32ee04e5" + }, + { + "ImportPath": "golang.org/x/oauth2", + "Rev": "038cb4adce85ed41e285c2e7cc6221a92bfa44aa" + }, + { + "ImportPath": "google.golang.org/api/compute/v1", + "Rev": "c83ee8e9b7e6c40a486c0992a963ea8b6911de67" + }, + { + "ImportPath": "google.golang.org/api/container/v1", + "Rev": "c83ee8e9b7e6c40a486c0992a963ea8b6911de67" + }, + { + "ImportPath": "google.golang.org/api/dns/v1", + "Rev": "c83ee8e9b7e6c40a486c0992a963ea8b6911de67" + }, + { + "ImportPath": "google.golang.org/api/googleapi", + "Rev": "c83ee8e9b7e6c40a486c0992a963ea8b6911de67" + }, + { + "ImportPath": "google.golang.org/api/internal", + "Rev": "c83ee8e9b7e6c40a486c0992a963ea8b6911de67" + }, + { + "ImportPath": "google.golang.org/api/storage/v1", + "Rev": "c83ee8e9b7e6c40a486c0992a963ea8b6911de67" + }, + { + "ImportPath": "google.golang.org/cloud/compute/metadata", + "Rev": "2400193c85c3561d13880d34e0e10c4315bb02af" + }, + { + "ImportPath": "google.golang.org/cloud/internal", + 
"Rev": "2400193c85c3561d13880d34e0e10c4315bb02af" + } + ] +} diff --git a/examples/aws-rds/README.md b/examples/aws-rds/README.md index 79e9ebd02..5af928c39 100644 --- a/examples/aws-rds/README.md +++ b/examples/aws-rds/README.md @@ -10,7 +10,7 @@ If you need to use existing security groups and subnets, remove the sg.tf and su Pass the password variable through your ENV variable. -Several paraneters are externalized, review the different variables.tf files and change them to fit your needs. Carefully review the CIDR blocks, egress/ingress rules, availability zones that are very specific to your account. +Several parameters are externalized, review the different variables.tf files and change them to fit your needs. Carefully review the CIDR blocks, egress/ingress rules, availability zones that are very specific to your account. Once ready run 'terraform plan' to review. At the minimum, provide the vpc_id as input variable. diff --git a/examples/gce-vpn/vpn.tf b/examples/gce-vpn/vpn.tf index 2693c1001..23fa8a02c 100644 --- a/examples/gce-vpn/vpn.tf +++ b/examples/gce-vpn/vpn.tf @@ -98,7 +98,7 @@ resource "google_compute_forwarding_rule" "fr2_udp4500" { } # Each tunnel is responsible for encrypting and decrypting traffic exiting -# and leaving it's associated gateway +# and leaving its associated gateway resource "google_compute_vpn_tunnel" "tunnel1" { name = "tunnel1" region = "${var.region1}" diff --git a/examples/openstack-with-networking/README.md b/examples/openstack-with-networking/README.md new file mode 100644 index 000000000..2f9d381ca --- /dev/null +++ b/examples/openstack-with-networking/README.md @@ -0,0 +1,63 @@ +# Basic OpenStack architecture with networking + +This provides a template for running a simple architecture on an OpenStack +cloud. + +To simplify the example, this intentionally ignores deploying and +getting your application onto the servers. However, you could do so either via +[provisioners](https://www.terraform.io/docs/provisioners/) and a configuration +management tool, or by pre-baking configured images with +[Packer](http://www.packer.io). + +After you run `terraform apply` on this configuration, it will output the +floating IP address assigned to the instance. After your instance started, +this should respond with the default nginx web page. + +First set the required environment variables for the OpenStack provider by +sourcing the [credentials file](http://docs.openstack.org/cli-reference/content/cli_openrc.html). 
+
+```
+source openrc
+```
+
+Afterwards run with a command like this:
+
+```
+terraform apply \
+  -var 'external_gateway=c1901f39-f76e-498a-9547-c29ba45f64df' \
+  -var 'pool=public'
+```
+
+To get a list of usable floating IP pools run this command:
+
+```
+$ nova floating-ip-pool-list
++--------+
+| name   |
++--------+
+| public |
++--------+
+```
+
+To get the UUID of the external gateway run this command:
+
+```
+$ neutron net-show FLOATING_IP_POOL
++---------------------------+--------------------------------------+
+| Field                     | Value                                |
++---------------------------+--------------------------------------+
+| admin_state_up            | True                                 |
+| id                        | c1901f39-f76e-498a-9547-c29ba45f64df |
+| mtu                       | 0                                    |
+| name                      | public                               |
+| port_security_enabled     | True                                 |
+| provider:network_type     | vxlan                                |
+| provider:physical_network |                                      |
+| provider:segmentation_id  | 1092                                 |
+| router:external           | True                                 |
+| shared                    | False                                |
+| status                    | ACTIVE                               |
+| subnets                   | 42b672ae-8d51-4a18-a028-ddae7859ec4c |
+| tenant_id                 | 1bde0a49d2ff44ffb44e6339a8cefe3a     |
++---------------------------+--------------------------------------+
+```
diff --git a/examples/openstack-with-networking/main.tf b/examples/openstack-with-networking/main.tf
new file mode 100644
index 000000000..d57925263
--- /dev/null
+++ b/examples/openstack-with-networking/main.tf
@@ -0,0 +1,79 @@
+resource "openstack_compute_keypair_v2" "terraform" {
+  name = "terraform"
+  public_key = "${file("${var.ssh_key_file}.pub")}"
+}
+
+resource "openstack_networking_network_v2" "terraform" {
+  name = "terraform"
+  admin_state_up = "true"
+}
+
+resource "openstack_networking_subnet_v2" "terraform" {
+  name = "terraform"
+  network_id = "${openstack_networking_network_v2.terraform.id}"
+  cidr = "10.0.0.0/24"
+  ip_version = 4
+  dns_nameservers = ["8.8.8.8","8.8.4.4"]
+}
+
+resource "openstack_networking_router_v2" "terraform" {
+  name = "terraform"
+  admin_state_up = "true"
+  external_gateway = "${var.external_gateway}"
+}
+
+resource "openstack_networking_router_interface_v2" "terraform" {
+  router_id = "${openstack_networking_router_v2.terraform.id}"
+  subnet_id = "${openstack_networking_subnet_v2.terraform.id}"
+}
+
+resource "openstack_compute_secgroup_v2" "terraform" {
+  name = "terraform"
+  description = "Security group for the Terraform example instances"
+  rule {
+    from_port = 22
+    to_port = 22
+    ip_protocol = "tcp"
+    cidr = "0.0.0.0/0"
+  }
+  rule {
+    from_port = 80
+    to_port = 80
+    ip_protocol = "tcp"
+    cidr = "0.0.0.0/0"
+  }
+  rule {
+    from_port = -1
+    to_port = -1
+    ip_protocol = "icmp"
+    cidr = "0.0.0.0/0"
+  }
+}
+
+resource "openstack_compute_floatingip_v2" "terraform" {
+  pool = "${var.pool}"
+  depends_on = ["openstack_networking_router_interface_v2.terraform"]
+}
+
+resource "openstack_compute_instance_v2" "terraform" {
+  name = "terraform"
+  image_name = "${var.image}"
+  flavor_name = "${var.flavor}"
+  key_pair = "${openstack_compute_keypair_v2.terraform.name}"
+  security_groups = [ "${openstack_compute_secgroup_v2.terraform.name}" ]
+  floating_ip = "${openstack_compute_floatingip_v2.terraform.address}"
+  network {
+    uuid = "${openstack_networking_network_v2.terraform.id}"
+  }
+  provisioner "remote-exec" {
+    connection {
+      user = "${var.ssh_user_name}"
+      key_file = "${var.ssh_key_file}"
+    }
+    inline = [
+      "sudo apt-get -y update",
+      "sudo apt-get -y install nginx",
+      "sudo service nginx start"
+    ]
+  }
+}
diff --git a/examples/openstack-with-networking/openrc.sample b/examples/openstack-with-networking/openrc.sample
new file mode 100644
index 000000000..c9a38e0a1
---
/dev/null +++ b/examples/openstack-with-networking/openrc.sample @@ -0,0 +1,7 @@ +#!/usr/bin/env bash + +export OS_AUTH_URL=http://KEYSTONE.ENDPOINT.URL:5000/v2.0 +export OS_TENANT_NAME=YOUR_TENANT_NAME +export OS_USERNAME=YOUR_USERNAME +export OS_PASSWORD=YOUR_PASSWORD +export OS_REGION_NAME=YOUR_REGION_NAME diff --git a/examples/openstack-with-networking/outputs.tf b/examples/openstack-with-networking/outputs.tf new file mode 100644 index 000000000..42f923fe2 --- /dev/null +++ b/examples/openstack-with-networking/outputs.tf @@ -0,0 +1,3 @@ +output "address" { + value = "${openstack_compute_floatingip_v2.terraform.address}" +} diff --git a/examples/openstack-with-networking/variables.tf b/examples/openstack-with-networking/variables.tf new file mode 100644 index 000000000..3477cf67e --- /dev/null +++ b/examples/openstack-with-networking/variables.tf @@ -0,0 +1,22 @@ +variable "image" { + default = "Ubuntu 14.04" +} + +variable "flavor" { + default = "m1.small" +} + +variable "ssh_key_file" { + default = "~/.ssh/id_rsa.terraform" +} + +variable "ssh_user_name" { + default = "ubuntu" +} + +variable "external_gateway" { +} + +variable "pool" { + default = "public" +} diff --git a/helper/multierror/error.go b/helper/multierror/error.go deleted file mode 100644 index ae21e4366..000000000 --- a/helper/multierror/error.go +++ /dev/null @@ -1,54 +0,0 @@ -package multierror - -import ( - "fmt" - "strings" -) - -// Error is an error type to track multiple errors. This is used to -// accumulate errors in cases such as configuration parsing, and returning -// them as a single error. -type Error struct { - Errors []error -} - -func (e *Error) Error() string { - points := make([]string, len(e.Errors)) - for i, err := range e.Errors { - points[i] = fmt.Sprintf("* %s", err) - } - - return fmt.Sprintf( - "%d error(s) occurred:\n\n%s", - len(e.Errors), strings.Join(points, "\n")) -} - -func (e *Error) GoString() string { - return fmt.Sprintf("*%#v", *e) -} - -// ErrorAppend is a helper function that will append more errors -// onto an Error in order to create a larger multi-error. If the -// original error is not an Error, it will be turned into one. -func ErrorAppend(err error, errs ...error) *Error { - if err == nil { - err = new(Error) - } - - switch err := err.(type) { - case *Error: - if err == nil { - err = new(Error) - } - - err.Errors = append(err.Errors, errs...) 
- return err - default: - newErrs := make([]error, len(errs)+1) - newErrs[0] = err - copy(newErrs[1:], errs) - return &Error{ - Errors: newErrs, - } - } -} diff --git a/helper/multierror/error_test.go b/helper/multierror/error_test.go deleted file mode 100644 index 207c00465..000000000 --- a/helper/multierror/error_test.go +++ /dev/null @@ -1,56 +0,0 @@ -package multierror - -import ( - "errors" - "testing" -) - -func TestError_Impl(t *testing.T) { - var raw interface{} - raw = &Error{} - if _, ok := raw.(error); !ok { - t.Fatal("Error must implement error") - } -} - -func TestErrorError(t *testing.T) { - expected := `2 error(s) occurred: - -* foo -* bar` - - errors := []error{ - errors.New("foo"), - errors.New("bar"), - } - - multi := &Error{errors} - if multi.Error() != expected { - t.Fatalf("bad: %s", multi.Error()) - } -} - -func TestErrorAppend_Error(t *testing.T) { - original := &Error{ - Errors: []error{errors.New("foo")}, - } - - result := ErrorAppend(original, errors.New("bar")) - if len(result.Errors) != 2 { - t.Fatalf("wrong len: %d", len(result.Errors)) - } - - original = &Error{} - result = ErrorAppend(original, errors.New("bar")) - if len(result.Errors) != 1 { - t.Fatalf("wrong len: %d", len(result.Errors)) - } -} - -func TestErrorAppend_NonError(t *testing.T) { - original := errors.New("foo") - result := ErrorAppend(original, errors.New("bar")) - if len(result.Errors) != 2 { - t.Fatalf("wrong len: %d", len(result.Errors)) - } -} diff --git a/helper/resource/testing.go b/helper/resource/testing.go index eaa0cbf71..0b53c3c61 100644 --- a/helper/resource/testing.go +++ b/helper/resource/testing.go @@ -11,6 +11,7 @@ import ( "strings" "testing" + "github.com/hashicorp/go-getter" "github.com/hashicorp/terraform/config/module" "github.com/hashicorp/terraform/terraform" ) @@ -198,7 +199,7 @@ func testStep( } // Load the modules - modStorage := &module.FolderStorage{ + modStorage := &getter.FolderStorage{ StorageDir: filepath.Join(cfgPath, ".tfmodules"), } err = mod.Load(modStorage, module.GetModeGet) diff --git a/helper/schema/field_reader.go b/helper/schema/field_reader.go index fc2a1e090..c3a6c76fa 100644 --- a/helper/schema/field_reader.go +++ b/helper/schema/field_reader.go @@ -38,15 +38,7 @@ func (r *FieldReadResult) ValueOrZero(s *Schema) interface{} { return r.Value } - result := s.Type.Zero() - - // The zero value of a set is nil, but we want it - // to actually be an empty set object... 
- if set, ok := result.(*Set); ok && set.F == nil { - set.F = s.Set - } - - return result + return s.ZeroValue() } // addrToSchema finds the final element schema for the given address diff --git a/helper/schema/field_reader_config.go b/helper/schema/field_reader_config.go index 69b63eac7..76aeed2bd 100644 --- a/helper/schema/field_reader_config.go +++ b/helper/schema/field_reader_config.go @@ -201,7 +201,7 @@ func (r *ConfigFieldReader) readSet( address []string, schema *Schema) (FieldReadResult, map[int]int, error) { indexMap := make(map[int]int) // Create the set that will be our result - set := &Set{F: schema.Set} + set := schema.ZeroValue().(*Set) raw, err := readListField(&nestedConfigFieldReader{r}, address, schema) if err != nil { diff --git a/helper/schema/field_reader_config_test.go b/helper/schema/field_reader_config_test.go index 96028a89c..be37fcef9 100644 --- a/helper/schema/field_reader_config_test.go +++ b/helper/schema/field_reader_config_test.go @@ -122,7 +122,7 @@ func TestConfigFieldReader_DefaultHandling(t *testing.T) { Config: tc.Config, } out, err := r.ReadField(tc.Addr) - if (err != nil) != tc.Err { + if err != nil != tc.Err { t.Fatalf("%s: err: %s", name, err) } if s, ok := out.Value.(*Set); ok { @@ -192,7 +192,7 @@ func TestConfigFieldReader_ComputedMap(t *testing.T) { Config: tc.Config, } out, err := r.ReadField(tc.Addr) - if (err != nil) != tc.Err { + if err != nil != tc.Err { t.Fatalf("%s: err: %s", name, err) } if s, ok := out.Value.(*Set); ok { @@ -283,7 +283,7 @@ func TestConfigFieldReader_ComputedSet(t *testing.T) { Config: tc.Config, } out, err := r.ReadField(tc.Addr) - if (err != nil) != tc.Err { + if err != nil != tc.Err { t.Fatalf("%s: err: %s", name, err) } if s, ok := out.Value.(*Set); ok { diff --git a/helper/schema/field_reader_diff.go b/helper/schema/field_reader_diff.go index e17a6685e..dcb379436 100644 --- a/helper/schema/field_reader_diff.go +++ b/helper/schema/field_reader_diff.go @@ -141,7 +141,7 @@ func (r *DiffFieldReader) readSet( prefix := strings.Join(address, ".") + "." 
// Create the set that will be our result - set := &Set{F: schema.Set} + set := schema.ZeroValue().(*Set) // Go through the map and find all the set items for k, d := range r.Diff.Attributes { diff --git a/helper/schema/field_reader_diff_test.go b/helper/schema/field_reader_diff_test.go index 205b254f4..a763e4702 100644 --- a/helper/schema/field_reader_diff_test.go +++ b/helper/schema/field_reader_diff_test.go @@ -237,7 +237,7 @@ func TestDiffFieldReader_extra(t *testing.T) { for name, tc := range cases { out, err := r.ReadField(tc.Addr) - if (err != nil) != tc.Err { + if err != nil != tc.Err { t.Fatalf("%s: err: %s", name, err) } if s, ok := out.Value.(*Set); ok { diff --git a/helper/schema/field_reader_map.go b/helper/schema/field_reader_map.go index 6dc76c474..feb3fcc0a 100644 --- a/helper/schema/field_reader_map.go +++ b/helper/schema/field_reader_map.go @@ -105,7 +105,7 @@ func (r *MapFieldReader) readSet( } // Create the set that will be our result - set := &Set{F: schema.Set} + set := schema.ZeroValue().(*Set) // If we have an empty list, then return an empty list if countRaw.Computed || countRaw.Value.(int) == 0 { diff --git a/helper/schema/field_reader_map_test.go b/helper/schema/field_reader_map_test.go index e2d5342ee..61ffd4484 100644 --- a/helper/schema/field_reader_map_test.go +++ b/helper/schema/field_reader_map_test.go @@ -86,7 +86,7 @@ func TestMapFieldReader_extra(t *testing.T) { for name, tc := range cases { out, err := r.ReadField(tc.Addr) - if (err != nil) != tc.OutErr { + if err != nil != tc.OutErr { t.Fatalf("%s: err: %s", name, err) } if out.Computed != tc.OutComputed { diff --git a/helper/schema/field_reader_test.go b/helper/schema/field_reader_test.go index 7d4690762..c61fb8eb7 100644 --- a/helper/schema/field_reader_test.go +++ b/helper/schema/field_reader_test.go @@ -387,7 +387,7 @@ func testFieldReader(t *testing.T, f func(map[string]*Schema) FieldReader) { for name, tc := range cases { r := f(schema) out, err := r.ReadField(tc.Addr) - if (err != nil) != tc.Err { + if err != nil != tc.Err { t.Fatalf("%s: err: %s", name, err) } if s, ok := out.Value.(*Set); ok { diff --git a/helper/schema/field_writer_map_test.go b/helper/schema/field_writer_map_test.go index 8cf8100f2..3f54f8303 100644 --- a/helper/schema/field_writer_map_test.go +++ b/helper/schema/field_writer_map_test.go @@ -242,7 +242,7 @@ func TestMapFieldWriter(t *testing.T) { for name, tc := range cases { w := &MapFieldWriter{Schema: schema} err := w.WriteField(tc.Addr, tc.Value) - if (err != nil) != tc.Err { + if err != nil != tc.Err { t.Fatalf("%s: err: %s", name, err) } diff --git a/helper/schema/provider_test.go b/helper/schema/provider_test.go index e1f4b93b0..5701520a2 100644 --- a/helper/schema/provider_test.go +++ b/helper/schema/provider_test.go @@ -79,7 +79,7 @@ func TestProviderConfigure(t *testing.T) { } err = tc.P.Configure(terraform.NewResourceConfig(c)) - if (err != nil) != tc.Err { + if err != nil != tc.Err { t.Fatalf("%d: %s", i, err) } } @@ -141,7 +141,7 @@ func TestProviderValidate(t *testing.T) { } _, es := tc.P.Validate(terraform.NewResourceConfig(c)) - if (len(es) > 0) != tc.Err { + if len(es) > 0 != tc.Err { t.Fatalf("%d: %#v", i, es) } } @@ -180,7 +180,7 @@ func TestProviderValidateResource(t *testing.T) { } _, es := tc.P.ValidateResource(tc.Type, terraform.NewResourceConfig(c)) - if (len(es) > 0) != tc.Err { + if len(es) > 0 != tc.Err { t.Fatalf("%d: %#v", i, es) } } diff --git a/helper/schema/resource.go b/helper/schema/resource.go index 571fe18a6..a7b8cfe1e 100644 --- 
a/helper/schema/resource.go +++ b/helper/schema/resource.go @@ -244,7 +244,20 @@ func (r *Resource) InternalValidate(topSchemaMap schemaMap) error { return fmt.Errorf( "No Update defined, must set ForceNew on: %#v", nonForceNewAttrs) } + } else { + nonUpdateableAttrs := make([]string, 0) + for k, v := range r.Schema { + if v.ForceNew || v.Computed && !v.Optional { + nonUpdateableAttrs = append(nonUpdateableAttrs, k) + } + } + updateableAttrs := len(r.Schema) - len(nonUpdateableAttrs) + if updateableAttrs == 0 { + return fmt.Errorf( + "All fields are ForceNew or Computed w/out Optional, Update is superfluous") + } } + tsm = schemaMap(r.Schema) } diff --git a/helper/schema/resource_data_test.go b/helper/schema/resource_data_test.go index 95479cfbf..dc62a8a19 100644 --- a/helper/schema/resource_data_test.go +++ b/helper/schema/resource_data_test.go @@ -1736,7 +1736,7 @@ func TestResourceDataSet(t *testing.T) { } err = d.Set(tc.Key, tc.Value) - if (err != nil) != tc.Err { + if err != nil != tc.Err { t.Fatalf("%d err: %s", i, err) } diff --git a/helper/schema/resource_test.go b/helper/schema/resource_test.go index e35979eb2..ecfede51b 100644 --- a/helper/schema/resource_test.go +++ b/helper/schema/resource_test.go @@ -335,11 +335,41 @@ func TestResourceInternalValidate(t *testing.T) { }, true, }, + + // Update undefined for non-ForceNew field + { + &Resource{ + Create: func(d *ResourceData, meta interface{}) error { return nil }, + Schema: map[string]*Schema{ + "boo": &Schema{ + Type: TypeInt, + Optional: true, + }, + }, + }, + true, + }, + + // Update defined for ForceNew field + { + &Resource{ + Create: func(d *ResourceData, meta interface{}) error { return nil }, + Update: func(d *ResourceData, meta interface{}) error { return nil }, + Schema: map[string]*Schema{ + "goo": &Schema{ + Type: TypeInt, + Optional: true, + ForceNew: true, + }, + }, + }, + true, + }, } for i, tc := range cases { err := tc.In.InternalValidate(schemaMap{}) - if (err != nil) != tc.Err { + if err != nil != tc.Err { t.Fatalf("%d: bad: %s", i, err) } } @@ -555,7 +585,7 @@ func TestResourceRefresh_needsMigration(t *testing.T) { if err != nil { t.Fatalf("err: %#v", err) } - s.Attributes["newfoo"] = strconv.Itoa((int(oldfoo * 10))) + s.Attributes["newfoo"] = strconv.Itoa(int(oldfoo * 10)) delete(s.Attributes, "oldfoo") return s, nil diff --git a/helper/schema/schema.go b/helper/schema/schema.go index 59a3260fb..8ed813526 100644 --- a/helper/schema/schema.go +++ b/helper/schema/schema.go @@ -207,6 +207,30 @@ func (s *Schema) DefaultValue() (interface{}, error) { return nil, nil } +// Returns a zero value for the schema. +func (s *Schema) ZeroValue() interface{} { + // If it's a set then we'll do a bit of extra work to provide the + // right hashing function in our empty value. 
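The new `else` branch in `Resource.InternalValidate` above rejects resources that define `Update` even though no field could ever be updated in place, which the two test cases added to resource_test.go exercise. A standalone mirror of the check; the `field` type is an illustrative stand-in for the relevant bits of `*Schema`:

```go
package main

import "fmt"

// field mirrors the three flags the new check inspects on *Schema.
type field struct {
	ForceNew bool
	Computed bool
	Optional bool
}

// updateIsSuperfluous mirrors the added logic: a field is non-updateable
// if it is ForceNew, or Computed without Optional; if every field is
// non-updateable, defining Update is an error.
func updateIsSuperfluous(s map[string]field) bool {
	nonUpdateable := 0
	for _, f := range s {
		if f.ForceNew || (f.Computed && !f.Optional) {
			nonUpdateable++
		}
	}
	return len(s)-nonUpdateable == 0
}

func main() {
	s := map[string]field{
		"ami": {ForceNew: true},
		"arn": {Computed: true},
	}
	fmt.Println(updateIsSuperfluous(s)) // true: an Update func would be rejected
}
```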
+ if s.Type == TypeSet { + setFunc := s.Set + if setFunc == nil { + // Default set function uses the schema to hash the whole value + elem := s.Elem + switch t := elem.(type) { + case *Schema: + setFunc = HashSchema(t) + case *Resource: + setFunc = HashResource(t) + default: + panic("invalid set element type") + } + } + return &Set{F: setFunc} + } else { + return s.Type.Zero() + } +} + func (s *Schema) finalizeDiff( d *terraform.ResourceAttrDiff) *terraform.ResourceAttrDiff { if d == nil { @@ -496,10 +520,8 @@ func (m schemaMap) InternalValidate(topSchemaMap schemaMap) error { return fmt.Errorf("%s: Default is not valid for lists or sets", k) } - if v.Type == TypeList && v.Set != nil { + if v.Type != TypeSet && v.Set != nil { return fmt.Errorf("%s: Set can only be set for TypeSet", k) - } else if v.Type == TypeSet && v.Set == nil { - return fmt.Errorf("%s: Set must be set", k) } switch t := v.Elem.(type) { @@ -518,8 +540,8 @@ func (m schemaMap) InternalValidate(topSchemaMap schemaMap) error { if v.ValidateFunc != nil { switch v.Type { - case TypeList, TypeSet, TypeMap: - return fmt.Errorf("ValidateFunc is only supported on primitives.") + case TypeList, TypeSet: + return fmt.Errorf("ValidateFunc is not yet supported on lists or sets.") } } } @@ -782,10 +804,10 @@ func (m schemaMap) diffSet( } if o == nil { - o = &Set{F: schema.Set} + o = schema.ZeroValue().(*Set) } if n == nil { - n = &Set{F: schema.Set} + n = schema.ZeroValue().(*Set) } os := o.(*Set) ns := n.(*Set) @@ -805,7 +827,7 @@ func (m schemaMap) diffSet( newStr := strconv.Itoa(newLen) // If the set computed then say that the # is computed - if computedSet || (schema.Computed && !nSet) { + if computedSet || schema.Computed && !nSet { // If # already exists, equals 0 and no new set is supplied, there // is nothing to record in the diff count, ok := d.GetOk(k + ".#") @@ -1096,6 +1118,17 @@ func (m schemaMap) validateMap( } } + if schema.ValidateFunc != nil { + validatableMap := make(map[string]interface{}) + for _, raw := range raws { + for k, v := range raw.(map[string]interface{}) { + validatableMap[k] = v + } + } + + return schema.ValidateFunc(validatableMap, k) + } + return nil, nil } @@ -1145,8 +1178,25 @@ func (m schemaMap) validatePrimitive( raw interface{}, schema *Schema, c *terraform.ResourceConfig) ([]string, []error) { + + // Catch if the user gave a complex type where a primitive was + // expected, so we can return a friendly error message that + // doesn't contain Go type system terminology. + switch reflect.ValueOf(raw).Type().Kind() { + case reflect.Slice: + return nil, []error{ + fmt.Errorf("%s must be a single value, not a list", k), + } + case reflect.Map: + return nil, []error{ + fmt.Errorf("%s must be a single value, not a map", k), + } + default: // ok + } + if c.IsComputed(k) { - // If the key is being computed, then it is not an error + // If the key is being computed, then it is not an error as + // long as it's not a slice or map. 
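The `reflect` guard added to `validatePrimitive` above produces the friendlier "must be a single value" messages that the new schema_test.go cases below assert on. The same check in isolation:

```go
package main

import (
	"fmt"
	"reflect"
)

// checkPrimitive mirrors the new guard: reject slices and maps before any
// other validation so users never see Go type-system terminology.
func checkPrimitive(k string, raw interface{}) error {
	switch reflect.ValueOf(raw).Kind() {
	case reflect.Slice:
		return fmt.Errorf("%s must be a single value, not a list", k)
	case reflect.Map:
		return fmt.Errorf("%s must be a single value, not a map", k)
	}
	return nil
}

func main() {
	fmt.Println(checkPrimitive("availability_zone", []interface{}{"foo", "bar"}))
	fmt.Println(checkPrimitive("availability_zone", "us-west-2a")) // <nil>
}
```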
return nil, nil } diff --git a/helper/schema/schema_test.go b/helper/schema/schema_test.go index 83aa72a5c..e43300c99 100644 --- a/helper/schema/schema_test.go +++ b/helper/schema/schema_test.go @@ -2437,7 +2437,7 @@ func TestSchemaMap_Diff(t *testing.T) { d, err := schemaMap(tc.Schema).Diff( tc.State, terraform.NewResourceConfig(c)) - if (err != nil) != tc.Err { + if err != nil != tc.Err { t.Fatalf("#%d err: %s", i, err) } @@ -2595,7 +2595,7 @@ func TestSchemaMap_Input(t *testing.T) { rc.Config = make(map[string]interface{}) actual, err := schemaMap(tc.Schema).Input(input, rc) - if (err != nil) != tc.Err { + if err != nil != tc.Err { t.Fatalf("#%v err: %s", i, err) } @@ -2789,7 +2789,7 @@ func TestSchemaMap_InternalValidate(t *testing.T) { Optional: true, }, }, - true, + false, }, // Required but computed @@ -2903,7 +2903,7 @@ func TestSchemaMap_InternalValidate(t *testing.T) { { map[string]*Schema{ "foo": &Schema{ - Type: TypeMap, + Type: TypeSet, Required: true, ValidateFunc: func(v interface{}, k string) (ws []string, es []error) { return @@ -2916,7 +2916,7 @@ func TestSchemaMap_InternalValidate(t *testing.T) { for i, tc := range cases { err := schemaMap(tc.In).InternalValidate(schemaMap{}) - if (err != nil) != tc.Err { + if err != nil != tc.Err { if tc.Err { t.Fatalf("%d: Expected error did not occur:\n\n%#v", i, tc.In) } @@ -3409,6 +3409,36 @@ func TestSchemaMap_Validate(t *testing.T) { Err: true, }, + "Bad, should not allow lists to be assigned to string attributes": { + Schema: map[string]*Schema{ + "availability_zone": &Schema{ + Type: TypeString, + Required: true, + }, + }, + + Config: map[string]interface{}{ + "availability_zone": []interface{}{"foo", "bar", "baz"}, + }, + + Err: true, + }, + + "Bad, should not allow maps to be assigned to string attributes": { + Schema: map[string]*Schema{ + "availability_zone": &Schema{ + Type: TypeString, + Required: true, + }, + }, + + Config: map[string]interface{}{ + "availability_zone": map[string]interface{}{"foo": "bar", "baz": "thing"}, + }, + + Err: true, + }, + "Deprecated attribute usage generates warning, but not error": { Schema: map[string]*Schema{ "old_news": &Schema{ @@ -3652,7 +3682,7 @@ func TestSchemaMap_Validate(t *testing.T) { } ws, es := schemaMap(tc.Schema).Validate(terraform.NewResourceConfig(c)) - if (len(es) > 0) != tc.Err { + if len(es) > 0 != tc.Err { if len(es) == 0 { t.Errorf("%q: no errors", tn) } diff --git a/helper/schema/serialize.go b/helper/schema/serialize.go new file mode 100644 index 000000000..78f5bfbd6 --- /dev/null +++ b/helper/schema/serialize.go @@ -0,0 +1,105 @@ +package schema + +import ( + "bytes" + "sort" + "strconv" +) + +func SerializeValueForHash(buf *bytes.Buffer, val interface{}, schema *Schema) { + if val == nil { + buf.WriteRune(';') + return + } + + switch schema.Type { + case TypeBool: + if val.(bool) { + buf.WriteRune('1') + } else { + buf.WriteRune('0') + } + case TypeInt: + buf.WriteString(strconv.Itoa(val.(int))) + case TypeFloat: + buf.WriteString(strconv.FormatFloat(val.(float64), 'g', -1, 64)) + case TypeString: + buf.WriteString(val.(string)) + case TypeList: + buf.WriteRune('(') + l := val.([]interface{}) + for _, innerVal := range l { + serializeCollectionMemberForHash(buf, innerVal, schema.Elem) + } + buf.WriteRune(')') + case TypeMap: + m := val.(map[string]interface{}) + var keys []string + for k := range m { + keys = append(keys, k) + } + sort.Strings(keys) + buf.WriteRune('[') + for _, k := range keys { + innerVal := m[k] + buf.WriteString(k) + buf.WriteRune(':') + 
serializeCollectionMemberForHash(buf, innerVal, schema.Elem) + } + buf.WriteRune(']') + case TypeSet: + buf.WriteRune('{') + s := val.(*Set) + for _, innerVal := range s.List() { + serializeCollectionMemberForHash(buf, innerVal, schema.Elem) + } + buf.WriteRune('}') + default: + panic("unknown schema type to serialize") + } + buf.WriteRune(';') +} + +// SerializeResourceForHash appends a serialization of the given resource config +// to the given buffer, guaranteeing deterministic results given the same value +// and schema. +// +// Its primary purpose is as input into a hashing function in order +// to hash complex substructures when used in sets, and so the serialization +// is not reversible. +func SerializeResourceForHash(buf *bytes.Buffer, val interface{}, resource *Resource) { + sm := resource.Schema + m := val.(map[string]interface{}) + var keys []string + for k := range sm { + keys = append(keys, k) + } + sort.Strings(keys) + for _, k := range keys { + innerSchema := sm[k] + // Skip attributes that are not user-provided. Computed attributes + // do not contribute to the hash since their ultimate value cannot + // be known at plan/diff time. + if !(innerSchema.Required || innerSchema.Optional) { + continue + } + + buf.WriteString(k) + buf.WriteRune(':') + innerVal := m[k] + SerializeValueForHash(buf, innerVal, innerSchema) + } +} + +func serializeCollectionMemberForHash(buf *bytes.Buffer, val interface{}, elem interface{}) { + switch tElem := elem.(type) { + case *Schema: + SerializeValueForHash(buf, val, tElem) + case *Resource: + buf.WriteRune('<') + SerializeResourceForHash(buf, val, tElem) + buf.WriteString(">;") + default: + panic("invalid element type") + } +} diff --git a/helper/schema/serialize_test.go b/helper/schema/serialize_test.go new file mode 100644 index 000000000..7fe9e20bf --- /dev/null +++ b/helper/schema/serialize_test.go @@ -0,0 +1,214 @@ +package schema + +import ( + "bytes" + "testing" +) + +func TestSerializeForHash(t *testing.T) { + type testCase struct { + Schema interface{} + Value interface{} + Expected string + } + + tests := []testCase{ + + testCase{ + Schema: &Schema{ + Type: TypeInt, + }, + Value: 0, + Expected: "0;", + }, + + testCase{ + Schema: &Schema{ + Type: TypeInt, + }, + Value: 200, + Expected: "200;", + }, + + testCase{ + Schema: &Schema{ + Type: TypeBool, + }, + Value: true, + Expected: "1;", + }, + + testCase{ + Schema: &Schema{ + Type: TypeBool, + }, + Value: false, + Expected: "0;", + }, + + testCase{ + Schema: &Schema{ + Type: TypeFloat, + }, + Value: 1.0, + Expected: "1;", + }, + + testCase{ + Schema: &Schema{ + Type: TypeFloat, + }, + Value: 1.54, + Expected: "1.54;", + }, + + testCase{ + Schema: &Schema{ + Type: TypeFloat, + }, + Value: 0.1, + Expected: "0.1;", + }, + + testCase{ + Schema: &Schema{ + Type: TypeString, + }, + Value: "hello", + Expected: "hello;", + }, + + testCase{ + Schema: &Schema{ + Type: TypeString, + }, + Value: "1", + Expected: "1;", + }, + + testCase{ + Schema: &Schema{ + Type: TypeList, + Elem: &Schema{ + Type: TypeString, + }, + }, + Value: []interface{}{}, + Expected: "();", + }, + + testCase{ + Schema: &Schema{ + Type: TypeList, + Elem: &Schema{ + Type: TypeString, + }, + }, + Value: []interface{}{"hello", "world"}, + Expected: "(hello;world;);", + }, + + testCase{ + Schema: &Schema{ + Type: TypeList, + Elem: &Resource{ + Schema: map[string]*Schema{ + "fo": &Schema{ + Type: TypeString, + Required: true, + }, + "fum": &Schema{ + Type: TypeString, + Required: true, + }, + }, + }, + }, + Value: []interface{}{ +
map[string]interface{}{ + "fo": "bar", + }, + map[string]interface{}{ + "fo": "baz", + "fum": "boz", + }, + }, + Expected: "(<fo:bar;fum:;>;<fo:baz;fum:boz;>;);", + }, + + testCase{ + Schema: &Schema{ + Type: TypeSet, + Elem: &Schema{ + Type: TypeString, + }, + }, + Value: NewSet(func(i interface{}) int { return len(i.(string)) }, []interface{}{ + "hello", + "woo", + }), + Expected: "{woo;hello;};", + }, + + testCase{ + Schema: &Schema{ + Type: TypeMap, + Elem: &Schema{ + Type: TypeString, + }, + }, + Value: map[string]interface{}{ + "foo": "bar", + "baz": "foo", + }, + Expected: "[baz:foo;foo:bar;];", + }, + + testCase{ + Schema: &Resource{ + Schema: map[string]*Schema{ + "name": &Schema{ + Type: TypeString, + Required: true, + }, + "size": &Schema{ + Type: TypeInt, + Optional: true, + }, + "green": &Schema{ + Type: TypeBool, + Optional: true, + Computed: true, + }, + "upside_down": &Schema{ + Type: TypeBool, + Computed: true, + }, + }, + }, + Value: map[string]interface{}{ + "name": "my-fun-database", + "size": 12, + "green": true, + }, + Expected: "green:1;name:my-fun-database;size:12;", + }, + } + + for _, test := range tests { + var gotBuf bytes.Buffer + schema := test.Schema + + switch s := schema.(type) { + case *Schema: + SerializeValueForHash(&gotBuf, test.Value, s) + case *Resource: + SerializeResourceForHash(&gotBuf, test.Value, s) + } + + got := gotBuf.String() + if got != test.Expected { + t.Errorf("hash(%#v) got %#v, but want %#v", test.Value, got, test.Expected) + } + } +} diff --git a/helper/schema/set.go b/helper/schema/set.go index 8d21866df..e070a1eb9 100644 --- a/helper/schema/set.go +++ b/helper/schema/set.go @@ -1,6 +1,7 @@ package schema import ( + "bytes" "fmt" "reflect" "sort" @@ -15,6 +16,28 @@ func HashString(v interface{}) int { return hashcode.String(v.(string)) } +// HashResource hashes complex structures that are described using +// a *Resource. This is the default set implementation used when a set's +// element type is a full resource. +func HashResource(resource *Resource) SchemaSetFunc { + return func(v interface{}) int { + var buf bytes.Buffer + SerializeResourceForHash(&buf, v, resource) + return hashcode.String(buf.String()) + } +} + +// HashSchema hashes values that are described using a *Schema. This is the +// default set implementation used when a set's element type is a single +// schema. +func HashSchema(schema *Schema) SchemaSetFunc { + return func(v interface{}) int { + var buf bytes.Buffer + SerializeValueForHash(&buf, v, schema) + return hashcode.String(buf.String()) + } +} + // Set is a set data structure that is returned for elements of type // TypeSet. type Set struct { diff --git a/log.go b/log.go index 70046b347..1077c3e55 100644 --- a/log.go +++ b/log.go @@ -2,28 +2,65 @@ package main import ( "io" + "log" "os" + "strings" + + "github.com/hashicorp/logutils" ) // These are the environmental variables that determine if we log, and if // we log whether or not the log should go to a file. -const EnvLog = "TF_LOG" //Set to True -const EnvLogFile = "TF_LOG_PATH" //Set to a file +const ( + EnvLog = "TF_LOG" // Set to True + EnvLogFile = "TF_LOG_PATH" // Set to a file +) -// logOutput determines where we should send logs (if anywhere). +var validLevels = []logutils.LogLevel{"TRACE", "DEBUG", "INFO", "WARN", "ERROR"} + +// logOutput determines where we should send logs (if anywhere) and the log level.
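The log.go rewrite here routes all output through `logutils.LevelFilter`, which is what makes `TF_LOG=INFO` and friends filter by level. The core pattern, using the real `hashicorp/logutils` API:

```go
package main

import (
	"log"
	"os"

	"github.com/hashicorp/logutils"
)

func main() {
	// Same shape as the filter Terraform now installs: messages tagged
	// below MinLevel are dropped before they reach the writer.
	filter := &logutils.LevelFilter{
		Levels:   []logutils.LogLevel{"TRACE", "DEBUG", "INFO", "WARN", "ERROR"},
		MinLevel: logutils.LogLevel("INFO"),
		Writer:   os.Stderr,
	}
	log.SetOutput(filter)

	log.Println("[DEBUG] dropped by the filter")
	log.Println("[WARN] this one is shown")
}
```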
func logOutput() (logOutput io.Writer, err error) { logOutput = nil - if os.Getenv(EnvLog) != "" { - logOutput = os.Stderr + envLevel := os.Getenv(EnvLog) + if envLevel == "" { + return + } - if logPath := os.Getenv(EnvLogFile); logPath != "" { - var err error - logOutput, err = os.Create(logPath) - if err != nil { - return nil, err - } + logOutput = os.Stderr + if logPath := os.Getenv(EnvLogFile); logPath != "" { + var err error + logOutput, err = os.Create(logPath) + if err != nil { + return nil, err } } + // This was the default since the beginning + logLevel := logutils.LogLevel("TRACE") + + if isValidLogLevel(envLevel) { + // allow following for better ux: info, Info or INFO + logLevel = logutils.LogLevel(strings.ToUpper(envLevel)) + } else { + log.Printf("[WARN] Invalid log level: %q. Defaulting to level: TRACE. Valid levels are: %+v", + envLevel, validLevels) + } + + logOutput = &logutils.LevelFilter{ + Levels: validLevels, + MinLevel: logLevel, + Writer: logOutput, + } + return } + +func isValidLogLevel(level string) bool { + for _, l := range validLevels { + if strings.ToUpper(level) == string(l) { + return true + } + } + + return false +} diff --git a/plugin/client.go b/plugin/client.go index be54526c7..8a3b03fc0 100644 --- a/plugin/client.go +++ b/plugin/client.go @@ -88,7 +88,7 @@ func CleanupClients() { }(client) } - log.Println("waiting for all plugin processes to complete...") + log.Println("[DEBUG] waiting for all plugin processes to complete...") wg.Wait() } @@ -326,7 +326,7 @@ func (c *Client) logStderr(r io.Reader) { c.config.Stderr.Write([]byte(line)) line = strings.TrimRightFunc(line, unicode.IsSpace) - log.Printf("%s: %s", filepath.Base(c.config.Cmd.Path), line) + log.Printf("[DEBUG] %s: %s", filepath.Base(c.config.Cmd.Path), line) } if err == io.EOF { diff --git a/scripts/dist.sh b/scripts/dist.sh index 00038e3ac..1488d1b71 100755 --- a/scripts/dist.sh +++ b/scripts/dist.sh @@ -36,14 +36,6 @@ shasum -a256 * > ./terraform_${VERSION}_SHA256SUMS popd # Upload -for ARCHIVE in ./pkg/dist/*; do - ARCHIVE_NAME=$(basename ${ARCHIVE}) - - echo Uploading: $ARCHIVE_NAME - curl \ - -T ${ARCHIVE} \ - -umitchellh:${BINTRAY_API_KEY} \ - "https://api.bintray.com/content/mitchellh/terraform/terraform/${VERSION}/${ARCHIVE_NAME}" -done +hc-releases -upload=./pkg/dist exit 0 diff --git a/scripts/website_push.sh b/scripts/website_push.sh index 36b62f1be..fa58fd694 100755 --- a/scripts/website_push.sh +++ b/scripts/website_push.sh @@ -16,7 +16,8 @@ while [ -h "$SOURCE" ] ; do SOURCE="$(readlink "$SOURCE")"; done DIR="$( cd -P "$( dirname "$SOURCE" )/.." && pwd )" # Copy into tmpdir -cp -R $DIR/website/ $DEPLOY/ +shopt -s dotglob +cp -r $DIR/website/* $DEPLOY/ # Change into that directory pushd $DEPLOY &>/dev/null @@ -25,6 +26,7 @@ pushd $DEPLOY &>/dev/null touch .gitignore echo ".sass-cache" >> .gitignore echo "build" >> .gitignore +echo "vendor" >> .gitignore # Add everything git init -q . 
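The atlas.go changes that follow replace direct `http.DefaultClient` calls with a small accessor: use an injected client when one is set (handy for tests), otherwise a fresh `cleanhttp.DefaultClient()` so nothing shares or mutates the global default. A sketch of the pattern; `apiClient` is an illustrative stand-in for `AtlasClient`:

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/hashicorp/go-cleanhttp"
)

type apiClient struct {
	// HTTPClient may be injected by tests; nil means "use a clean default".
	HTTPClient *http.Client
}

func (c *apiClient) http() *http.Client {
	if c.HTTPClient != nil {
		return c.HTTPClient
	}
	return cleanhttp.DefaultClient()
}

func main() {
	c := &apiClient{}
	fmt.Printf("%T\n", c.http()) // *http.Client, independent of http.DefaultClient
}
```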
diff --git a/state/remote/atlas.go b/state/remote/atlas.go index f52d834a2..f33f407ce 100644 --- a/state/remote/atlas.go +++ b/state/remote/atlas.go @@ -6,11 +6,15 @@ import ( "encoding/base64" "fmt" "io" + "log" "net/http" "net/url" "os" "path" "strings" + + "github.com/hashicorp/go-cleanhttp" + "github.com/hashicorp/terraform/terraform" ) const ( @@ -73,6 +77,9 @@ type AtlasClient struct { Name string AccessToken string RunId string + HTTPClient *http.Client + + conflictHandlingAttempted bool } func (c *AtlasClient) Get() (*Payload, error) { @@ -83,7 +90,8 @@ func (c *AtlasClient) Get() (*Payload, error) { } // Request the url - resp, err := http.DefaultClient.Do(req) + client := c.http() + resp, err := client.Do(req) if err != nil { return nil, err } @@ -161,7 +169,8 @@ func (c *AtlasClient) Put(state []byte) error { req.ContentLength = int64(len(state)) // Make the request - resp, err := http.DefaultClient.Do(req) + client := c.http() + resp, err := client.Do(req) if err != nil { return fmt.Errorf("Failed to upload state: %v", err) } @@ -171,6 +180,8 @@ func (c *AtlasClient) Put(state []byte) error { switch resp.StatusCode { case http.StatusOK: return nil + case http.StatusConflict: + return c.handleConflict(c.readBody(resp.Body), state) default: return fmt.Errorf( "HTTP error: %d\n\nBody: %s", @@ -186,7 +197,8 @@ func (c *AtlasClient) Delete() error { } // Make the request - resp, err := http.DefaultClient.Do(req) + client := c.http() + resp, err := client.Do(req) if err != nil { return fmt.Errorf("Failed to delete state: %v", err) } @@ -236,3 +248,74 @@ func (c *AtlasClient) url() *url.URL { RawQuery: values.Encode(), } } + +func (c *AtlasClient) http() *http.Client { + if c.HTTPClient != nil { + return c.HTTPClient + } + return cleanhttp.DefaultClient() +} + +// Atlas returns an HTTP 409 - Conflict if the pushed state reports the same +// Serial number but the checksum of the raw content differs. This can +// sometimes happen when Terraform changes state representation internally +// between versions in a way that's semantically neutral but affects the JSON +// output and therefore the checksum. +// +// Here we detect and handle this situation by ticking the serial and retrying +// iff for the previous state and the proposed state: +// +// * the serials match +// * the parsed states are Equal (semantically equivalent) +// +// In other words, in this situation Terraform can override Atlas's detected +// conflict by asserting that the state it is pushing is indeed correct. 
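A compact mirror of the retry decision that comment describes. The real code compares full states with `terraform.State.Equal` and additionally retries at most once (via the `conflictHandlingAttempted` flag in the function below); this sketch uses a trivial stand-in for state contents:

```go
package main

import "fmt"

type state struct {
	Serial  int64
	Outputs string // stand-in for the rest of the state
}

// equivalent mirrors statesAreEquivalent: same serial, semantically equal.
func equivalent(current, proposed state) bool {
	return current.Serial == proposed.Serial && current.Outputs == proposed.Outputs
}

// resolve mirrors handleConflict: if the states are equivalent, tick the
// serial and push again; otherwise surface the conflict to the user.
func resolve(current, proposed state, push func(state) error) error {
	if !equivalent(current, proposed) {
		return fmt.Errorf("remote state conflict")
	}
	proposed.Serial++
	return push(proposed)
}

func main() {
	cur := state{Serial: 1, Outputs: "foo=bar"}
	err := resolve(cur, cur, func(s state) error {
		fmt.Println("retrying Put with serial", s.Serial) // 2
		return nil
	})
	fmt.Println("err:", err)
}
```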
+func (c *AtlasClient) handleConflict(msg string, state []byte) error { + log.Printf("[DEBUG] Handling Atlas conflict response: %s", msg) + + if c.conflictHandlingAttempted { + log.Printf("[DEBUG] Already attempted conflict resolution; returning conflict.") + } else { + c.conflictHandlingAttempted = true + log.Printf("[DEBUG] Atlas reported conflict, checking for equivalent states.") + + payload, err := c.Get() + if err != nil { + return conflictHandlingError(err) + } + + currentState, err := terraform.ReadState(bytes.NewReader(payload.Data)) + if err != nil { + return conflictHandlingError(err) + } + + proposedState, err := terraform.ReadState(bytes.NewReader(state)) + if err != nil { + return conflictHandlingError(err) + } + + if statesAreEquivalent(currentState, proposedState) { + log.Printf("[DEBUG] States are equivalent, incrementing serial and retrying.") + proposedState.Serial++ + var buf bytes.Buffer + if err := terraform.WriteState(proposedState, &buf); err != nil { + return conflictHandlingError(err) + } + return c.Put(buf.Bytes()) + } else { + log.Printf("[DEBUG] States are not equivalent, returning conflict.") + } + } + + return fmt.Errorf( + "Atlas detected a remote state conflict.\n\nMessage: %s", msg) +} + +func conflictHandlingError(err error) error { + return fmt.Errorf( + "Error while handling a conflict response from Atlas: %s", err) +} + +func statesAreEquivalent(current, proposed *terraform.State) bool { + return current.Serial == proposed.Serial && current.Equal(proposed) +} diff --git a/state/remote/atlas_test.go b/state/remote/atlas_test.go index 202e15dad..ae7ee8a1b 100644 --- a/state/remote/atlas_test.go +++ b/state/remote/atlas_test.go @@ -1,9 +1,15 @@ package remote import ( + "bytes" + "crypto/md5" "net/http" + "net/http/httptest" "os" "testing" + "time" + + "github.com/hashicorp/terraform/terraform" ) func TestAtlasClient_impl(t *testing.T) { @@ -30,3 +36,259 @@ func TestAtlasClient(t *testing.T) { testClient(t, client) } + +func TestAtlasClient_ReportedConflictEqualStates(t *testing.T) { + fakeAtlas := newFakeAtlas(t, testStateModuleOrderChange) + srv := fakeAtlas.Server() + defer srv.Close() + client, err := atlasFactory(map[string]string{ + "access_token": "sometoken", + "name": "someuser/some-test-remote-state", + "address": srv.URL, + }) + if err != nil { + t.Fatalf("err: %s", err) + } + + state, err := terraform.ReadState(bytes.NewReader(testStateModuleOrderChange)) + if err != nil { + t.Fatalf("err: %s", err) + } + + var stateJson bytes.Buffer + if err := terraform.WriteState(state, &stateJson); err != nil { + t.Fatalf("err: %s", err) + } + if err := client.Put(stateJson.Bytes()); err != nil { + t.Fatalf("err: %s", err) + } +} + +func TestAtlasClient_NoConflict(t *testing.T) { + fakeAtlas := newFakeAtlas(t, testStateSimple) + srv := fakeAtlas.Server() + defer srv.Close() + client, err := atlasFactory(map[string]string{ + "access_token": "sometoken", + "name": "someuser/some-test-remote-state", + "address": srv.URL, + }) + if err != nil { + t.Fatalf("err: %s", err) + } + + state, err := terraform.ReadState(bytes.NewReader(testStateSimple)) + if err != nil { + t.Fatalf("err: %s", err) + } + + fakeAtlas.NoConflictAllowed(true) + + var stateJson bytes.Buffer + if err := terraform.WriteState(state, &stateJson); err != nil { + t.Fatalf("err: %s", err) + } + if err := client.Put(stateJson.Bytes()); err != nil { + t.Fatalf("err: %s", err) + } +} + +func TestAtlasClient_LegitimateConflict(t *testing.T) { + fakeAtlas := newFakeAtlas(t, testStateSimple) + srv := 
fakeAtlas.Server() + defer srv.Close() + client, err := atlasFactory(map[string]string{ + "access_token": "sometoken", + "name": "someuser/some-test-remote-state", + "address": srv.URL, + }) + if err != nil { + t.Fatalf("err: %s", err) + } + + state, err := terraform.ReadState(bytes.NewReader(testStateSimple)) + if err != nil { + t.Fatalf("err: %s", err) + } + + // Changing the state but not the serial. Should generate a conflict. + state.RootModule().Outputs["drift"] = "happens" + + var stateJson bytes.Buffer + if err := terraform.WriteState(state, &stateJson); err != nil { + t.Fatalf("err: %s", err) + } + if err := client.Put(stateJson.Bytes()); err == nil { + t.Fatal("Expected error from state conflict, got none.") + } +} + +func TestAtlasClient_UnresolvableConflict(t *testing.T) { + fakeAtlas := newFakeAtlas(t, testStateSimple) + + // Something unexpected causes Atlas to conflict in a way that we can't fix. + fakeAtlas.AlwaysConflict(true) + + srv := fakeAtlas.Server() + defer srv.Close() + client, err := atlasFactory(map[string]string{ + "access_token": "sometoken", + "name": "someuser/some-test-remote-state", + "address": srv.URL, + }) + if err != nil { + t.Fatalf("err: %s", err) + } + + state, err := terraform.ReadState(bytes.NewReader(testStateSimple)) + if err != nil { + t.Fatalf("err: %s", err) + } + + var stateJson bytes.Buffer + if err := terraform.WriteState(state, &stateJson); err != nil { + t.Fatalf("err: %s", err) + } + doneCh := make(chan struct{}) + go func() { + defer close(doneCh) + if err := client.Put(stateJson.Bytes()); err == nil { + t.Fatal("Expected error from state conflict, got none.") + } + }() + + select { + case <-doneCh: + // OK + case <-time.After(50 * time.Millisecond): + t.Fatalf("Timed out after 50ms, probably because retrying infinitely.") + } +} + +// Stub Atlas HTTP API for a given state JSON string; does checksum-based +// conflict detection equivalent to Atlas's. +type fakeAtlas struct { + state []byte + t *testing.T + + // Used to test that we only do the special conflict handling retry once. + alwaysConflict bool + + // Used to fail the test immediately if a conflict happens. + noConflictAllowed bool +} + +func newFakeAtlas(t *testing.T, state []byte) *fakeAtlas { + return &fakeAtlas{ + state: state, + t: t, + } +} + +func (f *fakeAtlas) Server() *httptest.Server { + return httptest.NewServer(http.HandlerFunc(f.handler)) +} + +func (f *fakeAtlas) CurrentState() *terraform.State { + currentState, err := terraform.ReadState(bytes.NewReader(f.state)) + if err != nil { + f.t.Fatalf("err: %s", err) + } + return currentState +} + +func (f *fakeAtlas) CurrentSerial() int64 { + return f.CurrentState().Serial +} + +func (f *fakeAtlas) CurrentSum() [md5.Size]byte { + return md5.Sum(f.state) +} + +func (f *fakeAtlas) AlwaysConflict(b bool) { + f.alwaysConflict = b +} + +func (f *fakeAtlas) NoConflictAllowed(b bool) { + f.noConflictAllowed = b +} + +func (f *fakeAtlas) handler(resp http.ResponseWriter, req *http.Request) { + switch req.Method { + case "GET": + // Respond with the current stored state. 
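The fake Atlas server defined a little further down flags a conflict exactly the way the comments describe the real service doing it: same serial number but a different checksum of the raw payload. That rule in isolation:

```go
package main

import (
	"crypto/md5"
	"fmt"
)

// conflicts mirrors the fake server's PUT handling: equal serials with
// unequal MD5 sums of the raw bytes means HTTP 409.
func conflicts(curSerial, newSerial int64, current, proposed []byte) bool {
	return curSerial == newSerial && md5.Sum(current) != md5.Sum(proposed)
}

func main() {
	current := []byte(`{"serial": 1, "outputs": {"foo": "bar"}}`)
	proposed := []byte(`{"serial": 1, "outputs": {"foo": "baz"}}`)
	fmt.Println(conflicts(1, 1, current, proposed)) // true
}
```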
+ resp.Header().Set("Content-Type", "application/json") + resp.Write(f.state) + case "PUT": + var buf bytes.Buffer + buf.ReadFrom(req.Body) + sum := md5.Sum(buf.Bytes()) + state, err := terraform.ReadState(&buf) + if err != nil { + f.t.Fatalf("err: %s", err) + } + conflict := f.CurrentSerial() == state.Serial && f.CurrentSum() != sum + conflict = conflict || f.alwaysConflict + if conflict { + if f.noConflictAllowed { + f.t.Fatal("Got conflict when NoConflictAllowed was set.") + } + http.Error(resp, "Conflict", 409) + } else { + f.state = buf.Bytes() + resp.WriteHeader(200) + } + } +} + +// This is a tfstate file with the module order changed, which is a structural +// but not a semantic difference. Terraform will sort these modules as it +// loads the state. +var testStateModuleOrderChange = []byte( + `{ + "version": 1, + "serial": 1, + "modules": [ + { + "path": [ + "root", + "child2", + "grandchild" + ], + "outputs": { + "foo": "bar2" + }, + "resources": null + }, + { + "path": [ + "root", + "child1", + "grandchild" + ], + "outputs": { + "foo": "bar1" + }, + "resources": null + } + ] +} +`) + +var testStateSimple = []byte( + `{ + "version": 1, + "serial": 1, + "modules": [ + { + "path": [ + "root" + ], + "outputs": { + "foo": "bar" + }, + "resources": null + } + ] +} +`) diff --git a/state/remote/etcd.go b/state/remote/etcd.go new file mode 100644 index 000000000..7993603ff --- /dev/null +++ b/state/remote/etcd.go @@ -0,0 +1,78 @@ +package remote + +import ( + "crypto/md5" + "fmt" + "strings" + + etcdapi "github.com/coreos/etcd/client" + "golang.org/x/net/context" +) + +func etcdFactory(conf map[string]string) (Client, error) { + path, ok := conf["path"] + if !ok { + return nil, fmt.Errorf("missing 'path' configuration") + } + + endpoints, ok := conf["endpoints"] + if !ok || endpoints == "" { + return nil, fmt.Errorf("missing 'endpoints' configuration") + } + + config := etcdapi.Config{ + Endpoints: strings.Split(endpoints, " "), + } + if username, ok := conf["username"]; ok && username != "" { + config.Username = username + } + if password, ok := conf["password"]; ok && password != "" { + config.Password = password + } + + client, err := etcdapi.New(config) + if err != nil { + return nil, err + } + + return &EtcdClient{ + Client: client, + Path: path, + }, nil +} + +// EtcdClient is a remote client that stores data in etcd. 
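Because `etcdFactory` is registered under `"etcd"` in `BuiltinClients` (see the remote.go hunk below), the new backend is reachable through the generic factory. A sketch against the patched tree; the endpoint is a placeholder, and construction alone does not contact etcd:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/state/remote"
)

func main() {
	// Equivalent to what `terraform remote config -backend=etcd` wires up.
	client, err := remote.NewClient("etcd", map[string]string{
		"endpoints": "http://127.0.0.1:2379", // placeholder
		"path":      "tf/state",
	})
	if err != nil {
		fmt.Println("config error:", err)
		return
	}
	fmt.Printf("%T\n", client) // *remote.EtcdClient
}
```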
+type EtcdClient struct { + Client etcdapi.Client + Path string +} + +func (c *EtcdClient) Get() (*Payload, error) { + resp, err := etcdapi.NewKeysAPI(c.Client).Get(context.Background(), c.Path, &etcdapi.GetOptions{Quorum: true}) + if err != nil { + if err, ok := err.(etcdapi.Error); ok && err.Code == etcdapi.ErrorCodeKeyNotFound { + return nil, nil + } + return nil, err + } + if resp.Node.Dir { + return nil, fmt.Errorf("path is a directory") + } + + data := []byte(resp.Node.Value) + md5 := md5.Sum(data) + return &Payload{ + Data: data, + MD5: md5[:], + }, nil +} + +func (c *EtcdClient) Put(data []byte) error { + _, err := etcdapi.NewKeysAPI(c.Client).Set(context.Background(), c.Path, string(data), nil) + return err +} + +func (c *EtcdClient) Delete() error { + _, err := etcdapi.NewKeysAPI(c.Client).Delete(context.Background(), c.Path, nil) + return err +} diff --git a/state/remote/etcd_test.go b/state/remote/etcd_test.go new file mode 100644 index 000000000..6d06d801b --- /dev/null +++ b/state/remote/etcd_test.go @@ -0,0 +1,38 @@ +package remote + +import ( + "fmt" + "os" + "testing" + "time" +) + +func TestEtcdClient_impl(t *testing.T) { + var _ Client = new(EtcdClient) +} + +func TestEtcdClient(t *testing.T) { + endpoint := os.Getenv("ETCD_ENDPOINT") + if endpoint == "" { + t.Skipf("skipping; ETCD_ENDPOINT must be set") + } + + config := map[string]string{ + "endpoints": endpoint, + "path": fmt.Sprintf("tf-unit/%s", time.Now().String()), + } + + if username := os.Getenv("ETCD_USERNAME"); username != "" { + config["username"] = username + } + if password := os.Getenv("ETCD_PASSWORD"); password != "" { + config["password"] = password + } + + client, err := etcdFactory(config) + if err != nil { + t.Fatalf("Error for valid config: %s", err) + } + + testClient(t, client) +} diff --git a/state/remote/http_test.go b/state/remote/http_test.go index e6e7297c1..55d682d17 100644 --- a/state/remote/http_test.go +++ b/state/remote/http_test.go @@ -8,6 +8,8 @@ import ( "net/http/httptest" "net/url" "testing" + + "github.com/hashicorp/go-cleanhttp" ) func TestHTTPClient_impl(t *testing.T) { @@ -24,7 +26,7 @@ func TestHTTPClient(t *testing.T) { t.Fatalf("err: %s", err) } - client := &HTTPClient{URL: url, Client: http.DefaultClient} + client := &HTTPClient{URL: url, Client: cleanhttp.DefaultClient()} testClient(t, client) } diff --git a/state/remote/remote.go b/state/remote/remote.go index 7ebea3222..5337ad7b7 100644 --- a/state/remote/remote.go +++ b/state/remote/remote.go @@ -38,6 +38,7 @@ func NewClient(t string, conf map[string]string) (Client, error) { var BuiltinClients = map[string]Factory{ "atlas": atlasFactory, "consul": consulFactory, + "etcd": etcdFactory, "http": httpFactory, "s3": s3Factory, "swift": swiftFactory, diff --git a/state/remote/s3.go b/state/remote/s3.go index c2d897dd0..a8fefe6d1 100644 --- a/state/remote/s3.go +++ b/state/remote/s3.go @@ -4,6 +4,7 @@ import ( "bytes" "fmt" "io" + "log" "os" "strconv" @@ -12,6 +13,7 @@ import ( "github.com/aws/aws-sdk-go/aws/credentials" "github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds" "github.com/aws/aws-sdk-go/service/s3" + "github.com/hashicorp/go-cleanhttp" ) func s3Factory(conf map[string]string) (Client, error) { @@ -45,6 +47,11 @@ func s3Factory(conf map[string]string) (Client, error) { serverSideEncryption = v } + acl := "" + if raw, ok := conf["acl"]; ok { + acl = raw + } + accessKeyId := conf["access_key"] secretAccessKey := conf["secret_key"] @@ -69,6 +76,7 @@ func s3Factory(conf map[string]string) (Client, error) { awsConfig 
:= &aws.Config{ Credentials: credentialsProvider, Region: aws.String(regionName), + HTTPClient: cleanhttp.DefaultClient(), } nativeClient := s3.New(awsConfig) @@ -77,6 +85,7 @@ func s3Factory(conf map[string]string) (Client, error) { bucketName: bucketName, keyName: keyName, serverSideEncryption: serverSideEncryption, + acl: acl, }, nil } @@ -85,6 +94,7 @@ type S3Client struct { bucketName string keyName string serverSideEncryption bool + acl string } func (c *S3Client) Get() (*Payload, error) { @@ -125,7 +135,7 @@ func (c *S3Client) Get() (*Payload, error) { } func (c *S3Client) Put(data []byte) error { - contentType := "application/octet-stream" + contentType := "application/json" contentLength := int64(len(data)) i := &s3.PutObjectInput{ @@ -140,6 +150,12 @@ func (c *S3Client) Put(data []byte) error { i.ServerSideEncryption = aws.String("AES256") } + if c.acl != "" { + i.ACL = aws.String(c.acl) + } + + log.Printf("[DEBUG] Uploading remote state to S3: %#v", i) + if _, err := c.nativeClient.PutObject(i); err == nil { return nil } else { diff --git a/terraform/context.go b/terraform/context.go index be01a492a..d91a85176 100644 --- a/terraform/context.go +++ b/terraform/context.go @@ -292,7 +292,11 @@ func (c *Context) Apply() (*State, error) { } // Do the walk - _, err = c.walk(graph, walkApply) + if c.destroy { + _, err = c.walk(graph, walkDestroy) + } else { + _, err = c.walk(graph, walkApply) + } // Clean out any unused things c.state.prune() @@ -509,7 +513,7 @@ func (c *Context) releaseRun(ch chan<- struct{}) { func (c *Context) walk( graph *Graph, operation walkOperation) (*ContextGraphWalker, error) { // Walk the graph - log.Printf("[INFO] Starting graph walk: %s", operation.String()) + log.Printf("[DEBUG] Starting graph walk: %s", operation.String()) walker := &ContextGraphWalker{Context: c, Operation: operation} return walker, graph.Walk(walker) } diff --git a/terraform/context_apply_test.go b/terraform/context_apply_test.go index 4b2113d63..1fd069db0 100644 --- a/terraform/context_apply_test.go +++ b/terraform/context_apply_test.go @@ -10,6 +10,8 @@ import ( "sync/atomic" "testing" "time" + + "github.com/hashicorp/terraform/config/module" ) func TestContext2Apply(t *testing.T) { @@ -298,6 +300,88 @@ func TestContext2Apply_destroyComputed(t *testing.T) { } } +// https://github.com/hashicorp/terraform/issues/2892 +func TestContext2Apply_destroyCrossProviders(t *testing.T) { + m := testModule(t, "apply-destroy-cross-providers") + + p_aws := testProvider("aws") + p_aws.ApplyFn = testApplyFn + p_aws.DiffFn = testDiffFn + + p_tf := testProvider("terraform") + p_tf.ApplyFn = testApplyFn + p_tf.DiffFn = testDiffFn + + providers := map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p_aws), + "terraform": testProviderFuncFixed(p_tf), + } + + // Bug only appears from time to time, + // so we run this test multiple times + // to check for the race-condition + for i := 0; i <= 10; i++ { + ctx := getContextForApply_destroyCrossProviders( + t, m, providers) + + if p, err := ctx.Plan(); err != nil { + t.Fatalf("err: %s", err) + } else { + t.Logf(p.String()) + } + + if _, err := ctx.Apply(); err != nil { + t.Fatalf("err: %s", err) + } + } +} + +func getContextForApply_destroyCrossProviders( + t *testing.T, + m *module.Tree, + providers map[string]ResourceProviderFactory) *Context { + state := &State{ + Modules: []*ModuleState{ + &ModuleState{ + Path: rootModulePath, + Resources: map[string]*ResourceState{ + "terraform_remote_state.shared": &ResourceState{ + Type: 
"terraform_remote_state", + Primary: &InstanceState{ + ID: "remote-2652591293", + Attributes: map[string]string{ + "output.env_name": "test", + }, + }, + }, + }, + }, + &ModuleState{ + Path: []string{"root", "example"}, + Resources: map[string]*ResourceState{ + "aws_vpc.bar": &ResourceState{ + Type: "aws_vpc", + Primary: &InstanceState{ + ID: "vpc-aaabbb12", + Attributes: map[string]string{ + "value": "test", + }, + }, + }, + }, + }, + }, + } + ctx := testContext2(t, &ContextOpts{ + Module: m, + Providers: providers, + State: state, + Destroy: true, + }) + + return ctx +} + func TestContext2Apply_minimal(t *testing.T) { m := testModule(t, "apply-minimal") p := testProvider("aws") diff --git a/terraform/context_input_test.go b/terraform/context_input_test.go index 155f4c72f..404ef0ffc 100644 --- a/terraform/context_input_test.go +++ b/terraform/context_input_test.go @@ -510,3 +510,62 @@ aws_instance.foo: t.Fatalf("expected: \n%s\ngot: \n%s\n", expectedStr, actualStr) } } + +func TestContext2Input_varPartiallyComputed(t *testing.T) { + input := new(MockUIInput) + m := testModule(t, "input-var-partially-computed") + p := testProvider("aws") + p.ApplyFn = testApplyFn + p.DiffFn = testDiffFn + ctx := testContext2(t, &ContextOpts{ + Module: m, + Providers: map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p), + }, + Variables: map[string]string{ + "foo": "foovalue", + }, + UIInput: input, + State: &State{ + Modules: []*ModuleState{ + &ModuleState{ + Path: rootModulePath, + Resources: map[string]*ResourceState{ + "aws_instance.foo": &ResourceState{ + Type: "aws_instance", + Primary: &InstanceState{ + ID: "i-abc123", + Attributes: map[string]string{ + "id": "i-abc123", + }, + }, + }, + }, + }, + &ModuleState{ + Path: append(rootModulePath, "child"), + Resources: map[string]*ResourceState{ + "aws_instance.mod": &ResourceState{ + Type: "aws_instance", + Primary: &InstanceState{ + ID: "i-bcd345", + Attributes: map[string]string{ + "id": "i-bcd345", + "value": "one,i-abc123", + }, + }, + }, + }, + }, + }, + }, + }) + + if err := ctx.Input(InputModeStd); err != nil { + t.Fatalf("err: %s", err) + } + + if _, err := ctx.Plan(); err != nil { + t.Fatalf("err: %s", err) + } +} diff --git a/terraform/context_plan_test.go b/terraform/context_plan_test.go index 50f2bb471..db6f24577 100644 --- a/terraform/context_plan_test.go +++ b/terraform/context_plan_test.go @@ -1672,3 +1672,49 @@ func TestContext2Plan_varListErr(t *testing.T) { t.Fatal("should error") } } + +func TestContext2Plan_ignoreChanges(t *testing.T) { + m := testModule(t, "plan-ignore-changes") + p := testProvider("aws") + p.DiffFn = testDiffFn + s := &State{ + Modules: []*ModuleState{ + &ModuleState{ + Path: rootModulePath, + Resources: map[string]*ResourceState{ + "aws_instance.foo": &ResourceState{ + Primary: &InstanceState{ + ID: "bar", + Attributes: map[string]string{"ami": "ami-abcd1234"}, + }, + }, + }, + }, + }, + } + ctx := testContext2(t, &ContextOpts{ + Module: m, + Providers: map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p), + }, + Variables: map[string]string{ + "foo": "ami-1234abcd", + }, + State: s, + }) + + plan, err := ctx.Plan() + if err != nil { + t.Fatalf("err: %s", err) + } + + if len(plan.Diff.RootModule().Resources) < 1 { + t.Fatalf("bad: %#v", plan.Diff.RootModule().Resources) + } + + actual := strings.TrimSpace(plan.String()) + expected := strings.TrimSpace(testTerraformPlanIgnoreChangesStr) + if actual != expected { + t.Fatalf("bad:\n%s\n\nexpected\n\n%s", actual, expected) + } +} diff 
--git a/terraform/eval_apply.go b/terraform/eval_apply.go index c22a6ca4e..6314baa86 100644 --- a/terraform/eval_apply.go +++ b/terraform/eval_apply.go @@ -49,7 +49,7 @@ func (n *EvalApply) Eval(ctx EvalContext) (interface{}, error) { // Flag if we're creating a new instance if n.CreateNew != nil { - *n.CreateNew = (state.ID == "" && !diff.Destroy) || diff.RequiresNew() + *n.CreateNew = state.ID == "" && !diff.Destroy || diff.RequiresNew() } { diff --git a/terraform/eval_ignore_changes.go b/terraform/eval_ignore_changes.go new file mode 100644 index 000000000..2eb2d9bb1 --- /dev/null +++ b/terraform/eval_ignore_changes.go @@ -0,0 +1,33 @@ +package terraform + +import ( + "github.com/hashicorp/terraform/config" + "strings" +) + +// EvalIgnoreChanges is an EvalNode implementation that removes diff +// attributes if their name matches names provided by the resource's +// IgnoreChanges lifecycle. +type EvalIgnoreChanges struct { + Resource *config.Resource + Diff **InstanceDiff +} + +func (n *EvalIgnoreChanges) Eval(ctx EvalContext) (interface{}, error) { + if n.Diff == nil || *n.Diff == nil || n.Resource == nil || n.Resource.Id() == "" { + return nil, nil + } + + diff := *n.Diff + ignoreChanges := n.Resource.Lifecycle.IgnoreChanges + + for _, ignoredName := range ignoreChanges { + for name := range diff.Attributes { + if strings.HasPrefix(name, ignoredName) { + delete(diff.Attributes, name) + } + } + } + + return nil, nil +} diff --git a/terraform/eval_validate.go b/terraform/eval_validate.go index e808240a0..533788230 100644 --- a/terraform/eval_validate.go +++ b/terraform/eval_validate.go @@ -49,9 +49,12 @@ func (n *EvalValidateCount) Eval(ctx EvalContext) (interface{}, error) { } RETURN: - return nil, &EvalValidateError{ - Errors: errs, + if len(errs) != 0 { + err = &EvalValidateError{ + Errors: errs, + } } + return nil, err } // EvalValidateProvider is an EvalNode implementation that validates diff --git a/terraform/evaltree_provider.go b/terraform/evaltree_provider.go index 99e3ccb1e..9ec6ea0c5 100644 --- a/terraform/evaltree_provider.go +++ b/terraform/evaltree_provider.go @@ -71,7 +71,7 @@ func ProviderEvalTree(n string, config *config.RawConfig) EvalNode { // Apply stuff seq = append(seq, &EvalOpFilter{ - Ops: []walkOperation{walkRefresh, walkPlan, walkApply}, + Ops: []walkOperation{walkRefresh, walkPlan, walkApply, walkDestroy}, Node: &EvalSequence{ Nodes: []EvalNode{ &EvalGetProvider{ @@ -98,7 +98,7 @@ func ProviderEvalTree(n string, config *config.RawConfig) EvalNode { // We configure on everything but validate, since validate may // not have access to all the variables. 
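The `EvalIgnoreChanges` node above implements the new `ignore_changes` lifecycle flag by deleting diff attributes whose names match an ignored prefix. The pruning logic in isolation, over a plain map standing in for `InstanceDiff.Attributes`:

```go
package main

import (
	"fmt"
	"strings"
)

// pruneIgnored mirrors EvalIgnoreChanges: drop every diff attribute whose
// name starts with an entry from the resource's ignore_changes list.
func pruneIgnored(attrs map[string]string, ignoreChanges []string) {
	for _, ignored := range ignoreChanges {
		for name := range attrs {
			if strings.HasPrefix(name, ignored) {
				delete(attrs, name)
			}
		}
	}
}

func main() {
	diff := map[string]string{
		"ami":           "ami-abcd1234 => ami-1234abcd",
		"instance_type": "t2.micro => t2.small",
	}
	pruneIgnored(diff, []string{"ami"})
	fmt.Println(diff) // only instance_type remains, matching the plan fixture
}
```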
seq = append(seq, &EvalOpFilter{ - Ops: []walkOperation{walkRefresh, walkPlan, walkApply}, + Ops: []walkOperation{walkRefresh, walkPlan, walkApply, walkDestroy}, Node: &EvalSequence{ Nodes: []EvalNode{ &EvalConfigProvider{ diff --git a/terraform/graph_builder.go b/terraform/graph_builder.go index 2190be15e..ca9966701 100644 --- a/terraform/graph_builder.go +++ b/terraform/graph_builder.go @@ -107,7 +107,7 @@ func (b *BuiltinGraphBuilder) Steps(path []string) []GraphTransformer { &OrphanTransformer{ State: b.State, Module: b.Root, - Targeting: (len(b.Targets) > 0), + Targeting: len(b.Targets) > 0, }, // Output-related transformations diff --git a/terraform/graph_config_node_output.go b/terraform/graph_config_node_output.go index 5b2d95fdc..d4f00451c 100644 --- a/terraform/graph_config_node_output.go +++ b/terraform/graph_config_node_output.go @@ -44,7 +44,7 @@ func (n *GraphNodeConfigOutput) DependentOn() []string { // GraphNodeEvalable impl. func (n *GraphNodeConfigOutput) EvalTree() EvalNode { return &EvalOpFilter{ - Ops: []walkOperation{walkRefresh, walkPlan, walkApply}, + Ops: []walkOperation{walkRefresh, walkPlan, walkApply, walkDestroy}, Node: &EvalSequence{ Nodes: []EvalNode{ &EvalWriteOutput{ diff --git a/terraform/graph_config_node_resource.go b/terraform/graph_config_node_resource.go index dfc958714..2bf0e4568 100644 --- a/terraform/graph_config_node_resource.go +++ b/terraform/graph_config_node_resource.go @@ -165,7 +165,7 @@ func (n *GraphNodeConfigResource) DynamicExpand(ctx EvalContext) (*Graph, error) steps = append(steps, &OrphanTransformer{ State: state, View: n.Resource.Id(), - Targeting: (len(n.Targets) > 0), + Targeting: len(n.Targets) > 0, }) steps = append(steps, &DeposedTransformer{ diff --git a/terraform/graph_dot_test.go b/terraform/graph_dot_test.go index da0e1f55e..ecef1984d 100644 --- a/terraform/graph_dot_test.go +++ b/terraform/graph_dot_test.go @@ -210,13 +210,13 @@ digraph { for tn, tc := range cases { actual, err := GraphDot(tc.Graph(), &tc.Opts) - if (err == nil) && tc.Error != "" { + if err == nil && tc.Error != "" { t.Fatalf("%s: expected err: %s, got none", tn, tc.Error) } - if (err != nil) && (tc.Error == "") { + if err != nil && tc.Error == "" { t.Fatalf("%s: unexpected err: %s", tn, err) } - if (err != nil) && (tc.Error != "") { + if err != nil && tc.Error != "" { if !strings.Contains(err.Error(), tc.Error) { t.Fatalf("%s: expected err: %s\nto contain: %s", tn, err, tc.Error) } diff --git a/terraform/graph_walk_operation.go b/terraform/graph_walk_operation.go index c2143fbd8..f2de24134 100644 --- a/terraform/graph_walk_operation.go +++ b/terraform/graph_walk_operation.go @@ -13,4 +13,5 @@ const ( walkPlanDestroy walkRefresh walkValidate + walkDestroy ) diff --git a/terraform/interpolate.go b/terraform/interpolate.go index 15b81cdbd..31c366eab 100644 --- a/terraform/interpolate.go +++ b/terraform/interpolate.go @@ -342,7 +342,7 @@ func (i *Interpolater) computeResourceVariable( // TODO: test by creating a state and configuration that is referencing // a non-existent variable "foo.bar" where the state only has "foo" // and verify plan works, but apply doesn't. - if i.Operation == walkApply { + if i.Operation == walkApply || i.Operation == walkDestroy { goto MISSING } @@ -378,7 +378,13 @@ MISSING: // be unknown. Instead, we return that the value is computed so // that the graph can continue to refresh other nodes. It doesn't // matter because the config isn't interpolated anyways. 
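The interpolate.go hunks around this point widen the same guard: during a refresh, plan-destroy, the new destroy walk, or an input walk, a missing computed value is reported as unknown rather than treated as an error. The gate in isolation, with the walk constants as declared in graph_walk_operation.go:

```go
package main

import "fmt"

type walkOperation int

const (
	walkInvalid walkOperation = iota
	walkInput
	walkApply
	walkPlan
	walkPlanDestroy
	walkRefresh
	walkValidate
	walkDestroy // newly added in this patch
)

// computedOK mirrors the widened condition: these walks tolerate missing
// computed values and substitute config.UnknownVariableValue.
func computedOK(op walkOperation) bool {
	return op == walkRefresh || op == walkPlanDestroy ||
		op == walkDestroy || op == walkInput
}

func main() {
	fmt.Println(computedOK(walkDestroy)) // true
	fmt.Println(computedOK(walkApply))   // false
}
```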
- if i.Operation == walkRefresh || i.Operation == walkPlanDestroy { + // + // For a Destroy, we're also fine with computed values, since our goal is + // only to get destroy nodes for existing resources. + // + // For an input walk, computed values are okay to return because we're only + // looking for missing variables to prompt the user for. + if i.Operation == walkRefresh || i.Operation == walkPlanDestroy || i.Operation == walkDestroy || i.Operation == walkInput { return config.UnknownVariableValue, nil } @@ -469,7 +475,13 @@ func (i *Interpolater) computeResourceMultiVariable( // be unknown. Instead, we return that the value is computed so // that the graph can continue to refresh other nodes. It doesn't // matter because the config isn't interpolated anyways. - if i.Operation == walkRefresh || i.Operation == walkPlanDestroy { + // + // For a Destroy, we're also fine with computed values, since our goal is + // only to get destroy nodes for existing resources. + // + // For an input walk, computed values are okay to return because we're only + // looking for missing variables to prompt the user for. + if i.Operation == walkRefresh || i.Operation == walkPlanDestroy || i.Operation == walkDestroy || i.Operation == walkInput { return config.UnknownVariableValue, nil } diff --git a/terraform/interpolate_test.go b/terraform/interpolate_test.go index bbbb1024a..fbce848ea 100644 --- a/terraform/interpolate_test.go +++ b/terraform/interpolate_test.go @@ -330,11 +330,6 @@ func TestInterpolator_resourceMultiAttributesWithResourceCount(t *testing.T) { Value: config.NewStringList([]string{}).String(), Type: ast.TypeString, }) - // Zero + zero elements - testInterpolate(t, i, scope, "aws_route53_zone.terra.*.nothing", ast.Variable{ - Value: config.NewStringList([]string{"", ""}).String(), - Type: ast.TypeString, - }) // Zero + 1 element testInterpolate(t, i, scope, "aws_route53_zone.terra.*.special", ast.Variable{ Value: config.NewStringList([]string{"extra"}).String(), diff --git a/terraform/resource_address.go b/terraform/resource_address.go index 583cdd2a0..f7dd94074 100644 --- a/terraform/resource_address.go +++ b/terraform/resource_address.go @@ -53,26 +53,26 @@ func (addr *ResourceAddress) Equals(raw interface{}) bool { return false } - pathMatch := ((len(addr.Path) == 0 && len(other.Path) == 0) || - reflect.DeepEqual(addr.Path, other.Path)) + pathMatch := len(addr.Path) == 0 && len(other.Path) == 0 || + reflect.DeepEqual(addr.Path, other.Path) - indexMatch := (addr.Index == -1 || + indexMatch := addr.Index == -1 || other.Index == -1 || - addr.Index == other.Index) + addr.Index == other.Index - nameMatch := (addr.Name == "" || + nameMatch := addr.Name == "" || other.Name == "" || - addr.Name == other.Name) + addr.Name == other.Name - typeMatch := (addr.Type == "" || + typeMatch := addr.Type == "" || other.Type == "" || - addr.Type == other.Type) + addr.Type == other.Type - return (pathMatch && + return pathMatch && indexMatch && addr.InstanceType == other.InstanceType && nameMatch && - typeMatch) + typeMatch } func ParseResourceIndex(s string) (int, error) { diff --git a/terraform/state.go b/terraform/state.go index 21b2c04de..8734cfc17 100644 --- a/terraform/state.go +++ b/terraform/state.go @@ -965,6 +965,21 @@ func (s *InstanceState) Equal(other *InstanceState) bool { } } + // Meta must be equal + if len(s.Meta) != len(other.Meta) { + return false + } + for k, v := range s.Meta { + otherV, ok := other.Meta[k] + if !ok { + return false + } + + if v != otherV { + return false + } + } + return 
true } @@ -1192,9 +1207,8 @@ func (s moduleStateSort) Less(i, j int) bool { return len(a.Path) < len(b.Path) } - // Otherwise, compare by last path element - idx := len(a.Path) - 1 - return a.Path[idx] < b.Path[idx] + // Otherwise, compare lexically + return strings.Join(a.Path, ".") < strings.Join(b.Path, ".") } func (s moduleStateSort) Swap(i, j int) { diff --git a/terraform/state_test.go b/terraform/state_test.go index eeb974d0b..8d24a8e75 100644 --- a/terraform/state_test.go +++ b/terraform/state_test.go @@ -40,6 +40,23 @@ func TestStateAddModule(t *testing.T) { []string{"root", "foo", "bar"}, }, }, + // Same last element, different middle element + { + [][]string{ + []string{"root", "foo", "bar"}, // This one should sort after... + []string{"root", "foo"}, + []string{"root"}, + []string{"root", "bar", "bar"}, // ...this one. + []string{"root", "bar"}, + }, + [][]string{ + []string{"root"}, + []string{"root", "bar"}, + []string{"root", "foo"}, + []string{"root", "bar", "bar"}, + []string{"root", "foo", "bar"}, + }, + }, } for _, tc := range cases { @@ -188,6 +205,43 @@ func TestStateEqual(t *testing.T) { }, }, }, + + // Meta differs + { + false, + &State{ + Modules: []*ModuleState{ + &ModuleState{ + Path: rootModulePath, + Resources: map[string]*ResourceState{ + "test_instance.foo": &ResourceState{ + Primary: &InstanceState{ + Meta: map[string]string{ + "schema_version": "1", + }, + }, + }, + }, + }, + }, + }, + &State{ + Modules: []*ModuleState{ + &ModuleState{ + Path: rootModulePath, + Resources: map[string]*ResourceState{ + "test_instance.foo": &ResourceState{ + Primary: &InstanceState{ + Meta: map[string]string{ + "schema_version": "2", + }, + }, + }, + }, + }, + }, + }, + }, } for i, tc := range cases { @@ -224,6 +278,41 @@ func TestStateIncrementSerialMaybe(t *testing.T) { }, 1, }, + "S2 is different, but only via Instance Metadata": { + &State{ + Serial: 3, + Modules: []*ModuleState{ + &ModuleState{ + Path: rootModulePath, + Resources: map[string]*ResourceState{ + "test_instance.foo": &ResourceState{ + Primary: &InstanceState{ + Meta: map[string]string{}, + }, + }, + }, + }, + }, + }, + &State{ + Serial: 3, + Modules: []*ModuleState{ + &ModuleState{ + Path: rootModulePath, + Resources: map[string]*ResourceState{ + "test_instance.foo": &ResourceState{ + Primary: &InstanceState{ + Meta: map[string]string{ + "schema_version": "1", + }, + }, + }, + }, + }, + }, + }, + 4, + }, "S1 serial is higher": { &State{Serial: 5}, &State{ diff --git a/terraform/terraform_test.go b/terraform/terraform_test.go index c84e9803c..d17726acb 100644 --- a/terraform/terraform_test.go +++ b/terraform/terraform_test.go @@ -13,6 +13,7 @@ import ( "sync" "testing" + "github.com/hashicorp/go-getter" "github.com/hashicorp/terraform/config" "github.com/hashicorp/terraform/config/module" ) @@ -70,7 +71,7 @@ func testModule(t *testing.T, name string) *module.Tree { t.Fatalf("err: %s", err) } - s := &module.FolderStorage{StorageDir: tempDir(t)} + s := &getter.FolderStorage{StorageDir: tempDir(t)} if err := mod.Load(s, module.GetModeGet); err != nil { t.Fatalf("err: %s", err) } @@ -1286,3 +1287,16 @@ STATE: ` + +const testTerraformPlanIgnoreChangesStr = ` +DIFF: + +UPDATE: aws_instance.foo + type: "" => "aws_instance" + +STATE: + +aws_instance.foo: + ID = bar + ami = ami-abcd1234 +` diff --git a/terraform/test-fixtures/apply-destroy-cross-providers/child/main.tf b/terraform/test-fixtures/apply-destroy-cross-providers/child/main.tf new file mode 100644 index 000000000..048b26dec --- /dev/null +++ 
b/terraform/test-fixtures/apply-destroy-cross-providers/child/main.tf @@ -0,0 +1,5 @@ +variable "value" {} + +resource "aws_vpc" "bar" { + value = "${var.value}" +} diff --git a/terraform/test-fixtures/apply-destroy-cross-providers/main.tf b/terraform/test-fixtures/apply-destroy-cross-providers/main.tf new file mode 100644 index 000000000..b0595b9e8 --- /dev/null +++ b/terraform/test-fixtures/apply-destroy-cross-providers/main.tf @@ -0,0 +1,6 @@ +resource "terraform_remote_state" "shared" {} + +module "child" { + source = "./child" + value = "${terraform_remote_state.shared.output.env_name}" +} diff --git a/terraform/test-fixtures/input-var-partially-computed/child/main.tf b/terraform/test-fixtures/input-var-partially-computed/child/main.tf new file mode 100644 index 000000000..a11cc5e83 --- /dev/null +++ b/terraform/test-fixtures/input-var-partially-computed/child/main.tf @@ -0,0 +1,5 @@ +variable "in" {} + +resource "aws_instance" "mod" { + value = "${var.in}" +} diff --git a/terraform/test-fixtures/input-var-partially-computed/main.tf b/terraform/test-fixtures/input-var-partially-computed/main.tf new file mode 100644 index 000000000..ada6f0cea --- /dev/null +++ b/terraform/test-fixtures/input-var-partially-computed/main.tf @@ -0,0 +1,7 @@ +resource "aws_instance" "foo" { } +resource "aws_instance" "bar" { } + +module "child" { + source = "./child" + in = "one,${aws_instance.foo.id},${aws_instance.bar.id}" +} diff --git a/terraform/test-fixtures/plan-ignore-changes/main.tf b/terraform/test-fixtures/plan-ignore-changes/main.tf new file mode 100644 index 000000000..056256a1d --- /dev/null +++ b/terraform/test-fixtures/plan-ignore-changes/main.tf @@ -0,0 +1,9 @@ +variable "foo" {} + +resource "aws_instance" "foo" { + ami = "${var.foo}" + + lifecycle { + ignore_changes = ["ami"] + } +} diff --git a/terraform/transform_deposed.go b/terraform/transform_deposed.go index 6ae1695f0..fa3143c3c 100644 --- a/terraform/transform_deposed.go +++ b/terraform/transform_deposed.go @@ -110,7 +110,7 @@ func (n *graphNodeDeposedResource) EvalTree() EvalNode { var diff *InstanceDiff var err error seq.Nodes = append(seq.Nodes, &EvalOpFilter{ - Ops: []walkOperation{walkApply}, + Ops: []walkOperation{walkApply, walkDestroy}, Node: &EvalSequence{ Nodes: []EvalNode{ &EvalGetProvider{ diff --git a/terraform/transform_orphan.go b/terraform/transform_orphan.go index bb381c823..45ea050ba 100644 --- a/terraform/transform_orphan.go +++ b/terraform/transform_orphan.go @@ -263,7 +263,7 @@ func (n *graphNodeOrphanResource) EvalTree() EvalNode { // Apply var err error seq.Nodes = append(seq.Nodes, &EvalOpFilter{ - Ops: []walkOperation{walkApply}, + Ops: []walkOperation{walkApply, walkDestroy}, Node: &EvalSequence{ Nodes: []EvalNode{ &EvalReadDiff{ diff --git a/terraform/transform_output.go b/terraform/transform_output.go index 5ea48a016..d3e839ce1 100644 --- a/terraform/transform_output.go +++ b/terraform/transform_output.go @@ -62,7 +62,7 @@ func (n *graphNodeOrphanOutput) Name() string { func (n *graphNodeOrphanOutput) EvalTree() EvalNode { return &EvalOpFilter{ - Ops: []walkOperation{walkApply, walkRefresh}, + Ops: []walkOperation{walkApply, walkDestroy, walkRefresh}, Node: &EvalDeleteOutput{ Name: n.OutputName, }, @@ -90,7 +90,7 @@ func (n *graphNodeOrphanOutputFlat) Name() string { func (n *graphNodeOrphanOutputFlat) EvalTree() EvalNode { return &EvalOpFilter{ - Ops: []walkOperation{walkApply, walkRefresh}, + Ops: []walkOperation{walkApply, walkDestroy, walkRefresh}, Node: &EvalDeleteOutput{ Name: n.OutputName, }, diff 
--git a/terraform/transform_provider.go b/terraform/transform_provider.go index 8a6655182..0ea226713 100644 --- a/terraform/transform_provider.go +++ b/terraform/transform_provider.go @@ -255,7 +255,7 @@ func (n *graphNodeDisabledProvider) EvalTree() EvalNode { var resourceConfig *ResourceConfig return &EvalOpFilter{ - Ops: []walkOperation{walkInput, walkValidate, walkRefresh, walkPlan, walkApply}, + Ops: []walkOperation{walkInput, walkValidate, walkRefresh, walkPlan, walkApply, walkDestroy}, Node: &EvalSequence{ Nodes: []EvalNode{ &EvalInterpolate{ diff --git a/terraform/transform_resource.go b/terraform/transform_resource.go index 0b56721b0..5091f29c9 100644 --- a/terraform/transform_resource.go +++ b/terraform/transform_resource.go @@ -318,6 +318,10 @@ func (n *graphNodeExpandedResource) EvalTree() EvalNode { Resource: n.Resource, Diff: &diff, }, + &EvalIgnoreChanges{ + Resource: n.Resource, + Diff: &diff, + }, &EvalWriteState{ Name: n.stateId(), ResourceType: n.Resource.Type, @@ -369,7 +373,7 @@ func (n *graphNodeExpandedResource) EvalTree() EvalNode { var createNew, tainted bool var createBeforeDestroyEnabled bool seq.Nodes = append(seq.Nodes, &EvalOpFilter{ - Ops: []walkOperation{walkApply}, + Ops: []walkOperation{walkApply, walkDestroy}, Node: &EvalSequence{ Nodes: []EvalNode{ // Get the saved diff for apply @@ -591,7 +595,7 @@ func (n *graphNodeExpandedResourceDestroy) EvalTree() EvalNode { var state *InstanceState var err error return &EvalOpFilter{ - Ops: []walkOperation{walkApply}, + Ops: []walkOperation{walkApply, walkDestroy}, Node: &EvalSequence{ Nodes: []EvalNode{ // Get the saved diff for apply diff --git a/terraform/transform_tainted.go b/terraform/transform_tainted.go index fdc1ae6bc..37e25df32 100644 --- a/terraform/transform_tainted.go +++ b/terraform/transform_tainted.go @@ -114,7 +114,7 @@ func (n *graphNodeTaintedResource) EvalTree() EvalNode { // Apply var diff *InstanceDiff seq.Nodes = append(seq.Nodes, &EvalOpFilter{ - Ops: []walkOperation{walkApply}, + Ops: []walkOperation{walkApply, walkDestroy}, Node: &EvalSequence{ Nodes: []EvalNode{ &EvalGetProvider{ diff --git a/terraform/version.go b/terraform/version.go index 741766330..eb34e7f30 100644 --- a/terraform/version.go +++ b/terraform/version.go @@ -1,7 +1,7 @@ package terraform // The main version number that is being run at the moment. -const Version = "0.6.4" +const Version = "0.6.7" // A pre-release marker for the version. If this is "" (empty string) // then it means that it is a final release. 
Otherwise, this is a pre-release diff --git a/terraform/walkoperation_string.go b/terraform/walkoperation_string.go index 423793c3c..1ce3661c4 100644 --- a/terraform/walkoperation_string.go +++ b/terraform/walkoperation_string.go @@ -4,9 +4,9 @@ package terraform import "fmt" -const _walkOperation_name = "walkInvalidwalkInputwalkApplywalkPlanwalkPlanDestroywalkRefreshwalkValidate" +const _walkOperation_name = "walkInvalidwalkInputwalkApplywalkPlanwalkPlanDestroywalkRefreshwalkValidatewalkDestroy" -var _walkOperation_index = [...]uint8{0, 11, 20, 29, 37, 52, 63, 75} +var _walkOperation_index = [...]uint8{0, 11, 20, 29, 37, 52, 63, 75, 86} func (i walkOperation) String() string { if i >= walkOperation(len(_walkOperation_index)-1) { diff --git a/website/Gemfile.lock b/website/Gemfile.lock index 826d25e8f..7034f311e 100644 --- a/website/Gemfile.lock +++ b/website/Gemfile.lock @@ -1,12 +1,12 @@ GIT remote: https://github.com/hashicorp/middleman-hashicorp - revision: 76f0f284ad44cea0457484ea83467192f02daf87 + revision: 15cbda0cf1d963fa71292dee921229e7ee618272 specs: - middleman-hashicorp (0.1.0) + middleman-hashicorp (0.2.0) bootstrap-sass (~> 3.3) builder (~> 3.2) less (~> 2.6) - middleman (~> 3.3) + middleman (~> 3.4) middleman-livereload (~> 3.4) middleman-minify-html (~> 3.4) middleman-syntax (~> 2.0) @@ -21,21 +21,25 @@ GIT GEM remote: https://rubygems.org/ specs: - activesupport (4.1.12) - i18n (~> 0.6, >= 0.6.9) + activesupport (4.2.4) + i18n (~> 0.7) json (~> 1.7, >= 1.7.7) minitest (~> 5.1) - thread_safe (~> 0.1) + thread_safe (~> 0.3, >= 0.3.4) tzinfo (~> 1.1) - autoprefixer-rails (5.2.1) + autoprefixer-rails (6.0.3) execjs json bootstrap-sass (3.3.5.1) autoprefixer-rails (>= 5.0.0.1) sass (>= 3.3.0) builder (3.2.2) - celluloid (0.16.0) - timers (~> 4.0.0) + capybara (2.4.4) + mime-types (>= 1.16) + nokogiri (>= 1.3.3) + rack (>= 1.0.0) + rack-test (>= 0.5.4) + xpath (~> 2.0) chunky_png (1.3.4) coffee-script (2.4.1) coffee-script-source @@ -59,52 +63,50 @@ GEM eventmachine (>= 0.12.9) http_parser.rb (~> 0.6.0) erubis (2.7.0) - eventmachine (1.0.7) - execjs (2.5.2) + eventmachine (1.0.8) + execjs (2.6.0) ffi (1.9.10) git-version-bump (0.15.1) - haml (4.0.6) + haml (4.0.7) tilt hike (1.2.3) - hitimes (1.2.2) - hooks (0.4.0) - uber (~> 0.0.4) + hooks (0.4.1) + uber (~> 0.0.14) htmlcompressor (0.2.0) http_parser.rb (0.6.0) i18n (0.7.0) json (1.8.3) - kramdown (1.8.0) + kramdown (1.9.0) less (2.6.0) commonjs (~> 0.2.7) - libv8 (3.16.14.11) - listen (2.10.1) - celluloid (~> 0.16.0) + libv8 (3.16.14.13) + listen (3.0.3) rb-fsevent (>= 0.9.3) rb-inotify (>= 0.9) - middleman (3.3.12) + middleman (3.4.0) coffee-script (~> 2.2) compass (>= 1.0.0, < 2.0.0) compass-import-once (= 1.0.5) execjs (~> 2.0) haml (>= 4.0.5) kramdown (~> 1.2) - middleman-core (= 3.3.12) + middleman-core (= 3.4.0) middleman-sprockets (>= 3.1.2) sass (>= 3.4.0, < 4.0) uglifier (~> 2.5) - middleman-core (3.3.12) - activesupport (~> 4.1.0) + middleman-core (3.4.0) + activesupport (~> 4.1) bundler (~> 1.1) + capybara (~> 2.4.4) erubis hooks (~> 0.3) i18n (~> 0.7.0) - listen (>= 2.7.9, < 3.0) + listen (~> 3.0.3) padrino-helpers (~> 0.12.3) rack (>= 1.4.5, < 2.0) - rack-test (~> 0.6.2) thor (>= 0.15.2, < 2.0) tilt (~> 1.4.1, < 2.0) - middleman-livereload (3.4.2) + middleman-livereload (3.4.3) em-websocket (~> 0.5.1) middleman-core (>= 3.3) rack-livereload (~> 0.3.15) @@ -119,8 +121,12 @@ GEM middleman-syntax (2.0.0) middleman-core (~> 3.2) rouge (~> 1.0) - minitest (5.7.0) + mime-types (2.6.2) + mini_portile (0.6.2) + minitest 
(5.8.1) multi_json (1.11.2) + nokogiri (1.6.6.2) + mini_portile (~> 0.6.0) padrino-helpers (0.12.5) i18n (~> 0.6, >= 0.6.7) padrino-support (= 0.12.5) @@ -128,7 +134,7 @@ GEM padrino-support (0.12.5) activesupport (>= 3.1) rack (1.6.4) - rack-contrib (1.3.0) + rack-contrib (1.4.0) git-version-bump (~> 0.15) rack (~> 1.4) rack-livereload (0.3.16) @@ -136,16 +142,16 @@ GEM rack-protection (1.5.3) rack rack-rewrite (1.5.1) - rack-ssl-enforcer (0.2.8) + rack-ssl-enforcer (0.2.9) rack-test (0.6.3) rack (>= 1.0) - rb-fsevent (0.9.5) + rb-fsevent (0.9.6) rb-inotify (0.9.5) ffi (>= 0.5.0) - redcarpet (3.3.2) + redcarpet (3.3.3) ref (2.0.0) - rouge (1.9.1) - sass (3.4.16) + rouge (1.10.1) + sass (3.4.19) sprockets (2.12.4) hike (~> 1.2) multi_json (~> 1.0) @@ -159,24 +165,27 @@ GEM therubyracer (0.12.2) libv8 (~> 3.16.14.0) ref - thin (1.6.3) + thin (1.6.4) daemons (~> 1.0, >= 1.0.9) - eventmachine (~> 1.0) + eventmachine (~> 1.0, >= 1.0.4) rack (~> 1.0) thor (0.19.1) thread_safe (0.3.5) tilt (1.4.1) - timers (4.0.1) - hitimes tzinfo (1.2.2) thread_safe (~> 0.1) - uber (0.0.13) - uglifier (2.7.1) + uber (0.0.15) + uglifier (2.7.2) execjs (>= 0.3.0) json (>= 1.8.0) + xpath (2.0.0) + nokogiri (~> 1.3) PLATFORMS ruby DEPENDENCIES middleman-hashicorp! + +BUNDLED WITH + 1.10.6 diff --git a/website/Makefile b/website/Makefile new file mode 100644 index 000000000..63bb4cab1 --- /dev/null +++ b/website/Makefile @@ -0,0 +1,10 @@ +all: build + +init: + bundle + +dev: init + bundle exec middleman server + +build: init + bundle exec middleman build \ No newline at end of file diff --git a/website/README.md b/website/README.md index 0e1c0fa49..cb83f714d 100644 --- a/website/README.md +++ b/website/README.md @@ -12,15 +12,7 @@ requests like any normal GitHub project, and we'll merge it in. ## Running the Site Locally -Running the site locally is simple. First you need a working copy of [Ruby >= 2.0](https://www.ruby-lang.org/en/downloads/) and [Bundler](http://bundler.io/). -Then you can clone this repo and run the following commands from this directory: - -``` -$ bundle -# ( installs all gem dependencies ) -$ bundle exec middleman server -# ( boots the local server ) -``` +Running the site locally is simple. First you need a working copy of [Ruby >= 2.0](https://www.ruby-lang.org/en/downloads/) and [Bundler](http://bundler.io/). Then you can clone this repo and run `make dev`. Then open up `http://localhost:4567`. Note that for some URLs (in the navigation) you may need to append ".html" to make them work.
diff --git a/website/config.rb b/website/config.rb index 80bbb6443..f8432d255 100644 --- a/website/config.rb +++ b/website/config.rb @@ -1,13 +1,7 @@ -#------------------------------------------------------------------------- -# Configure Middleman -#------------------------------------------------------------------------- - set :base_url, "https://www.terraform.io/" activate :hashicorp do |h| - h.version = ENV["TERRAFORM_VERSION"] - h.bintray_enabled = ENV["BINTRAY_ENABLED"] - h.bintray_repo = "mitchellh/terraform" - h.bintray_user = "mitchellh" - h.bintray_key = ENV["BINTRAY_API_KEY"] + h.name = "terraform" + h.version = "0.6.6" + h.github_slug = "hashicorp/terraform" end diff --git a/website/source/assets/stylesheets/_docs.scss b/website/source/assets/stylesheets/_docs.scss index 799b631a0..0defd251a 100755 --- a/website/source/assets/stylesheets/_docs.scss +++ b/website/source/assets/stylesheets/_docs.scss @@ -20,8 +20,10 @@ body.layout-google, body.layout-heroku, body.layout-mailgun, body.layout-openstack, +body.layout-packet, body.layout-rundeck, body.layout-template, +body.layout-vsphere, body.layout-docs, body.layout-downloads, body.layout-inner, diff --git a/website/source/docs/commands/apply.html.markdown b/website/source/docs/commands/apply.html.markdown index dec4ea19d..770d41c95 100644 --- a/website/source/docs/commands/apply.html.markdown +++ b/website/source/docs/commands/apply.html.markdown @@ -35,6 +35,9 @@ The command-line flags are all optional. The list of available flags are: * `-no-color` - Disables output with coloring. +* `-parallelism=n` - Limit the number of concurrent operations as Terraform + [walks the graph](/docs/internals/graph.html#walking-the-graph). + * `-refresh=true` - Update the state for each resource prior to planning and applying. This has no effect if a plan file is given directly to apply. diff --git a/website/source/docs/commands/graph.html.markdown b/website/source/docs/commands/graph.html.markdown index c7c426142..d24005fcb 100644 --- a/website/source/docs/commands/graph.html.markdown +++ b/website/source/docs/commands/graph.html.markdown @@ -46,9 +46,6 @@ by GraphViz: $ terraform graph | dot -Tpng > graph.png ``` -Alternatively, the web-based [GraphViz Workspace](http://graphviz-dev.appspot.com) -can be used to quickly render DOT file inputs as well. - Here is an example graph output: ![Graph Example](graph-example.png) diff --git a/website/source/docs/commands/init.html.markdown b/website/source/docs/commands/init.html.markdown index ee4286c27..803d937d7 100644 --- a/website/source/docs/commands/init.html.markdown +++ b/website/source/docs/commands/init.html.markdown @@ -31,17 +31,34 @@ a remote state configuration if provided. The command-line flags are all optional. The list of available flags are: -* `-address=url` - URL of the remote storage server. Required for HTTP backend, - optional for Atlas and Consul. - -* `-access-token=token` - Authentication token for state storage server. - Required for Atlas backend, optional for Consul. - * `-backend=atlas` - Specifies the type of remote backend. Must be one - of Atlas, Consul, or HTTP. Defaults to atlas. + of Atlas, Consul, S3, or HTTP. Defaults to Atlas. -* `-name=name` - Name of the state file in the state storage server. - Required for Atlas backend. +* `-backend-config="k=v"` - Specify a configuration variable for a backend. This is how you set the required variables for the selected backend (as detailed in the [remote command documentation](/docs/command/remote.html)).
-* `-path=path` - Path of the remote state in Consul. Required for the Consul backend. +## Example: Consul + +This example will initialize the current directory and configure Consul remote storage: + +``` +$ terraform init \ + -backend=consul \ + -backend-config="address=your.consul.endpoint:443" \ + -backend-config="scheme=https" \ + -backend-config="path=tf/path/for/project" \ + /path/to/source/module +``` + +## Example: S3 + +This example will initialize the current directory and configure S3 remote storage: + +``` +$ terraform init \ + -backend=s3 \ + -backend-config="bucket=your-s3-bucket" \ + -backend-config="key=tf/path/for/project.json" \ + -backend-config="acl=bucket-owner-full-control" \ + /path/to/source/module +``` diff --git a/website/source/docs/commands/plan.html.markdown b/website/source/docs/commands/plan.html.markdown index 1c0b1b68a..e4a48ab5b 100644 --- a/website/source/docs/commands/plan.html.markdown +++ b/website/source/docs/commands/plan.html.markdown @@ -48,6 +48,9 @@ The command-line flags are all optional. The list of available flags are: changes shown in this plan are applied. Read the warning on saved plans below. +* `-parallelism=n` - Limit the number of concurrent operations as Terraform + [walks the graph](/docs/internals/graph.html#walking-the-graph). + * `-refresh=true` - Update the state prior to checking for differences. * `-state=path` - Path to the state file. Defaults to "terraform.tfstate". diff --git a/website/source/docs/commands/remote-config.html.markdown b/website/source/docs/commands/remote-config.html.markdown index c7586ac0e..6f9a84b93 100644 --- a/website/source/docs/commands/remote-config.html.markdown +++ b/website/source/docs/commands/remote-config.html.markdown @@ -45,10 +45,23 @@ The following backends are supported: * Atlas - Stores the state in Atlas. Requires the `name` and `access_token` variables. The `address` variable can optionally be provided. -* Consul - Stores the state in the KV store at a given path. - Requires the `path` variable. The `address` and `access_token` - variables can optionally be provided. Address is assumed to be the - local agent if not provided. +* Consul - Stores the state in the KV store at a given path. Requires the + `path` variable. Supports the `CONSUL_HTTP_TOKEN` environment variable + for specifying access credentials, or the `access_token` variable may + be provided, but this is not recommended since it would be included in + cleartext inside the persisted, shared state. Other supported parameters + include: + * `address` - DNS name and port of your Consul endpoint specified in the + format `dnsname:port`. Defaults to the local agent HTTP listener. This + may also be specified using the `CONSUL_HTTP_ADDR` environment variable. + * `scheme` - Specifies what protocol to use when talking to the given + `address`, either `http` or `https`. SSL support can also be triggered + by setting the environment variable `CONSUL_HTTP_SSL` to `true`. + +* Etcd - Stores the state in etcd at a given path. + Requires the `path` and `endpoints` variables. The `username` and `password` + variables can optionally be provided. `endpoints` is assumed to be a + space-separated list of etcd endpoints. * S3 - Stores the state as a given key in a given bucket on Amazon S3. Requires the `bucket` and `key` variables.
Supports and honors the standard @@ -57,6 +70,13 @@ The following backends are supported: in the `access_key`, `secret_key` and `region` variables respectively, but passing credentials this way is not recommended since they will be included in cleartext inside the persisted state. + Other supported parameters include: + * `bucket` - the name of the S3 bucket + * `key` - the path inside the bucket where the state file is placed/looked up + * `encrypt` - whether to enable [server side encryption](http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html) + of the state file + * `acl` - [Canned ACL](http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl) + to be applied to the state file. * HTTP - Stores the state using a simple REST client. State will be fetched via GET, updated via POST, and purged with DELETE. Requires the `address` variable. diff --git a/website/source/docs/configuration/interpolation.html.md b/website/source/docs/configuration/interpolation.html.md index f4730acd6..049c71825 100644 --- a/website/source/docs/configuration/interpolation.html.md +++ b/website/source/docs/configuration/interpolation.html.md @@ -74,6 +74,33 @@ are documented below. The supported built-in functions are: + * `base64decode(string)` - Given a base64-encoded string, decodes it and + returns the original string. + + * `base64encode(string)` - Returns a base64-encoded representation of the + given string. + + * `cidrhost(iprange, hostnum)` - Takes an IP address range in CIDR notation + and creates an IP address with the given host number. For example, + ``cidrhost("10.0.0.0/8", 2)`` returns ``10.0.0.2``. + + * `cidrnetmask(iprange)` - Takes an IP address range in CIDR notation + and returns the subnet mask in the dotted-decimal address format that + some systems expect for IPv4 interfaces. For example, + ``cidrnetmask("10.0.0.0/8")`` returns ``255.0.0.0``. Not applicable + to IPv6 networks since CIDR notation is the only valid notation for + IPv6. + + * `cidrsubnet(iprange, newbits, netnum)` - Takes an IP address range in + CIDR notation (like ``10.0.0.0/8``) and extends its prefix to include an + additional subnet number. For example, + ``cidrsubnet("10.0.0.0/8", 8, 2)`` returns ``10.2.0.0/16``. + + * `compact(list)` - Removes empty string elements from a list. This can be + useful in some cases, for example when passing joined lists as module + variables or when parsing module outputs. + Example: `compact(module.my_asg.load_balancer_names)` + * `concat(list1, list2)` - Combines two or more lists into a single list. Example: `concat(aws_instance.db.*.tags.Name, aws_instance.web.*.tags.Name)` @@ -120,6 +147,8 @@ The supported built-in functions are: variable. The `map` parameter should be another variable, such as `var.amis`. + * `lower(string)` - returns a copy of the string with all Unicode letters mapped to their lower case. + * `replace(string, search, replace)` - Does a search and replace on the given string. All instances of `search` are replaced with the value of `replace`. If `search` is wrapped in forward slashes, it is treated @@ -136,6 +165,8 @@ The supported built-in functions are: `a_resource_param = ["${split(",", var.CSV_STRING)}"]`. Example: `split(",", module.amod.server_ids)` + * `upper(string)` - returns a copy of the string with all Unicode letters mapped to their upper case. + ## Templates Long strings can be managed using templates.
[Templates](/docs/providers/template/index.html) are [resources](/docs/configuration/resources.html) defined by a filename and some variables to use during interpolation. They have a computed `rendered` attribute containing the result. diff --git a/website/source/docs/configuration/resources.html.md b/website/source/docs/configuration/resources.html.md index f099c5f25..d5e087fec 100644 --- a/website/source/docs/configuration/resources.html.md +++ b/website/source/docs/configuration/resources.html.md @@ -68,11 +68,20 @@ The `lifecycle` block allows the following keys to be set: destruction of a given resource. When this is set to `true`, any plan that includes a destroy of this resource will return an error message. + * `ignore_changes` (list of strings) - Customizes how diffs are evaluated for + resources, allowing individual attributes to be ignored when changes occur. + As an example, this can be used to ignore dynamic changes made to the + resource by external systems. Other meta-parameters cannot be ignored. + ~> **NOTE on create\_before\_destroy and dependencies:** Resources that utilize the `create_before_destroy` key can only depend on other resources that also include `create_before_destroy`. Referencing a resource that does not include `create_before_destroy` will result in a dependency graph cycle. +~> **NOTE on ignore\_changes:** Ignored attributes are matched by their +name, not by state ID. For example, if an `aws_route_table` has two routes defined +and the `ignore_changes` list contains "route", both routes will be ignored +(see the example below). + ------------- Within a resource, you can optionally have a **connection block**. @@ -191,6 +200,8 @@ where `LIFECYCLE` is: ``` lifecycle { [create_before_destroy = true|false] + [prevent_destroy = true|false] + [ignore_changes = [ATTRIBUTE NAME, ...]] } ``` diff --git a/website/source/docs/configuration/syntax.html.md b/website/source/docs/configuration/syntax.html.md index 2f0e7d547..8fcc6c68c 100644 --- a/website/source/docs/configuration/syntax.html.md +++ b/website/source/docs/configuration/syntax.html.md @@ -54,6 +54,11 @@ Basic bullet point reference: is [documented here](/docs/configuration/interpolation.html). + * Multiline strings can use shell-style "here doc" syntax, with + the string starting with a marker like `<<EOF` and ending with + `EOF` on a line of its own. diff --git a/website/source/docs/internals/graph.html.markdown b/website/source/docs/internals/graph.html.markdown To walk the graph, a standard depth-first traversal is done. Graph -walking is done with as much parallelism as possible: a node is walked -as soon as all of its dependencies are walked. +walking is done in parallel: a node is walked as soon as all of its +dependencies are walked. + +The amount of parallelism is limited using a semaphore to prevent too many +concurrent operations from overwhelming the resources of the machine running +Terraform. By default, up to 10 nodes in the graph will be processed +concurrently. This number can be set using the `-parallelism` flag on the +[plan](/docs/commands/plan.html), [apply](/docs/commands/apply.html), and +[destroy](/docs/commands/destroy.html) commands. + +Setting `-parallelism` is considered an advanced operation and should not be +necessary for normal usage of Terraform. It may be helpful in certain special +use cases or to help debug Terraform issues. + +Note that some providers (AWS, for example) handle API rate limiting issues at +a lower level by implementing graceful backoff/retry in their respective API +clients. For this reason, Terraform does not use this `parallelism` feature to +address API rate limits directly.
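To make the new `ignore_changes` lifecycle flag documented above concrete, here is a minimal configuration sketch. It is illustrative only and not part of the diff: the resource name, the placeholder AMI ID, and the choice of ignoring `tags` are assumptions.

```
# Hypothetical example of the ignore_changes lifecycle flag. Terraform
# manages this instance, but diffs on the "tags" attribute (for example,
# tags added later by an external system) are ignored. Matching is by
# attribute name, so ignoring "route" on an aws_route_table would ignore
# every route block.
resource "aws_instance" "web" {
  ami           = "ami-12345678" # placeholder AMI ID
  instance_type = "t2.micro"

  tags {
    Name = "managed-by-terraform"
  }

  lifecycle {
    ignore_changes = ["tags"]
  }
}
```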
diff --git a/website/source/docs/modules/sources.html.markdown b/website/source/docs/modules/sources.html.markdown index b0a2b4d0c..d9e6a1316 100644 --- a/website/source/docs/modules/sources.html.markdown +++ b/website/source/docs/modules/sources.html.markdown @@ -81,6 +81,30 @@ You can use the same parameters to GitHub repositories as you can generic Git repositories (such as tags or branches). See the documentation for generic Git repositories for more information. +#### Private GitHub Repos + +If you need Terraform to be able to fetch modules from private GitHub repos on +a remote machine (like Atlas or a CI server), you'll need to provide +Terraform with credentials that can be used to authenticate as a user with read +access to the private repo. + +First, create a [machine +user](https://developer.github.com/guides/managing-deploy-keys/#machine-users) +with access to read from the private repo in question, then embed this user's +credentials into the source field: + +``` +module "private-infra" { + source = "git::https://MACHINE-USER:MACHINE-PASS@github.com/org/privatemodules//modules/foo" +} +``` + +Note that Terraform does not yet support interpolations in the `source` field, +so the machine username and password will have to be embedded directly into the +source string. You can track +[GH-1439](https://github.com/hashicorp/terraform/issues/1439) to learn when this +limitation is lifted. + ## BitBucket Terraform will automatically recognize BitBucket URLs and turn them into diff --git a/website/source/docs/providers/atlas/r/artifact.html.markdown b/website/source/docs/providers/atlas/r/artifact.html.markdown index 7c8be2985..08dae8fd9 100644 --- a/website/source/docs/providers/atlas/r/artifact.html.markdown +++ b/website/source/docs/providers/atlas/r/artifact.html.markdown @@ -32,6 +32,7 @@ resource "atlas_artifact" "web" { } # Start our instance with the dynamic ami value +# Remember to include the AWS region as it is part of the full ID resource "aws_instance" "app" { ami = "${atlas_artifact.web.metadata_full.region-us-east-1}" ... @@ -82,4 +83,3 @@ The following attributes are exported: For example, the "region.us-east-1" key will become "region-us-east-1". * `version_real` - The matching version of the artifact * `slug` - The artifact slug in Atlas - diff --git a/website/source/docs/providers/aws/r/app_cookie_stickiness_policy.html.markdown b/website/source/docs/providers/aws/r/app_cookie_stickiness_policy.html.markdown index 6d6215f22..c15f09d66 100644 --- a/website/source/docs/providers/aws/r/app_cookie_stickiness_policy.html.markdown +++ b/website/source/docs/providers/aws/r/app_cookie_stickiness_policy.html.markdown @@ -15,20 +15,20 @@ Provides an application cookie stickiness policy, which allows an ELB to wed its ``` resource "aws_elb" "lb" { name = "test-lb" - availability_zones = ["us-east-1a"] - listener { - instance_port = 8000 - instance_protocol = "http" - lb_port = 80 - lb_protocol = "http" - } + availability_zones = ["us-east-1a"] + listener { + instance_port = 8000 + instance_protocol = "http" + lb_port = 80 + lb_protocol = "http" + } } resource "aws_app_cookie_stickiness_policy" "foo" { - name = "foo_policy" - load_balancer = "${aws_elb.lb.id}" - lb_port = 80 - cookie_name = "MyAppCookie" + name = "foo_policy" + load_balancer = "${aws_elb.lb.name}" + lb_port = 80 + cookie_name = "MyAppCookie" } ``` @@ -37,7 +37,7 @@ resource "aws_app_cookie_stickiness_policy" "foo" { The following arguments are supported: * `name` - (Required) The name of the stickiness policy.
-* `load_balancer` - (Required) The name of the load balancer to which the policy should be attached. * `lb_port` - (Required) The load balancer port to which the policy should be applied. This must be an active listener on the load @@ -50,6 +50,6 @@ The following attributes are exported: * `id` - The ID of the policy. * `name` - The name of the stickiness policy. -* `load_balancer` - The load balancer to which the policy is attached. +* `load_balancer` - The name of the load balancer to which the policy is attached. * `lb_port` - The load balancer port to which the policy is applied. * `cookie_name` - The application cookie whose lifetime the ELB's cookie should follow. diff --git a/website/source/docs/providers/aws/r/autoscaling_group.html.markdown b/website/source/docs/providers/aws/r/autoscaling_group.html.markdown index 022b1cf71..1c7641a2d 100644 --- a/website/source/docs/providers/aws/r/autoscaling_group.html.markdown +++ b/website/source/docs/providers/aws/r/autoscaling_group.html.markdown @@ -57,12 +57,20 @@ The following arguments are supported: for this number of healthy instances in all attached load balancers. (See also [Waiting for Capacity](#waiting-for-capacity) below.) * `force_delete` - (Optional) Allows deleting the autoscaling group without waiting - for all instances in the pool to terminate. + for all instances in the pool to terminate. You can force an autoscaling group to delete + even if it's in the process of scaling a resource. Normally, Terraform + drains all the instances before deleting the group. This bypasses that + behavior and potentially leaves resources dangling. * `load_balancers` (Optional) A list of load balancer names to add to the autoscaling group. * `vpc_zone_identifier` (Optional) A list of subnet IDs to launch resources in. * `termination_policies` (Optional) A list of policies to decide how the instances in the auto scale group should be terminated. * `tag` (Optional) A list of tag blocks. Tags documented below. +* `wait_for_capacity_timeout` (Default: "10m") A maximum + [duration](https://golang.org/pkg/time/#ParseDuration) that Terraform should + wait for ASG instances to be healthy before timing out. (See also [Waiting + for Capacity](#waiting-for-capacity) below.) Setting this to "0" causes + Terraform to skip all Capacity Waiting behavior. Tags support the following: @@ -110,9 +118,12 @@ Terraform considers an instance "healthy" when the ASG reports `HealthStatus: Docs](https://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingGroupLifecycle.html) for more information on an ASG's lifecycle. -Terraform will wait for healthy instances for up to 10 minutes. If ASG creation -is taking more than a few minutes, it's worth investigating for scaling activity -errors, which can be caused by problems with the selected Launch Configuration. +Terraform will wait for healthy instances for up to +`wait_for_capacity_timeout`. If ASG creation is taking more than a few minutes, +it's worth investigating for scaling activity errors, which can be caused by +problems with the selected Launch Configuration. + +Setting `wait_for_capacity_timeout` to `"0"` disables ASG Capacity waiting. #### Waiting for ELB Capacity @@ -121,8 +132,9 @@ Balancers. If `min_elb_capacity` is set, Terraform will wait for that number of Instances to be `"InService"` in all attached `load_balancers`. This can be used to ensure that service is being provided before Terraform moves on.
-As with ASG Capacity, Terraform will wait for up to 10 minutes for -`"InService"` instances. If ASG creation takes more than a few minutes, this -could indicate one of a number of configuration problems. See the [AWS Docs on -Load Balancer Troubleshooting](https://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-troubleshooting.html) +As with ASG Capacity, Terraform will wait for up to `wait_for_capacity_timeout` +for `"InService"` instances. If ASG creation takes more than a few minutes, +this could indicate one of a number of configuration problems. See the [AWS +Docs on Load Balancer +Troubleshooting](https://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-troubleshooting.html) for more information. diff --git a/website/source/docs/providers/aws/r/autoscaling_lifecycle_hooks.html.markdown b/website/source/docs/providers/aws/r/autoscaling_lifecycle_hooks.html.markdown new file mode 100644 index 000000000..a753c864b --- /dev/null +++ b/website/source/docs/providers/aws/r/autoscaling_lifecycle_hooks.html.markdown @@ -0,0 +1,55 @@ +--- +layout: "aws" +page_title: "AWS: aws_autoscaling_lifecycle_hook" +sidebar_current: "docs-aws-resource-autoscaling-lifecycle-hook" +description: |- + Provides an AutoScaling Lifecycle Hook resource. +--- + +# aws\_autoscaling\_lifecycle\_hook + +Provides an AutoScaling Lifecycle Hook resource. + +## Example Usage + +``` +resource "aws_autoscaling_group" "foobar" { + availability_zones = ["us-west-2a"] + name = "terraform-test-foobar5" + health_check_type = "EC2" + termination_policies = ["OldestInstance"] + tag { + key = "Foo" + value = "foo-bar" + propagate_at_launch = true + } +} + +resource "aws_autoscaling_lifecycle_hook" "foobar" { + name = "foobar" + autoscaling_group_name = "${aws_autoscaling_group.foobar.name}" + default_result = "CONTINUE" + heartbeat_timeout = 2000 + lifecycle_transition = "autoscaling:EC2_INSTANCE_LAUNCHING" + notification_metadata = <<EOF diff --git a/website/source/docs/providers/aws/r/eip.html.markdown b/website/source/docs/providers/aws/r/eip.html.markdown +~> **NOTE:** You can specify either the `instance` ID or the `network_interface` ID, +but not both. Including both will **not** return an error from the AWS API, but will +have undefined behavior. See the relevant [AssociateAddress API Call][1] for +more information. + ## Attributes Reference The following attributes are exported: @@ -36,3 +41,5 @@ The following attributes are exported: * `instance` - Contains the ID of the attached instance. * `network_interface` - Contains the ID of the attached network interface. + +[1]: http://docs.aws.amazon.com/fr_fr/AWSEC2/latest/APIReference/API_AssociateAddress.html diff --git a/website/source/docs/providers/aws/r/elasticache_cluster.html.markdown b/website/source/docs/providers/aws/r/elasticache_cluster.html.markdown index d2cec07e8..ef1d69ed4 100644 --- a/website/source/docs/providers/aws/r/elasticache_cluster.html.markdown +++ b/website/source/docs/providers/aws/r/elasticache_cluster.html.markdown @@ -27,8 +27,8 @@ resource "aws_elasticache_cluster" "bar" { The following arguments are supported: -* `cluster_id` – (Required) Group identifier. This parameter is stored as a -lowercase string +* `cluster_id` – (Required) Group identifier. Elasticache converts + this name to lowercase * `engine` – (Required) Name of the cache engine to be used for this cache cluster.
Valid values for this parameter are `memcached` or `redis` diff --git a/website/source/docs/providers/aws/r/elasticsearch_domain.html.markdown b/website/source/docs/providers/aws/r/elasticsearch_domain.html.markdown new file mode 100644 index 000000000..373edd59b --- /dev/null +++ b/website/source/docs/providers/aws/r/elasticsearch_domain.html.markdown @@ -0,0 +1,83 @@ +--- +layout: "aws" +page_title: "AWS: aws_elasticsearch_domain" +sidebar_current: "docs-aws-elasticsearch-domain" +description: |- + Provides an ElasticSearch Domain. +--- + +# aws\_elasticsearch\_domain + + +## Example Usage + +``` +resource "aws_elasticsearch_domain" "es" { + domain_name = "tf-test" + advanced_options { + "rest.action.multi.allow_explicit_index" = true + } + + access_policies = <<EOF diff --git a/website/source/docs/providers/aws/r/glacier_vault.html.markdown b/website/source/docs/providers/aws/r/glacier_vault.html.markdown +~> **NOTE:** When removing a Glacier Vault, the Vault must be empty. + +## Example Usage + +``` +resource "aws_sns_topic" "aws_sns_topic" { + name = "glacier-sns-topic" +} + +resource "aws_glacier_vault" "my_archive" { + name = "MyArchive" + + notification { + sns_topic = "${aws_sns_topic.aws_sns_topic.arn}" + events = ["ArchiveRetrievalCompleted","InventoryRetrievalCompleted"] + } + + access_policy = <<EOF diff --git a/website/source/docs/providers/aws/r/instance.html.markdown b/website/source/docs/providers/aws/r/instance.html.markdown ## Block devices @@ -134,3 +169,4 @@ The following attributes are exported: [1]: /docs/providers/aws/r/autoscaling_group.html [2]: /docs/configuration/resources.html#lifecycle +[3]: /docs/providers/aws/r/spot_instance_request.html diff --git a/website/source/docs/providers/aws/r/lb_cookie_stickiness_policy.html.markdown b/website/source/docs/providers/aws/r/lb_cookie_stickiness_policy.html.markdown index bb4ad524e..59e581c12 100644 --- a/website/source/docs/providers/aws/r/lb_cookie_stickiness_policy.html.markdown +++ b/website/source/docs/providers/aws/r/lb_cookie_stickiness_policy.html.markdown @@ -25,7 +25,7 @@ resource "aws_elb" "lb" { } resource "aws_lb_cookie_stickiness_policy" "foo" { - name = "foo_policy" + name = "foo-policy" load_balancer = "${aws_elb.lb.id}" lb_port = 80 cookie_expiration_period = 600 diff --git a/website/source/docs/providers/aws/r/opsworks_custom_layer.html.markdown b/website/source/docs/providers/aws/r/opsworks_custom_layer.html.markdown new file mode 100644 index 000000000..8bab63692 --- /dev/null +++ b/website/source/docs/providers/aws/r/opsworks_custom_layer.html.markdown @@ -0,0 +1,65 @@ +--- +layout: "aws" +page_title: "AWS: aws_opsworks_custom_layer" +sidebar_current: "docs-aws-resource-opsworks-custom-layer" +description: |- + Provides an OpsWorks custom layer resource. +--- + +# aws\_opsworks\_custom\_layer + +Provides an OpsWorks custom layer resource. + +## Example Usage + +``` +resource "aws_opsworks_custom_layer" "custlayer" { + name = "My Awesome Custom Layer" + short_name = "awesome" + stack_id = "${aws_opsworks_stack.main.id}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) A human-readable name for the layer. +* `short_name` - (Required) A short, machine-readable name for the layer, which will be used to identify it in the Chef node JSON. +* `stack_id` - (Required) The id of the stack the layer will belong to. +* `auto_assign_elastic_ips` - (Optional) Whether to automatically assign an elastic IP address to the layer's instances. +* `auto_assign_public_ips` - (Optional) For stacks belonging to a VPC, whether to automatically assign a public IP address to each of the layer's instances. +* `custom_instance_profile_arn` - (Optional) The ARN of an IAM profile that will be used for the layer's instances.
+* `custom_security_group_ids` - (Optional) Ids for a set of security groups to apply to the layer's instances. +* `auto_healing` - (Optional) Whether to enable auto-healing for the layer. +* `install_updates_on_boot` - (Optional) Whether to install OS and package updates on each instance when it boots. +* `instance_shutdown_timeout` - (Optional) The time, in seconds, that OpsWorks will wait for Chef to complete after triggering the Shutdown event. +* `drain_elb_on_shutdown` - (Optional) Whether to enable Elastic Load Balancing connection draining. +* `system_packages` - (Optional) Names of a set of system packages to install on the layer's instances. +* `use_ebs_optimized_instances` - (Optional) Whether to use EBS-optimized instances. +* `ebs_volume` - (Optional) `ebs_volume` blocks, as described below, will each create an EBS volume and connect it to the layer's instances. + +The following extra optional arguments, all lists of Chef recipe names, allow +custom Chef recipes to be applied to layer instances at the five different +lifecycle events, if custom cookbooks are enabled on the layer's stack: + +* `custom_configure_recipes` +* `custom_deploy_recipes` +* `custom_setup_recipes` +* `custom_shutdown_recipes` +* `custom_undeploy_recipes` + +An `ebs_volume` block supports the following arguments: + +* `mount_point` - (Required) The path to mount the EBS volume on the layer's instances. +* `size` - (Required) The size of the volume in gigabytes. +* `number_of_disks` - (Required) The number of disks to use for the EBS volume. +* `raid_level` - (Required) The RAID level to use for the volume. +* `type` - (Optional) The type of volume to create. This may be `standard` (the default), `io1` or `gp2`. +* `iops` - (Optional) For PIOPS volumes, the IOPS per disk. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The id of the layer. diff --git a/website/source/docs/providers/aws/r/opsworks_ganglia_layer.html.markdown b/website/source/docs/providers/aws/r/opsworks_ganglia_layer.html.markdown new file mode 100644 index 000000000..2137e0bf2 --- /dev/null +++ b/website/source/docs/providers/aws/r/opsworks_ganglia_layer.html.markdown @@ -0,0 +1,66 @@ +--- +layout: "aws" +page_title: "AWS: aws_opsworks_ganglia_layer" +sidebar_current: "docs-aws-resource-opsworks-ganglia-layer" +description: |- + Provides an OpsWorks Ganglia layer resource. +--- + +# aws\_opsworks\_ganglia\_layer + +Provides an OpsWorks Ganglia layer resource. + +## Example Usage + +``` +resource "aws_opsworks_ganglia_layer" "monitor" { + stack_id = "${aws_opsworks_stack.main.id}" + password = "foobarbaz" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `stack_id` - (Required) The id of the stack the layer will belong to. +* `password` - (Required) The password to use for Ganglia. +* `name` - (Optional) A human-readable name for the layer. +* `auto_assign_elastic_ips` - (Optional) Whether to automatically assign an elastic IP address to the layer's instances. +* `auto_assign_public_ips` - (Optional) For stacks belonging to a VPC, whether to automatically assign a public IP address to each of the layer's instances. +* `custom_instance_profile_arn` - (Optional) The ARN of an IAM profile that will be used for the layer's instances. +* `custom_security_group_ids` - (Optional) Ids for a set of security groups to apply to the layer's instances. +* `auto_healing` - (Optional) Whether to enable auto-healing for the layer. 
+* `install_updates_on_boot` - (Optional) Whether to install OS and package updates on each instance when it boots. +* `instance_shutdown_timeout` - (Optional) The time, in seconds, that OpsWorks will wait for Chef to complete after triggering the Shutdown event. +* `drain_elb_on_shutdown` - (Optional) Whether to enable Elastic Load Balancing connection draining. +* `system_packages` - (Optional) Names of a set of system packages to install on the layer's instances. +* `url` - (Optional) The URL path to use for Ganglia. Defaults to "/ganglia". +* `username` - (Optional) The username to use for Ganglia. Defaults to "opsworks". +* `use_ebs_optimized_instances` - (Optional) Whether to use EBS-optimized instances. +* `ebs_volume` - (Optional) `ebs_volume` blocks, as described below, will each create an EBS volume and connect it to the layer's instances. + +The following extra optional arguments, all lists of Chef recipe names, allow +custom Chef recipes to be applied to layer instances at the five different +lifecycle events, if custom cookbooks are enabled on the layer's stack: + +* `custom_configure_recipes` +* `custom_deploy_recipes` +* `custom_setup_recipes` +* `custom_shutdown_recipes` +* `custom_undeploy_recipes` + +An `ebs_volume` block supports the following arguments: + +* `mount_point` - (Required) The path to mount the EBS volume on the layer's instances. +* `size` - (Required) The size of the volume in gigabytes. +* `number_of_disks` - (Required) The number of disks to use for the EBS volume. +* `raid_level` - (Required) The RAID level to use for the volume. +* `type` - (Optional) The type of volume to create. This may be `standard` (the default), `io1` or `gp2`. +* `iops` - (Optional) For PIOPS volumes, the IOPS per disk. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The id of the layer. diff --git a/website/source/docs/providers/aws/r/opsworks_haproxy_layer.html.markdown b/website/source/docs/providers/aws/r/opsworks_haproxy_layer.html.markdown new file mode 100644 index 000000000..a921a8135 --- /dev/null +++ b/website/source/docs/providers/aws/r/opsworks_haproxy_layer.html.markdown @@ -0,0 +1,69 @@ +--- +layout: "aws" +page_title: "AWS: aws_opsworks_haproxy_layer" +sidebar_current: "docs-aws-resource-opsworks-haproxy-layer" +description: |- + Provides an OpsWorks HAProxy layer resource. +--- + +# aws\_opsworks\_haproxy\_layer + +Provides an OpsWorks HAProxy layer resource. + +## Example Usage + +``` +resource "aws_opsworks_haproxy_layer" "lb" { + stack_id = "${aws_opsworks_stack.main.id}" + stats_password = "foobarbaz" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `stack_id` - (Required) The id of the stack the layer will belong to. +* `stats_password` - (Required) The password to use for HAProxy stats. +* `name` - (Optional) A human-readable name for the layer. +* `auto_assign_elastic_ips` - (Optional) Whether to automatically assign an elastic IP address to the layer's instances. +* `auto_assign_public_ips` - (Optional) For stacks belonging to a VPC, whether to automatically assign a public IP address to each of the layer's instances. +* `custom_instance_profile_arn` - (Optional) The ARN of an IAM profile that will be used for the layer's instances. +* `custom_security_group_ids` - (Optional) Ids for a set of security groups to apply to the layer's instances. +* `auto_healing` - (Optional) Whether to enable auto-healing for the layer.
+* `healthcheck_method` - (Optional) HTTP method to use for instance healthchecks. Defaults to "OPTIONS". +* `healthcheck_url` - (Optional) URL path to use for instance healthchecks. Defaults to "/". +* `install_updates_on_boot` - (Optional) Whether to install OS and package updates on each instance when it boots. +* `instance_shutdown_timeout` - (Optional) The time, in seconds, that OpsWorks will wait for Chef to complete after triggering the Shutdown event. +* `drain_elb_on_shutdown` - (Optional) Whether to enable Elastic Load Balancing connection draining. +* `stats_enabled` - (Optional) Whether to enable HAProxy stats. +* `stats_url` - (Optional) The HAProxy stats URL. Defaults to "/haproxy?stats". +* `stats_user` - (Optional) The username for HAProxy stats. Defaults to "opsworks". +* `system_packages` - (Optional) Names of a set of system packages to install on the layer's instances. +* `use_ebs_optimized_instances` - (Optional) Whether to use EBS-optimized instances. +* `ebs_volume` - (Optional) `ebs_volume` blocks, as described below, will each create an EBS volume and connect it to the layer's instances. + +The following extra optional arguments, all lists of Chef recipe names, allow +custom Chef recipes to be applied to layer instances at the five different +lifecycle events, if custom cookbooks are enabled on the layer's stack: + +* `custom_configure_recipes` +* `custom_deploy_recipes` +* `custom_setup_recipes` +* `custom_shutdown_recipes` +* `custom_undeploy_recipes` + +An `ebs_volume` block supports the following arguments: + +* `mount_point` - (Required) The path to mount the EBS volume on the layer's instances. +* `size` - (Required) The size of the volume in gigabytes. +* `number_of_disks` - (Required) The number of disks to use for the EBS volume. +* `raid_level` - (Required) The RAID level to use for the volume. +* `type` - (Optional) The type of volume to create. This may be `standard` (the default), `io1` or `gp2`. +* `iops` - (Optional) For PIOPS volumes, the IOPS per disk. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The id of the layer. diff --git a/website/source/docs/providers/aws/r/opsworks_java_app_layer.html.markdown b/website/source/docs/providers/aws/r/opsworks_java_app_layer.html.markdown new file mode 100644 index 000000000..c9a4823fe --- /dev/null +++ b/website/source/docs/providers/aws/r/opsworks_java_app_layer.html.markdown @@ -0,0 +1,67 @@ +--- +layout: "aws" +page_title: "AWS: aws_opsworks_java_app_layer" +sidebar_current: "docs-aws-resource-opsworks-java-app-layer" +description: |- + Provides an OpsWorks Java application layer resource. +--- + +# aws\_opsworks\_java\_app\_layer + +Provides an OpsWorks Java application layer resource. + +## Example Usage + +``` +resource "aws_opsworks_java_app_layer" "app" { + stack_id = "${aws_opsworks_stack.main.id}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `stack_id` - (Required) The id of the stack the layer will belong to. +* `name` - (Optional) A human-readable name for the layer. +* `app_server` - (Optional) Keyword for the application container to use. Defaults to "tomcat". +* `app_server_version` - (Optional) Version of the selected application container to use. Defaults to "7". +* `auto_assign_elastic_ips` - (Optional) Whether to automatically assign an elastic IP address to the layer's instances. 
+* `auto_assign_public_ips` - (Optional) For stacks belonging to a VPC, whether to automatically assign a public IP address to each of the layer's instances. +* `custom_instance_profile_arn` - (Optional) The ARN of an IAM profile that will be used for the layer's instances. +* `custom_security_group_ids` - (Optional) Ids for a set of security groups to apply to the layer's instances. +* `auto_healing` - (Optional) Whether to enable auto-healing for the layer. +* `install_updates_on_boot` - (Optional) Whether to install OS and package updates on each instance when it boots. +* `instance_shutdown_timeout` - (Optional) The time, in seconds, that OpsWorks will wait for Chef to complete after triggering the Shutdown event. +* `jvm_type` - (Optional) Keyword for the type of JVM to use. Defaults to `openjdk`. +* `jvm_options` - (Optional) Options to set for the JVM. +* `jvm_version` - (Optional) Version of JVM to use. Defaults to "7". +* `drain_elb_on_shutdown` - (Optional) Whether to enable Elastic Load Balancing connection draining. +* `system_packages` - (Optional) Names of a set of system packages to install on the layer's instances. +* `use_ebs_optimized_instances` - (Optional) Whether to use EBS-optimized instances. +* `ebs_volume` - (Optional) `ebs_volume` blocks, as described below, will each create an EBS volume and connect it to the layer's instances. + +The following extra optional arguments, all lists of Chef recipe names, allow +custom Chef recipes to be applied to layer instances at the five different +lifecycle events, if custom cookbooks are enabled on the layer's stack: + +* `custom_configure_recipes` +* `custom_deploy_recipes` +* `custom_setup_recipes` +* `custom_shutdown_recipes` +* `custom_undeploy_recipes` + +An `ebs_volume` block supports the following arguments: + +* `mount_point` - (Required) The path to mount the EBS volume on the layer's instances. +* `size` - (Required) The size of the volume in gigabytes. +* `number_of_disks` - (Required) The number of disks to use for the EBS volume. +* `raid_level` - (Required) The RAID level to use for the volume. +* `type` - (Optional) The type of volume to create. This may be `standard` (the default), `io1` or `gp2`. +* `iops` - (Optional) For PIOPS volumes, the IOPS per disk. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The id of the layer. diff --git a/website/source/docs/providers/aws/r/opsworks_memcached_layer.html.markdown b/website/source/docs/providers/aws/r/opsworks_memcached_layer.html.markdown new file mode 100644 index 000000000..4a725bd7c --- /dev/null +++ b/website/source/docs/providers/aws/r/opsworks_memcached_layer.html.markdown @@ -0,0 +1,63 @@ +--- +layout: "aws" +page_title: "AWS: aws_opsworks_memcached_layer" +sidebar_current: "docs-aws-resource-opsworks-memcached-layer" +description: |- + Provides an OpsWorks memcached layer resource. +--- + +# aws\_opsworks\_memcached\_layer + +Provides an OpsWorks memcached layer resource. + +## Example Usage + +``` +resource "aws_opsworks_memcached_layer" "cache" { + stack_id = "${aws_opsworks_stack.main.id}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `stack_id` - (Required) The id of the stack the layer will belong to. +* `name` - (Optional) A human-readable name for the layer. +* `allocated_memory` - (Optional) Amount of memory to allocate for the cache on each instance, in megabytes. Defaults to 512MB. 
+* `auto_assign_elastic_ips` - (Optional) Whether to automatically assign an elastic IP address to the layer's instances. +* `auto_assign_public_ips` - (Optional) For stacks belonging to a VPC, whether to automatically assign a public IP address to each of the layer's instances. +* `custom_instance_profile_arn` - (Optional) The ARN of an IAM profile that will be used for the layer's instances. +* `custom_security_group_ids` - (Optional) Ids for a set of security groups to apply to the layer's instances. +* `auto_healing` - (Optional) Whether to enable auto-healing for the layer. +* `install_updates_on_boot` - (Optional) Whether to install OS and package updates on each instance when it boots. +* `instance_shutdown_timeout` - (Optional) The time, in seconds, that OpsWorks will wait for Chef to complete after triggering the Shutdown event. +* `drain_elb_on_shutdown` - (Optional) Whether to enable Elastic Load Balancing connection draining. +* `system_packages` - (Optional) Names of a set of system packages to install on the layer's instances. +* `use_ebs_optimized_instances` - (Optional) Whether to use EBS-optimized instances. +* `ebs_volume` - (Optional) `ebs_volume` blocks, as described below, will each create an EBS volume and connect it to the layer's instances. + +The following extra optional arguments, all lists of Chef recipe names, allow +custom Chef recipes to be applied to layer instances at the five different +lifecycle events, if custom cookbooks are enabled on the layer's stack: + +* `custom_configure_recipes` +* `custom_deploy_recipes` +* `custom_setup_recipes` +* `custom_shutdown_recipes` +* `custom_undeploy_recipes` + +An `ebs_volume` block supports the following arguments: + +* `mount_point` - (Required) The path to mount the EBS volume on the layer's instances. +* `size` - (Required) The size of the volume in gigabytes. +* `number_of_disks` - (Required) The number of disks to use for the EBS volume. +* `raid_level` - (Required) The RAID level to use for the volume. +* `type` - (Optional) The type of volume to create. This may be `standard` (the default), `io1` or `gp2`. +* `iops` - (Optional) For PIOPS volumes, the IOPS per disk. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The id of the layer. diff --git a/website/source/docs/providers/aws/r/opsworks_mysql_layer.html.markdown b/website/source/docs/providers/aws/r/opsworks_mysql_layer.html.markdown new file mode 100644 index 000000000..fcbcef97d --- /dev/null +++ b/website/source/docs/providers/aws/r/opsworks_mysql_layer.html.markdown @@ -0,0 +1,64 @@ +--- +layout: "aws" +page_title: "AWS: aws_opsworks_mysql_layer" +sidebar_current: "docs-aws-resource-opsworks-mysql-layer" +description: |- + Provides an OpsWorks MySQL layer resource. +--- + +# aws\_opsworks\_mysql\_layer + +Provides an OpsWorks MySQL layer resource. + +## Example Usage + +``` +resource "aws_opsworks_mysql_layer" "db" { + stack_id = "${aws_opsworks_stack.main.id}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `stack_id` - (Required) The id of the stack the layer will belong to. +* `name` - (Optional) A human-readable name for the layer. +* `auto_assign_elastic_ips` - (Optional) Whether to automatically assign an elastic IP address to the layer's instances. +* `auto_assign_public_ips` - (Optional) For stacks belonging to a VPC, whether to automatically assign a public IP address to each of the layer's instances. 
+* `custom_instance_profile_arn` - (Optional) The ARN of an IAM profile that will be used for the layer's instances. +* `custom_security_group_ids` - (Optional) Ids for a set of security groups to apply to the layer's instances. +* `auto_healing` - (Optional) Whether to enable auto-healing for the layer. +* `install_updates_on_boot` - (Optional) Whether to install OS and package updates on each instance when it boots. +* `instance_shutdown_timeout` - (Optional) The time, in seconds, that OpsWorks will wait for Chef to complete after triggering the Shutdown event. +* `drain_elb_on_shutdown` - (Optional) Whether to enable Elastic Load Balancing connection draining. +* `root_password` - (Optional) Root password to use for MySQL. +* `root_password_on_all_instances` - (Optional) Whether to set the root user password on all instances in the stack so they can access the instances in this layer. +* `system_packages` - (Optional) Names of a set of system packages to install on the layer's instances. +* `use_ebs_optimized_instances` - (Optional) Whether to use EBS-optimized instances. +* `ebs_volume` - (Optional) `ebs_volume` blocks, as described below, will each create an EBS volume and connect it to the layer's instances (see the consolidated example at the end of this section). + +The following extra optional arguments, all lists of Chef recipe names, allow +custom Chef recipes to be applied to layer instances at the five different +lifecycle events, if custom cookbooks are enabled on the layer's stack: + +* `custom_configure_recipes` +* `custom_deploy_recipes` +* `custom_setup_recipes` +* `custom_shutdown_recipes` +* `custom_undeploy_recipes` + +An `ebs_volume` block supports the following arguments: + +* `mount_point` - (Required) The path to mount the EBS volume on the layer's instances. +* `size` - (Required) The size of the volume in gigabytes. +* `number_of_disks` - (Required) The number of disks to use for the EBS volume. +* `raid_level` - (Required) The RAID level to use for the volume. +* `type` - (Optional) The type of volume to create. This may be `standard` (the default), `io1` or `gp2`. +* `iops` - (Optional) For PIOPS volumes, the IOPS per disk. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The id of the layer. diff --git a/website/source/docs/providers/aws/r/opsworks_nodejs_app_layer.html.markdown b/website/source/docs/providers/aws/r/opsworks_nodejs_app_layer.html.markdown new file mode 100644 index 000000000..e5a0f5b8a --- /dev/null +++ b/website/source/docs/providers/aws/r/opsworks_nodejs_app_layer.html.markdown @@ -0,0 +1,63 @@ +--- +layout: "aws" +page_title: "AWS: aws_opsworks_nodejs_app_layer" +sidebar_current: "docs-aws-resource-opsworks-nodejs-app-layer" +description: |- + Provides an OpsWorks NodeJS application layer resource. +--- + +# aws\_opsworks\_nodejs\_app\_layer + +Provides an OpsWorks NodeJS application layer resource. + +## Example Usage + +``` +resource "aws_opsworks_nodejs_app_layer" "app" { + stack_id = "${aws_opsworks_stack.main.id}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `stack_id` - (Required) The id of the stack the layer will belong to. +* `name` - (Optional) A human-readable name for the layer. +* `auto_assign_elastic_ips` - (Optional) Whether to automatically assign an elastic IP address to the layer's instances. +* `auto_assign_public_ips` - (Optional) For stacks belonging to a VPC, whether to automatically assign a public IP address to each of the layer's instances.
+* `custom_instance_profile_arn` - (Optional) The ARN of an IAM profile that will be used for the layer's instances. +* `custom_security_group_ids` - (Optional) Ids for a set of security groups to apply to the layer's instances. +* `auto_healing` - (Optional) Whether to enable auto-healing for the layer. +* `install_updates_on_boot` - (Optional) Whether to install OS and package updates on each instance when it boots. +* `instance_shutdown_timeout` - (Optional) The time, in seconds, that OpsWorks will wait for Chef to complete after triggering the Shutdown event. +* `drain_elb_on_shutdown` - (Optional) Whether to enable Elastic Load Balancing connection draining. +* `nodejs_version` - (Optional) The version of NodeJS to use. Defaults to "0.10.38". +* `system_packages` - (Optional) Names of a set of system packages to install on the layer's instances. +* `use_ebs_optimized_instances` - (Optional) Whether to use EBS-optimized instances. +* `ebs_volume` - (Optional) `ebs_volume` blocks, as described below, will each create an EBS volume and connect it to the layer's instances. + +The following extra optional arguments, all lists of Chef recipe names, allow +custom Chef recipes to be applied to layer instances at the five different +lifecycle events, if custom cookbooks are enabled on the layer's stack: + +* `custom_configure_recipes` +* `custom_deploy_recipes` +* `custom_setup_recipes` +* `custom_shutdown_recipes` +* `custom_undeploy_recipes` + +An `ebs_volume` block supports the following arguments: + +* `mount_point` - (Required) The path to mount the EBS volume on the layer's instances. +* `size` - (Required) The size of the volume in gigabytes. +* `number_of_disks` - (Required) The number of disks to use for the EBS volume. +* `raid_level` - (Required) The RAID level to use for the volume. +* `type` - (Optional) The type of volume to create. This may be `standard` (the default), `io1` or `gp2`. +* `iops` - (Optional) For PIOPS volumes, the IOPS per disk. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The id of the layer. diff --git a/website/source/docs/providers/aws/r/opsworks_php_app_layer.html.markdown b/website/source/docs/providers/aws/r/opsworks_php_app_layer.html.markdown new file mode 100644 index 000000000..ec91e4ed3 --- /dev/null +++ b/website/source/docs/providers/aws/r/opsworks_php_app_layer.html.markdown @@ -0,0 +1,62 @@ +--- +layout: "aws" +page_title: "AWS: aws_opsworks_php_app_layer" +sidebar_current: "docs-aws-resource-opsworks-php-app-layer" +description: |- + Provides an OpsWorks PHP application layer resource. +--- + +# aws\_opsworks\_php\_app\_layer + +Provides an OpsWorks PHP application layer resource. + +## Example Usage + +``` +resource "aws_opsworks_php_app_layer" "app" { + stack_id = "${aws_opsworks_stack.main.id}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `stack_id` - (Required) The id of the stack the layer will belong to. +* `name` - (Optional) A human-readable name for the layer. +* `auto_assign_elastic_ips` - (Optional) Whether to automatically assign an elastic IP address to the layer's instances. +* `auto_assign_public_ips` - (Optional) For stacks belonging to a VPC, whether to automatically assign a public IP address to each of the layer's instances. +* `custom_instance_profile_arn` - (Optional) The ARN of an IAM profile that will be used for the layer's instances. +* `custom_security_group_ids` - (Optional) Ids for a set of security groups to apply to the layer's instances. 
+* `auto_healing` - (Optional) Whether to enable auto-healing for the layer. +* `install_updates_on_boot` - (Optional) Whether to install OS and package updates on each instance when it boots. +* `instance_shutdown_timeout` - (Optional) The time, in seconds, that OpsWorks will wait for Chef to complete after triggering the Shutdown event. +* `drain_elb_on_shutdown` - (Optional) Whether to enable Elastic Load Balancing connection draining. +* `system_packages` - (Optional) Names of a set of system packages to install on the layer's instances. +* `use_ebs_optimized_instances` - (Optional) Whether to use EBS-optimized instances. +* `ebs_volume` - (Optional) `ebs_volume` blocks, as described below, will each create an EBS volume and connect it to the layer's instances. + +The following extra optional arguments, all lists of Chef recipe names, allow +custom Chef recipes to be applied to layer instances at the five different +lifecycle events, if custom cookbooks are enabled on the layer's stack: + +* `custom_configure_recipes` +* `custom_deploy_recipes` +* `custom_setup_recipes` +* `custom_shutdown_recipes` +* `custom_undeploy_recipes` + +An `ebs_volume` block supports the following arguments: + +* `mount_point` - (Required) The path to mount the EBS volume on the layer's instances. +* `size` - (Required) The size of the volume in gigabytes. +* `number_of_disks` - (Required) The number of disks to use for the EBS volume. +* `raid_level` - (Required) The RAID level to use for the volume. +* `type` - (Optional) The type of volume to create. This may be `standard` (the default), `io1` or `gp2`. +* `iops` - (Optional) For PIOPS volumes, the IOPS per disk. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The id of the layer. diff --git a/website/source/docs/providers/aws/r/opsworks_rails_app_layer.html.markdown b/website/source/docs/providers/aws/r/opsworks_rails_app_layer.html.markdown new file mode 100644 index 000000000..ee4f85ed4 --- /dev/null +++ b/website/source/docs/providers/aws/r/opsworks_rails_app_layer.html.markdown @@ -0,0 +1,68 @@ +--- +layout: "aws" +page_title: "AWS: aws_opsworks_rails_app_layer" +sidebar_current: "docs-aws-resource-opsworks-rails-app-layer" +description: |- + Provides an OpsWorks Ruby on Rails application layer resource. +--- + +# aws\_opsworks\_rails\_app\_layer + +Provides an OpsWorks Ruby on Rails application layer resource. + +## Example Usage + +``` +resource "aws_opsworks_rails_app_layer" "app" { + stack_id = "${aws_opsworks_stack.main.id}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `stack_id` - (Required) The id of the stack the layer will belong to. +* `name` - (Optional) A human-readable name for the layer. +* `app_server` - (Optional) Keyword for the app server to use. Defaults to "apache_passenger". +* `auto_assign_elastic_ips` - (Optional) Whether to automatically assign an elastic IP address to the layer's instances. +* `auto_assign_public_ips` - (Optional) For stacks belonging to a VPC, whether to automatically assign a public IP address to each of the layer's instances. +* `bundler_version` - (Optional) When OpsWorks is managing Bundler, which version to use. Defaults to "1.5.3". +* `custom_instance_profile_arn` - (Optional) The ARN of an IAM profile that will be used for the layer's instances. +* `custom_security_group_ids` - (Optional) Ids for a set of security groups to apply to the layer's instances. +* `auto_healing` - (Optional) Whether to enable auto-healing for the layer. 
+* `install_updates_on_boot` - (Optional) Whether to install OS and package updates on each instance when it boots. +* `instance_shutdown_timeout` - (Optional) The time, in seconds, that OpsWorks will wait for Chef to complete after triggering the Shutdown event. +* `drain_elb_on_shutdown` - (Optional) Whether to enable Elastic Load Balancing connection draining. +* `manage_bundler` - (Optional) Whether OpsWorks should manage bundler. On by default. +* `passenger_version` - (Optional) The version of Passenger to use. Defaults to "4.0.46". +* `ruby_version` - (Optional) The version of Ruby to use. Defaults to "2.0.0". +* `rubygems_version` - (Optional) The version of RubyGems to use. Defaults to "2.2.2". +* `system_packages` - (Optional) Names of a set of system packages to install on the layer's instances. +* `use_ebs_optimized_instances` - (Optional) Whether to use EBS-optimized instances. +* `ebs_volume` - (Optional) `ebs_volume` blocks, as described below, will each create an EBS volume and connect it to the layer's instances. + +The following extra optional arguments, all lists of Chef recipe names, allow +custom Chef recipes to be applied to layer instances at the five different +lifecycle events, if custom cookbooks are enabled on the layer's stack: + +* `custom_configure_recipes` +* `custom_deploy_recipes` +* `custom_setup_recipes` +* `custom_shutdown_recipes` +* `custom_undeploy_recipes` + +An `ebs_volume` block supports the following arguments: + +* `mount_point` - (Required) The path to mount the EBS volume on the layer's instances. +* `size` - (Required) The size of the volume in gigabytes. +* `number_of_disks` - (Required) The number of disks to use for the EBS volume. +* `raid_level` - (Required) The RAID level to use for the volume. +* `type` - (Optional) The type of volume to create. This may be `standard` (the default), `io1` or `gp2`. +* `iops` - (Optional) For PIOPS volumes, the IOPS per disk. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The id of the layer. diff --git a/website/source/docs/providers/aws/r/opsworks_stack.html.markdown b/website/source/docs/providers/aws/r/opsworks_stack.html.markdown new file mode 100644 index 000000000..d664ca1a9 --- /dev/null +++ b/website/source/docs/providers/aws/r/opsworks_stack.html.markdown @@ -0,0 +1,68 @@ +--- +layout: "aws" +page_title: "AWS: aws_opsworks_stack" +sidebar_current: "docs-aws-resource-opsworks-stack" +description: |- + Provides an OpsWorks stack resource. +--- + +# aws\_opsworks\_stack + +Provides an OpsWorks stack resource. + +## Example Usage + +``` +resource "aws_opsworks_stack" "main" { + name = "awesome-stack" + region = "us-west-1" + service_role_arn = "${aws_iam_role.opsworks.arn}" + default_instance_profile_arn = "${aws_iam_instance_profile.opsworks.arn}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name of the stack. +* `region` - (Required) The name of the region where the stack will exist. +* `service_role_arn` - (Required) The ARN of an IAM role that the OpsWorks service will act as. +* `default_instance_profile_arn` - (Required) The ARN of an IAM Instance Profile that created instances + will have by default. +* `berkshelf_version` - (Optional) If `manage_berkshelf` is enabled, the version of Berkshelf to use. +* `color` - (Optional) Color to paint next to the stack's resources in the OpsWorks console. 
+* `default_availability_zone` - (Optional) Name of the availability zone where instances will be created
+  by default. This is required unless you set `vpc_id`.
+* `configuration_manager_name` - (Optional) Name of the configuration manager to use. Defaults to "Chef".
+* `configuration_manager_version` - (Optional) Version of the configuration manager to use. Defaults to "11.4".
+* `custom_cookbooks_source` - (Optional) When `use_custom_cookbooks` is set, provide this sub-object as
+  described below.
+* `default_os` - (Optional) Name of OS that will be installed on instances by default.
+* `default_root_device_type` - (Optional) Name of the type of root device instances will have by default.
+* `default_ssh_key_name` - (Optional) Name of the SSH keypair that instances will have by default.
+* `default_subnet_id` - (Optional) Id of the subnet in which instances will be created by default. Mandatory
+  if `vpc_id` is set, and forbidden if it isn't.
+* `hostname_theme` - (Optional) Keyword representing the naming scheme that will be used for instance hostnames
+  within this stack.
+* `manage_berkshelf` - (Optional) Boolean value controlling whether OpsWorks will run Berkshelf for this stack.
+* `use_custom_cookbooks` - (Optional) Boolean value controlling whether the custom cookbook settings are
+  enabled.
+* `use_opsworks_security_groups` - (Optional) Boolean value controlling whether the standard OpsWorks
+  security groups apply to created instances.
+* `vpc_id` - (Optional) The id of the VPC that this stack belongs to.
+
+The `custom_cookbooks_source` block supports the following arguments:
+
+* `type` - (Required) The type of source to use. For example, "archive".
+* `url` - (Required) The URL where the cookbooks resource can be found.
+* `username` - (Optional) Username to use when authenticating to the source.
+* `password` - (Optional) Password to use when authenticating to the source.
+* `ssh_key` - (Optional) SSH key to use when authenticating to the source.
+* `revision` - (Optional) For sources that are version-aware, the revision to use.
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `id` - The id of the stack.
diff --git a/website/source/docs/providers/aws/r/opsworks_static_web_layer.html.markdown b/website/source/docs/providers/aws/r/opsworks_static_web_layer.html.markdown
new file mode 100644
index 000000000..70272e8d0
--- /dev/null
+++ b/website/source/docs/providers/aws/r/opsworks_static_web_layer.html.markdown
@@ -0,0 +1,62 @@
+---
+layout: "aws"
+page_title: "AWS: aws_opsworks_static_web_layer"
+sidebar_current: "docs-aws-resource-opsworks-static-web-layer"
+description: |-
+  Provides an OpsWorks static web server layer resource.
+---
+
+# aws\_opsworks\_static\_web\_layer
+
+Provides an OpsWorks static web server layer resource.
+
+## Example Usage
+
+```
+resource "aws_opsworks_static_web_layer" "web" {
+  stack_id = "${aws_opsworks_stack.main.id}"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `stack_id` - (Required) The id of the stack the layer will belong to.
+* `name` - (Optional) A human-readable name for the layer.
+* `auto_assign_elastic_ips` - (Optional) Whether to automatically assign an elastic IP address to the layer's instances.
+* `auto_assign_public_ips` - (Optional) For stacks belonging to a VPC, whether to automatically assign a public IP address to each of the layer's instances.
+* `custom_instance_profile_arn` - (Optional) The ARN of an IAM profile that will be used for the layer's instances.
+* `custom_security_group_ids` - (Optional) Ids for a set of security groups to apply to the layer's instances.
+* `auto_healing` - (Optional) Whether to enable auto-healing for the layer.
+* `install_updates_on_boot` - (Optional) Whether to install OS and package updates on each instance when it boots.
+* `instance_shutdown_timeout` - (Optional) The time, in seconds, that OpsWorks will wait for Chef to complete after triggering the Shutdown event.
+* `drain_elb_on_shutdown` - (Optional) Whether to enable Elastic Load Balancing connection draining.
+* `system_packages` - (Optional) Names of a set of system packages to install on the layer's instances.
+* `use_ebs_optimized_instances` - (Optional) Whether to use EBS-optimized instances.
+* `ebs_volume` - (Optional) `ebs_volume` blocks, as described below, will each create an EBS volume and connect it to the layer's instances.
+
+The following extra optional arguments, all lists of Chef recipe names, allow
+custom Chef recipes to be applied to layer instances at the five different
+lifecycle events, if custom cookbooks are enabled on the layer's stack:
+
+* `custom_configure_recipes`
+* `custom_deploy_recipes`
+* `custom_setup_recipes`
+* `custom_shutdown_recipes`
+* `custom_undeploy_recipes`
+
+An `ebs_volume` block supports the following arguments:
+
+* `mount_point` - (Required) The path to mount the EBS volume on the layer's instances.
+* `size` - (Required) The size of the volume in gigabytes.
+* `number_of_disks` - (Required) The number of disks to use for the EBS volume.
+* `raid_level` - (Required) The RAID level to use for the volume.
+* `type` - (Optional) The type of volume to create. This may be `standard` (the default), `io1` or `gp2`.
+* `iops` - (Optional) For PIOPS volumes, the IOPS per disk.
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `id` - The id of the layer.
diff --git a/website/source/docs/providers/aws/r/placement_group.html.markdown b/website/source/docs/providers/aws/r/placement_group.html.markdown
new file mode 100644
index 000000000..e4ad98df8
--- /dev/null
+++ b/website/source/docs/providers/aws/r/placement_group.html.markdown
@@ -0,0 +1,34 @@
+---
+layout: "aws"
+page_title: "AWS: aws_placement_group"
+sidebar_current: "docs-aws-resource-placement-group"
+description: |-
+  Provides an EC2 placement group.
+---
+
+# aws\_placement\_group
+
+Provides an EC2 placement group. Read more about placement groups
+in [AWS Docs](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html).
+
+## Example Usage
+
+```
+resource "aws_placement_group" "web" {
+  name     = "hunky-dory-pg"
+  strategy = "cluster"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) The name of the placement group.
+* `strategy` - (Required) The placement strategy. The only supported value is `cluster`.
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `id` - The name of the placement group.
diff --git a/website/source/docs/providers/aws/r/rds_cluster.html.markdown b/website/source/docs/providers/aws/r/rds_cluster.html.markdown
new file mode 100644
index 000000000..c45814f46
--- /dev/null
+++ b/website/source/docs/providers/aws/r/rds_cluster.html.markdown
@@ -0,0 +1,87 @@
+---
+layout: "aws"
+page_title: "AWS: aws_rds_cluster"
+sidebar_current: "docs-aws-resource-rds-cluster"
+description: |-
+  Provides an RDS Cluster Resource
+---
+
+# aws\_rds\_cluster
+
+Provides an RDS Cluster Resource.
A Cluster Resource defines attributes that are
+applied to the entire cluster of [RDS Cluster Instances][3]. Use the RDS Cluster
+resource and RDS Cluster Instances to create and use Amazon Aurora, a MySQL-compatible
+database engine.
+
+For more information on Amazon Aurora, see [Aurora on Amazon RDS][2] in the Amazon RDS User Guide.
+
+## Example Usage
+
+```
+resource "aws_rds_cluster" "default" {
+  cluster_identifier = "aurora-cluster-demo"
+  availability_zones = ["us-west-2a","us-west-2b","us-west-2c"]
+  database_name = "mydb"
+  master_username = "foo"
+  master_password = "bar"
+}
+```
+
+~> **NOTE:** RDS Cluster resources that are created without any matching
+RDS Cluster Instances do not currently display in the AWS Console.
+
+## Argument Reference
+
+For more detailed documentation about each argument, refer to
+the [AWS official documentation](http://docs.aws.amazon.com/AmazonRDS/latest/CommandLineReference/CLIReference-cmd-ModifyDBInstance.html).
+
+The following arguments are supported:
+
+* `cluster_identifier` - (Required) The Cluster Identifier. Must be a lower case
+string.
+* `database_name` - (Optional) The name for your database of up to 8 alpha-numeric
+  characters. If you do not provide a name, Amazon RDS will not create a
+  database in the DB cluster you are creating
+* `master_password` - (Required) Password for the master DB user. Note that this may
+  show up in logs, and it will be stored in the state file
+* `master_username` - (Required) Username for the master DB user
+* `final_snapshot_identifier` - (Optional) The name of your final DB snapshot
+  when this DB cluster is deleted. If omitted, no final snapshot will be
+  made.
+* `availability_zones` - (Optional) A list of EC2 Availability Zones that
+  instances in the DB cluster can be created in
+* `backup_retention_period` - (Optional) The days to retain backups for. Default 1
+* `port` - (Optional) The port on which the DB accepts connections
+* `vpc_security_group_ids` - (Optional) List of VPC security groups to associate
+  with the Cluster
+* `apply_immediately` - (Optional) Specifies whether any cluster modifications
+  are applied immediately, or during the next maintenance window. Default is
+  `false`. See [Amazon RDS Documentation for more information.](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html)
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `id` - The RDS Cluster Identifier
+* `cluster_identifier` - The RDS Cluster Identifier
+* `cluster_members` – List of RDS Instances that are a part of this cluster
+* `address` - The address of the RDS instance.
+* `allocated_storage` - The amount of allocated storage
+* `availability_zones` - The availability zone of the instance
+* `backup_retention_period` - The backup retention period
+* `backup_window` - The backup window
+* `endpoint` - The primary, writeable connection endpoint
+* `engine` - The database engine
+* `engine_version` - The database engine version
+* `maintenance_window` - The instance maintenance window
+* `database_name` - The database name
+* `port` - The database port
+* `status` - The RDS instance status
+* `username` - The master username for the database
+* `storage_encrypted` - Specifies whether the DB instance is encrypted
+
+[1]: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Replication.html
+
+[2]: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Aurora.html
+[3]: /docs/providers/aws/r/rds_cluster_instance.html
diff --git a/website/source/docs/providers/aws/r/rds_cluster_instance.html.markdown b/website/source/docs/providers/aws/r/rds_cluster_instance.html.markdown
new file mode 100644
index 000000000..782339a34
--- /dev/null
+++ b/website/source/docs/providers/aws/r/rds_cluster_instance.html.markdown
@@ -0,0 +1,89 @@
+---
+layout: "aws"
+page_title: "AWS: aws_rds_cluster_instance"
+sidebar_current: "docs-aws-resource-rds-cluster-instance"
+description: |-
+  Provides an RDS Cluster Resource Instance
+---
+
+# aws\_rds\_cluster\_instance
+
+Provides an RDS Cluster Resource Instance. A Cluster Instance Resource defines
+attributes that are specific to a single instance in a [RDS Cluster][3],
+specifically running Amazon Aurora.
+
+Unlike other RDS resources that support replication, with Amazon Aurora you do
+not designate a primary and subsequent replicas. Instead, you simply add RDS
+Instances and Aurora manages the replication. You can use the [count][5]
+meta-parameter to make multiple instances and join them all to the same RDS
+Cluster, or you may specify different Cluster Instance resources with various
+`instance_class` sizes.
+
+For more information on Amazon Aurora, see [Aurora on Amazon RDS][2] in the Amazon RDS User Guide.
+
+## Example Usage
+
+```
+resource "aws_rds_cluster_instance" "cluster_instances" {
+  count = 2
+  identifier = "aurora-cluster-demo"
+  cluster_identifier = "${aws_rds_cluster.default.id}"
+  instance_class = "db.r3.large"
+}
+
+resource "aws_rds_cluster" "default" {
+  cluster_identifier = "aurora-cluster-demo"
+  availability_zones = ["us-west-2a","us-west-2b","us-west-2c"]
+  database_name = "mydb"
+  master_username = "foo"
+  master_password = "bar"
+}
+```
+
+## Argument Reference
+
+For more detailed documentation about each argument, refer to
+the [AWS official documentation](http://docs.aws.amazon.com/AmazonRDS/latest/CommandLineReference/CLIReference-cmd-ModifyDBInstance.html).
+
+The following arguments are supported:
+
+* `identifier` - (Optional) The Instance Identifier. Must be a lower case
+string. If omitted, a unique identifier will be generated.
+* `cluster_identifier` - (Required) The Cluster Identifier for this Instance to
+join. Must be a lower case string.
+* `instance_class` - (Required) The instance class to use. For details on CPU
+and memory, see [Scaling Aurora DB Instances][4]. Aurora currently
+  supports the below instance classes.
+  - db.r3.large
+  - db.r3.xlarge
+  - db.r3.2xlarge
+  - db.r3.4xlarge
+  - db.r3.8xlarge
+* `publicly_accessible` - (Optional) Bool to control if instance is publicly accessible.
+Default `false`.
See the documentation on [Creating DB Instances][6] for more
+details on controlling this property.
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `cluster_identifier` - The RDS Cluster Identifier
+* `identifier` - The Instance identifier
+* `id` - The Instance identifier
+* `writer` – Boolean indicating if this instance is writable. `False` indicates
+this instance is a read replica
+* `allocated_storage` - The amount of allocated storage
+* `availability_zones` - The availability zone of the instance
+* `endpoint` - The IP address for this instance. May not be writable
+* `engine` - The database engine
+* `engine_version` - The database engine version
+* `database_name` - The database name
+* `port` - The database port
+* `status` - The RDS instance status
+
+[2]: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Aurora.html
+[3]: /docs/providers/aws/r/rds_cluster.html
+[4]: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Managing.html
+[5]: /docs/configuration/resources.html#count
+[6]: http://docs.aws.amazon.com/fr_fr/AmazonRDS/latest/APIReference/API_CreateDBInstance.html
diff --git a/website/source/docs/providers/aws/r/s3_bucket.html.markdown b/website/source/docs/providers/aws/r/s3_bucket.html.markdown
index 011f73347..da008053c 100644
--- a/website/source/docs/providers/aws/r/s3_bucket.html.markdown
+++ b/website/source/docs/providers/aws/r/s3_bucket.html.markdown
@@ -41,6 +41,23 @@ resource "aws_s3_bucket" "b" {
 }
 ```
 
+### Using CORS
+
+```
+resource "aws_s3_bucket" "b" {
+  bucket = "s3-website-test.hashicorp.com"
+  acl = "public-read"
+
+  cors_rule {
+    allowed_headers = ["*"]
+    allowed_methods = ["PUT","POST"]
+    allowed_origins = ["https://s3-website-test.hashicorp.com"]
+    expose_headers = ["ETag"]
+    max_age_seconds = 3000
+  }
+}
+```
+
 ### Using versioning
 
 ```
@@ -64,6 +81,7 @@ The following arguments are supported:
 * `tags` - (Optional) A mapping of tags to assign to the bucket.
 * `force_destroy` - (Optional, Default:false ) A boolean that indicates all objects should be deleted from the bucket so that the bucket can be destroyed without error. These objects are *not* recoverable.
 * `website` - (Optional) A website object (documented below).
+* `cors_rule` - (Optional) A rule of [Cross-Origin Resource Sharing](http://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html) (documented below).
 * `versioning` - (Optional) A state of [versioning](http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html) (documented below)
 
 The website object supports the following:
@@ -72,6 +90,14 @@ The website object supports the following:
 * `error_document` - (Optional) An absolute path to the document to return in case of a 4XX error.
 * `redirect_all_requests_to` - (Optional) A hostname to redirect all website requests for this bucket to.
 
+The CORS object supports the following:
+
+* `allowed_headers` (Optional) Specifies which headers are allowed.
+* `allowed_methods` (Required) Specifies which methods are allowed. Can be `GET`, `PUT`, `POST`, `DELETE` or `HEAD`.
+* `allowed_origins` (Required) Specifies which origins are allowed.
+* `expose_headers` (Optional) Specifies which headers to expose in the response.
+* `max_age_seconds` (Optional) Specifies the time in seconds that the browser can cache the response for a preflight request.
+
 The versioning supports the following:
 
 * `enabled` - (Optional) Enable versioning. Once you version-enable a bucket, it can never return to an unversioned state. You can, however, suspend versioning on that bucket.
diff --git a/website/source/docs/providers/aws/r/s3_bucket_object.html.markdown b/website/source/docs/providers/aws/r/s3_bucket_object.html.markdown
index 63d201b82..bd54047d6 100644
--- a/website/source/docs/providers/aws/r/s3_bucket_object.html.markdown
+++ b/website/source/docs/providers/aws/r/s3_bucket_object.html.markdown
@@ -29,6 +29,15 @@ The following arguments are supported:
 * `bucket` - (Required) The name of the bucket to put the file in.
 * `key` - (Required) The name of the object once it is in the bucket.
 * `source` - (Required) The path to the source file being uploaded to the bucket.
+* `content` - (Required unless `source` given) The literal content being uploaded to the bucket.
+* `cache_control` - (Optional) Specifies caching behavior along the request/reply chain. Read [w3c cache_control](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9) for further details.
+* `content_disposition` - (Optional) Specifies presentational information for the object. Read [w3c content_disposition](http://www.w3.org/Protocols/rfc2616/rfc2616-sec19.html#sec19.5.1) for further information.
+* `content_encoding` - (Optional) Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field. Read [w3c content encoding](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.11) for further information.
+* `content_language` - (Optional) The language the content is in, e.g. en-US or en-GB.
+* `content_type` - (Optional) A standard MIME type describing the format of the object data, e.g. application/octet-stream. Any valid MIME type is accepted.
+
+Either `source` or `content` must be provided to specify the bucket content.
+These two arguments are mutually exclusive.
 
 ## Attributes Reference
 
diff --git a/website/source/docs/providers/aws/r/security_group_rule.html.markdown b/website/source/docs/providers/aws/r/security_group_rule.html.markdown
index b77286296..4a772d1a7 100644
--- a/website/source/docs/providers/aws/r/security_group_rule.html.markdown
+++ b/website/source/docs/providers/aws/r/security_group_rule.html.markdown
@@ -49,6 +49,7 @@ or `egress` (outbound).
   depending on the `type`.
 * `self` - (Optional) If true, the security group itself will be added as
   a source to this ingress rule.
+* `to_port` - (Required) The end of the port range.
 
 ## Attributes Reference
 
diff --git a/website/source/docs/providers/aws/r/spot_instance_request.html.markdown b/website/source/docs/providers/aws/r/spot_instance_request.html.markdown
index abb1f4705..5ca6daf9f 100644
--- a/website/source/docs/providers/aws/r/spot_instance_request.html.markdown
+++ b/website/source/docs/providers/aws/r/spot_instance_request.html.markdown
@@ -51,6 +51,9 @@ Spot Instance Requests support all the same arguments as
 * `wait_for_fulfillment` - (Optional; Default: false) If set, Terraform will
   wait for the Spot Request to be fulfilled, and will throw an error if the
   timeout of 10m is reached.
+* `spot_type` - (Optional; Default: "persistent") If set to "one-time", after
+  the instance is terminated, the spot request will be closed. Also, Terraform
+  can't manage one-time spot requests, just launch them (see the sketch below).
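+
+For example, a minimal sketch of a one-time request (the AMI, instance type,
+and price are illustrative):
+
+```
+resource "aws_spot_instance_request" "worker" {
+  # A one-time request is closed once the instance is terminated
+  ami           = "ami-12345678"
+  instance_type = "m3.medium"
+  spot_price    = "0.03"
+  spot_type     = "one-time"
+}
+```
+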
## Attributes Reference diff --git a/website/source/docs/providers/aws/r/vpc_endpoint.html.markdown b/website/source/docs/providers/aws/r/vpc_endpoint.html.markdown index 813aba329..51fdbf5ee 100644 --- a/website/source/docs/providers/aws/r/vpc_endpoint.html.markdown +++ b/website/source/docs/providers/aws/r/vpc_endpoint.html.markdown @@ -27,7 +27,7 @@ The following arguments are supported: * `vpc_id` - (Required) The ID of the VPC in which the endpoint will be used. * `service_name` - (Required) The AWS service name, in the form `com.amazonaws.region.service`. -* `policy_document` - (Optional) A policy to attach to the endpoint that controls access to the service. +* `policy` - (Optional) A policy to attach to the endpoint that controls access to the service. * `route_table_ids` - (Optional) One or more route table IDs. ## Attributes Reference diff --git a/website/source/docs/providers/aws/r/vpn_connection_route.html.markdown b/website/source/docs/providers/aws/r/vpn_connection_route.html.markdown index a0d2f2ccc..0f4782f2d 100644 --- a/website/source/docs/providers/aws/r/vpn_connection_route.html.markdown +++ b/website/source/docs/providers/aws/r/vpn_connection_route.html.markdown @@ -24,7 +24,7 @@ resource "aws_vpn_gateway" "vpn_gateway" { resource "aws_customer_gateway" "customer_gateway" { bgp_asn = 60000 ip_address = "172.0.0.1" - type = ipsec.1 + type = "ipsec.1" } resource "aws_vpn_connection" "main" { diff --git a/website/source/docs/providers/cloudstack/r/vpc.html.markdown b/website/source/docs/providers/cloudstack/r/vpc.html.markdown index d77357631..4610feb0d 100644 --- a/website/source/docs/providers/cloudstack/r/vpc.html.markdown +++ b/website/source/docs/providers/cloudstack/r/vpc.html.markdown @@ -39,7 +39,7 @@ The following arguments are supported: * `project` - (Optional) The name or ID of the project to deploy this instance to. Changing this forces a new resource to be created. - + * `zone` - (Required) The name or ID of the zone where this disk volume will be available. Changing this forces a new resource to be created. @@ -49,3 +49,4 @@ The following attributes are exported: * `id` - The ID of the VPC. * `display_text` - The display text of the VPC. +* `source_nat_ip` - The source NAT IP assigned to the VPC. diff --git a/website/source/docs/providers/docker/r/container.html.markdown b/website/source/docs/providers/docker/r/container.html.markdown index 5653f139a..91a4714b7 100644 --- a/website/source/docs/providers/docker/r/container.html.markdown +++ b/website/source/docs/providers/docker/r/container.html.markdown @@ -35,7 +35,8 @@ The following arguments are supported: as is shown in the example above. * `command` - (Optional, list of strings) The command to use to start the - container. + container. For example, to run `/usr/bin/myprogram -f baz.conf` set the + command to be `["/usr/bin/myprogram", "-f", "baz.conf"]`. * `dns` - (Optional, set of strings) Set of DNS servers. * `env` - (Optional, set of strings) Environmental variables to set. * `links` - (Optional, set of strings) Set of links for link based @@ -76,7 +77,7 @@ the following: volume will be mounted. * `host_path` - (Optional, string) The path on the host where the volume is coming from. -* `read_only` - (Optinal, bool) If true, this volume will be readonly. +* `read_only` - (Optional, bool) If true, this volume will be readonly. Defaults to false. 
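+
+As a sketch, a read-only bind mount might be declared like this (the paths are
+illustrative, and a `docker_image` resource named `app` is assumed):
+
+```
+resource "docker_container" "app" {
+  name  = "app"
+  image = "${docker_image.app.latest}"
+
+  volumes {
+    # Mount the host's config directory into the container, read-only
+    host_path      = "/opt/myapp/config"
+    container_path = "/etc/myapp"
+    read_only      = true
+  }
+}
+```
+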
## Attributes Reference

diff --git a/website/source/docs/providers/google/r/compute_backend_service.html.markdown b/website/source/docs/providers/google/r/compute_backend_service.html.markdown
index c9d9396c5..5a862e238 100644
--- a/website/source/docs/providers/google/r/compute_backend_service.html.markdown
+++ b/website/source/docs/providers/google/r/compute_backend_service.html.markdown
@@ -19,6 +19,7 @@ resource "google_compute_backend_service" "foobar" {
   port_name = "http"
   protocol = "HTTP"
   timeout_sec = 10
+  region = "us-central1"
 
   backend {
     group = "${google_compute_instance_group_manager.foo.instance_group}"
@@ -67,6 +68,7 @@ The following arguments are supported:
   for checking the health of the backend service.
 * `description` - (Optional) The textual description for the backend service.
 * `backend` - (Optional) The list of backends that serve this BackendService. See *Backend* below.
+* `region` - (Optional) The region the service sits in. If not specified, the project region is used.
 * `port_name` - (Optional) The name of a service that has been added to an
   instance group in this backend. See [related docs](https://cloud.google.com/compute/docs/instance-groups/#specifying_service_endpoints) for details. Defaults to http.
 
diff --git a/website/source/docs/providers/google/r/compute_instance.html.markdown b/website/source/docs/providers/google/r/compute_instance.html.markdown
index bf8add9e6..7426d700c 100644
--- a/website/source/docs/providers/google/r/compute_instance.html.markdown
+++ b/website/source/docs/providers/google/r/compute_instance.html.markdown
@@ -82,8 +82,8 @@ The following arguments are supported:
    are not allowed to be used simultaneously.
 
 * `network_interface` - (Required) Networks to attach to the instance. This can be
-   specified multiple times for multiple networks. Structure is documented
-   below.
+   specified multiple times for multiple networks, but GCE is currently limited
+   to just 1. Structure is documented below.
 
 * `network` - (DEPRECATED, Required) Networks to attach to the instance. This can be
    specified multiple times for multiple networks. Structure is documented
@@ -145,6 +145,17 @@ The `service_account` block supports:
 * `scopes` - (Required) A list of service scopes. Both OAuth2 URLs and gcloud
     short names are supported.
 
+The `scheduling` block supports:
+
+* `preemptible` - (Optional) Whether the instance is preemptible.
+
+* `on_host_maintenance` - (Optional) Describes maintenance behavior for
+  the instance. Can be MIGRATE or TERMINATE. For more info, read
+  [here](https://cloud.google.com/compute/docs/instances/setting-instance-scheduling-options)
+
+* `automatic_restart` - (Optional) Specifies if the instance should be
+  restarted if it was terminated by Compute Engine (not a user).
+
 ## Attributes Reference
 
 The following attributes are exported:
diff --git a/website/source/docs/providers/google/r/compute_target_pool.html.markdown b/website/source/docs/providers/google/r/compute_target_pool.html.markdown
index 1efc5905e..82bc4a7d1 100644
--- a/website/source/docs/providers/google/r/compute_target_pool.html.markdown
+++ b/website/source/docs/providers/google/r/compute_target_pool.html.markdown
@@ -49,6 +49,7 @@ The following arguments are supported:
 * `session_affinity` - (Optional) How to distribute load. Options are "NONE" (no
     affinity). "CLIENT\_IP" (hash of the source/dest addresses / ports), and
     "CLIENT\_IP\_PROTO" also includes the protocol (default "NONE").
+* `region` - (Optional) Where the target pool resides. Defaults to project region.
## Attributes Reference

diff --git a/website/source/docs/providers/google/r/storage_bucket.html.markdown b/website/source/docs/providers/google/r/storage_bucket.html.markdown
index a7eea21b1..2821e5588 100644
--- a/website/source/docs/providers/google/r/storage_bucket.html.markdown
+++ b/website/source/docs/providers/google/r/storage_bucket.html.markdown
@@ -17,9 +17,8 @@ Example creating a private bucket in standard storage, in the EU region.
 
 ```
 resource "google_storage_bucket" "image-store" {
-  name = "image-store-bucket"
-  predefined_acl = "projectPrivate"
-  location = "EU"
+  name     = "image-store-bucket"
+  location = "EU"
 
   website {
     main_page_suffix = "index.html"
     not_found_page = "404.html"
@@ -33,7 +32,8 @@ resource "google_storage_bucket" "image-store" {
 The following arguments are supported:
 
 * `name` - (Required) The name of the bucket.
-* `predefined_acl` - (Optional, Default: 'private') The [canned GCS ACL](https://cloud.google.com/storage/docs/access-control#predefined-acl) to apply.
+* `predefined_acl` - (Optional, Deprecated) The [canned GCS ACL](https://cloud.google.com/storage/docs/access-control#predefined-acl) to apply. Please switch
+to `google_storage_bucket_acl.predefined_acl`.
 * `location` - (Optional, Default: 'US') The [GCS location](https://cloud.google.com/storage/docs/bucket-locations)
 * `force_destroy` - (Optional, Default: false) When deleting a bucket, this boolean option will delete all contained objects. If you try to delete a bucket that contains objects, Terraform will fail that run.
diff --git a/website/source/docs/providers/google/r/storage_bucket_acl.html.markdown b/website/source/docs/providers/google/r/storage_bucket_acl.html.markdown
new file mode 100644
index 000000000..b7734b065
--- /dev/null
+++ b/website/source/docs/providers/google/r/storage_bucket_acl.html.markdown
@@ -0,0 +1,36 @@
+---
+layout: "google"
+page_title: "Google: google_storage_bucket_acl"
+sidebar_current: "docs-google-resource-storage-acl"
+description: |-
+  Creates a new bucket ACL in Google Cloud Storage.
+---
+
+# google\_storage\_bucket\_acl
+
+Creates a new bucket ACL in Google Cloud Storage (GCS).
+
+## Example Usage
+
+Example creating an ACL on a bucket with one owner and one reader.
+
+```
+resource "google_storage_bucket" "image_store" {
+  name = "image-store-bucket"
+  location = "EU"
+}
+
+resource "google_storage_bucket_acl" "image-store-acl" {
+  bucket = "${google_storage_bucket.image_store.name}"
+  role_entity = ["OWNER:user-my.email@gmail.com",
+                 "READER:group-mygroup"]
+}
+```
+
+## Argument Reference
+
+* `bucket` - (Required) The name of the bucket it applies to.
+* `predefined_acl` - (Optional) The [canned GCS ACL](https://cloud.google.com/storage/docs/access-control#predefined-acl) to apply (see the sketch below). Must be set if both `role_entity` and `default_acl` are not.
+* `default_acl` - (Optional) The [canned GCS ACL](https://cloud.google.com/storage/docs/access-control#predefined-acl) to apply to future buckets. Must be set if both `role_entity` and `predefined_acl` are not.
+* `role_entity` - (Optional) List of role/entity pairs in the form `ROLE:entity`. See [GCS Bucket ACL documentation](https://cloud.google.com/storage/docs/json_api/v1/bucketAccessControls) for more details. Must be set if both `predefined_acl` and `default_acl` are not.
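+
+As a sketch of the alternative form, a canned ACL can be applied instead of
+explicit role/entity pairs (the two styles are mutually exclusive):
+
+```
+resource "google_storage_bucket_acl" "image-store-acl" {
+  bucket         = "${google_storage_bucket.image_store.name}"
+  predefined_acl = "projectPrivate"
+}
+```
+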
diff --git a/website/source/docs/providers/google/r/storage_bucket_object.html.markdown b/website/source/docs/providers/google/r/storage_bucket_object.html.markdown
index 76e4b7c5d..61b32823f 100644
--- a/website/source/docs/providers/google/r/storage_bucket_object.html.markdown
+++ b/website/source/docs/providers/google/r/storage_bucket_object.html.markdown
@@ -20,7 +20,6 @@ resource "google_storage_bucket_object" "picture" {
   name = "butterfly01"
   source = "/images/nature/garden-tiger-moth.jpg"
   bucket = "image-store"
-  predefined_acl = "publicRead"
 }
 ```
 
@@ -32,7 +31,8 @@ The following arguments are supported:
 * `name` - (Required) The name of the object.
 * `bucket` - (Required) The name of the containing bucket.
 * `source` - (Required) A path to the data you want to upload.
-* `predefined_acl` - (Optional, Default: 'projectPrivate') The [canned GCS ACL](https://cloud.google.com/storage/docs/access-control#predefined-acl) to apply.
+* `predefined_acl` - (Optional, Deprecated) The [canned GCS ACL](https://cloud.google.com/storage/docs/access-control#predefined-acl) to apply. Please switch
+to `google_storage_object_acl.predefined_acl`.
 
 ## Attributes Reference
 
diff --git a/website/source/docs/providers/google/r/storage_object_acl.html.markdown b/website/source/docs/providers/google/r/storage_object_acl.html.markdown
new file mode 100644
index 000000000..9f04d4844
--- /dev/null
+++ b/website/source/docs/providers/google/r/storage_object_acl.html.markdown
@@ -0,0 +1,43 @@
+---
+layout: "google"
+page_title: "Google: google_storage_object_acl"
+sidebar_current: "docs-google-resource-storage-acl"
+description: |-
+  Creates a new object ACL in Google Cloud Storage.
+---
+
+# google\_storage\_object\_acl
+
+Creates a new object ACL in Google Cloud Storage (GCS).
+
+## Example Usage
+
+Create an object ACL with one owner and one reader.
+
+```
+resource "google_storage_bucket" "image_store" {
+  name = "image-store-bucket"
+  location = "EU"
+}
+
+resource "google_storage_bucket_object" "image" {
+  name = "image1"
+  bucket = "${google_storage_bucket.image_store.name}"
+  source = "image1.jpg"
+}
+
+resource "google_storage_object_acl" "image-store-acl" {
+  bucket = "${google_storage_bucket.image_store.name}"
+  object = "${google_storage_bucket_object.image.name}"
+  role_entity = ["OWNER:user-my.email@gmail.com",
+                 "READER:group-mygroup"]
+}
+```
+
+## Argument Reference
+
+* `bucket` - (Required) The name of the bucket it applies to.
+* `object` - (Required) The name of the object it applies to.
+* `predefined_acl` - (Optional) The [canned GCS ACL](https://cloud.google.com/storage/docs/access-control#predefined-acl) to apply. Must be set if `role_entity` is not.
+* `role_entity` - (Optional) List of role/entity pairs in the form `ROLE:entity`. See [GCS Object ACL documentation](https://cloud.google.com/storage/docs/json_api/v1/objectAccessControls) for more details. Must be set if `predefined_acl` is not.
diff --git a/website/source/docs/providers/packet/index.html.markdown b/website/source/docs/providers/packet/index.html.markdown
new file mode 100644
index 000000000..bbe9f5d1e
--- /dev/null
+++ b/website/source/docs/providers/packet/index.html.markdown
@@ -0,0 +1,47 @@
+---
+layout: "packet"
+page_title: "Provider: Packet"
+sidebar_current: "docs-packet-index"
+description: |-
+  The Packet provider is used to interact with the resources supported by Packet. The provider needs to be configured with the proper credentials before it can be used.
+--- + +# Packet Provider + +The Packet provider is used to interact with the resources supported by Packet. +The provider needs to be configured with the proper credentials before it can be used. + +Use the navigation to the left to read about the available resources. + +## Example Usage + +``` +# Configure the Packet Provider +provider "packet" { + auth_token = "${var.auth_token}" +} + +# Create a project +resource "packet_project" "tf_project_1" { + name = "My First Terraform Project" + payment_method = "PAYMENT_METHOD_ID" +} + +# Create a device and add it to tf_project_1 +resource "packet_device" "web1" { + hostname = "tf.coreos2" + plan = "baremetal_1" + facility = "ewr1" + operating_system = "coreos_stable" + billing_cycle = "hourly" + project_id = "${packet_project.tf_project_1.id}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `auth_token` - (Required) This is your Packet API Auth token. This can also be specified + with the `PACKET_AUTH_TOKEN` shell environment variable. + diff --git a/website/source/docs/providers/packet/r/device.html.markdown b/website/source/docs/providers/packet/r/device.html.markdown new file mode 100644 index 000000000..6d57dcbb5 --- /dev/null +++ b/website/source/docs/providers/packet/r/device.html.markdown @@ -0,0 +1,55 @@ +--- +layout: "packet" +page_title: "Packet: packet_device" +sidebar_current: "docs-packet-resource-device" +description: |- + Provides a Packet device resource. This can be used to create, modify, and delete devices. +--- + +# packet\_device + +Provides a Packet device resource. This can be used to create, +modify, and delete devices. + +## Example Usage + +``` +# Create a device and add it to tf_project_1 +resource "packet_device" "web1" { + hostname = "tf.coreos2" + plan = "baremetal_1" + facility = "ewr1" + operating_system = "coreos_stable" + billing_cycle = "hourly" + project_id = "${packet_project.tf_project_1.id}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `hostname` - (Required) The device name +* `project_id` - (Required) The id of the project in which to create the device +* `operating_system` - (Required) The operating system slug +* `facility` - (Required) The facility in which to create the device +* `plan` - (Required) The config type slug +* `billing_cycle` - (Required) monthly or hourly +* `user_data` (Optional) - A string of the desired User Data for the device. 
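+
+As a sketch, `user_data` can be loaded from a local file (the filename and
+resource names here are illustrative):
+
+```
+resource "packet_device" "worker1" {
+  hostname         = "tf.worker1"
+  plan             = "baremetal_1"
+  facility         = "ewr1"
+  operating_system = "coreos_stable"
+  billing_cycle    = "hourly"
+  project_id       = "${packet_project.tf_project_1.id}"
+
+  # Cloud-config passed to the device at first boot
+  user_data        = "${file("cloud-config.yml")}"
+}
+```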
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `id` - The ID of the device
+* `hostname` - The hostname of the device
+* `project_id` - The ID of the project the device belongs to
+* `facility` - The facility the device is in
+* `plan` - The config type of the device
+* `network` - The private and public v4 and v6 IPs assigned to the device
+* `locked` - Is the device locked
+* `billing_cycle` - The billing cycle of the device (monthly or hourly)
+* `operating_system` - The operating system running on the device
+* `status` - The status of the device
+* `created` - The timestamp for when the device was created
+* `updated` - The timestamp for the last time the device was updated
diff --git a/website/source/docs/providers/packet/r/project.html.markdown b/website/source/docs/providers/packet/r/project.html.markdown
new file mode 100644
index 000000000..b008f864f
--- /dev/null
+++ b/website/source/docs/providers/packet/r/project.html.markdown
@@ -0,0 +1,40 @@
+---
+layout: "packet"
+page_title: "Packet: packet_project"
+sidebar_current: "docs-packet-resource-project"
+description: |-
+  Provides a Packet Project resource.
+---
+
+# packet\_project
+
+Provides a Packet Project resource to allow you to manage devices
+in your projects.
+
+## Example Usage
+
+```
+# Create a new Project
+resource "packet_project" "tf_project_1" {
+  name = "Terraform Fun"
+  payment_method = "payment-method-id"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) The name of the project
+* `payment_method` - (Required) The id of the payment method on file to use for services created
+on this project.
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `id` - The unique ID of the project
+* `payment_method` - The id of the payment method on file to use for services created
+on this project.
+* `created` - The timestamp for when the project was created
+* `updated` - The timestamp for the last time the project was updated
diff --git a/website/source/docs/providers/packet/r/ssh_key.html.markdown b/website/source/docs/providers/packet/r/ssh_key.html.markdown
new file mode 100644
index 000000000..cb27aaa77
--- /dev/null
+++ b/website/source/docs/providers/packet/r/ssh_key.html.markdown
@@ -0,0 +1,43 @@
+---
+layout: "packet"
+page_title: "Packet: packet_ssh_key"
+sidebar_current: "docs-packet-resource-ssh-key"
+description: |-
+  Provides a Packet SSH key resource.
+---
+
+# packet\_ssh\_key
+
+Provides a Packet SSH key resource to allow you to manage SSH
+keys on your account. All SSH keys on your account are loaded on
+all new devices; they do not have to be explicitly declared on
+device creation.
+
+## Example Usage
+
+```
+# Create a new SSH key
+resource "packet_ssh_key" "key1" {
+  name = "terraform-1"
+  public_key = "${file("/home/terraform/.ssh/id_rsa.pub")}"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) The name of the SSH key for identification
+* `public_key` - (Required) The public key.
If this is a file, it
+can be read using the `file` interpolation function.
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `id` - The unique ID of the key
+* `name` - The name of the SSH key
+* `public_key` - The text of the public key
+* `fingerprint` - The fingerprint of the SSH key
+* `created` - The timestamp for when the SSH key was created
+* `updated` - The timestamp for the last time the SSH key was updated
diff --git a/website/source/docs/providers/rundeck/index.html.markdown b/website/source/docs/providers/rundeck/index.html.markdown
index e948ca4e3..529f4f44b 100644
--- a/website/source/docs/providers/rundeck/index.html.markdown
+++ b/website/source/docs/providers/rundeck/index.html.markdown
@@ -70,6 +70,6 @@ resource "rundeck_public_key" "anvils" {
 
 resource "rundeck_private_key" "anvils" {
   path = "anvils/id_rsa"
-  key_material_file = "${path.module}/id_rsa.pub"
+  key_material = "${file("id_rsa.pub")}"
 }
 ```
diff --git a/website/source/docs/providers/rundeck/r/private_key.html.md b/website/source/docs/providers/rundeck/r/private_key.html.md
index 1850f588e..3d74aaf1d 100644
--- a/website/source/docs/providers/rundeck/r/private_key.html.md
+++ b/website/source/docs/providers/rundeck/r/private_key.html.md
@@ -17,7 +17,7 @@ it runs commands.
 ```
 resource "rundeck_private_key" "anvils" {
   path = "anvils/id_rsa"
-  key_material = "${file(\"/id_rsa\")}"
+  key_material = "${file("/id_rsa")}"
 }
 ```
 
diff --git a/website/source/docs/providers/vsphere/index.html.markdown b/website/source/docs/providers/vsphere/index.html.markdown
new file mode 100644
index 000000000..17448b024
--- /dev/null
+++ b/website/source/docs/providers/vsphere/index.html.markdown
@@ -0,0 +1,59 @@
+---
+layout: "vsphere"
+page_title: "Provider: vSphere"
+sidebar_current: "docs-vsphere-index"
+description: |-
+  The vSphere provider is used to interact with the resources supported by
+  vSphere. The provider needs to be configured with the proper credentials before
+  it can be used.
+---
+
+# vSphere Provider
+
+The vSphere provider is used to interact with the resources supported by vSphere.
+The provider needs to be configured with the proper credentials before it can be used.
+
+Use the navigation to the left to read about the available resources.
+
+~> **NOTE:** The vSphere Provider currently represents _initial support_ and
+therefore may undergo significant changes as the community improves it.
+
+## Example Usage
+
+```
+# Configure the vSphere Provider
+provider "vsphere" {
+  user           = "${var.vsphere_user}"
+  password       = "${var.vsphere_password}"
+  vcenter_server = "${var.vsphere_vcenter_server}"
+}
+
+# Create a virtual machine
+resource "vsphere_virtual_machine" "web" {
+  name   = "terraform_web"
+  vcpu   = 2
+  memory = 4096
+
+  network_interface {
+    label = "VM Network"
+  }
+
+  disk {
+    size = 1
+    iops = 500
+  }
+}
+```
+
+## Argument Reference
+
+The following arguments are used to configure the vSphere Provider:
+
+* `user` - (Required) This is the username for vSphere API operations. Can also
+  be specified with the `VSPHERE_USER` environment variable.
+* `password` - (Required) This is the password for vSphere API operations. Can
+  also be specified with the `VSPHERE_PASSWORD` environment variable.
+* `vcenter_server` - (Required) This is the vCenter server name for vSphere API
+  operations. Can also be specified with the `VSPHERE_VCENTER` environment
+  variable.
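+
+As a sketch, the same credentials can come from the environment instead of the
+configuration (the values here are placeholders):
+
+```
+$ export VSPHERE_USER="terraform"
+$ export VSPHERE_PASSWORD="s3cret"
+$ export VSPHERE_VCENTER="vcenter.example.com"
+$ terraform plan
+```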
+
diff --git a/website/source/docs/providers/vsphere/r/virtual_machine.html.markdown b/website/source/docs/providers/vsphere/r/virtual_machine.html.markdown
new file mode 100644
index 000000000..6ce012d65
--- /dev/null
+++ b/website/source/docs/providers/vsphere/r/virtual_machine.html.markdown
@@ -0,0 +1,69 @@
+---
+layout: "vsphere"
+page_title: "vSphere: vsphere_virtual_machine"
+sidebar_current: "docs-vsphere-resource-virtual-machine"
+description: |-
+  Provides a vSphere virtual machine resource. This can be used to create, modify, and delete virtual machines.
+---
+
+# vsphere\_virtual\_machine
+
+Provides a vSphere virtual machine resource. This can be used to create,
+modify, and delete virtual machines.
+
+## Example Usage
+
+```
+resource "vsphere_virtual_machine" "web" {
+  name   = "terraform_web"
+  vcpu   = 2
+  memory = 4096
+
+  network_interface {
+    label = "VM Network"
+  }
+
+  disk {
+    size = 1
+    iops = 500
+  }
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) The virtual machine name
+* `vcpu` - (Required) The number of virtual CPUs to allocate to the virtual machine
+* `memory` - (Required) The amount of RAM (in MB) to allocate to the virtual machine
+* `datacenter` - (Optional) The name of a Datacenter in which to launch the virtual machine
+* `cluster` - (Optional) Name of a Cluster in which to launch the virtual machine
+* `resource_pool` - (Optional) The name of a Resource Pool in which to launch the virtual machine
+* `gateway` - (Optional) Gateway IP address to use for all network interfaces
+* `domain` - (Optional) A FQDN for the virtual machine; defaults to "vsphere.local"
+* `time_zone` - (Optional) The [time zone](https://www.vmware.com/support/developer/vc-sdk/visdk41pubs/ApiReference/timezone.html) to set on the virtual machine. Defaults to "Etc/UTC"
+* `dns_suffixes` - (Optional) List of name resolution suffixes for the virtual network adapter
+* `dns_servers` - (Optional) List of DNS servers for the virtual network adapter; defaults to 8.8.8.8, 8.8.4.4
+* `network_interface` - (Required) Configures virtual network interfaces; see [Network Interfaces](#network-interfaces) below for details.
+* `disk` - (Required) Configures virtual disks; see [Disks](#disks) below for details.
+* `boot_delay` - (Optional) Time in seconds to wait for machine network to be ready.
+
+<a id="network-interfaces"></a>
+## Network Interfaces
+
+Network interfaces support the following attributes:
+
+* `label` - (Required) Label to assign to this network interface
+* `ip_address` - (Optional) Static IP to assign to this network interface. Interface will use DHCP if this is left blank.
+* `subnet_mask` - (Optional) Subnet mask to use when statically assigning an IP.
+
+<a id="disks"></a>
+## Disks
+
+Disks support the following attributes (see the sketch below for a template-based example):
+
+* `template` - (Required if size not provided) Template for this disk.
+* `datastore` - (Optional) Datastore for this disk
+* `size` - (Required if template not provided) Size of this disk (in GB).
+* `iops` - (Optional) Number of virtual IOPS to allocate for this disk.
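+
+As a sketch, a machine cloned from a template onto a specific datastore, with a
+statically addressed interface, might look like this (the template name,
+datastore, and addresses are illustrative):
+
+```
+resource "vsphere_virtual_machine" "db" {
+  name   = "terraform_db"
+  vcpu   = 2
+  memory = 4096
+
+  network_interface {
+    label       = "VM Network"
+    ip_address  = "10.0.0.10"
+    subnet_mask = "255.255.255.0"
+  }
+
+  disk {
+    # Clone from an existing template instead of creating an empty disk
+    template  = "centos-7-base"
+    datastore = "datastore1"
+  }
+}
+```
+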
diff --git a/website/source/docs/provisioners/null_resource.html.markdown b/website/source/docs/provisioners/null_resource.html.markdown
new file mode 100644
index 000000000..51a409042
--- /dev/null
+++ b/website/source/docs/provisioners/null_resource.html.markdown
@@ -0,0 +1,60 @@
+---
+layout: "docs"
+page_title: "Provisioners: null_resource"
+sidebar_current: "docs-provisioners-null-resource"
+description: |-
+  The `null_resource` is a resource that allows you to configure provisioners
+  that are not directly associated with a single existing resource.
+---
+
+# null\_resource
+
+The `null_resource` is a resource that allows you to configure provisioners
+that are not directly associated with a single existing resource.
+
+A `null_resource` behaves exactly like any other resource, so you configure
+[provisioners](/docs/provisioners/index.html), [connection
+details](/docs/provisioners/connection.html), and other meta-parameters in the
+same way you would on any other resource.
+
+This allows fine-grained control over when provisioners run in the dependency
+graph.
+
+## Example Usage
+
+```
+# Bootstrap a cluster after all its instances are up
+resource "aws_instance" "cluster" {
+  count = 3
+  // ...
+}
+
+resource "null_resource" "cluster" {
+  # Changes to any instance of the cluster require re-provisioning
+  triggers {
+    cluster_instance_ids = "${join(",", aws_instance.cluster.*.id)}"
+  }
+
+  # Bootstrap script can run on any instance of the cluster
+  # So we just choose the first in this case
+  connection {
+    host = "${element(aws_instance.cluster.*.public_ip, 0)}"
+  }
+
+  provisioner "remote-exec" {
+    # Bootstrap script called with private_ip of each node in the cluster
+    inline = [
+      "bootstrap-cluster.sh ${join(" ", aws_instance.cluster.*.private_ip)}"
+    ]
+  }
+}
+```
+
+## Argument Reference
+
+In addition to all the resource configuration available, `null_resource` supports the following specific configuration options:
+
+ * `triggers` - A mapping of values which should trigger a rerun of this set of
+   provisioners. Values are meant to be interpolated references to variables or
+   attributes of other resources.
+
diff --git a/website/source/docs/provisioners/remote-exec.html.markdown b/website/source/docs/provisioners/remote-exec.html.markdown
index d771e5586..7ce46c684 100644
--- a/website/source/docs/provisioners/remote-exec.html.markdown
+++ b/website/source/docs/provisioners/remote-exec.html.markdown
@@ -63,7 +63,10 @@ resource "aws_instance" "web" {
   }
 
   provisioner "remote-exec" {
-    inline = ["/tmp/script.sh args"]
+    inline = [
+      "chmod +x /tmp/script.sh",
+      "/tmp/script.sh args"
+    ]
   }
 }
 ```
diff --git a/website/source/downloads.html.erb b/website/source/downloads.html.erb
index 1eb63cc22..a979e2772 100644
--- a/website/source/downloads.html.erb
+++ b/website/source/downloads.html.erb
@@ -9,46 +9,50 @@ description: |-

 Download Terraform
 
-      Below are all available downloads for the latest version of Terraform
-      (<%= latest_version %>). Please download the proper package for your
-      operating system and architecture. You can find SHA256 checksums
-      for packages here.
+      Below are the available downloads for the latest version of Terraform
+      (<%= latest_version %>). Please download the proper package for your
+      operating system and architecture.
+
+      You can find the SHA256 checksums for Terraform <%= latest_version %>
+      online and you can verify the checksums signature file which has been
+      signed using HashiCorp's GPG key. You can also download older versions
+      of Terraform from the releases service.
 
-      Each release archive is a zip file containing the Terraform binary
-      executables at the top level. These executables are meant to be extracted
-      to a location where they can be found by your shell.
 
-    <% product_versions.each do |os, versions| %>
+    <% product_versions.each do |os, arches| %>
       <%= system_icon(os) %>
-      <%= os %>
+      <%= pretty_os(os) %>
     <% end %>
diff --git a/website/source/intro/getting-started/variables.html.md b/website/source/intro/getting-started/variables.html.md
index 691c91f38..41e828a72 100644
--- a/website/source/intro/getting-started/variables.html.md
+++ b/website/source/intro/getting-started/variables.html.md
@@ -95,8 +95,17 @@ files. And like Terraform configuration files, these files can also be JSON.
 in the form of `TF_VAR_name` to find the value for a variable. For example,
 the `TF_VAR_access_key` variable can be set to set the `access_key` variable.
 
-We recommend using the "terraform.tfvars" file, and ignoring it from
-version control.
+We don't recommend saving usernames and passwords to version control, but you
+can create a local secret variables file and use `-var-file` to load it.
+
+You can use multiple `-var-file` arguments in a single command, with some
+checked in to version control and others not checked in. For example:
+
+```
+$ terraform plan \
+  -var-file="secret.tfvars" \
+  -var-file="production.tfvars"
+```
 
 ## Mappings
 
diff --git a/website/source/layouts/_footer.erb b/website/source/layouts/_footer.erb
index df095a820..d42c55cac 100644
--- a/website/source/layouts/_footer.erb
+++ b/website/source/layouts/_footer.erb
@@ -6,7 +6,10 @@