Merge branch 'master' into pr-3708

* master: (95 commits)
  Update CHANGELOG.md
  Update CHANGELOG.md
  Update CHANGELOG.md
  Update CHANGELOG.md
  upgrade a warning to error
  add some logging around create/update requests for IAM user
  Update CHANGELOG.md
  Update CHANGELOG.md
  Build using `make test` on Travis CI
  Update CHANGELOG.md
  provider/aws: Fix error format in Kinesis Firehose
  Update CHANGELOG.md
  Changes to AWS Kinesis Firehose Docs
  Update CHANGELOG.md
  modify aws_iam_user_test to correctly check username and path for initial and changed username/path
  Update CHANGELOG.md
  Update CHANGELOG.md
  Prompt for input variables before context validate
  Removing the AWS DBInstance Acceptance Test for withoutEngine as this is now part of the checkInstanceAttributes func
  Making engine_version be computed in the db_instance provider
  ...
clint shryock committed 0a1890c329 on 2015-11-10 16:52:45 -06:00
75 changed files with 1982 additions and 570 deletions

View File

@@ -9,9 +9,7 @@ go:
install: make updatedeps
script:
- go test ./...
- make vet
#- go test -race ./...
- make test
branches:
only:

View File

@@ -6,6 +6,7 @@ FEATURES:
* **New resource: `aws_cloudtrail`** [GH-3094]
* **New resource: `aws_route`** [GH-3548]
* **New resource: `aws_codecommit_repository`** [GH-3274]
* **New resource: `aws_kinesis_firehose_delivery_stream`** [GH-3833]
* **New provider: `tls`** - A utility provider for generating TLS keys/self-signed certificates for development and testing [GH-2778]
* **New resource: `google_sql_database` and `google_sql_database_instance`** [GH-3617]
* **New resource: `google_compute_global_address`** [GH-3701]
@@ -15,6 +16,7 @@ FEATURES:
* **New resource: `google_compute_target_https_proxy`** [GH-3728]
* **New resource: `google_compute_global_forwarding_rule`** [GH-3702]
* **New resource: `openstack_networking_port_v2`** [GH-3731]
* New interpolation function: `coalesce` [GH-3814]
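
A hedged usage sketch of the new `coalesce` interpolation, which returns the first non-empty string among its arguments; the variable and resource names below are illustrative, not from this commit:

variable "custom_dns_name" {
  default = ""
}

output "dns_name" {
  # fall back to the ELB-provided name when no custom name is set
  value = "${coalesce(var.custom_dns_name, aws_elb.web.dns_name)}"
}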
IMPROVEMENTS:
@@ -27,24 +29,41 @@ IMPROVEMENTS:
* provider/aws: Add notification topic ARN for ElastiCache clusters [GH-3674]
* provider/aws: Add `kinesis_endpoint` for configuring Kinesis [GH-3255]
* provider/aws: Add a computed ARN for S3 Buckets [GH-3685]
* provider/aws: Add S3 support for Lambda Function resource [GH-3794]
* provider/aws: Add `name_prefix` option to launch configurations [GH-3802]
* provider/aws: Provide `source_security_group_id` for ELBs inside a VPC [GH-3780]
* provider/aws: Add snapshot window and retention limits for ElastiCache (Redis) [GH-3707]
* provider/aws: Add username updates for `aws_iam_user` [GH-3227]
* provider/aws: `engine_version` is now optional for DB Instance [GH-3744]
* provider/aws: Add configuration to enable copying RDS tags to final snapshot [GH-3529] (see the sketch after this list)
* provider/aws: RDS Cluster additions (`backup_retention_period`, `preferred_backup_window`, `preferred_maintenance_window`) [GH-3757]
* provider/openstack: Use IPv4 as the default IP version for subnets [GH-3091]
* provider/aws: Apply security group after restoring db_instance from snapshot [GH-3513]
* provider/aws: Making the AutoScalingGroup name optional [GH-3710]
* provider/openstack: Add "delete on termination" boot-from-volume option [GH-3232]
* provider/digitalocean: Make user_data force a new droplet [GH-3740]
* provider/vsphere: Do not add network interfaces by default [GH-3652]
* provider/openstack: Configure Fixed IPs through ports [GH-3772]
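
A short sketch of the RDS tag-copying improvement above (GH-3529), assuming the flag is exposed on `aws_db_instance` as `copy_tags_to_snapshot`; all values are placeholders:

resource "aws_db_instance" "example" {
  allocated_storage     = 10
  engine                = "mysql"
  instance_class        = "db.t1.micro"
  name                  = "mydb"
  username              = "foo"
  password              = "mustbeeightcharaters"
  # copy the instance tags onto the final snapshot taken at destroy time
  copy_tags_to_snapshot = true
  tags {
    Environment = "test"
  }
}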
BUG FIXES:
* `terraform remote config`: update `--help` output [GH-3632]
* core: modules on Git branches now update properly [GH-1568]
* core: Fix issue preventing input prompts for unset variables during plan [GH-3843]
* provider/google: Timeout when deleting large instance_group_manager [GH-3591]
* provider/aws: Fix issue with order of Termination Policies in AutoScaling Groups.
This will introduce plans on upgrade to this version, in order to correct the ordering [GH-2890]
* provider/aws: Allow cluster name, not only ARN for `aws_ecs_service` [GH-3668]
* provider/aws: ignore non-existent associations on route table destroy [GH-3615]
* provider/aws: Fix policy encoding issue with SNS Topics [GH-3700]
* provider/aws: Correctly export ARN in `iam_saml_provider` [GH-3827]
* provider/aws: Tolerate ElastiCache clusters being deleted outside Terraform [GH-3767]
* provider/aws: Downcase Route 53 record names in statefile to match API output [GH-3574]
* provider/aws: Fix issue that could occur if no ECS Cluster was found for a given name [GH-3829]
* provider/aws: Fix issue with SNS topic policy if omitted [GH-3777]
* provider/azure: various bugfixes [GH-3695]
* provider/digitalocean: fix issue preventing SSH fingerprints from working [GH-3633]
* provider/digitalocean: Fix potential DigitalOcean Droplet 404 on refresh of state [GH-3768]
* provider/openstack: Fix several issues causing unresolvable diffs [GH-3440]
* provider/openstack: Safely delete security groups [GH-3696]
* provider/openstack: Ignore order of security_groups in instance [GH-3651]
@@ -52,6 +71,7 @@ BUG FIXES:
* provider/openstack: Fix boot from volume [GH-3206]
* provider/openstack: Fix crashing when image is no longer accessible [GH-2189]
* provider/openstack: Better handling of network resource state changes [GH-3712]
* provider/openstack: Fix crashing when no security group is specified [GH-3801]
## 0.6.6 (October 23, 2015)

View File

@@ -27,6 +27,7 @@ import (
"github.com/aws/aws-sdk-go/service/elasticache"
elasticsearch "github.com/aws/aws-sdk-go/service/elasticsearchservice"
"github.com/aws/aws-sdk-go/service/elb"
"github.com/aws/aws-sdk-go/service/firehose"
"github.com/aws/aws-sdk-go/service/glacier"
"github.com/aws/aws-sdk-go/service/iam"
"github.com/aws/aws-sdk-go/service/kinesis"
@@ -74,6 +75,7 @@ type AWSClient struct {
rdsconn *rds.RDS
iamconn *iam.IAM
kinesisconn *kinesis.Kinesis
firehoseconn *firehose.Firehose
elasticacheconn *elasticache.ElastiCache
lambdaconn *lambda.Lambda
opsworksconn *opsworks.OpsWorks
@@ -168,6 +170,9 @@ func (c *Config) Client() (interface{}, error) {
errs = append(errs, authErr)
}
log.Println("[INFO] Initializing Kinesis Firehose Connection")
client.firehoseconn = firehose.New(sess)
log.Println("[INFO] Initializing AutoScaling connection")
client.autoscalingconn = autoscaling.New(sess)

View File

@@ -163,107 +163,108 @@ func Provider() terraform.ResourceProvider {
},
ResourcesMap: map[string]*schema.Resource{
"aws_ami": resourceAwsAmi(),
"aws_ami_copy": resourceAwsAmiCopy(),
"aws_ami_from_instance": resourceAwsAmiFromInstance(),
"aws_app_cookie_stickiness_policy": resourceAwsAppCookieStickinessPolicy(),
"aws_autoscaling_group": resourceAwsAutoscalingGroup(),
"aws_autoscaling_notification": resourceAwsAutoscalingNotification(),
"aws_autoscaling_policy": resourceAwsAutoscalingPolicy(),
"aws_cloudformation_stack": resourceAwsCloudFormationStack(),
"aws_cloudtrail": resourceAwsCloudTrail(),
"aws_cloudwatch_log_group": resourceAwsCloudWatchLogGroup(),
"aws_autoscaling_lifecycle_hook": resourceAwsAutoscalingLifecycleHook(),
"aws_cloudwatch_metric_alarm": resourceAwsCloudWatchMetricAlarm(),
"aws_codedeploy_app": resourceAwsCodeDeployApp(),
"aws_codedeploy_deployment_group": resourceAwsCodeDeployDeploymentGroup(),
"aws_codecommit_repository": resourceAwsCodeCommitRepository(),
"aws_customer_gateway": resourceAwsCustomerGateway(),
"aws_db_instance": resourceAwsDbInstance(),
"aws_db_parameter_group": resourceAwsDbParameterGroup(),
"aws_db_security_group": resourceAwsDbSecurityGroup(),
"aws_db_subnet_group": resourceAwsDbSubnetGroup(),
"aws_directory_service_directory": resourceAwsDirectoryServiceDirectory(),
"aws_dynamodb_table": resourceAwsDynamoDbTable(),
"aws_ebs_volume": resourceAwsEbsVolume(),
"aws_ecs_cluster": resourceAwsEcsCluster(),
"aws_ecs_service": resourceAwsEcsService(),
"aws_ecs_task_definition": resourceAwsEcsTaskDefinition(),
"aws_efs_file_system": resourceAwsEfsFileSystem(),
"aws_efs_mount_target": resourceAwsEfsMountTarget(),
"aws_eip": resourceAwsEip(),
"aws_elasticache_cluster": resourceAwsElasticacheCluster(),
"aws_elasticache_parameter_group": resourceAwsElasticacheParameterGroup(),
"aws_elasticache_security_group": resourceAwsElasticacheSecurityGroup(),
"aws_elasticache_subnet_group": resourceAwsElasticacheSubnetGroup(),
"aws_elasticsearch_domain": resourceAwsElasticSearchDomain(),
"aws_elb": resourceAwsElb(),
"aws_flow_log": resourceAwsFlowLog(),
"aws_glacier_vault": resourceAwsGlacierVault(),
"aws_iam_access_key": resourceAwsIamAccessKey(),
"aws_iam_group_policy": resourceAwsIamGroupPolicy(),
"aws_iam_group": resourceAwsIamGroup(),
"aws_iam_group_membership": resourceAwsIamGroupMembership(),
"aws_iam_instance_profile": resourceAwsIamInstanceProfile(),
"aws_iam_policy": resourceAwsIamPolicy(),
"aws_iam_policy_attachment": resourceAwsIamPolicyAttachment(),
"aws_iam_role_policy": resourceAwsIamRolePolicy(),
"aws_iam_role": resourceAwsIamRole(),
"aws_iam_saml_provider": resourceAwsIamSamlProvider(),
"aws_iam_server_certificate": resourceAwsIAMServerCertificate(),
"aws_iam_user_policy": resourceAwsIamUserPolicy(),
"aws_iam_user": resourceAwsIamUser(),
"aws_instance": resourceAwsInstance(),
"aws_internet_gateway": resourceAwsInternetGateway(),
"aws_key_pair": resourceAwsKeyPair(),
"aws_kinesis_stream": resourceAwsKinesisStream(),
"aws_lambda_function": resourceAwsLambdaFunction(),
"aws_launch_configuration": resourceAwsLaunchConfiguration(),
"aws_lb_cookie_stickiness_policy": resourceAwsLBCookieStickinessPolicy(),
"aws_main_route_table_association": resourceAwsMainRouteTableAssociation(),
"aws_network_acl": resourceAwsNetworkAcl(),
"aws_network_interface": resourceAwsNetworkInterface(),
"aws_opsworks_stack": resourceAwsOpsworksStack(),
"aws_opsworks_java_app_layer": resourceAwsOpsworksJavaAppLayer(),
"aws_opsworks_haproxy_layer": resourceAwsOpsworksHaproxyLayer(),
"aws_opsworks_static_web_layer": resourceAwsOpsworksStaticWebLayer(),
"aws_opsworks_php_app_layer": resourceAwsOpsworksPhpAppLayer(),
"aws_opsworks_rails_app_layer": resourceAwsOpsworksRailsAppLayer(),
"aws_opsworks_nodejs_app_layer": resourceAwsOpsworksNodejsAppLayer(),
"aws_opsworks_memcached_layer": resourceAwsOpsworksMemcachedLayer(),
"aws_opsworks_mysql_layer": resourceAwsOpsworksMysqlLayer(),
"aws_opsworks_ganglia_layer": resourceAwsOpsworksGangliaLayer(),
"aws_opsworks_custom_layer": resourceAwsOpsworksCustomLayer(),
"aws_placement_group": resourceAwsPlacementGroup(),
"aws_proxy_protocol_policy": resourceAwsProxyProtocolPolicy(),
"aws_rds_cluster": resourceAwsRDSCluster(),
"aws_rds_cluster_instance": resourceAwsRDSClusterInstance(),
"aws_route53_delegation_set": resourceAwsRoute53DelegationSet(),
"aws_route53_record": resourceAwsRoute53Record(),
"aws_route53_zone_association": resourceAwsRoute53ZoneAssociation(),
"aws_route53_zone": resourceAwsRoute53Zone(),
"aws_route53_health_check": resourceAwsRoute53HealthCheck(),
"aws_route": resourceAwsRoute(),
"aws_route_table": resourceAwsRouteTable(),
"aws_route_table_association": resourceAwsRouteTableAssociation(),
"aws_s3_bucket": resourceAwsS3Bucket(),
"aws_s3_bucket_object": resourceAwsS3BucketObject(),
"aws_security_group": resourceAwsSecurityGroup(),
"aws_security_group_rule": resourceAwsSecurityGroupRule(),
"aws_spot_instance_request": resourceAwsSpotInstanceRequest(),
"aws_sqs_queue": resourceAwsSqsQueue(),
"aws_sns_topic": resourceAwsSnsTopic(),
"aws_sns_topic_subscription": resourceAwsSnsTopicSubscription(),
"aws_subnet": resourceAwsSubnet(),
"aws_volume_attachment": resourceAwsVolumeAttachment(),
"aws_vpc_dhcp_options_association": resourceAwsVpcDhcpOptionsAssociation(),
"aws_vpc_dhcp_options": resourceAwsVpcDhcpOptions(),
"aws_vpc_peering_connection": resourceAwsVpcPeeringConnection(),
"aws_vpc": resourceAwsVpc(),
"aws_vpc_endpoint": resourceAwsVpcEndpoint(),
"aws_vpn_connection": resourceAwsVpnConnection(),
"aws_vpn_connection_route": resourceAwsVpnConnectionRoute(),
"aws_vpn_gateway": resourceAwsVpnGateway(),
"aws_ami": resourceAwsAmi(),
"aws_ami_copy": resourceAwsAmiCopy(),
"aws_ami_from_instance": resourceAwsAmiFromInstance(),
"aws_app_cookie_stickiness_policy": resourceAwsAppCookieStickinessPolicy(),
"aws_autoscaling_group": resourceAwsAutoscalingGroup(),
"aws_autoscaling_notification": resourceAwsAutoscalingNotification(),
"aws_autoscaling_policy": resourceAwsAutoscalingPolicy(),
"aws_cloudformation_stack": resourceAwsCloudFormationStack(),
"aws_cloudtrail": resourceAwsCloudTrail(),
"aws_cloudwatch_log_group": resourceAwsCloudWatchLogGroup(),
"aws_autoscaling_lifecycle_hook": resourceAwsAutoscalingLifecycleHook(),
"aws_cloudwatch_metric_alarm": resourceAwsCloudWatchMetricAlarm(),
"aws_codedeploy_app": resourceAwsCodeDeployApp(),
"aws_codedeploy_deployment_group": resourceAwsCodeDeployDeploymentGroup(),
"aws_codecommit_repository": resourceAwsCodeCommitRepository(),
"aws_customer_gateway": resourceAwsCustomerGateway(),
"aws_db_instance": resourceAwsDbInstance(),
"aws_db_parameter_group": resourceAwsDbParameterGroup(),
"aws_db_security_group": resourceAwsDbSecurityGroup(),
"aws_db_subnet_group": resourceAwsDbSubnetGroup(),
"aws_directory_service_directory": resourceAwsDirectoryServiceDirectory(),
"aws_dynamodb_table": resourceAwsDynamoDbTable(),
"aws_ebs_volume": resourceAwsEbsVolume(),
"aws_ecs_cluster": resourceAwsEcsCluster(),
"aws_ecs_service": resourceAwsEcsService(),
"aws_ecs_task_definition": resourceAwsEcsTaskDefinition(),
"aws_efs_file_system": resourceAwsEfsFileSystem(),
"aws_efs_mount_target": resourceAwsEfsMountTarget(),
"aws_eip": resourceAwsEip(),
"aws_elasticache_cluster": resourceAwsElasticacheCluster(),
"aws_elasticache_parameter_group": resourceAwsElasticacheParameterGroup(),
"aws_elasticache_security_group": resourceAwsElasticacheSecurityGroup(),
"aws_elasticache_subnet_group": resourceAwsElasticacheSubnetGroup(),
"aws_elasticsearch_domain": resourceAwsElasticSearchDomain(),
"aws_elb": resourceAwsElb(),
"aws_flow_log": resourceAwsFlowLog(),
"aws_glacier_vault": resourceAwsGlacierVault(),
"aws_iam_access_key": resourceAwsIamAccessKey(),
"aws_iam_group_policy": resourceAwsIamGroupPolicy(),
"aws_iam_group": resourceAwsIamGroup(),
"aws_iam_group_membership": resourceAwsIamGroupMembership(),
"aws_iam_instance_profile": resourceAwsIamInstanceProfile(),
"aws_iam_policy": resourceAwsIamPolicy(),
"aws_iam_policy_attachment": resourceAwsIamPolicyAttachment(),
"aws_iam_role_policy": resourceAwsIamRolePolicy(),
"aws_iam_role": resourceAwsIamRole(),
"aws_iam_saml_provider": resourceAwsIamSamlProvider(),
"aws_iam_server_certificate": resourceAwsIAMServerCertificate(),
"aws_iam_user_policy": resourceAwsIamUserPolicy(),
"aws_iam_user": resourceAwsIamUser(),
"aws_instance": resourceAwsInstance(),
"aws_internet_gateway": resourceAwsInternetGateway(),
"aws_key_pair": resourceAwsKeyPair(),
"aws_kinesis_firehose_delivery_stream": resourceAwsKinesisFirehoseDeliveryStream(),
"aws_kinesis_stream": resourceAwsKinesisStream(),
"aws_lambda_function": resourceAwsLambdaFunction(),
"aws_launch_configuration": resourceAwsLaunchConfiguration(),
"aws_lb_cookie_stickiness_policy": resourceAwsLBCookieStickinessPolicy(),
"aws_main_route_table_association": resourceAwsMainRouteTableAssociation(),
"aws_network_acl": resourceAwsNetworkAcl(),
"aws_network_interface": resourceAwsNetworkInterface(),
"aws_opsworks_stack": resourceAwsOpsworksStack(),
"aws_opsworks_java_app_layer": resourceAwsOpsworksJavaAppLayer(),
"aws_opsworks_haproxy_layer": resourceAwsOpsworksHaproxyLayer(),
"aws_opsworks_static_web_layer": resourceAwsOpsworksStaticWebLayer(),
"aws_opsworks_php_app_layer": resourceAwsOpsworksPhpAppLayer(),
"aws_opsworks_rails_app_layer": resourceAwsOpsworksRailsAppLayer(),
"aws_opsworks_nodejs_app_layer": resourceAwsOpsworksNodejsAppLayer(),
"aws_opsworks_memcached_layer": resourceAwsOpsworksMemcachedLayer(),
"aws_opsworks_mysql_layer": resourceAwsOpsworksMysqlLayer(),
"aws_opsworks_ganglia_layer": resourceAwsOpsworksGangliaLayer(),
"aws_opsworks_custom_layer": resourceAwsOpsworksCustomLayer(),
"aws_placement_group": resourceAwsPlacementGroup(),
"aws_proxy_protocol_policy": resourceAwsProxyProtocolPolicy(),
"aws_rds_cluster": resourceAwsRDSCluster(),
"aws_rds_cluster_instance": resourceAwsRDSClusterInstance(),
"aws_route53_delegation_set": resourceAwsRoute53DelegationSet(),
"aws_route53_record": resourceAwsRoute53Record(),
"aws_route53_zone_association": resourceAwsRoute53ZoneAssociation(),
"aws_route53_zone": resourceAwsRoute53Zone(),
"aws_route53_health_check": resourceAwsRoute53HealthCheck(),
"aws_route": resourceAwsRoute(),
"aws_route_table": resourceAwsRouteTable(),
"aws_route_table_association": resourceAwsRouteTableAssociation(),
"aws_s3_bucket": resourceAwsS3Bucket(),
"aws_s3_bucket_object": resourceAwsS3BucketObject(),
"aws_security_group": resourceAwsSecurityGroup(),
"aws_security_group_rule": resourceAwsSecurityGroupRule(),
"aws_spot_instance_request": resourceAwsSpotInstanceRequest(),
"aws_sqs_queue": resourceAwsSqsQueue(),
"aws_sns_topic": resourceAwsSnsTopic(),
"aws_sns_topic_subscription": resourceAwsSnsTopicSubscription(),
"aws_subnet": resourceAwsSubnet(),
"aws_volume_attachment": resourceAwsVolumeAttachment(),
"aws_vpc_dhcp_options_association": resourceAwsVpcDhcpOptionsAssociation(),
"aws_vpc_dhcp_options": resourceAwsVpcDhcpOptions(),
"aws_vpc_peering_connection": resourceAwsVpcPeeringConnection(),
"aws_vpc": resourceAwsVpc(),
"aws_vpc_endpoint": resourceAwsVpcEndpoint(),
"aws_vpn_connection": resourceAwsVpnConnection(),
"aws_vpn_connection_route": resourceAwsVpnConnectionRoute(),
"aws_vpn_gateway": resourceAwsVpnGateway(),
},
ConfigureFunc: providerConfigure,

View File

@@ -54,7 +54,8 @@ func resourceAwsDbInstance() *schema.Resource {
"engine_version": &schema.Schema{
Type: schema.TypeString,
Required: true,
Optional: true,
Computed: true,
},
"storage_encrypted": &schema.Schema{

View File

@@ -31,8 +31,6 @@ func TestAccAWSDBInstance_basic(t *testing.T) {
"aws_db_instance.bar", "allocated_storage", "10"),
resource.TestCheckResourceAttr(
"aws_db_instance.bar", "engine", "mysql"),
resource.TestCheckResourceAttr(
"aws_db_instance.bar", "engine_version", "5.6.21"),
resource.TestCheckResourceAttr(
"aws_db_instance.bar", "license_model", "general-public-license"),
resource.TestCheckResourceAttr(
@@ -111,7 +109,7 @@ func testAccCheckAWSDBInstanceAttributes(v *rds.DBInstance) resource.TestCheckFu
return fmt.Errorf("bad engine: %#v", *v.Engine)
}
if *v.EngineVersion != "5.6.21" {
if *v.EngineVersion == "" {
return fmt.Errorf("bad engine_version: %#v", *v.EngineVersion)
}

View File

@@ -59,9 +59,16 @@ func resourceAwsEcsClusterRead(d *schema.ResourceData, meta interface{}) error {
}
log.Printf("[DEBUG] Received ECS clusters: %s", out.Clusters)
d.SetId(*out.Clusters[0].ClusterArn)
d.Set("name", *out.Clusters[0].ClusterName)
for _, c := range out.Clusters {
if *c.ClusterName == clusterName {
d.SetId(*c.ClusterArn)
d.Set("name", c.ClusterName)
return nil
}
}
log.Printf("[ERR] No matching ECS Cluster found for (%s)", d.Id())
d.SetId("")
return nil
}

View File

@@ -138,6 +138,24 @@ func resourceAwsElasticacheCluster() *schema.Resource {
},
},
"snapshot_window": &schema.Schema{
Type: schema.TypeString,
Optional: true,
},
"snapshot_retention_limit": &schema.Schema{
Type: schema.TypeInt,
Optional: true,
ValidateFunc: func(v interface{}, k string) (ws []string, es []error) {
value := v.(int)
if value > 35 {
es = append(es, fmt.Errorf(
"snapshot retention limit cannot be more than 35 days"))
}
return
},
},
"tags": tagsSchema(),
// apply_immediately is used to determine when the update modifications
@@ -187,6 +205,14 @@ func resourceAwsElasticacheClusterCreate(d *schema.ResourceData, meta interface{
req.CacheParameterGroupName = aws.String(v.(string))
}
if v, ok := d.GetOk("snapshot_retention_limit"); ok {
req.SnapshotRetentionLimit = aws.Int64(int64(v.(int)))
}
if v, ok := d.GetOk("snapshot_window"); ok {
req.SnapshotWindow = aws.String(v.(string))
}
if v, ok := d.GetOk("maintenance_window"); ok {
req.PreferredMaintenanceWindow = aws.String(v.(string))
}
@@ -241,6 +267,12 @@ func resourceAwsElasticacheClusterRead(d *schema.ResourceData, meta interface{})
res, err := conn.DescribeCacheClusters(req)
if err != nil {
if eccErr, ok := err.(awserr.Error); ok && eccErr.Code() == "CacheClusterNotFound" {
log.Printf("[WARN] ElastiCache Cluster (%s) not found", d.Id())
d.SetId("")
return nil
}
return err
}
@@ -261,6 +293,8 @@ func resourceAwsElasticacheClusterRead(d *schema.ResourceData, meta interface{})
d.Set("security_group_ids", c.SecurityGroups)
d.Set("parameter_group_name", c.CacheParameterGroup)
d.Set("maintenance_window", c.PreferredMaintenanceWindow)
d.Set("snapshot_window", c.SnapshotWindow)
d.Set("snapshot_retention_limit", c.SnapshotRetentionLimit)
if c.NotificationConfiguration != nil {
if *c.NotificationConfiguration.TopicStatus == "active" {
d.Set("notification_topic_arn", c.NotificationConfiguration.TopicArn)
@@ -344,6 +378,16 @@ func resourceAwsElasticacheClusterUpdate(d *schema.ResourceData, meta interface{
requestUpdate = true
}
if d.HasChange("snapshot_window") {
req.SnapshotWindow = aws.String(d.Get("snapshot_window").(string))
requestUpdate = true
}
if d.HasChange("snapshot_retention_limit") {
req.SnapshotRetentionLimit = aws.Int64(int64(d.Get("snapshot_retention_limit").(int)))
requestUpdate = true
}
if d.HasChange("num_cache_nodes") {
req.NumCacheNodes = aws.Int64(int64(d.Get("num_cache_nodes").(int)))
requestUpdate = true

View File

@@ -33,6 +33,45 @@ func TestAccAWSElasticacheCluster_basic(t *testing.T) {
})
}
func TestAccAWSElasticacheCluster_snapshotsWithUpdates(t *testing.T) {
var ec elasticache.CacheCluster
ri := genRandInt()
preConfig := fmt.Sprintf(testAccAWSElasticacheClusterConfig_snapshots, ri, ri, ri)
postConfig := fmt.Sprintf(testAccAWSElasticacheClusterConfig_snapshotsUpdated, ri, ri, ri)
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSElasticacheClusterDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: preConfig,
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSElasticacheSecurityGroupExists("aws_elasticache_security_group.bar"),
testAccCheckAWSElasticacheClusterExists("aws_elasticache_cluster.bar", &ec),
resource.TestCheckResourceAttr(
"aws_elasticache_cluster.bar", "snapshot_window", "05:00-09:00"),
resource.TestCheckResourceAttr(
"aws_elasticache_cluster.bar", "snapshot_retention_limit", "3"),
),
},
resource.TestStep{
Config: postConfig,
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSElasticacheSecurityGroupExists("aws_elasticache_security_group.bar"),
testAccCheckAWSElasticacheClusterExists("aws_elasticache_cluster.bar", &ec),
resource.TestCheckResourceAttr(
"aws_elasticache_cluster.bar", "snapshot_window", "07:00-09:00"),
resource.TestCheckResourceAttr(
"aws_elasticache_cluster.bar", "snapshot_retention_limit", "7"),
),
},
},
})
}
func TestAccAWSElasticacheCluster_vpc(t *testing.T) {
var csg elasticache.CacheSubnetGroup
var ec elasticache.CacheCluster
@@ -152,6 +191,75 @@ resource "aws_elasticache_cluster" "bar" {
}
`, genRandInt(), genRandInt(), genRandInt())
var testAccAWSElasticacheClusterConfig_snapshots = `
provider "aws" {
region = "us-east-1"
}
resource "aws_security_group" "bar" {
name = "tf-test-security-group-%03d"
description = "tf-test-security-group-descr"
ingress {
from_port = -1
to_port = -1
protocol = "icmp"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_elasticache_security_group" "bar" {
name = "tf-test-security-group-%03d"
description = "tf-test-security-group-descr"
security_group_names = ["${aws_security_group.bar.name}"]
}
resource "aws_elasticache_cluster" "bar" {
cluster_id = "tf-test-%03d"
engine = "redis"
node_type = "cache.m1.small"
num_cache_nodes = 1
port = 6379
parameter_group_name = "default.redis2.8"
security_group_names = ["${aws_elasticache_security_group.bar.name}"]
snapshot_window = "05:00-09:00"
snapshot_retention_limit = 3
}
`
var testAccAWSElasticacheClusterConfig_snapshotsUpdated = `
provider "aws" {
region = "us-east-1"
}
resource "aws_security_group" "bar" {
name = "tf-test-security-group-%03d"
description = "tf-test-security-group-descr"
ingress {
from_port = -1
to_port = -1
protocol = "icmp"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_elasticache_security_group" "bar" {
name = "tf-test-security-group-%03d"
description = "tf-test-security-group-descr"
security_group_names = ["${aws_security_group.bar.name}"]
}
resource "aws_elasticache_cluster" "bar" {
cluster_id = "tf-test-%03d"
engine = "redis"
node_type = "cache.m1.small"
num_cache_nodes = 1
port = 6379
parameter_group_name = "default.redis2.8"
security_group_names = ["${aws_elasticache_security_group.bar.name}"]
snapshot_window = "07:00-09:00"
snapshot_retention_limit = 7
apply_immediately = true
}
`
var testAccAWSElasticacheClusterInVPCConfig = fmt.Sprintf(`
resource "aws_vpc" "foo" {
cidr_block = "192.168.0.0/16"

View File

@@ -9,6 +9,7 @@ import (
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/service/ec2"
"github.com/aws/aws-sdk-go/service/elb"
"github.com/hashicorp/terraform/helper/hashcode"
"github.com/hashicorp/terraform/helper/resource"
@@ -74,6 +75,11 @@ func resourceAwsElb() *schema.Resource {
Computed: true,
},
"source_security_group_id": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
"subnets": &schema.Schema{
Type: schema.TypeSet,
Elem: &schema.Schema{Type: schema.TypeString},
@@ -323,6 +329,18 @@ func resourceAwsElbRead(d *schema.ResourceData, meta interface{}) error {
d.Set("security_groups", lb.SecurityGroups)
if lb.SourceSecurityGroup != nil {
d.Set("source_security_group", lb.SourceSecurityGroup.GroupName)
// Manually look up the ELB Security Group ID, since it's not provided
var elbVpc string
if lb.VPCId != nil {
elbVpc = *lb.VPCId
}
sgId, err := sourceSGIdByName(meta, *lb.SourceSecurityGroup.GroupName, elbVpc)
if err != nil {
return fmt.Errorf("[WARN] Error looking up ELB Security Group ID: %s", err)
} else {
d.Set("source_security_group_id", sgId)
}
}
d.Set("subnets", lb.Subnets)
d.Set("idle_timeout", lbAttrs.ConnectionSettings.IdleTimeout)
@@ -659,3 +677,52 @@ func validateElbName(v interface{}, k string) (ws []string, errors []error) {
return
}
func sourceSGIdByName(meta interface{}, sg, vpcId string) (string, error) {
conn := meta.(*AWSClient).ec2conn
var filters []*ec2.Filter
var sgFilterName, sgFilterVPCID *ec2.Filter
sgFilterName = &ec2.Filter{
Name: aws.String("group-name"),
Values: []*string{aws.String(sg)},
}
if vpcId != "" {
sgFilterVPCID = &ec2.Filter{
Name: aws.String("vpc-id"),
Values: []*string{aws.String(vpcId)},
}
}
filters = append(filters, sgFilterName)
if sgFilterVPCID != nil {
filters = append(filters, sgFilterVPCID)
}
req := &ec2.DescribeSecurityGroupsInput{
Filters: filters,
}
resp, err := conn.DescribeSecurityGroups(req)
if err != nil {
if ec2err, ok := err.(awserr.Error); ok {
if ec2err.Code() == "InvalidSecurityGroupID.NotFound" ||
ec2err.Code() == "InvalidGroup.NotFound" {
resp = nil
err = nil
}
}
if err != nil {
log.Printf("Error on ELB SG look up: %s", err)
return "", err
}
}
if resp == nil || len(resp.SecurityGroups) == 0 {
return "", fmt.Errorf("No security groups found for name %s and vpc id %s", sg, vpcId)
}
group := resp.SecurityGroups[0]
return *group.GroupId, nil
}
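
One way the new computed `source_security_group_id` attribute can be consumed, sketched with a hypothetical `aws_elb.web` and `aws_security_group.instances` defined elsewhere:

resource "aws_security_group_rule" "from_elb" {
  type                     = "ingress"
  from_port                = 80
  to_port                  = 80
  protocol                 = "tcp"
  security_group_id        = "${aws_security_group.instances.id}"
  # reference the ELB's source security group by ID, which this change exposes for ELBs in a VPC
  source_security_group_id = "${aws_elb.web.source_security_group_id}"
}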

View File

@@ -657,6 +657,15 @@ func testAccCheckAWSELBExists(n string, res *elb.LoadBalancerDescription) resour
*res = *describe.LoadBalancerDescriptions[0]
// Confirm source_security_group_id for ELBs in a VPC
// See https://github.com/hashicorp/terraform/pull/3780
if res.VPCId != nil {
sgid := rs.Primary.Attributes["source_security_group_id"]
if sgid == "" {
return fmt.Errorf("Expected to find source_security_group_id for ELB, but was empty")
}
}
return nil
}
}

View File

@@ -135,6 +135,9 @@ func removeUsersFromGroup(conn *iam.IAM, users []*string, group string) error {
})
if err != nil {
if iamerr, ok := err.(awserr.Error); ok && iamerr.Code() == "NoSuchEntity" {
return nil
}
return err
}
}

View File

@@ -68,6 +68,7 @@ func resourceAwsIamSamlProviderRead(d *schema.ResourceData, meta interface{}) er
}
validUntil := out.ValidUntil.Format(time.RFC1123)
d.Set("arn", d.Id())
d.Set("valid_until", validUntil)
d.Set("saml_metadata_document", *out.SAMLMetadataDocument)

View File

@@ -2,6 +2,7 @@ package aws
import (
"fmt"
"log"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awserr"
@@ -14,9 +15,7 @@ func resourceAwsIamUser() *schema.Resource {
return &schema.Resource{
Create: resourceAwsIamUserCreate,
Read: resourceAwsIamUserRead,
// There is an UpdateUser API call, but goamz doesn't support it yet.
// XXX but we aren't using goamz anymore.
//Update: resourceAwsIamUserUpdate,
Update: resourceAwsIamUserUpdate,
Delete: resourceAwsIamUserDelete,
Schema: map[string]*schema.Schema{
@@ -39,7 +38,6 @@
"name": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"path": &schema.Schema{
Type: schema.TypeString,
@@ -54,12 +52,14 @@
func resourceAwsIamUserCreate(d *schema.ResourceData, meta interface{}) error {
iamconn := meta.(*AWSClient).iamconn
name := d.Get("name").(string)
path := d.Get("path").(string)
request := &iam.CreateUserInput{
Path: aws.String(d.Get("path").(string)),
Path: aws.String(path),
UserName: aws.String(name),
}
log.Println("[DEBUG] Create IAM User request:", request)
createResp, err := iamconn.CreateUser(request)
if err != nil {
return fmt.Errorf("Error creating IAM User %s: %s", name, err)
@@ -69,14 +69,15 @@ func resourceAwsIamUserCreate(d *schema.ResourceData, meta interface{}) error {
func resourceAwsIamUserRead(d *schema.ResourceData, meta interface{}) error {
iamconn := meta.(*AWSClient).iamconn
name := d.Get("name").(string)
request := &iam.GetUserInput{
UserName: aws.String(d.Id()),
UserName: aws.String(name),
}
getResp, err := iamconn.GetUser(request)
if err != nil {
if iamerr, ok := err.(awserr.Error); ok && iamerr.Code() == "NoSuchEntity" { // XXX test me
log.Printf("[WARN] No IAM user by name (%s) found", d.Id())
d.SetId("")
return nil
}
@@ -102,6 +103,32 @@ func resourceAwsIamUserReadResult(d *schema.ResourceData, user *iam.User) error
return nil
}
func resourceAwsIamUserUpdate(d *schema.ResourceData, meta interface{}) error {
if d.HasChange("name") || d.HasChange("path") {
iamconn := meta.(*AWSClient).iamconn
on, nn := d.GetChange("name")
_, np := d.GetChange("path")
request := &iam.UpdateUserInput{
UserName: aws.String(on.(string)),
NewUserName: aws.String(nn.(string)),
NewPath: aws.String(np.(string)),
}
log.Println("[DEBUG] Update IAM User request:", request)
_, err := iamconn.UpdateUser(request)
if err != nil {
if iamerr, ok := err.(awserr.Error); ok && iamerr.Code() == "NoSuchEntity" {
log.Printf("[WARN] No IAM user by name (%s) found", d.Id())
d.SetId("")
return nil
}
return fmt.Errorf("Error updating IAM User %s: %s", d.Id(), err)
}
return resourceAwsIamUserRead(d, meta)
}
return nil
}
func resourceAwsIamUserDelete(d *schema.ResourceData, meta interface{}) error {
iamconn := meta.(*AWSClient).iamconn
@@ -109,6 +136,7 @@ func resourceAwsIamUserDelete(d *schema.ResourceData, meta interface{}) error {
UserName: aws.String(d.Id()),
}
log.Println("[DEBUG] Delete IAM User request:", request)
if _, err := iamconn.DeleteUser(request); err != nil {
return fmt.Errorf("Error deleting IAM User %s: %s", d.Id(), err)
}
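
Because `name` is no longer ForceNew and the resource now has an Update function, editing the name or path calls UpdateUser in place rather than recreating the user. A sketch (names are placeholders):

resource "aws_iam_user" "user" {
  # previously "test-user"; changing these now updates the existing user
  name = "test-user-renamed"
  path = "/newpath/"
}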

View File

@@ -23,7 +23,14 @@ func TestAccAWSUser_basic(t *testing.T) {
Config: testAccAWSUserConfig,
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSUserExists("aws_iam_user.user", &conf),
testAccCheckAWSUserAttributes(&conf),
testAccCheckAWSUserAttributes(&conf, "test-user", "/"),
),
},
resource.TestStep{
Config: testAccAWSUserConfig2,
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSUserExists("aws_iam_user.user", &conf),
testAccCheckAWSUserAttributes(&conf, "test-user2", "/path2/"),
),
},
},
@@ -85,13 +92,13 @@ func testAccCheckAWSUserExists(n string, res *iam.GetUserOutput) resource.TestCh
}
}
func testAccCheckAWSUserAttributes(user *iam.GetUserOutput) resource.TestCheckFunc {
func testAccCheckAWSUserAttributes(user *iam.GetUserOutput, name string, path string) resource.TestCheckFunc {
return func(s *terraform.State) error {
if *user.User.UserName != "test-user" {
if *user.User.UserName != name {
return fmt.Errorf("Bad name: %s", *user.User.UserName)
}
if *user.User.Path != "/" {
if *user.User.Path != path {
return fmt.Errorf("Bad path: %s", *user.User.Path)
}
@@ -105,3 +112,9 @@ resource "aws_iam_user" "user" {
path = "/"
}
`
const testAccAWSUserConfig2 = `
resource "aws_iam_user" "user" {
name = "test-user2"
path = "/path2/"
}
`

View File

@@ -0,0 +1,281 @@
package aws
import (
"fmt"
"strings"
"time"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/service/firehose"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/helper/schema"
)
func resourceAwsKinesisFirehoseDeliveryStream() *schema.Resource {
return &schema.Resource{
Create: resourceAwsKinesisFirehoseDeliveryStreamCreate,
Read: resourceAwsKinesisFirehoseDeliveryStreamRead,
Update: resourceAwsKinesisFirehoseDeliveryStreamUpdate,
Delete: resourceAwsKinesisFirehoseDeliveryStreamDelete,
Schema: map[string]*schema.Schema{
"name": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"destination": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
StateFunc: func(v interface{}) string {
value := v.(string)
return strings.ToLower(value)
},
},
"role_arn": &schema.Schema{
Type: schema.TypeString,
Required: true,
},
"s3_bucket_arn": &schema.Schema{
Type: schema.TypeString,
Required: true,
},
"s3_prefix": &schema.Schema{
Type: schema.TypeString,
Optional: true,
},
"s3_buffer_size": &schema.Schema{
Type: schema.TypeInt,
Optional: true,
Default: 5,
},
"s3_buffer_interval": &schema.Schema{
Type: schema.TypeInt,
Optional: true,
Default: 300,
},
"s3_data_compression": &schema.Schema{
Type: schema.TypeString,
Optional: true,
Default: "UNCOMPRESSED",
},
"arn": &schema.Schema{
Type: schema.TypeString,
Optional: true,
Computed: true,
},
"version_id": &schema.Schema{
Type: schema.TypeString,
Optional: true,
Computed: true,
},
"destination_id": &schema.Schema{
Type: schema.TypeString,
Optional: true,
Computed: true,
},
},
}
}
func resourceAwsKinesisFirehoseDeliveryStreamCreate(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).firehoseconn
if d.Get("destination").(string) != "s3" {
return fmt.Errorf("[ERROR] AWS Kinesis Firehose only supports S3 destinations for the first implementation")
}
sn := d.Get("name").(string)
input := &firehose.CreateDeliveryStreamInput{
DeliveryStreamName: aws.String(sn),
}
s3_config := &firehose.S3DestinationConfiguration{
BucketARN: aws.String(d.Get("s3_bucket_arn").(string)),
RoleARN: aws.String(d.Get("role_arn").(string)),
BufferingHints: &firehose.BufferingHints{
IntervalInSeconds: aws.Int64(int64(d.Get("s3_buffer_interval").(int))),
SizeInMBs: aws.Int64(int64(d.Get("s3_buffer_size").(int))),
},
CompressionFormat: aws.String(d.Get("s3_data_compression").(string)),
}
if v, ok := d.GetOk("s3_prefix"); ok {
s3_config.Prefix = aws.String(v.(string))
}
input.S3DestinationConfiguration = s3_config
_, err := conn.CreateDeliveryStream(input)
if err != nil {
if awsErr, ok := err.(awserr.Error); ok {
return fmt.Errorf("[WARN] Error creating Kinesis Firehose Delivery Stream: \"%s\", code: \"%s\"", awsErr.Message(), awsErr.Code())
}
return err
}
stateConf := &resource.StateChangeConf{
Pending: []string{"CREATING"},
Target: "ACTIVE",
Refresh: firehoseStreamStateRefreshFunc(conn, sn),
Timeout: 5 * time.Minute,
Delay: 10 * time.Second,
MinTimeout: 3 * time.Second,
}
firehoseStream, err := stateConf.WaitForState()
if err != nil {
return fmt.Errorf(
"Error waiting for Kinesis Stream (%s) to become active: %s",
sn, err)
}
s := firehoseStream.(*firehose.DeliveryStreamDescription)
d.SetId(*s.DeliveryStreamARN)
d.Set("arn", s.DeliveryStreamARN)
return resourceAwsKinesisFirehoseDeliveryStreamRead(d, meta)
}
func resourceAwsKinesisFirehoseDeliveryStreamUpdate(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).firehoseconn
if d.Get("destination").(string) != "s3" {
return fmt.Errorf("[ERROR] AWS Kinesis Firehose only supports S3 destinations for the first implementation")
}
sn := d.Get("name").(string)
s3_config := &firehose.S3DestinationUpdate{}
if d.HasChange("role_arn") {
s3_config.RoleARN = aws.String(d.Get("role_arn").(string))
}
if d.HasChange("s3_bucket_arn") {
s3_config.BucketARN = aws.String(d.Get("s3_bucket_arn").(string))
}
if d.HasChange("s3_prefix") {
s3_config.Prefix = aws.String(d.Get("s3_prefix").(string))
}
if d.HasChange("s3_data_compression") {
s3_config.CompressionFormat = aws.String(d.Get("s3_data_compression").(string))
}
if d.HasChange("s3_buffer_interval") || d.HasChange("s3_buffer_size") {
bufferingHints := &firehose.BufferingHints{
IntervalInSeconds: aws.Int64(int64(d.Get("s3_buffer_interval").(int))),
SizeInMBs: aws.Int64(int64(d.Get("s3_buffer_size").(int))),
}
s3_config.BufferingHints = bufferingHints
}
destOpts := &firehose.UpdateDestinationInput{
DeliveryStreamName: aws.String(sn),
CurrentDeliveryStreamVersionId: aws.String(d.Get("version_id").(string)),
DestinationId: aws.String(d.Get("destination_id").(string)),
S3DestinationUpdate: s3_config,
}
_, err := conn.UpdateDestination(destOpts)
if err != nil {
return fmt.Errorf(
"Error Updating Kinesis Firehose Delivery Stream: \"%s\"\n%s",
sn, err)
}
return resourceAwsKinesisFirehoseDeliveryStreamRead(d, meta)
}
func resourceAwsKinesisFirehoseDeliveryStreamRead(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).firehoseconn
sn := d.Get("name").(string)
describeOpts := &firehose.DescribeDeliveryStreamInput{
DeliveryStreamName: aws.String(sn),
}
resp, err := conn.DescribeDeliveryStream(describeOpts)
if err != nil {
if awsErr, ok := err.(awserr.Error); ok {
if awsErr.Code() == "ResourceNotFoundException" {
d.SetId("")
return nil
}
return fmt.Errorf("[WARN] Error reading Kinesis Firehose Delivery Stream: \"%s\", code: \"%s\"", awsErr.Message(), awsErr.Code())
}
return err
}
s := resp.DeliveryStreamDescription
d.Set("version_id", s.VersionId)
d.Set("arn", *s.DeliveryStreamARN)
if len(s.Destinations) > 0 {
destination := s.Destinations[0]
d.Set("destination_id", *destination.DestinationId)
}
return nil
}
func resourceAwsKinesisFirehoseDeliveryStreamDelete(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).firehoseconn
sn := d.Get("name").(string)
_, err := conn.DeleteDeliveryStream(&firehose.DeleteDeliveryStreamInput{
DeliveryStreamName: aws.String(sn),
})
if err != nil {
return err
}
stateConf := &resource.StateChangeConf{
Pending: []string{"DELETING"},
Target: "DESTROYED",
Refresh: firehoseStreamStateRefreshFunc(conn, sn),
Timeout: 5 * time.Minute,
Delay: 10 * time.Second,
MinTimeout: 3 * time.Second,
}
_, err = stateConf.WaitForState()
if err != nil {
return fmt.Errorf(
"Error waiting for Delivery Stream (%s) to be destroyed: %s",
sn, err)
}
d.SetId("")
return nil
}
func firehoseStreamStateRefreshFunc(conn *firehose.Firehose, sn string) resource.StateRefreshFunc {
return func() (interface{}, string, error) {
describeOpts := &firehose.DescribeDeliveryStreamInput{
DeliveryStreamName: aws.String(sn),
}
resp, err := conn.DescribeDeliveryStream(describeOpts)
if err != nil {
if awsErr, ok := err.(awserr.Error); ok {
if awsErr.Code() == "ResourceNotFoundException" {
return 42, "DESTROYED", nil
}
return nil, awsErr.Code(), err
}
return nil, "failed", err
}
return resp.DeliveryStreamDescription, *resp.DeliveryStreamDescription.DeliveryStreamStatus, nil
}
}
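
A hedged configuration sketch for the new resource, exercising the optional S3 settings defined in the schema above (the ARNs are placeholders):

resource "aws_kinesis_firehose_delivery_stream" "example" {
  name                = "terraform-kinesis-firehose-example"
  # "s3" is the only destination this first implementation accepts
  destination         = "s3"
  role_arn            = "arn:aws:iam::123456789012:role/firehose_delivery_role"
  s3_bucket_arn       = "arn:aws:s3:::my-firehose-bucket"
  s3_prefix           = "logs/"
  s3_buffer_size      = 10
  s3_buffer_interval  = 400
  s3_data_compression = "GZIP"
}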

View File

@@ -0,0 +1,189 @@
package aws
import (
"fmt"
"log"
"math/rand"
"strings"
"testing"
"time"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/service/firehose"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
)
func TestAccAWSKinesisFirehoseDeliveryStream_basic(t *testing.T) {
var stream firehose.DeliveryStreamDescription
ri := rand.New(rand.NewSource(time.Now().UnixNano())).Int()
config := fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamConfig_basic, ri, ri)
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckKinesisFirehoseDeliveryStreamDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: config,
Check: resource.ComposeTestCheckFunc(
testAccCheckKinesisFirehoseDeliveryStreamExists("aws_kinesis_firehose_delivery_stream.test_stream", &stream),
testAccCheckAWSKinesisFirehoseDeliveryStreamAttributes(&stream),
),
},
},
})
}
func TestAccAWSKinesisFirehoseDeliveryStream_s3ConfigUpdates(t *testing.T) {
var stream firehose.DeliveryStreamDescription
ri := rand.New(rand.NewSource(time.Now().UnixNano())).Int()
preconfig := fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamConfig_s3, ri, ri)
postConfig := fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamConfig_s3Updates, ri, ri)
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckKinesisFirehoseDeliveryStreamDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: preconfig,
Check: resource.ComposeTestCheckFunc(
testAccCheckKinesisFirehoseDeliveryStreamExists("aws_kinesis_firehose_delivery_stream.test_stream", &stream),
testAccCheckAWSKinesisFirehoseDeliveryStreamAttributes(&stream),
resource.TestCheckResourceAttr(
"aws_kinesis_firehose_delivery_stream.test_stream", "s3_buffer_size", "5"),
resource.TestCheckResourceAttr(
"aws_kinesis_firehose_delivery_stream.test_stream", "s3_buffer_interval", "300"),
resource.TestCheckResourceAttr(
"aws_kinesis_firehose_delivery_stream.test_stream", "s3_data_compression", "UNCOMPRESSED"),
),
},
resource.TestStep{
Config: postConfig,
Check: resource.ComposeTestCheckFunc(
testAccCheckKinesisFirehoseDeliveryStreamExists("aws_kinesis_firehose_delivery_stream.test_stream", &stream),
testAccCheckAWSKinesisFirehoseDeliveryStreamAttributes(&stream),
resource.TestCheckResourceAttr(
"aws_kinesis_firehose_delivery_stream.test_stream", "s3_buffer_size", "10"),
resource.TestCheckResourceAttr(
"aws_kinesis_firehose_delivery_stream.test_stream", "s3_buffer_interval", "400"),
resource.TestCheckResourceAttr(
"aws_kinesis_firehose_delivery_stream.test_stream", "s3_data_compression", "GZIP"),
),
},
},
})
}
func testAccCheckKinesisFirehoseDeliveryStreamExists(n string, stream *firehose.DeliveryStreamDescription) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
log.Printf("State: %#v", s.RootModule().Resources)
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.ID == "" {
return fmt.Errorf("No Kinesis Firehose ID is set")
}
conn := testAccProvider.Meta().(*AWSClient).firehoseconn
describeOpts := &firehose.DescribeDeliveryStreamInput{
DeliveryStreamName: aws.String(rs.Primary.Attributes["name"]),
}
resp, err := conn.DescribeDeliveryStream(describeOpts)
if err != nil {
return err
}
*stream = *resp.DeliveryStreamDescription
return nil
}
}
func testAccCheckAWSKinesisFirehoseDeliveryStreamAttributes(stream *firehose.DeliveryStreamDescription) resource.TestCheckFunc {
return func(s *terraform.State) error {
if !strings.HasPrefix(*stream.DeliveryStreamName, "terraform-kinesis-firehose") {
return fmt.Errorf("Bad Stream name: %s", *stream.DeliveryStreamName)
}
for _, rs := range s.RootModule().Resources {
if rs.Type != "aws_kinesis_firehose_delivery_stream" {
continue
}
if *stream.DeliveryStreamARN != rs.Primary.Attributes["arn"] {
return fmt.Errorf("Bad Delivery Stream ARN\n\t expected: %s\n\tgot: %s\n", rs.Primary.Attributes["arn"], *stream.DeliveryStreamARN)
}
}
return nil
}
}
func testAccCheckKinesisFirehoseDeliveryStreamDestroy(s *terraform.State) error {
for _, rs := range s.RootModule().Resources {
if rs.Type != "aws_kinesis_firehose_delivery_stream" {
continue
}
conn := testAccProvider.Meta().(*AWSClient).firehoseconn
describeOpts := &firehose.DescribeDeliveryStreamInput{
DeliveryStreamName: aws.String(rs.Primary.Attributes["name"]),
}
resp, err := conn.DescribeDeliveryStream(describeOpts)
if err == nil {
if resp.DeliveryStreamDescription != nil && *resp.DeliveryStreamDescription.DeliveryStreamStatus != "DELETING" {
return fmt.Errorf("Error: Delivery Stream still exists")
}
}
return nil
}
return nil
}
var testAccKinesisFirehoseDeliveryStreamConfig_basic = `
resource "aws_s3_bucket" "bucket" {
bucket = "tf-test-bucket-%d"
acl = "private"
}
resource "aws_kinesis_firehose_delivery_stream" "test_stream" {
name = "terraform-kinesis-firehose-basictest-%d"
destination = "s3"
role_arn = "arn:aws:iam::946579370547:role/firehose_delivery_role"
s3_bucket_arn = "${aws_s3_bucket.bucket.arn}"
}`
var testAccKinesisFirehoseDeliveryStreamConfig_s3 = `
resource "aws_s3_bucket" "bucket" {
bucket = "tf-test-bucket-%d"
acl = "private"
}
resource "aws_kinesis_firehose_delivery_stream" "test_stream" {
name = "terraform-kinesis-firehose-s3test-%d"
destination = "s3"
role_arn = "arn:aws:iam::946579370547:role/firehose_delivery_role"
s3_bucket_arn = "${aws_s3_bucket.bucket.arn}"
}`
var testAccKinesisFirehoseDeliveryStreamConfig_s3Updates = `
resource "aws_s3_bucket" "bucket" {
bucket = "tf-test-bucket-01-%d"
acl = "private"
}
resource "aws_kinesis_firehose_delivery_stream" "test_stream" {
name = "terraform-kinesis-firehose-s3test-%d"
destination = "s3"
role_arn = "arn:aws:iam::946579370547:role/firehose_delivery_role"
s3_bucket_arn = "${aws_s3_bucket.bucket.arn}"
s3_buffer_size = 10
s3_buffer_interval = 400
s3_data_compression = "GZIP"
}`

View File

@@ -13,6 +13,8 @@ import (
"github.com/aws/aws-sdk-go/service/lambda"
"github.com/mitchellh/go-homedir"
"errors"
"github.com/hashicorp/terraform/helper/schema"
)
@@ -25,13 +27,28 @@
Schema: map[string]*schema.Schema{
"filename": &schema.Schema{
Type: schema.TypeString,
Required: true,
Type: schema.TypeString,
Optional: true,
ConflictsWith: []string{"s3_bucket", "s3_key", "s3_object_version"},
},
"s3_bucket": &schema.Schema{
Type: schema.TypeString,
Optional: true,
ConflictsWith: []string{"filename"},
},
"s3_key": &schema.Schema{
Type: schema.TypeString,
Optional: true,
ConflictsWith: []string{"filename"},
},
"s3_object_version": &schema.Schema{
Type: schema.TypeString,
Optional: true,
ConflictsWith: []string{"filename"},
},
"description": &schema.Schema{
Type: schema.TypeString,
Optional: true,
ForceNew: true, // TODO make this editable
},
"function_name": &schema.Schema{
Type: schema.TypeString,
@@ -93,22 +110,36 @@ func resourceAwsLambdaFunctionCreate(d *schema.ResourceData, meta interface{}) e
log.Printf("[DEBUG] Creating Lambda Function %s with role %s", functionName, iamRole)
filename, err := homedir.Expand(d.Get("filename").(string))
if err != nil {
return err
var functionCode *lambda.FunctionCode
if v, ok := d.GetOk("filename"); ok {
filename, err := homedir.Expand(v.(string))
if err != nil {
return err
}
zipfile, err := ioutil.ReadFile(filename)
if err != nil {
return err
}
d.Set("source_code_hash", sha256.Sum256(zipfile))
functionCode = &lambda.FunctionCode{
ZipFile: zipfile,
}
} else {
s3Bucket, bucketOk := d.GetOk("s3_bucket")
s3Key, keyOk := d.GetOk("s3_key")
s3ObjectVersion, versionOk := d.GetOk("s3_object_version")
if !bucketOk || !keyOk || !versionOk {
return errors.New("s3_bucket, s3_key and s3_object_version must all be set when using an S3 code source")
}
functionCode = &lambda.FunctionCode{
S3Bucket: aws.String(s3Bucket.(string)),
S3Key: aws.String(s3Key.(string)),
S3ObjectVersion: aws.String(s3ObjectVersion.(string)),
}
}
zipfile, err := ioutil.ReadFile(filename)
if err != nil {
return err
}
d.Set("source_code_hash", sha256.Sum256(zipfile))
log.Printf("[DEBUG] ")
params := &lambda.CreateFunctionInput{
Code: &lambda.FunctionCode{
ZipFile: zipfile,
},
Code: functionCode,
Description: aws.String(d.Get("description").(string)),
FunctionName: aws.String(functionName),
Handler: aws.String(d.Get("handler").(string)),
@@ -118,6 +149,7 @@ func resourceAwsLambdaFunctionCreate(d *schema.ResourceData, meta interface{}) e
Timeout: aws.Int64(int64(d.Get("timeout").(int))),
}
var err error
for i := 0; i < 5; i++ {
_, err = conn.CreateFunction(params)
if awsErr, ok := err.(awserr.Error); ok {

View File

@@ -26,10 +26,11 @@ func resourceAwsLaunchConfiguration() *schema.Resource {
Schema: map[string]*schema.Schema{
"name": &schema.Schema{
Type: schema.TypeString,
Optional: true,
Computed: true,
ForceNew: true,
Type: schema.TypeString,
Optional: true,
Computed: true,
ForceNew: true,
ConflictsWith: []string{"name_prefix"},
ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) {
// https://github.com/boto/botocore/blob/9f322b1/botocore/data/autoscaling/2011-01-01/service-2.json#L1932-L1939
value := v.(string)
@@ -41,6 +42,22 @@
},
},
"name_prefix": &schema.Schema{
Type: schema.TypeString,
Optional: true,
ForceNew: true,
ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) {
// https://github.com/boto/botocore/blob/9f322b1/botocore/data/autoscaling/2011-01-01/service-2.json#L1932-L1939
// uuid is 26 characters, limit the prefix to 229.
value := v.(string)
if len(value) > 229 {
errors = append(errors, fmt.Errorf(
"%q cannot be longer than 229 characters, name is limited to 255", k))
}
return
},
},
"image_id": &schema.Schema{
Type: schema.TypeString,
Required: true,
@@ -386,6 +403,8 @@ func resourceAwsLaunchConfigurationCreate(d *schema.ResourceData, meta interface
var lcName string
if v, ok := d.GetOk("name"); ok {
lcName = v.(string)
} else if v, ok := d.GetOk("name_prefix"); ok {
lcName = resource.PrefixedUniqueId(v.(string))
} else {
lcName = resource.UniqueId()
}

View File

@@ -30,6 +30,14 @@ func TestAccAWSLaunchConfiguration_basic(t *testing.T) {
"aws_launch_configuration.bar", "terraform-"),
),
},
resource.TestStep{
Config: testAccAWSLaunchConfigurationPrefixNameConfig,
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSLaunchConfigurationExists("aws_launch_configuration.baz", &conf),
testAccCheckAWSLaunchConfigurationGeneratedNamePrefix(
"aws_launch_configuration.baz", "baz-"),
),
},
},
})
}
@@ -255,3 +263,13 @@ resource "aws_launch_configuration" "bar" {
associate_public_ip_address = false
}
`
const testAccAWSLaunchConfigurationPrefixNameConfig = `
resource "aws_launch_configuration" "baz" {
name_prefix = "baz-"
image_id = "ami-21f78e11"
instance_type = "t1.micro"
user_data = "foobar-user-data-change"
associate_public_ip_address = false
}
`

View File

@@ -4,6 +4,7 @@ import (
"fmt"
"log"
"regexp"
"strings"
"time"
"github.com/aws/aws-sdk-go/aws"
@@ -122,6 +123,38 @@ func resourceAwsRDSCluster() *schema.Resource {
Elem: &schema.Schema{Type: schema.TypeString},
Set: schema.HashString,
},
"preferred_backup_window": &schema.Schema{
Type: schema.TypeString,
Optional: true,
Computed: true,
},
"preferred_maintenance_window": &schema.Schema{
Type: schema.TypeString,
Optional: true,
Computed: true,
StateFunc: func(val interface{}) string {
if val == nil {
return ""
}
return strings.ToLower(val.(string))
},
},
"backup_retention_period": &schema.Schema{
Type: schema.TypeInt,
Optional: true,
Default: 1,
ValidateFunc: func(v interface{}, k string) (ws []string, es []error) {
value := v.(int)
if value > 35 {
es = append(es, fmt.Errorf(
"backup retention period cannot be more than 35 days"))
}
return
},
},
},
}
}
@@ -156,6 +189,18 @@ func resourceAwsRDSClusterCreate(d *schema.ResourceData, meta interface{}) error
createOpts.AvailabilityZones = expandStringList(attr.List())
}
if v, ok := d.GetOk("backup_retention_period"); ok {
createOpts.BackupRetentionPeriod = aws.Int64(int64(v.(int)))
}
if v, ok := d.GetOk("preferred_backup_window"); ok {
createOpts.PreferredBackupWindow = aws.String(v.(string))
}
if v, ok := d.GetOk("preferred_maintenance_window"); ok {
createOpts.PreferredMaintenanceWindow = aws.String(v.(string))
}
log.Printf("[DEBUG] RDS Cluster create options: %s", createOpts)
resp, err := conn.CreateDBCluster(createOpts)
if err != nil {
@@ -223,6 +268,9 @@ func resourceAwsRDSClusterRead(d *schema.ResourceData, meta interface{}) error {
d.Set("engine", dbc.Engine)
d.Set("master_username", dbc.MasterUsername)
d.Set("port", dbc.Port)
d.Set("backup_retention_period", dbc.BackupRetentionPeriod)
d.Set("preferred_backup_window", dbc.PreferredBackupWindow)
d.Set("preferred_maintenance_window", dbc.PreferredMaintenanceWindow)
var vpcg []string
for _, g := range dbc.VpcSecurityGroups {
@@ -263,6 +311,18 @@ func resourceAwsRDSClusterUpdate(d *schema.ResourceData, meta interface{}) error
}
}
if d.HasChange("preferred_backup_window") {
req.PreferredBackupWindow = aws.String(d.Get("preferred_backup_window").(string))
}
if d.HasChange("preferred_maintenance_window") {
req.PreferredMaintenanceWindow = aws.String(d.Get("preferred_maintenance_window").(string))
}
if d.HasChange("backup_retention_period") {
req.BackupRetentionPeriod = aws.Int64(int64(d.Get("backup_retention_period").(int)))
}
_, err := conn.ModifyDBCluster(req)
if err != nil {
return fmt.Errorf("[WARN] Error modifying RDS Cluster (%s): %s", d.Id(), err)

View File

@@ -17,13 +17,16 @@ import (
func TestAccAWSRDSCluster_basic(t *testing.T) {
var v rds.DBCluster
ri := rand.New(rand.NewSource(time.Now().UnixNano())).Int()
config := fmt.Sprintf(testAccAWSClusterConfig, ri)
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSClusterDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSClusterConfig,
Config: config,
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSClusterExists("aws_rds_cluster.default", &v),
),
@@ -32,6 +35,47 @@ func TestAccAWSRDSCluster_basic(t *testing.T) {
})
}
func TestAccAWSRDSCluster_backupsUpdate(t *testing.T) {
var v rds.DBCluster
ri := rand.New(rand.NewSource(time.Now().UnixNano())).Int()
preConfig := fmt.Sprintf(testAccAWSClusterConfig_backups, ri)
postConfig := fmt.Sprintf(testAccAWSClusterConfig_backupsUpdate, ri)
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSClusterDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: preConfig,
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSClusterExists("aws_rds_cluster.default", &v),
resource.TestCheckResourceAttr(
"aws_rds_cluster.default", "preferred_backup_window", "07:00-09:00"),
resource.TestCheckResourceAttr(
"aws_rds_cluster.default", "backup_retention_period", "5"),
resource.TestCheckResourceAttr(
"aws_rds_cluster.default", "preferred_maintenance_window", "tue:04:00-tue:04:30"),
),
},
resource.TestStep{
Config: postConfig,
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSClusterExists("aws_rds_cluster.default", &v),
resource.TestCheckResourceAttr(
"aws_rds_cluster.default", "preferred_backup_window", "03:00-09:00"),
resource.TestCheckResourceAttr(
"aws_rds_cluster.default", "backup_retention_period", "10"),
resource.TestCheckResourceAttr(
"aws_rds_cluster.default", "preferred_maintenance_window", "wed:01:00-wed:01:30"),
),
},
},
})
}
func testAccCheckAWSClusterDestroy(s *terraform.State) error {
for _, rs := range s.RootModule().Resources {
if rs.Type != "aws_rds_cluster" {
@@ -97,12 +141,36 @@ func testAccCheckAWSClusterExists(n string, v *rds.DBCluster) resource.TestCheck
}
}
// Add some randomness to the name to avoid collisions
var testAccAWSClusterConfig = fmt.Sprintf(`
var testAccAWSClusterConfig = `
resource "aws_rds_cluster" "default" {
cluster_identifier = "tf-aurora-cluster-%d"
availability_zones = ["us-west-2a","us-west-2b","us-west-2c"]
database_name = "mydb"
master_username = "foo"
master_password = "mustbeeightcharaters"
}`, rand.New(rand.NewSource(time.Now().UnixNano())).Int())
}`
var testAccAWSClusterConfig_backups = `
resource "aws_rds_cluster" "default" {
cluster_identifier = "tf-aurora-cluster-%d"
availability_zones = ["us-west-2a","us-west-2b","us-west-2c"]
database_name = "mydb"
master_username = "foo"
master_password = "mustbeeightcharaters"
backup_retention_period = 5
preferred_backup_window = "07:00-09:00"
preferred_maintenance_window = "tue:04:00-tue:04:30"
}`
var testAccAWSClusterConfig_backupsUpdate = `
resource "aws_rds_cluster" "default" {
cluster_identifier = "tf-aurora-cluster-%d"
availability_zones = ["us-west-2a","us-west-2b","us-west-2c"]
database_name = "mydb"
master_username = "foo"
master_password = "mustbeeightcharaters"
backup_retention_period = 10
preferred_backup_window = "03:00-09:00"
preferred_maintenance_window = "wed:01:00-wed:01:30"
apply_immediately = true
}`

View File

@@ -28,6 +28,10 @@ func resourceAwsRoute53Record() *schema.Resource {
Type: schema.TypeString,
Required: true,
ForceNew: true,
StateFunc: func(v interface{}) string {
value := v.(string)
return strings.ToLower(value)
},
},
"fqdn": &schema.Schema{
@ -192,12 +196,13 @@ func resourceAwsRoute53RecordCreate(d *schema.ResourceData, meta interface{}) er
// Generate an ID
vars := []string{
zone,
d.Get("name").(string),
strings.ToLower(d.Get("name").(string)),
d.Get("type").(string),
}
if v, ok := d.GetOk("set_identifier"); ok {
vars = append(vars, v.(string))
}
d.SetId(strings.Join(vars, "_"))
// Wait until we are done
@ -242,6 +247,8 @@ func resourceAwsRoute53RecordRead(d *schema.ResourceData, meta interface{}) erro
StartRecordType: aws.String(d.Get("type").(string)),
}
log.Printf("[DEBUG] List resource records sets for zone: %s, opts: %s",
zone, lopts)
resp, err := conn.ListResourceRecordSets(lopts)
if err != nil {
return err
@ -251,7 +258,7 @@ func resourceAwsRoute53RecordRead(d *schema.ResourceData, meta interface{}) erro
found := false
for _, record := range resp.ResourceRecordSets {
name := cleanRecordName(*record.Name)
if FQDN(name) != FQDN(*lopts.StartRecordName) {
if FQDN(strings.ToLower(name)) != FQDN(strings.ToLower(*lopts.StartRecordName)) {
continue
}
if strings.ToUpper(*record.Type) != strings.ToUpper(*lopts.StartRecordType) {
@ -279,6 +286,7 @@ func resourceAwsRoute53RecordRead(d *schema.ResourceData, meta interface{}) erro
}
if !found {
log.Printf("[DEBUG] No matching record found for: %s, removing from state file", en)
d.SetId("")
}
@ -440,7 +448,7 @@ func cleanRecordName(name string) string {
// If it does not, add the zone name to form a fully qualified name
// and keep AWS happy.
func expandRecordName(name, zone string) string {
rn := strings.TrimSuffix(name, ".")
rn := strings.ToLower(strings.TrimSuffix(name, "."))
zone = strings.TrimSuffix(zone, ".")
if !strings.HasSuffix(rn, zone) {
rn = strings.Join([]string{name, zone}, ".")


@ -291,7 +291,7 @@ func testAccCheckRoute53RecordExists(n string) resource.TestCheckFunc {
// rec := resp.ResourceRecordSets[0]
for _, rec := range resp.ResourceRecordSets {
recName := cleanRecordName(*rec.Name)
if FQDN(recName) == FQDN(en) && *rec.Type == rType {
if FQDN(strings.ToLower(recName)) == FQDN(strings.ToLower(en)) && *rec.Type == rType {
return nil
}
}
@ -306,7 +306,7 @@ resource "aws_route53_zone" "main" {
resource "aws_route53_record" "default" {
zone_id = "${aws_route53_zone.main.zone_id}"
name = "www.notexample.com"
name = "www.NOTexamplE.com"
type = "A"
ttl = "30"
records = ["127.0.0.1", "127.0.0.27"]


@ -44,10 +44,13 @@ func resourceAwsSnsTopic() *schema.Resource {
"policy": &schema.Schema{
Type: schema.TypeString,
Optional: true,
ForceNew: false,
Computed: true,
StateFunc: func(v interface{}) string {
jsonb := []byte(v.(string))
s, ok := v.(string)
if !ok || s == "" {
return ""
}
jsonb := []byte(s)
buffer := new(bytes.Buffer)
if err := json.Compact(buffer, jsonb); err != nil {
log.Printf("[WARN] Error compacting JSON for Policy in SNS Topic")


@ -100,6 +100,7 @@ func resourceDigitalOceanDroplet() *schema.Resource {
"user_data": &schema.Schema{
Type: schema.TypeString,
Optional: true,
ForceNew: true,
},
},
}
@ -185,10 +186,11 @@ func resourceDigitalOceanDropletRead(d *schema.ResourceData, meta interface{}) e
}
// Retrieve the droplet properties for updating the state
droplet, _, err := client.Droplets.Get(id)
droplet, resp, err := client.Droplets.Get(id)
if err != nil {
// check if the droplet no longer exists.
if err.Error() == "Error retrieving droplet: API Error: 404 Not Found" {
if resp.StatusCode == 404 {
log.Printf("[WARN] DigitalOcean Droplet (%s) not found", d.Id())
d.SetId("")
return nil
}


@ -71,6 +71,38 @@ func TestAccDigitalOceanDroplet_Update(t *testing.T) {
})
}
func TestAccDigitalOceanDroplet_UpdateUserData(t *testing.T) {
var afterCreate, afterUpdate godo.Droplet
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckDigitalOceanDropletDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccCheckDigitalOceanDropletConfig_basic,
Check: resource.ComposeTestCheckFunc(
testAccCheckDigitalOceanDropletExists("digitalocean_droplet.foobar", &afterCreate),
testAccCheckDigitalOceanDropletAttributes(&afterCreate),
),
},
resource.TestStep{
Config: testAccCheckDigitalOceanDropletConfig_userdata_update,
Check: resource.ComposeTestCheckFunc(
testAccCheckDigitalOceanDropletExists("digitalocean_droplet.foobar", &afterUpdate),
resource.TestCheckResourceAttr(
"digitalocean_droplet.foobar",
"user_data",
"foobar foobar"),
testAccCheckDigitalOceanDropletRecreated(
t, &afterCreate, &afterUpdate),
),
},
},
})
}
func TestAccDigitalOceanDroplet_PrivateNetworkingIpv6(t *testing.T) {
var droplet godo.Droplet
@ -233,6 +265,16 @@ func testAccCheckDigitalOceanDropletExists(n string, droplet *godo.Droplet) reso
}
}
func testAccCheckDigitalOceanDropletRecreated(t *testing.T,
before, after *godo.Droplet) resource.TestCheckFunc {
return func(s *terraform.State) error {
if before.ID == after.ID {
t.Fatalf("Expected change of droplet IDs, but both were %v", before.ID)
}
return nil
}
}
// Not sure if this check should remain here as the underlying
// function is changed and is tested indirectly by almost all
// other tests already
@ -261,6 +303,16 @@ resource "digitalocean_droplet" "foobar" {
}
`
const testAccCheckDigitalOceanDropletConfig_userdata_update = `
resource "digitalocean_droplet" "foobar" {
name = "foo"
size = "512mb"
image = "centos-5-8-x32"
region = "nyc3"
user_data = "foobar foobar"
}
`
const testAccCheckDigitalOceanDropletConfig_RenameAndResize = `
resource "digitalocean_droplet" "foobar" {
name = "baz"


@ -11,6 +11,7 @@ func canonicalizeServiceScope(scope string) string {
"datastore": "https://www.googleapis.com/auth/datastore",
"logging-write": "https://www.googleapis.com/auth/logging.write",
"monitoring": "https://www.googleapis.com/auth/monitoring",
"pubsub": "https://www.googleapis.com/auth/pubsub",
"sql": "https://www.googleapis.com/auth/sqlservice",
"sql-admin": "https://www.googleapis.com/auth/sqlservice.admin",
"storage-full": "https://www.googleapis.com/auth/devstorage.full_control",
@ -22,9 +23,9 @@ func canonicalizeServiceScope(scope string) string {
"userinfo-email": "https://www.googleapis.com/auth/userinfo.email",
}
if matchedUrl, ok := scopeMap[scope]; ok {
return matchedUrl
} else {
return scope
if matchedURL, ok := scopeMap[scope]; ok {
return matchedURL
}
return scope
}


@ -95,6 +95,7 @@ func resourceComputeInstanceV2() *schema.Resource {
Type: schema.TypeSet,
Optional: true,
ForceNew: false,
Computed: true,
Elem: &schema.Schema{Type: schema.TypeString},
Set: schema.HashString,
},


@ -117,51 +117,53 @@ func TestAccNetworkingV2Network_fullstack(t *testing.T) {
var subnet subnets.Subnet
var testAccNetworkingV2Network_fullstack = fmt.Sprintf(`
resource "openstack_networking_network_v2" "foo" {
region = "%s"
name = "network_1"
admin_state_up = "true"
}
resource "openstack_networking_network_v2" "foo" {
region = "%s"
name = "network_1"
admin_state_up = "true"
}
resource "openstack_networking_subnet_v2" "foo" {
region = "%s"
name = "subnet_1"
network_id = "${openstack_networking_network_v2.foo.id}"
cidr = "192.168.199.0/24"
ip_version = 4
}
resource "openstack_networking_subnet_v2" "foo" {
region = "%s"
name = "subnet_1"
network_id = "${openstack_networking_network_v2.foo.id}"
cidr = "192.168.199.0/24"
ip_version = 4
}
resource "openstack_compute_secgroup_v2" "foo" {
region = "%s"
name = "secgroup_1"
description = "a security group"
rule {
from_port = 22
to_port = 22
ip_protocol = "tcp"
cidr = "0.0.0.0/0"
}
}
resource "openstack_compute_secgroup_v2" "foo" {
region = "%s"
name = "secgroup_1"
description = "a security group"
rule {
from_port = 22
to_port = 22
ip_protocol = "tcp"
cidr = "0.0.0.0/0"
}
}
resource "openstack_networking_port_v2" "foo" {
region = "%s"
name = "port_1"
network_id = "${openstack_networking_network_v2.foo.id}"
admin_state_up = "true"
security_groups = ["${openstack_compute_secgroup_v2.foo.id}"]
resource "openstack_networking_port_v2" "foo" {
region = "%s"
name = "port_1"
network_id = "${openstack_networking_network_v2.foo.id}"
admin_state_up = "true"
security_groups = ["${openstack_compute_secgroup_v2.foo.id}"]
fixed_ips {
"subnet_id" = "${openstack_networking_subnet_v2.foo.id}"
"ip_address" = "192.168.199.23"
}
}
depends_on = ["openstack_networking_subnet_v2.foo"]
}
resource "openstack_compute_instance_v2" "foo" {
region = "%s"
name = "terraform-test"
security_groups = ["${openstack_compute_secgroup_v2.foo.name}"]
resource "openstack_compute_instance_v2" "foo" {
region = "%s"
name = "terraform-test"
security_groups = ["${openstack_compute_secgroup_v2.foo.name}"]
network {
port = "${openstack_networking_port_v2.foo.id}"
}
}`, region, region, region, region, region)
network {
port = "${openstack_networking_port_v2.foo.id}"
}
}`, region, region, region, region, region)
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },


@ -78,6 +78,23 @@ func resourceNetworkingPortV2() *schema.Resource {
ForceNew: true,
Computed: true,
},
"fixed_ips": &schema.Schema{
Type: schema.TypeList,
Optional: true,
ForceNew: false,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"subnet_id": &schema.Schema{
Type: schema.TypeString,
Required: true,
},
"ip_address": &schema.Schema{
Type: schema.TypeString,
Required: true,
},
},
},
},
},
}
}
@ -98,6 +115,7 @@ func resourceNetworkingPortV2Create(d *schema.ResourceData, meta interface{}) er
DeviceOwner: d.Get("device_owner").(string),
SecurityGroups: resourcePortSecurityGroupsV2(d),
DeviceID: d.Get("device_id").(string),
FixedIPs: resourcePortFixedIpsV2(d),
}
log.Printf("[DEBUG] Create Options: %#v", createOpts)
@ -146,6 +164,7 @@ func resourceNetworkingPortV2Read(d *schema.ResourceData, meta interface{}) erro
d.Set("device_owner", p.DeviceOwner)
d.Set("security_groups", p.SecurityGroups)
d.Set("device_id", p.DeviceID)
d.Set("fixed_ips", p.FixedIPs)
return nil
}
@ -179,6 +198,10 @@ func resourceNetworkingPortV2Update(d *schema.ResourceData, meta interface{}) er
updateOpts.DeviceID = d.Get("device_id").(string)
}
if d.HasChange("fixed_ips") {
updateOpts.FixedIPs = resourcePortFixedIpsV2(d)
}
log.Printf("[DEBUG] Updating Port %s with options: %+v", d.Id(), updateOpts)
_, err = ports.Update(networkingClient, d.Id(), updateOpts).Extract()
@ -223,6 +246,20 @@ func resourcePortSecurityGroupsV2(d *schema.ResourceData) []string {
return groups
}
func resourcePortFixedIpsV2(d *schema.ResourceData) []ports.IP {
rawIP := d.Get("fixed_ips").([]interface{})
ip := make([]ports.IP, len(rawIP))
for i, raw := range rawIP {
rawMap := raw.(map[string]interface{})
ip[i] = ports.IP{
SubnetID: rawMap["subnet_id"].(string),
IPAddress: rawMap["ip_address"].(string),
}
}
return ip
}
func resourcePortAdminStateUpV2(d *schema.ResourceData) *bool {
value := false


@ -10,6 +10,7 @@ import (
"github.com/rackspace/gophercloud/openstack/networking/v2/networks"
"github.com/rackspace/gophercloud/openstack/networking/v2/ports"
"github.com/rackspace/gophercloud/openstack/networking/v2/subnets"
)
func TestAccNetworkingV2Port_basic(t *testing.T) {
@ -17,6 +18,7 @@ func TestAccNetworkingV2Port_basic(t *testing.T) {
var network networks.Network
var port ports.Port
var subnet subnets.Subnet
var testAccNetworkingV2Port_basic = fmt.Sprintf(`
resource "openstack_networking_network_v2" "foo" {
@ -25,12 +27,24 @@ func TestAccNetworkingV2Port_basic(t *testing.T) {
admin_state_up = "true"
}
resource "openstack_networking_subnet_v2" "foo" {
region = "%s"
name = "subnet_1"
network_id = "${openstack_networking_network_v2.foo.id}"
cidr = "192.168.199.0/24"
ip_version = 4
}
resource "openstack_networking_port_v2" "foo" {
region = "%s"
name = "port_1"
network_id = "${openstack_networking_network_v2.foo.id}"
admin_state_up = "true"
}`, region, region)
fixed_ips {
subnet_id = "${openstack_networking_subnet_v2.foo.id}"
ip_address = "192.168.199.23"
}
}`, region, region, region)
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
@ -40,6 +54,7 @@ func TestAccNetworkingV2Port_basic(t *testing.T) {
resource.TestStep{
Config: testAccNetworkingV2Port_basic,
Check: resource.ComposeTestCheckFunc(
testAccCheckNetworkingV2SubnetExists(t, "openstack_networking_subnet_v2.foo", &subnet),
testAccCheckNetworkingV2NetworkExists(t, "openstack_networking_network_v2.foo", &network),
testAccCheckNetworkingV2PortExists(t, "openstack_networking_port_v2.foo", &port),
),


@ -1000,7 +1000,6 @@ func (vm *virtualMachine) deployVirtualMachine(c *govmomi.Client) error {
NumCPUs: vm.vcpu,
NumCoresPerSocket: 1,
MemoryMB: vm.memoryMb,
DeviceChange: networkDevices,
}
log.Printf("[DEBUG] virtual machine config spec: %v", configSpec)
@ -1024,11 +1023,10 @@ func (vm *virtualMachine) deployVirtualMachine(c *govmomi.Client) error {
// make vm clone spec
cloneSpec := types.VirtualMachineCloneSpec{
Location: relocateSpec,
Template: false,
Config: &configSpec,
Customization: &customSpec,
PowerOn: true,
Location: relocateSpec,
Template: false,
Config: &configSpec,
PowerOn: false,
}
log.Printf("[DEBUG] clone spec: %v", cloneSpec)
@ -1048,6 +1046,43 @@ func (vm *virtualMachine) deployVirtualMachine(c *govmomi.Client) error {
}
log.Printf("[DEBUG] new vm: %v", newVM)
devices, err := newVM.Device(context.TODO())
if err != nil {
log.Printf("[DEBUG] Template devices can't be found")
return err
}
for _, dvc := range devices {
// Issue 3559/3560: Delete all ethernet devices to add the correct ones later
if devices.Type(dvc) == "ethernet" {
err := newVM.RemoveDevice(context.TODO(), dvc)
if err != nil {
return err
}
}
}
// Add Network devices
for _, dvc := range networkDevices {
err := newVM.AddDevice(
context.TODO(), dvc.GetVirtualDeviceConfigSpec().Device)
if err != nil {
return err
}
}
taskb, err := newVM.Customize(context.TODO(), customSpec)
if err != nil {
return err
}
_, err = taskb.WaitForResult(context.TODO(), nil)
if err != nil {
return err
}
log.Printf("[DEBUG]VM customization finished")
newVM.PowerOn(context.TODO())
ip, err := newVM.WaitForIP(context.TODO())
if err != nil {
return err


@ -15,9 +15,21 @@ import (
func TestAccVSphereVirtualMachine_basic(t *testing.T) {
var vm virtualMachine
datacenter := os.Getenv("VSPHERE_DATACENTER")
cluster := os.Getenv("VSPHERE_CLUSTER")
datastore := os.Getenv("VSPHERE_DATASTORE")
var locationOpt string
var datastoreOpt string
if v := os.Getenv("VSPHERE_DATACENTER"); v != "" {
locationOpt += fmt.Sprintf(" datacenter = \"%s\"\n", v)
}
if v := os.Getenv("VSPHERE_CLUSTER"); v != "" {
locationOpt += fmt.Sprintf(" cluster = \"%s\"\n", v)
}
if v := os.Getenv("VSPHERE_RESOURCE_POOL"); v != "" {
locationOpt += fmt.Sprintf(" resource_pool = \"%s\"\n", v)
}
if v := os.Getenv("VSPHERE_DATASTORE"); v != "" {
datastoreOpt = fmt.Sprintf(" datastore = \"%s\"\n", v)
}
template := os.Getenv("VSPHERE_TEMPLATE")
gateway := os.Getenv("VSPHERE_NETWORK_GATEWAY")
label := os.Getenv("VSPHERE_NETWORK_LABEL")
@ -31,28 +43,23 @@ func TestAccVSphereVirtualMachine_basic(t *testing.T) {
resource.TestStep{
Config: fmt.Sprintf(
testAccCheckVSphereVirtualMachineConfig_basic,
datacenter,
cluster,
locationOpt,
gateway,
label,
ip_address,
datastore,
datastoreOpt,
template,
),
Check: resource.ComposeTestCheckFunc(
testAccCheckVSphereVirtualMachineExists("vsphere_virtual_machine.foo", &vm),
resource.TestCheckResourceAttr(
"vsphere_virtual_machine.foo", "name", "terraform-test"),
resource.TestCheckResourceAttr(
"vsphere_virtual_machine.foo", "datacenter", datacenter),
resource.TestCheckResourceAttr(
"vsphere_virtual_machine.foo", "vcpu", "2"),
resource.TestCheckResourceAttr(
"vsphere_virtual_machine.foo", "memory", "4096"),
resource.TestCheckResourceAttr(
"vsphere_virtual_machine.foo", "disk.#", "2"),
resource.TestCheckResourceAttr(
"vsphere_virtual_machine.foo", "disk.0.datastore", datastore),
resource.TestCheckResourceAttr(
"vsphere_virtual_machine.foo", "disk.0.template", template),
resource.TestCheckResourceAttr(
@ -67,12 +74,23 @@ func TestAccVSphereVirtualMachine_basic(t *testing.T) {
func TestAccVSphereVirtualMachine_dhcp(t *testing.T) {
var vm virtualMachine
datacenter := os.Getenv("VSPHERE_DATACENTER")
cluster := os.Getenv("VSPHERE_CLUSTER")
datastore := os.Getenv("VSPHERE_DATASTORE")
var locationOpt string
var datastoreOpt string
if v := os.Getenv("VSPHERE_DATACENTER"); v != "" {
locationOpt += fmt.Sprintf(" datacenter = \"%s\"\n", v)
}
if v := os.Getenv("VSPHERE_CLUSTER"); v != "" {
locationOpt += fmt.Sprintf(" cluster = \"%s\"\n", v)
}
if v := os.Getenv("VSPHERE_RESOURCE_POOL"); v != "" {
locationOpt += fmt.Sprintf(" resource_pool = \"%s\"\n", v)
}
if v := os.Getenv("VSPHERE_DATASTORE"); v != "" {
datastoreOpt = fmt.Sprintf(" datastore = \"%s\"\n", v)
}
template := os.Getenv("VSPHERE_TEMPLATE")
label := os.Getenv("VSPHERE_NETWORK_LABEL_DHCP")
password := os.Getenv("VSPHERE_VM_PASSWORD")
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
@ -82,27 +100,21 @@ func TestAccVSphereVirtualMachine_dhcp(t *testing.T) {
resource.TestStep{
Config: fmt.Sprintf(
testAccCheckVSphereVirtualMachineConfig_dhcp,
datacenter,
cluster,
locationOpt,
label,
datastore,
datastoreOpt,
template,
password,
),
Check: resource.ComposeTestCheckFunc(
testAccCheckVSphereVirtualMachineExists("vsphere_virtual_machine.bar", &vm),
resource.TestCheckResourceAttr(
"vsphere_virtual_machine.bar", "name", "terraform-test"),
resource.TestCheckResourceAttr(
"vsphere_virtual_machine.bar", "datacenter", datacenter),
resource.TestCheckResourceAttr(
"vsphere_virtual_machine.bar", "vcpu", "2"),
resource.TestCheckResourceAttr(
"vsphere_virtual_machine.bar", "memory", "4096"),
resource.TestCheckResourceAttr(
"vsphere_virtual_machine.bar", "disk.#", "1"),
resource.TestCheckResourceAttr(
"vsphere_virtual_machine.bar", "disk.0.datastore", datastore),
resource.TestCheckResourceAttr(
"vsphere_virtual_machine.bar", "disk.0.template", template),
resource.TestCheckResourceAttr(
@ -168,20 +180,6 @@ func testAccCheckVSphereVirtualMachineExists(n string, vm *virtualMachine) resou
}
_, err = object.NewSearchIndex(client.Client).FindChild(context.TODO(), dcFolders.VmFolder, rs.Primary.Attributes["name"])
/*
vmRef, err := client.SearchIndex().FindChild(dcFolders.VmFolder, rs.Primary.Attributes["name"])
if err != nil {
return fmt.Errorf("error %s", err)
}
found := govmomi.NewVirtualMachine(client, vmRef.Reference())
fmt.Printf("%v", found)
if found.Name != rs.Primary.ID {
return fmt.Errorf("Instance not found")
}
*instance = *found
*/
*vm = virtualMachine{
name: rs.Primary.ID,
@ -194,8 +192,7 @@ func testAccCheckVSphereVirtualMachineExists(n string, vm *virtualMachine) resou
const testAccCheckVSphereVirtualMachineConfig_basic = `
resource "vsphere_virtual_machine" "foo" {
name = "terraform-test"
datacenter = "%s"
cluster = "%s"
%s
vcpu = 2
memory = 4096
gateway = "%s"
@ -205,7 +202,7 @@ resource "vsphere_virtual_machine" "foo" {
subnet_mask = "255.255.255.0"
}
disk {
datastore = "%s"
%s
template = "%s"
iops = 500
}
@ -219,22 +216,15 @@ resource "vsphere_virtual_machine" "foo" {
const testAccCheckVSphereVirtualMachineConfig_dhcp = `
resource "vsphere_virtual_machine" "bar" {
name = "terraform-test"
datacenter = "%s"
cluster = "%s"
%s
vcpu = 2
memory = 4096
network_interface {
label = "%s"
}
disk {
datastore = "%s"
%s
template = "%s"
}
connection {
host = "${self.network_interface.0.ip_address}"
user = "root"
password = "%s"
}
}
`


@ -1,4 +1,4 @@
// generated by stringer -type=countHookAction hook_count_action.go; DO NOT EDIT
// Code generated by "stringer -type=countHookAction hook_count_action.go"; DO NOT EDIT
package command


@ -68,14 +68,16 @@ func (c *PlanCommand) Run(args []string) int {
c.Ui.Error(err.Error())
return 1
}
if !validateContext(ctx, c.Ui) {
return 1
}
if err := ctx.Input(c.InputMode()); err != nil {
c.Ui.Error(fmt.Sprintf("Error configuring: %s", err))
return 1
}
if !validateContext(ctx, c.Ui) {
return 1
}
if refresh {
c.Ui.Output("Refreshing Terraform state prior to plan...\n")
state, err := ctx.Refresh()


@ -1,6 +1,7 @@
package command
import (
"bytes"
"io/ioutil"
"os"
"path/filepath"
@ -330,6 +331,30 @@ func TestPlan_vars(t *testing.T) {
}
}
func TestPlan_varsUnset(t *testing.T) {
// Disable test mode so that input will be prompted for
test = false
defer func() { test = true }()
defaultInputReader = bytes.NewBufferString("bar\n")
p := testProvider()
ui := new(cli.MockUi)
c := &PlanCommand{
Meta: Meta{
ContextOpts: testCtxConfig(p),
Ui: ui,
},
}
args := []string{
testFixturePath("plan-vars"),
}
if code := c.Run(args); code != 0 {
t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String())
}
}
func TestPlan_varFile(t *testing.T) {
varFilePath := testTempFile(t)
if err := ioutil.WriteFile(varFilePath, []byte(planVarFile), 0644); err != nil {


@ -25,6 +25,7 @@ func init() {
"cidrhost": interpolationFuncCidrHost(),
"cidrnetmask": interpolationFuncCidrNetmask(),
"cidrsubnet": interpolationFuncCidrSubnet(),
"coalesce": interpolationFuncCoalesce(),
"compact": interpolationFuncCompact(),
"concat": interpolationFuncConcat(),
"element": interpolationFuncElement(),
@ -145,6 +146,30 @@ func interpolationFuncCidrSubnet() ast.Function {
}
}
// interpolationFuncCoalesce implements the "coalesce" function that
// returns the first non-empty string from the provided input arguments
func interpolationFuncCoalesce() ast.Function {
return ast.Function{
ArgTypes: []ast.Type{ast.TypeString},
ReturnType: ast.TypeString,
Variadic: true,
VariadicType: ast.TypeString,
Callback: func(args []interface{}) (interface{}, error) {
if len(args) < 2 {
return nil, fmt.Errorf("must provide at least two arguments")
}
for _, arg := range args {
argument := arg.(string)
if argument != "" {
return argument, nil
}
}
return "", nil
},
}
}
// interpolationFuncConcat implements the "concat" function that
// concatenates multiple strings. This isn't actually necessary anymore
// since our language supports string concat natively, but for backwards


@ -147,6 +147,33 @@ func TestInterpolateFuncCidrSubnet(t *testing.T) {
})
}
func TestInterpolateFuncCoalesce(t *testing.T) {
testFunction(t, testFunctionConfig{
Cases: []testFunctionCase{
{
`${coalesce("first", "second", "third")}`,
"first",
false,
},
{
`${coalesce("", "second", "third")}`,
"second",
false,
},
{
`${coalesce("", "", "")}`,
"",
false,
},
{
`${coalesce("foo")}`,
nil,
true,
},
},
})
}
func TestInterpolateFuncDeprecatedConcat(t *testing.T) {
testFunction(t, testFunctionConfig{
Cases: []testFunctionCase{


@ -1,4 +1,4 @@
// generated by stringer -type=Type; DO NOT EDIT
// Code generated by "stringer -type=Type"; DO NOT EDIT
package ast


@ -25,7 +25,7 @@ func LoadJSON(raw json.RawMessage) (*Config, error) {
// Start building the result
hclConfig := &hclConfigurable{
Object: obj,
Root: obj,
}
return hclConfig.Config()


@ -5,15 +5,15 @@ import (
"io/ioutil"
"github.com/hashicorp/hcl"
hclobj "github.com/hashicorp/hcl/hcl"
"github.com/hashicorp/hcl/hcl/ast"
"github.com/mitchellh/mapstructure"
)
// hclConfigurable is an implementation of configurable that knows
// how to turn HCL configuration into a *Config object.
type hclConfigurable struct {
File string
Object *hclobj.Object
File string
Root *ast.File
}
func (t *hclConfigurable) Config() (*Config, error) {
@ -36,7 +36,13 @@ func (t *hclConfigurable) Config() (*Config, error) {
Variable map[string]*hclVariable
}
if err := hcl.DecodeObject(&rawConfig, t.Object); err != nil {
// Top-level item should be the object list
list, ok := t.Root.Node.(*ast.ObjectList)
if !ok {
return nil, fmt.Errorf("error parsing: file doesn't contain a root object")
}
if err := hcl.DecodeObject(&rawConfig, list); err != nil {
return nil, err
}
@ -73,7 +79,7 @@ func (t *hclConfigurable) Config() (*Config, error) {
}
// Get Atlas configuration
if atlas := t.Object.Get("atlas", false); atlas != nil {
if atlas := list.Filter("atlas"); len(atlas.Items) > 0 {
var err error
config.Atlas, err = loadAtlasHcl(atlas)
if err != nil {
@ -82,7 +88,7 @@ func (t *hclConfigurable) Config() (*Config, error) {
}
// Build the modules
if modules := t.Object.Get("module", false); modules != nil {
if modules := list.Filter("module"); len(modules.Items) > 0 {
var err error
config.Modules, err = loadModulesHcl(modules)
if err != nil {
@ -91,7 +97,7 @@ func (t *hclConfigurable) Config() (*Config, error) {
}
// Build the provider configs
if providers := t.Object.Get("provider", false); providers != nil {
if providers := list.Filter("provider"); len(providers.Items) > 0 {
var err error
config.ProviderConfigs, err = loadProvidersHcl(providers)
if err != nil {
@ -100,7 +106,7 @@ func (t *hclConfigurable) Config() (*Config, error) {
}
// Build the resources
if resources := t.Object.Get("resource", false); resources != nil {
if resources := list.Filter("resource"); len(resources.Items) > 0 {
var err error
config.Resources, err = loadResourcesHcl(resources)
if err != nil {
@ -109,7 +115,7 @@ func (t *hclConfigurable) Config() (*Config, error) {
}
// Build the outputs
if outputs := t.Object.Get("output", false); outputs != nil {
if outputs := list.Filter("output"); len(outputs.Items) > 0 {
var err error
config.Outputs, err = loadOutputsHcl(outputs)
if err != nil {
@ -118,8 +124,13 @@ func (t *hclConfigurable) Config() (*Config, error) {
}
// Check for invalid keys
for _, elem := range t.Object.Elem(true) {
k := elem.Key
for _, item := range list.Items {
if len(item.Keys) == 0 {
// Not sure how this would happen, but let's avoid a panic
continue
}
k := item.Keys[0].Token.Value().(string)
if _, ok := validKeys[k]; ok {
continue
}
@ -133,8 +144,6 @@ func (t *hclConfigurable) Config() (*Config, error) {
// loadFileHcl is a fileLoaderFunc that knows how to read HCL
// files and turn them into hclConfigurables.
func loadFileHcl(root string) (configurable, []string, error) {
var obj *hclobj.Object = nil
// Read the HCL file and prepare for parsing
d, err := ioutil.ReadFile(root)
if err != nil {
@ -143,7 +152,7 @@ func loadFileHcl(root string) (configurable, []string, error) {
}
// Parse it
obj, err = hcl.Parse(string(d))
hclRoot, err := hcl.Parse(string(d))
if err != nil {
return nil, nil, fmt.Errorf(
"Error parsing %s: %s", root, err)
@ -151,8 +160,8 @@ func loadFileHcl(root string) (configurable, []string, error) {
// Start building the result
result := &hclConfigurable{
File: root,
Object: obj,
File: root,
Root: hclRoot,
}
// Dive in, find the imports. This is disabled for now since
@ -200,9 +209,16 @@ func loadFileHcl(root string) (configurable, []string, error) {
// Given a handle to a HCL object, this transforms it into the Atlas
// configuration.
func loadAtlasHcl(obj *hclobj.Object) (*AtlasConfig, error) {
func loadAtlasHcl(list *ast.ObjectList) (*AtlasConfig, error) {
if len(list.Items) > 1 {
return nil, fmt.Errorf("only one 'atlas' block allowed")
}
// Get our one item
item := list.Items[0]
var config AtlasConfig
if err := hcl.DecodeObject(&config, obj); err != nil {
if err := hcl.DecodeObject(&config, item.Val); err != nil {
return nil, fmt.Errorf(
"Error reading atlas config: %s",
err)
@ -217,18 +233,10 @@ func loadAtlasHcl(obj *hclobj.Object) (*AtlasConfig, error) {
// The resulting modules may not be unique, but each module
// represents exactly one module definition in the HCL configuration.
// We leave it up to another pass to merge them together.
func loadModulesHcl(os *hclobj.Object) ([]*Module, error) {
var allNames []*hclobj.Object
// See loadResourcesHcl for why this exists. Don't touch this.
for _, o1 := range os.Elem(false) {
// Iterate the inner to get the list of types
for _, o2 := range o1.Elem(true) {
// Iterate all of this type to get _all_ the types
for _, o3 := range o2.Elem(false) {
allNames = append(allNames, o3)
}
}
func loadModulesHcl(list *ast.ObjectList) ([]*Module, error) {
list = list.Children()
if len(list.Items) == 0 {
return nil, nil
}
// Where all the results will go
@ -236,11 +244,18 @@ func loadModulesHcl(os *hclobj.Object) ([]*Module, error) {
// Now go over all the types and their children in order to get
// all of the actual resources.
for _, obj := range allNames {
k := obj.Key
for _, item := range list.Items {
k := item.Keys[0].Token.Value().(string)
var listVal *ast.ObjectList
if ot, ok := item.Val.(*ast.ObjectType); ok {
listVal = ot.List
} else {
return nil, fmt.Errorf("module '%s': should be an object", k)
}
var config map[string]interface{}
if err := hcl.DecodeObject(&config, obj); err != nil {
if err := hcl.DecodeObject(&config, item.Val); err != nil {
return nil, fmt.Errorf(
"Error reading config for %s: %s",
k,
@ -260,8 +275,8 @@ func loadModulesHcl(os *hclobj.Object) ([]*Module, error) {
// If we have a count, then figure it out
var source string
if o := obj.Get("source", false); o != nil {
err = hcl.DecodeObject(&source, o)
if o := listVal.Filter("source"); len(o.Items) > 0 {
err = hcl.DecodeObject(&source, o.Items[0].Val)
if err != nil {
return nil, fmt.Errorf(
"Error parsing source for %s: %s",
@ -282,27 +297,19 @@ func loadModulesHcl(os *hclobj.Object) ([]*Module, error) {
// LoadOutputsHcl recurses into the given HCL object and turns
// it into a mapping of outputs.
func loadOutputsHcl(os *hclobj.Object) ([]*Output, error) {
objects := make(map[string]*hclobj.Object)
// Iterate over all the "output" blocks and get the keys along with
// their raw configuration objects. We'll parse those later.
for _, o1 := range os.Elem(false) {
for _, o2 := range o1.Elem(true) {
objects[o2.Key] = o2
}
}
if len(objects) == 0 {
func loadOutputsHcl(list *ast.ObjectList) ([]*Output, error) {
list = list.Children()
if len(list.Items) == 0 {
return nil, nil
}
// Go through each object and turn it into an actual result.
result := make([]*Output, 0, len(objects))
for n, o := range objects {
var config map[string]interface{}
result := make([]*Output, 0, len(list.Items))
for _, item := range list.Items {
n := item.Keys[0].Token.Value().(string)
if err := hcl.DecodeObject(&config, o); err != nil {
var config map[string]interface{}
if err := hcl.DecodeObject(&config, item.Val); err != nil {
return nil, err
}
@ -325,27 +332,26 @@ func loadOutputsHcl(os *hclobj.Object) ([]*Output, error) {
// LoadProvidersHcl recurses into the given HCL object and turns
// it into a mapping of provider configs.
func loadProvidersHcl(os *hclobj.Object) ([]*ProviderConfig, error) {
var objects []*hclobj.Object
// Iterate over all the "provider" blocks and get the keys along with
// their raw configuration objects. We'll parse those later.
for _, o1 := range os.Elem(false) {
for _, o2 := range o1.Elem(true) {
objects = append(objects, o2)
}
}
if len(objects) == 0 {
func loadProvidersHcl(list *ast.ObjectList) ([]*ProviderConfig, error) {
list = list.Children()
if len(list.Items) == 0 {
return nil, nil
}
// Go through each object and turn it into an actual result.
result := make([]*ProviderConfig, 0, len(objects))
for _, o := range objects {
var config map[string]interface{}
result := make([]*ProviderConfig, 0, len(list.Items))
for _, item := range list.Items {
n := item.Keys[0].Token.Value().(string)
if err := hcl.DecodeObject(&config, o); err != nil {
var listVal *ast.ObjectList
if ot, ok := item.Val.(*ast.ObjectType); ok {
listVal = ot.List
} else {
return nil, fmt.Errorf("module '%s': should be an object", n)
}
var config map[string]interface{}
if err := hcl.DecodeObject(&config, item.Val); err != nil {
return nil, err
}
@ -355,24 +361,24 @@ func loadProvidersHcl(os *hclobj.Object) ([]*ProviderConfig, error) {
if err != nil {
return nil, fmt.Errorf(
"Error reading config for provider config %s: %s",
o.Key,
n,
err)
}
// If we have an alias field, then add those in
var alias string
if a := o.Get("alias", false); a != nil {
err := hcl.DecodeObject(&alias, a)
if a := listVal.Filter("alias"); len(a.Items) > 0 {
err := hcl.DecodeObject(&alias, a.Items[0].Val)
if err != nil {
return nil, fmt.Errorf(
"Error reading alias for provider[%s]: %s",
o.Key,
n,
err)
}
}
result = append(result, &ProviderConfig{
Name: o.Key,
Name: n,
Alias: alias,
RawConfig: rawConfig,
})
@ -387,27 +393,10 @@ func loadProvidersHcl(os *hclobj.Object) ([]*ProviderConfig, error) {
// The resulting resources may not be unique, but each resource
// represents exactly one resource definition in the HCL configuration.
// We leave it up to another pass to merge them together.
func loadResourcesHcl(os *hclobj.Object) ([]*Resource, error) {
var allTypes []*hclobj.Object
// HCL object iteration is really nasty. Below is likely to make
// no sense to anyone approaching this code. Luckily, it is very heavily
// tested. If working on a bug fix or feature, we recommend writing a
// test first then doing whatever you want to the code below. If you
// break it, the tests will catch it. Likewise, if you change this,
// MAKE SURE you write a test for your change, because its fairly impossible
// to reason about this mess.
//
// Functionally, what the code does below is get the libucl.Objects
// for all the TYPES, such as "aws_security_group".
for _, o1 := range os.Elem(false) {
// Iterate the inner to get the list of types
for _, o2 := range o1.Elem(true) {
// Iterate all of this type to get _all_ the types
for _, o3 := range o2.Elem(false) {
allTypes = append(allTypes, o3)
}
}
func loadResourcesHcl(list *ast.ObjectList) ([]*Resource, error) {
list = list.Children()
if len(list.Items) == 0 {
return nil, nil
}
// Where all the results will go
@ -415,191 +404,178 @@ func loadResourcesHcl(os *hclobj.Object) ([]*Resource, error) {
// Now go over all the types and their children in order to get
// all of the actual resources.
for _, t := range allTypes {
for _, obj := range t.Elem(true) {
k := obj.Key
var config map[string]interface{}
if err := hcl.DecodeObject(&config, obj); err != nil {
return nil, fmt.Errorf(
"Error reading config for %s[%s]: %s",
t.Key,
k,
err)
}
// Remove the fields we handle specially
delete(config, "connection")
delete(config, "count")
delete(config, "depends_on")
delete(config, "provisioner")
delete(config, "provider")
delete(config, "lifecycle")
rawConfig, err := NewRawConfig(config)
if err != nil {
return nil, fmt.Errorf(
"Error reading config for %s[%s]: %s",
t.Key,
k,
err)
}
// If we have a count, then figure it out
var count string = "1"
if o := obj.Get("count", false); o != nil {
err = hcl.DecodeObject(&count, o)
if err != nil {
return nil, fmt.Errorf(
"Error parsing count for %s[%s]: %s",
t.Key,
k,
err)
}
}
countConfig, err := NewRawConfig(map[string]interface{}{
"count": count,
})
if err != nil {
return nil, err
}
countConfig.Key = "count"
// If we have depends fields, then add those in
var dependsOn []string
if o := obj.Get("depends_on", false); o != nil {
err := hcl.DecodeObject(&dependsOn, o)
if err != nil {
return nil, fmt.Errorf(
"Error reading depends_on for %s[%s]: %s",
t.Key,
k,
err)
}
}
// If we have connection info, then parse those out
var connInfo map[string]interface{}
if o := obj.Get("connection", false); o != nil {
err := hcl.DecodeObject(&connInfo, o)
if err != nil {
return nil, fmt.Errorf(
"Error reading connection info for %s[%s]: %s",
t.Key,
k,
err)
}
}
// If we have provisioners, then parse those out
var provisioners []*Provisioner
if os := obj.Get("provisioner", false); os != nil {
var err error
provisioners, err = loadProvisionersHcl(os, connInfo)
if err != nil {
return nil, fmt.Errorf(
"Error reading provisioners for %s[%s]: %s",
t.Key,
k,
err)
}
}
// If we have a provider, then parse it out
var provider string
if o := obj.Get("provider", false); o != nil {
err := hcl.DecodeObject(&provider, o)
if err != nil {
return nil, fmt.Errorf(
"Error reading provider for %s[%s]: %s",
t.Key,
k,
err)
}
}
// Check if the resource should be re-created before
// destroying the existing instance
var lifecycle ResourceLifecycle
if o := obj.Get("lifecycle", false); o != nil {
var raw map[string]interface{}
if err = hcl.DecodeObject(&raw, o); err != nil {
return nil, fmt.Errorf(
"Error parsing lifecycle for %s[%s]: %s",
t.Key,
k,
err)
}
if err := mapstructure.WeakDecode(raw, &lifecycle); err != nil {
return nil, fmt.Errorf(
"Error parsing lifecycle for %s[%s]: %s",
t.Key,
k,
err)
}
}
result = append(result, &Resource{
Name: k,
Type: t.Key,
RawCount: countConfig,
RawConfig: rawConfig,
Provisioners: provisioners,
Provider: provider,
DependsOn: dependsOn,
Lifecycle: lifecycle,
})
for _, item := range list.Items {
if len(item.Keys) != 2 {
// TODO: bad error message
return nil, fmt.Errorf("resource needs exactly 2 names")
}
t := item.Keys[0].Token.Value().(string)
k := item.Keys[1].Token.Value().(string)
var listVal *ast.ObjectList
if ot, ok := item.Val.(*ast.ObjectType); ok {
listVal = ot.List
} else {
return nil, fmt.Errorf("resources %s[%s]: should be an object", t, k)
}
var config map[string]interface{}
if err := hcl.DecodeObject(&config, item.Val); err != nil {
return nil, fmt.Errorf(
"Error reading config for %s[%s]: %s",
t,
k,
err)
}
// Remove the fields we handle specially
delete(config, "connection")
delete(config, "count")
delete(config, "depends_on")
delete(config, "provisioner")
delete(config, "provider")
delete(config, "lifecycle")
rawConfig, err := NewRawConfig(config)
if err != nil {
return nil, fmt.Errorf(
"Error reading config for %s[%s]: %s",
t,
k,
err)
}
// If we have a count, then figure it out
var count string = "1"
if o := listVal.Filter("count"); len(o.Items) > 0 {
err = hcl.DecodeObject(&count, o.Items[0].Val)
if err != nil {
return nil, fmt.Errorf(
"Error parsing count for %s[%s]: %s",
t,
k,
err)
}
}
countConfig, err := NewRawConfig(map[string]interface{}{
"count": count,
})
if err != nil {
return nil, err
}
countConfig.Key = "count"
// If we have depends fields, then add those in
var dependsOn []string
if o := listVal.Filter("depends_on"); len(o.Items) > 0 {
err := hcl.DecodeObject(&dependsOn, o.Items[0].Val)
if err != nil {
return nil, fmt.Errorf(
"Error reading depends_on for %s[%s]: %s",
t,
k,
err)
}
}
// If we have connection info, then parse those out
var connInfo map[string]interface{}
if o := listVal.Filter("connection"); len(o.Items) > 0 {
err := hcl.DecodeObject(&connInfo, o.Items[0].Val)
if err != nil {
return nil, fmt.Errorf(
"Error reading connection info for %s[%s]: %s",
t,
k,
err)
}
}
// If we have provisioners, then parse those out
var provisioners []*Provisioner
if os := listVal.Filter("provisioner"); len(os.Items) > 0 {
var err error
provisioners, err = loadProvisionersHcl(os, connInfo)
if err != nil {
return nil, fmt.Errorf(
"Error reading provisioners for %s[%s]: %s",
t,
k,
err)
}
}
// If we have a provider, then parse it out
var provider string
if o := listVal.Filter("provider"); len(o.Items) > 0 {
err := hcl.DecodeObject(&provider, o.Items[0].Val)
if err != nil {
return nil, fmt.Errorf(
"Error reading provider for %s[%s]: %s",
t,
k,
err)
}
}
// Check if the resource should be re-created before
// destroying the existing instance
var lifecycle ResourceLifecycle
if o := listVal.Filter("lifecycle"); len(o.Items) > 0 {
var raw map[string]interface{}
if err = hcl.DecodeObject(&raw, o.Items[0].Val); err != nil {
return nil, fmt.Errorf(
"Error parsing lifecycle for %s[%s]: %s",
t,
k,
err)
}
if err := mapstructure.WeakDecode(raw, &lifecycle); err != nil {
return nil, fmt.Errorf(
"Error parsing lifecycle for %s[%s]: %s",
t,
k,
err)
}
}
result = append(result, &Resource{
Name: k,
Type: t,
RawCount: countConfig,
RawConfig: rawConfig,
Provisioners: provisioners,
Provider: provider,
DependsOn: dependsOn,
Lifecycle: lifecycle,
})
}
return result, nil
}
func loadProvisionersHcl(os *hclobj.Object, connInfo map[string]interface{}) ([]*Provisioner, error) {
pos := make([]*hclobj.Object, 0, int(os.Len()))
// Accumulate all the actual provisioner configuration objects. We
// have to iterate twice here:
//
// 1. The first iteration is of the list of `provisioner` blocks.
// 2. The second iteration is of the dictionary within the
// provisioner which will have only one element which is the
// type of provisioner to use along with tis config.
//
// In JSON it looks kind of like this:
//
// [
// {
// "shell": {
// ...
// }
// }
// ]
//
for _, o1 := range os.Elem(false) {
for _, o2 := range o1.Elem(true) {
switch o1.Type {
case hclobj.ValueTypeList:
for _, o3 := range o2.Elem(true) {
pos = append(pos, o3)
}
case hclobj.ValueTypeObject:
pos = append(pos, o2)
}
}
}
// Short-circuit if there are no items
if len(pos) == 0 {
func loadProvisionersHcl(list *ast.ObjectList, connInfo map[string]interface{}) ([]*Provisioner, error) {
list = list.Children()
if len(list.Items) == 0 {
return nil, nil
}
result := make([]*Provisioner, 0, len(pos))
for _, po := range pos {
// Go through each object and turn it into an actual result.
result := make([]*Provisioner, 0, len(list.Items))
for _, item := range list.Items {
n := item.Keys[0].Token.Value().(string)
var listVal *ast.ObjectList
if ot, ok := item.Val.(*ast.ObjectType); ok {
listVal = ot.List
} else {
return nil, fmt.Errorf("provisioner '%s': should be an object", n)
}
var config map[string]interface{}
if err := hcl.DecodeObject(&config, po); err != nil {
if err := hcl.DecodeObject(&config, item.Val); err != nil {
return nil, err
}
@ -614,8 +590,8 @@ func loadProvisionersHcl(os *hclobj.Object, connInfo map[string]interface{}) ([]
// Check if we have a provisioner-level connection
// block that overrides the resource-level
var subConnInfo map[string]interface{}
if o := po.Get("connection", false); o != nil {
err := hcl.DecodeObject(&subConnInfo, o)
if o := listVal.Filter("connection"); len(o.Items) > 0 {
err := hcl.DecodeObject(&subConnInfo, o.Items[0].Val)
if err != nil {
return nil, err
}
@ -640,7 +616,7 @@ func loadProvisionersHcl(os *hclobj.Object, connInfo map[string]interface{}) ([]
}
result = append(result, &Provisioner{
Type: po.Key,
Type: n,
RawConfig: rawConfig,
ConnInfo: connRaw,
})


@ -1,4 +1,4 @@
// generated by stringer -type=getSource resource_data_get_source.go; DO NOT EDIT
// Code generated by "stringer -type=getSource resource_data_get_source.go"; DO NOT EDIT
package schema


@ -1,4 +1,4 @@
// generated by stringer -type=ValueType valuetype.go; DO NOT EDIT
// Code generated by "stringer -type=ValueType valuetype.go"; DO NOT EDIT
package schema


@ -1,5 +1,8 @@
#!/bin/bash
# Switch to the stable-website branch
git checkout stable-website
# Set the tmpdir
if [ -z "$TMPDIR" ]; then
TMPDIR="/tmp"


@ -2851,6 +2851,55 @@ func TestContext2Apply_outputInvalid(t *testing.T) {
}
}
func TestContext2Apply_outputAdd(t *testing.T) {
m1 := testModule(t, "apply-output-add-before")
p1 := testProvider("aws")
p1.ApplyFn = testApplyFn
p1.DiffFn = testDiffFn
ctx1 := testContext2(t, &ContextOpts{
Module: m1,
Providers: map[string]ResourceProviderFactory{
"aws": testProviderFuncFixed(p1),
},
})
if _, err := ctx1.Plan(); err != nil {
t.Fatalf("err: %s", err)
}
state1, err := ctx1.Apply()
if err != nil {
t.Fatalf("err: %s", err)
}
m2 := testModule(t, "apply-output-add-after")
p2 := testProvider("aws")
p2.ApplyFn = testApplyFn
p2.DiffFn = testDiffFn
ctx2 := testContext2(t, &ContextOpts{
Module: m2,
Providers: map[string]ResourceProviderFactory{
"aws": testProviderFuncFixed(p2),
},
State: state1,
})
if _, err := ctx2.Plan(); err != nil {
t.Fatalf("err: %s", err)
}
state2, err := ctx2.Apply()
if err != nil {
t.Fatalf("err: %s", err)
}
actual := strings.TrimSpace(state2.String())
expected := strings.TrimSpace(testTerraformApplyOutputAddStr)
if actual != expected {
t.Fatalf("bad: \n%s", actual)
}
}
func TestContext2Apply_outputList(t *testing.T) {
m := testModule(t, "apply-output-list")
p := testProvider("aws")


@ -1,4 +1,4 @@
// generated by stringer -type=GraphNodeConfigType graph_config_node_type.go; DO NOT EDIT
// Code generated by "stringer -type=GraphNodeConfigType graph_config_node_type.go"; DO NOT EDIT
package terraform


@ -1,4 +1,4 @@
// generated by stringer -type=InstanceType instancetype.go; DO NOT EDIT
// Code generated by "stringer -type=InstanceType instancetype.go"; DO NOT EDIT
package terraform


@ -575,6 +575,22 @@ Outputs:
foo_num = 2
`
const testTerraformApplyOutputAddStr = `
aws_instance.test.0:
ID = foo
foo = foo0
type = aws_instance
aws_instance.test.1:
ID = foo
foo = foo1
type = aws_instance
Outputs:
firstOutput = foo0
secondOutput = foo1
`
const testTerraformApplyOutputListStr = `
aws_instance.bar.0:
ID = foo


@ -0,0 +1,6 @@
provider "aws" {}
resource "aws_instance" "test" {
foo = "${format("foo%d", count.index)}"
count = 2
}


@ -0,0 +1,10 @@
{
"output": {
"firstOutput": {
"value": "${aws_instance.test.0.foo}"
},
"secondOutput": {
"value": "${aws_instance.test.1.foo}"
}
}
}


@ -0,0 +1,6 @@
provider "aws" {}
resource "aws_instance" "test" {
foo = "${format("foo%d", count.index)}"
count = 2
}


@ -0,0 +1,7 @@
{
"output": {
"firstOutput": {
"value": "${aws_instance.test.0.foo}"
}
}
}


@ -1,4 +1,4 @@
// generated by stringer -type=walkOperation graph_walk_operation.go; DO NOT EDIT
// Code generated by "stringer -type=walkOperation graph_walk_operation.go"; DO NOT EDIT
package terraform


@ -95,6 +95,9 @@ The supported built-in functions are:
CIDR notation (like ``10.0.0.0/8``) and extends its prefix to include an
additional subnet number. For example,
``cidrsubnet("10.0.0.0/8", 8, 2)`` returns ``10.2.0.0/16``.
* `coalesce(string1, string2, ...)` - Returns the first non-empty value from
the given arguments. At least two arguments must be provided (see the sketch
after this list).
* `compact(list)` - Removes empty string elements from a list. This can be
useful in some cases, for example when passing joined lists as module

View File

@ -59,5 +59,4 @@ The following arguments are supported in the `provider` block:
* `kinesis_endpoint` - (Optional) Use this to override the default endpoint URL constructed from the `region`. It's typically used to connect to kinesalite.
In addition to the above parameters, the `AWS_SESSION_TOKEN` environmental
variable can be set to set an MFA token.
* `token` - (Optional) Use this to set an MFA token. It can also be sourced from the `AWS_SECURITY_TOKEN` environment variable.


@ -36,7 +36,7 @@ The following arguments are supported:
* `allocated_storage` - (Required) The allocated storage in gigabytes.
* `engine` - (Required) The database engine to use.
* `engine_version` - (Required) The engine version to use.
* `engine_version` - (Optional) The engine version to use.
* `identifier` - (Required) The name of the RDS instance
* `instance_class` - (Required) The instance type of the RDS instance.
* `storage_type` - (Optional) One of "standard" (magnetic), "gp2" (general
@ -81,6 +81,7 @@ database, and to use this value as the source database. This correlates to the
[Working with PostgreSQL and MySQL Read Replicas](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html) for
more information on using Replication.
* `snapshot_identifier` - (Optional) Specifies the snapshot ID to create this database from. This correlates to the snapshot ID you'd find in the RDS console, e.g. rds:production-2015-06-26-06-05.
* `license_model` - (Optional, but required for some DB engines, i.e. Oracle SE1) License model information for this DB instance.
~> **NOTE:** Removing the `replicate_source_db` attribute from an existing RDS
Replicate database managed by Terraform will promote the database to a fully


@ -73,12 +73,22 @@ names to associate with this cache cluster
Amazon Resource Name (ARN) of a Redis RDB snapshot file stored in Amazon S3.
Example: `arn:aws:s3:::my_bucket/snapshot1.rdb`
* `snapshot_window` - (Optional) The daily time range (in UTC) during which ElastiCache will
begin taking a daily snapshot of your cache cluster. Can only be used for the Redis engine. Example: 05:00-09:00
* `snapshot_retention_limit` - (Optional) The number of days for which ElastiCache will
retain automatic cache cluster snapshots before deleting them. For example, if you set
SnapshotRetentionLimit to 5, then a snapshot that was taken today will be retained for 5 days
before being deleted. If the value of SnapshotRetentionLimit is set to zero (0), backups are turned off.
Can only be used for the Redis engine.
* `notification_topic_arn` - (Optional) An Amazon Resource Name (ARN) of an
SNS topic to send ElastiCache notifications to. Example:
`arn:aws:sns:us-east-1:012345678999:my_sns_topic`
* `tags` - (Optional) A mapping of tags to assign to the resource.
~> **NOTE:** Snapshotting functionality is not compatible with t2 instance types.
## Attributes Reference


@ -113,5 +113,8 @@ The following attributes are exported:
* `instances` - The list of instances in the ELB
* `source_security_group` - The name of the security group that you can use as
part of your inbound rules for your load balancer's back-end application
instances.
instances. Use this for Classic or Default VPC only.
* `source_security_group_id` - The ID of the security group that you can use as
part of your inbound rules for your load balancer's back-end application
instances. Only available on ELBs launched in a VPC (see the sketch below).
* `zone_id` - The canonical hosted zone ID of the ELB (to be used in a Route 53 Alias record)
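As a sketch (assuming an `aws_security_group.app` and an `aws_elb.web` defined elsewhere), the exported ID can feed an ingress rule that only admits traffic originating from the load balancer:

```
resource "aws_security_group_rule" "from_elb" {
  type                     = "ingress"
  from_port                = 8080
  to_port                  = 8080
  protocol                 = "tcp"
  # the instances' security group
  security_group_id        = "${aws_security_group.app.id}"
  # only allow traffic coming from the ELB
  source_security_group_id = "${aws_elb.web.source_security_group_id}"
}
```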


@ -1,6 +1,6 @@
---
layout: "aws"
page_title: "AWS: aws_saml_provider"
page_title: "AWS: aws_iam_saml_provider"
sidebar_current: "docs-aws-resource-iam-saml-provider"
description: |-
Provides an IAM SAML provider.
@ -13,7 +13,7 @@ Provides an IAM SAML provider.
## Example Usage
```
resource "aws_saml_provider" "default" {
resource "aws_iam_saml_provider" "default" {
name = "myprovider"
saml_metadata_document = "${file("saml-metadata.xml")}"
}


@ -0,0 +1,72 @@
---
layout: "aws"
page_title: "AWS: aws_kinesis_firehose_delivery_stream"
sidebar_current: "docs-aws-resource-kinesis-firehose-delivery-stream"
description: |-
Provides an AWS Kinesis Firehose Delivery Stream
---
# aws\_kinesis\_firehose\_delivery\_stream
Provides a Kinesis Firehose Delivery Stream resource. Amazon Kinesis Firehose is a fully managed, elastic service to easily deliver real-time data streams to destinations such as Amazon S3 and Amazon Redshift.
For more details, see the [Amazon Kinesis Firehose Documentation][1].
## Example Usage
```
resource "aws_s3_bucket" "bucket" {
bucket = "tf-test-bucket"
acl = "private"
}
esource "aws_iam_role" "firehose_role" {
name = "firehose_test_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "firehose.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_kinesis_firehose_delivery_stream" "test_stream" {
name = "terraform-kinesis-firehose-test-stream"
destination = "s3"
role_arn = "${aws_iam_role.firehose_role.arn}"
s3_bucket_arn = "${aws_s3_bucket.bucket.arn}"
}
```
~> **NOTE:** Kinesis Firehose is currently only supported in us-east-1, us-west-2 and eu-west-1. This implementation of Kinesis Firehose only supports the s3 destination type as Terraform doesn't support Redshift yet.
## Argument Reference
The following arguments are supported:
* `name` - (Required) A name to identify the stream. This is unique to the
AWS account and region the Stream is created in.
* `destination` - (Required) The destination where the data is delivered. The only options are `s3` & `redshift`.
* `role_arn` - (Required) The ARN of the IAM role that grants Firehose access to the destination.
* `s3_bucket_arn` - (Required) The ARN of the S3 bucket.
* `s3_prefix` - (Optional) The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered S3 files. You can specify an extra prefix to be added in front of the time format prefix. Note that if the prefix ends with a slash, it appears as a folder in the S3 bucket.
* `s3_buffer_size` - (Optional) Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
We recommend setting SizeInMBs to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, set SizeInMBs to 10 MB or higher.
* `s3_buffer_interval` - (Optional) Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300.
* `s3_data_compression` - (Optional) The compression format. If no value is specified, the default is NOCOMPRESSION. Other supported values are GZIP, ZIP & Snappy.
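A sketch of these optional S3 tuning arguments, reusing the role and bucket from the example above (the values shown are illustrative, not recommendations):

```
resource "aws_kinesis_firehose_delivery_stream" "tuned_stream" {
  name                = "terraform-kinesis-firehose-tuned-stream"
  destination         = "s3"
  role_arn            = "${aws_iam_role.firehose_role.arn}"
  s3_bucket_arn       = "${aws_s3_bucket.bucket.arn}"
  # extra prefix placed in front of the YYYY/MM/DD/HH time format prefix
  s3_prefix           = "firehose/"
  # buffer up to 10 MB or 400 seconds, whichever is reached first
  s3_buffer_size      = 10
  s3_buffer_interval  = 400
  s3_data_compression = "GZIP"
}
```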
## Attributes Reference
* `arn` - The Amazon Resource Name (ARN) specifying the Stream
[1]: http://aws.amazon.com/documentation/firehose/


@ -44,7 +44,10 @@ resource "aws_lambda_function" "test_lambda" {
## Argument Reference
* `filename` - (Required) A [zip file][2] containing your lambda function source code.
* `filename` - (Optional) A [zip file][2] containing your lambda function source code. If defined, the `s3_*` options cannot be used.
* `s3_bucket` - (Optional) The S3 bucket location containing your lambda function source code. Conflicts with `filename`.
* `s3_key` - (Optional) The S3 key containing your lambda function source code. Conflicts with `filename`.
* `s3_object_version` - (Optional) The object version of your lambda function source code. Conflicts with `filename`. See the sketch after this list for the S3-sourced form.
* `function_name` - (Required) A unique name for your Lambda Function.
* `handler` - (Required) The function [entrypoint][3] in your code.
* `role` - (Required) IAM role attached to the Lambda Function. This governs both who / what can invoke your Lambda Function, as well as what resources our Lambda Function has access to. See [Lambda Permission Model][4] for more details.
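A minimal sketch of the S3-sourced form (the bucket, key, and role names are hypothetical):

```
resource "aws_lambda_function" "s3_sourced" {
  function_name = "lambda_from_s3"
  # s3_bucket/s3_key take the place of filename; the two forms conflict
  s3_bucket     = "my-lambda-artifacts"
  s3_key        = "builds/lambda.zip"
  handler       = "exports.handler"
  role          = "${aws_iam_role.lambda_role.arn}"
}
```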


@ -26,11 +26,13 @@ Launch Configurations cannot be updated after creation with the Amazon
Web Service API. In order to update a Launch Configuration, Terraform will
destroy the existing resource and create a replacement. In order to effectively
use a Launch Configuration resource with an [AutoScaling Group resource][1],
it's recommend to omit the Launch Configuration `name` attribute, and
specify `create_before_destroy` in a [lifecycle][2] block, as shown:
it's recommended to specify `create_before_destroy` in a [lifecycle][2] block.
Either omit the Launch Configuration `name` attribute, or specify a partial name
with `name_prefix`. Example:
```
resource "aws_launch_configuration" "as_conf" {
name_prefix = "terraform-lc-example-"
image_id = "ami-1234"
instance_type = "m1.small"
@ -87,7 +89,9 @@ resource "aws_autoscaling_group" "bar" {
The following arguments are supported:
* `name` - (Optional) The name of the launch configuration. If you leave
this blank, Terraform will auto-generate it.
this blank, Terraform will auto-generate a unique name.
* `name_prefix` - (Optional) Creates a unique name beginning with the specified
prefix. Conflicts with `name`.
* `image_id` - (Required) The EC2 image ID to launch.
* `instance_type` - (Required) The size of instance to launch.
* `iam_instance_profile` - (Optional) The IAM instance profile to associate


@ -24,6 +24,8 @@ resource "aws_rds_cluster" "default" {
database_name = "mydb"
master_username = "foo"
master_password = "bar"
backup_retention_period = 5
preferred_backup_window = "07:00-09:00"
}
```
@ -52,12 +54,16 @@ string.
instances in the DB cluster can be created in
* `backup_retention_period` - (Optional) The days to retain backups for. Default
1
* `preferred_backup_window` - (Optional) The daily time range during which automated backups are created, if automated backups are enabled via the BackupRetentionPeriod parameter.
Default: a 30-minute window selected at random from an 8-hour block of time per region, e.g. 04:00-09:00
* `preferred_maintenance_window` - (Optional) The weekly time range during which system maintenance can occur, in UTC, e.g. wed:04:00-wed:04:30
* `port` - (Optional) The port on which the DB accepts connections
* `vpc_security_group_ids` - (Optional) List of VPC security groups to associate
with the Cluster
* `apply_immediately` - (Optional) Specifies whether any cluster modifications
are applied immediately, or during the next maintenance window. Default is
`false`. See [Amazon RDS Documentation for more information.](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html)
* `db_subnet_group_name` - (Optional) A DB subnet group to associate with this DB instance.
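A sketch combining the new backup and maintenance settings (the values mirror the acceptance test earlier in this changeset; the backup and maintenance windows must not overlap):

```
resource "aws_rds_cluster" "tuned" {
  cluster_identifier           = "aurora-cluster-tuned"
  availability_zones           = ["us-west-2a", "us-west-2b", "us-west-2c"]
  master_username              = "foo"
  master_password              = "mustbeeightcharacters"
  backup_retention_period      = 10
  preferred_backup_window      = "03:00-09:00"
  preferred_maintenance_window = "wed:01:00-wed:01:30"
  # apply modifications right away instead of waiting for the window
  apply_immediately            = true
}
```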
## Attributes Reference
@ -70,7 +76,8 @@ The following attributes are exported:
* `allocated_storage` - The amount of allocated storage
* `availability_zones` - The availability zone of the instance
* `backup_retention_period` - The backup retention period
* `backup_window` - The backup window
* `preferred_backup_window` - The backup window
* `preferred_maintenance_window` - The maintenance window
* `endpoint` - The primary, writeable connection endpoint
* `engine` - The database engine
* `engine_version` - The database engine version
@ -80,6 +87,7 @@ The following attributes are exported:
* `status` - The RDS instance status
* `username` - The master username for the database
* `storage_encrypted` - Specifies whether the DB instance is encrypted
[1]: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Replication.html

View File

@ -27,7 +27,7 @@ For more information on Amazon Aurora, see [Aurora on Amazon RDS][2] in the Amaz
resource "aws_rds_cluster_instance" "cluster_instances" {
count = 2
identifier = "aurora-cluster-demo"
cluster_identifer = "${aws_rds_cluster.default.id}"
cluster_identifier = "${aws_rds_cluster.default.id}"
instance_class = "db.r3.large"
}
@ -64,6 +64,10 @@ and memory, see [Scaling Aurora DB Instances][4]. Aurora currently
Default `false`. See the documentation on [Creating DB Instances][6] for more
details on controlling this property.
* `db_subnet_group_name` - (Optional) A DB subnet group to associate with this DB instance.
~> **NOTE:** `db_subnet_group_name` is a required field when you are trying to create a private instance (`publicly_accessible = false`); see the sketch below.
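A hedged sketch of a private instance, assuming an `aws_db_subnet_group` resource named `example` defined elsewhere:

```
resource "aws_rds_cluster_instance" "private" {
    identifier           = "aurora-private-demo"                  # assumed identifier
    cluster_identifier   = "${aws_rds_cluster.default.id}"
    instance_class       = "db.r3.large"
    publicly_accessible  = false
    db_subnet_group_name = "${aws_db_subnet_group.example.name}"  # assumed subnet group
}
```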
## Attributes Reference
The following attributes are exported:

View File

@ -17,6 +17,9 @@ Use the navigation to the left to read about the available resources.
## Example Usage
```
# Set the variable value in a *.tfvars file or with the -var="do_token=..." CLI option
variable "do_token" {}
# Configure the DigitalOcean Provider
provider "digitalocean" {
token = "${var.do_token}"

View File

@ -35,6 +35,9 @@ The following arguments are supported:
* `pool` - (Required) The name of the pool from which to obtain the floating
IP. Changing this creates a new floating IP.
* `port_id` - (Optional) The ID of an existing port with at least one IP address to associate with
this floating IP. A sketch follows this list.
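A hedged sketch associating a floating IP with an existing port (the region, pool name, and port resource are assumed placeholders):

```
resource "openstack_networking_floatingip_v2" "fip_1" {
    region  = "RegionOne"                                     # assumed region
    pool    = "public"                                        # assumed pool name
    port_id = "${openstack_networking_port_v2.port_1.id}"     # assumed port resource
}
```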
## Attributes Reference
The following attributes are exported:
@ -42,3 +45,4 @@ The following attributes are exported:
* `region` - See Argument Reference above.
* `pool` - See Argument Reference above.
* `address` - The actual floating IP address itself.
* `port_id` - ID of associated port.

View File

@ -42,7 +42,10 @@ resource "openstack_networking_port_v2" "port_1" {
admin_state_up = "true"
security_groups = ["${openstack_compute_secgroup_v2.secgroup_1.id}"]
depends_on = ["openstack_networking_subnet_v2.subnet_1"]
fixed_ips {
"subnet_id" = "008ba151-0b8c-4a67-98b5-0d2b87666062"
"ip_address" = "172.24.4.2"
}
}
resource "openstack_compute_instance_v2" "instance_1" {

View File

@ -60,6 +60,17 @@ The following arguments are supported:
* `device_id` - (Optional) The ID of the device attached to the port. Changing this
creates a new port.
* `fixed_ips` - (Optional) An array of desired IPs for this port.
The `fixed_ips` block supports:
* `subnet_id` - (Required) The subnet in which to allocate an IP address for
this port.
* `ip_address` - (Required) The IP address desired in the subnet for this
port. A sketch follows this list.
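A hedged sketch of a port requesting a specific fixed IP, following the quoted-key style shown in the instance example above (the network and subnet references are assumed placeholders):

```
resource "openstack_networking_port_v2" "port_1" {
    name           = "port_1"
    network_id     = "${openstack_networking_network_v2.network_1.id}"  # assumed network
    admin_state_up = "true"

    fixed_ips {
        "subnet_id"  = "${openstack_networking_subnet_v2.subnet_1.id}"  # assumed subnet
        "ip_address" = "172.24.4.2"                                     # address from the example above
    }
}
```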
## Attributes Reference
The following attributes are exported:

View File

@ -1,27 +1,28 @@
---
layout: "vsphere"
page_title: "Provider: vSphere"
page_title: "Provider: VMware vSphere"
sidebar_current: "docs-vsphere-index"
description: |-
The vSphere provider is used to interact with the resources supported by
vSphere. The provider needs to be configured with the proper credentials before
it can be used.
The VMware vSphere provider is used to interact with the resources supported by
VMware vSphere. The provider needs to be configured with the proper credentials
before it can be used.
---
# vSphere Provider
# VMware vSphere Provider
The vSphere provider is used to interact with the resources supported by vSphere.
The VMware vSphere provider is used to interact with the resources supported by
VMware vSphere.
The provider needs to be configured with the proper credentials before it can be used.
Use the navigation to the left to read about the available resources.
~> **NOTE:** The vSphere Provider currently represents _initial support_ and
therefore may undergo significant changes as the community improves it.
~> **NOTE:** The VMware vSphere Provider currently represents _initial support_
and therefore may undergo significant changes as the community improves it.
## Example Usage
```
# Configure the vSphere Provider
# Configure the VMware vSphere Provider
provider "vsphere" {
user = "${var.vsphere_user}"
password = "${var.vsphere_password}"
@ -47,7 +48,7 @@ resource "vsphere_virtual_machine" "web" {
## Argument Reference
The following arguments are used to configure the vSphere Provider:
The following arguments are used to configure the VMware vSphere Provider:
* `user` - (Required) This is the username for vSphere API operations. Can also
be specified with the `VSPHERE_USER` environment variable.
@ -59,20 +60,24 @@ The following arguments are used to configure the vSphere Provider:
## Acceptance Tests
The vSphere provider's acceptance tests require the above provider
The VMware vSphere provider's acceptance tests require the above provider
configuration fields to be set using the documented environment variables.
In addition, the following environment variables are used in tests, and must be set to valid values for your vSphere environment:
In addition, the following environment variables are used in tests, and must be set to valid values for your VMware vSphere environment:
* VSPHERE\_CLUSTER
* VSPHERE\_DATACENTER
* VSPHERE\_DATASTORE
* VSPHERE\_NETWORK\_GATEWAY
* VSPHERE\_NETWORK\_IP\_ADDRESS
* VSPHERE\_NETWORK\_LABEL
* VSPHERE\_NETWORK\_LABEL\_DHCP
* VSPHERE\_TEMPLATE
* VSPHERE\_VM\_PASSWORD
The following environment variables depend on your vSphere environment:
* VSPHERE\_DATACENTER
* VSPHERE\_CLUSTER
* VSPHERE\_RESOURCE\_POOL
* VSPHERE\_DATASTORE
These are used to set and verify attributes on the `vsphere_virtual_machine`
resource in tests.

View File

@ -1,14 +1,14 @@
---
layout: "vsphere"
page_title: "vSphere: vsphere_virtual_machine"
page_title: "VMware vSphere: vsphere_virtual_machine"
sidebar_current: "docs-vsphere-resource-virtual-machine"
description: |-
Provides a vSphere virtual machine resource. This can be used to create, modify, and delete virtual machines.
Provides a VMware vSphere virtual machine resource. This can be used to create, modify, and delete virtual machines.
---
# vsphere\_virtual\_machine
Provides a vSphere virtual machine resource. This can be used to create,
Provides a VMware vSphere virtual machine resource. This can be used to create,
modify, and delete virtual machines.
## Example Usage
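A hedged sketch of a minimal machine (every argument and value below is an assumed placeholder):

```
# Sketch only: all values below are assumed placeholders.
resource "vsphere_virtual_machine" "web" {
    name   = "terraform-web"
    vcpu   = 2
    memory = 4096

    network_interface {
        label = "VM Network"     # assumed port group label
    }

    disk {
        size     = 20            # assumed size in GB
        template = "centos-7"    # assumed template name
    }
}
```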

View File

@ -186,7 +186,7 @@ And access them via `lookup()`:
```
output "ami" {
value = "${lookup(var.amis, var.region)}
value = "${lookup(var.amis, var.region)}"
}
```

View File

@ -324,6 +324,17 @@
</ul>
</li>
<li<%= sidebar_current(/^docs-aws-resource-kinesis-firehose/) %>>
<a href="#">Kinesis Firehose Resources</a>
<ul class="nav nav-visible">
<li<%= sidebar_current("docs-aws-resource-kinesis-firehose-delivery-stream") %>>
<a href="/docs/providers/aws/r/kinesis_firehose_delivery_stream.html">aws_kinesis_firehose_delivery_stream</a>
</li>
</ul>
</li>
<li<%= sidebar_current(/^docs-aws-resource-lambda/) %>>
<a href="#">Lambda Resources</a>

View File

@ -194,7 +194,7 @@
</li>
<li<%= sidebar_current("docs-providers-vsphere") %>>
<a href="/docs/providers/vsphere/index.html">vSphere</a>
<a href="/docs/providers/vsphere/index.html">VMware vSphere</a>
</li>
</ul>
</li>

View File

@ -7,7 +7,7 @@
</li>
<li<%= sidebar_current("docs-vsphere-index") %>>
<a href="/docs/providers/vsphere/index.html">vSphere Provider</a>
<a href="/docs/providers/vsphere/index.html">VMware vSphere Provider</a>
</li>
<li<%= sidebar_current(/^docs-vsphere-resource/) %>>