merging upstream master

chrislovecnm 2015-11-18 16:09:05 -07:00
commit 98167cea79
106 changed files with 3003 additions and 2189 deletions


@@ -2,12 +2,13 @@
FEATURES:

* **New provider: `tls`** - A utility provider for generating TLS keys/self-signed certificates for development and testing [GH-2778]
* **New provider: `dyn`** - Manage DNS records on Dyn
* **New resource: `aws_cloudformation_stack`** [GH-2636]
* **New resource: `aws_cloudtrail`** [GH-3094]
* **New resource: `aws_route`** [GH-3548]
* **New resource: `aws_codecommit_repository`** [GH-3274]
* **New resource: `aws_kinesis_firehose_delivery_stream`** [GH-3833]
* **New resource: `google_sql_database` and `google_sql_database_instance`** [GH-3617]
* **New resource: `google_compute_global_address`** [GH-3701]
* **New resource: `google_compute_https_health_check`** [GH-3883]
@@ -27,6 +28,7 @@ IMPROVEMENTS:
* provider/google: Accurate Terraform Version [GH-3554]
* provider/google: Simplified auth (DefaultClient support) [GH-3553]
* provider/google: automatic_restart, preemptible, on_host_maintenance options [GH-3643]
* provider/google: read credentials as contents instead of path [GH-3901]
* null_resource: enhance and document [GH-3244, GH-3659]
* provider/aws: Add CORS settings to S3 bucket [GH-3387]
* provider/aws: Add notification topic ARN for ElastiCache clusters [GH-3674]
@@ -34,6 +36,7 @@ IMPROVEMENTS:
* provider/aws: Add a computed ARN for S3 Buckets [GH-3685]
* provider/aws: Add S3 support for Lambda Function resource [GH-3794]
* provider/aws: Add `name_prefix` option to launch configurations [GH-3802]
* provider/aws: Add support for group name and path changes with IAM group update function [GH-3237]
* provider/aws: Provide `source_security_group_id` for ELBs inside a VPC [GH-3780]
* provider/aws: Add snapshot window and retention limits for ElastiCache (Redis) [GH-3707]
* provider/aws: Add username updates for `aws_iam_user` [GH-3227]
@@ -43,7 +46,8 @@ IMPROVEMENTS:
* provider/aws: `engine_version` is now optional for DB Instance [GH-3744]
* provider/aws: Add configuration to enable copying RDS tags to final snapshot [GH-3529]
* provider/aws: RDS Cluster additions (`backup_retention_period`, `preferred_backup_window`, `preferred_maintenance_window`) [GH-3757]
* provider/aws: Document and validate ELB ssl_cert and protocol requirements [GH-3887]
* provider/azure: Read publish_settings as contents instead of path [GH-3899]
* provider/openstack: Use IPv4 as the default IP version for subnets [GH-3091]
* provider/aws: Apply security group after restoring db_instance from snapshot [GH-3513]
* provider/aws: Making the AutoScalingGroup name optional [GH-3710]
@@ -51,12 +55,15 @@ IMPROVEMENTS:
* provider/digitalocean: Make user_data force a new droplet [GH-3740]
* provider/vsphere: Do not add network interfaces by default [GH-3652]
* provider/openstack: Configure Fixed IPs through ports [GH-3772]
* provider/openstack: Specify a port ID on a Router Interface [GH-3903]
* provider/openstack: Made LBaaS Virtual IP computed [GH-3927]

BUG FIXES:

* `terraform remote config`: update `--help` output [GH-3632]
* core: modules on Git branches now update properly [GH-1568]
* core: Fix issue preventing input prompts for unset variables during plan [GH-3843]
* core: Orphan resources can now be targets [GH-3912]
* provider/google: Timeout when deleting large instance_group_manager [GH-3591]
* provider/aws: Fix issue with order of Termination Policies in AutoScaling Groups.
  This will introduce plans on upgrade to this version, in order to correct the ordering [GH-2890]
@@ -65,12 +72,18 @@ BUG FIXES:
* provider/aws: Ignore missing association on route table destroy [GH-3615]
* provider/aws: Fix policy encoding issue with SNS Topics [GH-3700]
* provider/aws: Correctly export ARN in `aws_iam_saml_provider` [GH-3827]
* provider/aws: Fix crash in Route53 Record if Zone not found [GH-3945]
* provider/aws: Fix typo in error checking for IAM Policy Attachments [GH-3970]
* provider/aws: Tolerate ElastiCache clusters being deleted outside Terraform [GH-3767]
* provider/aws: Downcase Route 53 record names in statefile to match API output [GH-3574]
* provider/aws: Fix issue that could occur if no ECS Cluster was found for a given name [GH-3829]
* provider/aws: Fix issue with SNS topic policy if omitted [GH-3777]
* provider/aws: Support scratch volumes in `aws_ecs_task_definition` [GH-3810]
* provider/aws: Treat `aws_ecs_service` w/ Status==INACTIVE as deleted [GH-3828]
* provider/aws: Expand ~ to homedir in `aws_s3_bucket_object.source` [GH-3910]
* provider/aws: Fix issue with updating the `aws_ecs_task_definition` where `aws_ecs_service` didn't wait for a new computed ARN [GH-3924]
* provider/aws: Prevent crashing when deleting `aws_ecs_service` that is already gone [GH-3914]
* provider/aws: Allow spaces in `aws_db_subnet_group.name` (undocumented in the API) [GH-3955]
* provider/azure: various bugfixes [GH-3695]
* provider/digitalocean: fix issue preventing SSH fingerprints from working [GH-3633]
* provider/digitalocean: Fix potential DigitalOcean Droplet 404 on refresh of state [GH-3768]
@@ -83,6 +96,9 @@ BUG FIXES:
* provider/openstack: Better handling of network resource state changes [GH-3712]
* provider/openstack: Fix crashing when no security group is specified [GH-3801]
* provider/packet: Fix issue that could cause errors when provisioning many devices at once [GH-3847]
* provider/packet: Fix connection information for devices, allowing provisioners to run [GH-3948]
* provider/openstack: Fix issue preventing security group rules from being removed [GH-3796]
* provider/template: template_file: source contents instead of path [GH-3909]

## 0.6.6 (October 23, 2015)


@@ -15,6 +15,12 @@ dev: generate
quickdev: generate
	@TF_QUICKDEV=1 TF_DEV=1 sh -c "'$(CURDIR)/scripts/build.sh'"

# Shorthand for quickly building the core of Terraform. Note that some
# changes will require a rebuild of everything, in which case the dev
# target should be used.
core-dev: generate
	go install github.com/hashicorp/terraform

# Shorthand for building and installing just one plugin for local testing.
# Run as (for example): make plugin-dev PLUGIN=provider-aws
plugin-dev: generate


@@ -0,0 +1,12 @@
package main

import (
	"github.com/hashicorp/terraform/builtin/providers/dyn"
	"github.com/hashicorp/terraform/plugin"
)

func main() {
	plugin.Serve(&plugin.ServeOpts{
		ProviderFunc: dyn.Provider,
	})
}


@@ -0,0 +1 @@
package main


@ -1,8 +1,9 @@
package aws package aws
import ( import (
"github.com/awslabs/aws-sdk-go/aws"
"github.com/hashicorp/terraform/helper/schema" "github.com/hashicorp/terraform/helper/schema"
"github.com/aws/aws-sdk-go/aws"
) )
func makeAwsStringList(in []interface{}) []*string { func makeAwsStringList(in []interface{}) []*string {


@@ -5,11 +5,13 @@ import (
	"sync"
	"time"

	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds"
	"github.com/hashicorp/terraform/helper/hashcode"
	"github.com/hashicorp/terraform/helper/mutexkv"
	"github.com/hashicorp/terraform/helper/schema"
	"github.com/hashicorp/terraform/terraform"
)

// Provider returns a terraform.ResourceProvider.
@@ -320,3 +322,6 @@ func providerConfigure(d *schema.ResourceData) (interface{}, error) {
	return config.Client()
}

// This is a global MutexKV for use within this plugin.
var awsMutexKV = mutexkv.NewMutexKV()


@@ -23,7 +23,7 @@ func TestAccAWSCodeDeployApp_basic(t *testing.T) {
			),
		},
		resource.TestStep{
			Config: testAccAWSCodeDeployAppModified,
			Check: resource.ComposeTestCheckFunc(
				testAccCheckAWSCodeDeployAppExists("aws_codedeploy_app.foo"),
			),
@@ -72,7 +72,7 @@ resource "aws_codedeploy_app" "foo" {
  name = "foo"
}`

var testAccAWSCodeDeployAppModified = `
resource "aws_codedeploy_app" "foo" {
  name = "bar"
}`


@@ -23,7 +23,7 @@ func TestAccAWSCodeDeployDeploymentGroup_basic(t *testing.T) {
			),
		},
		resource.TestStep{
			Config: testAccAWSCodeDeployDeploymentGroupModified,
			Check: resource.ComposeTestCheckFunc(
				testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo"),
			),
@@ -133,7 +133,7 @@ resource "aws_codedeploy_deployment_group" "foo" {
  }
}`

var testAccAWSCodeDeployDeploymentGroupModified = `
resource "aws_codedeploy_app" "foo_app" {
  name = "foo_app"
}


@@ -29,9 +29,9 @@ func resourceAwsDbSubnetGroup() *schema.Resource {
				Required: true,
				ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) {
					value := v.(string)
					if !regexp.MustCompile(`^[ .0-9A-Za-z-_]+$`).MatchString(value) {
						errors = append(errors, fmt.Errorf(
							"only alphanumeric characters, hyphens, underscores, periods, and spaces allowed in %q", k))
					}
					if len(value) > 255 {
						errors = append(errors, fmt.Errorf(


@@ -51,12 +51,14 @@ func TestAccAWSDBSubnetGroup_withUndocumentedCharacters(t *testing.T) {
		CheckDestroy: testAccCheckDBSubnetGroupDestroy,
		Steps: []resource.TestStep{
			resource.TestStep{
				Config: testAccDBSubnetGroupConfig_withUnderscoresAndPeriodsAndSpaces,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckDBSubnetGroupExists(
						"aws_db_subnet_group.underscores", &v),
					testAccCheckDBSubnetGroupExists(
						"aws_db_subnet_group.periods", &v),
					testAccCheckDBSubnetGroupExists(
						"aws_db_subnet_group.spaces", &v),
					testCheck,
				),
			},
@@ -156,7 +158,7 @@ resource "aws_db_subnet_group" "foo" {
}
`

const testAccDBSubnetGroupConfig_withUnderscoresAndPeriodsAndSpaces = `
resource "aws_vpc" "main" {
  cidr_block = "192.168.0.0/16"
}
@@ -184,4 +186,10 @@ resource "aws_db_subnet_group" "periods" {
  description = "Our main group of subnets"
  subnet_ids = ["${aws_subnet.frontend.id}", "${aws_subnet.backend.id}"]
}

resource "aws_db_subnet_group" "spaces" {
  name = "with spaces"
  description = "Our main group of subnets"
  subnet_ids = ["${aws_subnet.frontend.id}", "${aws_subnet.backend.id}"]
}
`


@@ -156,6 +156,8 @@ func resourceAwsEcsServiceRead(d *schema.ResourceData, meta interface{}) error {
	}

	if len(out.Services) < 1 {
		log.Printf("[DEBUG] Removing ECS service %s (%s) because it's gone", d.Get("name").(string), d.Id())
		d.SetId("")
		return nil
	}
@@ -163,7 +165,7 @@ func resourceAwsEcsServiceRead(d *schema.ResourceData, meta interface{}) error {
	// Status==INACTIVE means deleted service
	if *service.Status == "INACTIVE" {
		log.Printf("[DEBUG] Removing ECS service %q because it's INACTIVE", *service.ServiceArn)
		d.SetId("")
		return nil
	}
@@ -247,6 +249,12 @@ func resourceAwsEcsServiceDelete(d *schema.ResourceData, meta interface{}) error
	if err != nil {
		return err
	}

	if len(resp.Services) == 0 {
		log.Printf("[DEBUG] ECS Service %q is already gone", d.Id())
		return nil
	}

	log.Printf("[DEBUG] ECS service %s is currently %s", d.Id(), *resp.Services[0].Status)
	if *resp.Services[0].Status == "INACTIVE" {


@@ -17,7 +17,6 @@ func resourceAwsEcsTaskDefinition() *schema.Resource {
	return &schema.Resource{
		Create: resourceAwsEcsTaskDefinitionCreate,
		Read:   resourceAwsEcsTaskDefinitionRead,
		Update: resourceAwsEcsTaskDefinitionUpdate,
		Delete: resourceAwsEcsTaskDefinitionDelete,

		Schema: map[string]*schema.Schema{
@@ -40,6 +39,7 @@ func resourceAwsEcsTaskDefinition() *schema.Resource {
			"container_definitions": &schema.Schema{
				Type:     schema.TypeString,
				Required: true,
				ForceNew: true,
				StateFunc: func(v interface{}) string {
					hash := sha1.Sum([]byte(v.(string)))
					return hex.EncodeToString(hash[:])
@@ -49,6 +49,7 @@ func resourceAwsEcsTaskDefinition() *schema.Resource {
			"volume": &schema.Schema{
				Type:     schema.TypeSet,
				Optional: true,
				ForceNew: true,
				Elem: &schema.Resource{
					Schema: map[string]*schema.Schema{
						"name": &schema.Schema{
"name": &schema.Schema{ "name": &schema.Schema{
@@ -131,29 +132,6 @@ func resourceAwsEcsTaskDefinitionRead(d *schema.ResourceData, meta interface{})
	return nil
}

func resourceAwsEcsTaskDefinitionUpdate(d *schema.ResourceData, meta interface{}) error {
	oldArn := d.Get("arn").(string)
	log.Printf("[DEBUG] Creating new revision of task definition %q", d.Id())
	err := resourceAwsEcsTaskDefinitionCreate(d, meta)
	if err != nil {
		return err
	}
	log.Printf("[DEBUG] New revision of %q created: %q", d.Id(), d.Get("arn").(string))

	log.Printf("[DEBUG] Deregistering old revision of task definition %q: %q", d.Id(), oldArn)
	conn := meta.(*AWSClient).ecsconn
	_, err = conn.DeregisterTaskDefinition(&ecs.DeregisterTaskDefinitionInput{
		TaskDefinition: aws.String(oldArn),
	})
	if err != nil {
		return err
	}
	log.Printf("[DEBUG] Old revision of task definition deregistered: %q", oldArn)

	return resourceAwsEcsTaskDefinitionRead(d, meta)
}

func resourceAwsEcsTaskDefinitionDelete(d *schema.ResourceData, meta interface{}) error {
	conn := meta.(*AWSClient).ecsconn

@@ -23,7 +23,7 @@ func TestAccAWSEcsTaskDefinition_basic(t *testing.T) {
			),
		},
		resource.TestStep{
			Config: testAccAWSEcsTaskDefinitionModified,
			Check: resource.ComposeTestCheckFunc(
				testAccCheckAWSEcsTaskDefinitionExists("aws_ecs_task_definition.jenkins"),
			),
@@ -49,6 +49,31 @@ func TestAccAWSEcsTaskDefinition_withScratchVolume(t *testing.T) {
	})
}

// Regression for https://github.com/hashicorp/terraform/issues/2694
func TestAccAWSEcsTaskDefinition_withEcsService(t *testing.T) {
	resource.Test(t, resource.TestCase{
		PreCheck:     func() { testAccPreCheck(t) },
		Providers:    testAccProviders,
		CheckDestroy: testAccCheckAWSEcsTaskDefinitionDestroy,
		Steps: []resource.TestStep{
			resource.TestStep{
				Config: testAccAWSEcsTaskDefinitionWithEcsService,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckAWSEcsTaskDefinitionExists("aws_ecs_task_definition.sleep"),
					testAccCheckAWSEcsServiceExists("aws_ecs_service.sleep-svc"),
				),
			},
			resource.TestStep{
				Config: testAccAWSEcsTaskDefinitionWithEcsServiceModified,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckAWSEcsTaskDefinitionExists("aws_ecs_task_definition.sleep"),
					testAccCheckAWSEcsServiceExists("aws_ecs_service.sleep-svc"),
				),
			},
		},
	})
}

func testAccCheckAWSEcsTaskDefinitionDestroy(s *terraform.State) error {
	conn := testAccProvider.Meta().(*AWSClient).ecsconn
@@ -155,7 +180,72 @@ TASK_DEFINITION
}
`

var testAccAWSEcsTaskDefinitionWithEcsService = `
resource "aws_ecs_cluster" "default" {
  name = "terraform-acc-test"
}

resource "aws_ecs_service" "sleep-svc" {
  name = "tf-acc-ecs-svc"
  cluster = "${aws_ecs_cluster.default.id}"
  task_definition = "${aws_ecs_task_definition.sleep.arn}"
  desired_count = 1
}

resource "aws_ecs_task_definition" "sleep" {
  family = "terraform-acc-sc-volume-test"
  container_definitions = <<TASK_DEFINITION
[
  {
    "name": "sleep",
    "image": "busybox",
    "cpu": 10,
    "command": ["sleep","360"],
    "memory": 10,
    "essential": true
  }
]
TASK_DEFINITION

  volume {
    name = "database_scratch"
  }
}
`

var testAccAWSEcsTaskDefinitionWithEcsServiceModified = `
resource "aws_ecs_cluster" "default" {
  name = "terraform-acc-test"
}

resource "aws_ecs_service" "sleep-svc" {
  name = "tf-acc-ecs-svc"
  cluster = "${aws_ecs_cluster.default.id}"
  task_definition = "${aws_ecs_task_definition.sleep.arn}"
  desired_count = 1
}

resource "aws_ecs_task_definition" "sleep" {
  family = "terraform-acc-sc-volume-test"
  container_definitions = <<TASK_DEFINITION
[
  {
    "name": "sleep",
    "image": "busybox",
    "cpu": 20,
    "command": ["sleep","360"],
    "memory": 50,
    "essential": true
  }
]
TASK_DEFINITION

  volume {
    name = "database_scratch"
  }
}
`

var testAccAWSEcsTaskDefinitionModified = `
resource "aws_ecs_task_definition" "jenkins" {
  family = "terraform-acc-test"
  container_definitions = <<TASK_DEFINITION


@@ -14,8 +14,7 @@ func resourceAwsIamGroup() *schema.Resource {
	return &schema.Resource{
		Create: resourceAwsIamGroupCreate,
		Read:   resourceAwsIamGroupRead,
		Update: resourceAwsIamGroupUpdate,
		Delete: resourceAwsIamGroupDelete,

		Schema: map[string]*schema.Schema{
@@ -30,13 +29,11 @@ func resourceAwsIamGroup() *schema.Resource {
			"name": &schema.Schema{
				Type:     schema.TypeString,
				Required: true,
			},
			"path": &schema.Schema{
				Type:     schema.TypeString,
				Optional: true,
				Default:  "/",
			},
		},
	}
@@ -45,9 +42,10 @@ func resourceAwsIamGroup() *schema.Resource {
func resourceAwsIamGroupCreate(d *schema.ResourceData, meta interface{}) error {
	iamconn := meta.(*AWSClient).iamconn
	name := d.Get("name").(string)
	path := d.Get("path").(string)

	request := &iam.CreateGroupInput{
		Path:      aws.String(path),
		GroupName: aws.String(name),
	}
@@ -60,9 +58,10 @@ func resourceAwsIamGroupCreate(d *schema.ResourceData, meta interface{}) error {
func resourceAwsIamGroupRead(d *schema.ResourceData, meta interface{}) error {
	iamconn := meta.(*AWSClient).iamconn
	name := d.Get("name").(string)

	request := &iam.GetGroupInput{
		GroupName: aws.String(name),
	}

	getResp, err := iamconn.GetGroup(request)
@@ -93,6 +92,26 @@ func resourceAwsIamGroupReadResult(d *schema.ResourceData, group *iam.Group) err
	return nil
}

func resourceAwsIamGroupUpdate(d *schema.ResourceData, meta interface{}) error {
	if d.HasChange("name") || d.HasChange("path") {
		iamconn := meta.(*AWSClient).iamconn
		on, nn := d.GetChange("name")
		_, np := d.GetChange("path")

		request := &iam.UpdateGroupInput{
			GroupName:    aws.String(on.(string)),
			NewGroupName: aws.String(nn.(string)),
			NewPath:      aws.String(np.(string)),
		}
		_, err := iamconn.UpdateGroup(request)
		if err != nil {
			return fmt.Errorf("Error updating IAM Group %s: %s", d.Id(), err)
		}
		return resourceAwsIamGroupRead(d, meta)
	}
	return nil
}

func resourceAwsIamGroupDelete(d *schema.ResourceData, meta interface{}) error {
	iamconn := meta.(*AWSClient).iamconn


@@ -23,7 +23,14 @@ func TestAccAWSIAMGroup_basic(t *testing.T) {
			Config: testAccAWSGroupConfig,
			Check: resource.ComposeTestCheckFunc(
				testAccCheckAWSGroupExists("aws_iam_group.group", &conf),
				testAccCheckAWSGroupAttributes(&conf, "test-group", "/"),
			),
		},
		resource.TestStep{
			Config: testAccAWSGroupConfig2,
			Check: resource.ComposeTestCheckFunc(
				testAccCheckAWSGroupExists("aws_iam_group.group", &conf),
				testAccCheckAWSGroupAttributes(&conf, "test-group2", "/funnypath/"),
			),
		},
	},
@@ -85,14 +92,14 @@ func testAccCheckAWSGroupExists(n string, res *iam.GetGroupOutput) resource.Test
	}
}

func testAccCheckAWSGroupAttributes(group *iam.GetGroupOutput, name string, path string) resource.TestCheckFunc {
	return func(s *terraform.State) error {
		if *group.Group.GroupName != name {
			return fmt.Errorf("Bad name: %s when %s was expected", *group.Group.GroupName, name)
		}

		if *group.Group.Path != path {
			return fmt.Errorf("Bad path: %s when %s was expected", *group.Group.Path, path)
		}

		return nil
@@ -105,3 +112,9 @@ resource "aws_iam_group" "group" {
  path = "/"
}
`

const testAccAWSGroupConfig2 = `
resource "aws_iam_group" "group" {
  name = "test-group2"
  path = "/funnypath/"
}
`


@@ -2,6 +2,7 @@ package aws

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
@@ -90,7 +91,8 @@ func resourceAwsIamPolicyAttachmentRead(d *schema.ResourceData, meta interface{}
	if err != nil {
		if awsErr, ok := err.(awserr.Error); ok {
			if awsErr.Code() == "NoSuchEntity" {
				log.Printf("[WARN] No such entity found for Policy Attachment (%s)", d.Id())
				d.SetId("")
				return nil
			}


@@ -4,11 +4,12 @@ import (
	"fmt"
	"testing"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/iam"
	"github.com/aws/aws-sdk-go/service/opsworks"
	"github.com/hashicorp/terraform/helper/resource"
	"github.com/hashicorp/terraform/terraform"
)

// These tests assume the existence of predefined Opsworks IAM roles named `aws-opsworks-ec2-role`
@@ -49,7 +50,7 @@ resource "aws_opsworks_stack" "tf-acc" {
  custom_cookbooks_source {
    type = "git"
    revision = "master"
    url = "https://github.com/aws/opsworks-example-cookbooks.git"
  }
}
`
@@ -129,7 +130,7 @@ resource "aws_opsworks_stack" "tf-acc" {
  custom_cookbooks_source {
    type = "git"
    revision = "master"
    url = "https://github.com/aws/opsworks-example-cookbooks.git"
  }
}
`
@@ -259,7 +260,7 @@ var testAccAwsOpsworksStackCheckResourceAttrsUpdate = resource.ComposeTestCheckF
	resource.TestCheckResourceAttr(
		"aws_opsworks_stack.tf-acc",
		"custom_cookbooks_source.0.url",
		"https://github.com/aws/opsworks-example-cookbooks.git",
	),
)


@@ -49,6 +49,13 @@ func resourceAwsRoute53Record() *schema.Resource {
				Type:     schema.TypeString,
				Required: true,
				ForceNew: true,
				ValidateFunc: func(v interface{}, k string) (ws []string, es []error) {
					value := v.(string)
					if value == "" {
						es = append(es, fmt.Errorf("Cannot have empty zone_id"))
					}
					return
				},
			},

			"ttl": &schema.Schema{
@@ -136,6 +143,9 @@ func resourceAwsRoute53RecordCreate(d *schema.ResourceData, meta interface{}) er
	if err != nil {
		return err
	}
	if zoneRecord.HostedZone == nil {
		return fmt.Errorf("[WARN] No Route53 Zone found for id (%s)", zone)
	}

	// Get the record
	rec, err := resourceAwsRoute53RecordBuildSet(d, *zoneRecord.HostedZone.Name)


@@ -139,7 +139,7 @@ func TestAccAWSRoute53Record_failover(t *testing.T) {
	})
}

func TestAccAWSRoute53Record_weighted_basic(t *testing.T) {
	resource.Test(t, resource.TestCase{
		PreCheck:  func() { testAccPreCheck(t) },
		Providers: testAccProviders,


@ -8,6 +8,7 @@ import (
"os" "os"
"github.com/hashicorp/terraform/helper/schema" "github.com/hashicorp/terraform/helper/schema"
"github.com/mitchellh/go-homedir"
"github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/aws/awserr"
@ -95,7 +96,11 @@ func resourceAwsS3BucketObjectPut(d *schema.ResourceData, meta interface{}) erro
if v, ok := d.GetOk("source"); ok { if v, ok := d.GetOk("source"); ok {
source := v.(string) source := v.(string)
file, err := os.Open(source) path, err := homedir.Expand(source)
if err != nil {
return fmt.Errorf("Error expanding homedir in source (%s): %s", source, err)
}
file, err := os.Open(path)
if err != nil {
return fmt.Errorf("Error opening S3 bucket object source (%s): %s", source, err)
}

View File

@ -84,8 +84,11 @@ func resourceAwsSecurityGroupRule() *schema.Resource {
func resourceAwsSecurityGroupRuleCreate(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).ec2conn
sg_id := d.Get("security_group_id").(string)

awsMutexKV.Lock(sg_id)
defer awsMutexKV.Unlock(sg_id)

sg, err := findResourceSecurityGroup(conn, sg_id)
if err != nil {
return err
}
@ -249,8 +252,11 @@ func resourceAwsSecurityGroupRuleRead(d *schema.ResourceData, meta interface{})
func resourceAwsSecurityGroupRuleDelete(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).ec2conn
sg_id := d.Get("security_group_id").(string)

awsMutexKV.Lock(sg_id)
defer awsMutexKV.Unlock(sg_id)

sg, err := findResourceSecurityGroup(conn, sg_id)
if err != nil {
return err
}

View File

@ -1,6 +1,7 @@
package aws
import (
"bytes"
"fmt"
"log"
"testing"
@ -339,7 +340,24 @@ func TestAccAWSSecurityGroupRule_PartialMatching_Source(t *testing.T) {
},
},
})
}
func TestAccAWSSecurityGroupRule_Race(t *testing.T) {
var group ec2.SecurityGroup
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSSecurityGroupRuleDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSSecurityGroupRuleRace,
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSSecurityGroupRuleExists("aws_security_group.race", &group),
),
},
},
})
}
func testAccCheckAWSSecurityGroupRuleDestroy(s *terraform.State) error {
@ -718,3 +736,41 @@ resource "aws_security_group_rule" "other_ingress" {
security_group_id = "${aws_security_group.web.id}"
}
`
var testAccAWSSecurityGroupRuleRace = func() string {
var b bytes.Buffer
iterations := 50
b.WriteString(fmt.Sprintf(`
resource "aws_vpc" "default" {
cidr_block = "10.0.0.0/16"
tags { Name = "tf-sg-rule-race" }
}
resource "aws_security_group" "race" {
name = "tf-sg-rule-race-group-%d"
vpc_id = "${aws_vpc.default.id}"
}
`, genRandInt()))
for i := 1; i < iterations; i++ {
b.WriteString(fmt.Sprintf(`
resource "aws_security_group_rule" "ingress%d" {
security_group_id = "${aws_security_group.race.id}"
type = "ingress"
from_port = %d
to_port = %d
protocol = "tcp"
cidr_blocks = ["10.0.0.%d/32"]
}
resource "aws_security_group_rule" "egress%d" {
security_group_id = "${aws_security_group.race.id}"
type = "egress"
from_port = %d
to_port = %d
protocol = "tcp"
cidr_blocks = ["10.0.0.%d/32"]
}
`, i, i, i, i, i, i, i, i))
}
return b.String()
}()

View File

@ -3,12 +3,10 @@ package azure
import (
"encoding/xml"
"fmt"

"github.com/hashicorp/terraform/helper/pathorcontents"
"github.com/hashicorp/terraform/helper/schema"
"github.com/hashicorp/terraform/terraform"
)
// Provider returns a terraform.ResourceProvider.
@ -20,6 +18,14 @@ func Provider() terraform.ResourceProvider {
Optional: true,
DefaultFunc: schema.EnvDefaultFunc("AZURE_SETTINGS_FILE", nil),
ValidateFunc: validateSettingsFile,
Deprecated: "Use the publish_settings field instead",
},
"publish_settings": &schema.Schema{
Type: schema.TypeString,
Optional: true,
DefaultFunc: schema.EnvDefaultFunc("AZURE_PUBLISH_SETTINGS", nil),
ValidateFunc: validatePublishSettings,
},
"subscription_id": &schema.Schema{
@ -64,11 +70,14 @@ func providerConfigure(d *schema.ResourceData) (interface{}, error) {
Certificate: []byte(d.Get("certificate").(string)),
}

publishSettings := d.Get("publish_settings").(string)
if publishSettings == "" {
publishSettings = d.Get("settings_file").(string)
}
if publishSettings != "" {
// any errors from readSettings would have been caught at the validate
// step, so we can avoid handling them now
settings, _, _ := readSettings(publishSettings)
config.Settings = settings
return config.NewClientFromSettingsData()
}
@ -92,37 +101,42 @@ func validateSettingsFile(v interface{}, k string) ([]string, []error) {
return warnings, errors
}

func validatePublishSettings(v interface{}, k string) (ws []string, es []error) {
value := v.(string)
if value == "" {
return
}

var settings settingsData
if err := xml.Unmarshal([]byte(value), &settings); err != nil {
es = append(es, fmt.Errorf("error parsing publish_settings as XML: %s", err))
}

return
}

const settingsPathWarnMsg = `
settings_file was provided as a file path. This support
will be removed in the future. Please update your configuration
to use ${file("filename.publishsettings")} instead.`

func readSettings(pathOrContents string) (s []byte, ws []string, es []error) {
contents, wasPath, err := pathorcontents.Read(pathOrContents)
if err != nil {
es = append(es, fmt.Errorf("error reading settings_file: %s", err))
}
if wasPath {
ws = append(ws, settingsPathWarnMsg)
}

var settings settingsData
if err := xml.Unmarshal([]byte(contents), &settings); err != nil {
es = append(es, fmt.Errorf("error parsing settings_file as XML: %s", err))
}

s = []byte(contents)
return
}
// settingsData is a private struct used to test the unmarshalling of the // settingsData is a private struct used to test the unmarshalling of the

View File

@ -51,12 +51,12 @@ func TestProvider_impl(t *testing.T) {
}
func testAccPreCheck(t *testing.T) {
if v := os.Getenv("AZURE_PUBLISH_SETTINGS"); v == "" {
subscriptionID := os.Getenv("AZURE_SUBSCRIPTION_ID")
certificate := os.Getenv("AZURE_CERTIFICATE")
if subscriptionID == "" || certificate == "" {
t.Fatal("either AZURE_PUBLISH_SETTINGS, or AZURE_SUBSCRIPTION_ID " +
"and AZURE_CERTIFICATE must be set for acceptance tests")
}
} }
@ -78,6 +78,11 @@ func TestAzure_validateSettingsFile(t *testing.T) {
t.Fatalf("Error creating temporary file with XML in TestAzure_validateSettingsFile: %s", err)
}
defer os.Remove(fx.Name())
_, err = io.WriteString(fx, "<PublishData></PublishData>")
if err != nil {
t.Fatalf("Error writing XML File: %s", err)
}
fx.Close()
home, err := homedir.Dir()
if err != nil {
@ -88,12 +93,11 @@ func TestAzure_validateSettingsFile(t *testing.T) {
t.Fatalf("Error creating homedir-based temporary file: %s", err)
}
defer os.Remove(fh.Name())
_, err = io.WriteString(fh, "<PublishData></PublishData>")
if err != nil {
t.Fatalf("Error writing XML File: %s", err)
}
fh.Close()
r := strings.NewReplacer(home, "~")
homePath := r.Replace(fh.Name())
@ -103,8 +107,8 @@ func TestAzure_validateSettingsFile(t *testing.T) {
W int // expected count of warnings
E int // expected count of errors
}{
{"test", 0, 1},
{f.Name(), 1, 1},
{fx.Name(), 1, 0},
{homePath, 1, 0},
{"<PublishData></PublishData>", 0, 0},
@ -114,10 +118,10 @@ func TestAzure_validateSettingsFile(t *testing.T) {
w, e := validateSettingsFile(tc.Input, "")
if len(w) != tc.W {
t.Errorf("Error in TestAzureValidateSettingsFile: input: %s , warnings: %v, errors: %v", tc.Input, w, e)
}
if len(e) != tc.E {
t.Errorf("Error in TestAzureValidateSettingsFile: input: %s , warnings: %v, errors: %v", tc.Input, w, e)
}
}
} }
@ -164,33 +168,8 @@ func TestAzure_providerConfigure(t *testing.T) {
err = rp.Configure(terraform.NewResourceConfig(rawConfig))
meta := rp.(*schema.Provider).Meta()
if (meta == nil) != tc.NilMeta {
t.Fatalf("expected NilMeta: %t, got meta: %#v, settings_file: %q",
tc.NilMeta, meta, tc.SettingsFile)
}
}
}

View File

@ -0,0 +1,28 @@
package dyn
import (
"fmt"
"log"
"github.com/nesv/go-dynect/dynect"
)
type Config struct {
CustomerName string
Username string
Password string
}
// Client() returns a new client for accessing dyn.
func (c *Config) Client() (*dynect.ConvenientClient, error) {
client := dynect.NewConvenientClient(c.CustomerName)
err := client.Login(c.Username, c.Password)
if err != nil {
return nil, fmt.Errorf("Error setting up Dyn client: %s", err)
}
log.Printf("[INFO] Dyn client configured for customer: %s, user: %s", c.CustomerName, c.Username)
return client, nil
}

View File

@ -0,0 +1,50 @@
package dyn
import (
"github.com/hashicorp/terraform/helper/schema"
"github.com/hashicorp/terraform/terraform"
)
// Provider returns a terraform.ResourceProvider.
func Provider() terraform.ResourceProvider {
return &schema.Provider{
Schema: map[string]*schema.Schema{
"customer_name": &schema.Schema{
Type: schema.TypeString,
Required: true,
DefaultFunc: schema.EnvDefaultFunc("DYN_CUSTOMER_NAME", nil),
Description: "A Dyn customer name.",
},
"username": &schema.Schema{
Type: schema.TypeString,
Required: true,
DefaultFunc: schema.EnvDefaultFunc("DYN_USERNAME", nil),
Description: "A Dyn username.",
},
"password": &schema.Schema{
Type: schema.TypeString,
Required: true,
DefaultFunc: schema.EnvDefaultFunc("DYN_PASSWORD", nil),
Description: "The Dyn password.",
},
},
ResourcesMap: map[string]*schema.Resource{
"dyn_record": resourceDynRecord(),
},
ConfigureFunc: providerConfigure,
}
}
func providerConfigure(d *schema.ResourceData) (interface{}, error) {
config := Config{
CustomerName: d.Get("customer_name").(string),
Username: d.Get("username").(string),
Password: d.Get("password").(string),
}
return config.Client()
}

View File

@ -0,0 +1,47 @@
package dyn
import (
"os"
"testing"
"github.com/hashicorp/terraform/helper/schema"
"github.com/hashicorp/terraform/terraform"
)
var testAccProviders map[string]terraform.ResourceProvider
var testAccProvider *schema.Provider
func init() {
testAccProvider = Provider().(*schema.Provider)
testAccProviders = map[string]terraform.ResourceProvider{
"dyn": testAccProvider,
}
}
func TestProvider(t *testing.T) {
if err := Provider().(*schema.Provider).InternalValidate(); err != nil {
t.Fatalf("err: %s", err)
}
}
func TestProvider_impl(t *testing.T) {
var _ terraform.ResourceProvider = Provider()
}
func testAccPreCheck(t *testing.T) {
if v := os.Getenv("DYN_CUSTOMER_NAME"); v == "" {
t.Fatal("DYN_CUSTOMER_NAME must be set for acceptance tests")
}
if v := os.Getenv("DYN_USERNAME"); v == "" {
t.Fatal("DYN_USERNAME must be set for acceptance tests")
}
if v := os.Getenv("DYN_PASSWORD"); v == "" {
t.Fatal("DYN_PASSWORD must be set for acceptance tests.")
}
if v := os.Getenv("DYN_ZONE"); v == "" {
t.Fatal("DYN_ZONE must be set for acceptance tests. The domain is used to create and destroy records against.")
}
}

View File

@ -0,0 +1,198 @@
package dyn
import (
"fmt"
"log"
"sync"
"github.com/hashicorp/terraform/helper/schema"
"github.com/nesv/go-dynect/dynect"
)
var mutex = &sync.Mutex{}
func resourceDynRecord() *schema.Resource {
return &schema.Resource{
Create: resourceDynRecordCreate,
Read: resourceDynRecordRead,
Update: resourceDynRecordUpdate,
Delete: resourceDynRecordDelete,
Schema: map[string]*schema.Schema{
"zone": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"name": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"fqdn": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
"type": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"value": &schema.Schema{
Type: schema.TypeString,
Required: true,
},
"ttl": &schema.Schema{
Type: schema.TypeString,
Optional: true,
Default: "0", // 0 means use zone default
},
},
}
}
func resourceDynRecordCreate(d *schema.ResourceData, meta interface{}) error {
mutex.Lock()
client := meta.(*dynect.ConvenientClient)
record := &dynect.Record{
Name: d.Get("name").(string),
Zone: d.Get("zone").(string),
Type: d.Get("type").(string),
TTL: d.Get("ttl").(string),
Value: d.Get("value").(string),
}
log.Printf("[DEBUG] Dyn record create configuration: %#v", record)
// create the record
err := client.CreateRecord(record)
if err != nil {
mutex.Unlock()
return fmt.Errorf("Failed to create Dyn record: %s", err)
}
// publish the zone
err = client.PublishZone(record.Zone)
if err != nil {
mutex.Unlock()
return fmt.Errorf("Failed to publish Dyn zone: %s", err)
}
// get the record ID
err = client.GetRecordID(record)
if err != nil {
mutex.Unlock()
return fmt.Errorf("%s", err)
}
d.SetId(record.ID)
mutex.Unlock()
return resourceDynRecordRead(d, meta)
}
func resourceDynRecordRead(d *schema.ResourceData, meta interface{}) error {
mutex.Lock()
defer mutex.Unlock()
client := meta.(*dynect.ConvenientClient)
record := &dynect.Record{
ID: d.Id(),
Name: d.Get("name").(string),
Zone: d.Get("zone").(string),
TTL: d.Get("ttl").(string),
FQDN: d.Get("fqdn").(string),
Type: d.Get("type").(string),
}
err := client.GetRecord(record)
if err != nil {
return fmt.Errorf("Couldn't find Dyn record: %s", err)
}
d.Set("zone", record.Zone)
d.Set("fqdn", record.FQDN)
d.Set("name", record.Name)
d.Set("type", record.Type)
d.Set("ttl", record.TTL)
d.Set("value", record.Value)
return nil
}
func resourceDynRecordUpdate(d *schema.ResourceData, meta interface{}) error {
mutex.Lock()
client := meta.(*dynect.ConvenientClient)
record := &dynect.Record{
Name: d.Get("name").(string),
Zone: d.Get("zone").(string),
TTL: d.Get("ttl").(string),
Type: d.Get("type").(string),
Value: d.Get("value").(string),
}
log.Printf("[DEBUG] Dyn record update configuration: %#v", record)
// update the record
err := client.UpdateRecord(record)
if err != nil {
mutex.Unlock()
return fmt.Errorf("Failed to update Dyn record: %s", err)
}
// publish the zone
err = client.PublishZone(record.Zone)
if err != nil {
mutex.Unlock()
return fmt.Errorf("Failed to publish Dyn zone: %s", err)
}
// get the record ID
err = client.GetRecordID(record)
if err != nil {
mutex.Unlock()
return fmt.Errorf("%s", err)
}
d.SetId(record.ID)
mutex.Unlock()
return resourceDynRecordRead(d, meta)
}
func resourceDynRecordDelete(d *schema.ResourceData, meta interface{}) error {
mutex.Lock()
defer mutex.Unlock()
client := meta.(*dynect.ConvenientClient)
record := &dynect.Record{
ID: d.Id(),
Name: d.Get("name").(string),
Zone: d.Get("zone").(string),
FQDN: d.Get("fqdn").(string),
Type: d.Get("type").(string),
}
log.Printf("[INFO] Deleting Dyn record: %s, %s", record.FQDN, record.ID)
// delete the record
err := client.DeleteRecord(record)
if err != nil {
return fmt.Errorf("Failed to delete Dyn record: %s", err)
}
// publish the zone
err = client.PublishZone(record.Zone)
if err != nil {
return fmt.Errorf("Failed to publish Dyn zone: %s", err)
}
return nil
}

View File

@ -0,0 +1,239 @@
package dyn
import (
"fmt"
"os"
"testing"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
"github.com/nesv/go-dynect/dynect"
)
func TestAccDynRecord_Basic(t *testing.T) {
var record dynect.Record
zone := os.Getenv("DYN_ZONE")
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckDynRecordDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: fmt.Sprintf(testAccCheckDynRecordConfig_basic, zone),
Check: resource.ComposeTestCheckFunc(
testAccCheckDynRecordExists("dyn_record.foobar", &record),
testAccCheckDynRecordAttributes(&record),
resource.TestCheckResourceAttr(
"dyn_record.foobar", "name", "terraform"),
resource.TestCheckResourceAttr(
"dyn_record.foobar", "zone", zone),
resource.TestCheckResourceAttr(
"dyn_record.foobar", "value", "192.168.0.10"),
),
},
},
})
}
func TestAccDynRecord_Updated(t *testing.T) {
var record dynect.Record
zone := os.Getenv("DYN_ZONE")
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckDynRecordDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: fmt.Sprintf(testAccCheckDynRecordConfig_basic, zone),
Check: resource.ComposeTestCheckFunc(
testAccCheckDynRecordExists("dyn_record.foobar", &record),
testAccCheckDynRecordAttributes(&record),
resource.TestCheckResourceAttr(
"dyn_record.foobar", "name", "terraform"),
resource.TestCheckResourceAttr(
"dyn_record.foobar", "zone", zone),
resource.TestCheckResourceAttr(
"dyn_record.foobar", "value", "192.168.0.10"),
),
},
resource.TestStep{
Config: fmt.Sprintf(testAccCheckDynRecordConfig_new_value, zone),
Check: resource.ComposeTestCheckFunc(
testAccCheckDynRecordExists("dyn_record.foobar", &record),
testAccCheckDynRecordAttributesUpdated(&record),
resource.TestCheckResourceAttr(
"dyn_record.foobar", "name", "terraform"),
resource.TestCheckResourceAttr(
"dyn_record.foobar", "zone", zone),
resource.TestCheckResourceAttr(
"dyn_record.foobar", "value", "192.168.0.11"),
),
},
},
})
}
func TestAccDynRecord_Multiple(t *testing.T) {
var record dynect.Record
zone := os.Getenv("DYN_ZONE")
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckDynRecordDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: fmt.Sprintf(testAccCheckDynRecordConfig_multiple, zone, zone, zone),
Check: resource.ComposeTestCheckFunc(
testAccCheckDynRecordExists("dyn_record.foobar1", &record),
testAccCheckDynRecordAttributes(&record),
resource.TestCheckResourceAttr(
"dyn_record.foobar1", "name", "terraform1"),
resource.TestCheckResourceAttr(
"dyn_record.foobar1", "zone", zone),
resource.TestCheckResourceAttr(
"dyn_record.foobar1", "value", "192.168.0.10"),
resource.TestCheckResourceAttr(
"dyn_record.foobar2", "name", "terraform2"),
resource.TestCheckResourceAttr(
"dyn_record.foobar2", "zone", zone),
resource.TestCheckResourceAttr(
"dyn_record.foobar2", "value", "192.168.1.10"),
resource.TestCheckResourceAttr(
"dyn_record.foobar3", "name", "terraform3"),
resource.TestCheckResourceAttr(
"dyn_record.foobar3", "zone", zone),
resource.TestCheckResourceAttr(
"dyn_record.foobar3", "value", "192.168.2.10"),
),
},
},
})
}
func testAccCheckDynRecordDestroy(s *terraform.State) error {
client := testAccProvider.Meta().(*dynect.ConvenientClient)
for _, rs := range s.RootModule().Resources {
if rs.Type != "dyn_record" {
continue
}
foundRecord := &dynect.Record{
Zone: rs.Primary.Attributes["zone"],
ID: rs.Primary.ID,
FQDN: rs.Primary.Attributes["fqdn"],
Type: rs.Primary.Attributes["type"],
}
err := client.GetRecord(foundRecord)
// a successful lookup means the record was not destroyed
if err == nil {
return fmt.Errorf("Record still exists")
}
}
return nil
}
func testAccCheckDynRecordAttributes(record *dynect.Record) resource.TestCheckFunc {
return func(s *terraform.State) error {
if record.Value != "192.168.0.10" {
return fmt.Errorf("Bad value: %s", record.Value)
}
return nil
}
}
func testAccCheckDynRecordAttributesUpdated(record *dynect.Record) resource.TestCheckFunc {
return func(s *terraform.State) error {
if record.Value != "192.168.0.11" {
return fmt.Errorf("Bad value: %s", record.Value)
}
return nil
}
}
func testAccCheckDynRecordExists(n string, record *dynect.Record) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.ID == "" {
return fmt.Errorf("No Record ID is set")
}
client := testAccProvider.Meta().(*dynect.ConvenientClient)
foundRecord := &dynect.Record{
Zone: rs.Primary.Attributes["zone"],
ID: rs.Primary.ID,
FQDN: rs.Primary.Attributes["fqdn"],
Type: rs.Primary.Attributes["type"],
}
err := client.GetRecord(foundRecord)
if err != nil {
return err
}
if foundRecord.ID != rs.Primary.ID {
return fmt.Errorf("Record not found")
}
*record = *foundRecord
return nil
}
}
const testAccCheckDynRecordConfig_basic = `
resource "dyn_record" "foobar" {
zone = "%s"
name = "terraform"
value = "192.168.0.10"
type = "A"
ttl = 3600
}`
const testAccCheckDynRecordConfig_new_value = `
resource "dyn_record" "foobar" {
zone = "%s"
name = "terraform"
value = "192.168.0.11"
type = "A"
ttl = 3600
}`
const testAccCheckDynRecordConfig_multiple = `
resource "dyn_record" "foobar1" {
zone = "%s"
name = "terraform1"
value = "192.168.0.10"
type = "A"
ttl = 3600
}
resource "dyn_record" "foobar2" {
zone = "%s"
name = "terraform2"
value = "192.168.1.10"
type = "A"
ttl = 3600
}
resource "dyn_record" "foobar3" {
zone = "%s"
name = "terraform3"
value = "192.168.2.10"
type = "A"
ttl = 3600
}`

View File

@ -3,13 +3,12 @@ package google
import (
"encoding/json"
"fmt"
"log"
"net/http"
"runtime"
"strings"

"github.com/hashicorp/terraform/helper/pathorcontents"
"github.com/hashicorp/terraform/terraform"
"golang.org/x/oauth2"
"golang.org/x/oauth2/google"
@ -24,7 +23,7 @@ import (
// Config is the configuration structure used to instantiate the Google
// provider.
type Config struct {
Credentials string
Project string
Region string
@ -44,46 +43,17 @@ func (c *Config) loadAndValidate() error {
"https://www.googleapis.com/auth/devstorage.full_control",
}

var client *http.Client

if c.Credentials != "" {
contents, _, err := pathorcontents.Read(c.Credentials)
if err != nil {
return fmt.Errorf("Error loading credentials: %s", err)
}

// Assume account_file is a JSON string
if err := parseJSON(&account, contents); err != nil {
return fmt.Errorf("Error parsing credentials '%s': %s", contents, err)
}

// Get the token for use in our requests

View File

@ -5,11 +5,11 @@ import (
"testing"
)
const testFakeCredentialsPath = "./test-fixtures/fake_account.json"
func TestConfigLoadAndValidate_accountFilePath(t *testing.T) {
config := Config{
Credentials: testFakeCredentialsPath,
Project: "my-gce-project",
Region: "us-central1",
}
@ -21,12 +21,12 @@ func TestConfigLoadAndValidate_accountFilePath(t *testing.T) {
}
func TestConfigLoadAndValidate_accountFileJSON(t *testing.T) {
contents, err := ioutil.ReadFile(testFakeCredentialsPath)
if err != nil {
t.Fatalf("error: %v", err)
}
config := Config{
Credentials: string(contents),
Project: "my-gce-project",
Region: "us-central1",
}
@ -39,7 +39,7 @@ func TestConfigLoadAndValidate_accountFileJSON(t *testing.T) {
func TestConfigLoadAndValidate_accountFileJSONInvalid(t *testing.T) {
config := Config{
Credentials: "{this is not json}",
Project: "my-gce-project",
Region: "us-central1",
}

View File

@ -3,8 +3,8 @@ package google
import (
"encoding/json"
"fmt"

"github.com/hashicorp/terraform/helper/pathorcontents"
"github.com/hashicorp/terraform/helper/schema"
"github.com/hashicorp/terraform/terraform"
)
@ -18,6 +18,14 @@ func Provider() terraform.ResourceProvider {
Optional: true,
DefaultFunc: schema.EnvDefaultFunc("GOOGLE_ACCOUNT_FILE", nil),
ValidateFunc: validateAccountFile,
Deprecated: "Use the credentials field instead",
},
"credentials": &schema.Schema{
Type: schema.TypeString,
Optional: true,
DefaultFunc: schema.EnvDefaultFunc("GOOGLE_CREDENTIALS", nil),
ValidateFunc: validateCredentials,
},
"project": &schema.Schema{
@ -73,8 +81,12 @@ func Provider() terraform.ResourceProvider {
}
func providerConfigure(d *schema.ResourceData) (interface{}, error) {
credentials := d.Get("credentials").(string)
if credentials == "" {
credentials = d.Get("account_file").(string)
}
config := Config{
Credentials: credentials,
Project: d.Get("project").(string),
Region: d.Get("region").(string),
}
@ -97,22 +109,34 @@ func validateAccountFile(v interface{}, k string) (warnings []string, errors []e
return
}

contents, wasPath, err := pathorcontents.Read(value)
if err != nil {
errors = append(errors, fmt.Errorf("Error loading Account File: %s", err))
}
if wasPath {
warnings = append(warnings, `account_file was provided as a path instead of
as file contents. This support will be removed in the future. Please update
your configuration to use ${file("filename.json")} instead.`)
}

var account accountFile
if err := json.Unmarshal([]byte(contents), &account); err != nil {
errors = append(errors,
fmt.Errorf("account_file not valid JSON '%s': %s", contents, err))
}

return
}

func validateCredentials(v interface{}, k string) (warnings []string, errors []error) {
if v == nil || v.(string) == "" {
return
}

creds := v.(string)
var account accountFile
if err := json.Unmarshal([]byte(creds), &account); err != nil {
errors = append(errors,
fmt.Errorf("credentials are not valid JSON '%s': %s", creds, err))
}

return

View File

@ -29,8 +29,8 @@ func TestProvider_impl(t *testing.T) {
}
func testAccPreCheck(t *testing.T) {
if v := os.Getenv("GOOGLE_CREDENTIALS"); v == "" {
t.Fatal("GOOGLE_CREDENTIALS must be set for acceptance tests")
}
if v := os.Getenv("GOOGLE_PROJECT"); v == "" {

View File

@ -38,8 +38,9 @@ func resourceComputeSecGroupV2() *schema.Resource {
ForceNew: false,
},
"rule": &schema.Schema{
Type: schema.TypeSet,
Optional: true,
Computed: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"id": &schema.Schema{
@ -79,6 +80,7 @@ func resourceComputeSecGroupV2() *schema.Resource {
}, },
}, },
}, },
Set: secgroupRuleV2Hash,
}, },
}, },
} }
@@ -129,13 +131,10 @@ func resourceComputeSecGroupV2Read(d *schema.ResourceData, meta interface{}) err
 	d.Set("name", sg.Name)
 	d.Set("description", sg.Description)
-	rtm := rulesToMap(sg.Rules)
-	for _, v := range rtm {
-		if v["group"] == d.Get("name") {
-			v["self"] = "1"
-		} else {
-			v["self"] = "0"
-		}
+	rtm, err := rulesToMap(computeClient, d, sg.Rules)
+	if err != nil {
+		return err
 	}
 	log.Printf("[DEBUG] rulesToMap(sg.Rules): %+v", rtm)
 	d.Set("rule", rtm)
@@ -164,14 +163,11 @@ func resourceComputeSecGroupV2Update(d *schema.ResourceData, meta interface{}) e
 	if d.HasChange("rule") {
 		oldSGRaw, newSGRaw := d.GetChange("rule")
-		oldSGRSlice, newSGRSlice := oldSGRaw.([]interface{}), newSGRaw.([]interface{})
-		oldSGRSet := schema.NewSet(secgroupRuleV2Hash, oldSGRSlice)
-		newSGRSet := schema.NewSet(secgroupRuleV2Hash, newSGRSlice)
+		oldSGRSet, newSGRSet := oldSGRaw.(*schema.Set), newSGRaw.(*schema.Set)
 		secgrouprulesToAdd := newSGRSet.Difference(oldSGRSet)
 		secgrouprulesToRemove := oldSGRSet.Difference(newSGRSet)
 
 		log.Printf("[DEBUG] Security group rules to add: %v", secgrouprulesToAdd)
 		log.Printf("[DEBUG] Security groups rules to remove: %v", secgrouprulesToRemove)
 
 		for _, rawRule := range secgrouprulesToAdd.List() {
@@ -231,67 +227,83 @@ func resourceComputeSecGroupV2Delete(d *schema.ResourceData, meta interface{}) e
 }
 
 func resourceSecGroupRulesV2(d *schema.ResourceData) []secgroups.CreateRuleOpts {
-	rawRules := d.Get("rule").([]interface{})
+	rawRules := d.Get("rule").(*schema.Set).List()
 	createRuleOptsList := make([]secgroups.CreateRuleOpts, len(rawRules))
-	for i, raw := range rawRules {
-		rawMap := raw.(map[string]interface{})
-		groupId := rawMap["from_group_id"].(string)
-		if rawMap["self"].(bool) {
-			groupId = d.Id()
-		}
-		createRuleOptsList[i] = secgroups.CreateRuleOpts{
-			ParentGroupID: d.Id(),
-			FromPort:      rawMap["from_port"].(int),
-			ToPort:        rawMap["to_port"].(int),
-			IPProtocol:    rawMap["ip_protocol"].(string),
-			CIDR:          rawMap["cidr"].(string),
-			FromGroupID:   groupId,
-		}
+	for i, rawRule := range rawRules {
+		createRuleOptsList[i] = resourceSecGroupRuleCreateOptsV2(d, rawRule)
 	}
 	return createRuleOptsList
 }
 
-func resourceSecGroupRuleCreateOptsV2(d *schema.ResourceData, raw interface{}) secgroups.CreateRuleOpts {
-	rawMap := raw.(map[string]interface{})
-	groupId := rawMap["from_group_id"].(string)
-	if rawMap["self"].(bool) {
+func resourceSecGroupRuleCreateOptsV2(d *schema.ResourceData, rawRule interface{}) secgroups.CreateRuleOpts {
+	rawRuleMap := rawRule.(map[string]interface{})
+	groupId := rawRuleMap["from_group_id"].(string)
+	if rawRuleMap["self"].(bool) {
 		groupId = d.Id()
 	}
 	return secgroups.CreateRuleOpts{
 		ParentGroupID: d.Id(),
-		FromPort:      rawMap["from_port"].(int),
-		ToPort:        rawMap["to_port"].(int),
-		IPProtocol:    rawMap["ip_protocol"].(string),
-		CIDR:          rawMap["cidr"].(string),
+		FromPort:      rawRuleMap["from_port"].(int),
+		ToPort:        rawRuleMap["to_port"].(int),
+		IPProtocol:    rawRuleMap["ip_protocol"].(string),
+		CIDR:          rawRuleMap["cidr"].(string),
 		FromGroupID:   groupId,
 	}
 }
 
-func resourceSecGroupRuleV2(d *schema.ResourceData, raw interface{}) secgroups.Rule {
-	rawMap := raw.(map[string]interface{})
+func resourceSecGroupRuleV2(d *schema.ResourceData, rawRule interface{}) secgroups.Rule {
+	rawRuleMap := rawRule.(map[string]interface{})
 	return secgroups.Rule{
-		ID:            rawMap["id"].(string),
+		ID:            rawRuleMap["id"].(string),
 		ParentGroupID: d.Id(),
-		FromPort:      rawMap["from_port"].(int),
-		ToPort:        rawMap["to_port"].(int),
-		IPProtocol:    rawMap["ip_protocol"].(string),
-		IPRange:       secgroups.IPRange{CIDR: rawMap["cidr"].(string)},
+		FromPort:      rawRuleMap["from_port"].(int),
+		ToPort:        rawRuleMap["to_port"].(int),
+		IPProtocol:    rawRuleMap["ip_protocol"].(string),
+		IPRange:       secgroups.IPRange{CIDR: rawRuleMap["cidr"].(string)},
 	}
 }
 
-func rulesToMap(sgrs []secgroups.Rule) []map[string]interface{} {
+func rulesToMap(computeClient *gophercloud.ServiceClient, d *schema.ResourceData, sgrs []secgroups.Rule) ([]map[string]interface{}, error) {
 	sgrMap := make([]map[string]interface{}, len(sgrs))
 	for i, sgr := range sgrs {
+		groupId := ""
+		self := false
+		if sgr.Group.Name != "" {
+			if sgr.Group.Name == d.Get("name").(string) {
+				self = true
+			} else {
+				// Since Nova only returns the secgroup Name (and not the ID) for the group attribute,
+				// we need to look up all security groups and match the name.
+				// Nevermind that Nova wants the ID when setting the Group *and* that multiple groups
+				// with the same name can exist...
+				allPages, err := secgroups.List(computeClient).AllPages()
+				if err != nil {
+					return nil, err
+				}
+				securityGroups, err := secgroups.ExtractSecurityGroups(allPages)
+				if err != nil {
+					return nil, err
+				}
+				for _, sg := range securityGroups {
+					if sg.Name == sgr.Group.Name {
+						groupId = sg.ID
+					}
+				}
+			}
+		}
 		sgrMap[i] = map[string]interface{}{
 			"id":          sgr.ID,
 			"from_port":   sgr.FromPort,
 			"to_port":     sgr.ToPort,
 			"ip_protocol": sgr.IPProtocol,
 			"cidr":        sgr.IPRange.CIDR,
-			"group":       sgr.Group.Name,
+			"self":          self,
+			"from_group_id": groupId,
 		}
 	}
-	return sgrMap
+	return sgrMap, nil
 }
 
 func secgroupRuleV2Hash(v interface{}) int {
@@ -301,6 +313,8 @@ func secgroupRuleV2Hash(v interface{}) int {
 	buf.WriteString(fmt.Sprintf("%d-", m["to_port"].(int)))
 	buf.WriteString(fmt.Sprintf("%s-", m["ip_protocol"].(string)))
 	buf.WriteString(fmt.Sprintf("%s-", m["cidr"].(string)))
+	buf.WriteString(fmt.Sprintf("%s-", m["from_group_id"].(string)))
+	buf.WriteString(fmt.Sprintf("%t-", m["self"].(bool)))
 
 	return hashcode.String(buf.String())
 }
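Because `rule` becomes a `schema.TypeSet`, each rule's set key comes from hashing its attributes. A standalone sketch of that pattern (using `crc32` in place of Terraform's `hashcode` helper, which wraps a similar checksum): serialize the identifying fields in a fixed order, checksum the buffer, and any field added to the buffer starts distinguishing elements.

```go
package main

import (
	"bytes"
	"fmt"
	"hash/crc32"
)

// ruleHash builds a set key for a security-group rule the way the hunk
// above does: write each identifying attribute into a buffer in a fixed
// order, then checksum it. Including from_group_id and self means rules
// that differ only in those fields land in different set slots.
func ruleHash(m map[string]interface{}) int {
	var buf bytes.Buffer
	buf.WriteString(fmt.Sprintf("%d-", m["from_port"].(int)))
	buf.WriteString(fmt.Sprintf("%d-", m["to_port"].(int)))
	buf.WriteString(fmt.Sprintf("%s-", m["ip_protocol"].(string)))
	buf.WriteString(fmt.Sprintf("%s-", m["cidr"].(string)))
	buf.WriteString(fmt.Sprintf("%s-", m["from_group_id"].(string)))
	buf.WriteString(fmt.Sprintf("%t-", m["self"].(bool)))
	return int(crc32.ChecksumIEEE(buf.Bytes()))
}

func main() {
	rule := map[string]interface{}{
		"from_port": 22, "to_port": 22, "ip_protocol": "tcp",
		"cidr": "0.0.0.0/0", "from_group_id": "", "self": false,
	}
	selfRule := map[string]interface{}{
		"from_port": 22, "to_port": 22, "ip_protocol": "tcp",
		"cidr": "0.0.0.0/0", "from_group_id": "", "self": true,
	}
	// The two rules differ only in "self", so their set keys differ.
	fmt.Println(ruleHash(rule) != ruleHash(selfRule))
}
```

This is also why `Update` can now take the old and new values directly as `*schema.Set` and diff them with `Difference`.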


@@ -19,7 +19,7 @@ func TestAccComputeV2SecGroup_basic(t *testing.T) {
 		CheckDestroy: testAccCheckComputeV2SecGroupDestroy,
 		Steps: []resource.TestStep{
 			resource.TestStep{
-				Config: testAccComputeV2SecGroup_basic,
+				Config: testAccComputeV2SecGroup_basic_orig,
 				Check: resource.ComposeTestCheckFunc(
 					testAccCheckComputeV2SecGroupExists(t, "openstack_compute_secgroup_v2.foo", &secgroup),
 				),
@@ -28,6 +28,84 @@ func TestAccComputeV2SecGroup_basic(t *testing.T) {
 	})
 }
func TestAccComputeV2SecGroup_update(t *testing.T) {
var secgroup secgroups.SecurityGroup
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckComputeV2SecGroupDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccComputeV2SecGroup_basic_orig,
Check: resource.ComposeTestCheckFunc(
testAccCheckComputeV2SecGroupExists(t, "openstack_compute_secgroup_v2.foo", &secgroup),
),
},
resource.TestStep{
Config: testAccComputeV2SecGroup_basic_update,
Check: resource.ComposeTestCheckFunc(
testAccCheckComputeV2SecGroupExists(t, "openstack_compute_secgroup_v2.foo", &secgroup),
testAccCheckComputeV2SecGroupRuleCount(t, &secgroup, 2),
),
},
},
})
}
func TestAccComputeV2SecGroup_groupID(t *testing.T) {
var secgroup1, secgroup2, secgroup3 secgroups.SecurityGroup
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckComputeV2SecGroupDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccComputeV2SecGroup_groupID_orig,
Check: resource.ComposeTestCheckFunc(
testAccCheckComputeV2SecGroupExists(t, "openstack_compute_secgroup_v2.test_group_1", &secgroup1),
testAccCheckComputeV2SecGroupExists(t, "openstack_compute_secgroup_v2.test_group_2", &secgroup2),
testAccCheckComputeV2SecGroupExists(t, "openstack_compute_secgroup_v2.test_group_3", &secgroup3),
testAccCheckComputeV2SecGroupGroupIDMatch(t, &secgroup1, &secgroup3),
),
},
resource.TestStep{
Config: testAccComputeV2SecGroup_groupID_update,
Check: resource.ComposeTestCheckFunc(
testAccCheckComputeV2SecGroupExists(t, "openstack_compute_secgroup_v2.test_group_1", &secgroup1),
testAccCheckComputeV2SecGroupExists(t, "openstack_compute_secgroup_v2.test_group_2", &secgroup2),
testAccCheckComputeV2SecGroupExists(t, "openstack_compute_secgroup_v2.test_group_3", &secgroup3),
testAccCheckComputeV2SecGroupGroupIDMatch(t, &secgroup2, &secgroup3),
),
},
},
})
}
func TestAccComputeV2SecGroup_self(t *testing.T) {
var secgroup secgroups.SecurityGroup
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckComputeV2SecGroupDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccComputeV2SecGroup_self,
Check: resource.ComposeTestCheckFunc(
testAccCheckComputeV2SecGroupExists(t, "openstack_compute_secgroup_v2.test_group_1", &secgroup),
testAccCheckComputeV2SecGroupGroupIDMatch(t, &secgroup, &secgroup),
resource.TestCheckResourceAttr(
"openstack_compute_secgroup_v2.test_group_1", "rule.1118853483.self", "true"),
resource.TestCheckResourceAttr(
"openstack_compute_secgroup_v2.test_group_1", "rule.1118853483.from_group_id", ""),
),
},
},
})
}
 func testAccCheckComputeV2SecGroupDestroy(s *terraform.State) error {
 	config := testAccProvider.Meta().(*Config)
 	computeClient, err := config.computeV2Client(OS_REGION_NAME)
@@ -81,10 +159,148 @@ func testAccCheckComputeV2SecGroupExists(t *testing.T, n string, secgroup *secgr
 	}
 }
+func testAccCheckComputeV2SecGroupRuleCount(t *testing.T, secgroup *secgroups.SecurityGroup, count int) resource.TestCheckFunc {
+	return func(s *terraform.State) error {
+		if len(secgroup.Rules) != count {
+			return fmt.Errorf("Security group rule count does not match. Expected %d, got %d", count, len(secgroup.Rules))
+		}
+		return nil
+	}
+}
+
+func testAccCheckComputeV2SecGroupGroupIDMatch(t *testing.T, sg1, sg2 *secgroups.SecurityGroup) resource.TestCheckFunc {
+	return func(s *terraform.State) error {
+		if len(sg2.Rules) == 1 {
+			if sg1.Name != sg2.Rules[0].Group.Name || sg1.TenantID != sg2.Rules[0].Group.TenantID {
+				return fmt.Errorf("%s was not correctly applied to %s", sg1.Name, sg2.Name)
+			}
+		} else {
+			return fmt.Errorf("%s rule count is incorrect", sg2.Name)
+		}
+		return nil
+	}
+}
+
-var testAccComputeV2SecGroup_basic = fmt.Sprintf(`
+var testAccComputeV2SecGroup_basic_orig = fmt.Sprintf(`
 resource "openstack_compute_secgroup_v2" "foo" {
-	region = "%s"
 	name = "test_group_1"
 	description = "first test security group"
-}`,
-	OS_REGION_NAME)
+	rule {
+		from_port = 22
+		to_port = 22
+		ip_protocol = "tcp"
+		cidr = "0.0.0.0/0"
+	}
+	rule {
+		from_port = 1
+		to_port = 65535
+		ip_protocol = "udp"
+		cidr = "0.0.0.0/0"
+	}
+	rule {
+		from_port = -1
+		to_port = -1
+		ip_protocol = "icmp"
+		cidr = "0.0.0.0/0"
+	}
+}`)
var testAccComputeV2SecGroup_basic_update = fmt.Sprintf(`
resource "openstack_compute_secgroup_v2" "foo" {
name = "test_group_1"
description = "first test security group"
rule {
from_port = 2200
to_port = 2200
ip_protocol = "tcp"
cidr = "0.0.0.0/0"
}
rule {
from_port = -1
to_port = -1
ip_protocol = "icmp"
cidr = "0.0.0.0/0"
}
}`)
var testAccComputeV2SecGroup_groupID_orig = fmt.Sprintf(`
resource "openstack_compute_secgroup_v2" "test_group_1" {
name = "test_group_1"
description = "first test security group"
rule {
from_port = 22
to_port = 22
ip_protocol = "tcp"
cidr = "0.0.0.0/0"
}
}
resource "openstack_compute_secgroup_v2" "test_group_2" {
name = "test_group_2"
description = "second test security group"
rule {
from_port = -1
to_port = -1
ip_protocol = "icmp"
cidr = "0.0.0.0/0"
}
}
resource "openstack_compute_secgroup_v2" "test_group_3" {
name = "test_group_3"
description = "third test security group"
rule {
from_port = 80
to_port = 80
ip_protocol = "tcp"
from_group_id = "${openstack_compute_secgroup_v2.test_group_1.id}"
}
}`)
var testAccComputeV2SecGroup_groupID_update = fmt.Sprintf(`
resource "openstack_compute_secgroup_v2" "test_group_1" {
name = "test_group_1"
description = "first test security group"
rule {
from_port = 22
to_port = 22
ip_protocol = "tcp"
cidr = "0.0.0.0/0"
}
}
resource "openstack_compute_secgroup_v2" "test_group_2" {
name = "test_group_2"
description = "second test security group"
rule {
from_port = -1
to_port = -1
ip_protocol = "icmp"
cidr = "0.0.0.0/0"
}
}
resource "openstack_compute_secgroup_v2" "test_group_3" {
name = "test_group_3"
description = "third test security group"
rule {
from_port = 80
to_port = 80
ip_protocol = "tcp"
from_group_id = "${openstack_compute_secgroup_v2.test_group_2.id}"
}
}`)
var testAccComputeV2SecGroup_self = fmt.Sprintf(`
resource "openstack_compute_secgroup_v2" "test_group_1" {
name = "test_group_1"
description = "first test security group"
rule {
from_port = 22
to_port = 22
ip_protocol = "tcp"
self = true
}
}`)


@@ -3,7 +3,6 @@ package openstack
 import (
 	"fmt"
 	"log"
-	"strconv"
 
 	"github.com/hashicorp/terraform/helper/schema"
 	"github.com/rackspace/gophercloud"
@@ -53,16 +52,19 @@ func resourceLBVipV1() *schema.Resource {
 			"tenant_id": &schema.Schema{
 				Type:     schema.TypeString,
 				Optional: true,
+				Computed: true,
 				ForceNew: true,
 			},
 			"address": &schema.Schema{
 				Type:     schema.TypeString,
 				Optional: true,
+				Computed: true,
 				ForceNew: true,
 			},
 			"description": &schema.Schema{
 				Type:     schema.TypeString,
 				Optional: true,
+				Computed: true,
 				ForceNew: false,
 			},
 			"persistence": &schema.Schema{
@@ -73,6 +75,7 @@ func resourceLBVipV1() *schema.Resource {
 			"conn_limit": &schema.Schema{
 				Type:     schema.TypeInt,
 				Optional: true,
+				Computed: true,
 				ForceNew: false,
 			},
 			"port_id": &schema.Schema{
@@ -86,8 +89,9 @@ func resourceLBVipV1() *schema.Resource {
 				ForceNew: false,
 			},
 			"admin_state_up": &schema.Schema{
-				Type:     schema.TypeString,
+				Type:     schema.TypeBool,
 				Optional: true,
+				Computed: true,
 				ForceNew: false,
 			},
 		},
@@ -114,14 +118,8 @@ func resourceLBVipV1Create(d *schema.ResourceData, meta interface{}) error {
 		ConnLimit:    gophercloud.MaybeInt(d.Get("conn_limit").(int)),
 	}
 
-	asuRaw := d.Get("admin_state_up").(string)
-	if asuRaw != "" {
-		asu, err := strconv.ParseBool(asuRaw)
-		if err != nil {
-			return fmt.Errorf("admin_state_up, if provided, must be either 'true' or 'false'")
-		}
-		createOpts.AdminStateUp = &asu
-	}
+	asu := d.Get("admin_state_up").(bool)
+	createOpts.AdminStateUp = &asu
 
 	log.Printf("[DEBUG] Create Options: %#v", createOpts)
 
 	p, err := vips.Create(networkingClient, createOpts).Extract()
@@ -160,40 +158,11 @@ func resourceLBVipV1Read(d *schema.ResourceData, meta interface{}) error {
 	d.Set("port", p.ProtocolPort)
 	d.Set("pool_id", p.PoolID)
 	d.Set("port_id", p.PortID)
-
-	if t, exists := d.GetOk("tenant_id"); exists && t != "" {
-		d.Set("tenant_id", p.TenantID)
-	} else {
-		d.Set("tenant_id", "")
-	}
-
-	if t, exists := d.GetOk("address"); exists && t != "" {
-		d.Set("address", p.Address)
-	} else {
-		d.Set("address", "")
-	}
-
-	if t, exists := d.GetOk("description"); exists && t != "" {
-		d.Set("description", p.Description)
-	} else {
-		d.Set("description", "")
-	}
-
-	if t, exists := d.GetOk("persistence"); exists && t != "" {
-		d.Set("persistence", p.Description)
-	}
-
-	if t, exists := d.GetOk("conn_limit"); exists && t != "" {
-		d.Set("conn_limit", p.ConnLimit)
-	} else {
-		d.Set("conn_limit", "")
-	}
-
-	if t, exists := d.GetOk("admin_state_up"); exists && t != "" {
-		d.Set("admin_state_up", strconv.FormatBool(p.AdminStateUp))
-	} else {
-		d.Set("admin_state_up", "")
-	}
+	d.Set("tenant_id", p.TenantID)
+	d.Set("address", p.Address)
+	d.Set("description", p.Description)
+	d.Set("conn_limit", p.ConnLimit)
+	d.Set("admin_state_up", p.AdminStateUp)
 
 	return nil
 }
@@ -255,15 +224,9 @@ func resourceLBVipV1Update(d *schema.ResourceData, meta interface{}) error {
 		}
 	}
 
 	if d.HasChange("admin_state_up") {
-		asuRaw := d.Get("admin_state_up").(string)
-		if asuRaw != "" {
-			asu, err := strconv.ParseBool(asuRaw)
-			if err != nil {
-				return fmt.Errorf("admin_state_up, if provided, must be either 'true' or 'false'")
-			}
-			updateOpts.AdminStateUp = &asu
-		}
+		asu := d.Get("admin_state_up").(bool)
+		updateOpts.AdminStateUp = &asu
 	}
 
 	log.Printf("[DEBUG] Updating OpenStack LB VIP %s with options: %+v", d.Id(), updateOpts)
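Switching `admin_state_up` from `TypeString` to `TypeBool` drops the `strconv.ParseBool` round-trip, while the options struct keeps a `*bool` so the API can tell "unset" apart from an explicit `false`. A minimal sketch of that pointer-to-bool convention (the `updateOpts` type here is illustrative):

```go
package main

import "fmt"

// updateOpts mimics an API options struct where a nil pointer means
// "leave this field alone" and a non-nil pointer carries the new value.
type updateOpts struct {
	AdminStateUp *bool
}

func main() {
	var opts updateOpts
	fmt.Println(opts.AdminStateUp == nil) // true: field untouched, API won't change it

	asu := false // an explicit "false" from the config is still a change
	opts.AdminStateUp = &asu
	fmt.Println(opts.AdminStateUp == nil, *opts.AdminStateUp) // false false
}
```

Taking the address of a local (`&asu`) rather than of the `Get` result is what makes the explicit-false case representable.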


@@ -116,6 +116,9 @@ var testAccLBV1VIP_basic = fmt.Sprintf(`
 	protocol = "HTTP"
 	port = 80
 	pool_id = "${openstack_lb_pool_v1.pool_1.id}"
+	persistence {
+		type = "SOURCE_IP"
+	}
 }`,
 	OS_REGION_NAME, OS_REGION_NAME, OS_REGION_NAME)
@@ -148,5 +151,8 @@ var testAccLBV1VIP_update = fmt.Sprintf(`
 	protocol = "HTTP"
 	port = 80
 	pool_id = "${openstack_lb_pool_v1.pool_1.id}"
+	persistence {
+		type = "SOURCE_IP"
+	}
 }`,
 	OS_REGION_NAME, OS_REGION_NAME, OS_REGION_NAME)


@@ -148,8 +148,8 @@ func TestAccNetworkingV2Network_fullstack(t *testing.T) {
 	name = "port_1"
 	network_id = "${openstack_networking_network_v2.foo.id}"
 	admin_state_up = "true"
-	security_groups = ["${openstack_compute_secgroup_v2.foo.id}"]
-	fixed_ips {
+	security_group_ids = ["${openstack_compute_secgroup_v2.foo.id}"]
+	fixed_ip {
 		"subnet_id" = "${openstack_networking_subnet_v2.foo.id}"
 		"ip_address" = "192.168.199.23"
 	}


@@ -3,7 +3,6 @@ package openstack
 import (
 	"fmt"
 	"log"
-	"strconv"
 	"time"
 
 	"github.com/hashicorp/terraform/helper/hashcode"
@@ -39,7 +38,7 @@ func resourceNetworkingPortV2() *schema.Resource {
 			ForceNew: true,
 		},
 		"admin_state_up": &schema.Schema{
-			Type:     schema.TypeString,
+			Type:     schema.TypeBool,
 			Optional: true,
 			ForceNew: false,
 			Computed: true,
@@ -62,7 +61,7 @@ func resourceNetworkingPortV2() *schema.Resource {
 			ForceNew: true,
 			Computed: true,
 		},
-		"security_groups": &schema.Schema{
+		"security_group_ids": &schema.Schema{
 			Type:     schema.TypeSet,
 			Optional: true,
 			ForceNew: false,
@@ -78,7 +77,7 @@ func resourceNetworkingPortV2() *schema.Resource {
 			ForceNew: true,
 			Computed: true,
 		},
-		"fixed_ips": &schema.Schema{
+		"fixed_ip": &schema.Schema{
 			Type:     schema.TypeList,
 			Optional: true,
 			ForceNew: false,
@@ -157,14 +156,14 @@ func resourceNetworkingPortV2Read(d *schema.ResourceData, meta interface{}) erro
 	log.Printf("[DEBUG] Retreived Port %s: %+v", d.Id(), p)
 
 	d.Set("name", p.Name)
-	d.Set("admin_state_up", strconv.FormatBool(p.AdminStateUp))
+	d.Set("admin_state_up", p.AdminStateUp)
 	d.Set("network_id", p.NetworkID)
 	d.Set("mac_address", p.MACAddress)
 	d.Set("tenant_id", p.TenantID)
 	d.Set("device_owner", p.DeviceOwner)
-	d.Set("security_groups", p.SecurityGroups)
+	d.Set("security_group_ids", p.SecurityGroups)
 	d.Set("device_id", p.DeviceID)
-	d.Set("fixed_ips", p.FixedIPs)
+	d.Set("fixed_ip", p.FixedIPs)
 
 	return nil
 }
@@ -190,7 +189,7 @@ func resourceNetworkingPortV2Update(d *schema.ResourceData, meta interface{}) er
 		updateOpts.DeviceOwner = d.Get("device_owner").(string)
 	}
 
-	if d.HasChange("security_groups") {
+	if d.HasChange("security_group_ids") {
 		updateOpts.SecurityGroups = resourcePortSecurityGroupsV2(d)
 	}
 
@@ -198,7 +197,7 @@ func resourceNetworkingPortV2Update(d *schema.ResourceData, meta interface{}) er
 		updateOpts.DeviceID = d.Get("device_id").(string)
 	}
 
-	if d.HasChange("fixed_ips") {
+	if d.HasChange("fixed_ip") {
 		updateOpts.FixedIPs = resourcePortFixedIpsV2(d)
 	}
 
@@ -238,7 +237,7 @@ func resourceNetworkingPortV2Delete(d *schema.ResourceData, meta interface{}) er
 }
 
 func resourcePortSecurityGroupsV2(d *schema.ResourceData) []string {
-	rawSecurityGroups := d.Get("security_groups").(*schema.Set)
+	rawSecurityGroups := d.Get("security_group_ids").(*schema.Set)
 	groups := make([]string, rawSecurityGroups.Len())
 	for i, raw := range rawSecurityGroups.List() {
 		groups[i] = raw.(string)
@@ -247,7 +246,7 @@ func resourcePortSecurityGroupsV2(d *schema.ResourceData) []string {
 }
 
 func resourcePortFixedIpsV2(d *schema.ResourceData) []ports.IP {
-	rawIP := d.Get("fixed_ips").([]interface{})
+	rawIP := d.Get("fixed_ip").([]interface{})
 	ip := make([]ports.IP, len(rawIP))
 	for i, raw := range rawIP {
 		rawMap := raw.(map[string]interface{})
@@ -263,7 +262,7 @@ func resourcePortFixedIpsV2(d *schema.ResourceData) []ports.IP {
 func resourcePortAdminStateUpV2(d *schema.ResourceData) *bool {
 	value := false
 
-	if raw, ok := d.GetOk("admin_state_up"); ok && raw == "true" {
+	if raw, ok := d.GetOk("admin_state_up"); ok && raw == true {
 		value = true
 	}


@@ -40,7 +40,7 @@ func TestAccNetworkingV2Port_basic(t *testing.T) {
 	name = "port_1"
 	network_id = "${openstack_networking_network_v2.foo.id}"
 	admin_state_up = "true"
-	fixed_ips {
+	fixed_ip {
 		subnet_id = "${openstack_networking_subnet_v2.foo.id}"
 		ip_address = "192.168.199.23"
 	}


@@ -160,7 +160,7 @@ var testAccNetworkingV2RouterInterface_basic_port = fmt.Sprintf(`
 	name = "port_1"
 	network_id = "${openstack_networking_network_v2.network_1.id}"
 	admin_state_up = "true"
-	fixed_ips {
+	fixed_ip {
 		subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}"
 		ip_address = "192.168.199.1"
 	}


@@ -184,7 +184,7 @@ func resourcePacketDeviceRead(d *schema.ResourceData, meta interface{}) error {
 	d.Set("billing_cycle", device.BillingCycle)
 	d.Set("locked", device.Locked)
 	d.Set("created", device.Created)
-	d.Set("udpated", device.Updated)
+	d.Set("updated", device.Updated)
 
 	tags := make([]string, 0)
 	for _, tag := range device.Tags {
@@ -192,6 +192,8 @@ func resourcePacketDeviceRead(d *schema.ResourceData, meta interface{}) error {
 	}
 	d.Set("tags", tags)
 
+	provisionerAddress := ""
+
 	networks := make([]map[string]interface{}, 0, 1)
 	for _, ip := range device.Network {
 		network := make(map[string]interface{})
@@ -201,9 +203,21 @@ func resourcePacketDeviceRead(d *schema.ResourceData, meta interface{}) error {
 		network["cidr"] = ip.Cidr
 		network["public"] = ip.Public
 		networks = append(networks, network)
+
+		if ip.Family == 4 && ip.Public == true {
+			provisionerAddress = ip.Address
+		}
 	}
 	d.Set("network", networks)
 
+	log.Printf("[DEBUG] Provisioner Address set to %v", provisionerAddress)
+
+	if provisionerAddress != "" {
+		d.SetConnInfo(map[string]string{
+			"type": "ssh",
+			"host": provisionerAddress,
+		})
+	}
+
 	return nil
 }
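The new `SetConnInfo` call needs an address for provisioners to SSH to; the loop above keeps the last public IPv4 it sees (it does not break early). A self-contained sketch of that selection rule, with an illustrative stand-in for the device's network entries:

```go
package main

import "fmt"

// ipAddress is a minimal stand-in for the network entries the provider
// iterates over; the field names here are illustrative.
type ipAddress struct {
	Family  int
	Public  bool
	Address string
}

// provisionerAddress applies the same rule as the hunk above: keep the
// last public IPv4 address encountered, or "" if there is none.
func provisionerAddress(ips []ipAddress) string {
	addr := ""
	for _, ip := range ips {
		if ip.Family == 4 && ip.Public {
			addr = ip.Address
		}
	}
	return addr
}

func main() {
	ips := []ipAddress{
		{Family: 6, Public: true, Address: "2604:1380::1"}, // IPv6: skipped
		{Family: 4, Public: false, Address: "10.0.0.2"},    // private IPv4: skipped
		{Family: 4, Public: true, Address: "147.75.0.10"},  // selected
	}
	fmt.Println(provisionerAddress(ips)) // 147.75.0.10
}
```

Guarding `SetConnInfo` behind `provisionerAddress != ""` means a device with no public IPv4 simply leaves the connection info unset rather than pointing SSH at an empty host.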


@@ -4,7 +4,6 @@ import (
 	"crypto/sha256"
 	"encoding/hex"
 	"fmt"
-	"io/ioutil"
 	"log"
 	"os"
 	"path/filepath"
@@ -12,8 +11,8 @@ import (
 	"github.com/hashicorp/terraform/config"
 	"github.com/hashicorp/terraform/config/lang"
 	"github.com/hashicorp/terraform/config/lang/ast"
+	"github.com/hashicorp/terraform/helper/pathorcontents"
 	"github.com/hashicorp/terraform/helper/schema"
-	"github.com/mitchellh/go-homedir"
 )
 
 func resource() *schema.Resource {
@@ -24,13 +23,23 @@ func resource() *schema.Resource {
 		Read:   Read,
 
 		Schema: map[string]*schema.Schema{
+			"template": &schema.Schema{
+				Type:          schema.TypeString,
+				Optional:      true,
+				Description:   "Contents of the template",
+				ForceNew:      true,
+				ConflictsWith: []string{"filename"},
+			},
 			"filename": &schema.Schema{
 				Type:        schema.TypeString,
-				Required:    true,
+				Optional:    true,
 				Description: "file to read template from",
 				ForceNew:    true,
 				// Make a "best effort" attempt to relativize the file path.
 				StateFunc: func(v interface{}) string {
+					if v == nil || v.(string) == "" {
+						return ""
+					}
 					pwd, err := os.Getwd()
 					if err != nil {
 						return v.(string)
@@ -41,6 +50,8 @@ func resource() *schema.Resource {
 					}
 					return rel
 				},
+				Deprecated:    "Use the 'template' attribute instead.",
+				ConflictsWith: []string{"template"},
 			},
 			"vars": &schema.Schema{
 				Type: schema.TypeMap,
@@ -96,23 +107,21 @@ func Read(d *schema.ResourceData, meta interface{}) error {
 
 type templateRenderError error
 
-var readfile func(string) ([]byte, error) = ioutil.ReadFile // testing hook
-
 func render(d *schema.ResourceData) (string, error) {
+	template := d.Get("template").(string)
 	filename := d.Get("filename").(string)
 	vars := d.Get("vars").(map[string]interface{})
 
-	path, err := homedir.Expand(filename)
+	if template == "" && filename != "" {
+		template = filename
+	}
+
+	contents, _, err := pathorcontents.Read(template)
 	if err != nil {
 		return "", err
 	}
 
-	buf, err := readfile(path)
-	if err != nil {
-		return "", err
-	}
-
-	rendered, err := execute(string(buf), vars)
+	rendered, err := execute(contents, vars)
 	if err != nil {
 		return "", templateRenderError(
 			fmt.Errorf("failed to render %v: %v", filename, err),

@@ -26,15 +26,10 @@ func TestTemplateRendering(t *testing.T) {
 
 	for _, tt := range cases {
 		r.Test(t, r.TestCase{
-			PreCheck: func() {
-				readfile = func(string) ([]byte, error) {
-					return []byte(tt.template), nil
-				}
-			},
 			Providers: testProviders,
 			Steps: []r.TestStep{
 				r.TestStep{
-					Config: testTemplateConfig(tt.vars),
+					Config: testTemplateConfig(tt.template, tt.vars),
 					Check: func(s *terraform.State) error {
 						got := s.RootModule().Outputs["rendered"]
 						if tt.want != got {
@@ -62,14 +57,7 @@ func TestTemplateVariableChange(t *testing.T) {
 	var testSteps []r.TestStep
 	for i, step := range steps {
 		testSteps = append(testSteps, r.TestStep{
-			PreConfig: func(template string) func() {
-				return func() {
-					readfile = func(string) ([]byte, error) {
-						return []byte(template), nil
-					}
-				}
-			}(step.template),
-			Config: testTemplateConfig(step.vars),
+			Config: testTemplateConfig(step.template, step.vars),
 			Check: func(i int, want string) r.TestCheckFunc {
 				return func(s *terraform.State) error {
 					got := s.RootModule().Outputs["rendered"]
@@ -88,14 +76,13 @@ func TestTemplateVariableChange(t *testing.T) {
 	})
 }
 
-func testTemplateConfig(vars string) string {
-	return `
+func testTemplateConfig(template, vars string) string {
+	return fmt.Sprintf(`
 resource "template_file" "t0" {
-	filename = "mock"
-	vars = ` + vars + `
+	template = "%s"
+	vars = %s
 }
 output "rendered" {
 	value = "${template_file.t0.rendered}"
-}
-`
+}`, template, vars)
 }


@@ -8,7 +8,7 @@ import (
 	"io"
 	"log"
 	"os"
-	"path"
+	"path/filepath"
 	"regexp"
 	"strings"
 	"text/template"
@@ -16,6 +16,7 @@ import (
 	"github.com/hashicorp/terraform/communicator"
 	"github.com/hashicorp/terraform/communicator/remote"
+	"github.com/hashicorp/terraform/helper/pathorcontents"
 	"github.com/hashicorp/terraform/terraform"
 	"github.com/mitchellh/go-homedir"
 	"github.com/mitchellh/go-linereader"
@@ -79,18 +80,22 @@ type Provisioner struct {
 	OSType               string   `mapstructure:"os_type"`
 	PreventSudo          bool     `mapstructure:"prevent_sudo"`
 	RunList              []string `mapstructure:"run_list"`
-	SecretKeyPath        string   `mapstructure:"secret_key_path"`
+	SecretKey            string   `mapstructure:"secret_key"`
 	ServerURL            string   `mapstructure:"server_url"`
 	SkipInstall          bool     `mapstructure:"skip_install"`
 	SSLVerifyMode        string   `mapstructure:"ssl_verify_mode"`
 	ValidationClientName string   `mapstructure:"validation_client_name"`
-	ValidationKeyPath    string   `mapstructure:"validation_key_path"`
+	ValidationKey        string   `mapstructure:"validation_key"`
 	Version              string   `mapstructure:"version"`
 
 	installChefClient func(terraform.UIOutput, communicator.Communicator) error
 	createConfigFiles func(terraform.UIOutput, communicator.Communicator) error
 	runChefClient     func(terraform.UIOutput, communicator.Communicator) error
 	useSudo           bool
+
+	// Deprecated Fields
+	SecretKeyPath     string `mapstructure:"secret_key_path"`
+	ValidationKeyPath string `mapstructure:"validation_key_path"`
 }
 
 // ResourceProvisioner represents a generic chef provisioner
@@ -189,8 +194,9 @@ func (r *ResourceProvisioner) Validate(c *terraform.ResourceConfig) (ws []string
if p.ValidationClientName == "" { if p.ValidationClientName == "" {
es = append(es, fmt.Errorf("Key not found: validation_client_name")) es = append(es, fmt.Errorf("Key not found: validation_client_name"))
} }
if p.ValidationKeyPath == "" { if p.ValidationKey == "" && p.ValidationKeyPath == "" {
es = append(es, fmt.Errorf("Key not found: validation_key_path")) es = append(es, fmt.Errorf(
"One of validation_key or the deprecated validation_key_path must be provided"))
} }
if p.UsePolicyfile && p.PolicyName == "" { if p.UsePolicyfile && p.PolicyName == "" {
es = append(es, fmt.Errorf("Policyfile enabled but key not found: policy_name")) es = append(es, fmt.Errorf("Policyfile enabled but key not found: policy_name"))
@ -198,6 +204,14 @@ func (r *ResourceProvisioner) Validate(c *terraform.ResourceConfig) (ws []string
if p.UsePolicyfile && p.PolicyGroup == "" { if p.UsePolicyfile && p.PolicyGroup == "" {
es = append(es, fmt.Errorf("Policyfile enabled but key not found: policy_group")) es = append(es, fmt.Errorf("Policyfile enabled but key not found: policy_group"))
} }
if p.ValidationKeyPath != "" {
ws = append(ws, "validation_key_path is deprecated, please use "+
"validation_key instead and load the key contents via file()")
}
if p.SecretKeyPath != "" {
ws = append(ws, "secret_key_path is deprecated, please use "+
"secret_key instead and load the key contents via file()")
}
return ws, es return ws, es
} }
@ -247,20 +261,12 @@ func (r *ResourceProvisioner) decodeConfig(c *terraform.ResourceConfig) (*Provis
p.OhaiHints[i] = hintPath p.OhaiHints[i] = hintPath
} }
if p.ValidationKeyPath != "" { if p.ValidationKey == "" && p.ValidationKeyPath != "" {
keyPath, err := homedir.Expand(p.ValidationKeyPath) p.ValidationKey = p.ValidationKeyPath
if err != nil {
return nil, fmt.Errorf("Error expanding the validation key path: %v", err)
}
p.ValidationKeyPath = keyPath
} }
if p.SecretKeyPath != "" { if p.SecretKey == "" && p.SecretKeyPath != "" {
keyPath, err := homedir.Expand(p.SecretKeyPath) p.SecretKey = p.SecretKeyPath
if err != nil {
return nil, fmt.Errorf("Error expanding the secret key path: %v", err)
}
p.SecretKeyPath = keyPath
} }
if attrs, ok := c.Config["attributes"]; ok { if attrs, ok := c.Config["attributes"]; ok {
@ -316,7 +322,7 @@ func (p *Provisioner) runChefClientFunc(
chefCmd string, chefCmd string,
confDir string) func(terraform.UIOutput, communicator.Communicator) error { confDir string) func(terraform.UIOutput, communicator.Communicator) error {
return func(o terraform.UIOutput, comm communicator.Communicator) error { return func(o terraform.UIOutput, comm communicator.Communicator) error {
fb := path.Join(confDir, firstBoot) fb := filepath.Join(confDir, firstBoot)
var cmd string var cmd string
// Policyfiles do not support chef environments, so don't pass the `-E` flag. // Policyfiles do not support chef environments, so don't pass the `-E` flag.
@ -331,8 +337,8 @@ func (p *Provisioner) runChefClientFunc(
return fmt.Errorf("Error creating logfile directory %s: %v", logfileDir, err) return fmt.Errorf("Error creating logfile directory %s: %v", logfileDir, err)
} }
logFile := path.Join(logfileDir, p.NodeName) logFile := filepath.Join(logfileDir, p.NodeName)
f, err := os.Create(path.Join(logFile)) f, err := os.Create(filepath.Join(logFile))
if err != nil { if err != nil {
return fmt.Errorf("Error creating logfile %s: %v", logFile, err) return fmt.Errorf("Error creating logfile %s: %v", logFile, err)
} }
@ -348,7 +354,7 @@ func (p *Provisioner) runChefClientFunc(
// Output implementation of terraform.UIOutput interface // Output implementation of terraform.UIOutput interface
func (p *Provisioner) Output(output string) { func (p *Provisioner) Output(output string) {
logFile := path.Join(logfileDir, p.NodeName) logFile := filepath.Join(logfileDir, p.NodeName)
f, err := os.OpenFile(logFile, os.O_APPEND|os.O_WRONLY, 0666) f, err := os.OpenFile(logFile, os.O_APPEND|os.O_WRONLY, 0666)
if err != nil { if err != nil {
log.Printf("Error creating logfile %s: %v", logFile, err) log.Printf("Error creating logfile %s: %v", logFile, err)
@ -376,28 +382,25 @@ func (p *Provisioner) deployConfigFiles(
o terraform.UIOutput, o terraform.UIOutput,
comm communicator.Communicator, comm communicator.Communicator,
confDir string) error { confDir string) error {
// Open the validation key file contents, _, err := pathorcontents.Read(p.ValidationKey)
f, err := os.Open(p.ValidationKeyPath)
if err != nil { if err != nil {
return err return err
} }
defer f.Close() f := strings.NewReader(contents)
// Copy the validation key to the new instance // Copy the validation key to the new instance
if err := comm.Upload(path.Join(confDir, validationKey), f); err != nil { if err := comm.Upload(filepath.Join(confDir, validationKey), f); err != nil {
return fmt.Errorf("Uploading %s failed: %v", validationKey, err) return fmt.Errorf("Uploading %s failed: %v", validationKey, err)
} }
if p.SecretKeyPath != "" { if p.SecretKey != "" {
// Open the secret key file contents, _, err := pathorcontents.Read(p.SecretKey)
s, err := os.Open(p.SecretKeyPath)
if err != nil { if err != nil {
return err return err
} }
defer s.Close() s := strings.NewReader(contents)
// Copy the secret key to the new instance // Copy the secret key to the new instance
if err := comm.Upload(path.Join(confDir, secretKey), s); err != nil { if err := comm.Upload(filepath.Join(confDir, secretKey), s); err != nil {
return fmt.Errorf("Uploading %s failed: %v", secretKey, err) return fmt.Errorf("Uploading %s failed: %v", secretKey, err)
} }
} }
@ -417,7 +420,7 @@ func (p *Provisioner) deployConfigFiles(
} }
// Copy the client config to the new instance // Copy the client config to the new instance
if err := comm.Upload(path.Join(confDir, clienrb), &buf); err != nil { if err := comm.Upload(filepath.Join(confDir, clienrb), &buf); err != nil {
return fmt.Errorf("Uploading %s failed: %v", clienrb, err) return fmt.Errorf("Uploading %s failed: %v", clienrb, err)
} }
@ -446,7 +449,7 @@ func (p *Provisioner) deployConfigFiles(
} }
// Copy the first-boot.json to the new instance // Copy the first-boot.json to the new instance
if err := comm.Upload(path.Join(confDir, firstBoot), bytes.NewReader(d)); err != nil { if err := comm.Upload(filepath.Join(confDir, firstBoot), bytes.NewReader(d)); err != nil {
return fmt.Errorf("Uploading %s failed: %v", firstBoot, err) return fmt.Errorf("Uploading %s failed: %v", firstBoot, err)
} }
@ -466,8 +469,8 @@ func (p *Provisioner) deployOhaiHints(
defer f.Close() defer f.Close()
// Copy the hint to the new instance // Copy the hint to the new instance
if err := comm.Upload(path.Join(hintDir, path.Base(hint)), f); err != nil { if err := comm.Upload(filepath.Join(hintDir, filepath.Base(hint)), f); err != nil {
return fmt.Errorf("Uploading %s failed: %v", path.Base(hint), err) return fmt.Errorf("Uploading %s failed: %v", filepath.Base(hint), err)
} }
} }

@ -22,7 +22,7 @@ func TestResourceProvider_Validate_good(t *testing.T) {
"run_list": []interface{}{"cookbook::recipe"}, "run_list": []interface{}{"cookbook::recipe"},
"server_url": "https://chef.local", "server_url": "https://chef.local",
"validation_client_name": "validator", "validation_client_name": "validator",
"validation_key_path": "validator.pem", "validation_key": "contentsofsomevalidator.pem",
}) })
r := new(ResourceProvisioner) r := new(ResourceProvisioner)
warn, errs := r.Validate(c) warn, errs := r.Validate(c)

@ -76,6 +76,13 @@ type SelfVariable struct {
key string key string
} }
// SimpleVariable is an unprefixed variable, which can show up when users have
// strings they are passing down to resources that use interpolation
// internally. The template_file resource is an example of this.
type SimpleVariable struct {
Key string
}
// A UserVariable is a variable that is referencing a user variable // A UserVariable is a variable that is referencing a user variable
// that is inputted from outside the configuration. This looks like // that is inputted from outside the configuration. This looks like
// "${var.foo}" // "${var.foo}"
@ -97,6 +104,8 @@ func NewInterpolatedVariable(v string) (InterpolatedVariable, error) {
return NewUserVariable(v) return NewUserVariable(v)
} else if strings.HasPrefix(v, "module.") { } else if strings.HasPrefix(v, "module.") {
return NewModuleVariable(v) return NewModuleVariable(v)
} else if !strings.ContainsRune(v, '.') {
return NewSimpleVariable(v)
} else { } else {
return NewResourceVariable(v) return NewResourceVariable(v)
} }
@ -227,6 +236,18 @@ func (v *SelfVariable) GoString() string {
return fmt.Sprintf("*%#v", *v) return fmt.Sprintf("*%#v", *v)
} }
func NewSimpleVariable(key string) (*SimpleVariable, error) {
return &SimpleVariable{key}, nil
}
func (v *SimpleVariable) FullKey() string {
return v.Key
}
func (v *SimpleVariable) GoString() string {
return fmt.Sprintf("*%#v", *v)
}
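The new `SimpleVariable` branch changes the dispatch order in `NewInterpolatedVariable`: known prefixes are checked first, then the absence of a `.` marks a bare key as simple, and only dotted names without a known prefix fall through to resource variables. A standalone sketch of that ordering (other prefixes such as `count.` and `self.` elided for brevity; this is not the actual `terraform/config` code):

```go
package main

import (
	"fmt"
	"strings"
)

// classify mirrors the dispatch order added in the diff above: prefix
// checks first, then the dot test for SimpleVariable, with resource
// variables as the final fallback.
func classify(v string) string {
	switch {
	case strings.HasPrefix(v, "var."):
		return "user"
	case strings.HasPrefix(v, "module."):
		return "module"
	case !strings.ContainsRune(v, '.'):
		return "simple"
	default:
		return "resource"
	}
}

func main() {
	fmt.Println(classify("var.foo"))           // user
	fmt.Println(classify("bar"))               // simple
	fmt.Println(classify("aws_instance.a.id")) // resource
}
```

Ordering matters: if the dot test ran first it would never fire (all prefixed forms contain a dot), and if it ran last, bare keys like `bar` would be misparsed as resource variables, which is exactly the `template_file` passthrough case the comment describes.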
func NewUserVariable(key string) (*UserVariable, error) { func NewUserVariable(key string) (*UserVariable, error) {
name := key[len("var."):] name := key[len("var."):]
elem := "" elem := ""

@ -23,7 +23,7 @@ module APIs
module AWS module AWS
def self.path def self.path
@path ||= Pathname(`go list -f '{{.Dir}}' github.com/awslabs/aws-sdk-go/aws`.chomp).parent @path ||= Pathname(`go list -f '{{.Dir}}' github.com/aws/aws-sdk-go/aws`.chomp).parent
end end
def self.api_json_files def self.api_json_files

@ -1,46 +0,0 @@
package depgraph
import (
"fmt"
"github.com/hashicorp/terraform/digraph"
)
// Dependency is used to create a directed edge between two nouns.
// One noun may depend on another and provide version constraints
// that cannot be violated
type Dependency struct {
Name string
Meta interface{}
Constraints []Constraint
Source *Noun
Target *Noun
}
// Constraint is used by dependencies to allow arbitrary constraints
// between nouns
type Constraint interface {
Satisfied(head, tail *Noun) (bool, error)
}
// Head returns the source, or dependent noun
func (d *Dependency) Head() digraph.Node {
return d.Source
}
// Tail returns the target, or depended upon noun
func (d *Dependency) Tail() digraph.Node {
return d.Target
}
func (d *Dependency) GoString() string {
return fmt.Sprintf(
"*Dependency{Name: %s, Source: %s, Target: %s}",
d.Name,
d.Source.Name,
d.Target.Name)
}
func (d *Dependency) String() string {
return d.Name
}

@ -1,379 +0,0 @@
// The depgraph package is used to create and model a dependency graph
// of nouns. Each noun can represent a service, server, application,
// network switch, etc. Nouns can depend on other nouns, and provide
// versioning constraints. Nouns can also have various meta data that
// may be relevant to their construction or configuration.
package depgraph
import (
"bytes"
"fmt"
"sort"
"strings"
"sync"
"github.com/hashicorp/terraform/digraph"
)
// WalkFunc is the type used for the callback for Walk.
type WalkFunc func(*Noun) error
// Graph is used to represent a dependency graph.
type Graph struct {
Name string
Meta interface{}
Nouns []*Noun
Root *Noun
}
// ValidateError implements the Error interface but provides
// additional information on a validation error.
type ValidateError struct {
// If set, then the graph is missing a single root, on which
// there are no dependencies
MissingRoot bool
// Unreachable are nodes that could not be reached from
// the root noun.
Unreachable []*Noun
// Cycles are groups of strongly connected nodes, which
// form a cycle. This is disallowed.
Cycles [][]*Noun
}
func (v *ValidateError) Error() string {
var msgs []string
if v.MissingRoot {
msgs = append(msgs, "The graph has no single root")
}
for _, n := range v.Unreachable {
msgs = append(msgs, fmt.Sprintf(
"Unreachable node: %s", n.Name))
}
for _, c := range v.Cycles {
cycleNodes := make([]string, len(c))
for i, n := range c {
cycleNodes[i] = n.Name
}
msgs = append(msgs, fmt.Sprintf(
"Cycle: %s", strings.Join(cycleNodes, " -> ")))
}
for i, m := range msgs {
msgs[i] = fmt.Sprintf("* %s", m)
}
return fmt.Sprintf(
"The dependency graph is not valid:\n\n%s",
strings.Join(msgs, "\n"))
}
// ConstraintError is used to return detailed violation
// information from CheckConstraints
type ConstraintError struct {
Violations []*Violation
}
func (c *ConstraintError) Error() string {
return fmt.Sprintf("%d constraint violations", len(c.Violations))
}
// Violation is used to pass along information about
// a constraint violation
type Violation struct {
Source *Noun
Target *Noun
Dependency *Dependency
Constraint Constraint
Err error
}
func (v *Violation) Error() string {
return fmt.Sprintf("Constraint %v between %v and %v violated: %v",
v.Constraint, v.Source, v.Target, v.Err)
}
// CheckConstraints walks the graph and ensures that all
// user imposed constraints are satisfied.
func (g *Graph) CheckConstraints() error {
// Ensure we have a root
if g.Root == nil {
return fmt.Errorf("Graph must be validated before checking constraint violations")
}
// Create a constraint error
cErr := &ConstraintError{}
// Walk from the root
digraph.DepthFirstWalk(g.Root, func(n digraph.Node) bool {
noun := n.(*Noun)
for _, dep := range noun.Deps {
target := dep.Target
for _, constraint := range dep.Constraints {
ok, err := constraint.Satisfied(noun, target)
if ok {
continue
}
violation := &Violation{
Source: noun,
Target: target,
Dependency: dep,
Constraint: constraint,
Err: err,
}
cErr.Violations = append(cErr.Violations, violation)
}
}
return true
})
if cErr.Violations != nil {
return cErr
}
return nil
}
// Noun returns the noun with the given name, or nil if it cannot be found.
func (g *Graph) Noun(name string) *Noun {
for _, n := range g.Nouns {
if n.Name == name {
return n
}
}
return nil
}
// String generates a little ASCII string of the graph, useful in
// debugging output.
func (g *Graph) String() string {
var buf bytes.Buffer
// Alphabetize the output based on the noun name
keys := make([]string, 0, len(g.Nouns))
mapping := make(map[string]*Noun)
for _, n := range g.Nouns {
mapping[n.Name] = n
keys = append(keys, n.Name)
}
sort.Strings(keys)
if g.Root != nil {
buf.WriteString(fmt.Sprintf("root: %s\n", g.Root.Name))
} else {
buf.WriteString("root: <unknown>\n")
}
for _, k := range keys {
n := mapping[k]
buf.WriteString(fmt.Sprintf("%s\n", n.Name))
// Alphabetize the dependency names
depKeys := make([]string, 0, len(n.Deps))
depMapping := make(map[string]*Dependency)
for _, d := range n.Deps {
depMapping[d.Target.Name] = d
depKeys = append(depKeys, d.Target.Name)
}
sort.Strings(depKeys)
for _, k := range depKeys {
dep := depMapping[k]
buf.WriteString(fmt.Sprintf(
" %s -> %s\n",
dep.Source,
dep.Target))
}
}
return buf.String()
}
// Validate is used to ensure that a few properties of the graph are not violated:
// 1) There must be a single "root", or source on which nothing depends.
// 2) All nouns in the graph must be reachable from the root
// 3) The graph must be cycle free, meaning there are no circular dependencies
func (g *Graph) Validate() error {
// Convert to node list
nodes := make([]digraph.Node, len(g.Nouns))
for i, n := range g.Nouns {
nodes[i] = n
}
// Create a validate error
vErr := &ValidateError{}
// Search for all the sources, if we have only 1, it must be the root
if sources := digraph.Sources(nodes); len(sources) != 1 {
vErr.MissingRoot = true
goto CHECK_CYCLES
} else {
g.Root = sources[0].(*Noun)
}
// Check reachability
if unreached := digraph.Unreachable(g.Root, nodes); len(unreached) > 0 {
vErr.Unreachable = make([]*Noun, len(unreached))
for i, u := range unreached {
vErr.Unreachable[i] = u.(*Noun)
}
}
CHECK_CYCLES:
// Check for cycles
if cycles := digraph.StronglyConnectedComponents(nodes, true); len(cycles) > 0 {
vErr.Cycles = make([][]*Noun, len(cycles))
for i, cycle := range cycles {
group := make([]*Noun, len(cycle))
for j, n := range cycle {
group[j] = n.(*Noun)
}
vErr.Cycles[i] = group
}
}
// Check for loops to yourself
for _, n := range g.Nouns {
for _, d := range n.Deps {
if d.Source == d.Target {
vErr.Cycles = append(vErr.Cycles, []*Noun{n})
}
}
}
// Return the detailed error
if vErr.MissingRoot || vErr.Unreachable != nil || vErr.Cycles != nil {
return vErr
}
return nil
}
// Walk will walk the tree depth-first (dependency first) and call
// the callback.
//
// The callbacks will be called in parallel, so if you need non-parallelism,
// then introduce a lock in your callback.
func (g *Graph) Walk(fn WalkFunc) error {
// Set so we don't callback for a single noun multiple times
var seenMapL sync.RWMutex
seenMap := make(map[*Noun]chan struct{})
seenMap[g.Root] = make(chan struct{})
// Keep track of what nodes errored.
var errMapL sync.RWMutex
errMap := make(map[*Noun]struct{})
// Build the list of things to visit
tovisit := make([]*Noun, 1, len(g.Nouns))
tovisit[0] = g.Root
// Spawn off all our goroutines to walk the tree
errCh := make(chan error)
for len(tovisit) > 0 {
// Grab the current thing to use
n := len(tovisit)
current := tovisit[n-1]
tovisit = tovisit[:n-1]
// Go through each dependency and run that first
for _, dep := range current.Deps {
if _, ok := seenMap[dep.Target]; !ok {
seenMapL.Lock()
seenMap[dep.Target] = make(chan struct{})
seenMapL.Unlock()
tovisit = append(tovisit, dep.Target)
}
}
// Spawn off a goroutine to execute our callback once
// all our dependencies are satisfied.
go func(current *Noun) {
seenMapL.RLock()
closeCh := seenMap[current]
seenMapL.RUnlock()
defer close(closeCh)
// Wait for all our dependencies
for _, dep := range current.Deps {
seenMapL.RLock()
ch := seenMap[dep.Target]
seenMapL.RUnlock()
// Wait for the dep to be run
<-ch
// Check if any dependencies errored. If so,
// then return right away, we won't walk it.
errMapL.RLock()
_, errOk := errMap[dep.Target]
errMapL.RUnlock()
if errOk {
return
}
}
// Call our callback!
if err := fn(current); err != nil {
errMapL.Lock()
errMap[current] = struct{}{}
errMapL.Unlock()
errCh <- err
}
}(current)
}
// Aggregate channel that is closed when all goroutines finish
doneCh := make(chan struct{})
go func() {
defer close(doneCh)
for _, ch := range seenMap {
<-ch
}
}()
// Wait for finish OR an error
select {
case <-doneCh:
return nil
case err := <-errCh:
// Drain the error channel
go func() {
for _ = range errCh {
// Nothing
}
}()
// Wait for the goroutines to end
<-doneCh
close(errCh)
return err
}
}
// DependsOn returns the set of nouns that have a
// dependency on a given noun. This can be used to find
// the incoming edges to a noun.
func (g *Graph) DependsOn(n *Noun) []*Noun {
var incoming []*Noun
OUTER:
for _, other := range g.Nouns {
if other == n {
continue
}
for _, d := range other.Deps {
if d.Target == n {
incoming = append(incoming, other)
continue OUTER
}
}
}
return incoming
}

@ -1,467 +0,0 @@
package depgraph
import (
"fmt"
"reflect"
"sort"
"strings"
"sync"
"testing"
)
// ParseNouns is used to parse a string in the format of:
// a -> b ; edge name
// b -> c
// Into a series of nouns and dependencies
func ParseNouns(s string) map[string]*Noun {
lines := strings.Split(s, "\n")
nodes := make(map[string]*Noun)
for _, line := range lines {
var edgeName string
if idx := strings.Index(line, ";"); idx >= 0 {
edgeName = strings.Trim(line[idx+1:], " \t\r\n")
line = line[:idx]
}
parts := strings.SplitN(line, "->", 2)
if len(parts) != 2 {
continue
}
head_name := strings.Trim(parts[0], " \t\r\n")
tail_name := strings.Trim(parts[1], " \t\r\n")
head := nodes[head_name]
if head == nil {
head = &Noun{Name: head_name}
nodes[head_name] = head
}
tail := nodes[tail_name]
if tail == nil {
tail = &Noun{Name: tail_name}
nodes[tail_name] = tail
}
edge := &Dependency{
Name: edgeName,
Source: head,
Target: tail,
}
head.Deps = append(head.Deps, edge)
}
return nodes
}
func NounMapToList(m map[string]*Noun) []*Noun {
list := make([]*Noun, 0, len(m))
for _, n := range m {
list = append(list, n)
}
return list
}
func TestGraph_Noun(t *testing.T) {
nodes := ParseNouns(`a -> b
a -> c
b -> d
b -> e
c -> d
c -> e`)
g := &Graph{
Name: "Test",
Nouns: NounMapToList(nodes),
}
n := g.Noun("a")
if n == nil {
t.Fatal("should not be nil")
}
if n.Name != "a" {
t.Fatalf("bad: %#v", n)
}
}
func TestGraph_String(t *testing.T) {
nodes := ParseNouns(`a -> b
a -> c
b -> d
b -> e
c -> d
c -> e`)
g := &Graph{
Name: "Test",
Nouns: NounMapToList(nodes),
Root: nodes["a"],
}
actual := g.String()
expected := `
root: a
a
a -> b
a -> c
b
b -> d
b -> e
c
c -> d
c -> e
d
e
`
actual = strings.TrimSpace(actual)
expected = strings.TrimSpace(expected)
if actual != expected {
t.Fatalf("bad:\n%s\n!=\n%s", actual, expected)
}
}
func TestGraph_Validate(t *testing.T) {
nodes := ParseNouns(`a -> b
a -> c
b -> d
b -> e
c -> d
c -> e`)
list := NounMapToList(nodes)
g := &Graph{Name: "Test", Nouns: list}
if err := g.Validate(); err != nil {
t.Fatalf("err: %v", err)
}
}
func TestGraph_Validate_Cycle(t *testing.T) {
nodes := ParseNouns(`a -> b
a -> c
b -> d
d -> b`)
list := NounMapToList(nodes)
g := &Graph{Name: "Test", Nouns: list}
err := g.Validate()
if err == nil {
t.Fatalf("expected err")
}
vErr, ok := err.(*ValidateError)
if !ok {
t.Fatalf("expected validate error")
}
if len(vErr.Cycles) != 1 {
t.Fatalf("expected cycles")
}
cycle := vErr.Cycles[0]
cycleNodes := make([]string, len(cycle))
for i, c := range cycle {
cycleNodes[i] = c.Name
}
sort.Strings(cycleNodes)
if cycleNodes[0] != "b" {
t.Fatalf("bad: %v", cycle)
}
if cycleNodes[1] != "d" {
t.Fatalf("bad: %v", cycle)
}
}
func TestGraph_Validate_MultiRoot(t *testing.T) {
nodes := ParseNouns(`a -> b
c -> d`)
list := NounMapToList(nodes)
g := &Graph{Name: "Test", Nouns: list}
err := g.Validate()
if err == nil {
t.Fatalf("expected err")
}
vErr, ok := err.(*ValidateError)
if !ok {
t.Fatalf("expected validate error")
}
if !vErr.MissingRoot {
t.Fatalf("expected missing root")
}
}
func TestGraph_Validate_NoRoot(t *testing.T) {
nodes := ParseNouns(`a -> b
b -> a`)
list := NounMapToList(nodes)
g := &Graph{Name: "Test", Nouns: list}
err := g.Validate()
if err == nil {
t.Fatalf("expected err")
}
vErr, ok := err.(*ValidateError)
if !ok {
t.Fatalf("expected validate error")
}
if !vErr.MissingRoot {
t.Fatalf("expected missing root")
}
}
func TestGraph_Validate_Unreachable(t *testing.T) {
nodes := ParseNouns(`a -> b
a -> c
b -> d
x -> x`)
list := NounMapToList(nodes)
g := &Graph{Name: "Test", Nouns: list}
err := g.Validate()
if err == nil {
t.Fatalf("expected err")
}
vErr, ok := err.(*ValidateError)
if !ok {
t.Fatalf("expected validate error")
}
if len(vErr.Unreachable) != 1 {
t.Fatalf("expected unreachable")
}
if vErr.Unreachable[0].Name != "x" {
t.Fatalf("bad: %v", vErr.Unreachable[0])
}
}
type VersionMeta int
type VersionConstraint struct {
Min int
Max int
}
func (v *VersionConstraint) Satisfied(head, tail *Noun) (bool, error) {
vers := int(tail.Meta.(VersionMeta))
if vers < v.Min {
return false, fmt.Errorf("version %d below minimum %d",
vers, v.Min)
} else if vers > v.Max {
return false, fmt.Errorf("version %d above maximum %d",
vers, v.Max)
}
return true, nil
}
func (v *VersionConstraint) String() string {
return "version"
}
func TestGraph_ConstraintViolation(t *testing.T) {
nodes := ParseNouns(`a -> b
a -> c
b -> d
b -> e
c -> d
c -> e`)
list := NounMapToList(nodes)
// Add a version constraint
vers := &VersionConstraint{1, 3}
// Introduce some constraints
depB := nodes["a"].Deps[0]
depB.Constraints = []Constraint{vers}
depC := nodes["a"].Deps[1]
depC.Constraints = []Constraint{vers}
// Add some versions
nodes["b"].Meta = VersionMeta(0)
nodes["c"].Meta = VersionMeta(4)
g := &Graph{Name: "Test", Nouns: list}
err := g.Validate()
if err != nil {
t.Fatalf("err: %v", err)
}
err = g.CheckConstraints()
if err == nil {
t.Fatalf("Expected err")
}
cErr, ok := err.(*ConstraintError)
if !ok {
t.Fatalf("expected constraint error")
}
if len(cErr.Violations) != 2 {
t.Fatalf("expected 2 violations: %v", cErr)
}
if cErr.Violations[0].Error() != "Constraint version between a and b violated: version 0 below minimum 1" {
t.Fatalf("err: %v", cErr.Violations[0])
}
if cErr.Violations[1].Error() != "Constraint version between a and c violated: version 4 above maximum 3" {
t.Fatalf("err: %v", cErr.Violations[1])
}
}
func TestGraph_Constraint_NoViolation(t *testing.T) {
nodes := ParseNouns(`a -> b
a -> c
b -> d
b -> e
c -> d
c -> e`)
list := NounMapToList(nodes)
// Add a version constraint
vers := &VersionConstraint{1, 3}
// Introduce some constraints
depB := nodes["a"].Deps[0]
depB.Constraints = []Constraint{vers}
depC := nodes["a"].Deps[1]
depC.Constraints = []Constraint{vers}
// Add some versions
nodes["b"].Meta = VersionMeta(2)
nodes["c"].Meta = VersionMeta(3)
g := &Graph{Name: "Test", Nouns: list}
err := g.Validate()
if err != nil {
t.Fatalf("err: %v", err)
}
err = g.CheckConstraints()
if err != nil {
t.Fatalf("err: %v", err)
}
}
func TestGraphWalk(t *testing.T) {
nodes := ParseNouns(`a -> b
a -> c
b -> d
b -> e
c -> d
c -> e`)
list := NounMapToList(nodes)
g := &Graph{Name: "Test", Nouns: list}
if err := g.Validate(); err != nil {
t.Fatalf("err: %s", err)
}
var namesLock sync.Mutex
names := make([]string, 0, 0)
err := g.Walk(func(n *Noun) error {
namesLock.Lock()
defer namesLock.Unlock()
names = append(names, n.Name)
return nil
})
if err != nil {
t.Fatalf("err: %s", err)
}
expected := [][]string{
{"e", "d", "c", "b", "a"},
{"e", "d", "b", "c", "a"},
{"d", "e", "c", "b", "a"},
{"d", "e", "b", "c", "a"},
}
found := false
for _, expect := range expected {
if reflect.DeepEqual(expect, names) {
found = true
break
}
}
if !found {
t.Fatalf("bad: %#v", names)
}
}
func TestGraphWalk_error(t *testing.T) {
nodes := ParseNouns(`a -> b
b -> c
a -> d
a -> e
e -> f
f -> g
g -> h`)
list := NounMapToList(nodes)
g := &Graph{Name: "Test", Nouns: list}
if err := g.Validate(); err != nil {
t.Fatalf("err: %s", err)
}
// We repeat this a lot because sometimes timing causes
// a false positive.
for i := 0; i < 100; i++ {
var lock sync.Mutex
var walked []string
err := g.Walk(func(n *Noun) error {
lock.Lock()
defer lock.Unlock()
walked = append(walked, n.Name)
if n.Name == "b" {
return fmt.Errorf("foo")
}
return nil
})
if err == nil {
t.Fatal("should error")
}
sort.Strings(walked)
expected := []string{"b", "c", "d", "e", "f", "g", "h"}
if !reflect.DeepEqual(walked, expected) {
t.Fatalf("bad: %#v", walked)
}
}
}
func TestGraph_DependsOn(t *testing.T) {
nodes := ParseNouns(`a -> b
a -> c
b -> d
b -> e
c -> d
c -> e`)
g := &Graph{
Name: "Test",
Nouns: NounMapToList(nodes),
}
dNoun := g.Noun("d")
incoming := g.DependsOn(dNoun)
if len(incoming) != 2 {
t.Fatalf("bad: %#v", incoming)
}
var hasB, hasC bool
for _, in := range incoming {
switch in.Name {
case "b":
hasB = true
case "c":
hasC = true
default:
t.Fatalf("Bad: %#v", in)
}
}
if !hasB || !hasC {
t.Fatalf("missing incoming edge")
}
}

@ -1,33 +0,0 @@
package depgraph
import (
"fmt"
"github.com/hashicorp/terraform/digraph"
)
// Nouns are the key structure of the dependency graph. They can
// be used to represent all objects in the graph. They are linked
// by dependencies.
type Noun struct {
Name string // Opaque name
Meta interface{}
Deps []*Dependency
}
// Edges returns the out-going edges of a Noun
func (n *Noun) Edges() []digraph.Edge {
edges := make([]digraph.Edge, len(n.Deps))
for idx, dep := range n.Deps {
edges[idx] = dep
}
return edges
}
func (n *Noun) GoString() string {
return fmt.Sprintf("*%#v", *n)
}
func (n *Noun) String() string {
return n.Name
}

helper/mutexkv/mutexkv.go Normal file
@ -0,0 +1,51 @@
package mutexkv
import (
"log"
"sync"
)
// MutexKV is a simple key/value store for arbitrary mutexes. It can be used to
// serialize changes across arbitrary collaborators that share knowledge of the
// keys they must serialize on.
//
// The initial use case is to let aws_security_group_rule resources serialize
// their access to individual security groups based on SG ID.
type MutexKV struct {
lock sync.Mutex
store map[string]*sync.Mutex
}
// Locks the mutex for the given key. Caller is responsible for calling Unlock
// for the same key
func (m *MutexKV) Lock(key string) {
log.Printf("[DEBUG] Locking %q", key)
m.get(key).Lock()
log.Printf("[DEBUG] Locked %q", key)
}
// Unlock the mutex for the given key. Caller must have called Lock for the same key first
func (m *MutexKV) Unlock(key string) {
log.Printf("[DEBUG] Unlocking %q", key)
m.get(key).Unlock()
log.Printf("[DEBUG] Unlocked %q", key)
}
// Returns a mutex for the given key, no guarantee of its lock status
func (m *MutexKV) get(key string) *sync.Mutex {
m.lock.Lock()
defer m.lock.Unlock()
mutex, ok := m.store[key]
if !ok {
mutex = &sync.Mutex{}
m.store[key] = mutex
}
return mutex
}
// Returns a properly initialized MutexKV
func NewMutexKV() *MutexKV {
return &MutexKV{
store: make(map[string]*sync.Mutex),
}
}

@ -0,0 +1,67 @@
package mutexkv
import (
"testing"
"time"
)
func TestMutexKVLock(t *testing.T) {
mkv := NewMutexKV()
mkv.Lock("foo")
doneCh := make(chan struct{})
go func() {
mkv.Lock("foo")
close(doneCh)
}()
select {
case <-doneCh:
t.Fatal("Second lock was able to be taken. This shouldn't happen.")
case <-time.After(50 * time.Millisecond):
// pass
}
}
func TestMutexKVUnlock(t *testing.T) {
mkv := NewMutexKV()
mkv.Lock("foo")
mkv.Unlock("foo")
doneCh := make(chan struct{})
go func() {
mkv.Lock("foo")
close(doneCh)
}()
select {
case <-doneCh:
// pass
case <-time.After(50 * time.Millisecond):
t.Fatal("Second lock blocked after unlock. This shouldn't happen.")
}
}
func TestMutexKVDifferentKeys(t *testing.T) {
mkv := NewMutexKV()
mkv.Lock("foo")
doneCh := make(chan struct{})
go func() {
mkv.Lock("bar")
close(doneCh)
}()
select {
case <-doneCh:
// pass
case <-time.After(50 * time.Millisecond):
t.Fatal("Second lock on a different key blocked. This shouldn't happen.")
}
}

@ -1,14 +0,0 @@
package url
import (
"net/url"
)
// Parse parses rawURL into a URL structure.
// The rawURL may be relative or absolute.
//
// Parse is a wrapper for the Go stdlib net/url Parse function, but returns
// Windows "safe" URLs on Windows platforms.
func Parse(rawURL string) (*url.URL, error) {
return parse(rawURL)
}

View File

@ -1,88 +0,0 @@
package url
import (
"runtime"
"testing"
)
type parseTest struct {
rawURL string
scheme string
host string
path string
str string
err bool
}
var parseTests = []parseTest{
{
rawURL: "/foo/bar",
scheme: "",
host: "",
path: "/foo/bar",
str: "/foo/bar",
err: false,
},
{
rawURL: "file:///dir/",
scheme: "file",
host: "",
path: "/dir/",
str: "file:///dir/",
err: false,
},
}
var winParseTests = []parseTest{
{
rawURL: `C:\`,
scheme: ``,
host: ``,
path: `C:/`,
str: `C:/`,
err: false,
},
{
rawURL: `file://C:\`,
scheme: `file`,
host: ``,
path: `C:/`,
str: `file://C:/`,
err: false,
},
{
rawURL: `file:///C:\`,
scheme: `file`,
host: ``,
path: `C:/`,
str: `file://C:/`,
err: false,
},
}
func TestParse(t *testing.T) {
if runtime.GOOS == "windows" {
parseTests = append(parseTests, winParseTests...)
}
for i, pt := range parseTests {
url, err := Parse(pt.rawURL)
if err != nil && !pt.err {
t.Errorf("test %d: unexpected error: %s", i, err)
}
if err == nil && pt.err {
t.Errorf("test %d: expected an error", i)
}
if url.Scheme != pt.scheme {
t.Errorf("test %d: expected Scheme = %q, got %q", i, pt.scheme, url.Scheme)
}
if url.Host != pt.host {
t.Errorf("test %d: expected Host = %q, got %q", i, pt.host, url.Host)
}
if url.Path != pt.path {
t.Errorf("test %d: expected Path = %q, got %q", i, pt.path, url.Path)
}
if url.String() != pt.str {
t.Errorf("test %d: expected url.String() = %q, got %q", i, pt.str, url.String())
}
}
}

View File

@ -1,11 +0,0 @@
// +build !windows
package url
import (
"net/url"
)
func parse(rawURL string) (*url.URL, error) {
return url.Parse(rawURL)
}

View File

@ -1,40 +0,0 @@
package url
import (
"fmt"
"net/url"
"path/filepath"
"strings"
)
func parse(rawURL string) (*url.URL, error) {
// Make sure we're using "/" since URLs are "/"-based.
rawURL = filepath.ToSlash(rawURL)
u, err := url.Parse(rawURL)
if err != nil {
return nil, err
}
if len(rawURL) > 1 && rawURL[1] == ':' {
// Assume we're dealing with a drive letter file path where the drive
// letter has been parsed into the URL Scheme, and the rest of the path
// has been parsed into the URL Path without the leading ':' character.
u.Path = fmt.Sprintf("%s:%s", string(rawURL[0]), u.Path)
u.Scheme = ""
}
if len(u.Host) > 1 && u.Host[1] == ':' && strings.HasPrefix(rawURL, "file://") {
// Assume we're dealing with a drive letter file path where the drive
// letter has been parsed into the URL Host.
u.Path = fmt.Sprintf("%s%s", u.Host, u.Path)
u.Host = ""
}
// Remove leading slash for absolute file paths.
if len(u.Path) > 2 && u.Path[0] == '/' && u.Path[2] == ':' {
u.Path = u.Path[1:]
}
return u, err
}

View File

@ -1627,6 +1627,53 @@ STATE:
} }
} }
func TestContext2Plan_targetedOrphan(t *testing.T) {
m := testModule(t, "plan-targeted-orphan")
p := testProvider("aws")
p.DiffFn = testDiffFn
ctx := testContext2(t, &ContextOpts{
Module: m,
Providers: map[string]ResourceProviderFactory{
"aws": testProviderFuncFixed(p),
},
State: &State{
Modules: []*ModuleState{
&ModuleState{
Path: rootModulePath,
Resources: map[string]*ResourceState{
"aws_instance.orphan": &ResourceState{
Type: "aws_instance",
Primary: &InstanceState{
ID: "i-789xyz",
},
},
},
},
},
},
Destroy: true,
Targets: []string{"aws_instance.orphan"},
})
plan, err := ctx.Plan()
if err != nil {
t.Fatalf("err: %s", err)
}
actual := strings.TrimSpace(plan.String())
expected := strings.TrimSpace(`DIFF:
DESTROY: aws_instance.orphan
STATE:
aws_instance.orphan:
ID = i-789xyz`)
if actual != expected {
t.Fatalf("expected:\n%s\n\ngot:\n%s", expected, actual)
}
}
func TestContext2Plan_provider(t *testing.T) { func TestContext2Plan_provider(t *testing.T) {
m := testModule(t, "plan-provider") m := testModule(t, "plan-provider")
p := testProvider("aws") p := testProvider("aws")

View File

@ -107,7 +107,6 @@ func (b *BuiltinGraphBuilder) Steps(path []string) []GraphTransformer {
&OrphanTransformer{ &OrphanTransformer{
State: b.State, State: b.State,
Module: b.Root, Module: b.Root,
Targeting: len(b.Targets) > 0,
}, },
// Output-related transformations // Output-related transformations

View File

@ -165,7 +165,7 @@ func (n *GraphNodeConfigResource) DynamicExpand(ctx EvalContext) (*Graph, error)
steps = append(steps, &OrphanTransformer{ steps = append(steps, &OrphanTransformer{
State: state, State: state,
View: n.Resource.Id(), View: n.Resource.Id(),
Targeting: len(n.Targets) > 0, Targets: n.Targets,
}) })
steps = append(steps, &DeposedTransformer{ steps = append(steps, &DeposedTransformer{

View File

@ -73,6 +73,8 @@ func (i *Interpolater) Values(
err = i.valueResourceVar(scope, n, v, result) err = i.valueResourceVar(scope, n, v, result)
case *config.SelfVariable: case *config.SelfVariable:
err = i.valueSelfVar(scope, n, v, result) err = i.valueSelfVar(scope, n, v, result)
case *config.SimpleVariable:
err = i.valueSimpleVar(scope, n, v, result)
case *config.UserVariable: case *config.UserVariable:
err = i.valueUserVar(scope, n, v, result) err = i.valueUserVar(scope, n, v, result)
default: default:
@ -249,6 +251,19 @@ func (i *Interpolater) valueSelfVar(
return i.valueResourceVar(scope, n, rv, result) return i.valueResourceVar(scope, n, rv, result)
} }
func (i *Interpolater) valueSimpleVar(
scope *InterpolationScope,
n string,
v *config.SimpleVariable,
result map[string]ast.Variable) error {
// SimpleVars are never handled by Terraform's interpolator
result[n] = ast.Variable{
Value: config.UnknownVariableValue,
Type: ast.TypeString,
}
return nil
}
func (i *Interpolater) valueUserVar( func (i *Interpolater) valueUserVar(
scope *InterpolationScope, scope *InterpolationScope,
n string, n string,

View File

@ -0,0 +1,6 @@
# This resource was previously "created" and the fixture represents
# it being destroyed subsequently
/*resource "aws_instance" "orphan" {*/
/*foo = "bar"*/
/*}*/

View File

@ -2,7 +2,7 @@ package terraform
import ( import (
"fmt" "fmt"
"log" "strings"
"github.com/hashicorp/terraform/config" "github.com/hashicorp/terraform/config"
"github.com/hashicorp/terraform/config/module" "github.com/hashicorp/terraform/config/module"
@ -29,7 +29,7 @@ type OrphanTransformer struct {
// Targets are user-specified resources to target. We need to be aware of // Targets are user-specified resources to target. We need to be aware of
// these so we don't improperly identify orphans when they've just been // these so we don't improperly identify orphans when they've just been
// filtered out of the graph via targeting. // filtered out of the graph via targeting.
Targeting bool Targets []ResourceAddress
// View, if non-nil will set a view on the module state. // View, if non-nil will set a view on the module state.
View string View string
@ -41,13 +41,6 @@ func (t *OrphanTransformer) Transform(g *Graph) error {
return nil return nil
} }
if t.Targeting {
log.Printf("Skipping orphan transformer because we have targets.")
// If we are in a run where we are targeting nodes, we won't process
// orphans for this run.
return nil
}
// Build up all our state representatives // Build up all our state representatives
resourceRep := make(map[string]struct{}) resourceRep := make(map[string]struct{})
for _, v := range g.Vertices() { for _, v := range g.Vertices() {
@ -74,8 +67,24 @@ func (t *OrphanTransformer) Transform(g *Graph) error {
state = state.View(t.View) state = state.View(t.View)
} }
// Go over each resource orphan and add it to the graph.
resourceOrphans := state.Orphans(config) resourceOrphans := state.Orphans(config)
if len(t.Targets) > 0 {
var targetedOrphans []string
for _, o := range resourceOrphans {
targeted := false
for _, t := range t.Targets {
prefix := fmt.Sprintf("%s.%s.%d", t.Type, t.Name, t.Index)
if strings.HasPrefix(o, prefix) {
targeted = true
}
}
if targeted {
targetedOrphans = append(targetedOrphans, o)
}
}
resourceOrphans = targetedOrphans
}
resourceVertexes = make([]dag.Vertex, len(resourceOrphans)) resourceVertexes = make([]dag.Vertex, len(resourceOrphans))
for i, k := range resourceOrphans { for i, k := range resourceOrphans {
// If this orphan is represented by some other node somehow, // If this orphan is represented by some other node somehow,
@ -173,6 +182,10 @@ type graphNodeOrphanResource struct {
dependentOn []string dependentOn []string
} }
func (n *graphNodeOrphanResource) ResourceAddress() *ResourceAddress {
return n.ResourceAddress()
}
func (n *graphNodeOrphanResource) DependableName() []string { func (n *graphNodeOrphanResource) DependableName() []string {
return []string{n.dependableName()} return []string{n.dependableName()}
} }

View File

@ -186,6 +186,3 @@ PLATFORMS
DEPENDENCIES DEPENDENCIES
middleman-hashicorp! middleman-hashicorp!
BUNDLED WITH
1.10.6


View File

@ -21,6 +21,12 @@ var Init = {
if (this.Pages[id]) { if (this.Pages[id]) {
this.Pages[id](); this.Pages[id]();
} }
//always init sidebar
Init.initializeSidebar();
},
initializeSidebar: function(){
new Sidebar();
}, },
generateAnimatedLogo: function(){ generateAnimatedLogo: function(){

View File

@ -0,0 +1,50 @@
(function(){
Sidebar = Base.extend({
$body: null,
$overlay: null,
$sidebar: null,
$sidebarHeader: null,
$sidebarImg: null,
$toggleButton: null,
constructor: function(){
this.$body = $('body');
this.$overlay = $('.sidebar-overlay');
this.$sidebar = $('#sidebar');
this.$sidebarHeader = $('#sidebar .sidebar-header');
this.$toggleButton = $('.navbar-toggle');
this.sidebarImg = this.$sidebarHeader.css('background-image');
this.addEventListeners();
},
addEventListeners: function(){
var _this = this;
_this.$toggleButton.on('click', function() {
_this.$sidebar.toggleClass('open');
if ((_this.$sidebar.hasClass('sidebar-fixed-left') || _this.$sidebar.hasClass('sidebar-fixed-right')) && _this.$sidebar.hasClass('open')) {
_this.$overlay.addClass('active');
_this.$body.css('overflow', 'hidden');
} else {
_this.$overlay.removeClass('active');
_this.$body.css('overflow', 'auto');
}
return false;
});
_this.$overlay.on('click', function() {
$(this).removeClass('active');
_this.$body.css('overflow', 'auto');
_this.$sidebar.removeClass('open');
});
}
});
window.Sidebar = Sidebar;
})();

View File

@ -21,4 +21,5 @@
//= require app/Engine.Shape //= require app/Engine.Shape
//= require app/Engine.Shape.Puller //= require app/Engine.Shape.Puller
//= require app/Engine.Typewriter //= require app/Engine.Typewriter
//= require app/Sidebar
//= require app/Init //= require app/Init

View File

@ -25,7 +25,7 @@ var Init = {
resizeImage: function(){ resizeImage: function(){
var header = document.getElementById('header'), var header = document.getElementById('header'),
footer = document.getElementById('footer-wrap'), footer = document.getElementById('footer'),
main = document.getElementById('main-content'), main = document.getElementById('main-content'),
vp = window.innerHeight, vp = window.innerHeight,
bodyHeight = document.body.clientHeight, bodyHeight = document.body.clientHeight,
@ -33,7 +33,7 @@ var Init = {
fHeight = footer.clientHeight, fHeight = footer.clientHeight,
withMinHeight = hHeight + fHeight + 830; withMinHeight = hHeight + fHeight + 830;
if(withMinHeight > bodyHeight ){ if(withMinHeight < vp && bodyHeight < vp){
var newHeight = (vp - (hHeight+fHeight)) + 'px'; var newHeight = (vp - (hHeight+fHeight)) + 'px';
main.style.height = newHeight; main.style.height = newHeight;
} }

View File

@ -2,6 +2,7 @@
// Typography // Typography
// -------------------------------------------------- // --------------------------------------------------
//light //light
.rls-l{ .rls-l{
font-family: $font-family-lato; font-family: $font-family-lato;

View File

@ -1,210 +1,88 @@
body.page-sub{
#footer-wrap{
background-color: white;
padding: 0 0 50px 0;
}
body.page-home{
#footer{ #footer{
margin-top: -40px; padding: 40px 0;
margin-top: 0;
} }
} }
#footer{ #footer{
padding: 140px 0 40px; background-color: white;
color: black; padding: 150px 0 80px;
margin-top: -40px;
a{ &.white{
color: black; background-color: $black;
.footer-links{
li > a {
@include project-footer-a-subpage-style();
}
}
} }
.footer-links{ .footer-links{
margin-bottom: 20px; li > a {
@include project-footer-a-style();
.li-under a:hover::after, }
.li-under a:focus::after {
opacity: 1;
-webkit-transform: skewY(15deg) translateY(8px);
-moz-transform: skewY(15deg) translateY(8px);
transform: skewY(15deg) translateY(8px);
} }
.li-under a::after { .hashicorp-project{
background-color: $purple; margin-top: 24px;
} }
li{ .pull-right{
padding-right: 15px;
}
}
.edit-page-link{
position: absolute;
top: -70px;
right: 30px;;
a{ a{
text-transform: uppercase; text-transform: uppercase;
font-size: 12px; color: $black;
letter-spacing: 3px; font-size: 13px;
@include transition( color 0.3s ease );
font-weight: 400;
&:hover{
color: $purple;
@include transition( color 0.3s ease );
background-color: transparent;
}
}
}
}
.buttons.navbar-nav{
float: none;
display: inline-block;
margin-bottom: 30px;
margin-top: 0px;
li{
&.first{
margin-right: 12px;
}
&.download{
a{
background: image-url('icon-download-purple.png') 8px 6px no-repeat;
@include img-retina("icon-download-purple.png", "icon-download-purple@2x.png", 20px, 20px);
}
}
&.github{
a{
background: image-url('icon-github-purple.png') 8px 6px no-repeat;
@include img-retina("icon-github-purple.png", "icon-github-purple@2x.png", 20px, 20px);
}
}
}
li > a {
padding-top: 6px;
padding-bottom: 6px;
padding-left: 40px;
}
}
.footer-hashi{
float: right;
padding-top: 5px;
letter-spacing: 2px;
a{
color: black;
font-weight: $font-weight-lato-xb;
}
span{
margin-right: 10px;
}
.hashi-logo{
display: inline-block;
vertical-align: middle;
i{
display: inline-block;
width: 37px;
height: 40px;
background: image-url('footer-hashicorp-logo.png') 0 0 no-repeat;
@include img-retina("footer-hashicorp-logo.png", "footer-hashicorp-logo@2x.png", 37px, 40px);
}
}
}
}
.page-sub{
#footer-wrap{
padding: 0;
}
#footer{
padding: 140px 0 100px;
background-color: $black;
transform: none;
>.container{
transform: none;
}
a{
color: white;
}
.footer-hashi{
color: white;
.hashi-logo{
i{
background: image-url('footer-hashicorp-white-logo.png') 0 0 no-repeat;
@include img-retina("footer-hashicorp-white-logo.png", "footer-hashicorp-white-logo@2x.png", 37px, 40px);
}
}
}
}
}
@media (min-width: 1500px) {
body.page-home{
#footer{
margin-top: -60px;
padding: 190px 0 40px;
}
} }
} }
@media (max-width: 992px) { @media (max-width: 992px) {
.page-sub #footer, #footer{ .footer-links {
.footer-hashi { display: block;
padding-top: 14px;
span{
margin-right: 6px;
}
.hashi-logo{
i{
margin-top: -6px;
width: 20px;
height: 22px;
background-size: 20px 22px;
}
}
}
}
}
@media (max-width: 768px) {
#footer{
padding: 100px 0 40px;
text-align: center; text-align: center;
.footer-links{ ul{
float: none; display: inline-block;;
display: inline-block; float: none !important;
} }
.footer-hashi { .footer-hashi{
float: none; display: block;
display: inline-block;
.pull-right{
float: none !important; float: none !important;
} }
} }
}
} }
@media (max-width: 320px) { @media (max-width: 414px) {
#footer{ #footer{
text-align: center; ul{
display: block;
li{
display: block;
float: none;
}
.footer-links{ &.external-links{
.li-under{ li{
float: none !important; svg{
position: relative;
left: 0;
top: 2px;
margin-top: 0;
margin-right: 4px;
}
}
} }
} }
} }
} }

View File

@ -1,382 +1,87 @@
// //
// Header // Header
// - Project Specific
// - edits should be made here
// -------------------------------------------------- // --------------------------------------------------
body.page-sub{ body.page-sub{
.terra-btn{
background-color: rgba(130, 47, 247, 1);
}
#header{ #header{
height: 90px;
background-color: $purple; background-color: $purple;
.navbar-collapse{
background-color: rgba(255, 255, 255, 0.98);
}
.nav-logo{
height: 90px;
}
.nav-white{
height: 90px;
background-color: white;
}
.main-links.navbar-nav{
float: left !important;
li > a {
color: black;
@include transition( color 0.3s ease );
}
}
.buttons.nav > li > a, .buttons.nav > li > a {
//background-color: lighten($purple, 1%);
@include transition( background-color 0.3s ease );
}
.buttons.nav > li > a:hover, .buttons.nav > li > a:focus {
background-color: black;
@include transition( background-color 0.3s ease );
}
.main-links.nav > li > a:hover, .main-links.nav > li > a:focus {
color: $purple;
@include transition( color 0.3s ease );
}
} }
} }
#header { #header {
position: relative;
color: $white;
text-rendering: optimizeLegibility;
margin-bottom: 0;
&.navbar-static-top{
height:70px;
-webkit-transform:translate3d(0,0,0);
-moz-transform:translate3d(0,0,0);
-ms-transform:translate3d(0,0,0);
-o-transform:translate3d(0,0,0);
transform:translate3d(0,0,0);
z-index: 1000;
}
a{
color: $white;
}
.navbar-toggle{
margin-top: 26px;
margin-bottom: 14px;
margin-right: 0;
border: 2px solid $white;
border-radius: 0;
.icon-bar{
border: 1px solid $white;
border-radius: 0;
}
}
.navbar-brand { .navbar-brand {
&.logo{ .logo{
margin-top: 15px; font-size: 20px;
padding: 5px 0 0 68px; text-transform: uppercase;
height: 56px;
line-height: 56px;
font-size: 24px;
@include lato-light(); @include lato-light();
text-transform: uppercase; background: image-url('../images/logo-header.png') 0 0 no-repeat;
background: image-url('consul-header-logo.png') 0 0 no-repeat; @include img-retina("../images/logo-header.png", "../images/logo-header@2x.png", $project-logo-width, $project-logo-height);
@include img-retina("header-logo.png", "header-logo@2x.png", 50px, 56px); background-position: 0 45%;
-webkit-font-smoothing: default;
&:hover{
opacity: .6;
} }
} }
.navbar-nav{ .by-hashicorp{
-webkit-font-smoothing: antialiased; &:hover{
li{ svg{
position: relative; line{
opacity: .4;
> a {
font-size: 12px;
text-transform: uppercase;
letter-spacing: 3px;
padding-left: 22px;
@include transition( color 0.3s ease );
} }
&.first{
>a{
padding-left: 15px;
} }
} }
} }
} }
.nav > li > a:hover, .nav > li > a:focus { .buttons{
background-color: transparent; margin-top: 2px; //baseline everything
color: lighten($purple, 15%);
@include transition( color 0.3s ease ); ul.navbar-nav{
} li {
// &:hover{
.main-links.navbar-nav{ // svg path{
margin-top: 28px; // fill: $purple;
// }
li + li{ // }
padding-left: 6px;
} svg path{
fill: $white;
li + li::before { }
content: ""; }
position: absolute; }
left: 0;
top: 7px;
width: 1px;
height: 12px;
background-color: $purple;
@include skewY(24deg);
padding-right: 0;
} }
.main-links,
.external-links {
li > a { li > a {
//border-bottom: 2px solid rgba(255, 255, 255, .2); @include project-a-style();
line-height: 26px;
margin: 0 8px;
padding: 0 0 0 4px;
}
}
.buttons.navbar-nav{
margin-top: 25px;
margin-left: 30px;
li{
&.first{
margin-right: 13px;
}
&.download{
a{
padding-left: 30px;
background: image-url("header-download-icon.png") 12px 8px no-repeat;
@include img-retina("header-download-icon.png", "header-download-icon@2x.png", 12px, 13px);
}
}
&.github{
a{
background: image-url("header-github-icon.png") 12px 7px no-repeat;
@include img-retina("header-github-icon.png", "header-github-icon@2x.png", 12px, 13px);
}
}
}
li > a {
color: white;
padding-top: 4px;
padding-bottom: 4px;
padding-left: 32px;
padding-right: 12px;
letter-spacing: 0.05em;
} }
} }
} }
@media (min-width: 1200px) { @media (max-width: 414px) {
#header{
.main-links.navbar-nav{
margin-top: 28px;
li + li{
padding-left: 6px;
}
li + li::before {
content: "";
position: absolute;
left: 0;
top: 9px;
width: 6px;
height: 8px;
background-color: $purple;
@include skewY(24deg);
padding-right: 8px;
}
li > a {
//border-bottom: 2px solid rgba(255, 255, 255, .2);
line-height: 26px;
margin: 0 12px;
padding: 0 0 0 4px;
}
}
}
}
@media (min-width: 992px) {
.collapse{
margin-top: 8px;
}
//homepage has more space at this width to accommodate chevrons
.page-home{
#header{
.main-links.navbar-nav{
li + li{
padding-left: 6px;
}
li + li::before {
content: "";
position: absolute;
left: 0;
top: 9px;
width: 6px;
height: 8px;
background-color: $purple;
@include skewY(24deg);
padding-right: 8px;
}
}
}
}
}
@media (min-width: 768px) and (max-width: 992px) {
body.page-home{
.nav-logo{
width: 30%;
}
.nav-white{
margin-top: 8px;
width: 70%;
}
.buttons.navbar-nav{
li{
> a{
padding-right: 4px !important;
text-indent: -9999px;
white-space: nowrap;
}
}
}
}
}
@media (max-width: 992px) {
#header { #header {
.navbar-brand { .navbar-brand {
&.logo{ .logo{
span{ padding-left: 37px;
width: 120px; font-size: 18px;
height: 39px; @include img-retina("../images/logo-header.png", "../images/logo-header@2x.png", $project-logo-width * .75, $project-logo-height * .75);
margin-top: 12px; //background-position: 0 45%;
background-size: 120px 39px;
}
}
}
}
}
@media (max-width: 768px) {
body.page-sub{
#header{
.nav-white{
background-color: transparent;
}
}
}
#header{
.buttons.navbar-nav{
float: none !important;
margin: 0;
padding-bottom: 0 !important;
li{
&.first{
margin-right: 0;
}
}
}
}
//#footer,
#header{
.buttons.navbar-nav,
.main-links.navbar-nav{
display: block;
padding-bottom: 15px;
li{
display: block;
float: none;
margin-top: 15px;
}
.li-under a::after,
li + li::before {
display: none;
}
}
}
//#footer,
#header{
.main-links.navbar-nav{
float: left !important;
li > a {
padding: 0;
padding-left: 0;
line-height: 22px;
} }
} }
} }
} }
@media (max-width: 763px) {
.navbar-static-top {
.nav-white {
background-color:rgba(0,0,0,0.5);
}
}
}
@media (max-width: 320px) { @media (max-width: 320px) {
#header {
#header{
.navbar-brand { .navbar-brand {
&.logo{ .logo{
padding:0 0 0 54px !important; font-size: 0 !important; //hide terraform text
font-size: 20px !important;
line-height:42px !important;
margin-top: 23px !important ;
@include img-retina("../images/header-logo.png", "../images/header-logo@2x.png", 39px, 44px);
} }
} }
} }
#feature-auto{
.terminal-text{
line-height: 48px !important;
font-size: 20px !important;
}
}
} }

View File

@ -0,0 +1,23 @@
//
// Sidebar
// - Project Specific
// - Make sidebar edits here
// --------------------------------------------------
.sidebar {
.sidebar-nav {
// Links
//----------------
li {
a {
color: $black;
svg{
path{
fill: $black;
}
}
}
}
}
}

View File

@ -2,27 +2,11 @@
// Utility classes // Utility classes
// -------------------------------------------------- // --------------------------------------------------
//
// -------------------------
@mixin anti-alias() { @mixin anti-alias() {
text-rendering: optimizeLegibility; text-rendering: optimizeLegibility;
-webkit-font-smoothing: antialiased; -webkit-font-smoothing: antialiased;
} }
@mixin consul-gradient-bg() {
background: #694a9c; /* Old browsers */
background: -moz-linear-gradient(left, #694a9c 0%, #cd2028 100%); /* FF3.6+ */
background: -webkit-gradient(linear, left top, right top, color-stop(0%,#694a9c), color-stop(100%,#cd2028)); /* Chrome,Safari4+ */
background: -webkit-linear-gradient(left, #694a9c 0%,#cd2028 100%); /* Chrome10+,Safari5.1+ */
background: -o-linear-gradient(left, #694a9c 0%,#cd2028 100%); /* Opera 11.10+ */
background: -ms-linear-gradient(left, #694a9c 0%,#cd2028 100%); /* IE10+ */
background: linear-gradient(to right, #694a9c 0%,#cd2028 100%); /* W3C */
filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#694a9c', endColorstr='#cd2028',GradientType=1 ); /* IE6-9 */
}
@mixin lato-light() { @mixin lato-light() {
font-family: $font-family-lato; font-family: $font-family-lato;
font-weight: 300; font-weight: 300;

View File

@ -1,13 +1,12 @@
@import 'bootstrap-sprockets'; @import 'bootstrap-sprockets';
@import 'bootstrap'; @import 'bootstrap';
@import url("//fonts.googleapis.com/css?family=Lato:300,400,700"); @import url("//fonts.googleapis.com/css?family=Lato:300,400,700|Open+Sans:300,400,600");
// Core variables and mixins // Core variables and mixins
@import '_variables'; @import '_variables';
@import '_mixins';
// Utility classes // Utility
@import '_utilities'; @import '_utilities';
// Core CSS // Core CSS
@ -16,11 +15,18 @@
//Global Site //Global Site
@import '_global'; @import '_global';
// Hashicorp Shared Project Styles
@import 'hashicorp-shared/_project-utility';
@import 'hashicorp-shared/_hashicorp-utility';
@import 'hashicorp-shared/_hashicorp-header';
@import 'hashicorp-shared/_hashicorp-sidebar';
// Components // Components
@import '_header'; @import '_header';
@import '_footer'; @import '_footer';
@import '_jumbotron'; @import '_jumbotron';
@import '_buttons'; @import '_buttons';
@import '_sidebar';
// Pages // Pages
@import '_home'; @import '_home';

View File

@ -0,0 +1,343 @@
//
// Hashicorp header
// - Shared throughout projects
// - Edits should not be made here
// --------------------------------------------------
#header{
position: relative;
margin-bottom: 0;
}
.navigation {
color: black;
text-rendering: optimizeLegibility;
transition: all 1s ease;
&.white{
.navbar-brand {
.logo {
color: white;
}
}
.main-links,
.external-links {
li > a {
&:hover{
opacity: 1;
}
}
}
}
&.black{
.navbar-brand {
.logo {
color: black;
}
}
.main-links,
.external-links {
li > a {
color: black;
}
}
}
.navbar-toggle{
height: $header-height;
margin: 0;
border-radius: 0;
.icon-bar{
border: 1px solid $black;
border-radius: 0;
}
}
.external-links {
&.white{
svg path{
fill: $white;
}
}
li {
position: relative;
svg path{
@include transition( all 300ms ease-in );
}
&:hover{
svg path{
@include transition( all 300ms ease-in );
}
}
@include project-svg-external-links-style();
&.download{
margin-right: 10px;
}
> a {
padding-left: 12px !important;
svg{
position: absolute;
left: -12px;
top: 50%;
margin-top: -7px;
width: 14px;
height: 14px;
}
}
}
}
.main-links{
margin-right: $nav-margin-right * 2;
}
.main-links,
.external-links {
&.white{
li > a {
color: white;
}
}
li > a {
@include hashi-a-style();
margin: 0 10px;
padding-top: 1px;
line-height: $header-height;
@include project-a-style();
}
}
.nav > li > a:hover, .nav > li > a:focus {
background-color: transparent;
@include transition( all 300ms ease-in );
}
}
.navbar-brand {
display: block;
height: $header-height;
padding: 0;
margin: 0 10px 0 0;
.logo{
display: inline-block;
height: $header-height;
vertical-align:top;
padding: 0;
line-height: $header-height;
padding-left: $project-logo-width + $project-logo-pad-left;
background-position: 0 center;
@include transition(all 300ms ease-in);
&:hover{
@include transition(all 300ms ease-in);
text-decoration: none;
}
}
}
.navbar-toggle{
&.white{
.icon-bar{
border: 1px solid white;
}
}
}
.by-hashicorp{
display: inline-block;
vertical-align:top;
height: $header-height;
margin-left: 3px;
padding-top: 2px;
color: black;
line-height: $header-height;
font-family: $header-font-family;
font-weight: 600;
font-size: 0;
text-decoration: none;
&.white{
color: white;
font-weight: 300;
svg{
path,
polygon{
fill: white;
}
line{
stroke: white;
}
}
&:focus,
&:hover{
text-decoration: none;
color: white;
}
}
&:focus,
&:hover{
text-decoration: none;
}
.svg-wrap{
font-size: 13px;
}
svg{
&.svg-by{
width: $by-hashicorp-width;
height: $by-hashicorp-height;
margin-bottom: -4px;
margin-left: 4px;
}
&.svg-logo{
width: 16px;
height: 16px;
margin-bottom: -3px;
margin-left: 4px;
}
path,
polygon{
fill: black;
@include transition(all 300ms ease-in);
&:hover{
@include transition(all 300ms ease-in);
}
}
line{
stroke: black;
@include transition(all 300ms ease-in);
&:hover{
@include transition(all 300ms ease-in);
}
}
}
}
.hashicorp-project{
display: inline-block;
height: 30px;
line-height: 30px;
text-decoration: none;
font-size: 14px;
color: $black;
font-weight: 600;
&.white{
color: white;
svg{
path,
polygon{
fill: white;
}
line{
stroke: white;
}
}
}
&:focus{
text-decoration: none;
}
&:hover{
text-decoration: none;
svg{
&.svg-by{
line{
stroke: $purple;
}
}
}
}
span{
margin-right: 4px;
font-family: $header-font-family;
font-weight: 500;
}
span,
svg{
display: inline-block;
}
svg{
&.svg-by{
width: $by-hashicorp-width;
height: $by-hashicorp-height;
margin-bottom: -4px;
margin-left: -3px;
}
&.svg-logo{
width: 30px;
height: 30px;
margin-bottom: -10px;
margin-left: -1px;
}
path,
line{
fill: $black;
@include transition(all 300ms ease-in);
&:hover{
@include transition(all 300ms ease-in);
}
}
}
}
@media (max-width: 480px) {
.navigation {
.main-links{
margin-right: 0;
}
}
}
@media (max-width: 414px) {
#header {
.navbar-toggle{
padding-top: 10px;
height: $header-mobile-height;
}
.navbar-brand {
height: $header-mobile-height;
.logo{
height: $header-mobile-height;
line-height: $header-mobile-height;
}
.by-hashicorp{
height: $header-mobile-height;
line-height: $header-mobile-height;
padding-top: 0;
}
}
.main-links,
.external-links {
li > a {
line-height: $header-mobile-height;
}
}
}
}

View File

@ -0,0 +1,293 @@
//
// Hashicorp Sidebar
// - Shared throughout projects
// - Edits should not be made here
// --------------------------------------------------
// Base variables
// --------------------------------------------------
$screen-tablet: 768px;
$gray-darker: #212121; // #212121 - text
$gray-secondary: #757575; // #757575 - secondary text, icons
$gray: #bdbdbd; // #bdbdbd - hint text
$gray-light: #e0e0e0; // #e0e0e0 - divider
$gray-lighter: #f5f5f5; // #f5f5f5 - background
$link-color: $gray-darker;
$link-bg: transparent;
$link-hover-color: $gray-lighter;
$link-hover-bg: $gray-lighter;
$link-active-color: $gray-darker;
$link-active-bg: $gray-light;
$link-disabled-color: $gray-light;
$link-disabled-bg: transparent;
/* -- Sidebar style ------------------------------- */
// Sidebar variables
// --------------------------------------------------
$zindex-sidebar-fixed: 1035;
$sidebar-desktop-width: 280px;
$sidebar-width: 240px;
$sidebar-padding: 16px;
$sidebar-divider: $sidebar-padding/2;
$sidebar-icon-width: 40px;
$sidebar-icon-height: 20px;
@mixin sidebar-nav-base {
text-align: center;
&:last-child{
border-bottom: none;
}
li > a {
background-color: $link-bg;
}
li:hover > a {
background-color: $link-hover-bg;
}
li:focus > a, li > a:focus {
background-color: $link-bg;
}
> .open > a {
&,
&:hover,
&:focus {
background-color: $link-hover-bg;
}
}
> .active > a {
&,
&:hover,
&:focus {
background-color: $link-active-bg;
}
}
> .disabled > a {
&,
&:hover,
&:focus {
background-color: $link-disabled-bg;
}
}
// Dropdown menu items
> .dropdown {
// Remove background color from open dropdown
> .dropdown-menu {
background-color: $link-hover-bg;
> li > a {
&:focus {
background-color: $link-hover-bg;
}
&:hover {
background-color: $link-hover-bg;
}
}
> .active > a {
&,
&:hover,
&:focus {
color: $link-active-color;
background-color: $link-active-bg;
}
}
}
}
}
//
// Sidebar
// --------------------------------------------------
// Sidebar Elements
//
// Basic style of sidebar elements
.sidebar {
position: relative;
display: block;
min-height: 100%;
overflow-y: auto;
overflow-x: hidden;
border: none;
@include transition(all 0.5s cubic-bezier(0.55, 0, 0.1, 1));
@include clearfix();
background-color: $white;
ul{
padding-left: 0;
list-style-type: none;
}
.sidebar-divider, .divider {
width: 80%;
height: 1px;
margin: 8px auto;
background-color: lighten($gray, 20%);
}
// Sidebar heading
//----------------
.sidebar-header {
position: relative;
margin-bottom: $sidebar-padding;
@include transition(all .2s ease-in-out);
}
.sidebar-image {
padding-top: 24px;
img {
display: block;
margin: 0 auto;
}
}
// Sidebar icons
//----------------
.sidebar-icon {
display: inline-block;
height: $sidebar-icon-height;
margin-right: $sidebar-divider;
text-align: left;
font-size: $sidebar-icon-height;
vertical-align: middle;
&:before, &:after {
vertical-align: middle;
}
}
.sidebar-nav {
margin: 0;
padding: 0;
@include sidebar-nav-base();
// Links
//----------------
li {
position: relative;
list-style-type: none;
text-align: center;
a {
position: relative;
cursor: pointer;
user-select: none;
@include hashi-a-style-core();
svg{
top: 2px;
width: 14px;
height: 14px;
margin-bottom: -2px;
margin-right: 4px;
}
}
}
}
}
// Sidebar toggling
//
// Hide sidebar
.sidebar {
width: 0;
@include translate3d(-$sidebar-desktop-width, 0, 0);
&.open {
min-width: $sidebar-desktop-width;
width: $sidebar-desktop-width;
@include translate3d(0, 0, 0);
}
}
// Sidebar positions: fix the left/right sidebars
.sidebar-fixed-left,
.sidebar-fixed-right,
.sidebar-stacked {
position: fixed;
top: 0;
bottom: 0;
z-index: $zindex-sidebar-fixed;
}
.sidebar-stacked {
left: 0;
}
.sidebar-fixed-left {
left: 0;
box-shadow: 2px 0px 25px rgba(0,0,0,0.15);
-webkit-box-shadow: 2px 0px 25px rgba(0,0,0,0.15);
}
.sidebar-fixed-right {
right: 0;
box-shadow: 0px 2px 25px rgba(0,0,0,0.15);
-webkit-box-shadow: 0px 2px 25px rgba(0,0,0,0.15);
@include translate3d($sidebar-desktop-width, 0, 0);
&.open {
@include translate3d(0, 0, 0);
}
.icon-material-sidebar-arrow:before {
content: "\e614"; // icon-material-arrow-forward
}
}
// Sidebar size
//
// Change size of sidebar and sidebar elements on small screens
@media (max-width: $screen-tablet) {
.sidebar.open {
min-width: $sidebar-width;
width: $sidebar-width;
}
.sidebar .sidebar-header {
//height: $sidebar-width * 9/16; // 16:9 header dimension
}
.sidebar .sidebar-image {
/* img {
width: $sidebar-width/4 - $sidebar-padding;
height: $sidebar-width/4 - $sidebar-padding;
} */
}
}
.sidebar-overlay {
visibility: hidden;
position: fixed;
top: 0;
left: 0;
right: 0;
bottom: 0;
opacity: 0;
background: $white;
z-index: $zindex-sidebar-fixed - 1;
  -webkit-transition: visibility 0s linear .4s, opacity .4s cubic-bezier(.4,0,.2,1);
  -moz-transition: visibility 0s linear .4s, opacity .4s cubic-bezier(.4,0,.2,1);
  transition: visibility 0s linear .4s, opacity .4s cubic-bezier(.4,0,.2,1);
-webkit-transform: translateZ(0);
-moz-transform: translateZ(0);
-ms-transform: translateZ(0);
-o-transform: translateZ(0);
transform: translateZ(0);
}
.sidebar-overlay.active {
opacity: 0.3;
visibility: visible;
  -webkit-transition-delay: 0s;
  -moz-transition-delay: 0s;
  transition-delay: 0s;
}
@@ -0,0 +1,87 @@
//
// Hashicorp Nav (header/footer) Utility Vars and Mixins
//
// Notes:
// - Include this in Application.scss before header and feature-footer
// - Open Sans Google (Semibold - 600) font needs to be included if not already
// --------------------------------------------------
// Variables
$font-family-open-sans: 'Open Sans', 'Helvetica Neue', Helvetica, Arial, sans-serif;
$header-font-family: $font-family-open-sans;
$header-font-weight: 600; // semi-bold
$header-height: 74px;
$header-mobile-height: 60px;
$by-hashicorp-width: 74px;
$by-hashicorp-height: 16px;
$nav-margin-right: 12px;
// Mixins
@mixin hashi-a-style-core{
font-family: $header-font-family;
font-weight: $header-font-weight;
font-size: 14px;
//letter-spacing: 0.0625em;
}
@mixin hashi-a-style{
margin: 0 15px;
padding: 0;
line-height: 22px;
@include hashi-a-style-core();
@include transition( all 300ms ease-in );
&:hover{
@include transition( all 300ms ease-in );
background-color: transparent;
}
}
//general shared project mixins
@mixin img-retina($image1x, $image, $width, $height) {
background-image: url($image1x);
background-size: $width $height;
background-repeat: no-repeat;
@media (min--moz-device-pixel-ratio: 1.3),
(-o-min-device-pixel-ratio: 2.6/2),
(-webkit-min-device-pixel-ratio: 1.3),
(min-device-pixel-ratio: 1.3),
(min-resolution: 1.3dppx) {
/* on retina, use image that's scaled by 2 */
background-image: url($image);
background-size: $width $height;
}
}
//
// -------------------------
@mixin anti-alias() {
text-rendering: optimizeLegibility;
-webkit-font-smoothing: antialiased;
}
@mixin open-light() {
font-family: $font-family-open-sans;
font-weight: 300;
}
@mixin open() {
font-family: $font-family-open-sans;
font-weight: 400;
}
@mixin open-sb() {
font-family: $font-family-open-sans;
font-weight: 600;
}
@mixin open-bold() {
font-family: $font-family-open-sans;
font-weight: 700;
}
@mixin bez-1-transition{
@include transition( all 300ms ease-in-out );
}
@@ -0,0 +1,72 @@
//
// Mixins Specific to project
// - make edits to mixins here
// --------------------------------------------------
// Variables
$project-logo-width: 38px;
$project-logo-height: 40px;
$project-logo-pad-left: 8px;
// Mixins
@mixin project-a-style{
color: $white;
font-weight: 400;
opacity: .75;
-webkit-font-smoothing: antialiased;
&:hover{
color: $white;
opacity: 1;
}
}
@mixin project-footer-a-style{
color: $black;
font-weight: 400;
&:hover{
color: $purple;
svg path{
fill: $purple;
}
}
}
@mixin project-footer-a-subpage-style{
color: $white;
font-weight: 300;
svg path{
fill: $white;
}
&:hover{
color: $purple;
svg path{
fill: $purple;
}
}
}
@mixin project-svg-external-links-style{
svg path{
fill: $black;
}
&:hover{
svg path{
fill: $blue;
}
}
}
@mixin project-by-hashicorp-style{
&:hover{
line{
stroke: $blue;
}
}
}
@@ -12,6 +12,6 @@ Terraform has detailed logs which can be enabled by setting the `TF_LOG` environ
You can set `TF_LOG` to one of the log levels `TRACE`, `DEBUG`, `INFO`, `WARN` or `ERROR` to change the verbosity of the logs. `TRACE` is the most verbose and it is the default if `TF_LOG` is set to something other than a log level name.
To persist logged output you can set `TF_LOG_PATH` in order to force the log to always go to a specific file when logging is enabled. Note that even when `TF_LOG_PATH` is set, `TF_LOG` must be set in order for any logging to be enabled.
If you find a bug with Terraform, please include the detailed log by using a service such as gist.
@@ -33,11 +33,11 @@ resource "azure_instance" "web" {
The following arguments are supported:
* `publish_settings` - (Optional) Contents of a valid `publishsettings` file,
  used to authenticate with the Azure API. You can download the settings file
  here: https://manage.windowsazure.com/publishsettings. You must either
  provide publish settings or both a `subscription_id` and `certificate`. It
  can also be sourced from the `AZURE_PUBLISH_SETTINGS` environment variable.
* `subscription_id` - (Optional) The subscription ID to use. If a
  `settings_file` is not provided `subscription_id` is required. It can also
@@ -47,6 +47,16 @@ The following arguments are supported:
  Azure API. If a `settings_file` is not provided `certificate` is required.
  It can also be sourced from the `AZURE_CERTIFICATE` environment variable.
These arguments are supported for backwards compatibility, and may be removed
in a future version:
* `settings_file` - __Deprecated: please use `publish_settings` instead.__
Path to or contents of a valid `publishsettings` file, used to
authenticate with the Azure API. You can download the settings file here:
https://manage.windowsazure.com/publishsettings. You must either provide
(or source from the `AZURE_SETTINGS_FILE` environment variable) a settings
file or both a `subscription_id` and `certificate`.
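The note above amounts to: load the publish settings *contents*, not a path. A minimal sketch, assuming a local `credentials.publishsettings` file (the file name is illustrative):

```
provider "azure" {
  # Contents of the file, loaded with file(); not a path on disk
  publish_settings = "${file("credentials.publishsettings")}"
}
```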
## Testing:
The following environment variables must be set for the running of the
@@ -19,7 +19,7 @@ resource "cloudstack_disk" "default" {
  attach = "true"
  disk_offering = "custom"
  size = 50
  virtual_machine = "server-1"
  zone = "zone-1"
}
```
@@ -0,0 +1,39 @@
---
layout: "dyn"
page_title: "Provider: Dyn"
sidebar_current: "docs-dyn-index"
description: |-
The Dyn provider is used to interact with the resources supported by Dyn. The provider needs to be configured with the proper credentials before it can be used.
---
# Dyn Provider
The Dyn provider is used to interact with the
resources supported by Dyn. The provider needs to be configured
with the proper credentials before it can be used.
Use the navigation to the left to read about the available resources.
## Example Usage
```
# Configure the Dyn provider
provider "dyn" {
customer_name = "${var.dyn_customer_name}"
username = "${var.dyn_username}"
password = "${var.dyn_password}"
}
# Create a record
resource "dyn_record" "www" {
...
}
```
## Argument Reference
The following arguments are supported:
* `customer_name` - (Required) The Dyn customer name. It must be provided, but it can also be sourced from the `DYN_CUSTOMER_NAME` environment variable.
* `username` - (Required) The Dyn username. It must be provided, but it can also be sourced from the `DYN_USERNAME` environment variable.
* `password` - (Required) The Dyn password. It must be provided, but it can also be sourced from the `DYN_PASSWORD` environment variable.
@@ -0,0 +1,41 @@
---
layout: "dyn"
page_title: "Dyn: dyn_record"
sidebar_current: "docs-dyn-resource-record"
description: |-
Provides a Dyn DNS record resource.
---
# dyn\_record
Provides a Dyn DNS record resource.
## Example Usage
```
# Add a record to the domain
resource "dyn_record" "foobar" {
zone = "${var.dyn_zone}"
name = "terraform"
value = "192.168.0.11"
type = "A"
ttl = 3600
}
```
## Argument Reference
The following arguments are supported:
* `name` - (Required) The name of the record.
* `type` - (Required) The type of the record.
* `value` - (Required) The value of the record.
* `zone` - (Required) The DNS zone to add the record to.
* `ttl` - (Optional) The TTL of the record. Default uses the zone default.
## Attributes Reference
The following attributes are exported:
* `id` - The record ID.
* `fqdn` - The FQDN of the record, built from the `name` and the `zone`.
@@ -19,7 +19,7 @@ Use the navigation to the left to read about the available resources.
```
# Configure the Google Cloud provider
provider "google" {
  credentials = "${file("account.json")}"
  project = "my-gce-project"
  region = "us-central1"
}
@@ -34,12 +34,12 @@ resource "google_compute_instance" "default" {
The following keys can be used to configure the provider.
* `credentials` - (Optional) Contents of the JSON file used to describe your
  account credentials, downloaded from Google Cloud Console. More details on
  retrieving this file are below. Credentials may be blank if you are running
  Terraform from a GCE instance with a properly-configured [Compute Engine
  Service Account](https://cloud.google.com/compute/docs/authentication). This
  can also be specified with the `GOOGLE_CREDENTIALS` shell environment
  variable.
* `project` - (Required) The ID of the project to apply any resources to. This
@@ -48,6 +48,19 @@ The following keys can be used to configure the provider.
* `region` - (Required) The region to operate under. This can also be specified
  with the `GOOGLE_REGION` shell environment variable.
The following keys are supported for backwards compatibility, and may be
removed in a future version:
* `account_file` - __Deprecated: please use `credentials` instead.__
Path to or contents of the JSON file used to describe your
account credentials, downloaded from Google Cloud Console. More details on
retrieving this file are below. The `account file` can be "" if you are running
terraform from a GCE instance with a properly-configured [Compute Engine
Service Account](https://cloud.google.com/compute/docs/authentication). This
can also be specified with the `GOOGLE_ACCOUNT_FILE` shell environment
variable.
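Because `credentials` is optional, the provider block can also omit it entirely and rely on the `GOOGLE_CREDENTIALS` environment variable. A minimal sketch (the project and region values are illustrative):

```
# Assumes GOOGLE_CREDENTIALS holds the JSON credentials contents
provider "google" {
  project = "my-gce-project"
  region  = "us-central1"
}
```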
## Authentication JSON File
Authenticating with Google Cloud services requires a JSON
@@ -35,7 +35,7 @@ The following arguments are supported:
  Changing this forces a new resource to be created.
* `private_key` - (Required) Write only private key in PEM format.
  Changing this forces a new resource to be created.
* `certificate` - (Required) A local certificate file in PEM format. The chain
  may be at most 5 certs long, and must include at least one intermediate cert.
  Changing this forces a new resource to be created.
@@ -53,17 +53,18 @@ The following arguments are supported:
* `device_owner` - (Optional) The device owner of the Port. Changing this creates
  a new port.
* `security_group_ids` - (Optional) A list of security group IDs to apply to the
  port. The security groups must be specified by ID and not name (as opposed
  to how they are configured with the Compute Instance).
* `device_id` - (Optional) The ID of the device attached to the port. Changing this
  creates a new port.
* `fixed_ip` - (Optional) An array of desired IPs for this port. The structure is
  described below.
The `fixed_ip` block supports:
* `subnet_id` - (Required) Subnet in which to allocate IP address for
  this port.
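The renamed arguments can be sketched together as follows; the resource type name and all IDs here are illustrative assumptions, not taken from this page:

```
resource "openstack_networking_port_v2" "port_1" {
  name               = "port_1"
  network_id         = "<network-id>"           # hypothetical placeholder
  admin_state_up     = "true"
  security_group_ids = ["<security-group-id>"]  # IDs, not names

  fixed_ip {
    subnet_id  = "<subnet-id>"
    ip_address = "10.0.0.10"
  }
}
```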
@@ -25,7 +25,7 @@ resource "packet_project" "tf_project_1" {
The following arguments are supported:
* `name` - (Required) The name of the Project in Packet.net
* `payment_method` - (Required) The id of the payment method on file to use for services created
  on this project.
@@ -33,8 +33,8 @@ on this project.
The following attributes are exported:
* `id` - The unique ID of the project
* `payment_method` - The id of the payment method on file to use for services created
  on this project.
* `created` - The timestamp for when the Project was created
* `updated` - The timestamp for the last time the Project was updated
@@ -14,7 +14,7 @@ Renders a template from a file.
```
resource "template_file" "init" {
  template = "${file("${path.module}/init.tpl")}"
  vars {
    consul_address = "${aws_instance.consul.private_ip}"
@@ -27,17 +27,24 @@ resource "template_file" "init" {
The following arguments are supported:
* `template` - (Required) The contents of the template. These can be loaded
  from a file on disk using the [`file()` interpolation
  function](/docs/configuration/interpolation.html#file_path_).
* `vars` - (Optional) Variables for interpolation within the template.
The following arguments are maintained for backwards compatibility and may be
removed in a future version:
* `filename` - __Deprecated: please use `template` instead.__ The filename for
  the template. Use [path variables](/docs/configuration/interpolation.html#path-variables) to make
  this path relative to different path roots.
## Attributes Reference
The following attributes are exported:
* `template` - See Argument Reference above.
* `vars` - See Argument Reference above.
* `rendered` - The final rendered template.
@@ -17,7 +17,8 @@ The provider needs to be configured with the proper credentials before it can be
Use the navigation to the left to read about the available resources.
~> **NOTE:** The VMware vSphere Provider currently represents _initial support_
and therefore may undergo significant changes as the community improves it. This
provider at this time only supports IPv4 addresses on virtual machines.
## Example Usage
@@ -56,7 +56,7 @@ The following arguments are supported:
Network interfaces support the following attributes:
* `label` - (Required) Label to assign to this network interface
* `ip_address` - (Optional) Static IP to assign to this network interface. Interface will use DHCP if this is left blank. Currently only IPv4 IP addresses are supported.
* `subnet_mask` - (Optional) Subnet mask to use when statically assigning an IP.
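A hedged sketch of a network interface using these attributes; the surrounding `vsphere_virtual_machine` arguments and all values are illustrative assumptions:

```
resource "vsphere_virtual_machine" "web" {
  name   = "terraform-web"
  vcpu   = 2
  memory = 4096

  network_interface {
    label       = "VM Network"
    ip_address  = "10.0.0.10"     # IPv4 only at this time
    subnet_mask = "255.255.255.0"
  }
}
```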
<a id="disks"></a>
@@ -36,10 +36,10 @@ resource "aws_instance" "web" {
    environment = "_default"
    run_list = ["cookbook::recipe"]
    node_name = "webserver1"
    secret_key = "${file("../encrypted_data_bag_secret")}"
    server_url = "https://chef.company.com/organizations/org1"
    validation_client_name = "chef-validator"
    validation_key = "${file("../chef-validator.pem")}"
    version = "12.4.1"
  }
}
@@ -83,9 +83,10 @@ The following arguments are supported:
  Chef Client run. The run-list will also be saved to the Chef Server after a successful
  initial run.
* `secret_key (string)` - (Optional) The contents of the secret key that is used
  by the client to decrypt data bags on the Chef Server. The key will be uploaded to the remote
  machine. These can be loaded from a file on disk using the [`file()` interpolation
  function](/docs/configuration/interpolation.html#file_path_).
* `server_url (string)` - (Required) The URL to the Chef server. This includes the path to
  the organization. See the example.
@@ -100,9 +101,16 @@ The following arguments are supported:
* `validation_client_name (string)` - (Required) The name of the validation client to use
  for the initial communication with the Chef Server.
* `validation_key (string)` - (Required) The contents of the validation key that is needed
  by the node to register itself with the Chef Server. The key will be uploaded to the remote
  machine. These can be loaded from a file on disk using the [`file()`
  interpolation function](/docs/configuration/interpolation.html#file_path_).
* `version (string)` - (Optional) The Chef Client version to install on the remote machine.
  If not set the latest available version will be installed.
These are supported for backwards compatibility and may be removed in a
future version:
* `validation_key_path (string)` - __Deprecated: please use `validation_key` instead__.
* `secret_key_path (string)` - __Deprecated: please use `secret_key` instead__.
@@ -1,28 +1,42 @@
<div class="skew-item ">
  <div id="footer" class="navigation <%= current_page.url == "/" ? "black" : "white" %>">
    <div class="container">
      <div class="row">
        <div class="col-xs-12">
          <% if current_page.url != '/' %>
            <div class="edit-page-link"><a href="<%= github_url :current_page %>">Edit this page</a></div>
          <% end %>
          <div class="footer-links">
            <ul class="main-links nav navbar-nav">
              <li><a href="/intro/index.html">Intro</a></li>
              <li><a href="/docs/index.html">Docs</a></li>
              <li><a href="/community.html">Community</a></li>
            </ul>
            <ul class="external-links nav navbar-nav">
              <li class="first download">
                <a href="/downloads.html"><%= partial "layouts/svg/svg-download" %>Download</a>
              </li>
              <li class="github">
                <a href="https://github.com/hashicorp/terraform"><%= partial "layouts/svg/svg-github" %>GitHub</a>
              </li>
            </ul>
          </div>
          <div class="footer-hashi pull-right">
            <div class="">
              <a class="hashicorp-project <%= current_page.url == "/" ? "black" : "white" %>" href="https://www.hashicorp.com">
                <span class="project-text">A </span>
                <%= partial "layouts/svg/svg-by-hashicorp" %>
                <span class="project-text">Project</span>
                <%= partial "layouts/svg/svg-hashicorp-logo" %>
              </a>
            </div>
          </div>
        </div>
      </div>
      <div class="skew-item" id="footer-bg"></div>
    </div>
  </div>
</div>
<script>
@@ -1,25 +1,30 @@
<div id="header" class="navigation white <%= current_page.data.page_title == "home" ? "" : "navbar-static-top" %>">
  <div class="container">
    <div class="row">
      <div class="col-xs-12">
        <div class="navbar-header">
          <div class="navbar-brand">
            <a class="logo" href="/">Terraform</a>
            <a class="by-hashicorp white" href="https://hashicorp.com/"><span class="svg-wrap">by</span><%= partial "layouts/svg/svg-by-hashicorp" %><%= partial "layouts/svg/svg-hashicorp-logo" %>Hashicorp</a>
          </div>
          <button class="navbar-toggle white" type="button">
            <span class="sr-only">Toggle navigation</span>
            <span class="icon-bar"></span>
            <span class="icon-bar"></span>
            <span class="icon-bar"></span>
          </button>
        </div>
        <div class="buttons hidden-xs">
          <nav class="navigation-links" role="navigation">
            <ul class="external-links nav navbar-nav navbar-right">
              <li class="first download">
                <a href="/downloads.html"><%= partial "layouts/svg/svg-download" %>Download</a>
              </li>
              <li class="github">
                <a href="https://github.com/hashicorp/terraform"><%= partial "layouts/svg/svg-github" %>GitHub</a>
              </li>
            </ul>
            <ul class="main-links nav navbar-nav navbar-right">
              <li class="first li-under"><a href="/intro/index.html">Intro</a></li>
              <li class="li-under"><a href="/docs/index.html">Docs</a></li>
              <li class="li-under"><a href="/community.html">Community</a></li>
@@ -27,4 +32,6 @@
          </nav>
        </div>
      </div>
    </div>
  </div>
</div>

@@ -0,0 +1,26 @@
<div class="sidebar-overlay"></div>
<!-- Material sidebar -->
<aside id="sidebar" class="sidebar sidebar-default sidebar-fixed-right" role="navigation">
<!-- Sidebar header -->
<div class="sidebar-header header-cover">
<!-- Sidebar brand image -->
<div class="sidebar-image">
<img src="<%= image_path('logo-header-black@2x.png') %>" width="50px" height="56px">
</div>
</div>
<!-- Sidebar navigation -->
<ul class="main nav sidebar-nav">
<li class="first"><a href="/intro/index.html">Intro</a></li>
<li class=""><a href="/docs/index.html">Docs</a></li>
<li class=""><a href="/community.html">Community</a></li>
</ul>
<div class="divider"></div>
<!-- Sidebar navigation 2-->
<ul class="external nav sidebar-nav">
<li class="first"><a class="v-btn gray sml" href="/downloads.html"><%= partial "layouts/svg/svg-download" %>Download</a></li>
<li class=""><a class="v-btn gray sml" href="https://github.com/hashicorp/terraform"><%= partial "layouts/svg/svg-github" %>GitHub</a></li>
</ul>
</aside>
@@ -0,0 +1,24 @@
<% wrap_layout :inner do %>
<% content_for :sidebar do %>
<div class="docs-sidebar hidden-print affix-top" role="complementary">
<ul class="nav docs-sidenav">
<li<%= sidebar_current("docs-home") %>>
<a href="/docs/providers/index.html">&laquo; Documentation Home</a>
</li>
<li<%= sidebar_current("docs-dyn-index") %>>
<a href="/docs/providers/dyn/index.html">Dyn Provider</a>
</li>
<li<%= sidebar_current(/^docs-dyn-resource/) %>>
<a href="#">Resources</a>
<ul class="nav nav-visible">
<li<%= sidebar_current("docs-dyn-resource-record") %>>
<a href="/docs/providers/dyn/r/record.html">dyn_record</a>
</li>
</ul>
</li>
</ul>
</div>
<% end %>
<%= yield %>
<% end %>
@@ -1,4 +1,5 @@
<%= partial "layouts/meta" %>
<%= partial "layouts/header" %>
<%= partial "layouts/sidebar" %>
<%= yield %>
<%= partial "layouts/footer" %>
Some files were not shown because too many files have changed in this diff.