merging in upstream, because rebase was insane

commit d70cdde233
@@ -20,3 +20,4 @@ website/node_modules
 *~
 .*.swp
 .idea
+*.test
CHANGELOG.md (28 changes)
@@ -1,11 +1,25 @@
-## 0.6.7 (Unreleased)
+## 0.6.8 (Unreleased)
+
+FEATURES:
+
+* **New resource: `digitalocean_floating_ip`** [GH-3748]
+
+IMPROVEMENTS:
+
+BUG FIXES:
+
+* provider/aws: Fixed a bug which could result in a panic when reading EC2 metadata [GH-4024]
+* provisioner/chef: Fix issue with path separators breaking the Chef provisioner on Windows [GH-4041]
+* providers/aws: Fix issue recreating security group rule if it has been destroyed [GH-4050]
+
+## 0.6.7 (November 23, 2015)
 
 FEATURES:
 
 * **New provider: `tls`** - A utility provider for generating TLS keys/self-signed certificates for development and testing [GH-2778]
 * **New provider: `dyn`** - Manage DNS records on Dyn
 * **New resource: `aws_cloudformation_stack`** [GH-2636]
-* **New resource: `aws_cloudtrail`** [GH-3094]
+* **New resource: `aws_cloudtrail`** [GH-3094], [GH-4010]
 * **New resource: `aws_route`** [GH-3548]
 * **New resource: `aws_codecommit_repository`** [GH-3274]
 * **New resource: `aws_kinesis_firehose_delivery_stream`** [GH-3833]
@@ -63,17 +77,24 @@ BUG FIXES:
 * `terraform remote config`: update `--help` output [GH-3632]
 * core: modules on Git branches now update properly [GH-1568]
 * core: Fix issue preventing input prompts for unset variables during plan [GH-3843]
 * core: Fix issue preventing input prompts for unset variables during refresh [GH-4017]
 * core: Orphan resources can now be targets [GH-3912]
 * helper/schema: skip StateFunc when value is nil [GH-4002]
 * provider/google: Timeout when deleting large instance_group_manager [GH-3591]
 * provider/aws: Fix issue with order of Termination Policies in AutoScaling Groups.
   This will introduce plans on upgrade to this version, in order to correct the ordering [GH-2890]
 * provider/aws: Allow cluster name, not only ARN for `aws_ecs_service` [GH-3668]
 * provider/aws: Fix a bug where a non-lower-cased `maintenance_window` can cause unnecessary planned changes [GH-4020]
 * provider/aws: Only set `weight` on an `aws_route53_record` if it has been set in configuration [GH-3900]
 * provider/aws: ignore association not exist on route table destroy [GH-3615]
 * provider/aws: Fix policy encoding issue with SNS Topics [GH-3700]
 * provider/aws: Correctly export ARN in `aws_iam_saml_provider` [GH-3827]
 * provider/aws: Fix issue deleting users who are attached to a group [GH-4005]
 * provider/aws: Fix crash in Route53 Record if Zone not found [GH-3945]
-* providers/aws: Fix typo in error checking for IAM Policy Attachments #3970
+* providers/aws: Retry deleting IAM Server Cert on dependency violation [GH-3898]
+* providers/aws: Update Spot Instance request to provide connection information [GH-3940]
+* providers/aws: Fix typo in error checking for IAM Policy Attachments [GH-3970]
 * provider/aws: Fix issue with LB Cookie Stickiness and empty expiration period [GH-3908]
 * provider/aws: Tolerate ElastiCache clusters being deleted outside Terraform [GH-3767]
 * provider/aws: Downcase Route 53 record names in statefile to match API output [GH-3574]
 * provider/aws: Fix issue that could occur if no ECS Cluster was found for a given name [GH-3829]
@@ -84,6 +105,7 @@ BUG FIXES:
 * provider/aws: Fix issue with updating the `aws_ecs_task_definition` where `aws_ecs_service` didn't wait for a new computed ARN [GH-3924]
 * provider/aws: Prevent crashing when deleting `aws_ecs_service` that is already gone [GH-3914]
 * provider/aws: Allow spaces in `aws_db_subnet_group.name` (undocumented in the API) [GH-3955]
+* provider/aws: Make VPC ID required on subnets [GH-4021]
 * provider/azure: various bugfixes [GH-3695]
 * provider/digitalocean: fix issue preventing SSH fingerprints from working [GH-3633]
 * provider/digitalocean: Fixing the DigitalOcean Droplet 404 potential on refresh of state [GH-3768]

@@ -5,6 +5,7 @@
 VAGRANTFILE_API_VERSION = "2"
 
 $script = <<SCRIPT
+GOVERSION="1.5.1"
 SRCROOT="/opt/go"
 SRCPATH="/opt/gopath"
 
@@ -18,8 +19,8 @@ sudo apt-get install -y build-essential curl git-core libpcre3-dev mercurial pkg
 
 # Install Go
 cd /tmp
-wget -q https://storage.googleapis.com/golang/go1.4.2.linux-${ARCH}.tar.gz
-tar -xf go1.4.2.linux-${ARCH}.tar.gz
+wget --quiet https://storage.googleapis.com/golang/go${GOVERSION}.linux-${ARCH}.tar.gz
+tar -xvf go${GOVERSION}.linux-${ARCH}.tar.gz
 sudo mv go $SRCROOT
 sudo chmod 775 $SRCROOT
 sudo chown vagrant:vagrant $SRCROOT

@@ -12,6 +12,8 @@ import (
 	"github.com/aws/aws-sdk-go/aws/credentials"
 	"github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds"
+	"github.com/aws/aws-sdk-go/aws/ec2metadata"
+	"github.com/aws/aws-sdk-go/aws/session"
 )
 
 // Provider returns a terraform.ResourceProvider.
@@ -42,7 +44,7 @@ func Provider() terraform.ResourceProvider {
 	conn, err := net.DialTimeout("tcp", "169.254.169.254:80", 100*time.Millisecond)
 	if err == nil {
 		conn.Close()
-		providers = append(providers, &ec2rolecreds.EC2RoleProvider{})
+		providers = append(providers, &ec2rolecreds.EC2RoleProvider{Client: ec2metadata.New(session.New())})
 	}
 
 	credVal, credErr = credentials.NewChainCredentials(providers).Get()

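The hunk above attaches an explicit EC2 metadata client to the role provider so the credential chain no longer panics when the metadata service is consulted [GH-4024]. The chain-of-providers pattern itself can be sketched generically; the `Provider`/`static` types below are simplified stand-ins for the aws-sdk-go interfaces, not the real SDK API:

```go
package main

import (
	"errors"
	"fmt"
)

// Provider is a minimal stand-in for the aws-sdk-go credentials.Provider
// interface: it either yields a credential value or an error.
type Provider interface {
	Retrieve() (string, error)
}

// static is a hypothetical provider holding a fixed value; an empty value
// simulates a provider that has nothing to offer (e.g. unset env vars).
type static struct{ value string }

func (s static) Retrieve() (string, error) {
	if s.value == "" {
		return "", errors.New("no static credentials")
	}
	return s.value, nil
}

// chain tries each provider in order and returns the first success,
// mirroring what credentials.NewChainCredentials(providers).Get() does.
func chain(providers []Provider) (string, error) {
	var lastErr error
	for _, p := range providers {
		v, err := p.Retrieve()
		if err == nil {
			return v, nil
		}
		lastErr = err
	}
	return "", fmt.Errorf("no valid providers: %v", lastErr)
}

func main() {
	creds, err := chain([]Provider{static{""}, static{"from-instance-role"}})
	fmt.Println(creds, err)
}
```

The metadata probe in the real code (a 100 ms TCP dial to 169.254.169.254) decides whether the role provider is appended to this chain at all, so off-EC2 runs fail fast instead of waiting on a dead endpoint.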
@@ -171,7 +171,7 @@ resource "aws_instance" "test" {
 	// one snapshot in our created AMI.
 	// This is an Amazon Linux HVM AMI. A public HVM AMI is required
 	// because paravirtual images cannot be copied between accounts.
-	ami = "ami-8fff43e4"
+	ami = "ami-5449393e"
 	instance_type = "t2.micro"
 	tags {
 		Name = "terraform-acc-ami-copy-victim"

@@ -240,7 +240,7 @@ resource "aws_autoscaling_notification" "example" {
 `
 
 const testAccASGNotificationConfig_update = `
-resource "aws_sns_topic" "user_updates" {
+resource "aws_sns_topic" "topic_example" {
   name = "user-updates-topic"
 }
 
@@ -286,7 +286,7 @@ resource "aws_autoscaling_notification" "example" {
     "autoscaling:EC2_INSTANCE_TERMINATE",
     "autoscaling:EC2_INSTANCE_LAUNCH_ERROR"
   ]
-  topic_arn = "${aws_sns_topic.user_updates.arn}"
+  topic_arn = "${aws_sns_topic.topic_example.arn}"
 }`
 
 const testAccASGNotificationConfig_pagination = `

@@ -22,6 +22,11 @@ func resourceAwsCloudTrail() *schema.Resource {
 				Required: true,
 				ForceNew: true,
 			},
+			"enable_logging": &schema.Schema{
+				Type:     schema.TypeBool,
+				Optional: true,
+				Default:  true,
+			},
 			"s3_bucket_name": &schema.Schema{
 				Type:     schema.TypeString,
 				Required: true,
@@ -84,6 +89,14 @@ func resourceAwsCloudTrailCreate(d *schema.ResourceData, meta interface{}) error
 
 	d.SetId(*t.Name)
 
+	// AWS CloudTrail sets logging on newly-created trails to false.
+	if v, ok := d.GetOk("enable_logging"); ok && v.(bool) {
+		err := cloudTrailSetLogging(conn, v.(bool), d.Id())
+		if err != nil {
+			return err
+		}
+	}
+
 	return resourceAwsCloudTrailRead(d, meta)
 }
 
@@ -115,6 +128,12 @@ func resourceAwsCloudTrailRead(d *schema.ResourceData, meta interface{}) error {
 	d.Set("include_global_service_events", trail.IncludeGlobalServiceEvents)
 	d.Set("sns_topic_name", trail.SnsTopicName)
 
+	logstatus, err := cloudTrailGetLoggingStatus(conn, trail.Name)
+	if err != nil {
+		return err
+	}
+	d.Set("enable_logging", logstatus)
+
 	return nil
 }
 
@@ -149,6 +168,15 @@ func resourceAwsCloudTrailUpdate(d *schema.ResourceData, meta interface{}) error
 	if err != nil {
 		return err
 	}
+
+	if d.HasChange("enable_logging") {
+		log.Printf("[DEBUG] Updating logging on CloudTrail: %s", input)
+		err := cloudTrailSetLogging(conn, d.Get("enable_logging").(bool), *input.Name)
+		if err != nil {
+			return err
+		}
+	}
 
 	log.Printf("[DEBUG] CloudTrail updated: %s", t)
 
 	return resourceAwsCloudTrailRead(d, meta)
@@ -165,3 +193,45 @@ func resourceAwsCloudTrailDelete(d *schema.ResourceData, meta interface{}) error
 
 	return err
 }
+
+func cloudTrailGetLoggingStatus(conn *cloudtrail.CloudTrail, id *string) (bool, error) {
+	GetTrailStatusOpts := &cloudtrail.GetTrailStatusInput{
+		Name: id,
+	}
+	resp, err := conn.GetTrailStatus(GetTrailStatusOpts)
+	if err != nil {
+		return false, fmt.Errorf("Error retrieving logging status of CloudTrail (%s): %s", *id, err)
+	}
+
+	return *resp.IsLogging, err
+}
+
+func cloudTrailSetLogging(conn *cloudtrail.CloudTrail, enabled bool, id string) error {
+	if enabled {
+		log.Printf(
+			"[DEBUG] Starting logging on CloudTrail (%s)",
+			id)
+		StartLoggingOpts := &cloudtrail.StartLoggingInput{
+			Name: aws.String(id),
+		}
+		if _, err := conn.StartLogging(StartLoggingOpts); err != nil {
+			return fmt.Errorf(
+				"Error starting logging on CloudTrail (%s): %s",
+				id, err)
+		}
+	} else {
+		log.Printf(
+			"[DEBUG] Stopping logging on CloudTrail (%s)",
+			id)
+		StopLoggingOpts := &cloudtrail.StopLoggingInput{
+			Name: aws.String(id),
+		}
+		if _, err := conn.StopLogging(StopLoggingOpts); err != nil {
+			return fmt.Errorf(
+				"Error stopping logging on CloudTrail (%s): %s",
+				id, err)
+		}
+	}
+
+	return nil
+}

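The `enable_logging` change above splits the work into a status read (`cloudTrailGetLoggingStatus`) and a start/stop call (`cloudTrailSetLogging`). The overall flow is a desired-state reconciliation; a minimal sketch, with `trailAPI` as a hypothetical stand-in for the two CloudTrail calls rather than the real SDK client:

```go
package main

import "fmt"

// trailAPI stands in for the CloudTrail client: GetTrailStatus on one side,
// StartLogging/StopLogging on the other. calls counts mutating requests.
type trailAPI struct {
	logging map[string]bool
	calls   int
}

func (t *trailAPI) getLoggingStatus(id string) bool { return t.logging[id] }

func (t *trailAPI) setLogging(id string, enabled bool) {
	t.calls++
	t.logging[id] = enabled
}

// reconcileLogging issues a Start/Stop call only when the desired value
// differs from what the API reports, so repeated applies are no-ops.
func reconcileLogging(api *trailAPI, id string, desired bool) {
	if api.getLoggingStatus(id) != desired {
		api.setLogging(id, desired)
	}
}

func main() {
	// AWS creates trails with logging off; the default enable_logging = true
	// means the create path has to turn it on explicitly.
	api := &trailAPI{logging: map[string]bool{"trail": false}}
	reconcileLogging(api, "trail", true) // turns logging on
	reconcileLogging(api, "trail", true) // already on: no extra call
	fmt.Println(api.logging["trail"], api.calls)
}
```

In the resource itself the comparison is done by Terraform's diff machinery (`d.HasChange("enable_logging")`) rather than by re-reading the API before every write, but the effect is the same: mutate only on drift.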
@@ -39,6 +39,41 @@ func TestAccAWSCloudTrail_basic(t *testing.T) {
 	})
 }
 
+func TestAccAWSCloudTrail_enable_logging(t *testing.T) {
+	var trail cloudtrail.Trail
+
+	resource.Test(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testAccCheckAWSCloudTrailDestroy,
+		Steps: []resource.TestStep{
+			resource.TestStep{
+				Config: testAccAWSCloudTrailConfig,
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckCloudTrailExists("aws_cloudtrail.foobar", &trail),
+					// AWS will create the trail with logging turned off.
+					// Test that the "enable_logging" default works.
+					testAccCheckCloudTrailLoggingEnabled("aws_cloudtrail.foobar", true, &trail),
+				),
+			},
+			resource.TestStep{
+				Config: testAccAWSCloudTrailConfigModified,
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckCloudTrailExists("aws_cloudtrail.foobar", &trail),
+					testAccCheckCloudTrailLoggingEnabled("aws_cloudtrail.foobar", false, &trail),
+				),
+			},
+			resource.TestStep{
+				Config: testAccAWSCloudTrailConfig,
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckCloudTrailExists("aws_cloudtrail.foobar", &trail),
+					testAccCheckCloudTrailLoggingEnabled("aws_cloudtrail.foobar", true, &trail),
+				),
+			},
+		},
+	})
+}
+
 func testAccCheckCloudTrailExists(n string, trail *cloudtrail.Trail) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
 		rs, ok := s.RootModule().Resources[n]
@@ -63,6 +98,30 @@ func testAccCheckCloudTrailExists(n string, trail *cloudtrail.Trail) resource.Te
 	}
 }
 
+func testAccCheckCloudTrailLoggingEnabled(n string, desired bool, trail *cloudtrail.Trail) resource.TestCheckFunc {
+	return func(s *terraform.State) error {
+		rs, ok := s.RootModule().Resources[n]
+		if !ok {
+			return fmt.Errorf("Not found: %s", n)
+		}
+
+		conn := testAccProvider.Meta().(*AWSClient).cloudtrailconn
+		params := cloudtrail.GetTrailStatusInput{
+			Name: aws.String(rs.Primary.ID),
+		}
+		resp, err := conn.GetTrailStatus(&params)
+
+		if err != nil {
+			return err
+		}
+		if *resp.IsLogging != desired {
+			return fmt.Errorf("Expected logging status %t, given %t", desired, *resp.IsLogging)
+		}
+
+		return nil
+	}
+}
+
 func testAccCheckAWSCloudTrailDestroy(s *terraform.State) error {
 	conn := testAccProvider.Meta().(*AWSClient).cloudtrailconn
 
@@ -134,6 +193,7 @@ resource "aws_cloudtrail" "foobar" {
 	s3_bucket_name = "${aws_s3_bucket.foo.id}"
 	s3_key_prefix = "/prefix"
 	include_global_service_events = false
+	enable_logging = false
 }
 
 resource "aws_s3_bucket" "foo" {

@@ -71,6 +71,11 @@ func resourceAwsElasticacheCluster() *schema.Resource {
 			Type:     schema.TypeString,
 			Optional: true,
 			Computed: true,
+			StateFunc: func(val interface{}) string {
+				// Elasticache always changes the maintenance
+				// window to lowercase
+				return strings.ToLower(val.(string))
+			},
 		},
 		"subnet_group_name": &schema.Schema{
 			Type: schema.TypeString,
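The `StateFunc` above normalizes `maintenance_window` before it is written to state, so the API lowercasing the value no longer shows up as a spurious diff [GH-4020]. The normalization itself, pulled out in isolation (the function name here is illustrative, not from the resource):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeWindow mirrors the StateFunc: store the lowercased form so a
// configured value like "Sun:05:00-Sun:09:00" compares equal to what
// ElastiCache reports back ("sun:05:00-sun:09:00").
func normalizeWindow(val interface{}) string {
	return strings.ToLower(val.(string))
}

func main() {
	fmt.Println(normalizeWindow("Sun:05:00-Sun:09:00")) // sun:05:00-sun:09:00
}
```

Note the companion changelog entry "helper/schema: skip StateFunc when value is nil [GH-4002]": without that fix, a nil value would make the `val.(string)` assertion here panic.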
@@ -141,6 +146,7 @@ func resourceAwsElasticacheCluster() *schema.Resource {
 		"snapshot_window": &schema.Schema{
 			Type:     schema.TypeString,
 			Optional: true,
+			Computed: true,
 		},
 
 		"snapshot_retention_limit": &schema.Schema{

@@ -6,6 +6,7 @@ import (
 	"log"
 	"regexp"
 	"strings"
+	"time"
 
 	"github.com/aws/aws-sdk-go/aws"
 	"github.com/aws/aws-sdk-go/aws/awserr"
@@ -256,8 +257,23 @@ func resourceAwsElbCreate(d *schema.ResourceData, meta interface{}) error {
 	}
 
 	log.Printf("[DEBUG] ELB create configuration: %#v", elbOpts)
-	if _, err := elbconn.CreateLoadBalancer(elbOpts); err != nil {
-		return fmt.Errorf("Error creating ELB: %s", err)
+	err = resource.Retry(1*time.Minute, func() error {
+		_, err := elbconn.CreateLoadBalancer(elbOpts)
+
+		if err != nil {
+			if awsErr, ok := err.(awserr.Error); ok {
+				// Check for IAM SSL Cert error, eventual consistency issue
+				if awsErr.Code() == "CertificateNotFound" {
+					return fmt.Errorf("[WARN] Error creating ELB Listener with SSL Cert, retrying: %s", err)
+				}
+			}
+			return resource.RetryError{Err: err}
+		}
+		return nil
+	})
+
+	if err != nil {
+		return err
 	}
 
 	// Assign the elb's unique identifier for use later

@@ -394,6 +410,7 @@ func resourceAwsElbUpdate(d *schema.ResourceData, meta interface{}) error {
 			LoadBalancerPorts: ports,
 		}
 
+		log.Printf("[DEBUG] ELB Delete Listeners opts: %s", deleteListenersOpts)
 		_, err := elbconn.DeleteLoadBalancerListeners(deleteListenersOpts)
 		if err != nil {
 			return fmt.Errorf("Failure removing outdated ELB listeners: %s", err)
@@ -406,6 +423,7 @@ func resourceAwsElbUpdate(d *schema.ResourceData, meta interface{}) error {
 			Listeners: add,
 		}
 
+		log.Printf("[DEBUG] ELB Create Listeners opts: %s", createListenersOpts)
 		_, err := elbconn.CreateLoadBalancerListeners(createListenersOpts)
 		if err != nil {
 			return fmt.Errorf("Failure adding new or updated ELB listeners: %s", err)

@@ -179,6 +179,33 @@ func TestAccAWSELB_tags(t *testing.T) {
 	})
 }
 
+func TestAccAWSELB_iam_server_cert(t *testing.T) {
+	var conf elb.LoadBalancerDescription
+	// var td elb.TagDescription
+	testCheck := func(*terraform.State) error {
+		if len(conf.ListenerDescriptions) != 1 {
+			return fmt.Errorf(
+				"TestAccAWSELB_iam_server_cert expected 1 listener, got %d",
+				len(conf.ListenerDescriptions))
+		}
+		return nil
+	}
+	resource.Test(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testAccCheckAWSELBDestroy,
+		Steps: []resource.TestStep{
+			resource.TestStep{
+				Config: testAccELBIAMServerCertConfig,
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckAWSELBExists("aws_elb.bar", &conf),
+					testCheck,
+				),
+			},
+		},
+	})
+}
+
 func testAccLoadTags(conf *elb.LoadBalancerDescription, td *elb.TagDescription) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
 		conn := testAccProvider.Meta().(*AWSClient).elbconn
@@ -1001,3 +1028,97 @@ resource "aws_security_group" "bar" {
   }
 }
 `
+
+// This IAM Server config is lifted from
+// builtin/providers/aws/resource_aws_iam_server_certificate_test.go
+var testAccELBIAMServerCertConfig = `
+resource "aws_iam_server_certificate" "test_cert" {
+  name = "terraform-test-cert"
+  certificate_body = <<EOF
+-----BEGIN CERTIFICATE-----
+MIIDCDCCAfACAQEwDQYJKoZIhvcNAQELBQAwgY4xCzAJBgNVBAYTAlVTMREwDwYD
+VQQIDAhOZXcgWW9yazERMA8GA1UEBwwITmV3IFlvcmsxFjAUBgNVBAoMDUJhcmVm
+b290IExhYnMxGDAWBgNVBAMMD0phc29uIEJlcmxpbnNreTEnMCUGCSqGSIb3DQEJ
+ARYYamFzb25AYmFyZWZvb3Rjb2RlcnMuY29tMB4XDTE1MDYyMTA1MzcwNVoXDTE2
+MDYyMDA1MzcwNVowgYgxCzAJBgNVBAYTAlVTMREwDwYDVQQIDAhOZXcgWW9yazEL
+MAkGA1UEBwwCTlkxFjAUBgNVBAoMDUJhcmVmb290IExhYnMxGDAWBgNVBAMMD0ph
+c29uIEJlcmxpbnNreTEnMCUGCSqGSIb3DQEJARYYamFzb25AYmFyZWZvb3Rjb2Rl
+cnMuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQD2AVGKRIx+EFM0kkg7
+6GoJv9uy0biEDHB4phQBqnDIf8J8/gq9eVvQrR5jJC9Uz4zp5wG/oLZlGuF92/jD
+bI/yS+DOAjrh30vN79Au74jGN2Cw8fIak40iDUwjZaczK2Gkna54XIO9pqMcbQ6Q
+mLUkQXsqlJ7Q4X2kL3b9iMsXcQIDAQABMA0GCSqGSIb3DQEBCwUAA4IBAQCDGNvU
+eioQMVPNlmmxW3+Rwo0Kl+/HtUOmqUDKUDvJnelxulBr7O8w75N/Z7h7+aBJCUkt
+tz+DwATZswXtsal6TuzHHpAhpFql82jQZVE8OYkrX84XKRQpm8ZnbyZObMdXTJWk
+ArC/rGVIWsvhlbgGM8zu7a3zbeuAESZ8Bn4ZbJxnoaRK8p36/alvzAwkgzSf3oUX
+HtU4LrdunevBs6/CbKCWrxYcvNCy8EcmHitqCfQL5nxCCXpgf/Mw1vmIPTwbPSJq
+oUkh5yjGRKzhh7QbG1TlFX6zUp4vb+UJn5+g4edHrqivRSjIqYrC45ygVMOABn21
+hpMXOlZL+YXfR4Kp
+-----END CERTIFICATE-----
+EOF
+
+  certificate_chain = <<EOF
+-----BEGIN CERTIFICATE-----
+MIID8TCCAtmgAwIBAgIJAKX2xeCkfFcbMA0GCSqGSIb3DQEBCwUAMIGOMQswCQYD
+VQQGEwJVUzERMA8GA1UECAwITmV3IFlvcmsxETAPBgNVBAcMCE5ldyBZb3JrMRYw
+FAYDVQQKDA1CYXJlZm9vdCBMYWJzMRgwFgYDVQQDDA9KYXNvbiBCZXJsaW5za3kx
+JzAlBgkqhkiG9w0BCQEWGGphc29uQGJhcmVmb290Y29kZXJzLmNvbTAeFw0xNTA2
+MjEwNTM2MDZaFw0yNTA2MTgwNTM2MDZaMIGOMQswCQYDVQQGEwJVUzERMA8GA1UE
+CAwITmV3IFlvcmsxETAPBgNVBAcMCE5ldyBZb3JrMRYwFAYDVQQKDA1CYXJlZm9v
+dCBMYWJzMRgwFgYDVQQDDA9KYXNvbiBCZXJsaW5za3kxJzAlBgkqhkiG9w0BCQEW
+GGphc29uQGJhcmVmb290Y29kZXJzLmNvbTCCASIwDQYJKoZIhvcNAQEBBQADggEP
+ADCCAQoCggEBAMteFbwfLz7NyQn3eDxxw22l1ZPBrzfPON0HOAq8nHat4kT4A2cI
+45kCtxKMzCVoG84tXoX/rbjGkez7lz9lEfvEuSh+I+UqinFA/sefhcE63foVMZu1
+2t6O3+utdxBvOYJwAQaiGW44x0h6fTyqDv6Gc5Ml0uoIVeMWPhT1MREoOcPDz1gb
+Ep3VT2aqFULLJedP37qbzS4D04rn1tS7pcm3wYivRyjVNEvs91NsWEvvE1WtS2Cl
+2RBt+ihXwq4UNB9UPYG75+FuRcQQvfqameyweyKT9qBmJLELMtYa/KTCYvSch4JY
+YVPAPOlhFlO4BcTto/gpBes2WEAWZtE/jnECAwEAAaNQME4wHQYDVR0OBBYEFOna
+aiYnm5583EY7FT/mXwTBuLZgMB8GA1UdIwQYMBaAFOnaaiYnm5583EY7FT/mXwTB
+uLZgMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBABp/dKQ489CCzzB1
+IX78p6RFAdda4e3lL6uVjeS3itzFIIiKvdf1/txhmsEeCEYz0El6aMnXLkpk7jAr
+kCwlAOOz2R2hlA8k8opKTYX4IQQau8DATslUFAFOvRGOim/TD/Yuch+a/VF2VQKz
+L2lUVi5Hjp9KvWe2HQYPjnJaZs/OKAmZQ4uP547dqFrTz6sWfisF1rJ60JH70cyM
+qjZQp/xYHTZIB8TCPvLgtVIGFmd/VAHVBFW2p9IBwtSxBIsEPwYQOV3XbwhhmGIv
+DWx5TpnEzH7ZM33RNbAKcdwOBxdRY+SI/ua5hYCm4QngAqY69lEuk4zXZpdDLPq1
+qxxQx0E=
+-----END CERTIFICATE-----
+EOF
+
+  private_key = <<EOF
+-----BEGIN RSA PRIVATE KEY-----
+MIICXQIBAAKBgQD2AVGKRIx+EFM0kkg76GoJv9uy0biEDHB4phQBqnDIf8J8/gq9
+eVvQrR5jJC9Uz4zp5wG/oLZlGuF92/jDbI/yS+DOAjrh30vN79Au74jGN2Cw8fIa
+k40iDUwjZaczK2Gkna54XIO9pqMcbQ6QmLUkQXsqlJ7Q4X2kL3b9iMsXcQIDAQAB
+AoGALmVBQ5p6BKx/hMKx7NqAZSZSAP+clQrji12HGGlUq/usanZfAC0LK+f6eygv
+5QbfxJ1UrxdYTukq7dm2qOSooOMUuukWInqC6ztjdLwH70CKnl0bkNB3/NkW2VNc
+32YiUuZCM9zaeBuEUclKNs+dhD2EeGdJF8KGntWGOTU/M4ECQQD9gdYb38PvaMdu
+opM3sKJF5n9pMoLDleBpCGqq3nD3DFn0V6PHQAwn30EhRN+7BbUEpde5PmfoIdAR
+uDlj/XPlAkEA+GyY1e4uU9rz+1K4ubxmtXTp9ZIR2LsqFy5L/MS5hqX2zq5GGq8g
+jZYDxnxPEUrxaWQH4nh0qdu3skUBi4a0nQJBAKJaqLkpUd7eB/t++zHLWeHSgP7q
+bny8XABod4f+9fICYwntpuJQzngqrxeTeIXaXdggLkxg/0LXhN4UUg0LoVECQQDE
+Pi1h2dyY+37/CzLH7q+IKopjJneYqQmv9C+sxs70MgjM7liM3ckub9IdqrdfJr+c
+DJw56APo5puvZNm6mbf1AkBVMDyfdOOyoHpJjrhmZWo6QqynujfwErrBYQ0sZQ3l
+O57Z0RUNQ8DRyymhLd2t5nAHTfpcFA1sBeKE6CziLbZB
+-----END RSA PRIVATE KEY-----
+EOF
+}
+
+resource "aws_elb" "bar" {
+  name = "foobar-terraform-test"
+  availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"]
+
+  listener {
+    instance_port = 8000
+    instance_protocol = "https"
+    lb_port = 80
+    // Protocol should be case insensitive
+    lb_protocol = "HttPs"
+    ssl_certificate_id = "${aws_iam_server_certificate.test_cert.arn}"
+  }
+
+  tags {
+    bar = "baz"
+  }
+
+  cross_zone_load_balancing = true
+}
+`

@@ -2,7 +2,6 @@ package aws
 
 import (
 	"fmt"
-	"os"
 	"testing"
 
 	"github.com/aws/aws-sdk-go/aws"
@@ -13,7 +12,6 @@ import (
 
 func TestAccAWSFlowLog_basic(t *testing.T) {
 	var flowLog ec2.FlowLog
-	lgn := os.Getenv("LOG_GROUP_NAME")
 
 	resource.Test(t, resource.TestCase{
 		PreCheck: func() { testAccPreCheck(t) },
@@ -21,7 +19,7 @@ func TestAccAWSFlowLog_basic(t *testing.T) {
 		CheckDestroy: testAccCheckFlowLogDestroy,
 		Steps: []resource.TestStep{
 			resource.TestStep{
-				Config: fmt.Sprintf(testAccFlowLogConfig_basic, lgn),
+				Config: testAccFlowLogConfig_basic,
 				Check: resource.ComposeTestCheckFunc(
 					testAccCheckFlowLogExists("aws_flow_log.test_flow_log", &flowLog),
 					testAccCheckAWSFlowLogAttributes(&flowLog),
@@ -33,7 +31,6 @@ func TestAccAWSFlowLog_basic(t *testing.T) {
 
 func TestAccAWSFlowLog_subnet(t *testing.T) {
 	var flowLog ec2.FlowLog
-	lgn := os.Getenv("LOG_GROUP_NAME")
 
 	resource.Test(t, resource.TestCase{
 		PreCheck: func() { testAccPreCheck(t) },
@@ -41,7 +38,7 @@ func TestAccAWSFlowLog_subnet(t *testing.T) {
 		CheckDestroy: testAccCheckFlowLogDestroy,
 		Steps: []resource.TestStep{
 			resource.TestStep{
-				Config: fmt.Sprintf(testAccFlowLogConfig_subnet, lgn),
+				Config: testAccFlowLogConfig_subnet,
 				Check: resource.ComposeTestCheckFunc(
 					testAccCheckFlowLogExists("aws_flow_log.test_flow_log_subnet", &flowLog),
 					testAccCheckAWSFlowLogAttributes(&flowLog),
@@ -143,6 +140,9 @@ resource "aws_iam_role" "test_role" {
 EOF
 }
 
+resource "aws_cloudwatch_log_group" "foobar" {
+	name = "foo-bar"
+}
 resource "aws_flow_log" "test_flow_log" {
 	# log_group_name needs to exist before hand
 	# until we have a CloudWatch Log Group Resource
@@ -155,7 +155,7 @@ resource "aws_flow_log" "test_flow_log" {
 resource "aws_flow_log" "test_flow_log_subnet" {
 	# log_group_name needs to exist before hand
 	# until we have a CloudWatch Log Group Resource
-	log_group_name = "%s"
+	log_group_name = "${aws_cloudwatch_log_group.foobar.name}"
 	iam_role_arn = "${aws_iam_role.test_role.arn}"
 	subnet_id = "${aws_subnet.test_subnet.id}"
 	traffic_type = "ALL"
@@ -200,11 +200,14 @@ resource "aws_iam_role" "test_role" {
 }
 EOF
 }
+resource "aws_cloudwatch_log_group" "foobar" {
+	name = "foo-bar"
+}
 
 resource "aws_flow_log" "test_flow_log_subnet" {
 	# log_group_name needs to exist before hand
 	# until we have a CloudWatch Log Group Resource
-	log_group_name = "%s"
+	log_group_name = "${aws_cloudwatch_log_group.foobar.name}"
 	iam_role_arn = "${aws_iam_role.test_role.arn}"
 	subnet_id = "${aws_subnet.test_subnet.id}"
 	traffic_type = "ALL"

@@ -33,6 +33,14 @@ func TestAccAWSGroupMembership_basic(t *testing.T) {
 					testAccCheckAWSGroupMembershipAttributes(&group, []string{"test-user-two", "test-user-three"}),
 				),
 			},
+
+			resource.TestStep{
+				Config: testAccAWSGroupMemberConfigUpdateDown,
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckAWSGroupMembershipExists("aws_iam_group_membership.team", &group),
+					testAccCheckAWSGroupMembershipAttributes(&group, []string{"test-user-three"}),
+				),
+			},
 		},
 	})
 }
@@ -167,3 +175,23 @@ resource "aws_iam_group_membership" "team" {
 	group = "${aws_iam_group.group.name}"
 }
 `
+
+const testAccAWSGroupMemberConfigUpdateDown = `
+resource "aws_iam_group" "group" {
+	name = "test-group"
+	path = "/"
+}
+
+resource "aws_iam_user" "user_three" {
+	name = "test-user-three"
+	path = "/"
+}
+
+resource "aws_iam_group_membership" "team" {
+	name = "tf-testing-group-membership"
+	users = [
+		"${aws_iam_user.user_three.name}",
+	]
+	group = "${aws_iam_group.group.name}"
+}
+`

@@ -6,10 +6,12 @@ import (
 	"fmt"
 	"log"
 	"strings"
+	"time"
 
 	"github.com/aws/aws-sdk-go/aws"
+	"github.com/aws/aws-sdk-go/aws/awserr"
 	"github.com/aws/aws-sdk-go/service/iam"
 	"github.com/hashicorp/terraform/helper/resource"
 	"github.com/hashicorp/terraform/helper/schema"
 )
@@ -124,14 +126,24 @@ func resourceAwsIAMServerCertificateRead(d *schema.ResourceData, meta interface{
 
 func resourceAwsIAMServerCertificateDelete(d *schema.ResourceData, meta interface{}) error {
 	conn := meta.(*AWSClient).iamconn
-	_, err := conn.DeleteServerCertificate(&iam.DeleteServerCertificateInput{
-		ServerCertificateName: aws.String(d.Get("name").(string)),
+	log.Printf("[INFO] Deleting IAM Server Certificate: %s", d.Id())
+	err := resource.Retry(1*time.Minute, func() error {
+		_, err := conn.DeleteServerCertificate(&iam.DeleteServerCertificateInput{
+			ServerCertificateName: aws.String(d.Get("name").(string)),
+		})
+
+		if err != nil {
+			if awsErr, ok := err.(awserr.Error); ok {
+				if awsErr.Code() == "DeleteConflict" && strings.Contains(awsErr.Message(), "currently in use by arn") {
+					return fmt.Errorf("[WARN] Conflict deleting server certificate: %s, retrying", awsErr.Message())
+				}
+			}
+			return resource.RetryError{Err: err}
+		}
+		return nil
 	})
 
 	if err != nil {
+		if awsErr, ok := err.(awserr.Error); ok {
+			return fmt.Errorf("[WARN] Error deleting server certificate: %s: %s", awsErr.Code(), awsErr.Message())
+		}
 		return err
 	}
 
@@ -132,6 +132,44 @@ func resourceAwsIamUserUpdate(d *schema.ResourceData, meta interface{}) error {
 func resourceAwsIamUserDelete(d *schema.ResourceData, meta interface{}) error {
 	iamconn := meta.(*AWSClient).iamconn
 
+	// IAM Users must be removed from all groups before they can be deleted
+	var groups []string
+	var marker *string
+	truncated := aws.Bool(true)
+
+	for *truncated == true {
+		listOpts := iam.ListGroupsForUserInput{
+			UserName: aws.String(d.Id()),
+		}
+
+		if marker != nil {
+			listOpts.Marker = marker
+		}
+
+		r, err := iamconn.ListGroupsForUser(&listOpts)
+		if err != nil {
+			return err
+		}
+
+		for _, g := range r.Groups {
+			groups = append(groups, *g.GroupName)
+		}
+
+		// if there's a marker present, we need to save it for pagination
+		if r.Marker != nil {
+			marker = r.Marker
+		}
+		*truncated = *r.IsTruncated
+	}
+
+	for _, g := range groups {
+		// use the iam group membership func to remove the user from all groups
+		log.Printf("[DEBUG] Removing IAM User %s from IAM Group %s", d.Id(), g)
+		if err := removeUsersFromGroup(iamconn, []*string{aws.String(d.Id())}, g); err != nil {
+			return err
+		}
+	}
+
+	request := &iam.DeleteUserInput{
+		UserName: aws.String(d.Id()),
+	}

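The delete path above has to walk every page of `ListGroupsForUser` before it can detach the user [GH-4005], using the IAM API's `Marker`/`IsTruncated` pair. A minimal, self-contained sketch of that pagination loop; `page`, `listGroups`, and `collectGroups` are hypothetical stand-ins for the response type and client call, not aws-sdk-go names:

```go
package main

import "fmt"

// page stands in for one ListGroupsForUser response: a slice of group
// names plus the truncation marker the IAM API uses for paging.
type page struct {
	groups      []string
	marker      *string
	isTruncated bool
}

// listGroups simulates the paginated API: the marker selects which page
// to serve, as ListGroupsForUserInput.Marker does.
func listGroups(pages map[string]page, marker *string) page {
	key := ""
	if marker != nil {
		key = *marker
	}
	return pages[key]
}

// collectGroups mirrors the delete path's loop: keep requesting with the
// last returned marker until the response says IsTruncated is false.
func collectGroups(pages map[string]page) []string {
	var groups []string
	var marker *string
	truncated := true
	for truncated {
		r := listGroups(pages, marker)
		groups = append(groups, r.groups...)
		marker = r.marker
		truncated = r.isTruncated
	}
	return groups
}

func main() {
	m := "page2"
	pages := map[string]page{
		"":      {groups: []string{"admins"}, marker: &m, isTruncated: true},
		"page2": {groups: []string{"devs"}},
	}
	fmt.Println(collectGroups(pages))
}
```

Collecting the names first and detaching in a second loop keeps the pagination cursor stable while the mutating `removeUsersFromGroup` calls run.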
@@ -2,6 +2,7 @@ package aws
 
 import (
 	"fmt"
+	"log"
 	"strings"
 
 	"github.com/aws/aws-sdk-go/aws"
@@ -41,6 +42,14 @@ func resourceAwsLBCookieStickinessPolicy() *schema.Resource {
 				Type:     schema.TypeInt,
 				Optional: true,
 				ForceNew: true,
+				ValidateFunc: func(v interface{}, k string) (ws []string, es []error) {
+					value := v.(int)
+					if value <= 0 {
+						es = append(es, fmt.Errorf(
+							"LB Cookie Expiration Period must be greater than zero if specified"))
+					}
+					return
+				},
 			},
 		},
 	}
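The `ValidateFunc` added above rejects non-positive expiration periods at plan time instead of letting the API fail later [GH-3908]. Its signature returns warning and error slices rather than failing hard, so the schema layer can collect every problem in one pass. Pulled out on its own (the function name here is illustrative):

```go
package main

import "fmt"

// validateExpirationPeriod mirrors the helper/schema ValidateFunc shape:
// v is the configured value, k the attribute name; ws collects warnings,
// es collects hard validation errors.
func validateExpirationPeriod(v interface{}, k string) (ws []string, es []error) {
	if v.(int) <= 0 {
		es = append(es, fmt.Errorf(
			"%s must be greater than zero if specified", k))
	}
	return
}

func main() {
	_, es := validateExpirationPeriod(0, "cookie_expiration_period")
	fmt.Println(len(es)) // 1
	_, es = validateExpirationPeriod(600, "cookie_expiration_period")
	fmt.Println(len(es)) // 0
}
```

This pairs with the create-path change below: since zero now means "unset", `CookieExpirationPeriod` is only attached to the request when a positive value was configured.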
@@ -51,11 +60,15 @@ func resourceAwsLBCookieStickinessPolicyCreate(d *schema.ResourceData, meta inte
 
 	// Provision the LBStickinessPolicy
 	lbspOpts := &elb.CreateLBCookieStickinessPolicyInput{
-		CookieExpirationPeriod: aws.Int64(int64(d.Get("cookie_expiration_period").(int))),
-		LoadBalancerName:       aws.String(d.Get("load_balancer").(string)),
-		PolicyName:             aws.String(d.Get("name").(string)),
+		LoadBalancerName: aws.String(d.Get("load_balancer").(string)),
+		PolicyName:       aws.String(d.Get("name").(string)),
 	}
 
+	if v := d.Get("cookie_expiration_period").(int); v > 0 {
+		lbspOpts.CookieExpirationPeriod = aws.Int64(int64(v))
+	}
+
+	log.Printf("[DEBUG] LB Cookie Stickiness Policy opts: %#v", lbspOpts)
 	if _, err := elbconn.CreateLBCookieStickinessPolicy(lbspOpts); err != nil {
 		return fmt.Errorf("Error creating LBCookieStickinessPolicy: %s", err)
 	}
@@ -66,6 +79,7 @@ func resourceAwsLBCookieStickinessPolicyCreate(d *schema.ResourceData, meta inte
 		PolicyNames: []*string{aws.String(d.Get("name").(string))},
 	}
 
+	log.Printf("[DEBUG] LB Cookie Stickiness create configuration: %#v", setLoadBalancerOpts)
 	if _, err := elbconn.SetLoadBalancerPoliciesOfListener(setLoadBalancerOpts); err != nil {
 		return fmt.Errorf("Error setting LBCookieStickinessPolicy: %s", err)
 	}

@@ -94,7 +94,6 @@ resource "aws_lb_cookie_stickiness_policy" "foo" {
 	name = "foo-policy"
 	load_balancer = "${aws_elb.lb.id}"
 	lb_port = 80
-	cookie_expiration_period = 600
 }
 `

@@ -174,9 +174,10 @@ func resourceAwsSecurityGroupRuleRead(d *schema.ResourceData, meta interface{})
	p := expandIPPerm(d, sg)

	if len(rules) == 0 {
-		return fmt.Errorf(
-			"[WARN] No %s rules were found for Security Group (%s) looking for Security Group Rule (%s)",
+		log.Printf("[WARN] No %s rules were found for Security Group (%s) looking for Security Group Rule (%s)",
			ruleType, *sg.GroupName, d.Id())
+		d.SetId("")
+		return nil
	}

	for _, r := range rules {

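The fix above swaps a hard error for a warning plus `d.SetId("")`, so a rule deleted out-of-band is dropped from state and re-created on the next apply instead of failing the refresh. A standalone sketch of that decision; the `state` type and `readRule` helper are illustrative stand-ins, not Terraform's `*schema.ResourceData`:

```go
package main

import "fmt"

// Toy stand-in for *schema.ResourceData; only the ID-clearing behavior matters here.
type state struct{ id string }

func (s *state) SetId(id string) { s.id = id }

// readRule mirrors the shape of the fix: a missing remote rule is not an
// error; log a warning, clear the ID, and let the next plan re-create it.
func readRule(s *state, found bool) error {
	if !found {
		fmt.Printf("[WARN] rule %s not found, removing from state\n", s.id)
		s.SetId("")
		return nil
	}
	return nil
}

func main() {
	s := &state{id: "sgrule-123"}
	if err := readRule(s, false); err != nil {
		panic(err)
	}
	fmt.Printf("%q\n", s.id) // ""
}
```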
@@ -97,6 +97,14 @@ func resourceAwsSpotInstanceRequestCreate(d *schema.ResourceData, meta interface
		},
	}

+	// If the instance is configured with a Network Interface (a subnet, has
+	// public IP, etc), then the instanceOpts.SecurityGroupIds and SubnetId will
+	// be nil
+	if len(instanceOpts.NetworkInterfaces) > 0 {
+		spotOpts.LaunchSpecification.SecurityGroupIds = instanceOpts.NetworkInterfaces[0].Groups
+		spotOpts.LaunchSpecification.SubnetId = instanceOpts.NetworkInterfaces[0].SubnetId
+	}
+
	// Make the spot instance request
	log.Printf("[DEBUG] Requesting spot bid opts: %s", spotOpts)
	resp, err := conn.RequestSpotInstances(spotOpts)

@@ -172,6 +180,10 @@ func resourceAwsSpotInstanceRequestRead(d *schema.ResourceData, meta interface{}
	// Instance ID is not set if the request is still pending
	if request.InstanceId != nil {
		d.Set("spot_instance_id", *request.InstanceId)
+		// Read the instance data, setting up connection information
+		if err := readInstance(d, meta); err != nil {
+			return fmt.Errorf("[ERR] Error reading Spot Instance Data: %s", err)
+		}
	}
	d.Set("spot_request_state", *request.State)
	d.Set("tags", tagsToMap(request.Tags))

@@ -179,6 +191,54 @@ func resourceAwsSpotInstanceRequestRead(d *schema.ResourceData, meta interface{}
	return nil
}

+func readInstance(d *schema.ResourceData, meta interface{}) error {
+	conn := meta.(*AWSClient).ec2conn
+
+	resp, err := conn.DescribeInstances(&ec2.DescribeInstancesInput{
+		InstanceIds: []*string{aws.String(d.Get("spot_instance_id").(string))},
+	})
+	if err != nil {
+		// If the instance was not found, return nil so that we can show
+		// that the instance is gone.
+		if ec2err, ok := err.(awserr.Error); ok && ec2err.Code() == "InvalidInstanceID.NotFound" {
+			return fmt.Errorf("no instance found")
+		}
+
+		// Some other error, report it
+		return err
+	}
+
+	// If nothing was found, then return no state
+	if len(resp.Reservations) == 0 {
+		return fmt.Errorf("no instances found")
+	}
+
+	instance := resp.Reservations[0].Instances[0]
+
+	// Set these fields for connection information
+	if instance != nil {
+		d.Set("public_dns", instance.PublicDnsName)
+		d.Set("public_ip", instance.PublicIpAddress)
+		d.Set("private_dns", instance.PrivateDnsName)
+		d.Set("private_ip", instance.PrivateIpAddress)
+
+		// set connection information
+		if instance.PublicIpAddress != nil {
+			d.SetConnInfo(map[string]string{
+				"type": "ssh",
+				"host": *instance.PublicIpAddress,
+			})
+		} else if instance.PrivateIpAddress != nil {
+			d.SetConnInfo(map[string]string{
+				"type": "ssh",
+				"host": *instance.PrivateIpAddress,
+			})
+		}
+	}
+
+	return nil
+}
+
func resourceAwsSpotInstanceRequestUpdate(d *schema.ResourceData, meta interface{}) error {
	conn := meta.(*AWSClient).ec2conn

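The new `readInstance` above prefers the public IP for the SSH connection info and falls back to the private one. That ordering can be sketched in isolation; `pickHost` is an illustrative name, not part of the provider:

```go
package main

import "fmt"

// pickHost prefers the public IP and falls back to the private one, the same
// ordering readInstance uses when filling in SSH connection info.
func pickHost(public, private *string) (string, bool) {
	switch {
	case public != nil:
		return *public, true
	case private != nil:
		return *private, true
	default:
		return "", false
	}
}

func main() {
	pub, priv := "203.0.113.7", "10.0.0.5"
	if h, ok := pickHost(&pub, &priv); ok {
		fmt.Println(h) // 203.0.113.7
	}
	if h, ok := pickHost(nil, &priv); ok {
		fmt.Println(h) // 10.0.0.5
	}
}
```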
@@ -62,6 +62,26 @@ func TestAccAWSSpotInstanceRequest_vpc(t *testing.T) {
	})
}

+func TestAccAWSSpotInstanceRequest_SubnetAndSG(t *testing.T) {
+	var sir ec2.SpotInstanceRequest
+
+	resource.Test(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testAccCheckAWSSpotInstanceRequestDestroy,
+		Steps: []resource.TestStep{
+			resource.TestStep{
+				Config: testAccAWSSpotInstanceRequestConfig_SubnetAndSG,
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckAWSSpotInstanceRequestExists(
+						"aws_spot_instance_request.foo", &sir),
+					testAccCheckAWSSpotInstanceRequest_InstanceAttributes(&sir),
+				),
+			},
+		},
+	})
+}
+
func testCheckKeyPair(keyName string, sir *ec2.SpotInstanceRequest) resource.TestCheckFunc {
	return func(*terraform.State) error {
		if sir.LaunchSpecification.KeyName == nil {

@@ -178,6 +198,44 @@ func testAccCheckAWSSpotInstanceRequestAttributes(
	}
}

+func testAccCheckAWSSpotInstanceRequest_InstanceAttributes(
+	sir *ec2.SpotInstanceRequest) resource.TestCheckFunc {
+	return func(s *terraform.State) error {
+		conn := testAccProvider.Meta().(*AWSClient).ec2conn
+		resp, err := conn.DescribeInstances(&ec2.DescribeInstancesInput{
+			InstanceIds: []*string{sir.InstanceId},
+		})
+		if err != nil {
+			if ec2err, ok := err.(awserr.Error); ok && ec2err.Code() == "InvalidInstanceID.NotFound" {
+				return fmt.Errorf("Spot Instance not found")
+			}
+			return err
+		}
+
+		// If nothing was found, then return no state
+		if len(resp.Reservations) == 0 {
+			return fmt.Errorf("Spot Instance not found")
+		}
+
+		instance := resp.Reservations[0].Instances[0]
+
+		var sgMatch bool
+		for _, s := range instance.SecurityGroups {
+			// Hardcoded name for the security group that should be added inside the
+			// VPC
+			if *s.GroupName == "tf_test_sg_ssh" {
+				sgMatch = true
+			}
+		}
+
+		if !sgMatch {
+			return fmt.Errorf("Error in matching Spot Instance Security Group, expected 'tf_test_sg_ssh', got %s", instance.SecurityGroups)
+		}
+
+		return nil
+	}
+}
+
func testAccCheckAWSSpotInstanceRequestAttributesVPC(
	sir *ec2.SpotInstanceRequest) resource.TestCheckFunc {
	return func(s *terraform.State) error {

@@ -249,3 +307,44 @@ resource "aws_spot_instance_request" "foo_VPC" {
	}
}
`
+
+const testAccAWSSpotInstanceRequestConfig_SubnetAndSG = `
+resource "aws_spot_instance_request" "foo" {
+	ami = "ami-6f6d635f"
+	spot_price = "0.05"
+	instance_type = "t1.micro"
+	wait_for_fulfillment = true
+	subnet_id = "${aws_subnet.tf_test_subnet.id}"
+	vpc_security_group_ids = ["${aws_security_group.tf_test_sg_ssh.id}"]
+	associate_public_ip_address = true
+}
+
+resource "aws_vpc" "default" {
+	cidr_block = "10.0.0.0/16"
+	enable_dns_hostnames = true
+
+	tags {
+		Name = "tf_test_vpc"
+	}
+}
+
+resource "aws_subnet" "tf_test_subnet" {
+	vpc_id = "${aws_vpc.default.id}"
+	cidr_block = "10.0.0.0/24"
+	map_public_ip_on_launch = true
+
+	tags {
+		Name = "tf_test_subnet"
+	}
+}
+
+resource "aws_security_group" "tf_test_sg_ssh" {
+	name = "tf_test_sg_ssh"
+	description = "tf_test_sg_ssh"
+	vpc_id = "${aws_vpc.default.id}"
+
+	tags {
+		Name = "tf_test_sg_ssh"
+	}
+}
+`

@@ -22,9 +22,8 @@ func resourceAwsSubnet() *schema.Resource {
		Schema: map[string]*schema.Schema{
			"vpc_id": &schema.Schema{
				Type:     schema.TypeString,
-				Optional: true,
+				Required: true,
				ForceNew: true,
-				Computed: true,
			},

			"cidr_block": &schema.Schema{

@@ -168,24 +168,34 @@ func resourceAwsVpnGatewayAttach(d *schema.ResourceData, meta interface{}) error
		d.Id(),
		d.Get("vpc_id").(string))

-	_, err := conn.AttachVpnGateway(&ec2.AttachVpnGatewayInput{
+	req := &ec2.AttachVpnGatewayInput{
		VpnGatewayId: aws.String(d.Id()),
		VpcId:        aws.String(d.Get("vpc_id").(string)),
	}
+
+	err := resource.Retry(30*time.Second, func() error {
+		_, err := conn.AttachVpnGateway(req)
+		if err != nil {
+			if ec2err, ok := err.(awserr.Error); ok {
+				if "InvalidVpnGatewayID.NotFound" == ec2err.Code() {
+					//retry
+					return fmt.Errorf("Gateway not found, retry for eventual consistancy")
+				}
+			}
+			return resource.RetryError{Err: err}
+		}
+		return nil
+	})

	if err != nil {
		return err
	}

	// A note on the states below: the AWS docs (as of July, 2014) say
	// that the states would be: attached, attaching, detached, detaching,
	// but when running, I noticed that the state is usually "available" when
	// it is attached.

	// Wait for it to be fully attached before continuing
	log.Printf("[DEBUG] Waiting for VPN gateway (%s) to attach", d.Id())
	stateConf := &resource.StateChangeConf{
		Pending: []string{"detached", "attaching"},
-		Target:  "available",
+		Target:  "attached",
		Refresh: vpnGatewayAttachStateRefreshFunc(conn, d.Id(), "available"),
		Timeout: 1 * time.Minute,
	}

@@ -271,6 +281,7 @@ func vpnGatewayAttachStateRefreshFunc(conn *ec2.EC2, id string, expected string)
		resp, err := conn.DescribeVpnGateways(&ec2.DescribeVpnGatewaysInput{
			VpnGatewayIds: []*string{aws.String(id)},
		})

		if err != nil {
			if ec2err, ok := err.(awserr.Error); ok && ec2err.Code() == "InvalidVpnGatewayID.NotFound" {
				resp = nil

@@ -288,10 +299,6 @@ func vpnGatewayAttachStateRefreshFunc(conn *ec2.EC2, id string, expected string)

		vpnGateway := resp.VpnGateways[0]

-		if time.Now().Sub(start) > 10*time.Second {
-			return vpnGateway, expected, nil
-		}
-
		if len(vpnGateway.VpcAttachments) == 0 {
			// No attachments, we're detached
			return vpnGateway, "detached", nil

@@ -58,7 +58,7 @@ func TestAccAzureInstance_separateHostedService(t *testing.T) {
					"azure_instance.foo", testAccHostedServiceName, &dpmt),
				testAccCheckAzureInstanceBasicAttributes(&dpmt),
				resource.TestCheckResourceAttr(
-					"azure_instance.foo", "name", "terraform-test"),
+					"azure_instance.foo", "name", instanceName),
				resource.TestCheckResourceAttr(
					"azure_instance.foo", "hosted_service_name", "terraform-testing-service"),
				resource.TestCheckResourceAttr(

@@ -392,8 +392,8 @@ resource "azure_hosted_service" "foo" {
}

resource "azure_instance" "foo" {
-    name = "terraform-test"
-    hosted_service_name = "${azure_hosted_service.foo.name}"
+    name = "%s"
+    hosted_service_name = "${azure_hosted_service.foo.name}"
    image = "Ubuntu Server 14.04 LTS"
    size = "Basic_A1"
    storage_service_name = "%s"

@@ -407,7 +407,7 @@ resource "azure_instance" "foo" {
        public_port = 22
        private_port = 22
    }
-}`, testAccHostedServiceName, testAccStorageServiceName)
+}`, testAccHostedServiceName, instanceName, testAccStorageServiceName)

var testAccAzureInstance_advanced = fmt.Sprintf(`
resource "azure_virtual_network" "foo" {

@@ -18,10 +18,11 @@ func Provider() terraform.ResourceProvider {
		},

		ResourcesMap: map[string]*schema.Resource{
-			"digitalocean_domain":  resourceDigitalOceanDomain(),
-			"digitalocean_droplet": resourceDigitalOceanDroplet(),
-			"digitalocean_record":  resourceDigitalOceanRecord(),
-			"digitalocean_ssh_key": resourceDigitalOceanSSHKey(),
+			"digitalocean_domain":      resourceDigitalOceanDomain(),
+			"digitalocean_droplet":     resourceDigitalOceanDroplet(),
+			"digitalocean_floating_ip": resourceDigitalOceanFloatingIp(),
+			"digitalocean_record":      resourceDigitalOceanRecord(),
+			"digitalocean_ssh_key":     resourceDigitalOceanSSHKey(),
		},

		ConfigureFunc: providerConfigure,

@@ -0,0 +1,159 @@
package digitalocean

import (
	"fmt"
	"log"
	"time"

	"github.com/digitalocean/godo"
	"github.com/hashicorp/terraform/helper/resource"
	"github.com/hashicorp/terraform/helper/schema"
)

func resourceDigitalOceanFloatingIp() *schema.Resource {
	return &schema.Resource{
		Create: resourceDigitalOceanFloatingIpCreate,
		Read:   resourceDigitalOceanFloatingIpRead,
		Delete: resourceDigitalOceanFloatingIpDelete,

		Schema: map[string]*schema.Schema{
			"ip_address": &schema.Schema{
				Type:     schema.TypeString,
				Optional: true,
				Computed: true,
			},

			"region": &schema.Schema{
				Type:     schema.TypeString,
				Required: true,
				ForceNew: true,
			},

			"droplet_id": &schema.Schema{
				Type:     schema.TypeInt,
				Optional: true,
				ForceNew: true,
			},
		},
	}
}

func resourceDigitalOceanFloatingIpCreate(d *schema.ResourceData, meta interface{}) error {
	client := meta.(*godo.Client)

	log.Printf("[INFO] Create a FloatingIP In a Region")
	regionOpts := &godo.FloatingIPCreateRequest{
		Region: d.Get("region").(string),
	}

	log.Printf("[DEBUG] FloatingIP Create: %#v", regionOpts)
	floatingIp, _, err := client.FloatingIPs.Create(regionOpts)
	if err != nil {
		return fmt.Errorf("Error creating FloatingIP: %s", err)
	}

	d.SetId(floatingIp.IP)

	if v, ok := d.GetOk("droplet_id"); ok {
		log.Printf("[INFO] Assigning the Floating IP to the Droplet %s", v.(int))
		action, _, err := client.FloatingIPActions.Assign(d.Id(), v.(int))
		if err != nil {
			return fmt.Errorf(
				"Error Assigning FloatingIP (%s) to the droplet: %s", d.Id(), err)
		}

		_, unassignedErr := waitForFloatingIPReady(d, "completed", []string{"new", "in-progress"}, "status", meta, action.ID)
		if unassignedErr != nil {
			return fmt.Errorf(
				"Error waiting for FloatingIP (%s) to be Assigned: %s", d.Id(), unassignedErr)
		}
	}

	return resourceDigitalOceanFloatingIpRead(d, meta)
}

func resourceDigitalOceanFloatingIpRead(d *schema.ResourceData, meta interface{}) error {
	client := meta.(*godo.Client)

	log.Printf("[INFO] Reading the details of the FloatingIP %s", d.Id())
	floatingIp, _, err := client.FloatingIPs.Get(d.Id())
	if err != nil {
		return fmt.Errorf("Error retrieving FloatingIP: %s", err)
	}

	if _, ok := d.GetOk("droplet_id"); ok {
		log.Printf("[INFO] The region of the Droplet is %s", floatingIp.Droplet.Region)
		d.Set("region", floatingIp.Droplet.Region.Slug)
	} else {
		d.Set("region", floatingIp.Region.Slug)
	}

	d.Set("ip_address", floatingIp.IP)

	return nil
}

func resourceDigitalOceanFloatingIpDelete(d *schema.ResourceData, meta interface{}) error {
	client := meta.(*godo.Client)

	if _, ok := d.GetOk("droplet_id"); ok {
		log.Printf("[INFO] Unassigning the Floating IP from the Droplet")
		action, _, err := client.FloatingIPActions.Unassign(d.Id())
		if err != nil {
			return fmt.Errorf(
				"Error Unassigning FloatingIP (%s) from the droplet: %s", d.Id(), err)
		}

		_, unassignedErr := waitForFloatingIPReady(d, "completed", []string{"new", "in-progress"}, "status", meta, action.ID)
		if unassignedErr != nil {
			return fmt.Errorf(
				"Error waiting for FloatingIP (%s) to be unassigned: %s", d.Id(), unassignedErr)
		}
	}

	log.Printf("[INFO] Deleting FloatingIP: %s", d.Id())
	_, err := client.FloatingIPs.Delete(d.Id())
	if err != nil {
		return fmt.Errorf("Error deleting FloatingIP: %s", err)
	}

	d.SetId("")
	return nil
}

func waitForFloatingIPReady(
	d *schema.ResourceData, target string, pending []string, attribute string, meta interface{}, actionId int) (interface{}, error) {
	log.Printf(
		"[INFO] Waiting for FloatingIP (%s) to have %s of %s",
		d.Id(), attribute, target)

	stateConf := &resource.StateChangeConf{
		Pending:    pending,
		Target:     target,
		Refresh:    newFloatingIPStateRefreshFunc(d, attribute, meta, actionId),
		Timeout:    60 * time.Minute,
		Delay:      10 * time.Second,
		MinTimeout: 3 * time.Second,

		NotFoundChecks: 60,
	}

	return stateConf.WaitForState()
}

func newFloatingIPStateRefreshFunc(
	d *schema.ResourceData, attribute string, meta interface{}, actionId int) resource.StateRefreshFunc {
	client := meta.(*godo.Client)
	return func() (interface{}, string, error) {

		log.Printf("[INFO] Assigning the Floating IP to the Droplet")
		action, _, err := client.FloatingIPActions.Get(d.Id(), actionId)
		if err != nil {
			return nil, "", fmt.Errorf("Error retrieving FloatingIP (%s) ActionId (%d): %s", d.Id(), actionId, err)
		}

		log.Printf("[INFO] The FloatingIP Action Status is %s", action.Status)
		return &action, action.Status, nil
	}
}

@@ -0,0 +1,121 @@
package digitalocean

import (
	"fmt"
	"testing"

	"github.com/digitalocean/godo"
	"github.com/hashicorp/terraform/helper/resource"
	"github.com/hashicorp/terraform/terraform"
)

func TestAccDigitalOceanFloatingIP_Region(t *testing.T) {
	var floatingIP godo.FloatingIP

	resource.Test(t, resource.TestCase{
		PreCheck:     func() { testAccPreCheck(t) },
		Providers:    testAccProviders,
		CheckDestroy: testAccCheckDigitalOceanFloatingIPDestroy,
		Steps: []resource.TestStep{
			resource.TestStep{
				Config: testAccCheckDigitalOceanFloatingIPConfig_region,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckDigitalOceanFloatingIPExists("digitalocean_floating_ip.foobar", &floatingIP),
					resource.TestCheckResourceAttr(
						"digitalocean_floating_ip.foobar", "region", "nyc3"),
				),
			},
		},
	})
}

func TestAccDigitalOceanFloatingIP_Droplet(t *testing.T) {
	var floatingIP godo.FloatingIP

	resource.Test(t, resource.TestCase{
		PreCheck:     func() { testAccPreCheck(t) },
		Providers:    testAccProviders,
		CheckDestroy: testAccCheckDigitalOceanFloatingIPDestroy,
		Steps: []resource.TestStep{
			resource.TestStep{
				Config: testAccCheckDigitalOceanFloatingIPConfig_droplet,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckDigitalOceanFloatingIPExists("digitalocean_floating_ip.foobar", &floatingIP),
					resource.TestCheckResourceAttr(
						"digitalocean_floating_ip.foobar", "region", "sgp1"),
				),
			},
		},
	})
}

func testAccCheckDigitalOceanFloatingIPDestroy(s *terraform.State) error {
	client := testAccProvider.Meta().(*godo.Client)

	for _, rs := range s.RootModule().Resources {
		if rs.Type != "digitalocean_floating_ip" {
			continue
		}

		// Try to find the key
		_, _, err := client.FloatingIPs.Get(rs.Primary.ID)

		if err == nil {
			fmt.Errorf("Floating IP still exists")
		}
	}

	return nil
}

func testAccCheckDigitalOceanFloatingIPExists(n string, floatingIP *godo.FloatingIP) resource.TestCheckFunc {
	return func(s *terraform.State) error {
		rs, ok := s.RootModule().Resources[n]

		if !ok {
			return fmt.Errorf("Not found: %s", n)
		}

		if rs.Primary.ID == "" {
			return fmt.Errorf("No Record ID is set")
		}

		client := testAccProvider.Meta().(*godo.Client)

		// Try to find the FloatingIP
		foundFloatingIP, _, err := client.FloatingIPs.Get(rs.Primary.ID)

		if err != nil {
			return err
		}

		if foundFloatingIP.IP != rs.Primary.ID {
			return fmt.Errorf("Record not found")
		}

		*floatingIP = *foundFloatingIP

		return nil
	}
}

var testAccCheckDigitalOceanFloatingIPConfig_region = `
resource "digitalocean_floating_ip" "foobar" {
	region = "nyc3"
}`

var testAccCheckDigitalOceanFloatingIPConfig_droplet = `

resource "digitalocean_droplet" "foobar" {
	name = "baz"
	size = "1gb"
	image = "centos-5-8-x32"
	region = "sgp1"
	ipv6 = true
	private_networking = true
}

resource "digitalocean_floating_ip" "foobar" {
	droplet_id = "${digitalocean_droplet.foobar.id}"
	region = "${digitalocean_droplet.foobar.region}"
}`

@@ -104,33 +104,6 @@ func TestAccDigitalOceanRecord_HostnameValue(t *testing.T) {
	})
}

-func TestAccDigitalOceanRecord_RelativeHostnameValue(t *testing.T) {
-	var record godo.DomainRecord
-
-	resource.Test(t, resource.TestCase{
-		PreCheck:     func() { testAccPreCheck(t) },
-		Providers:    testAccProviders,
-		CheckDestroy: testAccCheckDigitalOceanRecordDestroy,
-		Steps: []resource.TestStep{
-			resource.TestStep{
-				Config: testAccCheckDigitalOceanRecordConfig_relative_cname,
-				Check: resource.ComposeTestCheckFunc(
-					testAccCheckDigitalOceanRecordExists("digitalocean_record.foobar", &record),
-					testAccCheckDigitalOceanRecordAttributesHostname("a.b", &record),
-					resource.TestCheckResourceAttr(
-						"digitalocean_record.foobar", "name", "terraform"),
-					resource.TestCheckResourceAttr(
-						"digitalocean_record.foobar", "domain", "foobar-test-terraform.com"),
-					resource.TestCheckResourceAttr(
-						"digitalocean_record.foobar", "value", "a.b"),
-					resource.TestCheckResourceAttr(
-						"digitalocean_record.foobar", "type", "CNAME"),
-				),
-			},
-		},
-	})
-}
-
func TestAccDigitalOceanRecord_ExternalHostnameValue(t *testing.T) {
	var record godo.DomainRecord

@@ -17,7 +17,7 @@ func TestAccDockerImage_basic(t *testing.T) {
				resource.TestCheckResourceAttr(
					"docker_image.foo",
					"latest",
-					"b7cf8f0d9e82c9d96bd7afd22c600bfdb86b8d66c50d29164e5ad2fb02f7187b"),
+					"d52aff8195301dba95e8e3d14f0c3738a874237afd54233d250a2fc4489bfa83"),
			),
		},
	},

@@ -74,7 +74,7 @@ const testAccComputeSslCertificate_basic = `
resource "google_compute_ssl_certificate" "foobar" {
	name = "terraform-test"
	description = "very descriptive"
-	private_key = "${file("~/cert/example.key")}"
-	certificate = "${file("~/cert/example.crt")}"
+	private_key = "${file("test-fixtures/ssl_cert/test.key")}"
+	certificate = "${file("test-fixtures/ssl_cert/test.crt")}"
}
`

@@ -142,15 +142,15 @@ resource "google_compute_url_map" "foobar" {
resource "google_compute_ssl_certificate" "foobar1" {
	name = "terraform-test1"
	description = "very descriptive"
-	private_key = "${file("~/cert/example.key")}"
-	certificate = "${file("~/cert/example.crt")}"
+	private_key = "${file("test-fixtures/ssl_cert/test.key")}"
+	certificate = "${file("test-fixtures/ssl_cert/test.crt")}"
}

resource "google_compute_ssl_certificate" "foobar2" {
	name = "terraform-test2"
	description = "very descriptive"
-	private_key = "${file("~/cert/example.key")}"
-	certificate = "${file("~/cert/example.crt")}"
+	private_key = "${file("test-fixtures/ssl_cert/test.key")}"
+	certificate = "${file("test-fixtures/ssl_cert/test.crt")}"
}
`

@@ -199,14 +199,14 @@ resource "google_compute_url_map" "foobar" {
resource "google_compute_ssl_certificate" "foobar1" {
	name = "terraform-test1"
	description = "very descriptive"
-	private_key = "${file("~/cert/example.key")}"
-	certificate = "${file("~/cert/example.crt")}"
+	private_key = "${file("test-fixtures/ssl_cert/test.key")}"
+	certificate = "${file("test-fixtures/ssl_cert/test.crt")}"
}

resource "google_compute_ssl_certificate" "foobar2" {
	name = "terraform-test2"
	description = "very descriptive"
-	private_key = "${file("~/cert/example.key")}"
-	certificate = "${file("~/cert/example.crt")}"
+	private_key = "${file("test-fixtures/ssl_cert/test.key")}"
+	certificate = "${file("test-fixtures/ssl_cert/test.crt")}"
}
`

@@ -101,6 +101,7 @@ func testAccGoogleSqlDatabaseDestroy(s *terraform.State) error {
var testGoogleSqlDatabase_basic = fmt.Sprintf(`
resource "google_sql_database_instance" "instance" {
	name = "tf-lw-%d"
+	region = "us-central"
	settings {
		tier = "D0"
	}

@@ -0,0 +1,21 @@
-----BEGIN CERTIFICATE-----
MIIDgjCCAmoCCQCPrrFCwXharzANBgkqhkiG9w0BAQsFADCBgjELMAkGA1UEBhMC
VVMxETAPBgNVBAgMCE5ldy1Zb3JrMQwwCgYDVQQHDANOWUMxFTATBgNVBAoMDE9y
Z2FuaXphdGlvbjEQMA4GA1UECwwHU2VjdGlvbjEQMA4GA1UEAwwHTXkgTmFtZTEX
MBUGCSqGSIb3DQEJARYIbWVAbWUubWUwHhcNMTUxMTIwMTM0MTIwWhcNMTYxMTE5
MTM0MTIwWjCBgjELMAkGA1UEBhMCVVMxETAPBgNVBAgMCE5ldy1Zb3JrMQwwCgYD
VQQHDANOWUMxFTATBgNVBAoMDE9yZ2FuaXphdGlvbjEQMA4GA1UECwwHU2VjdGlv
bjEQMA4GA1UEAwwHTXkgTmFtZTEXMBUGCSqGSIb3DQEJARYIbWVAbWUubWUwggEi
MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDbTuIV7EySLAijNAnsXG7HO/m4
pu1Yy2sWWcqIifaSq0pL3JUGmWRKFRTb4msFIuKrkvsMLxWy6zIOnx0okRb7sTKb
XLBiN7zjSLCD6k31zlllO0GHkPu923VeGZ52xlIWxo22R2yoRuddD0YkQPctV7q9
H7sKJq2141Ut9reMT2LKVRPlzf8wTcv+F+cAc3/i9Tib90GqclGrwk6XE59RBgzT
m9V7b/V+uusDtj6T3/ne5MHnq4g6lUz4mE7FneDVealjx7fHXtWSmR7dfbJilJj1
foR/wPBeopdR5wAZS26bHjFIBMqAc7AgxbXdMorEDIY4i2OFjPTu22YYtmFZAgMB
AAEwDQYJKoZIhvcNAQELBQADggEBAHmgedgYDSIPiyaZnCWG56jFqYtHYS5xMOFS
T4FBEPsqgjbSYgjiugeQ37+nsbg/NQf4Z/Ca9CS20f7et8pjZWYqbqdGbifHSUAP
MsR3MK/8EsNVskioufvgExNrqHbcJD8aKrBHAyA6NbjaTnnBPrwdfcXxnWdpPNOh
yG6xSdi807t2e7dX59Nr6Fg6DHd9XPEM7VL/k5RBQyBf1ZgrO9cwA2jl8UtWKpaa
fO24S7Acwggi9TjJnyHOhWh21DEUEQG+czXAd5/LSjynTcI7xmuyfEgqJPIrskPv
OqM8II/iNr9Zglvp6hlmzIWnhgwLZiEljYGuMRNhr21jlHsCCYY=
-----END CERTIFICATE-----

@@ -0,0 +1,17 @@
-----BEGIN CERTIFICATE REQUEST-----
MIICyDCCAbACAQAwgYIxCzAJBgNVBAYTAlVTMREwDwYDVQQIDAhOZXctWW9yazEM
MAoGA1UEBwwDTllDMRUwEwYDVQQKDAxPcmdhbml6YXRpb24xEDAOBgNVBAsMB1Nl
Y3Rpb24xEDAOBgNVBAMMB015IE5hbWUxFzAVBgkqhkiG9w0BCQEWCG1lQG1lLm1l
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA207iFexMkiwIozQJ7Fxu
xzv5uKbtWMtrFlnKiIn2kqtKS9yVBplkShUU2+JrBSLiq5L7DC8VsusyDp8dKJEW
+7Eym1ywYje840iwg+pN9c5ZZTtBh5D7vdt1XhmedsZSFsaNtkdsqEbnXQ9GJED3
LVe6vR+7CiatteNVLfa3jE9iylUT5c3/ME3L/hfnAHN/4vU4m/dBqnJRq8JOlxOf
UQYM05vVe2/1frrrA7Y+k9/53uTB56uIOpVM+JhOxZ3g1XmpY8e3x17Vkpke3X2y
YpSY9X6Ef8DwXqKXUecAGUtumx4xSATKgHOwIMW13TKKxAyGOItjhYz07ttmGLZh
WQIDAQABoAAwDQYJKoZIhvcNAQELBQADggEBAGtNMtOtE7gUP5DbkZNxPsoGazkM
c3//gjH3MsTFzQ39r1uNq3fnbBBoYeQnsI05Bf7kSEVeT6fzdl5aBhOWxFF6uyTI
TZzcH9kvZ2IwFDbsa6vqrIJ6jIkpCIfPR8wN5LlBca9oZwJnt4ejF3RB5YBfnmeo
t5JXTbxGRvPBVRZCfJgcxcn731m1Rc8c9wud2IaNWiLob2J/92BJhSt/aiYps/TJ
ww5dRi6zhpxhR+RjlstG3C6oeYeQlSgzeBjhRcxtPHQWfcVfRLCtubqvuUQPcpw2
YqMujh4vyKo+JEtqI8gqp4Bu0HVI1vr1vhblntFrQb0kueqV94HarE0uH+c=
-----END CERTIFICATE REQUEST-----

@@ -0,0 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEA207iFexMkiwIozQJ7Fxuxzv5uKbtWMtrFlnKiIn2kqtKS9yV
BplkShUU2+JrBSLiq5L7DC8VsusyDp8dKJEW+7Eym1ywYje840iwg+pN9c5ZZTtB
h5D7vdt1XhmedsZSFsaNtkdsqEbnXQ9GJED3LVe6vR+7CiatteNVLfa3jE9iylUT
5c3/ME3L/hfnAHN/4vU4m/dBqnJRq8JOlxOfUQYM05vVe2/1frrrA7Y+k9/53uTB
56uIOpVM+JhOxZ3g1XmpY8e3x17Vkpke3X2yYpSY9X6Ef8DwXqKXUecAGUtumx4x
SATKgHOwIMW13TKKxAyGOItjhYz07ttmGLZhWQIDAQABAoIBABEjzyOrfiiGbH5k
2MmyR64mj9PQqAgijdIHXn7hWXYJERtwt+z2HBJ2J1UwEvEp0tFaAWjoXSfInfbq
lJrRDBzLsorV6asjdA3HZpRIwaMOZ4oz4WE5AZPLDRc3pVzfDxdcmUK/vkxAjmCF
ixPWR/sxOhUB39phP35RsByRhbLfdGQkSspmD41imASqdqG96wsuc9Rk1Qjx9szr
kUxZkQGKUkRz4yQCwTR4+w2I21/cT5kxwM/KZG5f62tqB9urtFuTONrm7Z7xJv1T
BkHxQJxtsGhG8Dp8RB3t5PLou39xaBrjS5lpzJYtzrja25XGNEuONiQlWEDmk7li
acJWPQECgYEA98hjLlSO2sudUI36kJWc9CBqFznnUD2hIWRBM/Xc7mBhFGWxoxGm
f2xri91XbfH3oICIIBs52AdCyfjYbpF0clq8pSL+gHzRQTLcLUKVz3BxnxJAxyIG
QYPxmtMLVSzB5eZh+bPvcCyzd2ALDE1vFClQI/BcK/2dsJcXP2gSqdECgYEA4pTA
3okbdWOutnOwakyfVAbXjMx81D9ii2ZGHbuPY4PSD/tAe8onkEzHJgvinjddbi9p
oGwFhPqgfdWX7YNz5qsj9HP6Ehy7dw/EwvmX49yHsere85LiPMn/T9KkK0Pbn+HY
+0Q+ov/2wV3J7zPo8fffyQYizUKexGUN3XspGQkCgYEArFsMeobBE/q8g/MuzvHz
SnFduqhBebRU59hH7q/gLUSHYtvWM7ssWMh/Crw9e7HrcQ7XIZYup1FtqPZa/pZZ
LM5nGGt+IrwwBq0tMKJ3eOMbde4Jdzr4pQv1vJ9+65GFkritgDckn5/IeoopRTZ7
xMd0AnvIcaUp0lNXDXkEOnECgYAk2C2YwlDdwOzrLFrWnkkWX9pzQdlWpkv/AQ2L
zjEd7JSfFqtAtfnDBEkqDaq3MaeWwEz70jT/j8XDUJVZARQ6wT+ig615foSZcs37
Kp0hZ34FV30TvKHfYrWKpGUfx/QRxqcDDPDmjprwjLDGnflWR4lzZfUIzbmFlC0y
A9IGCQKBgH3ieP6nYCJexppvdxoycFkp3bSPr26MOCvACNsa+wJxBo59Zxs0YAmJ
9f6OOdUExueRY5iZCy0KPSgjYj96RuR0gV3cKc/WdOot4Ypgc/TK+r/UPDM2VAHk
yJuxkyXdOrstesxZIxpourS3kONtQUqMFmdqQeBngZl4v7yBtiRW
-----END RSA PRIVATE KEY-----

@@ -30,7 +30,7 @@ func TestAccHerokuCert_Basic(t *testing.T) {

resource "heroku_cert" "ssl_certificate" {
  app = "${heroku_app.foobar.name}"
-  depends_on = "heroku_addon.ssl"
+  depends_on = ["heroku_addon.ssl"]
  certificate_chain="${file("` + certificateChainFile + `")}"
  private_key="${file("` + wd + `/test-fixtures/terraform.key")}"
}

@@ -581,8 +581,8 @@ func addHardDisk(vm *object.VirtualMachine, size, iops int64, diskType string) e
	}
}

-// createNetworkDevice creates VirtualDeviceConfigSpec for Network Device.
-func createNetworkDevice(f *find.Finder, label, adapterType string) (*types.VirtualDeviceConfigSpec, error) {
+// buildNetworkDevice builds VirtualDeviceConfigSpec for Network Device.
+func buildNetworkDevice(f *find.Finder, label, adapterType string) (*types.VirtualDeviceConfigSpec, error) {
	network, err := f.Network(context.TODO(), "*"+label)
	if err != nil {
		return nil, err

@@ -626,8 +626,8 @@ func createNetworkDevice(f *find.Finder, label, adapterType string) (*types.Virt
	}
}

-// createVMRelocateSpec creates VirtualMachineRelocateSpec to set a place for a new VirtualMachine.
-func createVMRelocateSpec(rp *object.ResourcePool, ds *object.Datastore, vm *object.VirtualMachine) (types.VirtualMachineRelocateSpec, error) {
+// buildVMRelocateSpec builds VirtualMachineRelocateSpec to set a place for a new VirtualMachine.
+func buildVMRelocateSpec(rp *object.ResourcePool, ds *object.Datastore, vm *object.VirtualMachine) (types.VirtualMachineRelocateSpec, error) {
	var key int

	devices, err := vm.Device(context.TODO())

@@ -673,8 +673,8 @@ func getDatastoreObject(client *govmomi.Client, f *object.DatacenterFolders, nam
	return ref.Reference(), nil
}

-// createStoragePlacementSpecCreate creates StoragePlacementSpec for create action.
-func createStoragePlacementSpecCreate(f *object.DatacenterFolders, rp *object.ResourcePool, storagePod object.StoragePod, configSpec types.VirtualMachineConfigSpec) types.StoragePlacementSpec {
+// buildStoragePlacementSpecCreate builds StoragePlacementSpec for create action.
+func buildStoragePlacementSpecCreate(f *object.DatacenterFolders, rp *object.ResourcePool, storagePod object.StoragePod, configSpec types.VirtualMachineConfigSpec) types.StoragePlacementSpec {
	vmfr := f.VmFolder.Reference()
	rpr := rp.Reference()
	spr := storagePod.Reference()

@@ -692,8 +692,8 @@ func createStoragePlacementSpecCreate(f *object.DatacenterFolders, rp *object.Re
	return sps
}

-// createStoragePlacementSpecClone creates StoragePlacementSpec for clone action.
-func createStoragePlacementSpecClone(c *govmomi.Client, f *object.DatacenterFolders, vm *object.VirtualMachine, rp *object.ResourcePool, storagePod object.StoragePod) types.StoragePlacementSpec {
+// buildStoragePlacementSpecClone builds StoragePlacementSpec for clone action.
+func buildStoragePlacementSpecClone(c *govmomi.Client, f *object.DatacenterFolders, vm *object.VirtualMachine, rp *object.ResourcePool, storagePod object.StoragePod) types.StoragePlacementSpec {
	vmr := vm.Reference()
	vmfr := f.VmFolder.Reference()
	rpr := rp.Reference()

@@ -802,7 +802,7 @@ func (vm *virtualMachine) createVirtualMachine(c *govmomi.Client) error {
	networkDevices := []types.BaseVirtualDeviceConfigSpec{}
	for _, network := range vm.networkInterfaces {
		// network device
-		nd, err := createNetworkDevice(finder, network.label, "e1000")
+		nd, err := buildNetworkDevice(finder, network.label, "e1000")
		if err != nil {
			return err
		}

@@ -857,7 +857,7 @@ func (vm *virtualMachine) createVirtualMachine(c *govmomi.Client) error {
		sp := object.StoragePod{
			object.NewFolder(c.Client, d),
		}
-		sps := createStoragePlacementSpecCreate(dcFolders, resourcePool, sp, configSpec)
+		sps := buildStoragePlacementSpecCreate(dcFolders, resourcePool, sp, configSpec)
		datastore, err = findDatastore(c, sps)
		if err != nil {
			return err

@@ -974,7 +974,7 @@ func (vm *virtualMachine) deployVirtualMachine(c *govmomi.Client) error {
		sp := object.StoragePod{
			object.NewFolder(c.Client, d),
		}
-		sps := createStoragePlacementSpecClone(c, dcFolders, template, resourcePool, sp)
+		sps := buildStoragePlacementSpecClone(c, dcFolders, template, resourcePool, sp)
|
||||
datastore, err = findDatastore(c, sps)
|
||||
if err != nil {
|
||||
return err
|
||||
|
@ -986,7 +986,7 @@ func (vm *virtualMachine) deployVirtualMachine(c *govmomi.Client) error {
|
|||
}
|
||||
log.Printf("[DEBUG] datastore: %#v", datastore)
|
||||
|
||||
relocateSpec, err := createVMRelocateSpec(resourcePool, datastore, template)
|
||||
relocateSpec, err := buildVMRelocateSpec(resourcePool, datastore, template)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
@ -997,7 +997,7 @@ func (vm *virtualMachine) deployVirtualMachine(c *govmomi.Client) error {
|
|||
networkConfigs := []types.CustomizationAdapterMapping{}
|
||||
for _, network := range vm.networkInterfaces {
|
||||
// network device
|
||||
nd, err := createNetworkDevice(finder, network.label, "vmxnet3")
|
||||
nd, err := buildNetworkDevice(finder, network.label, "vmxnet3")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
|
|
@@ -8,7 +8,7 @@ import (
 	"io"
 	"log"
 	"os"
-	"path/filepath"
+	"path"
 	"regexp"
 	"strings"
 	"text/template"
@@ -322,7 +322,7 @@ func (p *Provisioner) runChefClientFunc(
 	chefCmd string,
 	confDir string) func(terraform.UIOutput, communicator.Communicator) error {
 	return func(o terraform.UIOutput, comm communicator.Communicator) error {
-		fb := filepath.Join(confDir, firstBoot)
+		fb := path.Join(confDir, firstBoot)
 		var cmd string

 		// Policyfiles do not support chef environments, so don't pass the `-E` flag.
@@ -337,8 +337,8 @@ func (p *Provisioner) runChefClientFunc(
 			return fmt.Errorf("Error creating logfile directory %s: %v", logfileDir, err)
 		}

-		logFile := filepath.Join(logfileDir, p.NodeName)
-		f, err := os.Create(filepath.Join(logFile))
+		logFile := path.Join(logfileDir, p.NodeName)
+		f, err := os.Create(path.Join(logFile))
 		if err != nil {
 			return fmt.Errorf("Error creating logfile %s: %v", logFile, err)
 		}
@@ -354,7 +354,7 @@ func (p *Provisioner) runChefClientFunc(

 // Output implementation of terraform.UIOutput interface
 func (p *Provisioner) Output(output string) {
-	logFile := filepath.Join(logfileDir, p.NodeName)
+	logFile := path.Join(logfileDir, p.NodeName)
 	f, err := os.OpenFile(logFile, os.O_APPEND|os.O_WRONLY, 0666)
 	if err != nil {
 		log.Printf("Error creating logfile %s: %v", logFile, err)
@@ -389,7 +389,7 @@ func (p *Provisioner) deployConfigFiles(
 		f := strings.NewReader(contents)

 		// Copy the validation key to the new instance
-		if err := comm.Upload(filepath.Join(confDir, validationKey), f); err != nil {
+		if err := comm.Upload(path.Join(confDir, validationKey), f); err != nil {
 			return fmt.Errorf("Uploading %s failed: %v", validationKey, err)
 		}

@@ -400,7 +400,7 @@ func (p *Provisioner) deployConfigFiles(
 			}
 			s := strings.NewReader(contents)
 			// Copy the secret key to the new instance
-			if err := comm.Upload(filepath.Join(confDir, secretKey), s); err != nil {
+			if err := comm.Upload(path.Join(confDir, secretKey), s); err != nil {
 				return fmt.Errorf("Uploading %s failed: %v", secretKey, err)
 			}
 		}
@@ -420,7 +420,7 @@ func (p *Provisioner) deployConfigFiles(
 	}

 	// Copy the client config to the new instance
-	if err := comm.Upload(filepath.Join(confDir, clienrb), &buf); err != nil {
+	if err := comm.Upload(path.Join(confDir, clienrb), &buf); err != nil {
 		return fmt.Errorf("Uploading %s failed: %v", clienrb, err)
 	}

@@ -449,7 +449,7 @@ func (p *Provisioner) deployConfigFiles(
 	}

 	// Copy the first-boot.json to the new instance
-	if err := comm.Upload(filepath.Join(confDir, firstBoot), bytes.NewReader(d)); err != nil {
+	if err := comm.Upload(path.Join(confDir, firstBoot), bytes.NewReader(d)); err != nil {
 		return fmt.Errorf("Uploading %s failed: %v", firstBoot, err)
 	}

@@ -469,8 +469,8 @@ func (p *Provisioner) deployOhaiHints(
 		defer f.Close()

 		// Copy the hint to the new instance
-		if err := comm.Upload(filepath.Join(hintDir, filepath.Base(hint)), f); err != nil {
-			return fmt.Errorf("Uploading %s failed: %v", filepath.Base(hint), err)
+		if err := comm.Upload(path.Join(hintDir, path.Base(hint)), f); err != nil {
+			return fmt.Errorf("Uploading %s failed: %v", path.Base(hint), err)
 		}
 	}
@@ -87,14 +87,16 @@ func (c *RefreshCommand) Run(args []string) int {
 		c.Ui.Error(err.Error())
 		return 1
 	}
-	if !validateContext(ctx, c.Ui) {
-		return 1
-	}

 	if err := ctx.Input(c.InputMode()); err != nil {
 		c.Ui.Error(fmt.Sprintf("Error configuring: %s", err))
 		return 1
 	}

+	if !validateContext(ctx, c.Ui) {
+		return 1
+	}
+
 	newState, err := ctx.Refresh()
 	if err != nil {
 		c.Ui.Error(fmt.Sprintf("Error refreshing state: %s", err))
@@ -8,6 +8,7 @@ import (
 	"strings"
 	"testing"

+	"bytes"
 	"github.com/hashicorp/terraform/terraform"
 	"github.com/mitchellh/cli"
 )
@@ -413,6 +414,34 @@ func TestRefresh_varFileDefault(t *testing.T) {
 	}
 }

+func TestRefresh_varsUnset(t *testing.T) {
+	// Disable test mode so input would be asked
+	test = false
+	defer func() { test = true }()
+
+	defaultInputReader = bytes.NewBufferString("bar\n")
+
+	state := testState()
+	statePath := testStateFile(t, state)
+
+	p := testProvider()
+	ui := new(cli.MockUi)
+	c := &RefreshCommand{
+		Meta: Meta{
+			ContextOpts: testCtxConfig(p),
+			Ui:          ui,
+		},
+	}
+
+	args := []string{
+		"-state", statePath,
+		testFixturePath("refresh-unset-var"),
+	}
+	if code := c.Run(args); code != 0 {
+		t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String())
+	}
+}
+
 func TestRefresh_backup(t *testing.T) {
 	state := testState()
 	statePath := testStateFile(t, state)
@@ -0,0 +1,7 @@
+variable "should_ask" {}
+
+provider "test" {}
+
+resource "test_instance" "foo" {
+    ami = "${var.should_ask}"
+}
@@ -70,6 +70,26 @@ func TestLoadFileHeredoc(t *testing.T) {
 	}
 }

+func TestLoadFileEscapedQuotes(t *testing.T) {
+	c, err := LoadFile(filepath.Join(fixtureDir, "escapedquotes.tf"))
+	if err != nil {
+		t.Fatalf("err: %s", err)
+	}
+
+	if c == nil {
+		t.Fatal("config should not be nil")
+	}
+
+	if c.Dir != "" {
+		t.Fatalf("bad: %#v", c.Dir)
+	}
+
+	actual := resourcesStr(c.Resources)
+	if actual != strings.TrimSpace(escapedquotesResourcesStr) {
+		t.Fatalf("bad:\n%s", actual)
+	}
+}
+
 func TestLoadFileBasic(t *testing.T) {
 	c, err := LoadFile(filepath.Join(fixtureDir, "basic.tf"))
 	if err != nil {
@@ -557,6 +577,102 @@ func TestLoad_temporary_files(t *testing.T) {
 	}
 }

+func TestLoad_hclAttributes(t *testing.T) {
+	c, err := LoadFile(filepath.Join(fixtureDir, "attributes.tf"))
+	if err != nil {
+		t.Fatalf("Bad: %s", err)
+	}
+
+	if c == nil {
+		t.Fatal("config should not be nil")
+	}
+
+	actual := resourcesStr(c.Resources)
+	print(actual)
+	if actual != strings.TrimSpace(jsonAttributeStr) {
+		t.Fatalf("bad:\n%s", actual)
+	}
+
+	r := c.Resources[0]
+	if r.Name != "test" && r.Type != "cloudstack_firewall" {
+		t.Fatalf("Bad: %#v", r)
+	}
+
+	raw := r.RawConfig
+	if raw.Raw["ipaddress"] != "192.168.0.1" {
+		t.Fatalf("Bad: %s", raw.Raw["ipAddress"])
+	}
+
+	rule := raw.Raw["rule"].([]map[string]interface{})[0]
+	if rule["protocol"] != "tcp" {
+		t.Fatalf("Bad: %s", rule["protocol"])
+	}
+
+	if rule["source_cidr"] != "10.0.0.0/8" {
+		t.Fatalf("Bad: %s", rule["source_cidr"])
+	}
+
+	ports := rule["ports"].([]interface{})
+
+	if ports[0] != "80" {
+		t.Fatalf("Bad ports: %s", ports[0])
+	}
+	if ports[1] != "1000-2000" {
+		t.Fatalf("Bad ports: %s", ports[1])
+	}
+}
+
+func TestLoad_jsonAttributes(t *testing.T) {
+	c, err := LoadFile(filepath.Join(fixtureDir, "attributes.tf.json"))
+	if err != nil {
+		t.Fatalf("Bad: %s", err)
+	}
+
+	if c == nil {
+		t.Fatal("config should not be nil")
+	}
+
+	actual := resourcesStr(c.Resources)
+	print(actual)
+	if actual != strings.TrimSpace(jsonAttributeStr) {
+		t.Fatalf("bad:\n%s", actual)
+	}
+
+	r := c.Resources[0]
+	if r.Name != "test" && r.Type != "cloudstack_firewall" {
+		t.Fatalf("Bad: %#v", r)
+	}
+
+	raw := r.RawConfig
+	if raw.Raw["ipaddress"] != "192.168.0.1" {
+		t.Fatalf("Bad: %s", raw.Raw["ipAddress"])
+	}
+
+	rule := raw.Raw["rule"].([]map[string]interface{})[0]
+	if rule["protocol"] != "tcp" {
+		t.Fatalf("Bad: %s", rule["protocol"])
+	}
+
+	if rule["source_cidr"] != "10.0.0.0/8" {
+		t.Fatalf("Bad: %s", rule["source_cidr"])
+	}
+
+	ports := rule["ports"].([]interface{})
+
+	if ports[0] != "80" {
+		t.Fatalf("Bad ports: %s", ports[0])
+	}
+	if ports[1] != "1000-2000" {
+		t.Fatalf("Bad ports: %s", ports[1])
+	}
+}
+
+const jsonAttributeStr = `
+cloudstack_firewall[test] (x1)
+  ipaddress
+  rule
+`
+
 const heredocProvidersStr = `
 aws
   access_key
@@ -571,6 +687,13 @@ aws_iam_policy[policy] (x1)
   policy
 `

+const escapedquotesResourcesStr = `
+aws_instance[quotes] (x1)
+  ami
+  vars
+    user: var.ami
+`
+
 const basicOutputsStr = `
 web_ip
   vars
@@ -0,0 +1,15 @@
+provider "cloudstack" {
+    api_url = "bla"
+    api_key = "bla"
+    secret_key = "bla"
+}
+
+resource "cloudstack_firewall" "test" {
+    ipaddress = "192.168.0.1"
+
+    rule {
+        source_cidr = "10.0.0.0/8"
+        protocol = "tcp"
+        ports = ["80", "1000-2000"]
+    }
+}
@@ -0,0 +1,27 @@
+{
+    "provider": {
+        "cloudstack": {
+            "api_url": "bla",
+            "api_key": "bla",
+            "secret_key": "bla"
+        }
+    },
+    "resource": {
+        "cloudstack_firewall": {
+            "test": {
+                "ipaddress": "192.168.0.1",
+                "rule": [
+                    {
+                        "source_cidr": "10.0.0.0/8",
+                        "protocol": "tcp",
+                        "ports": [
+                            "80",
+                            "1000-2000"
+                        ]
+                    }
+                ]
+            }
+        }
+    }
+}
@@ -0,0 +1,7 @@
+variable "ami" {
+    default = [ "ami", "abc123" ]
+}
+
+resource "aws_instance" "quotes" {
+    ami = "${join(\",\", var.ami)}"
+}
@@ -0,0 +1,525 @@
+{
+    "ImportPath": "github.com/hashicorp/terraform",
+    "GoVersion": "go1.5.1",
+    "Packages": [
+        "./..."
+    ],
+    "Deps": [
+        {
+            "ImportPath": "github.com/Azure/azure-sdk-for-go/core/http",
+            "Comment": "v1.2-275-g3b480ea",
+            "Rev": "3b480eaaf6b4236d43a3c06cba969da6f53c8b66"
+        },
+        {
+            "ImportPath": "github.com/Azure/azure-sdk-for-go/core/tls",
+            "Comment": "v1.2-275-g3b480ea",
+            "Rev": "3b480eaaf6b4236d43a3c06cba969da6f53c8b66"
+        },
+        {
+            "ImportPath": "github.com/Azure/azure-sdk-for-go/management",
+            "Comment": "v1.2-275-g3b480ea",
+            "Rev": "3b480eaaf6b4236d43a3c06cba969da6f53c8b66"
+        },
+        {
+            "ImportPath": "github.com/Azure/azure-sdk-for-go/storage",
+            "Comment": "v1.2-275-g3b480ea",
+            "Rev": "3b480eaaf6b4236d43a3c06cba969da6f53c8b66"
+        },
+        {
+            "ImportPath": "github.com/apparentlymart/go-cidr/cidr",
+            "Rev": "a3ebdb999b831ecb6ab8a226e31b07b2b9061c47"
+        },
+        {
+            "ImportPath": "github.com/apparentlymart/go-rundeck-api/rundeck",
+            "Comment": "v0.0.1",
+            "Rev": "cddcfbabbe903e9c8df35ff9569dbb8d67789200"
+        },
+        {
+            "ImportPath": "github.com/armon/circbuf",
+            "Rev": "bbbad097214e2918d8543d5201d12bfd7bca254d"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/aws",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/private/endpoints",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/private/protocol/ec2query",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/private/protocol/json/jsonutil",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/private/protocol/jsonrpc",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/private/protocol/query",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/private/protocol/rest",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/private/protocol/restjson",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/private/protocol/restxml",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/private/signer/v4",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/private/waiter",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/service/autoscaling",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/service/cloudformation",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/service/cloudtrail",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/service/cloudwatch",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/service/cloudwatchlogs",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/service/codecommit",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/service/codedeploy",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/service/directoryservice",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/service/dynamodb",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/service/ec2",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/service/ecs",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/service/efs",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/service/elasticache",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/service/elasticsearchservice",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/service/elb",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/service/firehose",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/service/glacier",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/service/iam",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/service/kinesis",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/service/lambda",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/service/opsworks",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/service/rds",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/service/route53",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/service/s3",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/service/sns",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/aws/aws-sdk-go/service/sqs",
+            "Comment": "v1.0.0-1-g328e030",
+            "Rev": "328e030f73f66922cb9c1357de794ee1bf0ca2b5"
+        },
+        {
+            "ImportPath": "github.com/coreos/etcd/client",
+            "Comment": "v2.3.0-alpha.0-90-gd435d44",
+            "Rev": "d435d443bb7659a2ff400c185fe5c6eea9fc81ed"
+        },
+        {
+            "ImportPath": "github.com/coreos/etcd/pkg/pathutil",
+            "Comment": "v2.3.0-alpha.0-90-gd435d44",
+            "Rev": "d435d443bb7659a2ff400c185fe5c6eea9fc81ed"
+        },
+        {
+            "ImportPath": "github.com/coreos/etcd/pkg/types",
+            "Comment": "v2.3.0-alpha.0-90-gd435d44",
+            "Rev": "d435d443bb7659a2ff400c185fe5c6eea9fc81ed"
+        },
+        {
+            "ImportPath": "github.com/cyberdelia/heroku-go/v3",
+            "Rev": "8344c6a3e281a99a693f5b71186249a8620eeb6b"
+        },
+        {
+            "ImportPath": "github.com/digitalocean/godo",
+            "Comment": "v0.9.0-10-g4ac7bea",
+            "Rev": "4ac7bea157899131b3f94085219a4c650e19f696"
+        },
+        {
+            "ImportPath": "github.com/dylanmei/iso8601",
+            "Rev": "2075bf119b58e5576c6ed9f867b8f3d17f2e54d4"
+        },
+        {
+            "ImportPath": "github.com/dylanmei/winrmtest",
+            "Rev": "3e9661c52c45dab9a8528966a23d421922fca9b9"
+        },
+        {
+            "ImportPath": "github.com/fsouza/go-dockerclient",
+            "Rev": "0f5764b4d2f5b8928a05db1226a508817a9a01dd"
+        },
+        {
+            "ImportPath": "github.com/go-ini/ini",
+            "Comment": "v0-54-g2e44421",
+            "Rev": "2e44421e256d82ebbf3d4d4fcabe8930b905eff3"
+        },
+        {
+            "ImportPath": "github.com/google/go-querystring/query",
+            "Rev": "2a60fc2ba6c19de80291203597d752e9ba58e4c0"
+        },
+        {
+            "ImportPath": "github.com/hashicorp/atlas-go/archive",
+            "Comment": "20141209094003-81-g6c9afe8",
+            "Rev": "6c9afe8bb88099b424db07dea18f434371de8199"
+        },
+        {
+            "ImportPath": "github.com/hashicorp/atlas-go/v1",
+            "Comment": "20141209094003-81-g6c9afe8",
+            "Rev": "6c9afe8bb88099b424db07dea18f434371de8199"
+        },
+        {
+            "ImportPath": "github.com/hashicorp/consul/api",
+            "Comment": "v0.6.0-rc2-8-g4d42ff6",
+            "Rev": "4d42ff66e304e3f09eaae621ea4b0792e435064a"
+        },
+        {
+            "ImportPath": "github.com/hashicorp/errwrap",
+            "Rev": "7554cd9344cec97297fa6649b055a8c98c2a1e55"
+        },
+        {
+            "ImportPath": "github.com/hashicorp/go-checkpoint",
+            "Rev": "e4b2dc34c0f698ee04750bf2035d8b9384233e1b"
+        },
+        {
+            "ImportPath": "github.com/hashicorp/go-cleanhttp",
+            "Rev": "5df5ddc69534f1a4697289f1dca2193fbb40213f"
+        },
+        {
+            "ImportPath": "github.com/hashicorp/go-getter",
+            "Rev": "c5e245982bdb4708f89578c8e0054d82b5197401"
+        },
+        {
+            "ImportPath": "github.com/hashicorp/go-multierror",
+            "Rev": "d30f09973e19c1dfcd120b2d9c4f168e68d6b5d5"
+        },
+        {
+            "ImportPath": "github.com/hashicorp/go-version",
+            "Rev": "2b9865f60ce11e527bd1255ba82036d465570aa3"
+        },
+        {
+            "ImportPath": "github.com/hashicorp/hcl",
+            "Rev": "1688f22977e3b0bbdf1aaa5e2528cf10c2e93e78"
+        },
+        {
+            "ImportPath": "github.com/hashicorp/logutils",
+            "Rev": "0dc08b1671f34c4250ce212759ebd880f743d883"
+        },
+        {
+            "ImportPath": "github.com/hashicorp/serf/coordinate",
+            "Comment": "v0.6.4-145-ga72c045",
+            "Rev": "a72c0453da2ba628a013e98bf323a76be4aa1443"
+        },
+        {
+            "ImportPath": "github.com/hashicorp/yamux",
+            "Rev": "ddcd0a6ec7c55e29f235e27935bf98d302281bd3"
+        },
+        {
+            "ImportPath": "github.com/imdario/mergo",
+            "Comment": "0.2.0-8-gbb554f9",
+            "Rev": "bb554f9fd6ee4cd190eef868de608ced813aeda1"
+        },
+        {
+            "ImportPath": "github.com/jmespath/go-jmespath",
+            "Comment": "0.2.2",
+            "Rev": "3433f3ea46d9f8019119e7dd41274e112a2359a9"
+        },
+        {
+            "ImportPath": "github.com/kardianos/osext",
+            "Rev": "345163ffe35aa66560a4cd7dddf00f3ae21c9fda"
+        },
+        {
+            "ImportPath": "github.com/masterzen/simplexml/dom",
+            "Rev": "95ba30457eb1121fa27753627c774c7cd4e90083"
+        },
+        {
+            "ImportPath": "github.com/masterzen/winrm/soap",
+            "Rev": "06208eee5d76e4a422494e25629cefec42b9b3ac"
+        },
+        {
+            "ImportPath": "github.com/masterzen/winrm/winrm",
+            "Rev": "06208eee5d76e4a422494e25629cefec42b9b3ac"
+        },
+        {
+            "ImportPath": "github.com/masterzen/xmlpath",
+            "Rev": "13f4951698adc0fa9c1dda3e275d489a24201161"
+        },
+        {
+            "ImportPath": "github.com/mitchellh/cli",
+            "Rev": "8102d0ed5ea2709ade1243798785888175f6e415"
+        },
+        {
+            "ImportPath": "github.com/mitchellh/colorstring",
+            "Rev": "8631ce90f28644f54aeedcb3e389a85174e067d1"
+        },
+        {
+            "ImportPath": "github.com/mitchellh/copystructure",
+            "Rev": "6fc66267e9da7d155a9d3bd489e00dad02666dc6"
+        },
+        {
+            "ImportPath": "github.com/mitchellh/go-homedir",
+            "Rev": "d682a8f0cf139663a984ff12528da460ca963de9"
+        },
+        {
+            "ImportPath": "github.com/mitchellh/go-linereader",
+            "Rev": "07bab5fdd9580500aea6ada0e09df4aa28e68abd"
+        },
+        {
+            "ImportPath": "github.com/mitchellh/mapstructure",
+            "Rev": "281073eb9eb092240d33ef253c404f1cca550309"
+        },
+        {
+            "ImportPath": "github.com/mitchellh/packer/common/uuid",
+            "Comment": "v0.8.6-228-g25108c8",
+            "Rev": "25108c8d13912434d0f32faaf1ea13cdc537b21e"
+        },
+        {
+            "ImportPath": "github.com/mitchellh/panicwrap",
+            "Rev": "89dc8accc8fec9dfa9b8e1ffdd6793265253de16"
+        },
+        {
+            "ImportPath": "github.com/mitchellh/prefixedio",
+            "Rev": "89d9b535996bf0a185f85b59578f2e245f9e1724"
+        },
+        {
+            "ImportPath": "github.com/mitchellh/reflectwalk",
+            "Rev": "eecf4c70c626c7cfbb95c90195bc34d386c74ac6"
+        },
+        {
+            "ImportPath": "github.com/nesv/go-dynect/dynect",
+            "Comment": "v0.2.0-8-g841842b",
+            "Rev": "841842b16b39cf2b5007278956976d7d909bd98b"
+        },
+        {
+            "ImportPath": "github.com/nu7hatch/gouuid",
+            "Rev": "179d4d0c4d8d407a32af483c2354df1d2c91e6c3"
+        },
+        {
+            "ImportPath": "github.com/packer-community/winrmcp/winrmcp",
+            "Rev": "3d184cea22ee1c41ec1697e0d830ff0c78f7ea97"
+        },
+        {
+            "ImportPath": "github.com/packethost/packngo",
+            "Rev": "f03d7dc788a8b57b62d301ccb98c950c325756f8"
+        },
+        {
+            "ImportPath": "github.com/pborman/uuid",
+            "Rev": "cccd189d45f7ac3368a0d127efb7f4d08ae0b655"
+        },
+        {
+            "ImportPath": "github.com/pearkes/cloudflare",
+            "Rev": "3d4cd12a4c3a7fc29b338b774f7f8b7e3d5afc2e"
+        },
+        {
+            "ImportPath": "github.com/pearkes/dnsimple",
+            "Rev": "78996265f576c7580ff75d0cb2c606a61883ceb8"
+        },
+        {
+            "ImportPath": "github.com/pearkes/mailgun",
+            "Rev": "b88605989c4141d22a6d874f78800399e5bb7ac2"
+        },
+        {
+            "ImportPath": "github.com/rackspace/gophercloud",
+            "Comment": "v1.0.0-757-g761cff8",
+            "Rev": "761cff8afb6a8e7f42c5554a90dae72f341bb481"
+        },
+        {
+            "ImportPath": "github.com/satori/go.uuid",
+            "Rev": "d41af8bb6a7704f00bc3b7cba9355ae6a5a80048"
+        },
+        {
+            "ImportPath": "github.com/soniah/dnsmadeeasy",
+            "Comment": "v1.1-2-g5578a8c",
+            "Rev": "5578a8c15e33958c61cf7db720b6181af65f4a9e"
+        },
+        {
+            "ImportPath": "github.com/tent/http-link-go",
+            "Rev": "ac974c61c2f990f4115b119354b5e0b47550e888"
+        },
+        {
+            "ImportPath": "github.com/ugorji/go/codec",
+            "Rev": "ea9cd21fa0bc41ee4bdd50ac7ed8cbc7ea2ed960"
+        },
+        {
+            "ImportPath": "github.com/vmware/govmomi",
+            "Comment": "v0.2.0-94-gdaf6c9c",
+            "Rev": "daf6c9cce2d14cdd05fc38319ad58a5e0d3f7654"
+        },
+        {
+            "ImportPath": "github.com/xanzy/go-cloudstack/cloudstack",
+            "Comment": "v1.2.0-48-g0e6e56f",
+            "Rev": "0e6e56fc0db3f48f060273f2e2ffe5d8d41b0112"
+        },
+        {
+            "ImportPath": "golang.org/x/crypto/curve25519",
+            "Rev": "beef0f4390813b96e8e68fd78570396d0f4751fc"
+        },
+        {
+            "ImportPath": "golang.org/x/crypto/pkcs12",
+            "Rev": "beef0f4390813b96e8e68fd78570396d0f4751fc"
+        },
+        {
+            "ImportPath": "golang.org/x/crypto/ssh",
+            "Rev": "beef0f4390813b96e8e68fd78570396d0f4751fc"
+        },
+        {
+            "ImportPath": "golang.org/x/net/context",
+            "Rev": "4f2fc6c1e69d41baf187332ee08fbd2b296f21ed"
+        },
+        {
+            "ImportPath": "golang.org/x/oauth2",
+            "Rev": "442624c9ec9243441e83b374a9e22ac549b5c51d"
+        },
+        {
+            "ImportPath": "google.golang.org/api/compute/v1",
+            "Rev": "030d584ade5f79aa2ed0ce067e8f7da50c9a10d5"
+        },
+        {
+            "ImportPath": "google.golang.org/api/container/v1",
+            "Rev": "030d584ade5f79aa2ed0ce067e8f7da50c9a10d5"
+        },
+        {
+            "ImportPath": "google.golang.org/api/dns/v1",
+            "Rev": "030d584ade5f79aa2ed0ce067e8f7da50c9a10d5"
+        },
+        {
+            "ImportPath": "google.golang.org/api/gensupport",
+            "Rev": "030d584ade5f79aa2ed0ce067e8f7da50c9a10d5"
+        },
+        {
+            "ImportPath": "google.golang.org/api/googleapi",
+            "Rev": "030d584ade5f79aa2ed0ce067e8f7da50c9a10d5"
+        },
+        {
+            "ImportPath": "google.golang.org/api/sqladmin/v1beta4",
+            "Rev": "030d584ade5f79aa2ed0ce067e8f7da50c9a10d5"
+        },
+        {
+            "ImportPath": "google.golang.org/api/storage/v1",
+            "Rev": "030d584ade5f79aa2ed0ce067e8f7da50c9a10d5"
+        },
+        {
+            "ImportPath": "google.golang.org/cloud/compute/metadata",
+            "Rev": "975617b05ea8a58727e6c1a06b6161ff4185a9f2"
+        },
+        {
+            "ImportPath": "google.golang.org/cloud/internal",
+            "Rev": "975617b05ea8a58727e6c1a06b6161ff4185a9f2"
+        }
+    ]
+}
@@ -919,7 +919,7 @@ func (m schemaMap) diffString(
 	var originalN interface{}
 	var os, ns string
 	o, n, _, _ := d.diffChange(k)
-	if schema.StateFunc != nil {
+	if schema.StateFunc != nil && n != nil {
 		originalN = n
 		n = schema.StateFunc(n)
 	}
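The guard added above (GH-4002) means a schema's `StateFunc` is only invoked for non-nil values, so user-supplied functions never need to nil-check their input. A standalone sketch of the guarded call, with the helper name hypothetical:

```go
package main

import (
	"fmt"
	"strings"
)

// applyStateFunc mirrors the guarded call site in diffString: the
// transformation runs only when a StateFunc is set and the new value
// is non-nil; otherwise the value passes through untouched.
func applyStateFunc(fn func(interface{}) interface{}, n interface{}) interface{} {
	if fn != nil && n != nil {
		return fn(n)
	}
	return n
}

func main() {
	upper := func(v interface{}) interface{} { return strings.ToUpper(v.(string)) }
	fmt.Println(applyStateFunc(upper, "abc")) // ABC
	fmt.Println(applyStateFunc(upper, nil))   // <nil> -- no panic, value skipped
}
```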
@@ -124,7 +124,7 @@ func TestValueType_Zero(t *testing.T) {
 }
 
 func TestSchemaMap_Diff(t *testing.T) {
-	cases := []struct {
+	cases := map[string]struct {
 		Schema map[string]*Schema
 		State  *terraform.InstanceState
 		Config map[string]interface{}
@ -132,12 +132,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Diff *terraform.InstanceDiff
|
||||
Err bool
|
||||
}{
|
||||
/*
|
||||
* String decode
|
||||
*/
|
||||
|
||||
// #0
|
||||
{
|
||||
"#0": {
|
||||
Schema: map[string]*Schema{
|
||||
"availability_zone": &Schema{
|
||||
Type: TypeString,
|
||||
|
@ -166,8 +161,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #1
|
||||
{
|
||||
"#1": {
|
||||
Schema: map[string]*Schema{
|
||||
"availability_zone": &Schema{
|
||||
Type: TypeString,
|
||||
|
@ -194,8 +188,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #2
|
||||
{
|
||||
"#2": {
|
||||
Schema: map[string]*Schema{
|
||||
"availability_zone": &Schema{
|
||||
Type: TypeString,
|
||||
|
@ -216,8 +209,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #3 Computed, but set in config
|
||||
{
|
||||
"#3 Computed, but set in config": {
|
||||
Schema: map[string]*Schema{
|
||||
"availability_zone": &Schema{
|
||||
Type: TypeString,
|
||||
|
@ -248,8 +240,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #4 Default
|
||||
{
|
||||
"#4 Default": {
|
||||
Schema: map[string]*Schema{
|
||||
"availability_zone": &Schema{
|
||||
Type: TypeString,
|
||||
|
@ -274,8 +265,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #5 DefaultFunc, value
|
||||
{
|
||||
"#5 DefaultFunc, value": {
|
||||
Schema: map[string]*Schema{
|
||||
"availability_zone": &Schema{
|
||||
Type: TypeString,
|
||||
|
@ -302,8 +292,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #6 DefaultFunc, configuration set
|
||||
{
|
||||
"#6 DefaultFunc, configuration set": {
|
||||
Schema: map[string]*Schema{
|
||||
"availability_zone": &Schema{
|
||||
Type: TypeString,
|
||||
|
@ -332,8 +321,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #7 String with StateFunc
|
||||
{
|
||||
"String with StateFunc": {
|
||||
Schema: map[string]*Schema{
|
||||
"availability_zone": &Schema{
|
||||
Type: TypeString,
|
||||
|
@@ -364,8 +352,37 @@ func TestSchemaMap_Diff(t *testing.T) {
 			Err: false,
 		},
 
-		// #8 Variable (just checking)
-		{
+		"StateFunc not called with nil value": {
+			Schema: map[string]*Schema{
+				"availability_zone": &Schema{
+					Type:     TypeString,
+					Optional: true,
+					Computed: true,
+					StateFunc: func(a interface{}) string {
+						t.Fatalf("should not get here!")
+						return ""
+					},
+				},
+			},
+
+			State: nil,
+
+			Config: map[string]interface{}{},
+
+			Diff: &terraform.InstanceDiff{
+				Attributes: map[string]*terraform.ResourceAttrDiff{
+					"availability_zone": &terraform.ResourceAttrDiff{
+						Old:         "",
+						New:         "",
+						NewComputed: true,
+					},
+				},
+			},
+
+			Err: false,
+		},
+
+		"#8 Variable (just checking)": {
 			Schema: map[string]*Schema{
 				"availability_zone": &Schema{
 					Type:     TypeString,
@ -395,8 +412,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #9 Variable computed
|
||||
{
|
||||
"#9 Variable computed": {
|
||||
Schema: map[string]*Schema{
|
||||
"availability_zone": &Schema{
|
||||
Type: TypeString,
|
||||
|
@ -426,12 +442,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
/*
|
||||
* Int decode
|
||||
*/
|
||||
|
||||
// #10
|
||||
{
|
||||
"#10 Int decode": {
|
||||
Schema: map[string]*Schema{
|
||||
"port": &Schema{
|
||||
Type: TypeInt,
|
||||
|
@ -460,12 +471,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
/*
|
||||
* Bool decode
|
||||
*/
|
||||
|
||||
// #11
|
||||
{
|
||||
"#11 bool decode": {
|
||||
Schema: map[string]*Schema{
|
||||
"port": &Schema{
|
||||
Type: TypeBool,
|
||||
|
@ -494,12 +500,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
/*
|
||||
* Bool
|
||||
*/
|
||||
|
||||
// #12
|
||||
{
|
||||
"#12 Bool": {
|
||||
Schema: map[string]*Schema{
|
||||
"delete": &Schema{
|
||||
Type: TypeBool,
|
||||
|
@ -521,12 +522,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
/*
|
||||
* List decode
|
||||
*/
|
||||
|
||||
// #13
|
||||
{
|
||||
"#13 List decode": {
|
||||
Schema: map[string]*Schema{
|
||||
"ports": &Schema{
|
||||
Type: TypeList,
|
||||
|
@ -565,8 +561,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #14
|
||||
{
|
||||
"#14": {
|
||||
Schema: map[string]*Schema{
|
||||
"ports": &Schema{
|
||||
Type: TypeList,
|
||||
|
@ -609,8 +604,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #15
|
||||
{
|
||||
"#15": {
|
||||
Schema: map[string]*Schema{
|
||||
"ports": &Schema{
|
||||
Type: TypeList,
|
||||
|
@ -643,8 +637,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #16
|
||||
{
|
||||
"#16": {
|
||||
Schema: map[string]*Schema{
|
||||
"ports": &Schema{
|
||||
Type: TypeList,
|
||||
|
@ -671,8 +664,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #17
|
||||
{
|
||||
"#17": {
|
||||
Schema: map[string]*Schema{
|
||||
"ports": &Schema{
|
||||
Type: TypeList,
|
||||
|
@ -709,8 +701,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #18
|
||||
{
|
||||
"#18": {
|
||||
Schema: map[string]*Schema{
|
||||
"ports": &Schema{
|
||||
Type: TypeList,
|
||||
|
@ -754,8 +745,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #19
|
||||
{
|
||||
"#19": {
|
||||
Schema: map[string]*Schema{
|
||||
"ports": &Schema{
|
||||
Type: TypeList,
|
||||
|
@ -781,12 +771,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
/*
|
||||
* Set
|
||||
*/
|
||||
|
||||
// #20
|
||||
{
|
||||
"#20 Set": {
|
||||
Schema: map[string]*Schema{
|
||||
"ports": &Schema{
|
||||
Type: TypeSet,
|
||||
|
@ -828,8 +813,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #21
|
||||
{
|
||||
"#21 Set": {
|
||||
Schema: map[string]*Schema{
|
||||
"ports": &Schema{
|
||||
Type: TypeSet,
|
||||
|
@ -855,8 +839,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #22
|
||||
{
|
||||
"#22 Set": {
|
||||
Schema: map[string]*Schema{
|
||||
"ports": &Schema{
|
||||
Type: TypeSet,
|
||||
|
@ -885,8 +868,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #23
|
||||
{
|
||||
"#23 Set": {
|
||||
Schema: map[string]*Schema{
|
||||
"ports": &Schema{
|
||||
Type: TypeSet,
|
||||
|
@ -932,8 +914,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #24
|
||||
{
|
||||
"#24 Set": {
|
||||
Schema: map[string]*Schema{
|
||||
"ports": &Schema{
|
||||
Type: TypeSet,
|
||||
|
@ -969,8 +950,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #25
|
||||
{
|
||||
"#25 Set": {
|
||||
Schema: map[string]*Schema{
|
||||
"ports": &Schema{
|
||||
Type: TypeSet,
|
||||
|
@ -1018,8 +998,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #26
|
||||
{
|
||||
"#26 Set": {
|
||||
Schema: map[string]*Schema{
|
||||
"ports": &Schema{
|
||||
Type: TypeSet,
|
||||
|
@ -1063,8 +1042,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #27
|
||||
{
|
||||
"#27 Set": {
|
||||
Schema: map[string]*Schema{
|
||||
"ports": &Schema{
|
||||
Type: TypeSet,
|
||||
|
@ -1092,8 +1070,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #28
|
||||
{
|
||||
"#28 Set": {
|
||||
Schema: map[string]*Schema{
|
||||
"ingress": &Schema{
|
||||
Type: TypeSet,
|
||||
|
@ -1145,12 +1122,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
/*
|
||||
* List of structure decode
|
||||
*/
|
||||
|
||||
// #29
|
||||
{
|
||||
"#29 List of structure decode": {
|
||||
Schema: map[string]*Schema{
|
||||
"ingress": &Schema{
|
||||
Type: TypeList,
|
||||
|
@ -1192,12 +1164,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
/*
|
||||
* ComputedWhen
|
||||
*/
|
||||
|
||||
// #30
|
||||
{
|
||||
"#30 ComputedWhen": {
|
||||
Schema: map[string]*Schema{
|
||||
"availability_zone": &Schema{
|
||||
Type: TypeString,
|
||||
|
@ -1227,8 +1194,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #31
|
||||
{
|
||||
"#31": {
|
||||
Schema: map[string]*Schema{
|
||||
"availability_zone": &Schema{
|
||||
Type: TypeString,
|
||||
|
@ -1306,12 +1272,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
},
|
||||
*/
|
||||
|
||||
/*
|
||||
* Maps
|
||||
*/
|
||||
|
||||
// #32
|
||||
{
|
||||
"#32 Maps": {
|
||||
Schema: map[string]*Schema{
|
||||
"config_vars": &Schema{
|
||||
Type: TypeMap,
|
||||
|
@ -1345,8 +1306,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #33
|
||||
{
|
||||
"#33 Maps": {
|
||||
Schema: map[string]*Schema{
|
||||
"config_vars": &Schema{
|
||||
Type: TypeMap,
|
||||
|
@ -1383,8 +1343,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #34
|
||||
{
|
||||
"#34 Maps": {
|
||||
Schema: map[string]*Schema{
|
||||
"vars": &Schema{
|
||||
Type: TypeMap,
|
||||
|
@ -1424,8 +1383,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #35
|
||||
{
|
||||
"#35 Maps": {
|
||||
Schema: map[string]*Schema{
|
||||
"vars": &Schema{
|
||||
Type: TypeMap,
|
||||
|
@ -1446,8 +1404,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #36
|
||||
{
|
||||
"#36 Maps": {
|
||||
Schema: map[string]*Schema{
|
||||
"config_vars": &Schema{
|
||||
Type: TypeList,
|
||||
|
@ -1486,8 +1443,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #37
|
||||
{
|
||||
"#37 Maps": {
|
||||
Schema: map[string]*Schema{
|
||||
"config_vars": &Schema{
|
||||
Type: TypeList,
|
||||
|
@ -1529,12 +1485,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
/*
|
||||
* ForceNews
|
||||
*/
|
||||
|
||||
// #38
|
||||
{
|
||||
"#38 ForceNews": {
|
||||
Schema: map[string]*Schema{
|
||||
"availability_zone": &Schema{
|
||||
Type: TypeString,
|
||||
|
@ -1579,8 +1530,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #39 Set
|
||||
{
|
||||
"#39 Set": {
|
||||
Schema: map[string]*Schema{
|
||||
"availability_zone": &Schema{
|
||||
Type: TypeString,
|
||||
|
@ -1630,8 +1580,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #40 Set
|
||||
{
|
||||
"#40 Set": {
|
||||
Schema: map[string]*Schema{
|
||||
"instances": &Schema{
|
||||
Type: TypeSet,
|
||||
|
@ -1669,8 +1618,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #41 Set
|
||||
{
|
||||
"#41 Set": {
|
||||
Schema: map[string]*Schema{
|
||||
"route": &Schema{
|
||||
Type: TypeSet,
|
||||
|
@ -1730,8 +1678,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #42 Set
|
||||
{
|
||||
"#42 Set": {
|
||||
Schema: map[string]*Schema{
|
||||
"route": &Schema{
|
||||
Type: TypeSet,
|
||||
|
@ -1796,8 +1743,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #43 - Computed maps
|
||||
{
|
||||
"#43 - Computed maps": {
|
||||
Schema: map[string]*Schema{
|
||||
"vars": &Schema{
|
||||
Type: TypeMap,
|
||||
|
@ -1821,8 +1767,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #44 - Computed maps
|
||||
{
|
||||
"#44 - Computed maps": {
|
||||
Schema: map[string]*Schema{
|
||||
"vars": &Schema{
|
||||
Type: TypeMap,
|
||||
|
@ -1858,8 +1803,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #45 - Empty
|
||||
{
|
||||
"#45 - Empty": {
|
||||
Schema: map[string]*Schema{},
|
||||
|
||||
State: &terraform.InstanceState{},
|
||||
|
@ -1871,8 +1815,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #46 - Float
|
||||
{
|
||||
"#46 - Float": {
|
||||
Schema: map[string]*Schema{
|
||||
"some_threshold": &Schema{
|
||||
Type: TypeFloat,
|
||||
|
@ -1901,8 +1844,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #47 - https://github.com/hashicorp/terraform/issues/824
|
||||
{
|
||||
"#47 - https://github.com/hashicorp/terraform/issues/824": {
|
||||
Schema: map[string]*Schema{
|
||||
"block_device": &Schema{
|
||||
Type: TypeSet,
|
||||
|
@ -1955,8 +1897,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #48 - Zero value in state shouldn't result in diff
|
||||
{
|
||||
"#48 - Zero value in state shouldn't result in diff": {
|
||||
Schema: map[string]*Schema{
|
||||
"port": &Schema{
|
||||
Type: TypeBool,
|
||||
|
@ -1978,8 +1919,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #49 Set - Same as #48 but for sets
|
||||
{
|
||||
"#49 Set - Same as #48 but for sets": {
|
||||
Schema: map[string]*Schema{
|
||||
"route": &Schema{
|
||||
Type: TypeSet,
|
||||
|
@ -2021,8 +1961,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #50 - A set computed element shouldn't cause a diff
|
||||
{
|
||||
"#50 - A set computed element shouldn't cause a diff": {
|
||||
Schema: map[string]*Schema{
|
||||
"active": &Schema{
|
||||
Type: TypeBool,
|
||||
|
@ -2044,8 +1983,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #51 - An empty set should show up in the diff
|
||||
{
|
||||
"#51 - An empty set should show up in the diff": {
|
||||
Schema: map[string]*Schema{
|
||||
"instances": &Schema{
|
||||
Type: TypeSet,
|
||||
|
@ -2085,8 +2023,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #52 - Map with empty value
|
||||
{
|
||||
"#52 - Map with empty value": {
|
||||
Schema: map[string]*Schema{
|
||||
"vars": &Schema{
|
||||
Type: TypeMap,
|
||||
|
@ -2117,8 +2054,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #53 - Unset bool, not in state
|
||||
{
|
||||
"#53 - Unset bool, not in state": {
|
||||
Schema: map[string]*Schema{
|
||||
"force": &Schema{
|
||||
Type: TypeBool,
|
||||
|
@ -2136,8 +2072,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #54 - Unset set, not in state
|
||||
{
|
||||
"#54 - Unset set, not in state": {
|
||||
Schema: map[string]*Schema{
|
||||
"metadata_keys": &Schema{
|
||||
Type: TypeSet,
|
||||
|
@ -2157,8 +2092,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #55 - Unset list in state, should not show up computed
|
||||
{
|
||||
"#55 - Unset list in state, should not show up computed": {
|
||||
Schema: map[string]*Schema{
|
||||
"metadata_keys": &Schema{
|
||||
Type: TypeList,
|
||||
|
@ -2182,8 +2116,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #56 - Set element computed substring
|
||||
{
|
||||
"#56 - Set element computed substring": {
|
||||
Schema: map[string]*Schema{
|
||||
"ports": &Schema{
|
||||
Type: TypeSet,
|
||||
|
@ -2218,9 +2151,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #57 - Computed map without config that's known to be empty does not
|
||||
// generate diff
|
||||
{
|
||||
"#57 Computed map without config that's known to be empty does not generate diff": {
|
||||
Schema: map[string]*Schema{
|
||||
"tags": &Schema{
|
||||
Type: TypeMap,
|
||||
|
@ -2241,8 +2172,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #58 Set with hyphen keys
|
||||
{
|
||||
"#58 Set with hyphen keys": {
|
||||
Schema: map[string]*Schema{
|
||||
"route": &Schema{
|
||||
Type: TypeSet,
|
||||
|
@ -2298,8 +2228,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #59: StateFunc in nested set (#1759)
|
||||
{
|
||||
"#59: StateFunc in nested set (#1759)": {
|
||||
Schema: map[string]*Schema{
|
||||
"service_account": &Schema{
|
||||
Type: TypeList,
|
||||
|
@ -2364,8 +2293,7 @@ func TestSchemaMap_Diff(t *testing.T) {
|
|||
Err: false,
|
||||
},
|
||||
|
||||
// #60 - Removing set elements
|
||||
{
|
||||
"#60 - Removing set elements": {
|
||||
Schema: map[string]*Schema{
|
||||
"instances": &Schema{
|
||||
Type: TypeSet,
|
||||
|
@@ -2418,10 +2346,10 @@ func TestSchemaMap_Diff(t *testing.T) {
 		},
 	}
 
-	for i, tc := range cases {
+	for tn, tc := range cases {
 		c, err := config.NewRawConfig(tc.Config)
 		if err != nil {
-			t.Fatalf("#%d err: %s", i, err)
+			t.Fatalf("#%q err: %s", tn, err)
 		}
 
 		if len(tc.ConfigVariables) > 0 {
@@ -2431,18 +2359,18 @@ func TestSchemaMap_Diff(t *testing.T) {
 			}
 
 			if err := c.Interpolate(vars); err != nil {
-				t.Fatalf("#%d err: %s", i, err)
+				t.Fatalf("#%q err: %s", tn, err)
 			}
 		}
 
 		d, err := schemaMap(tc.Schema).Diff(
 			tc.State, terraform.NewResourceConfig(c))
 		if err != nil != tc.Err {
-			t.Fatalf("#%d err: %s", i, err)
+			t.Fatalf("#%q err: %s", tn, err)
 		}
 
 		if !reflect.DeepEqual(tc.Diff, d) {
-			t.Fatalf("#%d:\n\nexpected: %#v\n\ngot:\n\n%#v", i, tc.Diff, d)
+			t.Fatalf("#%q:\n\nexpected: %#v\n\ngot:\n\n%#v", tn, tc.Diff, d)
 		}
 	}
 }
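The conversion from an index-based slice of cases to a name-keyed map is a common table-test refactor: a failure then reports a stable, descriptive case name instead of a bare index that shifts whenever a case is inserted. A standalone sketch of the pattern (illustrative names, not the schema package itself):

```go
package main

import "fmt"

// double is a stand-in for the function under test.
func double(n int) int { return n * 2 }

func main() {
	// Name-keyed cases: each key doubles as the failure message,
	// and stays stable when cases are added or reordered.
	cases := map[string]struct {
		In, Want int
	}{
		"zero":     {0, 0},
		"positive": {3, 6},
		"negative": {-2, -4},
	}

	for name, tc := range cases {
		if got := double(tc.In); got != tc.Want {
			panic(fmt.Sprintf("%q: got %d, want %d", name, got, tc.Want))
		}
	}
	fmt.Println("all cases passed")
}
```

The trade-off is that map iteration order is random in Go, so cases must be independent of one another, which table-driven tests should be anyway.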
@@ -2640,17 +2568,16 @@ func TestSchemaMap_InputDefault(t *testing.T) {
 }
 
 func TestSchemaMap_InternalValidate(t *testing.T) {
-	cases := []struct {
+	cases := map[string]struct {
 		In  map[string]*Schema
 		Err bool
 	}{
-		{
+		"nothing": {
 			nil,
 			false,
 		},
 
-		// No optional and no required
-		{
+		"Both optional and required": {
 			map[string]*Schema{
 				"foo": &Schema{
 					Type:     TypeInt,
@ -2661,8 +2588,7 @@ func TestSchemaMap_InternalValidate(t *testing.T) {
|
|||
true,
|
||||
},
|
||||
|
||||
// No optional and no required
|
||||
{
|
||||
"No optional and no required": {
|
||||
map[string]*Schema{
|
||||
"foo": &Schema{
|
||||
Type: TypeInt,
|
||||
|
@ -2671,8 +2597,7 @@ func TestSchemaMap_InternalValidate(t *testing.T) {
|
|||
true,
|
||||
},
|
||||
|
||||
// Missing Type
|
||||
{
|
||||
"Missing Type": {
|
||||
map[string]*Schema{
|
||||
"foo": &Schema{
|
||||
Required: true,
|
||||
|
@ -2681,8 +2606,7 @@ func TestSchemaMap_InternalValidate(t *testing.T) {
|
|||
true,
|
||||
},
|
||||
|
||||
// Required but computed
|
||||
{
|
||||
"Required but computed": {
|
||||
map[string]*Schema{
|
||||
"foo": &Schema{
|
||||
Type: TypeInt,
|
||||
|
@ -2693,8 +2617,7 @@ func TestSchemaMap_InternalValidate(t *testing.T) {
|
|||
true,
|
||||
},
|
||||
|
||||
// Looks good
|
||||
{
|
||||
"Looks good": {
|
||||
map[string]*Schema{
|
||||
"foo": &Schema{
|
||||
Type: TypeString,
|
||||
|
@ -2704,8 +2627,7 @@ func TestSchemaMap_InternalValidate(t *testing.T) {
|
|||
false,
|
||||
},
|
||||
|
||||
// Computed but has default
|
||||
{
|
||||
"Computed but has default": {
|
||||
map[string]*Schema{
|
||||
"foo": &Schema{
|
||||
Type: TypeInt,
|
||||
|
@ -2717,8 +2639,7 @@ func TestSchemaMap_InternalValidate(t *testing.T) {
|
|||
true,
|
||||
},
|
||||
|
||||
// Required but has default
|
||||
{
|
||||
"Required but has default": {
|
||||
map[string]*Schema{
|
||||
"foo": &Schema{
|
||||
Type: TypeInt,
|
||||
|
@ -2730,8 +2651,7 @@ func TestSchemaMap_InternalValidate(t *testing.T) {
|
|||
true,
|
||||
},
|
||||
|
||||
// List element not set
|
||||
{
|
||||
"List element not set": {
|
||||
map[string]*Schema{
|
||||
"foo": &Schema{
|
||||
Type: TypeList,
|
||||
|
@ -2740,8 +2660,7 @@ func TestSchemaMap_InternalValidate(t *testing.T) {
|
|||
true,
|
||||
},
|
||||
|
||||
// List default
|
||||
{
|
||||
"List default": {
|
||||
map[string]*Schema{
|
||||
"foo": &Schema{
|
||||
Type: TypeList,
|
||||
|
@ -2752,8 +2671,7 @@ func TestSchemaMap_InternalValidate(t *testing.T) {
|
|||
true,
|
||||
},
|
||||
|
||||
// List element computed
|
||||
{
|
||||
"List element computed": {
|
||||
map[string]*Schema{
|
||||
"foo": &Schema{
|
||||
Type: TypeList,
|
||||
|
@ -2767,8 +2685,7 @@ func TestSchemaMap_InternalValidate(t *testing.T) {
|
|||
true,
|
||||
},
|
||||
|
||||
// List element with Set set
|
||||
{
|
||||
"List element with Set set": {
|
||||
map[string]*Schema{
|
||||
"foo": &Schema{
|
||||
Type: TypeList,
|
||||
|
@ -2780,8 +2697,7 @@ func TestSchemaMap_InternalValidate(t *testing.T) {
|
|||
true,
|
||||
},
|
||||
|
||||
// Set element with no Set set
|
||||
{
|
||||
"Set element with no Set set": {
|
||||
map[string]*Schema{
|
||||
"foo": &Schema{
|
||||
Type: TypeSet,
|
||||
|
@ -2792,8 +2708,7 @@ func TestSchemaMap_InternalValidate(t *testing.T) {
|
|||
false,
|
||||
},
|
||||
|
||||
// Required but computed
|
||||
{
|
||||
"Required but computedWhen": {
|
||||
map[string]*Schema{
|
||||
"foo": &Schema{
|
||||
Type: TypeInt,
|
||||
|
@ -2804,8 +2719,7 @@ func TestSchemaMap_InternalValidate(t *testing.T) {
|
|||
true,
|
||||
},
|
||||
|
||||
// Conflicting attributes cannot be required
|
||||
{
|
||||
"Conflicting attributes cannot be required": {
|
||||
map[string]*Schema{
|
||||
"blacklist": &Schema{
|
||||
Type: TypeBool,
|
||||
|
@ -2820,8 +2734,7 @@ func TestSchemaMap_InternalValidate(t *testing.T) {
|
|||
true,
|
||||
},
|
||||
|
||||
// Attribute with conflicts cannot be required
|
||||
{
|
||||
"Attribute with conflicts cannot be required": {
|
||||
map[string]*Schema{
|
||||
"whitelist": &Schema{
|
||||
Type: TypeBool,
|
||||
|
@ -2832,8 +2745,7 @@ func TestSchemaMap_InternalValidate(t *testing.T) {
|
|||
true,
|
||||
},
|
||||
|
||||
// ConflictsWith cannot be used w/ Computed
|
||||
{
|
||||
"ConflictsWith cannot be used w/ Computed": {
|
||||
map[string]*Schema{
|
||||
"blacklist": &Schema{
|
||||
Type: TypeBool,
|
||||
|
@ -2848,8 +2760,7 @@ func TestSchemaMap_InternalValidate(t *testing.T) {
|
|||
true,
|
||||
},
|
||||
|
||||
// ConflictsWith cannot be used w/ ComputedWhen
|
||||
{
|
||||
"ConflictsWith cannot be used w/ ComputedWhen": {
|
||||
map[string]*Schema{
|
||||
"blacklist": &Schema{
|
||||
Type: TypeBool,
|
||||
|
@ -2864,8 +2775,7 @@ func TestSchemaMap_InternalValidate(t *testing.T) {
|
|||
true,
|
||||
},
|
||||
|
||||
// Sub-resource invalid
|
||||
{
|
||||
"Sub-resource invalid": {
|
||||
map[string]*Schema{
|
||||
"foo": &Schema{
|
||||
Type: TypeList,
|
||||
|
@ -2880,8 +2790,7 @@ func TestSchemaMap_InternalValidate(t *testing.T) {
|
|||
true,
|
||||
},
|
||||
|
||||
// Sub-resource valid
|
||||
{
|
||||
"Sub-resource valid": {
|
||||
map[string]*Schema{
|
||||
"foo": &Schema{
|
||||
Type: TypeList,
|
||||
|
@ -2899,8 +2808,7 @@ func TestSchemaMap_InternalValidate(t *testing.T) {
|
|||
false,
|
||||
},
|
||||
|
||||
// ValidateFunc on non-primitive
|
||||
{
|
||||
"ValidateFunc on non-primitive": {
|
||||
map[string]*Schema{
|
||||
"foo": &Schema{
|
||||
Type: TypeSet,
|
||||
|
@@ -2914,13 +2822,13 @@ func TestSchemaMap_InternalValidate(t *testing.T) {
 		},
 	}
 
-	for i, tc := range cases {
+	for tn, tc := range cases {
 		err := schemaMap(tc.In).InternalValidate(schemaMap{})
 		if err != nil != tc.Err {
 			if tc.Err {
-				t.Fatalf("%d: Expected error did not occur:\n\n%#v", i, tc.In)
+				t.Fatalf("%q: Expected error did not occur:\n\n%#v", tn, tc.In)
 			}
-			t.Fatalf("%d: Unexpected error occurred:\n\n%#v", i, tc.In)
+			t.Fatalf("%q: Unexpected error occurred:\n\n%#v", tn, tc.In)
 		}
 	}
 
@@ -9,8 +9,8 @@ if [ -z $VERSION ]; then
 fi
 
-# Make sure we have a bintray API key
-if [ -z $BINTRAY_API_KEY ]; then
-    echo "Please set your bintray API key in the BINTRAY_API_KEY env var."
+if [[ -z $AWS_ACCESS_KEY_ID || -z $AWS_SECRET_ACCESS_KEY ]]; then
+    echo "Please set AWS access keys as env vars before running this script."
     exit 1
 fi
 
@@ -31,8 +31,11 @@ for FILENAME in $(find ./pkg -mindepth 1 -maxdepth 1 -type f); do
 done
 
 # Make the checksums
+echo "==> Signing..."
 pushd ./pkg/dist
+rm -f ./terraform_${VERSION}_SHA256SUMS*
 shasum -a256 * > ./terraform_${VERSION}_SHA256SUMS
+gpg --default-key 348FFC4C --detach-sig ./terraform_${VERSION}_SHA256SUMS
 popd
 
 # Upload
@@ -12,6 +12,7 @@ import (
 	"github.com/aws/aws-sdk-go/aws/awserr"
 	"github.com/aws/aws-sdk-go/aws/credentials"
 	"github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds"
+	"github.com/aws/aws-sdk-go/aws/ec2metadata"
 	"github.com/aws/aws-sdk-go/aws/session"
 	"github.com/aws/aws-sdk-go/service/s3"
 	"github.com/hashicorp/go-cleanhttp"
@@ -64,7 +65,7 @@ func s3Factory(conf map[string]string) (Client, error) {
 		}},
 		&credentials.EnvProvider{},
 		&credentials.SharedCredentialsProvider{Filename: "", Profile: ""},
-		&ec2rolecreds.EC2RoleProvider{},
+		&ec2rolecreds.EC2RoleProvider{Client: ec2metadata.New(session.New())},
 	})
 
 	// Make sure we got some sort of working credentials.
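The fix above supplies an explicit EC2 metadata client to the role provider instead of relying on an implicit default. The chain-of-providers idea behind this credentials setup can be sketched generically; the following is a stdlib-only model, and `Provider`, `staticProvider`, and `chain` are illustrative names, not the aws-sdk-go types:

```go
package main

import (
	"errors"
	"fmt"
)

// Provider yields a credential or an error, mirroring the idea of a
// credentials chain: try static config, then environment variables,
// then a shared file, then instance-role metadata.
type Provider interface {
	Retrieve() (string, error)
}

type staticProvider struct{ key string }

func (p staticProvider) Retrieve() (string, error) {
	if p.key == "" {
		return "", errors.New("no static credentials")
	}
	return p.key, nil
}

// chain returns the credentials of the first provider that resolves.
func chain(ps ...Provider) (string, error) {
	for _, p := range ps {
		if key, err := p.Retrieve(); err == nil {
			return key, nil
		}
	}
	return "", errors.New("no valid credential source found")
}

func main() {
	key, err := chain(
		staticProvider{""},        // e.g. unset config keys
		staticProvider{"ENV-KEY"}, // e.g. environment variables
		staticProvider{"ROLE-KEY"}, // e.g. EC2 instance role
	)
	fmt.Println(key, err) // ENV-KEY <nil>
}
```

Each provider in the chain must be fully constructed up front, which is why the diff wires the metadata client in explicitly rather than leaving the struct zero-valued.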
@@ -1,7 +1,7 @@
 package terraform
 
 // The main version number that is being run at the moment.
-const Version = "0.6.7"
+const Version = "0.6.8"
 
 // A pre-release marker for the version. If this is "" (empty string)
 // then it means that it is a final release. Otherwise, this is a pre-release
@ -2,6 +2,6 @@ set :base_url, "https://www.terraform.io/"
|
|||
|
||||
activate :hashicorp do |h|
|
||||
h.name = "terraform"
|
||||
h.version = "0.6.6"
|
||||
h.version = "0.6.7"
|
||||
h.github_slug = "hashicorp/terraform"
|
||||
end
|
||||
|
|
|
@@ -37,7 +37,8 @@ body.page-sub{
 .edit-page-link{
 	position: absolute;
 	top: -70px;
-	right: 30px;;
+	right: 30px;
+	z-index: 9999;
 
 	a{
 		text-transform: uppercase;
@@ -178,29 +178,22 @@ A template resource looks like:
 
 ```
 resource "template_file" "example" {
-    filename = "template.txt"
-    vars {
-        hello = "goodnight"
-        world = "moon"
-    }
+    template = "${hello} ${world}!"
+    vars {
+        hello = "goodnight"
+        world = "moon"
+    }
 }
 
 output "rendered" {
-    value = "${template_file.example.rendered}"
+    value = "${template_file.example.rendered}"
 }
 ```
 
-Assuming `template.txt` looks like this:
-
-```
-${hello} ${world}!
-```
-
 Then the rendered value would be `goodnight moon!`.
 
 You may use any of the built-in functions in your template.
 
 
 ### Using Templates with Count
 
 Here is an example that combines the capabilities of templates with the interpolation
@@ -220,8 +213,8 @@ variable "hostnames" {
 
 resource "template_file" "web_init" {
     // here we expand multiple template_files - the same number as we have instances
-    count = "${var.count}"
-    filename = "templates/web_init.tpl"
+    count    = "${var.count}"
+    template = "${file("templates/web_init.tpl")}"
     vars {
         // that gives us access to use count.index to do the lookup
         hostname = "${lookup(var.hostnames, count.index)}"
@@ -24,7 +24,7 @@ to this artifact will trigger a change to that instance.
 # Read the AMI
 resource "atlas_artifact" "web" {
     name  = "hashicorp/web"
-    type  = "amazon.ami"
+    type  = "amazon.image"
     build = "latest"
     metadata {
         arch = "386"
@@ -63,6 +63,8 @@ The following arguments are supported:
   endpoint to assume to write to a user’s log group.
 * `cloud_watch_logs_group_arn` - (Optional) Specifies a log group name using an Amazon Resource Name (ARN),
   that represents the log group to which CloudTrail logs will be delivered.
+* `enable_logging` - (Optional) Enables logging for the trail. Defaults to `true`.
+  Setting this to `false` will pause logging.
 * `include_global_service_events` - (Optional) Specifies whether the trail is publishing events
   from global services such as IAM to the log files. Defaults to `true`.
 * `sns_topic_name` - (Optional) Specifies the name of the Amazon SNS topic
@@ -42,4 +42,4 @@ The following attributes are exported:
 * `network_interface` - Contains the ID of the attached network interface.
 
 
-[1]: http://docs.aws.amazon.com/fr_fr/AWSEC2/latest/APIReference/API_AssociateAddress.html
+[1]: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_AssociateAddress.html
@@ -130,10 +130,13 @@ The following attributes are exported:
 * `availability_zone` - The availability zone of the instance.
 * `placement_group` - The placement group of the instance.
 * `key_name` - The key name of the instance
-* `private_dns` - The Private DNS name of the instance
-* `private_ip` - The private IP address.
-* `public_dns` - The public DNS name of the instance
-* `public_ip` - The public IP address.
+* `public_dns` - The public DNS name assigned to the instance. For EC2-VPC, this
+  is only available if you've enabled DNS hostnames for your VPC
+* `public_ip` - The public IP address assigned to the instance, if applicable.
+* `private_dns` - The private DNS name assigned to the instance. Can only be
+  used inside the Amazon EC2, and only available if you've enabled DNS hostnames
+  for your VPC
+* `private_ip` - The private IP address assigned to the instance
 * `security_groups` - The associated security groups.
 * `vpc_security_group_ids` - The associated security groups in non-default VPC
 * `subnet_id` - The VPC subnet ID.
@@ -90,4 +90,4 @@ this instance is a read replica
 [3]: /docs/providers/aws/r/rds_cluster.html
 [4]: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Managing.html
 [5]: /docs/configuration/resources.html#count
-[6]: http://docs.aws.amazon.com/fr_fr/AmazonRDS/latest/APIReference/API_CreateDBInstance.html
+[6]: http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html
@@ -72,3 +72,10 @@ should only be used for informational purposes, not for resource dependencies:
   of the Spot Instance Request.
 * `spot_instance_id` - The Instance ID (if any) that is currently fulfilling
   the Spot Instance request.
+* `public_dns` - The public DNS name assigned to the instance. For EC2-VPC, this
+  is only available if you've enabled DNS hostnames for your VPC
+* `public_ip` - The public IP address assigned to the instance, if applicable.
+* `private_dns` - The private DNS name assigned to the instance. Can only be
+  used inside the Amazon EC2, and only available if you've enabled DNS hostnames
+  for your VPC
+* `private_ip` - The private IP address assigned to the instance
@ -0,0 +1,44 @@
---
layout: "digitalocean"
page_title: "DigitalOcean: digitalocean_floating_ip"
sidebar_current: "docs-do-resource-floating-ip"
description: |-
  Provides a DigitalOcean Floating IP resource.
---

# digitalocean\_floating_ip

Provides a DigitalOcean Floating IP to represent a publicly-accessible static IP address that can be mapped to one of your Droplets.

## Example Usage

```
resource "digitalocean_droplet" "foobar" {
  name               = "baz"
  size               = "1gb"
  image              = "centos-5-8-x32"
  region             = "sgp1"
  ipv6               = true
  private_networking = true
}

resource "digitalocean_floating_ip" "foobar" {
  droplet_id = "${digitalocean_droplet.foobar.id}"
  region     = "${digitalocean_droplet.foobar.region}"
}
```

## Argument Reference

The following arguments are supported:

* `region` - (Required) The region that the Floating IP is reserved to.
* `droplet_id` - (Optional) The ID of the Droplet that the Floating IP will be assigned to.

~> **NOTE:** A Floating IP can be assigned to either a region or a Droplet (`droplet_id`). If both `region` and `droplet_id` are specified, the Floating IP will be assigned to the Droplet and use that Droplet's region.

## Attributes Reference

The following attributes are exported:

* `ip_address` - The IP address of the resource.
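The exported address can be surfaced as an output. A minimal sketch, reusing the `foobar` Floating IP from the example above:

```
output "floating_ip_address" {
  value = "${digitalocean_floating_ip.foobar.ip_address}"
}
```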
@ -25,7 +25,7 @@ Use the navigation to the left to read about the available resources.
```
# Template for initial configuration bash script
resource "template_file" "init" {
  filename = "init.tpl"
  template = "${file("init.tpl")}"

  vars {
    consul_address = "${aws_instance.consul.private_ip}"
@ -5,7 +5,7 @@
    <div class="navbar-header">
      <div class="navbar-brand">
        <a class="logo" href="/">Terraform</a>
        <a class="by-hashicorp white" href="https://hashicorp.com/"><span class="svg-wrap">by</span><%= partial "layouts/svg/svg-by-hashicorp" %><%= partial "layouts/svg/svg-hashicorp-logo" %>Hashicorp</a>
        <a class="by-hashicorp white" href="https://hashicorp.com/"><span class="svg-wrap">by</span><%= partial "layouts/svg/svg-by-hashicorp" %><%= partial "layouts/svg/svg-hashicorp-logo" %>HashiCorp</a>
      </div>
      <button class="navbar-toggle white" type="button">
        <span class="sr-only">Toggle navigation</span>
@ -21,6 +21,10 @@
        <a href="/docs/providers/do/r/droplet.html">digitalocean_droplet</a>
      </li>

      <li<%= sidebar_current("docs-do-resource-floating-ip") %>>
        <a href="/docs/providers/do/r/floating_ip.html">digitalocean_floating_ip</a>
      </li>

      <li<%= sidebar_current("docs-do-resource-record") %>>
        <a href="/docs/providers/do/r/record.html">digitalocean_record</a>
      </li>